StellaAthena#3530: I’ll check it out
bmk#1476: https://arxiv.org/pdf/1902.09469.pdf here's an arxiv-paper version of it, which i presume is more polished than the AF version
bmk#1476: on my last read-through, i *really* got stuck around the counterfactuals and Löb's theorem bits
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/754882335941591111/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/754882382393376948/unknown.png
bmk#1476: (and then i just kind of stopped reading lol)
StellaAthena#3530: Oh that’s very different from the paper I found
bmk#1476: ?
StellaAthena#3530: https://arxiv.org/abs/1810.08575
StellaAthena#3530: Iterative amplification of experts.
bmk#1476: wait
bmk#1476: oh, i meant to say embedded agency lol
bmk#1476: my bad
bmk#1476: this is why i cant understand this stuff lol
StellaAthena#3530: Wow that’s poorly explained
bmk#1476: which parts specifically?
bmk#1476: or all of it
bmk#1476: the bottom line for me is that i have no clue how any of this weird logic stuff or the "search for proofs for some time" stuff relates to how the agent chooses the 5
StellaAthena#3530: Right, the problem is that the example that they give isn’t a real one.
bmk#1476: ?
StellaAthena#3530: The example with the $5 and the $10 does not actually illustrate their point
bmk#1476: hmm
bmk#1476: i mean isnt their point that naively thinking about counterfactuals can cause weird behavior?
bmk#1476: and if the weird math stuff is correct it should justify that
StellaAthena#3530: Right, but nothing weird happens in their example because there’s no reason *to desire to desire* the $5
StellaAthena#3530: Here’s an example that does: suppose that you are the kind of person who doesn’t like to study mathematics. You just don’t find it enjoyable, but you know that increasing your skill at mathematics would make you more successful at things you do enjoy.
StellaAthena#3530: Unfortunately, you’ve tried to motivate yourself and find it difficult. You chat with me, and envy the fact that I enjoy and can quickly pick up mathematics. You form the desire:
> I wish I was more like Stella.
Unpacking this a bit, we realize it’s really a meta-statement about your own desires. It is
> I desire to *desire to* study mathematics.
StellaAthena#3530: Following so far?
bmk#1476: yeah, makes sense
StellaAthena#3530: Here’s the problem: you don’t “actually” desire to desire to study mathematics. If you “really did” then you would act on your desire and begin to desire to study mathematics.
bmk#1476: well, it's because i'm not sufficiently capable to self-modify, no?
StellaAthena#3530: Maybe “want” would be a better word.
bmk#1476: like, the action of "begin to desire to study" is not feasible, for example
bmk#1476: maybe if i were an ai who could peek inside its own code i might have done that action
StellaAthena#3530: > I want to want to enjoy mathematics
3dprint_the_world#6486: ...but you already enjoy mathematics?
StellaAthena#3530: If you want to want to do something, it seems that you should also want to do it
3dprint_the_world#6486: seems like that's what that's implying. Unless I misunderstand.
bmk#1476: so "I want to want to enjoy math" + "I am a sufficiently powerful agent to change my own desires" -> "I begin wanting to enjoy math"
bmk#1476: right?
StellaAthena#3530: Right
StellaAthena#3530: Löb’s theorem is a similar principle in formal logic. It says that if you can prove that you can prove P, then you can have a “direct proof” of P.
bmk#1476: ah! so "I can prove that I can prove P" + "I am sufficiently good at proving things" -> "I can prove P"?
StellaAthena#3530: Right.
StellaAthena#3530: Now we as humans are logically limited
bmk#1476: ah, that makes sense
StellaAthena#3530: It’s possible that you know P and know P -> Q but not know Q
StellaAthena#3530: That’s an unfortunate fact about human reasoning
StellaAthena#3530: But a perfect logical agent doesn’t have that limitation.
StellaAthena#3530: This (and a related statement when it comes to self-modifying agents) shortcuts counterfactual reasoning
StellaAthena#3530: The reason this is specifically relevant to embedded agents is that it requires you to reason *about yourself*
bmk#1476: i don't think i see how that changes it
StellaAthena#3530: Maybe a formal discussion is more helpful.
StellaAthena#3530: Löb’s Theorem is only true of logical systems that include a “proves” predicate.
StellaAthena#3530: Normal first order logic can’t make sense of the statement “there exists a proof of 1 + 1 = 2”
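For reference, the standard formal statement, in provability-logic notation with □P read as "P is provable" (the informal gloss above compresses the hypothesis slightly):
```latex
% Löb's theorem: if the system proves "the provability of P implies P",
% then it proves P. As a single internalized schema:
\Box(\Box P \rightarrow P) \rightarrow \Box P
```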
bmk#1476: is there any way to get around the logic part and only think about it intuitively instead of rigorously?
bmk#1476: where it = the weird agent behavior stuff |
StellaAthena#3530: Read section 2.4
bmk#1476: hm, that makes sense
bmk#1476: so even if you only do it once, you still want to go through with the plan that gives you the higher expected return
bmk#1476: even if you already know which "branch" you're on
bmk#1476: this feels a lot like game theory
bmk#1476: i dont see how this leads to any kind of paradoxical behavior though
bmk#1476: because in this case there's some adversary and you have to prevent exposing information
StellaAthena#3530: I don’t totally buy the argument myself. I’ve never bought the argument that “updateless decision theory” is necessary
bmk#1476: i dont know *anything* about decision theory
bmk#1476: there's a chapter in my stats textbook labelled "statistical decision theory" but skimming it, it doesn't seem very much related
StellaAthena#3530: So this is an idea that’s been in the LW sphere for close to a decade.
bmk#1476: huh
bmk#1476: is this whole Löb and counterfactual weirdness stuff pretty central to the current state of alignment research then?
StellaAthena#3530: https://www.lesswrong.com/posts/de3xjFaACCAk6imzv/towards-a-new-decision-theory
StellaAthena#3530: This blog post is from 2009
StellaAthena#3530: > is this whole Löb and counterfactual weirdness stuff pretty central to the current state of alignment research then?
@bmk There are a lot of different approaches to alignment research. This is important to some approaches but not others.
bmk#1476: ah
bmk#1476: I mean, I think it's reasonable for me to try and understand multiple major approaches
StellaAthena#3530: Are you familiar with Newcomb’s Paradox?
bmk#1476: not really
StellaAthena#3530: An AGI puts two boxes before you. In one, you see $100. The other is opaque. The AI tells you “you may take any box(es) you wish. I have predicted how you will act. Had I predicted you would take both boxes, the opaque box will be left empty. Had I predicted you would take one box, the opaque box will have $1,000,000 in it.”
StellaAthena#3530: Do you take one box, or two?
bmk#1476: well, obviously two
bmk#1476: wait
StellaAthena#3530: Why
bmk#1476: this depends on how it "predicts"
StellaAthena#3530: This is all the information you have.
StellaAthena#3530: You believe the AGI is (compared to you) arbitrarily powerful.
bmk#1476: well, it doesn't matter I guess
bmk#1476: because if I don't take the opaque box, no matter how much money is in it, it doesn't matter
bmk#1476: If I do take it, I know it can't be *negative*
bmk#1476: wait, i can take only the opaque one too
bmk#1476: hmm
bmk#1476: one moment i'mma draw this out on paper
StellaAthena#3530: Eliezer and those who agree with him can prove that a rational agent will take two boxes.
bmk#1476: there are 6 outcomes here right?
bmk#1476: i'm drawing this out on paper rn
StellaAthena#3530: Yeah
StellaAthena#3530: Opaque box, clear box, both
$1,000,000 there, $1,000,000 not there
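A minimal Python sketch of the payoff table bmk is about to draw out, under the assumptions stated above ($100 visible in the clear box; $1,000,000 in the opaque box only if the AI predicted one-boxing):
```python
CLEAR, OPAQUE = 100, 1_000_000

def payoff(choice, predicted_one_box):
    # The opaque box is filled only if the AI predicted you'd one-box.
    opaque_contents = OPAQUE if predicted_one_box else 0
    return {"opaque only": opaque_contents,
            "clear only": CLEAR,
            "both": CLEAR + opaque_contents}[choice]

# The six outcomes: 3 box choices x 2 possible predictions.
for predicted in (True, False):
    for choice in ("opaque only", "clear only", "both"):
        print(f"predicted one-box={predicted!s:>5} | {choice:<11} -> ${payoff(choice, predicted):>9,}")
```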
StellaAthena#3530: > Eliezer and those who agree with him can prove that a rational agent will take two boxes.
My response has always been that I hope that that proof makes them happier than my $1,000,000 makes me.
bmk#1476: ok so
bmk#1476: the AI precommits right?
StellaAthena#3530: Yes
bmk#1476: so i'm stuck in one of the two universes
bmk#1476: in either universe, the most reward can be had by taking both boxes
bmk#1476: but again this depends on what "predict" means
bmk#1476: how much does the AI know about my strategy
StellaAthena#3530: *\*shrug\**
bmk#1476: if i can somehow credibly commit to taking only the opaque box i guess i could do that
bmk#1476: but i feel like that's outside the scope of the problem, no?
bmk#1476: because this doesn't feel like an equilibrium
bmk#1476: if the AI thinks i'm only gonna take one then the best action for me is to take both
StellaAthena#3530: This is all you know about the world.
asparagui#6391: what's wrong with just taking the mystery box
bmk#1476: i want to say something like, "i precommit to pick only one box and put up a $101 bond that the AI can take if i choose both boxes instead of just one"
bmk#1476: but i'm pretty sure that's cheating
bmk#1476: because then that finds an equilibrium
bmk#1476: right?
bmk#1476: even in the "ai predicts take one" universe i'll still want to pick only opaque
StellaAthena#3530: I disagree with the line argued in this paper, so I’m not a good person to try to convince you
bmk#1476: i mean, convince me of your argument
bmk#1476: more arguments better
StellaAthena#3530: Maybe another time. I’m tired and it’s bed time 😛
bmk#1476: ah, ok
3dprint_the_world#6486: My solution has always been: Just take the mystery box. Sure, you may lose $100. But then you show that this supposedly omnipotent AI is actually quite fallible. That's worth way more than $100. And you could even viably convert that knowledge into money.
3dprint_the_world#6486: I think the argument that "you should take both boxes" assumes the payoff can only be monetary. But the payoff of demonstrating the fallibility of a superintelligent AI can't be ignored imo.
3dprint_the_world#6486: And it's a win-win for the AI too - if the box is empty, then the AI saves $1,000,000. If the box actually has a million dollars in it, the AI can hold that up as proof of its own superiority in prediction and it would probably start amassing worshippers.
bmk#1476: i think the problem with this problem is that the solution hinges on a lot of words with wishy washy definitions
3dprint_the_world#6486: https://link.springer.com/article/10.1007/s11229-011-9899-3
3dprint_the_world#6486: Basically, they argue once you fill in the 'missing details' of the problem, it actually has a definite resolution, and depending on how you fill in the details, it can be either or.
3dprint_the_world#6486: But I'm not an expert on this. In fact I'm closer to an anti-expert on this.
asparagui#6391: ^^
asparagui#6391: why would a superintelligent being give a shit about your earth dollars
asparagui#6391: there are more things in heaven and earth than are dreamt of in your philosophy
bmk#1476: i think you're taking it a bit *too* far lol
asparagui#6391: i dislike this concept of super-rationality
asparagui#6391: if god existed, they'd think like me, but only 100% (exactly defined) so
asparagui#6391: literally you can conceive an infinite universe, but not a denizen who will disagree with you for the sake of doing so
bmk#1476: I dislike this concept of super unrigorousness
bmk#1476: I can conceive of a computer program that is completely useless and wrong, and yet I wouldn't want to make one despite encountering such programs on a day to day basis
asparagui#6391: go on
bmk#1476: Or said differently, of course you can be wrong and God (whatever you define that to be) won't give a shit
Daj#7482: > Eliezer and those who agree with him can prove that a rational agent will take two boxes.
@StellaAthena this is news to me. Eliezer's whole point of TDT is that you should take one box
bmk#1476: Ooh
bmk#1476: Duke it out
bmk#1476: But before you do pls explain what the whole decision theory stuff is all about lol
Daj#7482: The explanation of Löb's theorem was really good btw Stella but I can't verify whether it's true since I'm still in the "bash head against problem" phase hah
Daj#7482: Decision Theory is the algorithm you use to make decisions (duh), given your current state of knowledge
Daj#7482: E.g. Causal Decision Theory (CDT) picks both boxes, while timeless decision Theory (TDT) picks one
bmk#1476: Ok so I'mma sleep soon but
bmk#1476: There's way too many *DTs
Daj#7482: Causal, Evidential, Timeless, Functional
Daj#7482: Dunno if there's any others
bmk#1476: thats too many
bmk#1476: rule of 3s
bmk#1476: all numbers bigger than 3 are "pretty damn big"
Daj#7482: Haha I think people quietly dropped functional
Daj#7482: But I'm not an expert
Daj#7482: I still haven't read the functional DT paper
asparagui#6391: you all like math
asparagui#6391: so let me throw in mr gödel
asparagui#6391: my decision is the decision not in the set of decisions in decision theory
bmk#1476: i..
bmk#1476: i dont think thats how gödel works
bmk#1476: then again i dont know it well enough to say for sure
bmk#1476: but that doesnt sound right
Daj#7482: Gödel is the halting problem for math
bmk#1476: is there a literal connection
Daj#7482: > my decision is the decision not in the set of decisions in decision theory
@asparagui This is Russell's paradox iirc
bmk#1476: or is it just a nice analogy
Daj#7482: > is there a literal connection
@bmk Yes
bmk#1476: oh boy
Daj#7482: It is literally the same thing
Daj#7482: And much easier to understand imo
bmk#1476: i dont think i can handle this level of math this late
bmk#1476: Idk the halting problem seems kinda intuitive to me
Daj#7482: Scott Aaronson had a good paper on that on his website
bmk#1476: Meanwhile I couldn't prove gödel on my own
Daj#7482: Yea My point is that seeing incompleteness as a halting problem is easier
bmk#1476: Ah
bmk#1476: I thought you meant the other way
asparagui#6391: bah lol
asparagui#6391: let me reduce it to "not in the set of 100% perfect logical decisions" category
bmk#1476: >category
bmk#1476: Did someone say category
asparagui#6391: NO MONADS
bmk#1476: It's ok I still don't know what a Monad is yet
asparagui#6391: it's fine, nobody knows
asparagui#6391: https://cdn.discordapp.com/attachments/729741769738158194/754934079182340169/monadJeopardy.jpg
3dprint_the_world#6486: A monad is just a monoid in the category of endofunctors, what's the problem?
kindiana#1016: once you understand monads you lose the ability to explain monads
3dprint_the_world#6486: The easiest way to understand monads is the list monad. A list monad is just a list where the functions make_list(x)=[x] and flatten_list([[x],[y]]) = [x, y] are defined.
3dprint_the_world#6486: and that's pretty much it
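A quick Python rendering of that description (the names `unit`, `join`, and `bind` mirror Haskell's `return`, `join`, and `>>=`; this is just an illustration, not any library's API):
```python
def unit(x):
    return [x]          # make_list: wrap a value in a singleton list

def join(xss):
    return [x for xs in xss for x in xs]   # flatten_list: remove one level of nesting

def bind(xs, f):
    return join([f(x) for x in xs])        # >>= derived from map + join

assert join([[1], [2]]) == [1, 2]
assert bind([1, 2, 3], lambda x: [x, -x]) == [1, -1, 2, -2, 3, -3]
assert bind(unit(5), lambda x: [x + 1]) == [6]   # left identity, spot-checked
assert bind([1, 2], unit) == [1, 2]              # right identity, spot-checked
```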
Daj#7482: https://www.smbc-comics.com/comics/1444919671-20151015.png
3dprint_the_world#6486: the problem is all these Haskell people who discovered they can make their language look like it's really cool if they used Monads to do printf()
3dprint_the_world#6486: but just because they like wearing hair shirts, doesn't mean everyone else does too.
3dprint_the_world#6486: (j/k, I like Haskell)
Daj#7482: I have similar feelings about Haskell lol
Ravna#1831: The problem with Monads in Haskell is that they're a total unconditional surrender to the imperative programming crowd. It's using functional concepts to deceive yourself while still being stuck with an imperative programming semantics that is no different from, say, Java.
Ravna#1831: My hardcore FRP heart bleeds every time I hear someone claiming Monads are purely-functional.
Ravna#1831: :nooo:
Ravna#1831: Please read this article if you are deceived by those who tell you Monads in Haskell are purely functional: http://conal.net/blog/posts/the-c-language-is-purely-functional
3dprint_the_world#6486: For a functional program to actually *do* something useful, it can't be pure, but it can do the next-best thing which is to pass around state *explicitly.* Haskell takes the approach of explicit passing of state. I guess this is the charitable interpretation of what Haskellers mean when they say Haskell is 'pure'. The IO monads are just 'syntactic sugar' to make the state-passing-around look less like a horrible mess and more like something a C programmer would find intuitive to read.
Ravna#1831: The problem of the IO Monad in Haskell is that it's just like any other imperative language: throw all side effects (of different natures) out of your operational semantics into a huge dump. You can't reason about this dump, because it's already outside your semantics.
Ravna#1831: If we really believe in pure FP, we should find ways to incorporate some of the dump back into our semantics. There has been a lot of progress in Haskell on this topic.
Ravna#1831: Using pure-FP syntactic sugar to "abstract" the problem away doesn't really make the problem go away. We should do the real hard work here, because it's inherently hard.
Daj#7482: > If we really believe in pure FP, we should find ways to incorporate some of the dump back into our semantics. There has been a lot of progress in Haskell on this topic.
@Ravna Could you give some examples of progress here? I'm trying to imagine what this would look like
Ravna#1831: @Daj You could start with the FRP literature. It's never reached a good enough status to find real-world-scaled use, but it's a serious line of work that really tries to reason about the side effects within pure FP instead of "IO it away". This is a good summary of past work I guess: https://github.com/conal/talk-2015-essence-and-origins-of-frp#readme
Daj#7482: Thanks 👍
Ravna#1831: The basic intuition of FRP is, instead of seeing side effects as some variables that are changing with time, treat everything as a function from time to value. If we reason on basis of world lines instead of points in space, we don't have to deal with the concept of "changing with time".
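A toy Python illustration of that world-line view (purely illustrative): a behavior is a function from time to value, and operations on behaviors produce new behaviors instead of mutating state.
```python
import math

def position(t):
    # A behavior: the whole trajectory as a function of time.
    return math.sin(t)

def derivative(behavior, dt=1e-6):
    # Transforming a behavior yields another behavior; nothing mutates.
    return lambda t: (behavior(t + dt) - behavior(t)) / dt

velocity = derivative(position)
print(round(position(0.0), 3), round(velocity(0.0), 3))  # 0.0 1.0
```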
Daj#7482: The video of my talk is up, in case anyone wants to see
https://www.youtube.com/watch?v=pGjyiqJZPJo
3dprint_the_world#6486: That's a really nice talk!
Daj#7482: Thank you :)
StellaAthena#3530: > this is news to me. Eliezer's whole point of TDT is that you should take one box
@Daj I could have worded this better. The whole point of TDT is that you should take one box. However the *reason to develop it* is that E felt that under other frameworks you can prove that a rational agent will always get the inferior reward.
Daj#7482: Ah yes that is fully correct
Daj#7482: EY's point was that current theories of "rationality" didn't make an agent "win", and rational agents should win
StellaAthena#3530: > Gödel is the halting problem for math
@bmk Remember the conversation we were having about an algorithm searching through the tree of possible proofs? A reasonable gloss of the connection between Gödel’s Theorem and the Halting Problem is that that algorithm is not guaranteed to halt, even for true statements.
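A sketch of that proof-search loop in Python (`is_valid_proof_of` stands in for a decidable proof checker; it is an assumed callable, not a real API):
```python
from itertools import count, product

def search_for_proof(statement, is_valid_proof_of, alphabet="01"):
    # Enumerate every finite string, shortest first, and test each one as
    # a candidate proof. If a proof exists this halts and returns it; by
    # Gödel, some true statements have no proof, so for those the loop
    # never terminates -- which is exactly the halting-problem connection.
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            candidate = "".join(chars)
            if is_valid_proof_of(candidate, statement):
                return candidate
```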
genai (Immortal Discoveries)#0601: Connor, all the above comments seem off-topic, I think I was doing much better... BTW I'm definitely not a beginner. I'll present something really interesting in the coming months, and follow up on RL once I watch those lectures given to me above to make sure I'm seeing things more clearly.
As for an agent wanting to learn e.g. math, the want is based on what it predicts. Like I want to learn about (or go see) the forest, or rock climb, or write about dogs, or create AGI. - What it predicts causes it to train on specific data. AGI begins as a baby wanting food, and it learns semantically that food is similar to cash and house and car, so now it predicts car because food is related, and food is rewarding / controls prediction. So, food/car has a bit of extra weight.
Daj#7482: I'm not sure what you mean by off topic, but it sounds interesting what you say, I look forward to seeing what you will present
genai (Immortal Discoveries)#0601: The posts seem time wasting, there's no AGI in them lol
Daj#7482: I think you're on the right track, I just think there is already common vocabulary for some of the themes you discuss
Daj#7482: Eh this isn't #research , some socializing is acceptable here
genai (Immortal Discoveries)#0601: yup, i tend to throw away algebra and terms and simplify it all (so it's super relatable)
Daj#7482: and memery
Daj#7482: That unfortunately makes it very hard for people like me to understand and comment on your ideas
Daj#7482: That's all I'm saying
Youssef Abdelmohsen#8707: Forgive me if this is old news to you guys, but have you seen this (Shared by Gwern recently): https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/
Youssef Abdelmohsen#8707: "10x bigger model training on a single GPU with ZeRO-Offload: We extend ZeRO-2 to leverage both CPU and GPU memory for training large models. Using a machine with a single NVIDIA V100 GPU, our users can run models of up to 13 billion parameters without running out of memory, 10x bigger than the existing approaches, while obtaining competitive throughput. This feature democratizes multi-billion-parameter model training and opens the window for many deep learning practitioners to explore bigger and better models." |
Daj#7482: Yea, Seen it, pretty cool stuff
Daj#7482: Not super relevant to us unfortunately since we don't have tons of GPUs
Youssef Abdelmohsen#8707: If only I was a gazillionaire! :/ I'd donate
Youssef Abdelmohsen#8707: I saw you guys were looking for OCR tools for math, you're also prob already familiar with this but just in case not: https://mathpix.com/
Youssef Abdelmohsen#8707: Also in terms of data cleaning for PDF to text: https://arxiv.org/abs/2008.13533 (just another option) which the amazing Gwern so wonderfully pointed out to me as a potential avenue
StellaAthena#3530: @Youssef Abdelmohsen I have not seen that, no! How does it work, do you know?
StellaAthena#3530: Oh, it's severely throttled.
StellaAthena#3530: You only get 50 (general) or 100 (student) snips per month for free
Youssef Abdelmohsen#8707: @StellaAthena Oh yeah unfortunately 😦
StellaAthena#3530: That’s a non-starter then. We are processing hundreds of GB of data.
StellaAthena#3530: We want something that we can use not just on a book but on an entire library of books.
Youssef Abdelmohsen#8707: Yeah true. Well I do feel like it's a part of the puzzle there though I could be wrong.
Youssef Abdelmohsen#8707: The first part would be recognizing where on the page is the math
Youssef Abdelmohsen#8707: Second figuring out how to convert that to LaTeX
Youssef Abdelmohsen#8707: Or there may be other methods
StellaAthena#3530: We have a reasonably clean download of arXiv. The cost-to-improvement ratio doesn’t really seem to be there to me.
bmk#1476: We are *outputting* hundreds of GB
bmk#1476: We are inputting *counts on fingers* over 100TB
StellaAthena#3530: Oh good clarification lol.
Youssef Abdelmohsen#8707: I see
Youssef Abdelmohsen#8707: Are you guys planning to include Libgen sci papers as well?
Youssef Abdelmohsen#8707: (Or only books?)
Youssef Abdelmohsen#8707: I checked the Libgen sci paper archive size and that was around 76 TB last time I checked.
Youssef Abdelmohsen#8707: But that's only in PDF size / not text of course
Youssef Abdelmohsen#8707: So probably much lower in size
bmk#1476: All of it
Youssef Abdelmohsen#8707: Wow. Yeah I'd have preferred that that be included in the original GPT-3.
bmk#1476: It might be
bmk#1476: OA has been awfully vague about it
bmk#1476: For good reason
Singularity#9001: It wouldn't make sense not to have it in the original GPT-3, I feel like that's a pretty powerful training set
gwern#1782: I believe OA did not include arxiv or latex sources, because gpt-3 spits out very little of that when you prompt with ML paper abstracts
bmk#1476: What about libgen?
bmk#1476: Can we rule that out
gwern#1782: no. they could be using the epubs. you guys are better positioned to evaluate if that checks out quantitatively than I am
bmk#1476: libgen contains epubs
researcher2#9294: lol your covid analogy @Daj
researcher2#9294: I started stockpiling in Feb, everyone called me a tinfoil hat
researcher2#9294: And you're right, that was late, we knew enough in Jan
Noa Nabeshima#0290: Do you guys think pretraining GPT on lots of stock data and then doing monte carlo to search the space of strategies for particular stock(s) would work? Mostly asking people who have interacted with the economy professionally; I don't know how it works or how the EMH and other factors work in practice.
bmk#1476: So I'm not a Professional Economy Person™, but my prior is that it probably wouldn't work
bmk#1476: There probably are ways to make it work, but they would be highly nontrivial
3dprint_the_world#6486: I am not a Professional Stock Maker either, but check out https://towardsdatascience.com/stock-predictions-with-state-of-the-art-transformer-and-time-embeddings-3a4485237de6
bmk#1476: X - doubt
bmk#1476: medium + predicting stock market + candle based trading + no fundamentals data = *massive doubt*
3dprint_the_world#6486: yeah well I haven't seen any evidence for *any* stock prediction algorithm ever actually beating the market.
3dprint_the_world#6486: change my mind.
bmk#1476: ive dabbled in this area as an amateur, as every engineer ever has most likely done, and i can smell flaws from a mile away
bmk#1476: also i dont disagree there
bmk#1476: no amateur algorithm i know has succeeded
3dprint_the_world#6486: yeah
bmk#1476: that being said, rentech and jane street and 2sigma have been absolutely killing it
3dprint_the_world#6486: Obviously algorithmic trading is a thing, but afaik it mostly relies on having super quick access to market data (as in sub-millisecond latencies), access to large trading volume, and mostly exploiting human trader flaws. Not actually 'predicting' the market.
3dprint_the_world#6486: again, change my mind.
bmk#1476: disagree on 1
bmk#1476: youre thinking of market makers
bmk#1476: it is possible to make money without being a market maker
3dprint_the_world#6486: no not necessarily
bmk#1476: which means you dont need nanosecond latency
bmk#1476: or order flow data
bmk#1476: or whatever
3dprint_the_world#6486: ok
bmk#1476: ~~we should totally spin off an algotrading business after we replicate gpt3~~
3dprint_the_world#6486: lol
bmk#1476: im not kidding, i actually think gpt3 could be very useful for algotrading, and moreover i feel confident in stating that most trading firms probably arent caught up to this particular tech quite yet
bmk#1476: it's just the reasons i think it's useful are a bit more complex than "predict future price"
bmk#1476: i figured out a thing to get quick estimates of the size of a directory!
bmk#1476: (only works for dirs with no subdirs)
bmk#1476: ```bmk@nuck:~/nas/pmc$ python3 estimate_size.py pmc_extract
61.30 GiB ± 3.27 GiB
59.08 GiB ± 2.30 GiB
58.94 GiB ± 1.65 GiB
58.65 GiB ± 1.55 GiB
59.64 GiB ± 1.65 GiB
59.70 GiB ± 1.44 GiB
59.60 GiB ± 1.39 GiB
59.44 GiB ± 1.31 GiB
59.30 GiB ± 1.18 GiB
59.60 GiB ± 1.08 GiB
59.76 GiB ± 1.05 GiB
59.70 GiB ± 968.85 MiB
59.62 GiB ± 967.86 MiB
59.76 GiB ± 946.74 MiB
59.70 GiB ± 833.13 MiB
```
bmk#1476: im way too proud of myself haha
bmk#1476: tfw you actually find a chance to use something from a stats textbook in the real world and not on some contrived problemset
3dprint_the_world#6486: how does it work
bmk#1476: it uses bootstrap for the confidence bound
bmk#1476: https://gist.github.com/leogao2/80b9a41c385831cb45c2d459a95ab523 here's the code
bmk#1476: nonparametric estimation ftw
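The gist above has the actual code; here's a hedged sketch of the same idea (sample some file sizes, scale the sample mean by the file count, bootstrap the sample for the error bar):
```python
import os, random
import numpy as np

def estimate_dir_size(path, sample_size=200, boot_rounds=1000):
    # Flat directory only (no subdirs), matching the caveat above.
    names = os.listdir(path)
    sample = random.sample(names, min(sample_size, len(names)))
    sizes = np.array([os.path.getsize(os.path.join(path, n)) for n in sample])
    estimate = sizes.mean() * len(names)
    # Bootstrap: resample with replacement, look at the spread of the estimate.
    boots = [np.random.choice(sizes, len(sizes), replace=True).mean() * len(names)
             for _ in range(boot_rounds)]
    return estimate, np.std(boots)

# total, err = estimate_dir_size("pmc_extract")
# print(f"{total / 2**30:.2f} GiB ± {err / 2**30:.2f} GiB")
```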
bmk#1476: i'm actually surprised that there dont seem to be any fast approximate size estimators out there
StellaAthena#3530: This seems like a worthwhile technical contribution.
3dprint_the_world#6486: neat
gwern#1782: (for stock trading, gpt-3 would be useful for things like text embeddings to process incoming data like social media APIs or processing financial filings somehow)
researcher2#9294: @thenightocean starting to check your web page daily now
researcher2#9294: found this, looks possibly interesting?
researcher2#9294: https://arxiv.org/abs/2009.07253v1
thenightocean#6100: > @thenightocean starting to check your web page daily now
@researcher2 Cool! I am glad people find it useful. I think I will add even more features soon. Feel free to give suggestions what else should be there.
researcher2#9294: My first thought was to try to get curators on board for their field of specialty.
researcher2#9294: But that wouldn't enhance what you have, just add an entirely new feature.
researcher2#9294: What I really want is to be able to stick a cable in the back of my head to download it all, can you do this?
researcher2#9294: 🙂
researcher2#9294: Also, isn't NVIDIA prolific?
researcher2#9294: geez
thenightocean#6100: yeah NVIDIA are crazy.
something is messed up with the google ai xml data, it's a giant text salad where it should be only an extract. Will remove them if I dont figure out a solution
Davidweiss2#3174: How can I help?
I am a team leader of small research groups of senior students at a cyber education center.
Daj#7482: Hey @Davidweiss2 ! Welcome. We have a (somewhat up to date) document about our projects here: https://docs.google.com/document/d/1yOnxEMlU57M8YFlQC3XNOvyMVX2EpU5LeIWhEBcwQNk
Basically, our model code is close to complete, we are now mostly in the stages of bugfixing/optimizing our model code, data collection and cleaning ( #the-pile , last I checked we're pretty low on compute resources), evaluation (this would probably be the best thing for coders to jump in on) and we also have some side projects such as making our model's output detectable ( #the-rad-lab ) and more speculative attempts to understand how GPT works ( #interpretability )
Daj#7482: What kind of work would you be most interested in doing?
Daj#7482: pinging @Sid @bmk @StellaAthena @Commutative Conjecture, they might have concrete things that need doing
Davidweiss2#3174: > What kind of work would you be most interested in doing?
@Daj
I would love to help with computer resources and optimization of the code base.
Davidweiss2#3174: How can I make my computer gpu available for use?
Daj#7482: We could definitely use large amounts of CPUs (and network bandwidth) for data processing. We're currently getting our hardware for training from Google's TFRC program, so we don't need GPUs unless you have hundreds of them available
Daj#7482: As for optimization, what kind of coding experience do you/your team have? Our model is coded in Mesh Tensorflow and runs on TPUs, which is a bit exotic
Daj#7482: Unfortunately, a good chunk of our team is American so probably currently asleep hah
zphang#7252: 🙃
StellaAthena#3530: @Davidweiss2 The #1 thing you could do in terms of impact is build out the framework for evaluating the model. While the model is mostly done (there are some bug fixes and performance upgrades to squeeze out), the code for evaluating how well it performs is more or less non-existent. We would like to replicate every experiment that OA ran GPT-3 on.
StellaAthena#3530: Also, if you have the ability to set and forget the training of a neural network on ImageNet that would be very useful to #the-rad-lab. Unfortunately we want to modify the standard training dataset and so we can’t just download the pre-trained weights.
Sid#2121: oh yeah pinging @zphang I'm available to help integrate evaluation harness with gptneo when you are
StellaAthena#3530: @Sid we need to build the harness first 😛
Sid#2121: I thought @zphang said it was working for gpt2/3
StellaAthena#3530: The framework is compatible with GPT-2/3, but the harness currently contains 2 tests (out of ~10).
StellaAthena#3530: It’s not ready to be used on GPT-Neo, unless you mean a test run to ensure that the code bases are compatible
bmk#1476: Aren't there a lot more than 10 tests in the gpt3 paper
StellaAthena#3530: IDK, there’s at least 10 and it’s a number much larger than 2.
StellaAthena#3530: I picked it because it gets the point across and I didn’t want to check 😛
StellaAthena#3530: Speaking of which, I should go make Issues for each of the OpenAI test cases.
bmk#1476: Aren't there a lot more than 10 tests in the gpt3 paper
StellaAthena#3530: Uh yes? I just admitted that.
stellie#3553: wait, is the original GPT-3 model unreleased? this project makes a ton more sense now
Davidweiss2#3174: > We could definitely use large amounts of CPUs (and network bandwidth) for data processing. We're currently getting our hardware for training from Google's TFRC program, so we don't need GPUs unless you have hundreds of them available
@Daj I am no expert coder, but my team is a team of young hackers who will love to get their hands dirty.
Daj#7482: > wait, is the original GPT-3 model unreleased? this project makes a ton more sense now
@stellie Correct
Daj#7482: > @Daj I am no expert coder, but my team is a team of young hackers who will love to get their hands dirty.
@Davidweiss2 As @StellaAthena said, getting them involved with writing evaluations for our models might be a good task then
Davidweiss2#3174: > @Davidweiss2 As @StellaAthena said, getting them involved with writing evaluations for our models might be a good task then
@Daj On it.
Do we have a project tasks board?
Daj#7482: Yes thanks to @StellaAthena we've been becoming a lot more organized lately haha, I'd wait for her to give you the details when she finds the time
Daj#7482: Are you like a professor and these are your grad students?
Davidweiss2#3174: Yes
Daj#7482: Very cool!
Daj#7482: So yes, if Stella doesn't mind, I'd defer to her to help onboarding you and your students, since she's been more directly involved with the evaluation project
StellaAthena#3530: Howdy
Davidweiss2#3174: Thank you, I will wait for her to respond.
StellaAthena#3530: Why don't you DM me?
StellaAthena#3530: > wait, is the original GPT-3 model unreleased? this project makes a ton more sense now
@stellie Not only is GPT-3 unreleased, but the *training dataset* is unreleased as well.
stellie#3553: I mean, common crawl is publicly available; it's enormous, but theoretically if you were determined enough you could make your own dataset from it
StellaAthena#3530: Yes (and we are doing so) but several of the other datasets are not
stellie#3553: ah, i thought it was all common crawl
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/755808261210374144/unknown.png
StellaAthena#3530: CC is public, but they don't document how they process and filter it. WebText2 is public, Books1 and Books2 are not public and they're vague about the contents of them. Wikipedia is public.
bmk#1476: > Uh yes? I just admitted that.
@StellaAthena sorry I had a discord malfunction
bmk#1476: I think discord tried to send a message just as it disconnected so it sent the message again when I reconnected
Kazumi#1297: I noticed the "what you need is retrieval" paper isn't double columned, is double columned actually easier to read?
StellaAthena#3530: @Kazumi Who are you talking to?
Kazumi#1297: general consensus? anyone who has an opinion
zphang#7252: usually the format is dictated by the venue being submitted to (or whichever template you downloaded when you started writing :p )
StellaAthena#3530: Oh you’re talking about the paper we are writing.
StellaAthena#3530: Yeah, formatting is generally determined by the venue. I just threw together a minimal formatting as a placeholder.
StellaAthena#3530: I’m not sure why but I read this as a data question at first.
Kazumi#1297: for me, double column is kind of hard to read, with so many line breaks per sentence
Daj#7482: D&D books trained me to like double column hahaha
StellaAthena#3530: It’s generally not something that we have control over, unfortunately. I mean, we can format the arXiv version however but publishers set how you need to format your stuff
StellaAthena#3530: Speaking of publishers, my paper on the complexity of *Magic: the Gathering* is officially published: https://doi.org/10.4230/LIPIcs.FUN.2021.9
Kazumi#1297: I just noticed because the paper with single column felt weirdly easier to read than usual
Noa Nabeshima#0290: We should ask mathematicians who do proof-verified learning for their estimates of powerful, general learning techniques coming out of mathematics in the next N years
We should also ask them for examples of when a more applied field has been transformed by a good formalization, things of that nature
Anyone here have intuitions about the probability of powerful insights from mathematics that change AI timelines in a surprising way?
StellaAthena#3530: @Noa Nabeshima most of that comment was about AI impacting mathematics, and then you switched to mathematics impacting AI. Was that deliberate?
StellaAthena#3530: We’ve made huge advances in formal proof verification, but we are nowhere near an original computer generated proof of a problem a human can’t solve.
Noa Nabeshima#0290: Oh, I mostly am interested in information about mathematics influencing AI
StellaAthena#3530: The G-CNN breakthrough in non-Euclidean deep learning comes to mind
StellaAthena#3530: String theory is arguably an example. Witten’s work on it was so fundamental that he became the first (and so far only) person who isn’t a mathematician to win a Fields Medal.
StellaAthena#3530: TBH, a lot of cutting edge fundamental physics is an example of formalisms driving physics discoveries
Noa Nabeshima#0290: When I say proof-verified learning I actually mean something different than what I think you thought: I mean you can show some guarantees about the learning, not that we are searching for proofs in some space
StellaAthena#3530: Ah
StellaAthena#3530: That’s generally called computational learning theory
Noa Nabeshima#0290: By G-CNN do you mean this? https://arxiv.org/pdf/1512.07729.pdf
Noa Nabeshima#0290: no, you mean this https://arxiv.org/abs/1501.06297
StellaAthena#3530: No I mean this: https://arxiv.org/abs/1602.07576
Noa Nabeshima#0290: haha, thanks
StellaAthena#3530: Non-Euclidean CNN theory and practice is clearly divisible into before and after this paper. Before this paper non-Euclidean deep learning was extremely *ad hoc* with wildly different ideas about “what matters.” Almost everyone now accepts that this is the right formalization for CNNs, and the right way to extend them to non-Euclidean spaces. Sometimes what this framework tells you to do is computationally intractable or practically inadvisable and so people resort to ad hoc methods, but those ad hoc methods often have the goal of approximating what this paper tells you to do.
Noa Nabeshima#0290: Oh, that's fascinating. Thank you for sharing
bmk#1476: How much mathematical background is needed to read this paper
StellaAthena#3530: A basic understanding of what a group is and their connection with symmetry
StellaAthena#3530: I’m of course overemphasizing this paper, and many of its ideas were inspired by others. But it’s the first systematic description of what’s currently accepted as the right way to do it. For a more detailed explanation of the ideas involved as well as prior and subsequent papers, see this excellent survey: https://arxiv.org/abs/2004.05154
StellaAthena#3530: Another paper that deserves to be called out specifically is this one by Risi Kondor at Chicago: https://arxiv.org/abs/1802.03690
bmk#1476: Define basic
bmk#1476: Ive been bitten by people who have.. very different definitions of basic than me
StellaAthena#3530: If you’ve been to the first two meetings of an intro course on group theory you’re fine
StellaAthena#3530: “What is a group” is necessary
StellaAthena#3530: Almost nothing about the theory of groups is necessary.
StellaAthena#3530: Watch this 20 minute video and you’re good: https://youtu.be/mH0oCDa74tE
Daj#7482: Yea question about the monster
Daj#7482: wtf yo
bmk#1476: Ok group theory happens to be a field in which I actually know more than nothing
bmk#1476: I was afraid the answer would be something something xyz type of complicated groups
StellaAthena#3530: Wow this video is great
StellaAthena#3530: 3Blue1Brown is a phenomenal instructor
StellaAthena#3530: (I had actually recommended it without viewing, due to how highly I hold 3B1B’s pedagogy)
bmk#1476: Also I don't know what a convolution is
bmk#1476: Or a correlation
bmk#1476: In this context
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/755882544536485908/Screenshot_2020-09-16-14-07-06-893_com.google.android.apps.docs.png
bmk#1476: (just doing a quick skim over rn to take stock of what I need before reading it)
StellaAthena#3530: @bmk the cross correlation measures how much two sequences agree when you shift one of them. It is quite literally the correlation (from stats) of f with a shifted ψ. The input is the amount that it is shifted. So you plug in a shift and you get told how well f and ψ shifted by g line up
StellaAthena#3530: This is widely used in signal processing, to talk about time-lags between signals.
StellaAthena#3530: The convolution is a slight modification of this concept, where instead of shifting by g you shift *backwards* by g. The reason for this is abstract and unimportant, but it’s related to why you typically want to conjugate the transpose of a complex matrix.
StellaAthena#3530: I would actually represent those formulas using vectors to cut down on a sum. It may help to know that the inner sum is just f(y)•ψ(gy)
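To make the flip concrete in the familiar planar (non-group) case, a quick NumPy check: `np.correlate` slides ψ over f without flipping it, while `np.convolve` reverses ψ first.
```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0])
psi = np.array([1.0, 0.0, -1.0])

print(np.correlate(f, psi, mode="valid"))  # [-2. -2.]  slide, no flip
print(np.convolve(f, psi, mode="valid"))   # [ 2.  2.]  psi reversed first
```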
bmk#1476: Hmm ok, and I presume the reasons for doing correlation/backward correlation make sense in context
StellaAthena#3530: It’s a lot of math to justify, but yes
Louis#0144: I was working a while back on equivariance in attention mechanisms
Louis#0144: just as a small few week side project
Louis#0144: and it turns out its a super cool way to look at the kind of knowledge an LM stores for particular tokens
Louis#0144: for instance you can show in some contexts that poodle and labrador are equivariant
Louis#0144: (backproping on trying to keep some metric on the attention mechanism constant while you look for all tokens that satisfy that attention vector)
Louis#0144: basically, if you are doing self attention on n tokens you have n attention vectors. So you want to explore the space where the dot product between some subset of them is kept constant
Louis#0144: I wanted to link that work to like basins of attraction in hopfield networks back in april
Louis#0144: but I got sick with covid
Louis#0144: :/
Louis#0144: so I wrote a blog post about it instead
Louis#0144: dont mind my crazy ramblings :^)
Louis#0144: its actually turning into a project eventually but not till late next spring
Louis#0144: I convinced my advisor its worth his time
Louis#0144: > Speaking of publishers, my paper on the complexity of *Magic: the Gathering* is officially published: https://doi.org/10.4230/LIPIcs.FUN.2021.9
@StellaAthena wasnt this years ago omg how did it take so lon
Louis#0144: long
bmk#1476: > International Conference on Fun with Algorithms
bmk#1476: This sounds like an amazing conference
StellaAthena#3530: @bmk it is! You can see the list of accepted papers here: https://sites.google.com/view/fun2020/acceptedpapers
StellaAthena#3530: @Louis the preprint came out two years ago. Fun with Algorithms only meets every other year so we had to wait to submit it. The conference was just this past June and the official proceedings are now out
Louis#0144: ohh
Louis#0144: ok
Abecid#5364: Are there any undergrads here
Jman005#7755: @shawwn hey just wondering, I was trying out your gpt-2 tpu training the other day, wouldn't work because apparently the GCP storage you were using for it expired (I think it has to load the models from there?). Any idea if it's possible to not use gcp storage, couldn't find any other obvious alternatives online
shawwn#3694: sure. if you hop into the ML discord server (check #communities) then I can help get you set up on my gpt fork
shawwn#3694: that was one of the original reasons I wrote it ... wanted to be able to train gpt in colab without needing a cloud bucket
Jman005#7755: oh thanks okay
shawwn#3694: (this discord server is project-focused, and is for replicating GPT-3)
Jman005#7755: yeah I'm aware lol I just noticed you were on it
Jman005#7755: server #1 or #2?
shawwn#3694: lol I forgot there are multiple servers now
shawwn#3694: the catface one.
shawwn#3694: TPU Podcast
FractalCycle#0001: @Abecid i'm an undergrad rn
Abecid#5364: @FractalCycle DMed you
Noa Nabeshima#0290: @Abecid I'm an undergrad too 🙂
Daj#7482: I'm technically an undergrad too lol
Abecid#5364: Yeah that's insane
Daj#7482: I had a few years before undergrad of dicking around haha
StellaAthena#3530: I’m currently “under” a “grad” but I don’t think that’s what you mean lol 😛
bmk#1476: https://twitter.com/numpy_team/status/1306268442450972674?s=19
bmk#1476: Hey, a paper about this new "numpy" library just came out
zphang#7252: Python already has a built-in array library, seems pretty unnecessary to me. Will only add dependency bloat.
https://docs.python.org/3/library/array.html
Sid#2121: I don't know, I can see this becoming pretty big in the future
zphang#7252: https://arxiv.org/abs/2009.06489
Ravna#1831: My reaction to The Hardware Lottery paper: rejecting a real free lunch that's orders of magnitude better is either very far-sighted behavior (low probability), or just the typical reaction of an old-time artisan against new tech (high probability). Hardware specialization might lead us into a local minimum of computing technology, but most of the time, local minima are better than hypothetical global minima that we don't know how to find.
researcher2#9294: Looks like a great history of the field
researcher2#9294: And just the right amount of group theory
Stephen5311#6349: @Sid I could help with webtext processing
Sid#2121: hey @Stephen5311 let's move into #the-pile
Dal#7192: Hello all. I'm mostly new to the field but one of my hobbies is meta-cognition. I was hoping you guys could help me clarify something and maybe point me in the right direction.
Dal#7192: I have a pretty robust model of intelligence sketched out. E.g. I can map how & why one starts from sensory data and processes it into models, decisions, social interaction, goals.
Dal#7192: Is that considered well-understood or is that potentially novel/helpful?
genai (Immortal Discoveries)#0601: You're new to the field but have a robust model sketched out... huh? Are you saying you know a lot about AGI?
Dal#7192: I keep up with the AI/Control theory headlines but have very little experience in the specific structures of modern computational neural models
bmk#1476: I'd recommend you to maybe take a look at the established literature? A "robust model of intelligence" is very, very hard (one could argue that it's an unsolved problem), so the probability of any of us coming up with one is quite slim
Dal#7192: I can account for why GPT2 was underestimated and where there are likely a lot more overhangs in waiting
bmk#1476: Unless you mean a much narrower scope?
Dal#7192: Much broader
bmk#1476: that's a massive unsolved problem in philosophy, afaik
genai (Immortal Discoveries)#0601: explain how to improve GPT-2
Dal#7192: E.g. If you treat information in specific ways, it gives rise to intelligent behavior
bmk#1476: what are those ways
Dal#7192: What's your objective, just learning better?
bmk#1476: our goal is safe aligned AGI
Dal#7192: I have definitely not solved control theory, if anything my approach leans heavily into black boxes 😅
bmk#1476: control theory?
Dal#7192: That's the term I'm familiar with for AI safety
bmk#1476: https://en.wikipedia.org/wiki/Control_theory i dont think this is what youre talking about
bmk#1476: ah
bmk#1476: we call it AI alignment around these parts
Dal#7192: eg /r/controlproblem
bmk#1476: do you have any opinions on how to make AI *safe and aligned*?
Dal#7192: Yeah, build an oracle you can talk to and ask it
Dal#7192: THAT we can do
bmk#1476: and?
Dal#7192: And trust it? Or don't
bmk#1476: theres no alignment or safety anywhere here
Dal#7192: I can map data to human compatible communication, beyond that is beyond me for now
bmk#1476: ok so you're thinking less about safety and more about getting to AGI in the first place then?
Dal#7192: Intelligence theory more than AGI theory, yes
bmk#1476: our problem rn is we're pretty confident we can do AGI in a few decades
Dal#7192: At this point I don't find them distinct from each other
bmk#1476: but we have absolutely no idea how to do *safe* AGI
bmk#1476: and that's really important
Dal#7192: So that would be to say the structures of intelligence are reasonably well understood
bmk#1476: well, do you mean intelligence in general or human intelligence?
Dal#7192: No distinction to me
bmk#1476: we have only vague glimpses at what makes human intelligence work, mostly because humans are fucking weird
Dal#7192: That sounds like I may have something to contribute then
bmk#1476: but agi will probably only resemble humans superficially, just like planes resemble birds only superficially
Dal#7192: *If* my model correlates to reality I will be very surprised if we develop anything resembling a general intelligence that isn't very weird/quirky
bmk#1476: ok so what is your model?
Dal#7192: The part that covers world model communication relies on the agent simulating its communication partner's model, i.e us
bmk#1476: can you elaborate on that
Dal#7192: The nature of communication requires modeling what your partner knows and what information is salient to them
Dal#7192: Achieving the goal of communication means finding the most salient information, and as such being familiar with the partner's own world model to whatever extent
Dal#7192: In that way it's utterly unsurprisingly that the first overhang is with the model that uses human language
bmk#1476: so what testable predictions does this model make?
bmk#1476: so that we can run experiments
Dal#7192: Give me a moment to figure out some constructive language
Dal#7192: With the disclaimer that I am relatively unfamiliar with the ways data is treated in modern ML algorithms and have not yet meshed my models onto any, here is my general outline. Perhaps it will at least provide some food for thought.
Dal#7192: ```
General hypotheses:
- Knowledge is chiefly sorted by salience/information entropy of the patterns in input data, with models constructed by patterns of patterns, with many mechanisms for streamlining, compressing, or discarding duplicate or low-salience data.
i.e. Living intelligences are intentionally imperfect at learning and efficient artificial intelligences will mirror this.
- Information contributes to world models and sets that fresh input data corresponds to or conflicts with.
- Any sufficiently generalized pattern recognition model has the components necessary for intelligence, whether text, audio, image, or other input.
i.e. Many non-GPT models could demonstrate the novel attributes ascribed to it.
- Communication is achieved by modeling the communication partner's model and outputting salient signals to achieve the desired understanding.
i.e. Any agent with adaptive communication processes will require/acquire your own world model until it is capable of producing a sufficiently salient signal.
- Most human/mammal idiosyncrasies are adaptations to address processing capacity limits, particularly adapted to social environments.
e.g. Emotions, trauma, fixations, generalizing, "gut" prediction.
- Because idiosyncrasies are more efficient, a fully realized intelligence will exhibit similar behavior to constructively process input when reaching its limitations.
i.e. If we don't build one first, when an AI is asked to build a more performant AI it will build an imperfect, idiosyncratic one.
Consequently, the control problem will always be inexact, defined by whatever parameters we're able to communicate. I'd posit that our best bet is set up a Reverse Genie, an oracle that takes our imperfect wish and reinterprets it as we would desire it to be understood.
```
Dal#7192: ```Specific poorly informed hypotheses:
- A generalized intelligence model could be created and would resemble a living creature to the extent of its relative processing power (insect->mouse->cat->dog->human->SI)
- The primary bottleneck of current intelligence models is information streamlining
- Existing GPT functionality could build a pretty good intelligence model by having it filter "What would a human do?" questions as an internally consistent model
- - A dataset like the above could be used to build a functional Reverse Genie```
StellaAthena#3530: > With the disclaimer that I am relatively unfamiliar with the ways data is treated in modern ML algorithms and have not yet meshed my models onto any, here is my general outline. Perhaps it will at least provide some food for thought.
@Dal weren’t you just making concrete claims about improving GPT-2? How could you know how to do that if you don’t know how GPT-2 works
Dal#7192: Could you point me to what I said that could be taken that way? I posited that GPT2 was underestimated
StellaAthena#3530: Also, there’s an extensive literature on communication that I think would be interesting to you.
StellaAthena#3530: See, e.g., Clark, Herbert H.; Brennan, Susan E. (1991), Resnick, L. B.; Levine, J. M. (eds.), Perspectives on socially shared cognition, American Psychological Association
Dal#7192: Much appreciated
StellaAthena#3530: Deanna Wilkes-Gibbs is a third person to look into
StellaAthena#3530: The three of them have been writing about the things you’re alluding to for decades
StellaAthena#3530: > Could you point me to what I said that could be taken that way? I posited that GPT2 was underestimated
@Dal oh, I misread what you wrote exactly
Stephen5311#6349: @researcher2 Danteh used to be a Minecraft PvPer
Stephen5311#6349: Then he switched to Overwatch
researcher2#9294: ❤️ overwatch
Stephen5311#6349: Because he kinda became imo the best Minecraft PvPer of all time
Stephen5311#6349: 2nd would be Tylarzz
Stephen5311#6349: Who is now a pro Fortnite player
Stephen5311#6349: But yeah, he went from no FPS experience
Stephen5311#6349: To pro Overwatch in a year
researcher2#9294: damn
Stephen5311#6349: And he went from being one of the worst Tracers in year 1
Stephen5311#6349: (2018)
bmk#1476: Wait, you can do things in minecraft other than build CPUs? This is news to me
Stephen5311#6349: He is now the best Western Tracer in Overwatch
Stephen5311#6349: Kinda crazy
Stephen5311#6349: How good he is
Stephen5311#6349: and how fast he learns
Stephen5311#6349: He just has the genes
Stephen5311#6349: lol
researcher2#9294: jealous
researcher2#9294: though not jealous of arthritis at 30
Stephen5311#6349: https://twitter.com/Corey_OW/status/1258908153820975104
Stephen5311#6349: This is another OWL player
Stephen5311#6349: Now he plays pro Valorant
Stephen5311#6349: I didn't know this till now
Stephen5311#6349: But I guess this was his first stream playing it LOL
researcher2#9294: ugh, the precision
researcher2#9294: never quite had that level of refinement
Stephen5311#6349: Same lol
researcher2#9294: Any character that didn't require consistent headshotting I could excel at.
researcher2#9294: I used to demolish servers with roadhog pre (2018?) nerf. Zarya, Winston (hurr), all tanks basically.
researcher2#9294: I actually started to get ok with headshotting at ground level, but the verticality of overwatch is crazy
researcher2#9294: Oh and junkrat ❤️
researcher2#9294: love me some junkscum
researcher2#9294: Do you play card games at all? I was hooked on magic arena last year, this year gwent
Stephen5311#6349: I don't play any card games
Stephen5311#6349: Other than like with a deck of cards sometimes
Ken#8338: This article about the prediction of when 'Transformative' AI might arrive is very in depth (73,000 words) https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines
gwern#1782: _has been dreading reading that because it's so long and detailed_
Ken#8338: I think you would enjoy it. And might go with your comment here ( https://www.lesswrong.com/posts/SZ3jDHXHb4WF4jmbr/where-is-human-level-on-text-prediction-gpts-task) regarding GPTx not reaching human level for the perplexity score until sometime after 2040 (or at least that was what I gathered).
gwern#1782: hm, well, that depends on how you want to calculate it - keep extrapolating out the peak-AI-compute-doubling-every-2-years or look instead at Hernandez's 1/60th the cost every decade and then look at specific budgets like $1b or $10b
Ken#8338: The long transformative AI timeline ends up with a 2052-ish projection. But I think his definition of transformative might be slightly above human level (10^15 FLOPS for a human vs 10^16 FLOPS for the transformative tech).
gwern#1782: https://openai.com/blog/ai-and-compute/ says doubles every 3.4 months, so 2.2 million X more compute would be 21 doublings away? or 6 years. seems a little aggressive
Ken#8338: I thought you had a very reasoned take with: "Let's see, to continue my WebText crossentropy example, 1000x reduces the loss by about a third, so if you want to halve it (we'll assume that's about the distance to human performance on WebText) from 1.73 to 0.86, you'd need (2.57 * (3.64 * (10^3 * x))^(-0.048)) = 0.86 where x = 2.2e6 or 2,200,000x the compute of GPT-3. Getting 2.2 million times more compute than GPT-3 is quite an ask over the next decade or two."
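Both figures check out; a quick sanity check of the arithmetic in those two messages:
```python
import math

# Solve 2.57 * (3.64 * (10^3 * x))^(-0.048) = 0.86 for x:
x = (0.86 / 2.57) ** (1 / -0.048) / 3.64e3
print(f"x = {x:.1e}")  # ~2.2e6, the compute multiple quoted above

# At one doubling every 3.4 months (the AI-and-Compute trend):
doublings = math.log2(x)
print(f"{doublings:.0f} doublings ≈ {doublings * 3.4 / 12:.1f} years")  # ~21, ~6 years
```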
Ken#8338: I am guessing the doubling time will slow down, but who knows.
gwern#1782: yeah, but I don't think it's going to keep going for the next 6 years
gwern#1782: I think it may be not too far from breaking down. you see anyone training GPT-3 successors beside eleutherai?
gwern#1782: even MS's deepspeed stuff only demonstrates it *could* train 1t models. they don't actually.
Ken#8338: No, but you are far more plugged in than me.
Ken#8338: If you had to hazard a wild guess what time frame would you bet your money - if someone gave you free money to bet 🙂
gwern#1782: and at least from the countless gpt-3 tweets I've read, and the discussions on the OA API slack, I don't think that OA is finding gpt-3 to be that profitable
Ken#8338: One would think google has at least tried internally to see how far things can scale, even if they are not full believers in the scaling hypothesis.
gwern#1782: let's see... hernandez has a halving of costs every 16 months
bmk#1476: i'm about 90% confident that there are non-OA people out there currently training GPT3 and larger with the intention to let the world know (i.e not military)
gwern#1782: you'd think so but my god it's been like half a year and crickets
bmk#1476: Atop DeepSpeed, where else do you think M$ is headed?
gwern#1782: MS didn't bother to. and who's going to pony up?
bmk#1476: why would they boast that they can train models with trillions of parameters using newfangled tech and then just.. not?
gwern#1782: why does IBM write little 'IBM' logos using individual atoms?
gwern#1782: why does MSR show off how many gigabytes they can encode into synthesized DNA and read back accurately?
bmk#1476: so theyre just doing this to show that they can but they dont actually have any good reason for pulling it off?
bmk#1476: also, i still have my doubts that the folks at google aren't training gpt3 solely because they dont believe in the scaling hypothesis or something
gwern#1782: yeah, big tech companies do all sorts of stunts to show off. heck, what was Deep Blue all about?
Ken#8338: With Google level money you would think some of the deep learning researchers would get together and test out how far they can reasonably scale a NLP model, even for the sake of checking out a potential competitor. I can imagine the GPTx being a next level search engine - not for facts but for discussions. |
gwern#1782: you think Google put up $1m in prizes for lee sedol because it was vitally important to datacenter optimization research?
gwern#1782: so let's see... Hernandez is 'cost halves every 16 months / 1.3 years'. if you want to bring the current $10t ($5m * 2.2e6) down to just $1b (1e9), you have to wait...
bmk#1476: google didnt announce that they had developed a truly marvelous Go AI that they didn't feel like training
bmk#1476: they went the full mile
gwern#1782: 18 years?
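(The 18-year figure checks out under the stated assumptions - a $5m GPT-3 training run, a 2.2e6x scale-up, and Hernandez-style cost halving every 16 months; a quick sketch:)
```python
import math

cost_now = 5e6 * 2.2e6                     # ~$1.1e13, i.e. roughly $10t
target = 1e9                               # bring it down to $1b
halvings = math.log2(cost_now / target)    # ~13.4 halvings needed
print(f"{halvings * 16 / 12:.1f} years")   # ~18 years at 16 months per halving
```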
bmk#1476: M$ announces deepspeed with little fanfare- like, heck, why would they not even bother to put up an arxiv paper
gwern#1782: well, they had one for the RAM stuff
gwern#1782: but yes I was annoyed how badly Turing-NLG was announced/evaluated
Ken#8338: > 18 years?
@gwern that fits in the framework of some other numbers I have played around with, but I was nowhere near as rigorous as the LessWrong-posted 73,000-word article.
gwern#1782: https://www.metaculus.com/questions/3479/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of/ 🤔
bmk#1476: 2031 median is scary
Ken#8338: When I first read the criteria of the metaculus question of when will we see the first AGI I could see why people are predicting 2031 (75% percentile of a SAT like test), but then when I read this: "By "unified" we mean that the system is integrated enough that it can, for example, explain its reasoning on an SAT problem or Winograd schema question, or verbally report its progress and identify objects during videogame play. (This is not really meant to be an additional capability of "introspection" so much as a provision that the system not simply be cobbled together as a set of sub-systems specialized to tasks like the above, but rather a single system applicable to many problems.)" - that seems like a reach, at least currently.
bmk#1476: i dont think it's too much of a reach
Ken#8338: @bmk I would trust your judgement. Maybe not that much of a leap if we are talking 11 year time frame.
bmk#1476: something something people overestimate next year, underestimate next 10
bmk#1476: i dont think much will change in 1 year
Ken#8338: True
bmk#1476: maybe a few Tn, up to 10Tn
bmk#1476: no *massive* differences |
bmk#1476: but 10 years from now.. that's a *long* time
bmk#1476: DL didnt exist as a field 10 years ago
Ken#8338: I think I remember you mentioning that you can imagine 100 trillion parameter models by 2030, or was it 1 quadrillion?
bmk#1476: sounds like something i would say
bmk#1476: yeah wouldnt be surprised if that happened
bmk#1476: dont remember the specifics but something along those lines
Ken#8338: I can imagine if we reach 1 quadrillion parameter models that AGI as defined at the metaculus site could be reached.
bmk#1476: yeah i would think so too
bmk#1476: data requirements table https://cdn.discordapp.com/attachments/729741769738158194/757443583769837599/unknown.png
bmk#1476: pile v1 is just big enough for gpt3
bmk#1476: pile v2 is just big enough for, like, 2T?
bmk#1476: Pile v3 (w full HUMONGOUS) would be enough for just over 10T
gwern#1782: _summarizes his projections at https://www.gwern.net/newsletter/2020/05#fn17_
Ken#8338: Yes, I was reading this table and trying to do some estimates of whether there will be enough words that have been written, recorded, and accessible in a digital format to train a 1 quadrillion or 100 quadrillion parameter model, and I can't exactly remember, but 'words' might become a limiting factor (unless of course we use data augmentation from the best NLP models available).
bmk#1476: i dont think there exists enough high quality text (that we here at eleutherai could reasonably obtain) for more than 20 or 30T though
bmk#1476: so to go higher you might need to start adding in images
bmk#1476: and ofc there exist so many images that you wont run out for a while
Ken#8338: > _summarizes his projections at https://www.gwern.net/newsletter/2020/05#fn17_
@gwern I was just re-reading your post for the third time early today.
Ken#8338: @bmk Yes, plenty of data I would think if we go multi-modal. |
gwern#1782: yeah, once you start hitting 10 to 100t, I think you either need to start training on social media (how many words ya need? twitter's got'em!), or finally stop dragging your heels and go multimodal
bmk#1476: is twitter *that* big?
bmk#1476: X - doubt
Ken#8338: agree with the multi-modal - would think it would help 'generalization'
gwern#1782: twitter claims 200 billion tweets a year
gwern#1782: and that's just one social media site
bmk#1476: 28 TB a year
bmk#1476: hmm, not bad
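(The 28 TB/year figure follows directly from assuming an average tweet of ~140 bytes; a back-of-envelope check:)
```python
tweets_per_year = 200e9        # Twitter's claimed figure, quoted above
bytes_per_tweet = 140          # rough average size, an assumption
raw_tb = tweets_per_year * bytes_per_tweet / 1e12
print(f"~{raw_tb:.0f} TB of raw tweet text per year")  # ~28 TB
```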
gwern#1782: figure that you throw out 99% of it as spam, retweets, bots, garbage (no idea if that 200b number does it or not), toss in a dozen other websites (remember all the foreign language social media! like China! 1b people there as or more active than you lot)
Ken#8338: From reading between the lines it sounds like Open AI will go multimodal soon.
bmk#1476: also i miscalculated a bit for the size of CC, you could probably get like 30T with CC
bmk#1476: so all tweets ever + all of CC is enough to get us to 100T, most likely
gwern#1782: @Ken you'd think so, but there's been no hint of that on the API end. it's pretty thoroughly text-centric right now, no flexibility. I'm not sure even the researchers are doing multimodal yet. I hope they are but there's no visible evidence
bmk#1476: i know you're a big fan of multimodal, gwern, and i think going from 100T -> 1Q will probably make it necessary to go full multimodal
gwern#1782: (hopefully by the law of dramatic irony OA will tomorrow dump a 1t image+text transformer with a fancy blog and writeup now that I've said that)
bmk#1476: ok so the line in the sand is 100T for the biggest text-only transformer we have the data for
bmk#1476: so we can get three piles out
bmk#1476: 1TB, 10TB, 100TB (upsampling by 1.5 for 100T)
Ken#8338: So you can see EleutherAI pulling off a 100T model in the foreseeable future (5 years)?
bmk#1476: haha nope |
bmk#1476: well
bmk#1476: i hope haha
bmk#1476: we have nowhere near the hardware necessary
bmk#1476: no, 100T will be done by OA or google or M$ or fb or someone big
Ken#8338: All comes down to affording the compute
bmk#1476: and probably about 10 years from now
bmk#1476: maybe a bit earlier
Ken#8338: At least you are thinking ahead
gwern#1782: eleutherai will live on in a farm of tpus whirring away in a forgotten datacenter, slowing down each iteration due to a minor coding error... living on the Edge of Eternity
bmk#1476: to be fair, according to connor he thought gpt2 would be the last model replicated by a band of roving hackers
bmk#1476: so
bmk#1476: you never know
bmk#1476: i mean, i have up to 1T planned for eleutherai
bmk#1476: our original planning doc was titled `1T or bust`
bmk#1476: but it's hard to imagine going further (and no, MoE doesnt count)
Ken#8338: Roving hackers need hope that a 1 quadrillion to 100 quadrillion (wild guess) parameter model - human level equivalent - will be open sourced. Or the world will be too close to many dystopian novels 🙂
bmk#1476: 100Q is too out there for me to speculate on
bmk#1476: still, man, it's crazy to think that this might *actually* be the last time we ever get to replicate a SOTA LM before the scaling wars really start taking off and it becomes forever impossible for the common hacker
Ken#8338: That would be very impressive.
gwern#1782: there's always distributed computing. maybe MoEs aren't as dumb as I think they are - maybe the lottery ticket hypothesis smiles upon them |
bmk#1476: i mean, MoE parameters dont map cleanly so thats why i dont include them in any of my estimates
bmk#1476: i think a 100T MoE will be achieved much sooner than 10 years
gwern#1782: we should sacrifice a hecatomb of 1080tis at the shrine of saint schmidhuber that he may guide the way of MoEs into the light so we can run agi@home
bmk#1476: what was that one project again?
bmk#1476: the one doing basically that, model_training@home
bmk#1476: what was it called again
gwern#1782: (yeah, I just forgot their actual name)
bmk#1476: we could join forces with em
bmk#1476: i dont remember their name either lol
kindiana#1016: i think moe parameters can get closer to regular parameters with clever model structuring imo
gwern#1782: their fault for not naming themselves 'agi@home'. I can remember *that*
bmk#1476: to be fair, their thing wasnt quite *that* ambitious
bmk#1476: we can fork theirs and claim agi@home for our 1Q MoE project
Ken#8338: The other thing we need to consider in our calculations is what new forms of architecture we will either discover or find via NAS. There will be more researchers and more compute spent by people like google to find better fundamental architectures (think how long evolution has been searching)
bmk#1476: when it comes to NAS
bmk#1476: X - doubt
bmk#1476: we dont do nas here
genai (Immortal Discoveries)#0601: anyone here know if GPT can be trained on text, output a prediction for a prompt i decide to feed it, then trained on more text...and so on?
genai (Immortal Discoveries)#0601: or is it a train first, finish up, then do ur prediction "thing"?
kindiana#1016: you can slap a linear classifier head to the end of gpt in parallel with the final embedding layer, and interleave the training between the two objectives |
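(A minimal PyTorch-style sketch of the two-headed setup kindiana describes; `backbone` and the tensor shapes are assumptions for illustration, not any particular codebase:)
```python
import torch.nn as nn

class GPTWithAuxHead(nn.Module):
    """Hypothetical wrapper: `backbone` is any decoder that returns
    hidden states of shape (batch, seq, d_model)."""
    def __init__(self, backbone, d_model, vocab_size, n_classes):
        super().__init__()
        self.backbone = backbone
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)  # next-token logits
        self.cls_head = nn.Linear(d_model, n_classes)              # the extra classifier head

    def forward(self, tokens):
        h = self.backbone(tokens)                        # (B, T, d_model)
        return self.lm_head(h), self.cls_head(h[:, -1])  # LM logits + class logits
```
Training would then alternate batches between a cross-entropy LM loss on the first output and a classification loss on the second, i.e. the interleaving kindiana mentions.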
genai (Immortal Discoveries)#0601: I'm not sure what you mean, so it can learn, predict, learn, predict, etc ?
bmk#1476: learning doesnt conflict with predicting
genai (Immortal Discoveries)#0601: cuz i'm trying to figure out why GPT hasn't been successfully run on the Hutter Prize, even if it doesn't exactly follow the rules, let me explain a bit:
genai (Immortal Discoveries)#0601: Correct me if wrong but GPT hasn't been tested on the enwik8 hutter prize challenge because GPT creates a huge neural network which ruins the whole point of its score? And is GPU based?
But 1) we can see the speed difference compared to the current winning algorithm's speed. And 2) if we train GPT as it sees data, then we would know it isn't storing the definite answers, and would get the actual compression, even though the net would still be huge - we know there cannot be that much novel information in there at least.
kindiana#1016: The hutter prize computational limits are far too strict
bmk#1476: gpt is too big
kindiana#1016: You can train it online, but you need a lot of data before it gets good
bmk#1476: it's not a very good compression algorithm when the limits of the challenge arent big enough to let you amortize away the size of a 350GB model
genai (Immortal Discoveries)#0601: "but you need a lot of data before it gets good"
how do we know its learning isn't linear?
kindiana#1016: You can look at the openai scaling plots
bmk#1476: it doesnt even matter how fast it learns
bmk#1476: gpt3 is 350GB
bmk#1476: thats far larger than the hutter set
genai (Immortal Discoveries)#0601: all AI i have seen have an accuracy curve that improves more slowly the more data you feed it, and they all seem to look the same; maybe i could compare them better though
kindiana#1016: You don't need to send the model though, just train it deterministically on both ends
kindiana#1016: Probs not gpt3 sized though lol |
genai (Immortal Discoveries)#0601: > gpt3 is 350GB
@bmk But above I asked: And 2) if we train GPT as it sees data, then we would know it isn't storing the definite answers, and would get the actual compression, even though the net would still be huge - we know it cannot be that much novel information in there at least.
bmk#1476: i have a feeling that that might go *slightly* over the compute limit
genai (Immortal Discoveries)#0601: yes, it'd be doing more compute, but it'd be faster, i'm betting
bmk#1476: if you trained gpt3 on the 1gb hutter set then it would very quickly hit the bayes error by just memorizing the hutter data as well as you possibly can
kindiana#1016: The limit is like 100 core hours iirc?
bmk#1476: also wait hold up
kindiana#1016: That's defs not enough, especially with no gpu
bmk#1476: *train deterministically on both ends*?
bmk#1476: how the hell do you do that
bmk#1476: sure, training while compressing is no big deal
bmk#1476: but decompressing?
kindiana#1016: Take 1% of the data, send it across
kindiana#1016: Train on both sides
bmk#1476: so youre only training on 1% of the data
kindiana#1016: Use trained model to compress 1% more
kindiana#1016: Retrain
kindiana#1016: Etc
bmk#1476: *retrain*?
kindiana#1016: Continue training |
bmk#1476: uh, im not sure i understand
genai (Immortal Discoveries)#0601: batches?
kindiana#1016: at t=0. the compressor trains the model on a part of the data, and sends that part of the data across without compression
kindiana#1016: the decompressor receives it and trains an identical model
kindiana#1016: then that model is used for compressing the next part of the data
kindiana#1016: and the model on both sides is trained on both batches
kindiana#1016: repeat until done
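(Rough pseudocode for the scheme kindiana just described; `make_model`, `train_on`, and the arithmetic-coding helpers are hypothetical names, and deterministic training on both ends is the load-bearing assumption:)
```python
def compress(chunks):
    model = make_model(seed=0)          # identical init on both sides (hypothetical helper)
    out = []
    for i, chunk in enumerate(chunks):
        if i == 0:
            out.append(chunk)           # first chunk goes across uncompressed
        else:
            out.append(arithmetic_encode(chunk, model))  # code under the current model
        model.train_on(chunk)           # deterministic update after each chunk
    return out

def decompress(blobs):
    model = make_model(seed=0)          # same init -> same predictions as the compressor
    data = []
    for i, blob in enumerate(blobs):
        chunk = blob if i == 0 else arithmetic_decode(blob, model)
        data.append(chunk)
        model.train_on(chunk)           # stays in lockstep with the compressor
    return b"".join(data)
```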
bmk#1476: ohh
genai (Immortal Discoveries)#0601: this is only done if want to decompress though, right?
kindiana#1016: i mean, thats the important part lol
bmk#1476: i mean, i guess it could work with *much* smaller networks
genai (Immortal Discoveries)#0601: i still have my 1 important question above though: And 2) if we train GPT as it sees data, then we would know it isn't storing the definite answers, and would get the actual compression, even though the net would still be huge - we know it cannot be that much novel information in there at least.
bmk#1476: >i have a truly marvellous compress-only compression algorithm that can compress every file smaller than the original but its source code is too large to fit in this discord messahe
genai (Immortal Discoveries)#0601: the key part of my question above is the end: yes, although the net gets big, we know it can only be storing what it saw so far, not the actual answer ahead of time.
kindiana#1016: yeah
kindiana#1016: but thats not really any different to a held-out test set
kindiana#1016: as a method of evaluating networks
genai (Immortal Discoveries)#0601: Because I want to see it do the compress test 🙂
genai (Immortal Discoveries)#0601: I want to see if it can beat the record of 100MB > 14.8MB
kindiana#1016: theres no way a gpt3 sized network will work for that little data |
kindiana#1016: theres an optimal network size for given data constraints, and gpt3 is wayyyy too big
genai (Immortal Discoveries)#0601: but its not gpt3, nor that big, it only sees the enwik8 (100MB) bit by bit as it tests on it...
bmk#1476: off the top of my head, you're probably looking for about 10M parameters for 1GB of data
kindiana#1016: yeah something like that is used by the sota hutter prize, it uses a LSTM language model which I'm pretty sure is trained as its evaluated
kindiana#1016: maybe transformers are more data/compute efficient, but 🤷
genai (Immortal Discoveries)#0601: but my proposal is fail-safe: if we train a GPT from scratch on the enwik8 100MB to compress it as it trains on it, it's ok if it's fast on GPU, we care only about the compression size because the speed makes up for the cheat of using GPU, and the big net is not bad for the test cuz it trains as it is evaluated and can't be storing any answers ahead of time.
genai (Immortal Discoveries)#0601: the net might not be storing it efficiently but we can at least see what it thinks
genai (Immortal Discoveries)#0601: the goal is to see if it compresses it more than the top record, regardless of net size
genai (Immortal Discoveries)#0601: as long as trains as goes on it
genai (Immortal Discoveries)#0601: and is faster than top winning algorithm
kindiana#1016: feel free to try it, but I think its going to be difficult to beat existing approaches with a purely learned implementation; the baselines from simple frequency/substring based approaches are already quite good, and at the initial stages the network is going to be pretty bad
genai (Immortal Discoveries)#0601: interesting point
genai (Immortal Discoveries)#0601: if only the best record didn't hand-code it so hard lol....darn it...
genai (Immortal Discoveries)#0601: i know they for example group similar wiki articles
genai (Immortal Discoveries)#0601: or want the code for the wiki code
genai (Immortal Discoveries)#0601: the <>[name]etcetc
genai (Immortal Discoveries)#0601: will have to sleep on that one, night fellas
spirit-from-germany#1488: I have an interesting idea how you could maximize the impact of your work. :)
What if you created a detailed tutorial series about how your code works, what issues and bugs you had to solve, how you trained it, how you made it run on the TPUs, ...
Then it would become really easy to replicate your work... It could encourage others to build on your shoulders and multiply the impact your project has
Daj#7482: I'm in favor of good documentation for sure, and also plan on writing some extended blog posts on both the meta and technical aspects of the project. Though tbh from my experiences getting GPT2 to run on TPUs in the past, I don't expect many people to have _that_ much interest in the technical details. Most devs don't use TPUs and don't want to switch from PyTorch (which is fair enough haha)
Daj#7482: But yes I agree that an unusually detailed description of the journey and pitfalls would be valuable
Louis#0144: https://twitter.com/kevin_scott/status/1308438898553638912?s=20
Louis#0144: @Daj
Louis#0144: !!!!!
bmk#1476: was
Daj#7482: lol is this fr
Louis#0144: yes
bmk#1476: M$
bmk#1476: does this mean beta access will end?
Daj#7482: Man the jokes write themselves
Louis#0144: yes
Louis#0144: yes to both
Daj#7482: I'm just numb at this point lol
bmk#1476: nooooooooooooooooooooooooooooooooooooooooooooo
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/758015776803258519/unknown.png
zphang#7252: what does exclusive even mean here?
bmk#1476: **Nothing changes for you or the applications you’re building with our API. ** |
zphang#7252: yea that quote mentions nothing that is "exclusive"
Daj#7482: wtf is wrong with OpenAI's PR
Daj#7482: Seriously
Daj#7482: Why can't they communicate _anything_ clearly?
bmk#1476: you gotta phone up jack and rant his ears off
Daj#7482: I'm considering it lol
Daj#7482: But I honestly can't think even of anything to say
zphang#7252: verbalize a "?" into the phone
Daj#7482: But it's not even confusing imo
Daj#7482: I think all I'd say is a verbalization of a sigh
Quill#9732: one of these two sources is just outright wrong
bmk#1476: maybe they have a different definition of the word exclusive
Daj#7482: ~~Is this a polyamory thing?~~
Daj#7482: Sorry lol
Louis#0144: LOL
Louis#0144: pls
zphang#7252: microsoft `console limited window launch exclusive`
cfoster0#4356: Exclusively license = only MSFT can offer it in addition to OpenAI
bmk#1476: ah, so it doesnt preclude oa from still offering it
cfoster0#4356: That's how I read it |
zphang#7252: so MSFT is the only reseller
bmk#1476: if that's the case, they picked possibly the worst way to word that
Daj#7482: What if I build an app that offers GPT3 functionality?
Daj#7482: Seems weird
Daj#7482: > if that's the case, they picked possibly the worst way to word that
@bmk this
bmk#1476: oa has been very strict about allowing "backdoor" access through other apps
bmk#1476: or rather, not allowing
Daj#7482: AI dungeon?
zphang#7252: yea I think there's some hard-to-enforce/define restrictions around free user-input
cfoster0#4356: Yeah that makes sense now
bmk#1476: if your app lets you use gpt3 for arbitrary things they might shut you down
bmk#1476: AI dungeon is too big, and anyways its gpt3-capabilities are neutered
zphang#7252: do we know how it was neutered?
bmk#1476: through training on AI Dungeon stuff
Daj#7482: Eh well this doesn't really change much
bmk#1476: and also all the state AI Dungeon passes around through the context
Daj#7482: If anything, "picking on microsoft" is a time honored hacker tradition, so we should keep going lol
bmk#1476: this doesnt change much, it further provides evidence for "OA has no idea how to communicate"
bmk#1476: which |
bmk#1476: doesnt change the prior much
bmk#1476: so yeah uh M$ bad
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/758017506219655219/unknown.png
bmk#1476: GPT-Clippy
Daj#7482: > GPT-Clippy
@bmk _literally a paperclip_
Louis#0144: NO
Louis#0144: NOT LIKE THIS
Daj#7482: If this was a black mirror episode I'd be booing at the screen
bmk#1476: omg it's perfect
Louis#0144: has Microsoft not seen the memes!!!
Louis#0144: ;-;
bmk#1476: clippy was a harbinger
Daj#7482: Why is the writing in this timeline so shit
Daj#7482: I'm updating towards the simulation hypothesis
bmk#1476: it's all such a terrible meme
Louis#0144: humans are just GPT-N+1
zphang#7252: it's called foreshadowing
bmk#1476: the good news is that i've updated away from "life has no meaning"
bmk#1476: the bad news is i've updated towards "the meaning of life is a shitty soap opera" |
Daj#7482: Does feel that way sometimes ngl
Daj#7482: Especially lately
bmk#1476: so uh
bmk#1476: we need to really focus on alignment
bmk#1476: i've been studying measure theory in preparation for reading the infrabayesian sequence
bmk#1476: if i understand correctly, that's pretty close to where the cutting edge is and i'm not really losing too much by not first reading all the other AF sequences, right?
Daj#7482: I pinged Jack about this communication let's see what he says hah
bmk#1476: the "bad communication" part?
Daj#7482: Yea
Daj#7482: And Infrabayesianism is so new no one knows if it's useful yet
Daj#7482: Read the other Sequences first
bmk#1476: ah
bmk#1476: embedded agency has been a *really* hard read so far
Louis#0144: infrabayesian
Louis#0144: wtf
Daj#7482: Infrabayesianism is so much worse than embedded agency lol
Daj#7482: I can't read the infra sequence to save my life
Daj#7482: You can also read other alignment stuff than MIRI
bmk#1476: is there a canonical intro
Daj#7482: IDA is more down to earth for example |
Daj#7482: The canonical intro is Rationality: A-Z haha
Daj#7482: And then the three AF sequences I guess
bmk#1476: is it even about alignment
Daj#7482: And then a disheveled mess of hundreds of disconnected blog posts
Daj#7482: > is it even about alignment
@bmk yes
bmk#1476: rationality a-z is more just about rationalist stuff
bmk#1476: or?
Daj#7482: The rationality is like the definitions in a math book
Daj#7482: Just groundwork to talk about alignment
Sid#2121: > I'm in favor of good documentation for sure, and also plan on writing some extended blog posts on both the meta and technical aspects of the project. Though tbh from my experiences getting GPT2 to run on TPUs in the past, I don't expect many people to have _that_ much interest in the technical details. Most devs don't use TPUs and don't want to switch from PyTorch (which is fair enough haha)
@Daj catching up on this convo but our code *does* run on GPUs as well, and can do model and data parallelism over multiple GPUs
bmk#1476: i'm going to be completely honest
Sid#2121: which should be of interest to people
Daj#7482: Ah right I forgot we work on GPU
bmk#1476: the idea of reading like 2000 pages of stuff before even getting to the AF stuff which is itself extremely long and difficult to read doesn't sound too attractive
Daj#7482: Yea that's research/being a hero in a nutshell, not very attractive lol
Daj#7482: We could maybe do like a book club or something
bmk#1476: that's more reading than all of the mathematics i've ever learned combined, probably
Daj#7482: ¯\\_(ツ)\_/¯ |
Daj#7482: You wanted to know the canon
Daj#7482: You can get in through other ways
Daj#7482: IDA is pretty independent
bmk#1476: ok so yeah, let's do a reading club
bmk#1476: there is absolutely no way i'll get through rationality a-z on my own before the singularity without external pressure
Daj#7482: That might be fun.
Daj#7482: fwiw I do recommend building a habit of reading several hours a day if you can
Daj#7482: Best thing I've ever done
Daj#7482: We could make it a server wide book club, maybe a weekly call or something
bmk#1476: What do you think a reasonable pace for A-Z is?
bmk#1476: I want to get done *before* the singularity
Daj#7482: I finished the audio book in about a month
bmk#1476: Huh
Daj#7482: And reread it in a similar time frame to about 60% in writing
bmk#1476: The reason I want to do a book club is because it's hard to remember stuff if you don't discuss it with others
bmk#1476: And also because social pressure is helpful
Daj#7482: Yea honestly this is a good idea we should have thought about this earlier
Daj#7482: Let's do it
Daj#7482: I'm sure other people will be interested as well
bmk#1476: We should also do some other related books in parallel too |
bmk#1476: I've always wanted to read thinking fast and slow
Daj#7482: Good book!
Daj#7482: If I could choose I'd recommend history books
Daj#7482: I've profited immensely from history books
Daj#7482: Currently reading Rise and Fall of the Third Reich
Daj#7482: What a tome
bmk#1476: Eh I personally am more interested in fast tracking the Rationalist stuff
bmk#1476: And then alignment stuff
Daj#7482: Sure
bmk#1476: And I think that relates best to the eleutherai mission anyways
Daj#7482: Of course
Daj#7482: We could be aggressive and say like one R:A-Z book per week
Daj#7482: Then we're done in six weeks
bmk#1476: Hm, if so I'd like more frequent meetings
bmk#1476: I can only read so many pages before stuff starts falling out of my head
Daj#7482: I'm not sure I could commit to more than once per week but we can try
bmk#1476: Ok then maybe we read slower
bmk#1476: Or wait how many pages per sub book
Daj#7482: R:A-Z is an easy read
Daj#7482: But we can be more chill |
Daj#7482: Page is not equal page imo
Daj#7482: Some books I need two weeks for 100 pages, others I can do in an afternoon
Daj#7482: (and infra I read at 1 sentence/week)
bmk#1476: Fair
bmk#1476: So if you think one book per week is doable then let's do it
Daj#7482: I think it's absolutely doable but you need to read every day
Daj#7482: It's aggressive but I've done it before
Daj#7482: Maybe we can pause on some of the larger essays like the generalized anti zombie principle
bmk#1476: And I think we should try to make an Anki deck for it
Daj#7482: I've never used Anki, sounds interesting
bmk#1476: A few cards per chapter to summarize
bmk#1476: It's quite useful
bmk#1476: And we can publish the deck at the end
Daj#7482: My mutant anti-autism brain is great at absorbing unstructured information, not at structuring it lol
bmk#1476: For anyone who might want to use it
Daj#7482: > And we can publish the deck at the end
@bmk love the idea!
bmk#1476: The problem with personal decks is they're too personalized
bmk#1476: So if we create it together it'll be more broadly useful
Daj#7482: I like it, yea |
Daj#7482: This'll be fun! I've been meaning to reread the sequences anyways
bmk#1476: Awesome
Daj#7482: Think we can finish the first book this week? I think I could
bmk#1476: I've already read it once so it'll be fairly easy
Daj#7482: Then how does maybe Sunday sound for book club meetup?
Daj#7482: We could also make a channel for it
bmk#1476: I'll get a github repo set up for the collaborative ankiing
bmk#1476: What should we call the channel
Daj#7482: #book-club?
Daj#7482: #the-black-library
Daj#7482: #infohazard-containment
Daj#7482: #there-is-no-anti-memetics-division
bmk#1476: #book-speedrun
Daj#7482: #rationality-any%
bmk#1476: #words-words-words
bmk#1476: #church-of-the-lord-reverend-thomas-bayes
Louis#0144: https://twitter.com/mark_riedl/status/1308472270562750464?s=20
StellaAthena#3530: https://twitter.com/kevin_scott/status/1308438898553638912?s=20
StellaAthena#3530: Open AI has sold out to Microsoft |
Louis#0144: ya
Louis#0144: scroll up
Louis#0144: 😛
Quill#9732: ClosedAI
bmk#1476: what an original joke
Quill#9732: thank you
bmk#1476: in fact, it literally is pretty original, it's only ever been used one other time in this server https://cdn.discordapp.com/attachments/729741769738158194/758038904434589716/unknown.png
Noa Nabeshima#0290: Oh, something about a book club?
Noa Nabeshima#0290: Anyone interested in Paul Christiano's blog?
Noa Nabeshima#0290: https://ai-alignment.com/approval-maximizing-representations-56ee6a6a1fe6
bmk#1476: we're going to read rationality a-z first, then we're going to read the 3 sequences on AF, then we might deep dive into christiano's stuff
bmk#1476: but we're like, 2-3 months away from getting all that done
Noa Nabeshima#0290: sweet
Noa Nabeshima#0290: Also I'm looking for undergrad senior thesis ideas if any of you have any
Noa Nabeshima#0290: It does need math proofs.
Louis#0144: anyone know a good efficient transformer thats good at abstractive summarization
Louis#0144: need a context window thats about ~3k tokens
bmk#1476: @mistobaan that headline causes me physical pain
bmk#1476: > AI devs created a lean, mean, GPT-3-beating machine that uses 99.9% fewer parameters
bmk#1476: the paper is already of.. questionable quality |
bmk#1476: and then the editorializing
bmk#1476: **AAAAAA**
bmk#1476: is this what gell mann amnesia feels like
Daj#7482: I have to hate that paper because it's from the LMU, my university's sworn enemy
Daj#7482: Haha
Daj#7482: The common joke is if the worst half of TUM's (my uni) students went to the LMU, both would improve
trsohmers#5863: Greetings. I have a few 8xV100 (16GB) systems that I may be able to contribute for GPT-3 replication; Is there any place/person to discuss this with?
bmk#1476: how many is "a few"?
bmk#1476: we've been using TPUs so far but our code also runs on GPUs and there's been a bit of a TPU glut recently
trsohmers#5863: At least one constantly, can spin up probably ~5 or 6 additional ones at different points
bmk#1476: ah
bmk#1476: so the thing is, *technically* we can spin up up to 2048 TPUs at once
bmk#1476: however, that's only if nobody else is using TPUs
asparagui#6391: eliminate the human interference?
bmk#1476: in practice we can never get more than, like 512 TPUs
trsohmers#5863: How much is that costing?
bmk#1476: and sometimes (like about now) we can't even get more than 100
bmk#1476: google is providing tpus for free
trsohmers#5863: Oh, are you just using collab?
asparagui#6391: tfrc |
bmk#1476: no, tfrc
bmk#1476: so yeah the major catch is that there's absolutely no guarantee how many tpus we can reserve
bmk#1476: TPU podcast had a major glut recently where they couldnt create a single tpu
bmk#1476: so having more cores to run experiments on is always a good thing
trsohmers#5863: I may be able to reserve a total of ~80 V100s for a weekend if it is useful
bmk#1476: the next upcoming experiments are mostly ablations for dataset composition using gpt2 sized models
bmk#1476: we're probably going to be running months worth of experiments
bmk#1476: that's excluding the main gpt3 replication run, which will almost certainly be on tpus because theres nowhere else we can get enough compute for it
trsohmers#5863: I'm curious what is the real performance difference you are seeing between a V100 instance and the TPU 32 core slice
bmk#1476: you mean a V100 vs a single TPU core?
bmk#1476: i'm not sure actually
trsohmers#5863: a 8x V100 instance vs a comparable (price wise) TPU instance
bmk#1476: price doesnt really matter for us
asparagui#6391: it's more about having enough memory for the model
trsohmers#5863: sure, but to get an idea of what the effective difference is; my company offers V100 cloud instances at half the price of equivalent AWS and GCP instances, and I'm trying to get roughly what the true performance difference is
bmk#1476: and compute
bmk#1476: the thing for us is that since we're not paying for any of this price doesnt really matter for us and so we've never thought about it
trsohmers#5863: Yea, I would think that something like an 8x RTX 8000 instance would be more useful, as it could actually fit the entire ~350GB of something like GPT3 in a single instance's GPU VRAM
bmk#1476: we could never pay for V100 instances because those arent free and we dont have that kind of cash
bmk#1476: also 8x RTX 8000 can't fit GPT3, you're going to need about 4x that |
bmk#1476: at least
bmk#1476: probably more if you want it done in a reasonable timeframe
trsohmers#5863: 8x RTX 8000s would be 384GB of VRAM; OpenAI's paper lists 353GB of VRAM necessary for the full 175B parameter model
bmk#1476: you need to fit the activations too
Sid#2121: @trsohmers just reiterating we'd be very interested in having a pool of GPUs at our disposal. Is there a way to get in touch with you via email or social media, or is here best? (DM me or @Daj or @bmk )
Louis#0144: brb, bullshit detectors going off https://cdn.discordapp.com/attachments/729741769738158194/758092014125580348/Screen_Shot_2020-09-22_at_6.15.30_PM.png
gwern#1782: (you keep using that word 'glut'. I do not think it means what you think it means.)
gwern#1782: @Louis sounds like salesforce and uber's stuff on controlling GPT
Louis#0144: its actually almost exactly like my advisor's work
Louis#0144: salesforce stuff is a bit different
Louis#0144: thats on controlling bias
Louis#0144: whereas you want to control an LM to reach plot points here
Louis#0144: typically this is done using plan & write
Louis#0144: AFAIK, they literally reimplemented plan & write
Louis#0144: no one in my lab knows these authors and my lab is the biggest in the field
Louis#0144: neither author has ever published NLP work...
3dprint_the_world#6486: that's hardly reason to dismiss someone though...
Sid#2121: > (you keep using that word 'glut'. I do not think it means what you think it means.)
@gwern I've seen bmk use it that way a few times and it has caused me to almost reprogram my internal definition for glut
en3r0#3241: Greetings! |
en3r0#3241: Not sure where to throw this, but the discord link in the EleutherAI Core Document is broken. https://docs.google.com/document/d/1yOnxEMlU57M8YFlQC3XNOvyMVX2EpU5LeIWhEBcwQNk/edit#heading=h.vjo85qqx8r3b
Daj#7482: Thanks! Pinging whoever's in charge ( @StellaAthena ? @bmk ?)
en3r0#3241: That was where I found the project (from a HackerNews comment), so others may have tried and given up.
en3r0#3241: @StellaAthena @bmk also seeing the project board link broken: https://docs.google.com/document/d/1yOnxEMlU57M8YFlQC3XNOvyMVX2EpU5LeIWhEBcwQNk/edit#heading=h.bw1phwz3fhvh
Daj#7482: Yea outreach/PR hasn't really been a focus for us
Daj#7482: Even with basically zero publicity word of mouth seems to spread hah
en3r0#3241: @Daj I would say that is the best way to do it if you can. The people who are truly interested will show up and settle in.
Daj#7482: Mhm there will probably be a surge of interest after we release our code, and a much bigger surge if/when we release our version of GPT3, but I quite like the cozy atmosphere we currently have
en3r0#3241: Oh for sure, it is very exciting to see a community pick up where OpenAI went closed.
en3r0#3241: I have not read all documentation, but I am curious if you have considered any "fail safes" to prevent this loose organization from falling victim to a closed source future?
Daj#7482: What do you mean?
en3r0#3241: Well, what if one day you or the group just decide to stop checking in code? I suppose there is nothing practical that can be done, maybe a legal document?
Daj#7482: I don't think that's a problem or something we want or can guard against
Daj#7482: This is just a hobby project by a bunch of hackers in their free time
Daj#7482: If we lose interest, that's just how things go
Daj#7482: Other people are free to use our code to continue
en3r0#3241: Ya, that makes sense.
en3r0#3241: I suppose that is exactly what is happening here in the OpenAI case.
Daj#7482: OpenAI is a billion dollar organization comprised of some of the brightest minds of the field working full time
Daj#7482: Very different dynamic haha |
en3r0#3241: Haha very true!
en3r0#3241: If you are serious about funding though, I think you may find it. Especially with Microsoft announcement yesterday.
Daj#7482: It's not something we've been particularly keen on pursuing
Daj#7482: Most of us have day jobs, and it would taint the "purity" of this place if we were explicitly soliciting investment, we'd just be another startup
en3r0#3241: Absolutely agree with you there.
en3r0#3241: Well I am very excited to see how this thing evolves and pitching in when I can!
StellaAthena#3530: @Daj @en3r0 The google doc has been updated with the new website URL, a permanent discord invite link, a fix to the discussion of project boards, and some misc edits.
Daj#7482: Thanks!
StellaAthena#3530: The status reports on where each project is are also out of date, because it was written about a month ago. @Sid @bmk
Sid#2121: a little busy to do this over the next few days, moving country! (a.k.a procrastinating from doing the main tasks required for me to move country)
Daj#7482: I could have a crack at that tomorrow
Daj#7482: I'm not sure what the exact status of all the stuff around Neo is, but I have a pretty good feel of the code maturity, which is high
en3r0#3241: @StellaAthena thanks!
Sid#2121: https://tenor.com/view/angry-grandpa-funny-im-old-and-i-need-sleep-gif-13658228
Sid#2121: high code maturity
Daj#7482: Kinda how TPUs feel sometimes
Sid#2121: trying to run code on TPUs is kind of how i imagine it feels to look after a child who's just an absolute jerk
Daj#7482: No because the TPUs eventually do what they're supposed to
Daj#7482: At least until they preempt
StellaAthena#3530: @Daj are we interested in people donating GPUs to help train? Or are we doing it all on TPU? |
Sid#2121: definitely interested but unless they have a lot the training times are likely too high
Daj#7482: I'd be interested in GPUs for side projects other than Neo
Daj#7482: Neo is too big for GPUs unless we have like a few hundred at least
Daj#7482: But could also be useful for testing, or even data processing since GPU boxes usually have big CPUs (even so that's a bit of an abuse haha)
Daj#7482: But yeah, with stuff like GeDi or various smaller model architectures worth testing, if someone wanted to give us access to big GPUs, I'm sure we could find cool uses
StellaAthena#3530: How much in Google Cloud credits do we have? How much do we expect training GPT-Neo to cost?
Daj#7482: We have no GC credits
kindiana#1016: whats GeDi?
Daj#7482: We have access to TPUs through TFRC. We can have up to 2048 TPU cores but only preemptible, meaning only when other people aren't using them. Which means that in practice we rarely get more than 256 at a time (which is still a lot, one TPU core is ~one V100)
Daj#7482: Our plan with Neo is to either beg Google for access to more consistent numbers of TPUs, or if that fails, just let it train extremely slowly
Daj#7482: Estimated training on 256 cores is ~1 year iirc
Daj#7482: GPT3 is _really_ big
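(A rough sanity check of that ~1 year figure; the total-compute number is OpenAI's reported ~3,640 petaflop/s-days for GPT-3, and the per-core throughput and utilization are guesses:)
```python
total_flops = 3.14e23              # ~3640 pf/s-days reported for the GPT-3 training run
cores = 256                        # the number of TPU cores usually obtainable, per above
flops_per_core = 120e12 * 0.25     # ~V100-class peak at an assumed ~25% utilization
seconds = total_flops / (cores * flops_per_core)
print(f"~{seconds / 86400 / 365:.1f} years")   # ~1.3 years, on the order of a year
```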
Daj#7482: > whats GeDi?
@kindiana https://arxiv.org/abs/2009.06367
Daj#7482: a nifty little way to control the output of larger models
kindiana#1016: neat
Daj#7482: Now that we've really started to figure out unsupervised world models, controlling those models is the next big thing
Daj#7482: mark my words
StellaAthena#3530: So if we could use the full 2048 it would take a month or two.
Daj#7482: Yep |
Daj#7482: assuming no loss of efficiency etc
kindiana#1016: has there been any work with controlling generation by just adding vectors to the residual connections going through the network? (i.e. for sentiment from movie reviews, take the mean of the hidden representation of all the positive samples at each layer, do the same for the negative, and you have a vector which lets you control the output tokens if you add it to the residual connections, to control if the next token is more positive or negative)
kindiana#1016: on a model trained with layerdrop regularization, i think the network shouldn't freak out if the hidden representation is fiddled with a little
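(A sketch of that steering idea in PyTorch hook form; all names are hypothetical, it assumes the hooked layer returns a plain tensor rather than a tuple, and whether the network tolerates the nudge is exactly the open question:)
```python
import torch

def steering_vector(hidden_pos, hidden_neg):
    # hidden_*: (batch, seq, d_model) hidden states collected at one layer
    # from positive / negative samples; the difference of means is the
    # candidate "sentiment direction"
    return hidden_pos.mean(dim=(0, 1)) - hidden_neg.mean(dim=(0, 1))

def add_to_residual(layer, v, alpha=1.0):
    # nudge every position's hidden state as it flows through `layer`
    def hook(module, inputs, output):
        return output + alpha * v   # assumes `output` is a single tensor
    return layer.register_forward_hook(hook)
```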
Daj#7482: I wouldn't be familiar with any such work, I'm generally bearish on understanding what the hidden layers are computing, but nostalgebraist's posts may mean I'm mistaken. I also think it might run into one of the problems GeDi tries to fix: If you e.g. take samples from positive movie reviews, you will bias the model both towards positivity and movie-related words
Daj#7482: I like GeDi because it's really nifty, not sure if it'll be useful long term
Daj#7482: Learning from human feedback is the most likely long term trajectory
Daj#7482: (to my horror haha)
kindiana#1016: you can do some vector math, take the mean of the positive movie vector and the negative movie vector and you'll end up with the movie vector, positive vector and negative vector
kindiana#1016: theoretically xP
Daj#7482: That's GeDi
Daj#7482: yea
Daj#7482: except it's on the logits
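(Very loosely, the GeDi-style logit reweighting being discussed, as a heavily simplified sketch; the shapes and the `omega` weight are assumptions, see arXiv:2009.06367 for the actual method:)
```python
import torch
import torch.nn.functional as F

def gedi_step(lm_logits, cc_logits_desired, cc_logits_undesired, omega=30.0):
    # per-token class posterior from the small class-conditional LM, via
    # Bayes rule with a uniform prior over the two control codes
    log_p_pos = F.log_softmax(cc_logits_desired, dim=-1)
    log_p_neg = F.log_softmax(cc_logits_undesired, dim=-1)
    log_posterior = log_p_pos - torch.logaddexp(log_p_pos, log_p_neg)
    # bias the big LM's next-token logits toward the desired class
    return lm_logits + omega * log_posterior
```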
Daj#7482: But I honestly have such a cool idea for a project I'll be doing after Neo is all set: Make a chaotic truly open proto-AGI! Set up a website where people enter whatever they want, the model produces an output, and the user rates whether it is what they wanted or not. Then train a reward model on the feedback. Probably will result in chaos but I think it would be fun as hell
kindiana#1016: you still need to train a cc-lm to get the logits, I suspect the hidden representation is rich enough you can build meaningful vector representations, but the issue is it might be too heavy handed (makes all the words 1% positive instead of making like 10% of the words 10% positive)
StellaAthena#3530: \**looks at Tay\**
kindiana#1016: just don't tell 4chan lol
Daj#7482: Yea I expect a Tay scenario if it was widely publicized
Daj#7482: but even that would be interesting info
Daj#7482: What do people ask for? What kinds of filtering make sense?
Daj#7482: You all know how fond I am of hands-on engineering in this brave new world of ours |
Daj#7482: I think this could be a Eleuther flagship project
Daj#7482: "The internet's communal AGI, train it to do anything!"
Daj#7482: I don't think it'll work ofc, but could also result in a cool dataset of paired commands and outputs if we allow the user to enter what they actually wanted
Daj#7482: Should focus on Neo first, but that's what I'll be pursuing next with my free time. If any web devs wanna help hit me up sometime soon hah
StellaAthena#3530: My next project will be going back to “dumping large piles of mathematics on CS problems and claim I made them easier”
Daj#7482: Haha
Daj#7482: You're not helping the mathematician stereotype
StellaAthena#3530: Helping the stereotype isn’t exactly my goal. Doing cool research is 😛
Daj#7482: Then have mercy on us CS people that are bad at math ;_;
StellaAthena#3530: More like “have mercy on Stella, who has spent a year trying to convince reviewers a paper she wrote is interesting”
Daj#7482: You know my opinions on peer review hah
Daj#7482: Or actually I'm not sure if you do
Daj#7482: Oh yeah right I never clarified the Einstein thing
StellaAthena#3530: I mean, I know you think it’s worthless virtue signaling and correlates poorly with actually interesting work
StellaAthena#3530: I broadly agree, though am not quite as extreme as you. Unfortunately it also correlates *highly* with people reading your papers.
Daj#7482: It's not worthless
Daj#7482: But yes it correlates poorly with actually interesting work
Daj#7482: I mean, the correlates with people reading your paper seems fishy too
Daj#7482: I read 1000x more arxiv than published papers
Daj#7482: And 99.9% of papers have like 1-2 citations max |
StellaAthena#3530: NeurIPS papers published from 2008 to 2014 average ~40 citations
StellaAthena#3530: (Ignoring recent years because of expected future citations)
StellaAthena#3530: https://www.microsoft.com/en-us/research/project/academic/articles/neurips-conference-analytics/
Daj#7482: Yea you're probably right
Daj#7482: I probably read a lot of published stuff that just happens to also be on arxiv
Daj#7482: You're right that it's the 100% correct move from a career perspective
Daj#7482: I should publish stuff in conferences too
StellaAthena#3530: > You're right that it's the 100% correct move from a career perspective
@Daj Yeah. I networked my way into a mostly-research job but it’s at a US government contractor and they don’t really supporting publishing much.
Daj#7482: Yea I'm surprised they let you do anything unsupervised lol
StellaAthena#3530: Like this? This isn’t on company time
StellaAthena#3530: (Okay that’s a lie, but they don’t know I work on this on company time)
Daj#7482: hahaha
Daj#7482: Well this and the stuff you have published or will publish
Daj#7482: I'm lucky that this is actually part of my job description now
StellaAthena#3530: You’re self-employed right?
Daj#7482: Nope, startup
Daj#7482: And I've successfully lobbied my boss to let me do Eleuther-adjacent research
Daj#7482: also I allegedly am a student
Daj#7482: that will graduate sometime maybe |
StellaAthena#3530: Nice!
StellaAthena#3530: What degree are you working towards?
Daj#7482: Just a CS B.Sc.
Daj#7482: nothing exciting
StellaAthena#3530: http://www.artificialintelligencebits.com/self-driving-cars-will-hit-the-indianapolis-motor-speedway-in-a-landmark-a-i-race/
StellaAthena#3530: ^^ Don't let the results of this change your opinion of self-driving cars. This is something you should **expect** computers to be good at and is 99% a PR stunt.
StellaAthena#3530: > As Matt Peak, the managing director of the event’s organizer, Energy Systems Network, explained, the competition will help researchers discover how to create autonomous vehicles that can handle so-called edge cases when driving. Edge cases, or atypical events, can cause deep-learning systems to stumble because they haven’t been trained to take them into account.
>
> For instance, “fast-moving race cars with obstacles coming at them at lightning-quick speeds,” requiring vehicles to “joust and maneuver” through the track, represent a “quintessential edge-case scenario,” Peak said.
TFW your edge case is so atypical that it's literally impossible for most cars on the market. You need to maintain an *average speed* of 120 mph for 25 minutes to even qualify for the prize. The winning entrant will likely push closer to 200.
chilli#5665: Lol
chilli#5665: A lot of people I recognize from the ML subreddit here
Daj#7482: I have no idea where people are coming from honestly, but it seems we have a bit of a word of mouth thing going on
chilli#5665: Concretely, this is the comment I'm from: https://www.reddit.com/r/MachineLearning/comments/ixs88q/n_microsoft_teams_up_with_openai_to_exclusively/g6bbhkj
Daj#7482: Oh so that's why Stella asked me those questions haha
Daj#7482: Guess I'm the "compute expert" lol
StellaAthena#3530: Yeah lol
Daj#7482: Im surprised how many people have joined with the minimal amount of outreach we do. I guess there's a real appetite for GPT3
bmk#1476: im surprised that appetite hasnt died down yet
bmk#1476: i thought everyone would have forgotten about gpt3 by now |
StellaAthena#3530: (For those wondering, we do literally zero systematic outreach)
Daj#7482: You know I'm bullish on GPT3 and related tech hah
StellaAthena#3530: > i thought everyone would have forgotten about gpt3 by now
@bmk have you met ML people? 😛
bmk#1476: if the month on the arxiv id is more than one month in the past, it's old news
Daj#7482: The NLP world simps for OpenAI and HF though
Daj#7482: Also, GPT3 has prestige value
Daj#7482: It's "forbidden"
StellaAthena#3530: There have been about four dozen posts about GPT-3 on r/ML in the past two months and ten of them have 50+ comments.
bmk#1476: usually the hype dies with the news cycles
bmk#1476: people had all but forgotten gpt2 by the time they released 1.5B
Daj#7482: I think people are feeling what I'm feeling
Daj#7482: GPT3 is a new paradigm
Daj#7482: Not just a new model
Daj#7482: GPT2 is to Schmidhuber as GPT3 is to Hinton
StellaAthena#3530: It’s literally not. It’s not even a new model. It might be a new world of performance, but it’s not a new design paradigm
Daj#7482: It is though
Daj#7482: Prompt design as programming
StellaAthena#3530: > GPT2 is to Schmidhuber as GPT3 is to Hinton
@Daj “X is the Schmidhuber of Y” is my new favorite way to describe things. |
Daj#7482: Hahaha
bmk#1476: something something a difference in degree so big that it's a difference in kind
Daj#7482: GPT3 is software 3.0
StellaAthena#3530: > Prompt design as programming
@Daj this wasn’t invented by GPT-3. People were talking about it for years before GPT-3. It was made practical by GPT-3.
Daj#7482: Software 1: Program with code
Software 2: Train models on data
Software 3: Elicit the behavior you want from strong pretrained models
Daj#7482: > @Daj this wasn’t invented by GPT-3. People were talking about it for years before GPT-3. It was made practical by GPT-3.
@StellaAthena that's why I say it's not the Schmidhuber lol
PlumpTheBrave#4056: yooooo Stella! I had no idea you were involved with this, that is awesome
Daj#7482: Schmidhuber invented AGI in 1992
Daj#7482: It just wasn't practical
StellaAthena#3530: Heya @PlumpTheBrave! Welcome 🙂
StellaAthena#3530: This is what I do with my free time when I need a break from the side research I usually ignore my job to do.
StellaAthena#3530: (JK, but mostly the “when need a break from my side research” part not the “ignore my job to do side research” part)
PlumpTheBrave#4056: I'm the same way, I actually was going to reach out to you about this open, distributed meta learning research group I launched with some colleagues and advisors
StellaAthena#3530: Sick
PlumpTheBrave#4056: Open research is blowing up man, we work with OpenMined and also do a bunch of privacy focused research with them.
StellaAthena#3530: GT people or Harvard people? |
PlumpTheBrave#4056: Plus open research means better feedback loops and convergence to impact (i hope lol)
PlumpTheBrave#4056: Both! Dr. Joyner also has been involved (to a very small degree lol)
PlumpTheBrave#4056: Let me add you, I can dm you about it
StellaAthena#3530: Also, if you (or anyone else) is interested in AI privacy and security research I’m going to shamelessly plug the AI Village. It’s the official AI for Security hub of DEF CON, and we have a year-round discord with journal clubs, research discussion, and more.
**Disclaimer:** I am an officer of AIV but I was not paid for this message.
https://aivillage.org
https://discord.gg/cNvMeyB
PlumpTheBrave#4056: heck yes, I'll join 🙂
lychrel#5628: > yooooo Stella! I had no idea you were involved with this, that is awesome
@PlumpTheBrave we meet again lol
PlumpTheBrave#4056: Jack this is such a small world lol, Stella and I are labmates from GT's Lucylabs and I had absolutely no idea about this whole Eleuther project
PlumpTheBrave#4056: so freakin cool
lychrel#5628: Oh that's dope!!
StellaAthena#3530: Did you find us from reddit @PlumpTheBrave
PlumpTheBrave#4056: Jack and I are part of a community called GenZMafia, and someone there posted about Eleuther
PlumpTheBrave#4056: I'm a little busy today/tomorrow, but I think I'll be able to catch up to all the progress you all have made this week and figure out how to contribute meaningfully
Commutative Conjecture#6969: @Daj
> Prompt design as programming
I think the craziness of this is massively underrated |
"You have to figure out how to convince the AI to give you the answer in a human-like manner and the AI is an intelligence that you can not understand" would have been deemed unrealistic SF 5 years ago
Daj#7482: @PlumpTheBrave That'd be awesome! Best to just be shameless and ask around rather than relying on our sometimes (despite Stella's best efforts :) )out of date documentation, the regulars will be happy to get you on board
Daj#7482: > @Daj
> I think the craziness of this is massively underrated
> "You have to figure out how to convince the AI to give you the answer in a human-like manner and the AI is an intelligence that you can not understand" would have been deemed unrealistic SF 5 years ago
@Commutative Conjecture Strong agree, I had a whole talk where I embarrassed myself publicly to get across how big of a deal this is haha
StellaAthena#3530: #gpt-neox-devs is for the main model
#the-pile is where we are building the best open source curated NLP dataset in existence (no joke, 500 GiB of high quality processed text and counting)
#lm-thunderdome is where we are developing the evaluation suite to test our model
#the-rad-lab is where I am doing some **super cool** research on tracking when other people deploy your model, with ideas about using it to tag open source projects like our own.
StellaAthena#3530: Also, feel free to query me about what’s going on. I am group’s self-appointed Queen of Documentation
gwern#1782: _is glad posterity is vindicating him on gpt-3 and 'prompt programming'. now let's see if 'scaling hypothesis' and 'blessings of scale' can catch on, memetically..._
Aran Komatsuzaki#5714: I'm King of Doing Nothing.
Sid#2121: > I'm King of Doing Nothing.
@Aran Komatsuzaki this is how i interpret your role in the group https://cdn.discordapp.com/attachments/729741769738158194/758447548901228552/Screenshot_2020-09-23_at_23.59.00.png
Aran Komatsuzaki#5714: I know that you're secretly King of Memes.
Sid#2121: I think that's @bmk 's role haha
Sid#2121: I am king of reverting commits 👑
cfoster0#4356: Duke of Dubious Datasets @me
Adam Fisher#5266: Well, I just fell through a trap door in my reality... and here I am in a real "AI Dungeon"! 😆 |
bmk#1476: welcome to the dungeon of ai
bmk#1476: we only do weird stuff here
Adam Fisher#5266: I'm so down. Chain me to the floor! 🤪
kindiana#1016: https://ai.googleblog.com/2020/09/advancing-nlp-with-efficient-projection.html
genai (Immortal Discoveries)#0601: new paper https://arxiv.org/pdf/2009.07118.pdf
genai (Immortal Discoveries)#0601: from:
https://thenextweb.com/neural/2020/09/21/ai-devs-created-a-lean-mean-gpt-3-beating-machine-that-uses-99-9-fewer-parameters/
genai (Immortal Discoveries)#0601: LOL "pQRNN"
genai (Immortal Discoveries)#0601: "Today we describe a new extension to the model, called pQRNN, which advances the state of the art for NLP performance with a minimal model size. The novelty of pQRNN is in how it combines a simple projection operation with a quasi-RNN encoder for fast, parallel processing. We show that the pQRNN model is able to achieve BERT-level performance on a text classification task with orders of magnitude fewer number of parameters."
companioncube#0123: Does anyone know about other groups trying to train their own open very large GPT-3 / T5 models?
bfredl#1945: @companioncube I'm quite interested in that question. but also more broadly GPT-3-scale models but for other languages than English.
bfredl#1945: for instance, I know a research team at the Swedish Royal Library are at least in the phase of collecting a dataset for Swedish on the same order of magnitude :]
bmk#1476: ~~there exist other languages?~~
bmk#1476: In all seriousness though, do you know if they plan on making that dataset public?
bmk#1476: It would be really interesting to add that to pile v2
companioncube#0123: GPT-3 can already read and understand most languages.
bmk#1476: Not nearly as well as English though
companioncube#0123: So as far as folks know, this is the only public effort to train an open source GPT-3-scale model?
en3r0#3241: Reading through this article right now and finding it very interesting: https://blog.marketmuse.com/marketmuse-first-draft-vs-gpt-3/
Ken#8338: Introducing Dynabench: Rethinking the way we benchmark AI https://ai.facebook.com/blog/dynabench-rethinking-ai-benchmarking |
StellaAthena#3530: > for instance, I know a research team at the Swedish Royal Library are at least in the phase of collecting a dataset for Swedish on the same order of magnitude :]
@bfredl do you know where they get their data from? Maybe I’m biased by speaking only English, but finding such data online is quite hard.
cfoster0#4356: @en3r0 This article can't be serious, can it?
en3r0#3241: @cfoster0 after finishing it, it definitely is markety. I would love to see more real world examples from it. I believe it is probably more topically complete since it is probably analyzing the top results from Google, but can it write it better? I'm not so sure.
StellaAthena#3530: Wow, it’s almost like generalizing from cherry picked examples is misleading or something.
en3r0#3241: mhmm
bfredl#1945: @StellaAthena I believe they rely on a mixture of online data and digitized Swedish-language books and newspapers, over quite a long time range
bfredl#1945: like, they don't want a model that just reflects online swedish from 2010-ish to current day, but something that captures variation over a longer time
bfredl#1945: (IIRC, this was from a conference two weeks ago I very passively listened to while doing other stuff : )
bfredl#1945: But surely, GPT-3 already "does" quite a bit of Swedish. With some prompting I made it translate a Swedish question to English, answer the English question, and then translate the answer back to Swedish (with 80-ish accuracy, maybe)
StellaAthena#3530: @bfredl do you happen to have any idea where such digitized books and newspapers can be accessed?
StellaAthena#3530: I would love to access them
bfredl#1945: ohh I can try to look it up again
bfredl#1945: I should have links to the conf, which should have links to the research groups
bfredl#1945: https://github.com/Kungbib/kblab not sure how much is in there yet
bfredl#1945: (the main contributor marma was also the speaker at the conf, I suppose if you are curious the simplest way is to bother him directly)
StellaAthena#3530: That’s awesome @bfredl! Thanks 🙂 I’m going to mention it in #the-pile
bmk#1476: We could use all the data we can get
bfredl#1945: nice, I gotta try the swedish bert like thing they already have, soon..
bfredl#1945: Though the dream, if I were dreaming dreams, would be a model that brings multiple languages upon equal footing, i.e. attempt to map the "continuum" of all the indo-european languages (and beyond..). From my own perspective, maybe start with the germanic ones and then expand.
bfredl#1945: GPT-3 already goes ways towards that, but the strong English bias is noticeable..
bmk#1476: Our open gpt3 will be strongly english biased too but we plan on going even bigger and building a much more balanced model
bmk#1476: Also, you speak German?
bfredl#1945: I suspect the slight overrepresentation of Swedish on reddit might have increased the utility of GPT-2 and GPT-3 for me
bfredl#1945: a little german, but Swedish is my language
bmk#1476: Ah
bmk#1476: How similar is swedish to german?
bfredl#1945: the scandinavian languages are often called "north germanic"
bfredl#1945: though standard german (hochdeutsch) is derived from southern variants that have diverged quite a lot
bmk#1476: Yeah I only know Hochdeutsch
bfredl#1945: some still living low german variants are quite close though
bfredl#1945: but I have no idea how hard it would be to get a proper Plattdüütsch dataset 😛
bmk#1476: Dutch is low germanic right?
bfredl#1945: yea
bfredl#1945: even from knowing "no" Dutch directly, I can still understand some stuff
bmk#1476: same haha
bfredl#1945: from both scandinavian and german and english similarities
bmk#1476: swedish looks completely unintelligible to me though
bfredl#1945: Even if I do not work with it directly, I am quite interested in the works of preserving endangered dialects / minority lang variants
bfredl#1945: I wonder what role ML/AI could play in that, if any. |
bmk#1476: then you're in good company here
StellaAthena#3530: > Even if I do not work with it directly, I am quite interested in the works of preserving endangered dialects / minority lang variants
@bfredl this is dope
bmk#1476: one major advantage of *multilingual* gpt3 for minority languages is that it could transfer a lot of knowledge from higher resource languages
StellaAthena#3530: My alma mater is big in reconstructing dead languages
bmk#1476: so with pile v2 we could build a true omniglot, or as close as you can get anyways
bfredl#1945: I mean, the ppl living on the border between sweden and norway in some places like Jämtland would speak very similarly on each side, but have a different "Dachsprache"
bfredl#1945: jämtländska was considered to be norwegian until the union dissolution in 1905, IIRC
bfredl#1945: also, the danish island of Bornholm is believed to have a few speakers of the "true" old Skånska
bmk#1476: (the ä is pronounced the same as in german, right?)
bfredl#1945: (not the modern skånska, which is swedish with a few remaining funny words and diphthongs)
bfredl#1945: yea ä is the same
bfredl#1945: I would love to see an AI "interpolate" the missing, now dead links.
bmk#1476: i hypothesize that adding more languages will help the lm get better at modelling the world
bfredl#1945: ^ yea that as well
bmk#1476: since knowing how to model the world and then projecting that into each language is easier than learning separate worlds
bmk#1476: we will see, with 1T
Alm#9130: yeah, would be very interesting. Planning on doing some smaller experiments with T5 on English, Danish, Norwegian and Swedish. I'm not sure if BPE/wordpiece/sentencepiece/unigram is a good fit to "interpolate"; I think something needs to happen at the embedding/tokenization level
bfredl#1945: hmm, I don't worry too much about BPE itself. I expect the heavy lifting to be in the dense/attention layers themselves
bfredl#1945: "bad" BPE mainly means you lose total text lenght, is my very limited experience |
Alm#9130: yeah but if the wrong combinations are there
bfredl#1945: (i.e. GPT-3's effective context for Swedish text becomes shorter, as the characters are more expensive to encode)
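(A quick way to see this token-cost effect is to count GPT-2 BPE tokens per character for English vs. Swedish text — GPT-3 uses the same BPE vocabulary. A minimal sketch, assuming the Hugging Face `transformers` package is available; the sample sentences are illustrative.)
```python
# Sketch: GPT-2 BPE token cost for English vs. Swedish text.
# Assumes `transformers` is installed; the sentences are illustrative examples.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

samples = {
    "english": "The king opened the library early in the morning.",
    "swedish": "Kungen öppnade biblioteket tidigt på morgonen.",
}

for lang, text in samples.items():
    n_tokens = len(tokenizer.encode(text))
    # More tokens per character means less effective context for that language.
    print(f"{lang}: {n_tokens} tokens / {len(text)} chars "
          f"= {n_tokens / len(text):.2f} tokens per char")
```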
bmk#1476: We plan on redoing BPE for multilingual
bmk#1476: But sticking with OA BPE for monolingual
Alm#9130: well, I get a better score with a larger vocab in Swedish on just words
Alm#9130: on wikipedia synonyms
bfredl#1945: I have only finetuned them, so never tried to change the BPE layer
Alm#9130: when testing on how well it picks up/ stores words
Alm#9130: compound words etc
Alm#9130: done most of my tests with electra though
Alm#9130: need more and better data to test on
Alm#9130: if vowel-shifts etc. are hidden in the characters behind the token, does the network figure out all that stuff and store it in the embedding and other layers?
Alm#9130: do you know any papers looking into that stuff? It would be interesting if it just picked up the different branches and merges from indo-european and further back. don't think the current methods would be the best way?
Alm#9130: Rhyming is some sort of hint that it has actually picked up on what's in the token, but I'm not sure if GPT-3 is good at that?
Louis#0144: https://twitter.com/lcastricato/status/1309289214081630215?s=21
Louis#0144: Vomit
Louis#0144: @shawwn @StellaAthena
Louis#0144: Shawn doesn’t think it’s unethical
Louis#0144: Idk how to explain it better than I have already
bmk#1476: Can you explain to me |
Louis#0144: The issue is masked photo of someone goes in, their address and credit card and name information comes out
Louis#0144: This isn’t the model right now
Louis#0144: But by implementing their model, they’re collecting data for this
Louis#0144: Think of the implications in HK or at BLM protests
Louis#0144: It’s an awful model
Louis#0144: It shouldn’t exist
bmk#1476: > The issue is masked photo of someone goes in, their address and credit card and name information comes out
@Louis hold up where does the address and credit card info stuff come in
Louis#0144: It’s Uber
Louis#0144: They have it
bmk#1476: Ok
bmk#1476: Explain this to me as if I was someone in the upper half of the political compass
Louis#0144: LOL
Louis#0144: I mean what’s not making sense
bmk#1476: If I understand correctly all it does is detect if you're wearing a mask
bmk#1476: It doesn't actually identify masked people
Louis#0144: Yes but we don’t know if the data is collected
Louis#0144: They didn’t clarify that
Louis#0144: That’s the real concern
bmk#1476: It identifies unmasked people and asks them to mask themselves |
Louis#0144: We just don’t know
bmk#1476: I mean if the adversary is the govt
bmk#1476: And they want to do it
bmk#1476: Nobody's able to stop them anyways
Louis#0144: Mhm....
bmk#1476: If anything, at least if Uber does this everyone knows they're being watched
bmk#1476: (this has been an exercise in devil's advocacy)
Louis#0144: also while the discussion with @shawwn has its merits
Louis#0144: I dont know if its an unintended side effect
Louis#0144: It might be their intentions
Louis#0144: we just dont know
Louis#0144: Im highly skeptical that this possibility was not considered
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/758861821255352340/Screen_Shot_2020-09-24_at_9.25.51_PM.png
bmk#1476: For the record, I lean libertarian on the political compass, but covid has seriously challenged many of my assumptions
StellaAthena#3530: How so @bmk?
bmk#1476: i mean, masks for example
bmk#1476: forcing people to wear masks is not very freedom
bmk#1476: forcing people to stay at home is not very freedom
bmk#1476: but it's in the best interests of everyone
bmk#1476: so where do you strike that balance? i have no clue |
StellaAthena#3530: See, this is why I don't get libertarianism.
Louis#0144: man the amount of techbros defending uber
StellaAthena#3530: My response is "... duh?"
Louis#0144: is actually absurd
bmk#1476: > See, this is why I don't get libertarianism.
is this in response to "lean libertarian"?
bmk#1476: if so, i dont actually mean like Capital L Libertarianism
bmk#1476: i just mean downwards on the political compass
StellaAthena#3530: More in response to what you said about masks
StellaAthena#3530: ah
StellaAthena#3530: You're european right?
StellaAthena#3530: Germany?
bmk#1476: no i'm canadian
StellaAthena#3530: No that's Daj
StellaAthena#3530: ah
StellaAthena#3530: My main exposure to libertarianism is small government fetishists who think it's a moral abomination that the government has things like drivers licensing or public libraries.
bmk#1476: that's not what i'm talking about
bmk#1476: that's pretty far down the line
StellaAthena#3530: Americans are nutso
Louis#0144: I mean |
Louis#0144: canada has it bad too...
Louis#0144: I doubt bmk likes trudeau at all
Louis#0144: his management of the virus has been pretty bad
Louis#0144: like even compared to the US
Louis#0144: its pretty bad...
Louis#0144: canadian populations arent dense tho
Louis#0144: so it isnt as obvious
StellaAthena#3530: https://twitter.com/charliekirk11/status/1308866450233479175?s=20
StellaAthena#3530: Okay but are these people in Canada
Louis#0144: no
Louis#0144: Canadian extremism doesnt really exist tbh
Louis#0144: not to the degree it does in the states
Louis#0144: Canada isnt a very patriotic country imo
bmk#1476: in my mind here's the spectrum:
- we should keep govt in check through democracy, and whatever degree of control that democratically elected govt exerts we should accept it as long as it doesnt interfere with the election process itself
- govt is usually slow and inefficient, so for things where there's no good reason for the govt to get involved in, it's best that it doesn't get involved. govt involvement is still very important though most of the time
- the govt should only get involved to uphold basic human rights (disagreement on which rights are basic enough is common, ofc) and under the condition that those are upheld, if it's possible to make govt smaller then it's better (insert js mill quote here)
- the govt should only exist symbolically but wield little real power over the states/whatever
- there should be no govt at all |
bmk#1476: when i say lean, i'm somewhere between the first two points
bmk#1476: and i still really dont know
bmk#1476: Capital L Libertarianism is somewhere around the 3rd point
bmk#1476: anarchy is 4/5
bmk#1476: > I doubt bmk likes trudeau at all
@Louis the general opinion of trudeau that i get is that he's very good at promising things and very poor at delivering
bmk#1476: of course, the people i know arent a representative sample of canadian citizens
bmk#1476: ~~ahem still waiting on election reform~~
bmk#1476: it was a big part of his platform and he gave up on it
bmk#1476: at least he got weed legalized so thats one thing
Louis#0144: That was his ENTIRE platform
Louis#0144: He’s still incredibly racist towards natives
Louis#0144: Which is beyond infuriating for me
bmk#1476: *insert trudeau blackface here*
Louis#0144: Mhm
bmk#1476: tbh legalizing weed doesnt mean much to me personally
bmk#1476: still salty about election reform tho
bmk#1476: fptp is a cancer
StellaAthena#3530: How do national elections work in Canada
Louis#0144: You tie your vote to a goose |
Louis#0144: The goose flies to Ottawa
Louis#0144: Giant tournament
Louis#0144: The last goose standing wins
Louis#0144: That person is then elected
bmk#1476: lmao
bmk#1476: so disclaimer that im a bit rusty on this
bmk#1476: (i know more about how the us system works than canada just because of the sheer volume of us content i see- inevitable on the internet, especially months before the presidential election)
bmk#1476: the tldr is you vote in your riding for an mp, and whichever party has the plurality gets to pick the PM (typically party leader)
bmk#1476: so its not like the US where the presidential election is a seperate thing
bmk#1476: since we have more than two parties, we sometimes (and indeed, currently) have a minority govt
bmk#1476: which means that both the govt and opposition need to try and win over smaller parties
StellaAthena#3530: Oh so it’s a standard parliamentary system?
StellaAthena#3530: Except you’re voting for the MP directly instead of voting for a party list like in the UK.
bmk#1476: im pretty sure its the same thing because only one candidate from each party can run in each riding anyways
StellaAthena#3530: Ah
bmk#1476: Ironically, the Canadian senate is the exact opposite of the US senate in that while in the US the senate serves the function of a veto for whichever party controls it, the Canadian senate is basically just a rubber stamp and rarely rejects anything
bmk#1476: Oh, also the representative of the monarch, the governor general, has to approve everything
bmk#1476: It's fun to think about the fact that *in theory*, the queen has full power over the entire government and can dissolve parliament and become a tyrant
StellaAthena#3530: What's a normal test accuracy for resnet on CIFAR-10
bmk#1476: apparently like 90-95%
bmk#1476: https://github.com/kuangliu/pytorch-cifar
StellaAthena#3530: hmmmmm
bmk#1476: which seems consistent with paperswithcode https://paperswithcode.com/sota/image-classification-on-cifar-10
bmk#1476: unrelated but: X - doubt https://cdn.discordapp.com/attachments/729741769738158194/758890491344322590/unknown.png
StellaAthena#3530: @researcher2 Hey are you around
StellaAthena#3530: So this is like catastrophically bad @bmk
StellaAthena#3530: INFO - 09/24/20 23:15:24 - 0:00:11 - Average Train Loss: 2.625187516969163e-05
INFO - 09/24/20 23:15:24 - 0:00:11 - Top-1 Train Accuracy: 99.546
INFO - 09/24/20 23:15:24 - 0:00:11 - Top-1 Test Accuracy: 74.07
bmk#1476: yeah that's pretty bad
kindiana#1016: looks pretty overfit
bmk#1476: It would be the second worst model on that pwc leaderboard
StellaAthena#3530: I'm firing it from scratch and seeing how it does on another run
StellaAthena#3530: This is with benchmarked stuff, maybe something's wonky
StellaAthena#3530: But if this isn't wonky I know why our radioactive replication isn't working
StellaAthena#3530: It's because the model doesn't work 😛
bmk#1476: It's hard to even get the model that bad
bmk#1476: So there must be a major issue somewhere
StellaAthena#3530: hmmm
StellaAthena#3530: After 30 epochs I am getting 81% |
kindiana#1016: i mean, if you don't do any data augmentation or regularization, its not surprising
bmk#1476: It still shouldn't be that bad, no?
StellaAthena#3530: I believe we are doing random crop and flipping
bmk#1476: What lr? That's responsible for about 50% of problems
kindiana#1016: the train loss is like 2e-5, so its basically memorized the train set
StellaAthena#3530: lr?
bmk#1476: Learning rate
bmk#1476: Just do some good ol grad student descent
bmk#1476: To find the best lr
StellaAthena#3530: I thought @researcher2 did that
bmk#1476: oh hmm
StellaAthena#3530: The file for it is on github: https://github.com/EleutherAI/radioactive-lab/blob/master/resnet18_on_cifar10.py
StellaAthena#3530: Looks like lr=0.01 and momentum=0.9
bmk#1476: 0.01 sounds way too high
bmk#1476: try going to 0.001
bmk#1476: then 0.0001
bmk#1476: that's the typical lr size ime
bmk#1476: if you're training really crazy stuff you might need to go even lower
StellaAthena#3530: Actually it's Adam with lr = 0.1
bmk#1476: oh shit |
bmk#1476: that's *way* too high
StellaAthena#3530: lol
bmk#1476: all of ML is voodoo like this, you just know things are off from hours of bashing your head into things
StellaAthena#3530: What's a good lr
StellaAthena#3530: 0.001?
bmk#1476: eh 1e-4 is a good starting point
bmk#1476: also my instincts are mostly calibrated for big models or systems with weird stuff going on so my lr estimates are usually low
Sid#2121: > 0.001?
@StellaAthena they don’t quote an LR in the paper @StellaAthena ?
Sid#2121: ml research is basically alchemy lmao
ntakouris#5483: CIFAR-10 is way easy for resnets!
ntakouris#5483: try cifar100
researcher2#9294: marking loss https://cdn.discordapp.com/attachments/729741769738158194/759014988039323648/unknown.png
researcher2#9294: learning_rates = [1, 0.1, 0.01, 0.001, 0.0001]. Assume faster convergence -> higher learning rate
researcher2#9294: This is backprop directly to the images (like GANs), so no test loss applies.
researcher2#9294: For training the resnet classifier:
researcher2#9294: The resnet18cifar10 example runs a whole lot of different stuff, but the adamw test used default settings (lr=0.001) with transforms.RandomCrop(32, padding=4) followed by transforms.RandomHorizontalFlip(), producing around 84% accuracy. In the radioactive experiments the marking network and target networks also used default adamw settings.
researcher2#9294: The 74% figure came from turning off augmentations when we tried to get table1 to work with the marked data.
researcher2#9294: I would still like to get the basic example up to 93ish if possible.
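(For reference, a minimal sketch of the setup described above: stock torchvision ResNet-18, default AdamW at lr=1e-3, random crop + horizontal flip. The batch size and epoch count here are illustrative, not values from the discussion. One likely reason the stock model tops out below 93%: torchvision's ResNet-18 uses an ImageNet-style 7×7 stride-2 stem, while CIFAR-specific variants usually swap in a 3×3 stride-1 first conv.)
```python
# Sketch of the CIFAR-10 ResNet-18 setup described above:
# default AdamW (lr=1e-3) plus RandomCrop / RandomHorizontalFlip augmentation.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

NORMALIZE_CIFAR = transforms.Normalize(mean=[0.4914, 0.4822, 0.4465],
                                       std=[0.2023, 0.1994, 0.2010])
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    NORMALIZE_CIFAR,
])

train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=train_transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128,
                                           shuffle=True, num_workers=2)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.resnet18(num_classes=10).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)  # default AdamW lr
criterion = nn.CrossEntropyLoss()

for epoch in range(30):  # epoch count is illustrative
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```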
researcher2#9294: Much more discussion in the-rad-lab if interested |
researcher2#9294: > try cifar100
@ntakouris That could be a good next step, thanks.
ntakouris#5483: I would also suggest trying some colour augmentations and preprocessing the dataset with per-channel mean normalization before model input.
ntakouris#5483: CIFAR-10 and CIFAR-100 are such tiny datasets that you can even get a top-1 accuracy of 90% using like 4-bit weights and activations.
ntakouris#5483: https://arxiv.org/pdf/2004.05284.pdf
researcher2#9294: Ok thanks
ntakouris#5483: Also, is your model big enough? Trying some bigger models till you find out a `double descent` is happening could help with the accuracy a bit
researcher2#9294: With cifar10 it sounds like resnet18 should be, but I can definitely try some of the larger variants.
researcher2#9294: Is this what you mean(haha) by per channel mean?
researcher2#9294: NORMALIZE_CIFAR = transforms.Normalize(mean=[0.4914, 0.4822, 0.4465], std=[0.2023, 0.1994, 0.2010])
ntakouris#5483: Yes.
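(Those constants can also be recomputed from the training split directly; a small sketch, assuming torchvision. Note the widely copied std values above were pooled slightly differently, so an all-pixels std comes out a bit higher.)
```python
# Sketch: compute per-channel mean/std over the CIFAR-10 training set.
import torch
import torchvision
import torchvision.transforms as transforms

train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True,
                                         transform=transforms.ToTensor())
# Stack everything into one (N, 3, 32, 32) tensor; CIFAR-10 is small enough.
data = torch.stack([img for img, _ in train_set])
pixels = data.permute(1, 0, 2, 3).reshape(3, -1)  # (channels, all pixels)
print("mean:", pixels.mean(dim=1))  # ≈ [0.4914, 0.4822, 0.4465]
print("std: ", pixels.std(dim=1))   # ≈ [0.247, 0.243, 0.261] over all pixels
```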
ntakouris#5483: resnet18 is way too much for cifar10 🙂 you can try quantizing and dropping layers on each block
researcher2#9294: Ok, GPU cranking now with transforms.ColorJitter added. Back in 20 minutes ⏲️
researcher2#9294: Then I'll try different model size or dropping. No experience with quantizing, will have to read the paper.
kindiana#1016: i don't think you have to worry about quantizing, unless you want to deploy your model to a phone or something
researcher2#9294: Yeah I think that was its goal, but it sounds like it effectively reduces model size?
researcher2#9294: Or "model intelligence"
kindiana#1016: yes, reduces model size and increases inference speed
kindiana#1016: new ampere gpus support int4 math lol
researcher2#9294: Is there a quick explanation why you'd want smaller weight bit size but then add more weights? |
kindiana#1016: it gives better model accuracy at iso size (i.e. for the same total bit budget)
kindiana#1016: i.e. the most significant bits of weights are more important, so with more quantized weights, you get more MSBs
researcher2#9294: Ok, so lots of wasted space, basically you want this for phones.
researcher2#9294: Or to cram more in memory for large models?
kindiana#1016: wherever the weights are downloaded and/or you want really high performance inference
kindiana#1016: training doesn't work great below 16 bits
researcher2#9294: Ok. Are we using fp16 for GPTNeo?
Sid#2121: Haven’t been so far but will do in future runs
ntakouris#5483: resnet18 is too big for a small task like cifar10, even cifar100.
quantization can take advantage of sparsity and reduce the capacity of the model, while keeping the multiple non-linearities of the layers, ultimately leading to a properly sized model for the task at hand.
It's not only about inference speed
StellaAthena#3530: The paper we are trying to replicate uses ImageNet. I guess we should switch to that?
kindiana#1016: I feel like quantization is an inefficient method of reducing capacity no? training with quantization is slower, and model capacity can be reduced without removing nonlinearities using split depthwise and pointwise filters (like EfficientNet)
ntakouris#5483: I am not aware of what you are trying to replicate, sorry.
training with quantization is actually faster if not done in fp-32 emulation mode. Training with learned quantizations is (up to ~4x) slower.
Sure, there are ways of reducing model capacity, like reducing layers and filters. Capacity is not reduced by depthwise separable convolutions. Capacity is *increased* by adding extra pointwise filters. |
EfficientNet defines capacity with certain criteria, in order to find out proper hyperparameter values for size.
If you are building on tensorflow and need a huge dataset like imagenet, try using the GCP trial to get a VM that packages everything to sharded TFRecords and then use TPUs on colab for super-fast and free training. Storage cost is < 1€ per month
asparagui#6391: i would agree that quantization up front isn't a great strategy
asparagui#6391: but the idea of using specialized ops for greater throughput is interesting
kindiana#1016: @ntakouris do you have a library you like for training with quantization? speedup sounds nice and would be interesting to try
I was taking capacity to mean parameters, and I thought using pointwise -> depthwise 3x3 -> pointwise was a good way to reduce parameters compared to the regular 3x3 convs used by resnet
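(To make the parameter argument concrete, a sketch comparing a regular 3×3 conv against the pointwise → depthwise 3×3 → pointwise stack at the same channel width; the channel count is arbitrary.)
```python
# Sketch: parameter count of a regular 3x3 conv vs. a
# pointwise -> depthwise 3x3 -> pointwise factorization, same in/out channels.
import torch.nn as nn

c = 256  # arbitrary channel width for illustration

regular = nn.Conv2d(c, c, kernel_size=3, padding=1)

separable = nn.Sequential(
    nn.Conv2d(c, c, kernel_size=1),                       # pointwise
    nn.Conv2d(c, c, kernel_size=3, padding=1, groups=c),  # depthwise
    nn.Conv2d(c, c, kernel_size=1),                       # pointwise
)

def n_params(m):
    return sum(p.numel() for p in m.parameters())

print("regular 3x3:", n_params(regular))    # ~590k (c*c*9 + c)
print("separable:  ", n_params(separable))  # ~134k (2*c*c + 9*c + biases)
```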
ntakouris#5483: @asparagui for CNNs, the most extreme speedup you can get is 1-bit quantization. Convolutions become bitcounts and xnor operations. Efficiently done real-time on the CPU
@kindiana all the popular frameworks support some kind of quantization already
asparagui#6391: yes, but most people are quantizing after training, not while
kindiana#1016: pytorch does offer quantization aware training, but I don't think it has any way of doing it that doesn't involve converting to fp somewhere in the process (hence no speedup)
asparagui#6391: well that's the idea of these mixed precison libraries
asparagui#6391: fp when needed, smaller ops when possible
kindiana#1016: as in, the quantized weight matrix is converted to floating point before it's used; I don't think pytorch has GPU kernels for doing integer-to-fp matmuls efficiently
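(For reference, the PyTorch eager-mode flow being described, sketched minimally on a toy model. As noted above, training runs with *fake* quantization in floating point; real int8 kernels only appear after conversion, at inference time.)
```python
# Sketch: PyTorch eager-mode quantization-aware training (QAT).
# Training uses fake quantization (fp under the hood), as noted above;
# the int8 speedup only appears after convert(), at inference time.
import torch
import torch.nn as nn

model = nn.Sequential(
    torch.quantization.QuantStub(),    # marks the float -> int8 boundary
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    torch.quantization.DeQuantStub(),  # marks the int8 -> float boundary
)

model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
model_prepared = torch.quantization.prepare_qat(model.train())

# ... run the normal training loop on model_prepared here ...

model_int8 = torch.quantization.convert(model_prepared.eval())  # real int8 ops
```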
ntakouris#5483: if you're going to quantize a model when training from scratch, it's better to quantize when training (at least mixed precision).
I am a fan of transfer learning by knowledge distillation. Smaller, more efficient, and still accurate models, but more time consuming.
ntakouris#5483: @kindiana this is a problem that most research attempts face. Although, just by using fp16 (or even better, bfloat16 with TPUs), you can get a major speedup. You could also try NVIDIA's AMP
kindiana#1016: yeah amp is cool, although I'm not sure if I'd call fp16 quantization lol, its rare that you lose any model accuracy moving to it
ntakouris#5483: it is quantization. you're dropping half of the bit resolution. bfloat16 is more efficient than fp16 in deep learning because it reallocates the mantissa and exponent bits to support greater dynamic range
kindiana#1016: you don't really lose any model capacity dropping half the bits though, its effectively the same as fp32
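(The usual AMP recipe being referred to, as a minimal sketch: fp16 autocast regions plus a gradient scaler to avoid fp16 underflow. `model`, `optimizer`, `criterion`, and `loader` are assumed to already exist.)
```python
# Sketch: automatic mixed precision (AMP) training in PyTorch.
# `model`, `optimizer`, `criterion`, and `loader` are assumed to exist.
import torch

scaler = torch.cuda.amp.GradScaler()  # rescales grads so fp16 doesn't underflow

for images, labels in loader:
    images, labels = images.cuda(), labels.cuda()
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():   # fp16 where safe, fp32 where needed
        loss = criterion(model(images), labels)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```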
StellaAthena#3530: Let’s take this convo to #the-rad-lab
Arjuna#4172: General question: lets say hypothetically you "stumble" over a real, self-aware AI, what would you do?
Daj#7482: > General question: lets say hypothetically you "stumble" over a real, self-aware AI, what would you do?
@Arjuna Depends on how self-aware/intelligent we're talking and how much control I have over it
Daj#7482: Probably at least some mild amounts of [SCREAMING]
asparagui#6391: > come with me, i will protect you from the humans
Parzival#4180: Hey all!
Parzival#4180: Happy to be here. I just started working at Latitude a few weeks ago. We maintain AI Dungeon. I am really just starting my ML learning journey, though I already have a ton of amazing support. I am quite interested in what you guys are up to here, and am always on the lookout to get more hands-on experience, and to help Latitude continue pushing its technological boundaries. 🙂
Sid#2121: Hey @Parzival ! Welcome to the *real* ai dungeon🙏 ⛓
StellaAthena#3530: Today in “horrible lies told by people using ML”: https://twitter.com/baumard_nicolas/status/1308715606196342784?s=20
cfoster0#4356: https://cdn.discordapp.com/attachments/729741769738158194/759109345882406932/fetchimage.webp
bmk#1476: i have an idea
bmk#1476: what if we render papers as images
bmk#1476: and see the trustworthiness of ML papers over time
bmk#1476: :bigbrain:
StellaAthena#3530: lol
bmk#1476: Seriously we need to do this |
bmk#1476: Or something similarly hilarious
bmk#1476: The paper needs to be as bad as physically possible
StellaAthena#3530: If we replace "GDP" with "yearly mean distance to pluto" I bet nothing about the paper would meaningfully change
bmk#1476: What if we trained a CNN to classify images of proofs as valid or not valid
bmk#1476: :bigbrain:
bmk#1476: And then we use it as justification that our paper is valid by feeding it various augmentations of the paper rendered out until it says that the paper is good
bmk#1476: "We used our system to validate the validity of our paper"
bmk#1476: Wait, what if we took text, rendered it out, and then put that image into an iGPT to classify
Daj#7482: > What if we trained a CNN to classify images of proofs as valid or not valid
@bmk you jest, but this is how mathematicians work
StellaAthena#3530: > If we replace "GDP" with "yearly mean distance to pluto" I bet nothing about the paper would meaningfully change
@StellaAthena the more I think about this, the better an idea I think it is.
A Replication of “Tracking historical changes in trustworthiness using machine learning analyses of facial cues in paintings.”
> In this paper we successfully replicate the results of “Tracking historical changes in trustworthiness using machine learning analyses of facial cues in paintings,” a paper that generated much controversy when it was published last year. We thoroughly examine the methodology and re-perform the analysis as closely to the original paper as possible. Unfortunately, due to technical limitations, we were unable to use the GDP per capita index used in the original paper, but we find that the mean yearly distance to Pluto (MYDP) produces similar results. We conclude with an analysis of using GDP vs using MYDP as a metric and find that MYDP does a better job predicting trustworthiness than GDP.
StellaAthena#3530: @Daj I'm not even joking. Would you like to do this with me?
Daj#7482: Abso-fucking-lutely
Daj#7482: My years of shitposting experience have prepared me for this moment
StellaAthena#3530: So it turns out they posted all their code online which is convenient
StellaAthena#3530: https://osf.io/asmy3/?view_only=61995a283e9f4c55b43c9f31d6bd1e97 |
Daj#7482: :berk:
StellaAthena#3530: They also seem to have mysteriously forgotten to mention that they applied LOESS before doing the analysis?
Daj#7482: Do you know https://www.tylervigen.com/spurious-correlations ?
StellaAthena#3530: Yup!
Daj#7482: We should replicate like a dozen bad papers
StellaAthena#3530: I've been a fan for years
StellaAthena#3530: wow
StellaAthena#3530: A decade?
StellaAthena#3530: Jeez
Daj#7482: And find a spurious correlation for each
StellaAthena#3530: That would be fabulous
Daj#7482: Probably too much work
Daj#7482: But would be fun as hell
Daj#7482: Maybe we can get Taleb to retweet it
Daj#7482: That would be so fucking funny
StellaAthena#3530: I have to run, but I'm serious about wanting to do this. Maybe see if you can download their code and get it running?
Daj#7482: I'll take a look tomorrow
Daj#7482: Timezones yay
Daj#7482: And normal job schedule
bmk#1476: > @Daj I'm not even joking. Would you like to do this with me? |
@StellaAthena count me in
Daj#7482: No more 4am hacking binges feels bad man
bmk#1476: this is fucking glorious
Daj#7482: Agreed
Daj#7482: I've always wanted Eleuther to be the merry pranksters of ML lol
bmk#1476: Ah shoot, I don't know R
Daj#7482: > R
Ah I see where this paper went wrong
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/759134320241147904/Screenshot_2020-09-25-13-28-30-996_org.mozilla.focus.png
bmk#1476: Who even uses R, data people? Pshhh
Louis#0144: https://twitter.com/yet_so_far/status/1309475976376725504?s=21
bmk#1476: Data science is Machine Learning minus the learning
bmk#1476: And minus the machine
bmk#1476: Pile v2 is probably bigger data than 80% of "big data" and 99% of "data science"
bmk#1476: High energy ML ftw
Louis#0144: https://twitter.com/boredbengio/status/1309344275239522305?s=21
thenightocean#6100: Oh no, now they angered the big guy: https://edition.cnn.com/2020/09/27/tech/elon-musk-tesla-bill-gates-microsoft-open-ai/index.html
Sid#2121: quick, someone tell elon about us
Daj#7482: Oh God could you imagine if Elon knew about this
Daj#7482: 99% nothing happens |
Daj#7482: But Elon is nothing if not high variance
bmk#1476: oh god please do not let elon know about the existence of eleuther
bmk#1476: that would have a 90% chance of fucking all our shit up
Louis#0144: downloading a 600GB dataset rn
Louis#0144: a small chunk of common crawl
Louis#0144: Ive been training LMs this last week
bmk#1476: why not just use pile
Louis#0144: because I need a very specific kind of data
bmk#1476: ok
Louis#0144: but I guess I could have used the pile and removed what I didnt need
Louis#0144: 🤷♂️
bmk#1476: what kind of data do uneed
Louis#0144: forums where people are troubleshooting things
Louis#0144: Im working on an abstractive explanation dataset rn
Louis#0144: also an open domain arguing dataset
Louis#0144: LOL
Louis#0144: (they are one together)
Sid#2121: @Louis stackexchange?
Sid#2121: how are you planning to extract troubleshooting things from common crawl
Louis#0144: its a secret |
Louis#0144: 😉
Louis#0144: lol
Louis#0144: wait 8 months
Louis#0144: Ill publish it by then
bmk#1476: wait till pile is done
bmk#1476: then extract from that
bmk#1476: pile is much higher signal than CC which is mostly garbage
Louis#0144: proof of concept rn
Louis#0144: Ill use the pile in a few months
3dprint_the_world#6486: is this anyone here? https://github.com/lolemacs/pytorch-eventprop
3dprint_the_world#6486: seems to me like a slight abuse of terminology to call this 'eventprop' when it's actually just using discretized time dynamics
mefrem#5884: https://blogs.microsoft.com/blog/2020/09/22/microsoft-teams-up-with-openai-to-exclusively-license-gpt-3-language-model/
mefrem#5884: Hmm
Louis#0144: @Daj how does TFRC work btw
Louis#0144: I have a month trail
Louis#0144: trial
Louis#0144: do I get a month of TPU time, no hour limits
Louis#0144: what happens when it runs out
Louis#0144: they dont explain this
Louis#0144: 😠 |
Sid#2121: best to ask this all over in TPU pod but: - month of tpu time, you tend to get allocated a certain amount of non-preemptible smaller pods, and a quota of preemptible larger ones
Louis#0144: so if I need to finetune a massive LM I can do that np?
Sid#2121: no hour limits although in practice the larger ones get pre-empted a lot, and can't stay alive for more than 24hrs
Sid#2121: you might need to be a bit more specific than massive, but yes
Sid#2121: probably
Louis#0144: not GPT3 scale
Louis#0144: like 120GB of VRAM scale
Sid#2121: after your month runs out, write them another email saying what you've done with the pods, and ask for more. Ideally include some writeup somewhere.
Sid#2121: each core only has 16GB of vram, and the model gets duplicated across all the cores unless you're doing model parallelism
Sid#2121: (use gptneo lol)
Louis#0144: I thought its the 40GB TPUs?
Sid#2121: idk where you got that number from
Louis#0144: I assumed it was the same TPUs thats on colab
Louis#0144: no?
Louis#0144: those have a fair chunk of storage
Sid#2121: there's the CPU ram, which is big, but not as easy to use, then there's the TPU's ram, which is 16Gb (or Gib, can't remember)
Sid#2121: are we talking about storage, or ram here
Louis#0144: RAM
Sid#2121: because the TPUs don't have inbuilt storage, you need to use a cloud bucket
Sid#2121: right, yeah, it's just 16gb for the TPU's ram |
Louis#0144: wait but I can use massive models with TPUs on colab
Sid#2121: the TPU CPU's ram has something like 300gb of ram
Louis#0144: I was using reformer-large
Louis#0144: sorry wait
Louis#0144: T5-large
Louis#0144: and like
Sid#2121: pretty sure T5 uses mesh under the hood
Louis#0144: that doesnt fit in my local 48GB of VRAM
Sid#2121: colabs are v2s which have like.. 12Gib i think
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/760254191850487838/Screenshot_2020-09-28_at_23.38.28.png
Louis#0144: i see
Louis#0144: wonder how its so efficient with storage then
Sid#2121: storage? or RAM? Seems like you're talking about two different things
Sid#2121: colabs have a ton of disk storage, yeah
Louis#0144: like
Louis#0144: storage of matrices
Louis#0144: in RAM
Louis#0144: sorry
bmk#1476: Three different things actually
bmk#1476: VRAM, RAM, Disk |
Sid#2121: VRAM is just gpu RAM though, right
Sid#2121: i assumed Louis meant the TPU's RAM
Louis#0144: so how can TF and TPUs fit such a massive model into only 12GB
bmk#1476: Yes but it's disjoint from normal ram
Louis#0144: T5-Large is HUGE
Sid#2121: @Louis mesh
Louis#0144: but colab only uses 1 TPU
Louis#0144: no?
Sid#2121: 1 tpu with 8 cores
bmk#1476: VRAM is not shared with normal RAM
Louis#0144: each core has 12 GB?
Sid#2121: yes
Louis#0144: OH
Louis#0144: ok
Sid#2121: wait
Sid#2121: yes?
Sid#2121: i think
Sid#2121: lol
bmk#1476: That 12GB is RAM not VRAM
Louis#0144: it would have to be thats the case |
Louis#0144: otherwise theres no way it would fit
bmk#1476: Also each tpuv2 is only 4 cores apparently
Sid#2121: i don't think that's right
Sid#2121: where did you get that from
bmk#1476: Nvm
bmk#1476: Ok so each tpuv2 core has 8gb
bmk#1476: Vram
Louis#0144: can I run on TFRC using a colab notebook
Sid#2121: no
Sid#2121: i mean, you could maybe rig up some jupyter notebook, but you have to get a cloud bucket and a VM
Louis#0144: oh
Louis#0144: so its just
Sid#2121: seriously if you go over to #communities , join TPU podcast, tell them about your research and ask nicely, they'll help onboard you, and probably give you a few pods to play around with to get you set up
Sid#2121: that's their whole thing
Louis#0144: a google cloud instance
Louis#0144: I dont need pods yet
Louis#0144: soon
Louis#0144: still need to get my hands on 100 CPU cores
Louis#0144: to preproc
Louis#0144: lol |
archivus#7382: https://cdn.discordapp.com/attachments/729741769738158194/760380730110640148/image0.png
archivus#7382: I’m tempted to tweet this but a bunch of doctors follow me 😂
ntakouris#5483: @Louis tfx transform component map reduce is fast
Daj#7482: I don't think anyone would be in the mood for rewriting our working code from scratch? If it would even work, I haven't been keeping up with JAX
Sid#2121: Afaik there’s no model parallelism library for jax
spirit-from-germany#1488: https://youtu.be/jgTX5OnAsYQ
spirit-from-germany#1488: https://github.com/NVlabs/imaginaire
Daj#7482: https://twitter.com/PyTorch/status/1310974060214403077?s=19
Daj#7482: Speaking of model parallelism
bmk#1476: is XLA finally no longer completely and utterly unusable?
Daj#7482: ¯\\_(ツ)\_/¯
bmk#1476: ah yes english https://cdn.discordapp.com/attachments/729741769738158194/760578175976734800/unknown.png
Daj#7482: Seems it still needs one VM per 8 TPU cores
Noa Nabeshima#0290: What's definitely *not* in GPT-3's corpus?
Noa Nabeshima#0290: oh, wait articles from past 2019, nvm
Noa Nabeshima#0290: I'm good
StellaAthena#3530: > What's definitely *not* in GPT-3's corpus?
@Noa Nabeshima Erotica
FractalCycle#0001: Finally saw the Microsoft gpt news, makes this project even more necessary now
Noa Nabeshima#0290: What do you folks here think about the MSFT thing? Why do you think it is or isn't a big deal? |
Noa Nabeshima#0290: I currently don't believe it's a big deal, so I'm looking for other perspectives/other people's knowledge.
bmk#1476: I don't think it is either
bmk#1476: But the communication was absolute garbage
bmk#1476: I don't think they could have phrased it worse
StellaAthena#3530: I think it moved a lot of people’s assessment of how independent and greater-good oriented OpenAI is
StellaAthena#3530: It didn’t move my needle much because I already didn’t believe them, but other people did
Noa Nabeshima#0290: wow GPT-3 beats the crap out of me for next word prediction. That makes a lot of sense.
Noa Nabeshima#0290: Prompt:
Human: test
GPT-3: The
Actual: (NEWSER)
Prompt:(NEWSER)
Human: -
GPT-3: –
Actual: –
Prompt:(NEWSER) – |
Human: The
GPT-3: A
Actual: Attention
Prompt:(NEWSER) – Attention,
Human: all
GPT-3: all
Actual: NFL
Prompt:(NEWSER) – Attention, NFL
Human: lovers
GPT-3: fans:
Actual: announcers.
Prompt:(NEWSER) – Attention, NFL announcers.
Human: You
GPT-3: You |
Actual: It's
Prompt:(NEWSER) – Attention, NFL announcers. It's
Human: going
GPT-3: time
Actual: Patrick
Prompt:(NEWSER) – Attention, NFL announcers. It's Patrick
Human: Mckenzie
GPT-3: Peterson
Actual: Mahomes
Prompt:(NEWSER) – Attention, NFL announcers. It's Patrick Mahomes,
Human: the
GPT-3: not
Actual: not
|
Prompt:(NEWSER) – Attention, NFL announcers. It's Patrick Mahomes, not
Human: the
GPT-3: Patrick
Actual: Pat
Prompt:(NEWSER) – Attention, NFL announcers. It's Patrick Mahomes, not Pat
Human: Mahomes
GPT-3: Mahomes.
Actual: Mahomes.
Prompt:(NEWSER) – Attention, NFL announcers. It's Patrick Mahomes, not Pat Mahomes.
Human: Today
GPT-3: The
Actual: Just
Prompt:(NEWSER) – Attention, NFL announcers. It's Patrick Mahomes, not Pat Mahomes. Just |
Human: today
GPT-3: ask
Actual: ask
Prompt:(NEWSER) – Attention, NFL announcers. It's Patrick Mahomes, not Pat Mahomes. Just ask
Human: him
GPT-3: the
Actual: his
Prompt:(NEWSER) – Attention, NFL announcers. It's Patrick Mahomes, not Pat Mahomes. Just ask his
Human: followers
GPT-3: mom.
Actual: mother.
Prompt:(NEWSER) – Attention, NFL announcers. It's Patrick Mahomes, not Pat Mahomes. Just ask his mother.
Human:
bmk#1476: we should run a rigorous test for this |
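(A rigorous version of this game is just top-1 next-token accuracy over a held-out text, scored the same way for the human and the model. A minimal sketch with a local GPT-2 via `transformers`; any causal LM could be swapped in, and the text here is just the example above.)
```python
# Sketch: score a model's top-1 next-token predictions over a text,
# the same protocol a human player would be scored with.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

text = "(NEWSER) – Attention, NFL announcers. It's Patrick Mahomes, not Pat Mahomes."
ids = tokenizer.encode(text, return_tensors="pt")[0]

with torch.no_grad():
    logits = model(ids.unsqueeze(0)).logits[0]  # (seq_len, vocab)

preds = logits[:-1].argmax(dim=-1)        # prediction for each next token
hits = (preds == ids[1:]).sum().item()    # compare against the actual tokens
print(f"top-1 next-token accuracy: {hits}/{len(ids) - 1}")
```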