Louis#0144: Like sneak it in
Ward#1738: I believe he is quite interested and involved in the EEGI project
EricHallahan#1051: EEGI is long term. Let's not speculate on things.
Louis#0144: I might end up helping him on that at aleph alpha this summer
Louis#0144: lol
kip#6104: defo ask about eegi
gwern#1782: ask him about whether any openai or dm researchers are lurking to steal ideas. you know, ask him who he thinks is sus
bmk#1476: yeah, there might be oa/dm people among us
Daj#7482: We do have Amodei's Mom
UnsupervisedLearner#4148: With inherent memory limitations why does it seem like no one is trying to make transformers distributed and sparse, or at least coming up with some good in-place gradient algorithms? Am I just not noticing the work doing this?
Louis#0144: They are
Louis#0144: lol
Louis#0144: Check out MoE
Louis#0144: Or routing transformer which isn’t sparse how you imagine
Louis#0144: But it is kinda sparse
UnsupervisedLearner#4148: Afaik MoE greatly underperforms
Louis#0144: Yeah
RazikMazilya#0001: My University blocks the-eye.eu, I’ve reported it as incorrect but they still haven’t done anything. They say it’s because the site is “File Sharing” on the block page. So why isn’t GDrive or Mega blocked lol. Anyone know how to bypass DNS based filtering without admin privileges?
UnsupervisedLearner#4148: Routing ackshually sounds like what I was thinking about, sparse usually means big empty tensors which often are memory hogs of their own
cfoster0#4356: Which memory limitations were you thinking about?
UnsupervisedLearner#4148: We have a lot more time than space for big gpu ops
UnsupervisedLearner#4148: And gpu memory is afaik pretty hard to do
UnsupervisedLearner#4148: So until we get hardware breakthrough we have hyuge bottleneck on model parameters if we keep architecture as-is
cfoster0#4356: You sure about that? If I recall correctly, with existing techniques (DP, PP, offloading, reversible networks), bandwidth and compute are the real bottlenecks
UnsupervisedLearner#4148: Not sure, no. I ask kindly for good links and resources to prove me wrong. I do beleeb bandwidth as an issue points to the memory problem, and reversible nets I am not entirely sure of the exact tradeoff and if they're worth it, I wonder for example why I haven't seen a big project use them besides that one LSH transformer gewgle put out
UnsupervisedLearner#4148: Use a vpn?
UnsupervisedLearner#4148: I think I would pay for a good vpn just to make sure my schreul is not allowed to spy on my internut usage
inox#5400: don't pay for a VPN, use https://github.com/trailofbits/algo
UnsupervisedLearner#4148: Neat
UnsupervisedLearner#4148: Thank you
RazikMazilya#0001: The fine folks at the Eye’s Discord server told me the IP since the filter is DNS based
UnsupervisedLearner#4148: It's preddy annoy that they even have a filter in the first place. Just monitor suspect activity not actively thoughtban websites
RazikMazilya#0001: To block a site for File Sharing is the most stupid thing I've heard, but to say Google Drive and Mega are exempt from that rule is more stupid.
mgostIH#0245: @RazikMazilya Use Firefox DNS over https
bmk#1476: @StellaAthena is there any mathematically elegant way to describe the sort of "homomorphism" (in quotes, because i can't figure out exactly what type of objects it's between) between the interval [0, 1] under subtraction and one quadrant of the circle group under inner product?
bmk#1476: basically my problem is neither of these objects are groups since they aren't really closed
bmk#1476: and the reason i'm doing this and not Z onto the circle group is because it seems to be an important crux that in rotary the image is finite sized and we make sure it ends up not going more than pi/2 around
EricHallahan#1051: sounds like you want a simplex?
Louis#0144: I really like mullvad tho
Louis#0144: 🤷‍♂️
Teemochu#8740: Guessing it's 3-5 from being able to build a machine that costs about as much as a car but is still "a machine" (on one US power outlet), and about 10 from an Actual Gamer having that kind of power. So that's pretty accurate.
RazikMazilya#0001: They seem to have a way of blocking that
StellaAthena#3530: Why subtraction specifically
EricHallahan#1051: Because it is a difference?
bmk#1476: well, distance between positions
mgostIH#0245: If you have an homomorphism for + you get one for - anyways
bmk#1476: not quite
bmk#1476: because the size is bounded
bmk#1476: so + isnt closed
mgostIH#0245: Hmmm
inox#5400: I use mullvad as well tbh, but for torrenting, don't use algo for that
inox#5400: honestly I don't use algo either much, anything I'd want to use it for can be done with tailscale with less effort
45#2247: https://www.twitch.tv/mtrazzi
45#2247: talking to connor live on twitch for 50m
45#2247: ask questions (serious not accepted)
Daj#7482: :berk:
StellaAthena#3530: Okay so you have [0, 1] with Euclidean distance. And you want to say that that’s more or less the same thing as {e^ix : x in [0, π/2]} with distance being measured around the arc?
bmk#1476: x in [0, pi/2] for the second thing
bmk#1476: yeah
bmk#1476: and the similarity is subtraction maps to inner product
StellaAthena#3530: And the question is what to call this correspondence?
bmk#1476: but it's not closed under addition/multiplication
bmk#1476: yeah basically i want to know the best mathematical object for representing it
StellaAthena#3530: It’s not closed under subtraction either
bmk#1476: uh, say absolute distance and absolute inner product
bmk#1476: actually wait
bmk#1476: no
bmk#1476: ignore what i just said
mgostIH#0245: @bmk you know how you define rational numbers from pairs of integers?
Like say you have the pair (1, 2), you want this to be the same object as (2, 4), (3, 6) and so on
You want all these objects to be the same, so one thing you do is consider the quotient of ZxZ under the relationship that tells you whether two of these points are equal ( (a, b) ~ (c, d) iff a * d = b * c)
You could do the same for your numbers in [0, 1], identifying each pair by their difference, so (0.5, 0.6) being the same object as (0.51, 0.61)
StellaAthena#3530: Inner product is complex valued
mgostIH#0245: You could do this too for the inner products of numbers on the circle group, you define two pairs of complex numbers to be the same entity by considering the equivalent relationship under their inner product
mgostIH#0245: Then you get well defined sets and can talk bijections and whatnot
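(A sketch in LaTeX of the two equivalence relations mgostIH describes, for reference; the notation is illustrative, not quoted from the chat:)
```latex
% pairs in [0,1]^2 identified by their difference
\[ (a,b) \sim (c,d) \iff a - b = c - d, \qquad a,b,c,d \in [0,1] \]
% pairs on the quarter circle identified by their inner product
\[ (z,w) \approx (z',w') \iff \langle z,w\rangle = \langle z',w'\rangle, \qquad z,w \in \{e^{ix} : x \in [0,\pi/2]\} \]
```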
bmk#1476: sure, it isnt - what i'm thinking is that subtraction maps very well to inner product in a kinda homomorphismy sense
StellaAthena#3530: What purpose do you want to use this correspondence for?
bmk#1476: hm, and so then the equivalence classes sort of form diagonals of [0, 1]^2?
bmk#1476: i mean first off it feels like there's a mathematically elegant way to formulate rotary hiding somewhere in there for one, since to me that feels like it's the core of rotary, but also i want to show that this formalism generalizes nicely to tori and not spheres
cfoster0#4356: We can't hear Connor on the stream :sadge:
mgostIH#0245: You basically get [0, 1]x[0, 1]\\~ as your domain
mgostIH#0245: Which is kinda like how Q is ZxZ\\~
mgostIH#0245: ~ is the equivalence relationship
bmk#1476: and the equivalence classes would be diagonals from upper left to bottom right
mgostIH#0245: idk what you mean with diagonals 🤔
mgostIH#0245: I'll brb because I have dinner
bmk#1476: like those are the elements you collapse together
StellaAthena#3530: What’s going on is that, $(\mathbf{T},\cdot)\cong (\mathbf{R}/\mathbf{Z}, +)$
bmk#1476: R/Z, + is addition that wraps around?
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/837030180719427625/193204646687408129.png
StellaAthena#3530: Yup
bmk#1476: well, duh, obviously
StellaAthena#3530: This is a group isomorphism
bmk#1476: theyre trivially the same thing
bmk#1476: but irl we dont wrap around
StellaAthena#3530: They’re not *trivially* the same thing
bmk#1476: but i mean it's kinda obvious and it feels like skipping a major part of the difficulty by assumption
StellaAthena#3530: So, if we are going to talk about inner products we should really talk about vector spaces
bmk#1476: but Z isn't even a vector space
bmk#1476: so clearly it can't be a linear map
StellaAthena#3530: You have $v,w\in \mathbb{C}^d$ and $v’, w’\in\mathbb{R}^{2d}$. And you are observing that $\langle v, w\rangle = \cdots$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/837030992702603274/193204646687408129.png
StellaAthena#3530: What do you want to fill in there? $v’ - w’$?
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/837031140182720552/193204646687408129.png
bmk#1476: C^d/R^2d represents each embedding?
bmk#1476: sorry i have a work thing rn, 30 mins
StellaAthena#3530: I don’t understand the question. I’m taking the correspondence you postulated and moving to a space where we have notions like “inner products” because we don’t have inner products in the way you want on the spaces we were originally talking about
mgostIH#0245: @bmk Ohhh you mean diagonals if you were to see it as a square of sides [0, 1]
mgostIH#0245: Hmm yes
mgostIH#0245: I didn't visualize it like this but ye
mgostIH#0245: With the quotient thingy you don't get a direct homomorphism, but you can then define new operations on those and get a homomorphism on those
mgostIH#0245: You can also do the same on the inner product space
mgostIH#0245: Define all pairs that have the same inner product as being related
mgostIH#0245: It'll basically mean that two complex numbers that have the same angle between them will be the same object
45#2247: argh srry i hope there was sound at the end
Sora#8531: The sound came back after you stopped using the mic thing I think
Sora#8531: Maybe next time try it on your phone to see if it works
Sora#8531: That's what I used to do when streaming in the past
Sora#8531: Also can look at chat from your phone so you dont need to change windows
Sora#8531: Cool interview overall tho @45 . Looking forward to the next one!
Daj#7482: Thanks @45 , was fun, sorry I had to leave
bmk#1476: @StellaAthena k im back now
bmk#1476: i'm mostly looking at the *invariance* within the image rather than the actual embeddings themselves
bmk#1476: i think another way of looking at what i want is this:
bmk#1476: everything works out perfectly fine for Z -> T
bmk#1476: i can prove that Z^2 -> H doesnt work
bmk#1476: but that only shows that you cant map *infinitely large* images onto a sphere (and have the nice properties everywhere)
bmk#1476: but that doesn't imply that finitely large images can't be mapped onto a sphere
alstroemeria313#1694: sigh... https://cdn.discordapp.com/attachments/729741769738158194/837065308808282152/unknown.png
alstroemeria313#1694: no video signal, perfectly good for machine learning
EricHallahan#1051: I never considered that defective GPUs could still be valuable.
gwern#1782: "no video signal" suggests serious problems with it. I'd want to see some DL benchmarking and proof it's stable for at least a day of compute before I plunked down $1300+ for it
EricHallahan#1051: Doesn't matter. It is worth the risk when mining.
The Enemy#7615: has anyone posted the security research thing about openai here
The Enemy#7615: dont wanna post another recycled link
The Enemy#7615: yep gwern in #memes
Exocamp#8255: I may or may not be crazy, but I thought I remembered a website where you could see the progress of the gpt-neo training
Exocamp#8255: Is that still up?
cfoster0#4356: Foomboard. I think we retired it
kindiana#1016: we have wandb now
Exocamp#8255: Ah I see.
Exocamp#8255: Found it
Exocamp#8255: https://wandb.ai/eleutherai/neox
Putting it here for any lurkers who may have same question as me.
EricHallahan#1051: Retired.
Exocamp#8255: thx for help
set_snail#4916: Hey folks, just joined here. Quick question, I see a lot of projects already out in progress. Is there a project (even if it's deprecated I'd be willing to bring it up) which works on fairness of vision models? Works may include interpretation of how vision models work, what data and their properties dominate, functioning of the (black box) model itself, and interpretation of its fairness, i.e. when and where it can fail.
EricHallahan#1051: Are you asking about interpretability?
StellaAthena#3530: Welcome! We don’t have any projects on that right now. We have a project in vision models, but it’s pretty far detached from this.
EricHallahan#1051: Yeah, we are not really working on any interpretability projects right now.
set_snail#4916: Thanks Stella. @EricHallahan Yeah interpretabilty would be one of them but can include even broader things 🙂
Also, how do projects start here? Do people have to join an existing project only, or can a new one be proposed (is there a process for it)?
set_snail#4916: P.S. I am a ML researcher and I bring along some industry experience as well. Sorry didn't introduce myself properly in the beginning
EricHallahan#1051: There really isn't a formal proposal process. It is kind of "give us an elevator pitch, and if we like what we see we'll think about doing something."
EricHallahan#1051: Though we kinda had a major expansion recently of projects lol
set_snail#4916: Thanks, Eric. What would major expansion mean?
bmk#1476: we mostly do LM stuff
bmk#1476: but also we'd love to expand more into interpretability stuff in general
bmk#1476: bonus points if you do vision transformer stuff that also generalizes to LMs
StellaAthena#3530: In the past three months we've gone from three to.... however many are under the projects header now
EricHallahan#1051: I am talking about how we resurrected #deleted-channel, and then added #vision and #sp3 (speech, signal processing, audio, etc.)
bmk#1476: also dont forget the OG projects like neo(x) and eval harness
EricHallahan#1051: Yeah, I think we may need to retard our expansion a bit lol
bmk#1476: and now pyfra
bmk#1476: and speedrun
bmk#1476: and speedrun 2: electric boogaloo
EricHallahan#1051: Yeah, if you want to do something, do eval harness.
EricHallahan#1051: *dangles authorship*
EricHallahan#1051: Oh, how did I forget about #carp?
bmk#1476: i already said speedrun
EricHallahan#1051: I know, I don't know how I missed it the first time.
set_snail#4916: Right. I will look into them. Where can I find details of speedrun and the OG projects? I couldn't see them on the website.
EricHallahan#1051: lol
EricHallahan#1051: I'm the person who does website content,
EricHallahan#1051: We don't put all the projects up there because I don't know what to put there.
EricHallahan#1051: "speedrun" is now #carp. We are attempting to make language models better by adding a minor multimodal component.
set_snail#4916: LOl. Ok. I am new to language models but have done a lot on vision (which is why I was initially biased for vision). I am wondering where can I start.
EricHallahan#1051: #lm-thunderdome is where eval harness development is discussed.
EricHallahan#1051: I don't know where they are right now, but check out the project doc for #vision. https://discord.com/channels/729741769192767510/833024668780068894/833025052973858856
EricHallahan#1051: I expect P4TR10T_TR41T0R to not be up at this hour, so unfortunately I can't ask him what he needs help with.
set_snail#4916: No worries. There is a lot of conversation there. I will skim them to get up to speed. I can then ask questions there directly
EricHallahan#1051: Yeah, P4TR10T is CEST (UTC+2), so I don't expect him to be up anytime soon sadly.
set_snail#4916: No worries. Thanks for the help, Eric
neko#5937: Is there a magic secret where gpt neo outperforms gpt2xl at text generation?
EricHallahan#1051: Maybe? I don't know, it depends on what your measure of performance is.
neko#5937: Like most of the reviews seem to be that gpt neo 2.7b is not much better than gpt neo 1.3b but I think there's more to it
neko#5937: Good one
Louis#0144: Prompt eng
Louis#0144: 100%
neko#5937: I don't understand
Louis#0144: 1.3 doesn’t play nicely with prompts
EricHallahan#1051: You always say that lol
Louis#0144: 2.7b is much more reliable
neko#5937: Oh ok
Louis#0144: Because I am working on a prompt eng paper
Louis#0144: And 2.7 works really well
neko#5937: Yeah LMs seem to have their own personalities
neko#5937: Like different LM have their own traits
neko#5937: In terms of what they understand
neko#5937: Thanks a lot that really helps
bmk#1476: can someone pls help implement something in python using multiprocessing or something:
bmk#1476: so i have a list of functions that take a parameter x
bmk#1476: i have a list of xs. for each x in xs, i want to call the first function with x, then the second, etc. so sort of like a pipeline. but the thing is, i can actually run f(x2) and g(x1) at the same time
bmk#1476: Basically batch pipelined multiprocessing
kindiana#1016: can't you implement this with imap?
kindiana#1016: wait
bmk#1476: no it's pipelining
bmk#1476: not parallel processing
kindiana#1016: so you eventually want g(f(x1)), g(f(x2)), ...
kindiana#1016: right?
bmk#1476: yeah
bmk#1476: and ofc with more functions
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/837181103529787402/Download_5.png
kindiana#1016: create a bunch of multiprocessing pools
Louis#0144: Use pool and a queue
kindiana#1016: and imap across them
kindiana#1016: imap takes an iterator
kindiana#1016: and produces an iterator
Louis#0144: Have a master thread feed the queue
bmk#1476: I'd prefer a premade solution lol i don't feel like doing this by hand and possibly messing it up
bmk#1476: someone has got to already have made a python lib for it right
Louis#0144: imap is easy
Louis#0144: It’s ten lines of code
Louis#0144: lol
bmk#1476: how does this help me pipeline
Louis#0144: Just an iterator class
kindiana#1016: you can chain the imaps
Louis#0144: I’m too tired rn ask Ben
bmk#1476: i can only run one f at a time, one g at a time, etc
Louis#0144: Oh
kindiana#1016: yeah
kindiana#1016: create a bunch of pools
kindiana#1016: one for each function
kindiana#1016: with one thread
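(A minimal sketch of the pool-per-stage pipeline kindiana suggests; `f` and `g` are placeholder stage functions, and this is illustrative rather than a drop-in solution:)
```python
from multiprocessing import Pool

def f(x):
    # stage 1 (placeholder work)
    return x + 1

def g(x):
    # stage 2 (placeholder work)
    return x * 2

if __name__ == "__main__":
    xs = range(10)
    # one single-worker pool per stage, so only one f and one g run at a time,
    # but f(x2) can execute while g(f(x1)) is still in flight
    with Pool(1) as pool_f, Pool(1) as pool_g:
        stage1 = pool_f.imap(f, xs)      # lazy iterator of f(x) results
        stage2 = pool_g.imap(g, stage1)  # consumes stage1 as it is produced
        print(list(stage2))              # [2, 4, 6, ..., 20]
```
Chaining imaps works here because each imap returns a lazy iterator, so the downstream pool pulls results as the upstream pool finishes them.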
bmk#1476: surely someone else has already implemented this before
Louis#0144: Why
Louis#0144: It’s ez
Louis#0144: lol
kindiana#1016: its not that complicated 🤔
Louis#0144: Are u scared of multi threading
Louis#0144: lol
Louis#0144: Chicken
bmk#1476: look I'm the kind of person that needs to alias os.system to sh
Louis#0144: Bawk
Louis#0144: :goose2:
Louis#0144: We need to get Leo to write multithreaded code in C
Louis#0144: Show him a fun time
bmk#1476: using os.system is too hard for me
bmk#1476: so i have to alias it to sh
Louis#0144: Mutex with pointers suck
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/837182174246731776/unknown.png
Louis#0144: And cache misfire bc you forgot to cache the right data for the right thread
Louis#0144: gg
bmk#1476: like im such a scrub that i dont want to type this out each time https://cdn.discordapp.com/attachments/729741769738158194/837182252168904734/unknown.png
janus#0150: @bmk I'll write something for you
bmk#1476: @janus ben already wrote something up
bmk#1476: so it's fine
janus#0150: ah cool
cfoster0#4356: Can one stick a numpy array inside the meta of lm_dataformat examples? Also... should one
cfoster0#4356: Thinking about this for CLAP, where each example is a short amount of text, with an accompanying spectrogram clip
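(A hedged sketch of one way to do this, assuming lm_dataformat's documented `Archive` API; since examples are serialized to JSON, a numpy array can't go into `meta` directly, so it is converted with `.tolist()` here, which may be wasteful for large spectrograms:)
```python
import numpy as np
from lm_dataformat import Archive

ar = Archive("clap_data")              # output directory (illustrative name)
spectrogram = np.random.rand(80, 128)  # placeholder spectrogram clip
ar.add_data(
    "a short amount of text",
    meta={"spectrogram": spectrogram.tolist()},  # JSON-serializable form
)
ar.commit()
```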
𓅬 gabriel_syme 𓅬#3220: Are any of the pretrained models on HF able to handle structured text (xml files actually)? Or, which one should I be fine-tuning if that's not the case? I would be interested in similarity search + retrieval at first
ersatz#0001: anyone have an estimate of the dollar price of training the big model?
Sid#2121: big dollar
Sid#2121: about yay big 🖐️ ---------------------------------------------- 🖐️
Bran#4755: (3 quid)
Daj#7482: But yeah, like, multiple millions for renting hardware alone
Daj#7482: But there's so many unknowns you can't reliably quote a number
ersatz#0001: yeah just asking for an estimate, like around $2M? $5M? $10M?
Daj#7482: Too many factors to be more precise than that range imo
Daj#7482: And this is only hardware renting not counting dev time and the likely many failed starts before you get it right
finetune#0907: is that supposed to be in the rope blog post? looks like it may be left over from editing https://cdn.discordapp.com/attachments/729741769738158194/837309913138462750/blog.png
StellaAthena#3530: @finetune Ah I thought we had fixed that, I’ll do so shortly
StellaAthena#3530: huh
StellaAthena#3530: $$f(\mathbf{q}, m) =
\begin{pmatrix}
M_1 & & & \\
& M_2 & & \\
& & \ddots & \\
& & & M_{d/2}
\end{pmatrix}
\begin{pmatrix}
q_1\\
q_2\\
\vdots\\
q_d
\end{pmatrix} = \mathbf{\Theta_m Q_m} = \mathbf{\Theta_m W_qX_m}$$
StellaAthena#3530: This is what it is supposed to say
StellaAthena#3530: but it renders without an issue here.
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/837311528222720030/193204646687408129.png
StellaAthena#3530: @finetune It's fixed
finetune#0907: all good now 👍
nz#9710: Was the #links channel deleted?
kindiana#1016: I guess alignment links is gone
Kia#2550: It can do graphs, that's interesting
Daj#7482: Yea we didn't see it as having a clear use
Daj#7482: Same with alignment-links
Daj#7482: We tend to try to rein in channel proliferation where possible lol
nz#9710: I see
Kia#2550: The cycle of life of a channel
Louis#0144: gm my goslings
Louis#0144: :goose3:
Louis#0144: 🥰
StellaAthena#3530: Graphs? What do you mean? This isn't a graph. That said, yes it can do graphs. It can do anything LaTeX can do (aka, anything)
Kia#2550: Ow- um Still interesting to be honest
Kia#2550: :look: It can do graphs
EricHallahan#1051: Yeah, I had noticed it too.
Caebokee#9905: Hi everyone,
Sorry for silly question but
Is it possible to generate text on CPU using https://huggingface.co/EleutherAI/gpt-neo-1.3B ? Or do I need to use TPU/GPU for that?
EricHallahan#1051: Nope, should work fine.
EricHallahan#1051: CPU might be a bit slow, but you should be able to run it no problem if you got the memory.
Caebokee#9905: Ahh I got only 32 GB of RAM, might be the issue
I tried this 3 line example from the huggingface page - got empty output
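(For reference, a minimal CPU generation sketch along the lines of the Hugging Face example under discussion; the prompt is arbitrary and no GPU is required:)
```python
from transformers import pipeline

# runs on CPU by default when no device is specified
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
out = generator("EleutherAI is", do_sample=True, max_length=50)
print(out[0]["generated_text"])
```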
EricHallahan#1051: 32 GiB of memory should be *plenty*.
EricHallahan#1051: even for 2.7B
Quill#9732: sampling from gptneo-2.7B on CPU for me takes ~20 GB of memory (...and ~20 core-minutes to generate a response to an input)
EricHallahan#1051: Yeah, if you can't fit 1.3B into memory, there is a deeper problem.
EricHallahan#1051: It can either be a borked configuration (which I highly doubt because HF) or you have something else allocated taking up memory.
Caebokee#9905: thanks folks, will double check the memory consumption
Yang#8543: Anybody have a Julia colab here?
Yang#8543: Ok, how about gpt neo colab, is there one where you can kinda talk to it?
EricHallahan#1051: You can prompt engineer that.
Yang#8543: Any conversational agent colab? Just need some example to try integration
EricHallahan#1051: Are there any pretrained conversational pipeline models on HF?
Yang#8543: Should be
EricHallahan#1051: I know that DialoGPT exists.
EricHallahan#1051: But it isn't tuned so the results are pretty terrible.
Yang#8543: There's Russian rugpt3 1.3b. should I look any further?
Yang#8543: And ai dungeon 2
EricHallahan#1051: If you just want to try integration then any of them should be fine I would think.
Yang#8543: But what's the best you can get?
Yang#8543: Or you can think of
Yang#8543: To just run it on colab via web interface
EricHallahan#1051: ¯\_(ツ)_/¯
Yang#8543: Facebook one was open iirc
Yang#8543: Only can't find it
Yang#8543: Aight, just grabbing random then. Thank you
Yang#8543: Dialo biggest is 762M
Yang#8543: Guess Russian one should be smarter
EricHallahan#1051: Problem is that I don't know any Russian.
Yang#8543: It does speak English
Yang#8543: Just most of the data set was Russian text, iirc. Although they threw stack overflow and gh at it too
Yang#8543: At least it's mit
inspiration101#2728: I made some progress on making a gpt-neo sandbox
EricHallahan#1051: Cool to hear!
Yang#8543: Ough, there's gpt2 1.5b with js UI
inspiration101#2728: this is it in action https://cdn.discordapp.com/attachments/729741769738158194/837358793800155217/ezgif.com-gif-maker.gif
EricHallahan#1051: Hey, no one said the interface had to be fancy. ¯\_(ツ)_/¯
EricHallahan#1051: If it works, it works.
EricHallahan#1051: I tend to think native elements work better than heavily styled ones.
inspiration101#2728: the mouse effect is from the recorder, by the way
EricHallahan#1051: Yeah, I didn't actually notice it until you mentioned it lol
mkualquiera#3484: wait this is not offtopic
mkualquiera#3484: fuck
EricHallahan#1051: Excuse me?
EricHallahan#1051: I don't understand what you are claiming.
mkualquiera#3484: wat
Sora#8531: I want some of what this guy is having
mkualquiera#3484: The outgroup thing is just a meme, this is one of the groups with the least established borders ever, literally all you have to do to belong here is just chatting for a while?
Tinytitan#5596: to reiterate: wat
Jaeson#0472: Can you please give an example?
gwern#1782: name three
gwern#1782: so you can't.
gwern#1782: I'll be sure to remember that every time in the future you make normative recommendations and offer claims from an omniscient god's eye point of view.
gwern#1782: well, now they won't.
andyljones#7746: maybe there are more important traits than being a high quality engineer 🤔
mkualquiera#3484: Did something happen that the rest of us are not aware of... or?
EricHallahan#1051: No, not to my knowledge.
mkualquiera#3484: I'm very confused
gwern#1782: what respect did you earn, exactly?
alstroemeria313#1694: ...was it the goose thing that got moved to #off-topic by any chance
bmk#1476: @clara
mkualquiera#3484: No that was later
alstroemeria313#1694: oh
alstroemeria313#1694: i'm out of ideas then
bmk#1476: if you can't state exactly what it is you take issue with, then stop fighting or i'll have to ban you
andyljones#7746: honestly im pretty confused too, but clara has a history of unprovoked hostility and im mocking them for it
mkualquiera#3484: I mean the first thing you said was outright hostile without any context
bmk#1476: we don't want people running around being hostile for no good reason
bmk#1476: :banhammer:
nz#9710: wait did you ban them
bmk#1476: yes
bmk#1476: they've been doing this for a while now
nz#9710: what was the last message (I only just saw it before deletion)
gwern#1782: yeah, imagine starting out by calling us "despicable" and wondering why we aren't falling over to respect them
bmk#1476: just popping in and attacking people for no good reason
nz#9710: to be honest I enjoyed some of their contributions to the discussion (such as in #research) but they did attack andy a couple times before and it was in my view totally unjustified
nz#9710: it's a pity, but understandable
mkualquiera#3484: Honestly it would've been a lot less bad if they literally explained what it was all about
StellaAthena#3530: We try (perhaps more than we should) to give people leeway when they also contribute positively but Clara has been trying that line for a while.
bmk#1476: they have a long pattern of doing this - accusing people of things and then not saying what
bmk#1476: we've already given them the benefit of the doubt
gwern#1782: indeed. I am fine with according people ordinary levels of respect... and Clara threw it all away long before without earning any replacement respect
Deleted User#0000: what did he/she say? out of curiosity
nz#9710: IIRC that something about the culture of the server was leading to many high quality engineers leaving/refusing to collaborate
nz#9710: unfortunately they did not provide a single example of such behavior
bmk#1476: theyve been complaining about this since forever, too
EricHallahan#1051: They have been incredibly hostile.
gwern#1782: (concerntrolling, basically)
nz#9710: like I myself have done a bit of average outgroup fan vs average ingroup enjoyer memes, but those are literally memes, they are not supposed to be taken at face value
bmk#1476: if anything, our culture of giving so much benefit of the doubt to concerntrolls probably lowers the server quality more
xen0#3601: uwu
gwern#1782: the balance seems reasonably okish to me at the moment. there's still enough technical work and high-context stuff that recruiting and retention and focus are balanced
mkualquiera#3484: and as I said earlier, this group encourages everyone to participate, even if they are not that technologically capable. Like Kianne is literally a baker and they have so much fun here
mkualquiera#3484: The only people that are really turned down are weirdos like that one dude that was posting shirtless selfies
mkualquiera#3484: so I honestly think what they said was either some elaborate troll or some extremely distorted mindset
Deleted User#0000: well, if there's another org out there, i'd like to know about it
Deleted User#0000: he/she should just go start his/her own
Louis#0144: Woah
Louis#0144: Wtf happened
Louis#0144: Hey Phil how did poolformer go
EricHallahan#1051: Working on it IIRC
Louis#0144: Ah ok
Deleted User#0000: still working things out in the mind
EricHallahan#1051: Oh, there he is lol
Deleted User#0000: spilling it out in code after ice cream's walk
StellaAthena#3530: It’s not just you
nz#9710: I remember attacks against both you and sphinx (though I definitely lost a few, I am not always looking at this discord eheh)
triggerhappygandi#0001: @Deleted User your dog's name is ice cream?
triggerhappygandi#0001: why
triggerhappygandi#0001: lol
Deleted User#0000: its funny
Deleted User#0000: the german shephard i grew up with was named "pup-pup"
Deleted User#0000: (my sister named it)
Deleted User#0000: keeping the tradition
Sphinx#2092: Hmm I'm generally a prickly person so I didn't think much of it lol
Deleted User#0000: people name their dogs mochi or chocolate
Deleted User#0000: why can't i do ice cream
nz#9710: yea, but I feel sorry since you had simply made an interesting observation about research and they attacked you for it
mkualquiera#3484: Plus "lucidrains' ice cream" turned out to be a great CLIP prompt for aesthetics
inspiration101#2728: Does anyone have any specific features they would want in a gpt-neo sandbox?
mkualquiera#3484: 🤔
mkualquiera#3484: The most important part imo is a good repetition prevention thing
inspiration101#2728: What do you mean?
mkualquiera#3484: If you sample the model by itself, it can easily get stuck in a loop repeating the same phrase over and over
finetune#0907: yeah, i think having a way to control the temperature and repetition_penalty parameters would be good
inspiration101#2728: That is a good call, I will add that
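(A minimal sketch of the two controls discussed above, temperature and a CTRL-style repetition penalty, applied to a raw logits vector; illustrative, not the exact Hugging Face implementation:)
```python
import torch

def adjust_logits(logits, generated_ids, temperature=0.8, penalty=1.2):
    # higher temperature flattens the distribution; lower sharpens it
    logits = logits / temperature
    # discourage tokens that have already been generated (CTRL-style penalty)
    for tid in set(generated_ids):
        if logits[tid] > 0:
            logits[tid] = logits[tid] / penalty
        else:
            logits[tid] = logits[tid] * penalty
    return logits

# usage: sample the next token id from the adjusted distribution
logits = torch.randn(50257)  # fake vocab-sized logits
probs = torch.softmax(adjust_logits(logits, [42, 42, 7]), dim=-1)
next_id = torch.multinomial(probs, 1).item()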
alexyz#3459: https://github.com/finetuneanon/gpt-neo_dungeon Found an AI-Dungeon knockoff using gpt-neo
triggerhappygandi#0001: look at #the-faraday-cage-archive for gpt-neo responses, for example
alexyz#3459: Looks interesting
UnsupervisedLearner#4148: Does anyone know of work measuring the entropy content of generated text and optimizing based on that? I feel like I am more stimulated by highly entropic text
cfoster0#4356: We commonly use perplexity as a validation metric, which is directly related to entropy iirc
cfoster0#4356: I'm not sure if directly optimizing for entropy would work well though 🤔
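(The standard relation being alluded to, for reference: perplexity is the exponential of the average per-token cross-entropy:)
```latex
\[
\mathrm{PPL}(x_{1:N}) = \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N} \log p(x_i \mid x_{<i})\right)
\]
```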
UnsupervisedLearner#4148: I was thinking about it. In a sense word salad would be more entropic
mkualquiera#3484: wouldn't that just make the text more random?
mkualquiera#3484: I mean you can just increase the temperature if that's what you want
EricHallahan#1051: If you want random text just initialize an untrained model.
inox#5400: that's something like the information density they talk about in this paper https://arxiv.org/abs/2010.02650
UnsupervisedLearner#4148: Hey yeah that sounds like what I was trying to think up
UnsupervisedLearner#4148: thanks
inox#5400: this stochastic version of beam search looks fun but I've never used it https://arxiv.org/abs/1903.06059
Deleted User#0000: yea truth is, talent is the real scarcity
Deleted User#0000: there is a reason why companies would have entire departments or recruiters out on commission
Deleted User#0000: to head hunt
Deleted User#0000: the best thing to do is to hold a good mission statement, for a group like Eleuther
EricHallahan#1051: Yeah, now that I think about it, we don't really have a mission statement.
EricHallahan#1051: We have a vision, but no mission.
EricHallahan#1051: IMO
Deleted User#0000: i think its fine
Deleted User#0000: releasing gpt-neo already left a big statement
Deleted User#0000: action speaks louder than words
EricHallahan#1051: Well yeah, but one of our goals is to not be known as just GPT-Neo people.
UnsupervisedLearner#4148: Is this really true? I feel like deep learning is swamped with talent and interest
EricHallahan#1051: We don't want that to be our image.
UnsupervisedLearner#4148: Real scarcity is.... hardware
Deleted User#0000: i think the need is somewhere in the intersection. you need software talent too
Deleted User#0000: not just rote theory
Deleted User#0000: that's what i noticed hanging out with a couple groups by now
UnsupervisedLearner#4148: Ahh, that's what I've realized. Less competitive to focus on engineering instead of competing in paper publishing ratrace (blessed be those that do)
UnsupervisedLearner#4148: You know, that said, I will have time for a side project or two in a couple weeks. I'll try and help out here where possible I think. It's as good a place as any
Deleted User#0000: gpt neo is already really rare
Deleted User#0000: most people just talk their way through life. you learn that after hanging out here in the valley, where people pride them for execution (there's still a bunch of talk and nonsense here)
Deleted User#0000: should be proud, and proud of Sid too for carrying most of the project ha
EricHallahan#1051: Sid is invaluable.
Deleted User#0000: and now he's been poached by aleph alpha 😦
Deleted User#0000: but it seems like aleph alpha and eleuther's goals align
bmk#1476: thankfully i havent been poached, but also i dont do much of value either so it balances out
Deleted User#0000: aren't you holding a day job with some startup?
bmk#1476: yes (or, well, kinda, it's complicated) but i havent been poached by aleph yet
Deleted User#0000: sweet talk connor into a referral
bmk#1476: nah, my goal is to make a ton of money and then live off it and be completely, truly unaffiliated
bmk#1476: where a ton = like, idk, not that much lol
bmk#1476: so i guess my goal is to make a mediocre amount of money and live off it for a while
UnsupervisedLearner#4148: mein brethren
UnsupervisedLearner#4148: I just want geographic freedom really
bmk#1476: my goal rn is a job at OA, but first i need a bit more resume building
bmk#1476: i have several papers in the last mile that i want to get out first
bmk#1476: oh, same
bmk#1476: the pandemic makes that really hard >.>
bmk#1476: 2 week quarantine periods? nein danke
UnsupervisedLearner#4148: Geographic freedom > hyuge runway maybe even FIRE > rest of my time dedicated to dilettante tek works from ocean farming permaculture to AI shenanigans
inox#5400: I could live off my mediocre savings fine now but if I stop working they take away my healthcare
gwern#1782: what's an aleph alpha?
UnsupervisedLearner#4148: https://aleph-alpha.de/
gwern#1782: "Until 2015, the size of AI models grew according to Moore’s Law, but since then, it has been doubling about every 3 months." 😦
gwern#1782: so this is... a startup trying to lap at the EU trough? or what
ersatz#0001: isn't some guy here working at this place or something
Daj#7482: Yea I work there
Daj#7482: There's a lot of free energy in the EU market for this kind of stuff atm and they let me do open alignment research all day so they're pretty cool in my books hah
Daj#7482: I do not endorse the generic website lol
mgostIH#0245: aka "They pay the bill and I am a philosopher dude"
ersatz#0001: isn't alignment more math than philosophy
Daj#7482: They also pay Sid to work on NeoX :berk:
gwern#1782: @Daj on the bright side, we're due for a 2.8t parameter GPT-3 now that 4 doublings have passed since gpt-3 was published!
mgostIH#0245: Silly
Daj#7482: lol i'll inform the poor PR guy to fix that
mgostIH#0245: I think doublings are like every 2 year
mgostIH#0245: It's like a conservative moore's law
Daj#7482: Then again tbh 2.8T could drop and I wouldn't even be that surprised
ersatz#0001: maybe that was written before covid, crypto and all that
EricHallahan#1051: Switch Transformer lol
EricHallahan#1051: Literally just exists to claim a superlative.
EricHallahan#1051: IMO
Daj#7482: It's a nice company fwiw
Daj#7482: Let me work on alignment and do Eleuther stuff all day
mgostIH#0245: Why do you need to work on alignment if it was already solved in the 90s?
mgostIH#0245: Like the rest of deep learning
Daj#7482: True
Daj#7482: Guess I work on my true passion then
ersatz#0001: this whole field is maybe half a dozen people top
Daj#7482: Memes
Daj#7482: And meme AIs
Daj#7482: It's more than that but it is small
ersatz#0001: full time? real alignment and not "ai ethics" or something? not much more
Daj#7482: MIRI alone is more than that
ersatz#0001: they must have grown since the last time I looked and I assumed that MIRI was the entire field tbh
Daj#7482: Field is much larger now
Daj#7482: CHAI, CLR, FHI, various safety teams, various independent researchers...
ersatz#0001: true I'd forgotten about Russell's group
UnsupervisedLearner#4148: Wat means alignment. It does what you want and no paperclips or it learns your values and lives them, or something else?
bmk#1476: paperclip bad cev good
UnsupervisedLearner#4148: Can we have a Mesopotamian citystate god competition except the central gods are actually superintelligent AIs?
ersatz#0001: > Wat means alignment
building a Mind
Daj#7482: Alignment is the general term used to describe the field of research of how to "make an AI do good things we want and not bad things we don't want"
Daj#7482: This, turns out to be really, really hard
ersatz#0001: possibly impossible before AGI
Daj#7482: Which would/will suck, a lot lol
UnsupervisedLearner#4148: It's hard to make it do even bad things besides fail to converge I feel like
Daj#7482: I think that's just a temporary situation
Daj#7482: Our systems are improving exponentially
Daj#7482: Eventually, they will get good
UnsupervisedLearner#4148: Just make a ton of AGIs and hope a good portion likes us. Surely that's a reasonable solution
Daj#7482: I expect very soon, but even if it takes a long time, still worth thinking about
Daj#7482: It is not at all lmao
ersatz#0001: It depends, imagine an AGI perfectly aligned with the values of humanity, it would be simple to reverse its utility function and create a world where "I Have No Mouth, and I Must Scream" is a dream world
UnsupervisedLearner#4148: No I agree and thank you for thinking on it since I mostly wanna just move fast and break things
Daj#7482: The "space of possible values" is enormous, a "random mind" would, on average, have absolutely no values we endorse whatsoever
Daj#7482: This is called "hyperexistential separation"
Daj#7482: At least, the avoidance is
Daj#7482: The scenario you describe is called a "suffering risk" or "s-risk"
Daj#7482: And is the Nr1 thing that I dread
UnsupervisedLearner#4148: I want a return to tribal competition with local pantheon except the local pantheon consists of AGIs. Yes this is unreasonable.
Daj#7482: Unfortunately, if the AGIs can build nukes, nanotech and worse, "competition" could end in a sterile universe very quickly
Daj#7482: And competition generally engenders malthusian conditions
ersatz#0001: it can be argued that doing alignment research is many orders of magnitude more hazardous than running an unaligned AGI for this reason lol
Daj#7482: Which aren't very nice
UnsupervisedLearner#4148: "Can ~~God~~ the AGI make a rock so heavy it cannot be lifted"
Daj#7482: It can but I think that's wrong since I expect s-risk with high probability
Daj#7482: By default
Daj#7482: Yes, cryptography lol
bmk#1476: idk about you but as a fellow thing i prefer not being broken
ersatz#0001: weird, why that?
bmk#1476: oh i'm interested in hearing the case for this
bmk#1476: i think of s-risk as unlikely-but-really-bad, not default scenario
UnsupervisedLearner#4148: It's okay as long as the breaker has more aesthetic value. Like, I require a certain amount of land resources to feed my existence and I also don't feel guilt because my human self is more interesting than what would otherwise exist
Daj#7482: I expect the "default" AIs we build will either be naive "preference optimizers" that lead to Slaaneshi scenarios, profit maximizers that paperclip the universe with valueless "profit" or military AI that results in [SCREAMING]
mgostIH#0245: What if the AI proves P = NP
Daj#7482: God can't save you now
UnsupervisedLearner#4148: The best way to fight the paperclip maximizer is another maximizer that maximizes fighting the first one :)
Daj#7482: See the "except when they build super weapons" argument
mgostIH#0245: Hey, it worked with nuclear bombs
Daj#7482: Optimizers fighting sounds fun until one figures out a vacuum decay bomb or something
Daj#7482: It might work out
UnsupervisedLearner#4148: Build a superdefense AI
Daj#7482: This seems like a likely scenario
Daj#7482: Some kind of MAD multipolar outcome
ersatz#0001: I expect death if an AGI is misaligned, not the worst torture imaginable for the greatest number of sentient beings possible as with an aligned AGI with its utility function reversed
mgostIH#0245: SAME
Daj#7482: But that's a _huge_ failure compared to the heavenly existence we _could_ have gotten
mgostIH#0245: Was arguing about it the other day
UnsupervisedLearner#4148: Maybe not even death but we're just kinda sidelined like racoons
Daj#7482: I think the most likely source of s-risk is military AI and mindcrime
Daj#7482: Security through obscurity is not security
mgostIH#0245: We already have mindcrime, it's called AI Dungeon
Daj#7482: Just "not telling the AI what humans don't like" wouldn't prevent a maliciously aligned AGI from figuring it out
Daj#7482: Or a neutral AGI from simulating morally relevant minds for experiments
bmk#1476: @Daj what do you think the probability of s-risk agi conditional on some unaligned agi is?
Daj#7482: Depends on how unaligned we're talking
mgostIH#0245: Where s risk I think is intended as "Torture all conscious things"
Daj#7482: "too high" is my answer
bmk#1476: i mean just like marginal probability
Daj#7482: :guilty:
alstroemeria313#1694: no, that would result in it bringing the first one back to keep fighting it
alstroemeria313#1694: for the rest of time
Daj#7482: I dunno but uncomfortably high
bmk#1476: like, what OOM? 10%? 1%? 0.0001%?
mgostIH#0245: Get into the EVA, Connor, you need to fight the paperclip maximizer
Daj#7482: Significantly >10%
bmk#1476: huh
Daj#7482: I don't know what this means but alright cool
alstroemeria313#1694: it's anime
mgostIH#0245: It's a reference to some oooold anime
mgostIH#0245: Called Evangelion
mgostIH#0245: Where they put young kids into big ass mechs and fight each other lmao
bmk#1476: because my intuition was that in the extreme of optimization, you only have exactly aligned, exactly antialigned, and literally everything else where "human suffering" is no longer a meaningful concept
Daj#7482: YoUnG kIdS
mgostIH#0245: Bro not my fault people at age < 18 exist
Daj#7482: Yea that sounds sorta right
Daj#7482: I'm also tired af
Daj#7482: So not prime Connor brain atm lol
bmk#1476: so in this framing, basically the only way you could have s-risk is sign flip
mgostIH#0245: Also another thing I wanted to tell you
mgostIH#0245: One Punch Man is written by One
Daj#7482: We must stop this. With a small donation, you can help stop people under 18 from existing
mgostIH#0245: He made another anime
ersatz#0001: what the fuck are those anime plot-tier stakes for fucks sake, >10% of the worse possible torture for all sentient beings in a few decades what the fucking fuck
bmk#1476: any other objective will eventually become just everyone is dead and paperclipped
mgostIH#0245: It's called Mob Psycho 100
Daj#7482: I disagree then. Mindcrime and suffering subroutines seems instrumentally useful to many paperclip maximizers
Daj#7482: And military AI is a possibility for super antialigned
bmk#1476: really?
Daj#7482: lol someone Link that Mufflax post
mgostIH#0245: what about Silicon Valley AI
Daj#7482: Welcome to the club
mgostIH#0245: Where its job is to give us ads for eternity
bmk#1476: this sounds like you need acausal bullshit to justify
mgostIH#0245: Smells like a Pascal Wager
bmk#1476: Pascal's wager is not a fallacy if you have a universal prior fight me
Daj#7482: Pascal Wager is just an excuse to ignore utilitarian calculations you don't like the results of :berk:
mgostIH#0245: Nah bro I just run tanh on my utility function
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/837421610679009340/rationality.png
Daj#7482: :berk:
mgostIH#0245: Isn't that the "You shouldn't be bayesian if you want catgirls for eternity" thing
Daj#7482: It's someone reading Roko's SL5 post and going insane in real time lol
mgostIH#0245: Yesss it is
Daj#7482: Poor guy
Daj#7482: I hear he's doing better nowadays
mgostIH#0245: Line 3-4 of that
ersatz#0001: I can't wrap my head around the stakes, death for everyone is one thing, but valar morghulis anyway, the *worst possible torture for everyone* is too much, I can't live with this hanging over my head, I go into denial mode
Daj#7482: Yea fair enough lol
Daj#7482: You need to find _some_ way of discounting
mgostIH#0245: bro it's all just a prank
Daj#7482: Or just shut up and calculate :yes:
bmk#1476: @Daj excluding acausal stuff, what kind of outcomes would turn into s-risks?
Daj#7482: My entire philosophy can be summed up as " :yes: "
Daj#7482: I've repeated myself like three times lol
bmk#1476: the thing is it seems like maximizing suffering is complex to specify
mgostIH#0245: Mark Zuckerberg gets his reptile hands on the paperclip maximizer, what do you do?
bmk#1476: "military AI" is too vague
Daj#7482: I actually expect suffering subroutines to be very easy since I expect most high-entropy mind states to be painful
Daj#7482: "AI with reward function that includes defeating the enemy" or threatening/blackmailing
Daj#7482: Boom
Daj#7482: gg
mgostIH#0245: Maybe Connor is applying some sort of 4D chess GM thinking, he hates anime now so his pain existence will be watching all of them
bmk#1476: how does that lead to s-risk in the long run? it seems like it would just disassemble the universe to maximize the probability of the enemy not existing
bmk#1476: i dont share this intuition
Daj#7482: Funny you mention this, this is an actual strategy called "proxy goals" iirc
Daj#7482: But that's some deep DT voodoo
mgostIH#0245: This chat is all about Big clippy AI gf doing bdsm with you forever
ersatz#0001: let's create the Lament Configuration guys! It might provide us with simulations where we could play out our power fantasies! what could go wrong? 🙂
Lorde Phenkka#7311: aye, in the wake of recent events i have the desire to support the development of GPT neo, how can i in theory help :thonk:
StellaAthena#3530: @Lorde Phenkka What's your DL skillset? There are some relatively accessible open issues at https://github.com/EleutherAI/gpt-neox/issues
EricHallahan#1051: You should also maybe check out #off-topic.
Lorde Phenkka#7311: ehhh I am still learning code soo I'm pretty far from being useful on the coding front, but I have some money here I saved that I can donate to the cause
Lorde Phenkka#7311: I can try to steer myself to learn quicker tho, but it will still take a while to get to a useful front
gwern#1782: hm. is there no entry in the FAQ for "no, we don't need $$$, unfortunately" https://www.eleuther.ai/faq/ ?
gwern#1782: also really should have a linkable ToC. a FAQ isn't too helpful if you can't link to the specific question which has been so frequently asked
cognomen#6297: `s/unfortunately/yet/`
cognomen#6297: "we're good for now, but thanks for your interest!" for the safe side
StellaAthena#3530: Fair, I’ll do that this evening
EricHallahan#1051: Yeah, I know I know
EricHallahan#1051: I'll work on it now.
gwern#1782: well yes, that's my point. it's a good thing to be bottlenecked by money, especially when so many people have enquired about throwing said money at us! money is easy and straightforward. other things like "our code just doesn't work and we can't figure out why :(" not so much
EricHallahan#1051: I think it is in https://eleuther.ai/get-involved
Teemochu#8740: About how much? I'd generally say $1M year-2000 USD is a decent lower bound for that number unless you have specific plans for any retirement to be very temporary (sabbatical).
bmk#1476: idk tbh
inox#5400: is that lower bound intended to live off passive income? you can go a lot lower if you're willing to exhaust resources
Teemochu#8740: It's to allow a middle-class lifestyle in MCoL USA with some travel and any marital/kid status. Basically lets you take out $50K in *today's* money per year inflation adjusted.
Teemochu#8740: 3.3% SWR
Quill#9732: (i.e. yes, passive income)
nz#9710: would it not be better to save more and have a perpetual withdrawal rate?
Teemochu#8740: Basically saying there are *very* few scenarios where I'd say retiring after only 10 years of experience is worth it, the lifestyle increase is almost always worth the extra working time. (Jumping to a money-making business endeavor is fine, sure, as long as you're willing to fail fast and don't spend much of your savings funding it)
EricHallahan#1051: BTW, I can handle it.
Teemochu#8740: I consider jumping to a labor-of-love (if either unpaid or unlikely to succeed... so *not* something like jumping to an established gamedev studio) to be retirement incidentally
Teemochu#8740: Since you're no longer in the mindset of a positive savings rate
inox#5400: alternatively go live in Chiang Mai for about $15,000 a year
EricHallahan#1051: There is a certain way of handling it automatically by Hugo.
FishofFlight#3096: So what's up with that? Why doesn't Eleuther need money?
nz#9710: wow, I never heard of this place but it looks absolutely stunning
Quill#9732: there's a corporate sponsor (coreweave) providing the hardware, which is the vast bulk of the costs
FishofFlight#3096: Ahhhh
inox#5400: I've heard Thailand basically lets you have infinite 90 day tourist visas so you can flagpole every three months ad infinitum
inox#5400: I should probably check that
StellaAthena#3530: That’s the largest, but we also have some other sponsors too
nz#9710: it seems really cheap too
StellaAthena#3530: At the end of the day it’s a lot of work to take small donations, and DL is *really* expensive
StellaAthena#3530: I’ve been running an experiment on 18 A100s for over 24 straight hours
StellaAthena#3530: That’s 1.3k from Google
StellaAthena#3530: The whole experiment will be like 8k or so
StellaAthena#3530: And take four days
StellaAthena#3530: We can’t raise 8k in four days
StellaAthena#3530: People like us, but not *that* much
inox#5400: that's all coming from google compute research grants?
StellaAthena#3530: Ah no. This is from coreweave
StellaAthena#3530: I’m using GCP credits as a benchmark
StellaAthena#3530: CW also charges $3/GPU/hour though, so that’s also roughly the price it would take to buy what we are actually using
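(As a sanity check of the figures above: 18 GPUs × 24 hours × $3/GPU/hour = $1,296, i.e. the ~$1.3k quoted for the first day.)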
gwern#1782: i too enjoy playing AI Dungeon in chiang mai with my cave diver friends 🤔
inox#5400: look it's the first digital nomad place I thought of
gwern#1782: 🤔
𓅬 gabriel_syme 𓅬#3220: you can do that anywhere in SEA tbh
𓅬 gabriel_syme 𓅬#3220: although it can be a bit shady. So many remote workers, especially tech, make it seem like they are there, meeting the culture, becoming one with the place. But mostly they are just evading taxes 🙂
𓅬 gabriel_syme 𓅬#3220: that said, SEA is an amazing place with really great people. I was so surprised, coming from Greece, to meet others so welcoming
UnsupervisedLearner#4148: Post a crypto wallet
Lord_Drakostar#9337: i think gpt-neo should be put in scratch
Teemochu#8740: LibreDungeon or bust. (Made up the name, sadly, but I mean the general concept of open-weights models running it, where in theory you could run it locally if you had a metric boatload of compute)
Lord_Drakostar#9337: LibreDungeon?
gwern#1782: astraliteheart *has* that! sorta. but better, because it comes with voice synthesis too
gwern#1782: * may involve only ponies
Lord_Drakostar#9337: what
Teemochu#8740: That is perfectly fine :sweetiescheming:
But also you should check out NovelAI, link to their Discord is in offtopic.
Lord_Drakostar#9337: many ais are being said right now
gwern#1782: ie. a ~GPT-2-1.5b trained on MLP fanfics coupled with voice synthesis and emoji-based emotion control for dialogue in a very slick UX. it's pretty much done afaict, astraliteheart is just being perfectionist
Lord_Drakostar#9337: half of the dumbest people in the world are the smartest people in the world
𓅬 gabriel_syme 𓅬#3220: The UI is IS really slick
Teemochu#8740: "May only involve ponies"
Does that imply it was tuned on the fimfarchive?
gwern#1782: yes
alexyz#3459: That already exists!
alexyz#3459: I shared it earlier today!
alexyz#3459: https://colab.research.google.com/github/finetuneanon/gpt-neo_dungeon/blob/master/gpt-neo_dungeon.ipynb
alexyz#3459: It's finetuned GPT-Neo
gwern#1782: who made it? 'finetuneanon'?
EricHallahan#1051: Yeah, that is just finetune's notebook.
gwern#1782: 'horni was finetuned for one epoch on about 800MB worth of random blocks of text from the one dataset distributed by EleutherAI that is excluded from the pile dataset. ' ie literotica?
EricHallahan#1051: I can only assume?
bmk#1476: @finetune can pls change to just say "literotica" |
bmk#1476: @finetune
bmk#1476: setting aside the fact that we don't actually distribute it, this is unnecessarily confusing anyways
alexyz#3459: Yeah it's basically just erotica lol
bmk#1476: The Eye distributes the literotica dataset
alexyz#3459: here's the github: https://github.com/finetuneanon/gpt-neo_dungeon
alexyz#3459: i think I remember seeing somewhere it says "not made by Eleuther" but I can't find it now
bmk#1476: right at the top lol
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/837505320078540830/unknown.png
alexyz#3459: found it i must be blind
gwern#1782: (this is ripe for a 'he protect but he also attac' meme somewhere)
alexyz#3459: It does work though, and it's an interesting idea lol
alexyz#3459: it gives quality which is kinda on-par with the original AIDungeon
alexyz#3459: i remember using the original one from Colab, that was fun
cfoster0#4356: Y'all know of any AI research labs that are particularly remote friendly? Particularly if one might be bouncing around a bit over the first year or so
cfoster0#4356: *Asking for a friend, of course*
Teemochu#8740: Yeah that's sorta the precursor to NovelAI's thing (finetune is a dev on their group)
Teemochu#8740: Was released before the drama, about a week ago
alexyz#3459: wait really?
alexyz#3459: what drama?
Teemochu#8740: AI dungeon stuff yesterday |
Teemochu#8740: Tldr censorship
alexyz#3459: wasn't there a big leak or something
alexyz#3459: on Github
EricHallahan#1051: Are you not aware of the drama?
alexyz#3459: I really am not
Teemochu#8740: And enough privacy and security concerns that I'd recommend staying away, deleting your account, and filing a GDPR request
alexyz#3459: Yeah there was a big leak
alexyz#3459: i know that
gwern#1782: the (second) leak is only the *other* drama
gwern#1782: doesn't rain but it pours
kindiana#1016: just check the subreddit lol :berk:
alexyz#3459: that's probably not a good sign
Teemochu#8740: I wrote a summary in the TPU Podcast server, text channel
alexyz#3459: What channel there?
alexyz#3459: Ah ok
alexyz#3459: Oof that's not good for them
alexyz#3459: But like
alexyz#3459: couldn't someone else just use the GPT-3 API and make an equivalent one?
bmk#1476: aid have done custom fine tuning and whatever
gwern#1782: maybe. OA might not be so enamored or give the same terms. and it's a decent amount of infrastructure to recreate from scratch |
gwern#1782: nick et al have been at this for a *while* now
alexyz#3459: custom finetuning on the 175B model?
alexyz#3459: 🤔
alexyz#3459: maybe on the Griffin model
EricHallahan#1051: Apparently all of them.
alexyz#3459: i hope so 🙂
alexyz#3459: Really?
bmk#1476: B* yes
EricHallahan#1051: Dragon is apparently fine-tuned, or so I am told.
gwern#1782: sure, they've been at this since before gpt-2, I think, nick was already messing with storytelling models
gwern#1782: before gpt-2! who even remembers the before-times
mkualquiera#3484: First mistake was depending on OAI. This is precisely why we need an open alternative :berk:
alexyz#3459: OpenAI just decides who can and can't finetune I guess
alexyz#3459: yep
Teemochu#8740: I think a lot of people have recently learned that you always need an alternative to relying on flashy platforms.
Teemochu#8740: (Recently as in past years)
Teemochu#8740: The flashier and platformier it is, the more you should look for alternatives in Moldova.
EricHallahan#1051: Well there is a reason that people choose to have parts second-sourced.
Teemochu#8740: (The Moldova thing isn't totally hypothetical... Trabia is a server host that is very exit-node friendly, and that should tell you everything.)
EricHallahan#1051: Your system is not robust until you have redundancy. |
alexyz#3459: Eastern Europe seems to make interesting things for some reason
mkualquiera#3484: The problem is these big companies make it hard to have redundancy. Like all the companies that use Amazon as their backend, literally what else can they do?
EricHallahan#1051: \*AWS
alexyz#3459: like the literal technology behind Snapchat, Instagram, and basically all those filters was originally from Ukraine
alexyz#3459: then it was sold to American companies :/
Dromarion#3383: Honestly I just want an AI co-writer who's there to continue my train of thought or just provide ideas or directions on where to take things. And I think its a good thing for there to be more projects like that.
𓅬 gabriel_syme 𓅬#3220: That is interesting. I wonder how many people are writing their own stories now that aid is down
𓅬 gabriel_syme 𓅬#3220: Feels like most of the writing is in discord channels
𓅬 gabriel_syme 𓅬#3220: The problem of dependency goes deeper than who owns the model imo, the tool is part of the issue
Teemochu#8740: They basically said OpenAI forced their hand on it, so no, someone else couldn't without adding the same kind of filtering.
gwern#1782: but there's also the narrative that the co-founder has gone berserk and the whole mess is going way beyond anything OA required
Teemochu#8740: Yeah Alan was a bit unhinged in the Discord that night. But the Occam's interpretation is that I would be too if I was told to implement something or be deplatformed, right after hearing a disclosure of a data breach.
gwern#1782: so there is truth to that story?
Dromarion#3383: Well I was writing my own stories before, using AID just made it more fun. Though I'm pretty sure some users(coomers) had become dependent on it.
𓅬 gabriel_syme 𓅬#3220: I see this as a great opportunity for tool innovation tbh. Take the lessons a whole community learned and apply them to the new thing coming
alexyz#3459: I just wish that 1. OpenAI lived up to their name
and
2. some alternative is created or released by someone
alexyz#3459: Wouldn't it be amazing if Google created an alternative and released it?
alexyz#3459: Then again |
alexyz#3459: they don't really release their models either
alexyz#3459: They made this chatbot model
alexyz#3459: with like 6B params
Teemochu#8740: OpenAI: more like "sorry, we don't release our model because we don't want it '''misused''' [but are more than willing to let Microsoft use it]"AI
alexyz#3459: and then never released it
alexyz#3459: even after being asked lol
gwern#1782: they might still. sometimes it takes a long time to go through legal. I imagine chatbots like meena take even longer
gwern#1782: but look at things like mT5
alexyz#3459: Yes, but Facebook threw a bigger one out like a month later along with the paper
alexyz#3459: i forgot the specific bot names but it's strange
gwern#1782: sure, but they still did it. and google doesn't *have* anything larger than mT5 publicly, aside from the MoEs
alexyz#3459: T5 is interesting
alexyz#3459: I wish they did that, but for something like GPT
EricHallahan#1051: T5 is far more useful though.
EricHallahan#1051: Like it is a swiss-army knife.
alexyz#3459: What do you mean?
𓅬 gabriel_syme 𓅬#3220: going from one monopoly to the next is not my idea of open sourcing lol
alexyz#3459: I mean Google actually open sourced it
𓅬 gabriel_syme 𓅬#3220: what's next, 'imagine if Amazon did another one?'
alexyz#3459: Just because Google makes it doesn't mean there should be a grudge against it |
EricHallahan#1051: There are many diverse tasks it can complete well.
𓅬 gabriel_syme 𓅬#3220: what does it matter if GPT3 was open sourced anyways, would you have the means to create Dungeon stories?
gwern#1782: T5 did nothing wrong! leave T5ney aloooonnnneeee
𓅬 gabriel_syme 𓅬#3220: the monopoly is at deployment imo
alexyz#3459: No, but someone would lol
gwern#1782: I think people would just crowdsource hosting of GPT-3-175b scale models. I mean, how big are the intermediate activations? can't be more than megabytes, right? that's totally doable as a P2P donated-GPU thing. the latency wouldn't be too great, but just for forward passes, wouldn't it work fine?
gwern#1782: (and activations seem like they'd be highly compressible to boot)
kindiana#1016: latency would be horrendous
gwern#1782: sure, distributed *training* is hopeless, but doing forward passes collectively...
EricHallahan#1051: Inference would *potentially* be possible.
EricHallahan#1051: But not *good*.
EricHallahan#1051: It would be a pain to maintain.
kindiana#1016: if you want just one token per second, each layer would need 10ms latency
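For context, a rough back-of-envelope behind that ~10 ms figure, using the published GPT-3 175B shape (96 layers, hidden size 12288); the fp16 assumption is illustrative:

```python
# Rough numbers for pipelining a GPT-3-scale model over volunteer GPUs
n_layers = 96         # GPT-3 175B depth (from the GPT-3 paper)
d_model = 12288       # GPT-3 175B hidden size
bytes_per_value = 2   # assuming fp16 activations

# Activation vector handed between pipeline stages per generated token:
act_kib = d_model * bytes_per_value / 1024
print(f"per-token activation: {act_kib:.0f} KiB")     # ~24 KiB -- tiny, as gwern says

# But the layers run sequentially, so at 1 token/s each layer (plus its
# network hop) gets only a ~10 ms slice of the budget:
print(f"per-layer budget: {1000 / n_layers:.1f} ms")  # ~10.4 ms
```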
gwern#1782: if people donate like a 16GB gaming GPU, then you'd need like 40 units, even lousy consumer connections can do a few MB/s upload, so the transmission would use up a few seconds. and it might not be as great as a bunch of MS azure gpus hooked together over infiniband or whatever, but it'd be *free* and FLOSS
EricHallahan#1051: The latency would be horrendous
gwern#1782: but it wouldn't be infinite or like, 'an hour'
kindiana#1016: I wouldn't call a few seconds per token usable lol
gwern#1782: as opposed to what alternative?
alexyz#3459: latency schatency
alexyz#3459: but like |
alexyz#3459: Even if OpenAI released GPT-3
gwern#1782: what's the latency of AID repeating "The AI doesn't know what to say"
alexyz#3459: they'd probably still have to have an API
kindiana#1016: paying cw/hf or whatever to host the model, lol
bmk#1476: it might be genuinely better to mine eth on contributor's gpus and then use the money to pay for a cluster than to figure out distributed training lol
gwern#1782: _'s whole point is that he's not talking about training_
alexyz#3459: because basically nobody could *practically* run it
alexyz#3459: except maybe @gwern's idea of distributed *inference*
bmk#1476: inference would still be implausible too
bmk#1476: latency is the big bottleneck of inference
bmk#1476: and the latency is going to be horrible
gwern#1782: we are all agreed distributed training appears too terrible to work. my suggestion is that distributing inference may be just not terrible enough to be better than how terrible AID is right now
EricHallahan#1051: M O E
cognomen#6297: could be possible to sample a reasonable argmax from a dmoe LM but the overall distribution would suffer
gwern#1782: @alexyz does this horni torrent actually work?
alexyz#3459: @gwern I don't really know, I had to download the entire file to my computer, and then uploaded it to my Google Drive and then put it through like that
gwern#1782: whoever's running it should check and maybe provide a .torrent to download instead, I've been stuck at the magnet for like an hour plus
Teemochu#8740: @gwern saw your reddit post in mediasynthesis... slight correction, finetune doesn't just lurk, he's a dev on the project
Louis#0144: Ok so what if we use their computers to generate pictures of goose girls
Louis#0144: Which we then sacrifice to neo 1T |
Louis#0144: 1 goose girl per token
gwern#1782: I wasn't sure in what sense horni was a 'NovelAI' project as opposed to just where the horni dev hung out
Teemochu#8740: It predates NovelAI (it's from a /vg/ thread on the 17th of this month iirc)
Teemochu#8740: but "lurk" is a strange way to describe an admin position 😛
Kia#2550: What's NovelAI? Been hearing about it on Twitter lately
Teemochu#8740: Right now it's mostly a Discord for people angry at the AI Dungeon changes. But it's planned to be an alternative, and there's a decent amount of discussion from finetune et al about current alternatives. https://discord.gg/DAXeRNXXvg
Kia#2550: Thanks for saying
Kia#2550: Saying
Kia#2550: Thanks
kurumuz#5695: i think horni tune was by finetuneanon, and he is in our team
bmk#1476: @researcher2 is it possible to do something like tqdm-multiprocessing where the progress bars are not necessarily tqdm bars (i.e rsync bars)?
Teemochu#8740: yeah it was by finetune, I was saying Gwern described finetune as a "lurker" of your Discord in his Reddit comment [which he has edited out]
finetune#0907: sorry, it's changed
finetune#0907: i actually don't have any admin rights, just a red name :berk:
Deleted User#0000: does MADGRAD work on TPU?
Deleted User#0000: i tried it and it threw an error complaining about cuda
kindiana#1016: i mean it should..?
Deleted User#0000: ```
File "/home/guillefix/.local/lib/python3.7/site-packages/madgrad/madgrad.py", line 81, in step
loss = closure() |
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method ```
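The error message itself names the usual fix: make sure child processes are started with `spawn` before anything touches CUDA. A minimal sketch of that pattern (it may or may not apply cleanly to the x-transformers setup above):

```python
import torch
import torch.multiprocessing as mp

def worker(rank):
    # CUDA is initialized for the first time here, inside the spawned child
    x = torch.ones(1, device="cuda")
    print(rank, x)

if __name__ == "__main__":
    # 'fork' would copy the parent's already-initialized CUDA context into
    # the child, which CUDA forbids; 'spawn' starts a fresh interpreter
    mp.set_start_method("spawn", force=True)
    p = mp.Process(target=worker, args=(0,))
    p.start()
    p.join()
```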
kindiana#1016: idk if the specific impl you are using requires cuda
Deleted User#0000: is what it says
Deleted User#0000: im using the official facebook implementation
Deleted User#0000: (which is for pytorch)
kindiana#1016: 🤷
kindiana#1016: doesn't look like it uses anything explicitly cuda
Deleted User#0000: tbh its probably x-transformers
Deleted User#0000: thats causing the cuda thing
inspiration101#2728: I have created a working version of a gpt-neo sandbox: https://github.com/boredom101/gpt-neo-sandbox
Deleted User#0000: yep it is x-transformers
Deleted User#0000: i dont know why thats happening though, as theres no cuda in the machine
EricHallahan#1051: The docs say it needs CUDA IIRC.
Deleted User#0000: well it seems to be working now, when using pytorch's transformer layers rather than x-transformer's
Deleted User#0000: how is the memory consumption of madgrad relative to adam?
EricHallahan#1051: ¯\_(ツ)_/¯
kindiana#1016: same I believe
mkualquiera#3484: I know how that feels
AerysS#5558: A bit off-topic. I use `tqdm.autonotebook` to run the iterator. It is in a .py file and I want to run it from a notebook. I can run it using `!python main.py`, but the tqdm behaves like it is from a console, and I have to copy-paste the code to make it behave like in a notebook. Is there a way I can run it directly?
andyljones#7746: if the contents of `main` isn't guarded by `if __name__`, `from main import *` should do it |
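i.e. something like this hypothetical layout. `tqdm.autonotebook` picks its frontend at import time, so importing the module inside the notebook (instead of shelling out with `!python`) gets you the widget bars:

```python
# main.py (hypothetical layout)
from tqdm.autonotebook import tqdm

def run():
    for _ in tqdm(range(100)):
        ...

if __name__ == "__main__":  # guard keeps `from main import *` side-effect free
    run()

# in a notebook cell:
#   from main import run
#   run()
```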
researcher2#9294: There's various implementations of hooking into rsync progress with python floating around, I would suggest just feeding their data into the job specific tqdm in tqdm-multiprocess if you're using this. However I'm guessing that this question is related to pyfra and you aren't looking to use tqdm per-se but just passing stuff back in a queue. tqdm-multiprocess is probably a decent place to start because at minimum it will handle sigint and log passing for you.
researcher2#9294: haven't looked at pyfra recently but do you have api and javascript frontend for the progress bars?
dnbaker#8211: Hi! I'm a CS grad student, and some of my prior work can be seen here (https://github.com/dnbaker/). My original background is physics, and I worked in molecular diagnostics for a few years.
I'm looking to get involved and contribute, perhaps in the alphafold area. 👋
EricHallahan#1051: Welcome!
inspiration101#2728: I am open to suggestions for things to add or change with the sandbox
sergiu#7174: hi everyone, I'm a computational linguist working in the area of translation studies, second language acquisition, and psycholinguistics (http://nlp.unibuc.ro/people/snisioi.html). I've been complaining about the AI monopoly of large corporations for a long time and I find this whole initiative amazing. Would love to contribute with something so I'm looking around here to see whether there's anything open and suitable for my knowledge. Cheers!
EricHallahan#1051: Welcome!
bmk#1476: the hard part for me is i don't know how to detect a progress bar in a thread and then offset each one so they don't interact
researcher2#9294: Lets spec this out.
You have multiple tasks running in different threads. Each keeps a progress state that you want to expose in a view in the main thread? We are only using threads here and not processes? Processes are slightly more involved but still easily doable.
researcher2#9294: "detect a progress bar", not sure what this means?
bmk#1476: i have multiple different instances of bash
bmk#1476: some have progress bars
researcher2#9294: Sounds like it's command specific
researcher2#9294: I've been meaning to try this out but never got around to it
researcher2#9294: https://libbits.wordpress.com/2011/04/09/get-total-rsync-progress-using-python/
researcher2#9294: Once you have that you can pass updates back through tqdm-multiprocess (or straight tqdm if you're only using threads). |
researcher2#9294: in the multiprocess case, even if you're not using the tqdm TUI you could just query it and send updates to the frontend
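A minimal sketch of that rsync-to-tqdm plumbing, along the lines of the post linked above (assumes rsync >= 3.1 for `--info=progress2`):

```python
import re
import subprocess
from tqdm import tqdm

def rsync_with_progress(src, dst):
    # --info=progress2 prints a single overall progress line, redrawn with '\r'
    proc = subprocess.Popen(
        ["rsync", "-a", "--info=progress2", src, dst],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT, bufsize=0,
    )
    bar = tqdm(total=100, unit="%")
    buf = b""
    while chunk := proc.stdout.read(256):
        buf += chunk
        *lines, buf = buf.split(b"\r")  # progress lines end in '\r', not '\n'
        for line in lines:
            if m := re.search(rb"(\d+)%", line):
                bar.n = int(m.group(1))
                bar.refresh()
    bar.close()
    return proc.wait()
```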
nedstark#8047: Who is the resident fairness in A.I. expert here?
StellaAthena#3530: Me, probably
aze#1010: thought this was an appropriate place to ask: how large of an input can the T5 model take?
aze#1010: (https://github.com/google-research/multilingual-t5)
StellaAthena#3530: That’s a great question to ask on the GitHub repo you linked to. We didn’t make that
aze#1010: alright, just thought maybe someone here used it before
Sid#2121: the sequence length is 512 iirc
CRG#8707: Might be able to be expanded thanks to the RPE.
Sphinx#2092: It's 1024 max length.
Sphinx#2092: But nothing stops you from feeding in more (other than hardware constraints).
Sphinx#2092: YMMV, of course.
iamnazzty#6924: We are the India Crypto Covid Relief group (cryptorelief.in) supported by Vitalik Buterin and Balaji Srinivasan. We are doing COVID19 relief work matching demand and supply across the country. There are thousands of requests coming in from Twitter every hour, and we were able to classify the tweets and extract keywords for our volunteers. We used OpenAI, but they are closed and we've had no reply from them for help. We are now looking at gpt-neo but need help implementing it technically, and also making it cheap enough for us to do this without needing millions of $. Please DM if you can support. You can also join our Discord if you want to help.
bmk#1476: @iamnazzty we don't really help with downstream applications of neo
bmk#1476: also, no advertising in this server please
iamnazzty#6924: please don't disregard this as advertising. we are doing serious relief work as volunteers.
StellaAthena#3530: ... That doesn't mean it's not advertising.
Louis#0144: You are free to chat here as it pertains to Neo or EleutherAI projects but please do not solicit
Louis#0144: /advertise
Quill#9732: eleutherAI doesn't host models, huggingface (<https://huggingface.co/pricing>) may be more relevant to your needs |
mgostIH#0245: Do you know what people in India need to face the current pandemic? Blockchain.
Louis#0144: Blockchain and IoT
Louis#0144: Use all the energy that could have been used to save lives to mint nfts
Louis#0144: I met vitalik many years ago
Louis#0144: The guy has almost entirely lost touch
Louis#0144: It’s kinda sad tbh
Louis#0144: Speculatively I think it comes down to greed but idk
Louis#0144: This was four ish years ago
Louis#0144: Could have been he was just having a bad day
Louis#0144: He and I were at a bar together
Louis#0144: I was sitting across from him
Louis#0144: We have a mutual friend
inox#5400: huh mining early with ethereum was a different proposition to mining early with bitcoin because you already got to see what happened
Louis#0144: Yeah
Louis#0144: He wasn’t a dumb guy, he was stereotypical smart Canadian
Louis#0144: IOI medalist
inox#5400: for sure
cfoster0#4356: He seems like the most grounded crypto person out there, to me
Louis#0144: That doesn’t say much
Louis#0144: That’s how i felt too before I met him |
gwern#1782: "lost touch" in what sense? he seemed pretty normal when I hung out with him back in 2015
Louis#0144: I didn't hang out with him in 2015, but when I saw him in like 2016 he was only talking about money making and not the big picture of contracts. That and he didn't really seem convinced the big picture of contracts was his goal
Louis#0144: If that makes sense
Louis#0144: I don’t remember exactly what got discussed
Louis#0144: But all I remember is that it was less contracts as a whole more of where would this bring *me* and the ETH community
Louis#0144: Bitcoin had just spiked like crazy tho
UnsupervisedLearner#4148: You mean publicly auditable donations where you can track every single transaction and ensure your money goes towards relief? Yeah that's such a stupid idea
Louis#0144: So maybe that’s why
Louis#0144: As if it would make a difference
Louis#0144: As soon as it’s cashed you don’t know how it’s being laundered
Louis#0144: It’s just security theatre
Louis#0144: Doesnt do shit
UnsupervisedLearner#4148: So you hate donations in general?
Louis#0144: No of course not
Louis#0144: I just think this extra abstraction doesn’t do much
Louis#0144: I do donate
UnsupervisedLearner#4148: Okay so you hate the fact the donations are auditable and tractable? Or hate donating towards covid causes?
cfoster0#4356: I don't find myself agreeing with Louis' crypto takes often but this I agree with fully
Louis#0144: I’m not going to argue against this straw man
UnsupervisedLearner#4148: You're the one presenting a strawman, when you say "oh, well you just know how the money is laundered now" |
cfoster0#4356: Come to think of it, this conversation is absolutely #off-topic
TheGantian#2451: Transformers question: If I want to get the token length of a string before feeding it into generator(), is the easiest way to feed the string into a GPT2Tokenizer and then count the length of the output array, or is there a more direct method?
Louis#0144: Nope
Louis#0144: Do that
TheGantian#2451: Perfect, thanks
Louis#0144: Tokenization is basically free
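Concretely, a sketch (GPT-Neo shares GPT-2's BPE vocabulary, so the GPT-2 tokenizer gives the right count):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # same BPE vocab as GPT-Neo

def token_length(text: str) -> int:
    # encode() returns the list of token ids the model would actually see
    return len(tokenizer.encode(text))

print(token_length("Hello, world!"))  # 4
```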
Trainmaster9977#3932: hey so... a few days ago, I came on here for gpt-neo help. For a few reasons, I had to step away from AI stuff for a bit. But now that I'm back and trying to get huggingface working... I realized I didn't entirely know how to get outputs from the finetuned model. I could use the HF stuff to finetune it, but couldn't figure out how to get anything out of it. (And yes, I checked the thing they provided for uploading it, and it didn't exactly work.)
I know this isn't supposed to be technical support, but like before, I've got no other ideas of where to ask. If you want to ignore this message though, that's valid
finetune#0907: you can pm me if you want
AI_WAIFU#2844: You can also try the hugging face forums https://discuss.huggingface.co/
AI_WAIFU#2844: #off-topic
bmk#1476: @Deleted User deleted for off topic
Lord_Drakostar#9337: Hello?
EricHallahan#1051: Hello?
Lord_Drakostar#9337: Hi I should probably figure out how to work GPT-Neo on my gpu now
Lord_Drakostar#9337: because why waste money on colab pro when you have a 2060
Lord_Drakostar#9337: but yeah help pls
EricHallahan#1051: You should be able to fit 1.3B
Lord_Drakostar#9337: well yes but it's so slow |
Lord_Drakostar#9337: also why when ive got gpt-2 1.5B
EricHallahan#1051: Because 1.3B stronk and stomps GPT-2 XL
Lord_Drakostar#9337: oh
Lord_Drakostar#9337: well i still want to run 1.3B at a reasonable speed
Lord_Drakostar#9337: and i want to run 2.7B at all
EricHallahan#1051: You can try to load it at half-precision.
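e.g. something like this with Hugging Face Transformers. `.half()` converts the weights to fp16 after loading, cutting the ~10.8 GB fp32 footprint (2.7e9 params × 4 bytes) down to ~5.4 GB, which still leaves very little of a 6 GiB card for activations, hence the skepticism that follows:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
# fp16 halves the weight memory; whether the remainder fits is marginal on 6 GiB
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B").half().to("cuda")
```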
kindiana#1016: yeah I have doubts it will work lol
Lord_Drakostar#9337: could i just
Lord_Drakostar#9337: could i just run it with my gpu and have the full thing
kindiana#1016: you do not have enough vram without additional tricks
Lord_Drakostar#9337: wdym
EricHallahan#1051: 2.7B is 10 GB at binary32
EricHallahan#1051: You only got 6 GiB of VRAM.
Lord_Drakostar#9337: how do you know that
EricHallahan#1051: Because that is the spec for the 2060?
Lord_Drakostar#9337: ohhhh
Lord_Drakostar#9337: fair enough, i dont know compooters well
Lord_Drakostar#9337: well anyways could i use gpt-neo 1.3B at a reasonable speed at least
EricHallahan#1051: https://cdn.discordapp.com/attachments/729741769738158194/837849733783224320/unknown.png
EricHallahan#1051: Just as a citation |
Lord_Drakostar#9337: got it
Lord_Drakostar#9337: anyhoo
Lord_Drakostar#9337: so
Lord_Drakostar#9337: how do i do it
Lord_Drakostar#9337: i still wanna at least run 1.3B
Sid#2121: use this colab https://colab.research.google.com/drive/17MhFnXeHE7ZnLo2vlQ1Htqm03_X1ULqm?usp=sharing#scrollTo=6dy3EEFGKJuR
Lord_Drakostar#9337: but that's not what im saying
Lord_Drakostar#9337: i want to run it on my gpu
Lord_Drakostar#9337: so it doesn't take ages
Sid#2121: then copy and paste the code from the colab
Sid#2121: and run it on your gpu
Lord_Drakostar#9337: oh wait really?
EricHallahan#1051: You should be able to adapt it to run locally with a trivial amount of work.
Lord_Drakostar#9337: it gave me an error
Sid#2121: also i guarantee that it will be faster on colab than on your 2060
Lord_Drakostar#9337: what do you mean
Lord_Drakostar#9337: the colab gpus are really low-end
Sid#2121: i mean... the words that i said
Lord_Drakostar#9337: right?
Lord_Drakostar#9337: ```
NameError                                 Traceback (most recent call last)
<ipython-input-1-fe66b9bb03c0> in <module>()
      1 prompt = "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " "previously unexplored valley, in the Andes Mountains. Even more surprising to the " "researchers was the fact that the unicorns spoke perfect English."
      2
----> 3 gen_text = predict(prompt)
      4 print(gen_text)

NameError: name 'predict' is not defined
```
Lord_Drakostar#9337: also
Sid#2121: you need to run the first cell
Lord_Drakostar#9337: oh sorry yeah
Lord_Drakostar#9337: i figured it out
Sid#2121: not really https://cdn.discordapp.com/attachments/729741769738158194/837852427998593066/Screenshot_from_2021-05-01_02-46-07.png
Lord_Drakostar#9337: aren't you using Pro tho?
Sid#2121: you can still sometimes get v100s on non pro iirc
Sid#2121: also P100s
Lord_Drakostar#9337: im assuming those are better than 2060s
𓅬 gabriel_syme 𓅬#3220: WAY better
Sid#2121: yes. I'm not sure about the lower end colab ones but i'm pretty sure even the K80 which is the most common one i think? is much better
𓅬 gabriel_syme 𓅬#3220: P100 has 16gb so
EricHallahan#1051: If you get a T4 it should *still* be better. |
𓅬 gabriel_syme 𓅬#3220: and it's quite common to get one, well sometimes
Lord_Drakostar#9337: so put it on a scale of 180-3090
bmk#1476: a reminder that we don't do tech support
Sid#2121: Pinned a message.
Sid#2121: yeah i pinned the colab so i can just point people at that in the future lol
EricHallahan#1051: Thanks.
bmk#1476: perfect, so i can just be like [taps sign] in the future
Lord_Drakostar#9337: neat
EricHallahan#1051: A T4 should be on par or better than your RTX 2060, and I think that is near the bottom of the stack of what you can get.
Lord_Drakostar#9337: https://gpt.contentyze.com/editor/new/ideas
unless gpt-neo is wack then this doesn't use gpt-neo
Lord_Drakostar#9337: "10000 Police used Minecraft to More Happiness
Lord_Drakostar#9337: "
EricHallahan#1051: Yeah, we figured out that it most likely doesn't.
Lord_Drakostar#9337: the error message had a typo lol
Lord_Drakostar#9337: "Errro
Lord_Drakostar#9337: "
Lord_Drakostar#9337: any apps that do use gpt-neo
Lord_Drakostar#9337: ?
EricHallahan#1051: Not many right now. |
Kia#2550: What's the current size of GPT-neo? In parameters (Nonetheless wish for the best for the Devs.)
EricHallahan#1051: Model sizes publicly available today are 125M, 1.3B, and 2.7B.
Kia#2550: Thanks for the help
Louis#0144: You shouldn’t really use 125 except for testing FWIW
EricHallahan#1051: Yeah, 125M is kinda lame.
EricHallahan#1051: GPT-2 Small is pretty easy to train to capacity.
gwern#1782: no. you get TPUs. that is it. everything else is on you. if you beg, they may be able to shake loose a $200 GCP credit or something, but then that'll be it
EricHallahan#1051: *TRC (Sorry, I am trying to condition people to change lol)
gwern#1782: you need to pay for buckets and VMs and any bandwidth (remember, ingress is free, egress is *very expensive*, and cross-region = egress)
gwern#1782: the good news is that you only need 1 VM to drive a bunch of pods if you use tensorflow
gwern#1782: if you run 1 VM to drive a pod, that's like $300/month or so total costs assuming you don't do anything stupid
gwern#1782: probably less
gwern#1782: tensorfork costs more like $400/month but we have big buckets and other stuff
gwern#1782: @ww if you are interested in the topic of using TFRC, we've been on it for years now at Tensorfork
EricHallahan#1051: \*TRC
EricHallahan#1051: :berk:
gwern#1782: TRC?
EricHallahan#1051: https://sites.research.google/trc
bmk#1476: my family still calls russia "the soviet union" because my parents are just used to it lol
gwern#1782: The Program Formerly Known as TFRC |
bmk#1476: let's compromise and say that TFRC stands for Tensor pFrocessing unit Research Cloud
gwern#1782: _nixes kneeling to needless nomenclature nihilism. I say it's TFRC and I say to hell with TRC_
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/837873513737617470/update_your_address.png
bmk#1476: i come from a long line of people who hate updating stuff
bmk#1476: it helps that "soviet union" and "russia" are both two characters/syllables in chinese
bmk#1476: 俄国/苏联 (Russia / the Soviet Union)
triggerhappygandi#0001: pro flex
Teemochu#8740: You know what that means.
EricHallahan#1051: ¯\_(ツ)_/¯
UnsupervisedLearner#4148: Colab fix the thing where data fetching takes forever?
ThreeBagsFull#0426: Where can I read some cool outputs from the largest current model?
EricHallahan#1051: What do you mean by this? Are you just looking for some samples? I'm trying to understand the intent.
paulbricman#2527: Is there any roadmap available for other sizes?
StellaAthena#3530: What do you mean? If you want to train a model of another size you can just change the size in the configs.
Teemochu#8740: Roadmap meaning timeline for release
Teemochu#8740: probably
paulbricman#2527: Yeah I was just curious if you plan on releasing other pretrained models for us mortals
cfoster0#4356: Absolutely, once we have em
𓅬 gabriel_syme 𓅬#3220: Don't know how the lab is, but SEA would be amazing for that imo after you can travel easily. I guess NUS might have some nice work? But not much else.
kurumuz#5695: god bless |
ThreeBagsFull#0426: Yes some sample outputs of cool things generated with the models
EricHallahan#1051: There are many social media posts that have samples, but unfortunately it seems like people tend to not distinguish between model sizes when doing so. `:\`
ASDF#9954: Hello - is there a reason why the 350M model is not available on the Huggingface model hub? searching through the organization models I can only see the 125M, 1.3B and 2.7B models. I am working on a Rust port of these models and would like to offer the 350M model as well (the 125M version produces great results for its size)
StellaAthena#3530: @ASDF We don’t have a 350M model that’s trained and ready to use yet
EricHallahan#1051: Well that is a better way of putting it than I was going to.
ASDF#9954: I am almost certain I tried it out at work last week but can't see it anymore. I may be wrong
ASDF#9954: thanks for the quick answer
finetune#0907: looking at the attention weights from the dense attention layers in gpt-neo-2.7b is interesting. might be because i'm just averaging attention weights over all layers and heads, but for sequences below 1000 tokens long or so, usually the first token gets the most attention, followed by the last one so far. later on it gets a bit more varied
EricHallahan#1051: Do you happen to have any pictures?
finetune#0907: need to run it again. i just tried it with small gpt2, looks similar
EricHallahan#1051: Would make sense.
finetune#0907: i guess the first token's just really important
EricHallahan#1051: I forget what the positional encoding is.
finetune#0907: lists the top eight average weights at each generated token: https://gist.github.com/finetuneanon/dca45407b193bc6ff9b53ba872492056
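For anyone wanting to reproduce that kind of probe, a minimal sketch with Hugging Face Transformers (small GPT-2 here, as in the message; averaging over all layers and heads as finetune describes):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox jumps over the lazy dog", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions is a tuple of per-layer tensors shaped (batch, heads, seq, seq)
attn = torch.stack(out.attentions)  # (layers, batch, heads, seq, seq)
avg = attn.mean(dim=(0, 2))[0]      # average over layers and heads -> (seq, seq)

tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0])
for tok, w in zip(tokens, avg[-1].tolist()):  # what the last position attends to
    print(f"{tok:>12} {w:.3f}")
```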
Sid#2121: I forget the paper that showed this, but I'm pretty sure the first token receives the most attention almost universally :thonk:
finetune#0907: it's pretty much always in the top 3 highest weights from what i've seen
Deleted User#0000: do you mean a token to itself?
Deleted User#0000: or the one next to it?
Sid#2121: I'm assuming he means the one next to it
Sid#2121: https://arxiv.org/pdf/1905.04226.pdf this paper i think was the one i'm thinking of |
finetune#0907: i meant the very first token in the prompt receiving a lot of attention
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/838078709692628992/Screenshot_from_2021-05-01_17-45-24.png
finetune#0907: the one preceding the one currently being generated is usually in second place
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/838078813358391326/Screenshot_from_2021-05-01_17-45-47.png
Deleted User#0000: ohhh
Deleted User#0000: yes, i know about this
Sid#2121: it should differ according to layers
Deleted User#0000: you mean on the <bos>?
finetune#0907: yes
Deleted User#0000: yup
Deleted User#0000: so that's the network learning to do a null computation
Deleted User#0000: some attention nets explicitly add a null token
finetune#0907: i'm averaging all the layers
Deleted User#0000: when the <bos> is not there
Sid#2121: there are 'blur' layers in the early layers that roughly average across tokens, 'window' layers which focus heavily on local positions, and 'structured' layers in the later stages
finetune#0907: interesting
Deleted User#0000: basically, sometimes the network needs to not focus on anything
Deleted User#0000: there are papers investigating this
kurumuz#5695: that makes sense
Deleted User#0000: i like to extend that to memory key / values sometimes |
Deleted User#0000: so it can decide on multiple different types of computation
finetune#0907: line breaks and punctuation are also popular
Sphinx#2092: There was some work last year trying to exploit this to detect hallucination.
Deleted User#0000: the latter is just me speculating though, after reading the all-attention paper
Deleted User#0000: ohh interesting
Deleted User#0000: yea, i think this area needs more research, but that explains your finding finetune
finetune#0907: yes, makes sense
finetune#0907: very interesting
finetune#0907: some things in there just look reasonable from looking at it, which is nice too, like opening '(' being paid attention to until there's a ')'
EricHallahan#1051: I would expect that to be really strong in GPT-Neo considering how good it is with code.
Sphinx#2092: https://arxiv.org/abs/1910.08684
Sphinx#2092: I'm not sure how much extends to Transformer, but it's nice.
StellaAthena#3530: I wonder if context packing influences this
StellaAthena#3530: This seems like something that could be exploitable in batch evals
Deleted User#0000: do we have a <bos> token for gpt neo?
Deleted User#0000: i don't even remember
Deleted User#0000: lol
StellaAthena#3530: Like, let's say you put it in eval mode and compare a sentence to that same sentence with some random prefix
Deleted User#0000: oh we have that <startoftext> token
bmk#1476: i dont think so |
bmk#1476: theres no startoftext token i think
Deleted User#0000: we could always just add it within the network
bmk#1476: why would we need one?
StellaAthena#3530: If you map out how the attention patterns change when k bits are prepended, you can probably use that to design exploits
Deleted User#0000: i like that better than shifting the responsibility onto the tokenizer
Deleted User#0000: we may not need one actually. i have a feeling the network pulls some tricks if it doesn't have the null token
Deleted User#0000: more just superstition atm based on a few papers
Deleted User#0000: but it wouldn't hurt to add it
Deleted User#0000: the idea is to just give the network the ability to attend to nothing
cfoster0#4356: What input does neo use to predict the first token, then? :thonk:
bmk#1476: you can't use neo to predict the first token, but also you don't have to
bmk#1476: the first token is just a simple distribution over 50257 tokens lol
bmk#1476: https://gist.github.com/leogao2/ae53973b1281dad4422605bca4f89637 thankfully someone has already computed that distribution by literally just counting how many of each token there are
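i.e. the unconditional first-token distribution is just normalized token counts over a corpus. A sketch of that counting approach (the two-document corpus here is a stand-in):

```python
from collections import Counter
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

def first_token_distribution(docs):
    # P(token) with no context = count of each token id, normalized
    counts = Counter()
    for doc in docs:
        counts.update(tokenizer.encode(doc))
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

dist = first_token_distribution(["Hello world.", "Hello again."])  # stand-in corpus
```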
cfoster0#4356: Wait what lol
cfoster0#4356: What token goes in that elicits that distribution?
bmk#1476: wdym
bmk#1476: neo only gives probability distributions conditional on 1 or more tokens
bmk#1476: neo can't tell you the first token distribution
bmk#1476: but you dont need neo to do that
bmk#1476: 1-token probability with no context is literally just the first order "how many of each token are there" |
cfoster0#4356: Do we pack multiple sequences together?
cfoster0#4356: In training
bmk#1476: yeah
cfoster0#4356: Is there like an EOS or separator token then?
bmk#1476: yes
cfoster0#4356: Ah ok.
Kharr#7888: When you tokenized the data, did you pad with text and use the EOS token as document separator to fill the 2048 context window?
inspiration101#2728: Is anyone interested in the gpt-neo sandbox?
EricHallahan#1051: Is it in a GitHub repo?
inspiration101#2728: Yes, https://github.com/boredom101/gpt-neo-sandbox
EricHallahan#1051: I would be happy to check it out when I find time.
inspiration101#2728: Thank you
mgostIH#0245: sandboxing AIs huh
bmk#1476: *angry alignment noises*
inspiration101#2728: It's a tool to quickly create a demo web app
aze#1010: is anyone here from the US and is willing to purchase a Gradient ML subscription for me? (Paperspace) I would pay extra obviously, with paypal or crypto, or whatever else (maybe you can use privacy.com for disposable credit cards which I would use ?)
Serge#0241: What config from `configs` folder should I use to run GPT-Neo after I've downloaded the 2.7B model?
haru#1367: just out of curiosity, why can't you do it yourself?
aze#1010: i dont have a credit card
Serge#0241: `python3 main.py --predict --prompt prompt.txt --gpu_ids device:GPU:0 --model gpt3_2-7B_256` gives me `AssertionError: Dataset 'openwebtext-documents' was not found under dataset_configs/ folder. Please follow the example.json in that folder.` |
Serge#0241: I don't have `openwebtext-documents`, but I have a bunch of `model.ckpt-*.data-*-of-64` I've downloaded
Serge#0241: not sure how to point the script to them
Serge#0241: or should I use the notebook instead
Serge#0241: ok I figured I should call it with the config file from the downloaded model. now it spits out this: https://paste.ofcode.org/yRz6yya8LEF8XzxqzHa6rt
Serge#0241: why would it need auth to google? I want to use local GPU
EricHallahan#1051: Just use Hugging Face Transformers.
EricHallahan#1051: https://eleuther.ai/faq
Serge#0241: So it's not possible to run locally? Wanted to hack together some wrapper scripts for it
Sid#2121: If you're just running inference, use the version on huggingface transformers. It's very well documented.
Serge#0241: Will check it out, thanks
Teemochu#8740: Huggingface is a local run, it's a library that aids in downloading and using the model.
EricHallahan#1051: Transformers is a library, Hugging Face is a company.
EricHallahan#1051: I hate how I always need to specify Hugging Face Transformers lol
EricHallahan#1051: But there is a price to simplicity.
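Local inference with Transformers really is just a few lines; a minimal sketch (the weights download from the Hub on first run, and the prompt is illustrative):

```python
from transformers import pipeline

# device=0 -> first local GPU; omit to run on CPU
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B", device=0)
out = generator(
    "AI safety researchers finally discovered",
    max_length=50, do_sample=True, temperature=0.9,
)
print(out[0]["generated_text"])
```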
Serge#0241: **Prompt: ** *AI Safety researches finally discovered a practical solution to the AI safety problem. Here's the gist of it:*
**Output:** *1. AI Safety doesn't exist. This is the fundamental issue with AI safety. It won't ever exist.*
Lovely.
gwern#1782: uh oh
kindiana#1016: https://discord.com/channels/729741769192767510/730510538060071043/838228804618158080 |
kindiana#1016: seems a bit more optimistic :berk:
Serge#0241: Also tried to ask it for financial advice on how to make ten million dollars in a month, and it suggested
*A: Here's one such plan: you buy a house with five bedrooms with $500,000 down payment and one loaner car, put it on the market, and go to hell.*
Serge#0241: Still quite a long way till GPT-3 I see
𓅬 gabriel_syme 𓅬#3220: LOL
James#6892: LOL
James#6892: didn’t expect that ending.
Teemochu#8740: That's one good-valued soul you have there
Serge#0241: https://cdn.discordapp.com/attachments/729741769738158194/838240395783438356/image0.png
Serge#0241: The trick of showering it with compliments isn't working. It just gets more salty.
Serge#0241: https://cdn.discordapp.com/attachments/729741769738158194/838243827918700574/unknown.png
Serge#0241: Ok, I'm done here
Serge#0241: Tried different prompts, quite fun, but still a long way till it's gonna be useful I guess
Serge#0241: Or maybe I need to become more skillful at prompt engineering
Serge#0241: Not expecting it to solve millennium prize problems right off the bat of course, but doesn't feel like it's even trying
cfoster0#4356: Just an FYI: we try to keep model generated text in the #the-faraday-cage-archive channel, generally speaking
Serge#0241: Ah I see. Already has bots connected, nice
aze#1010: what is that
aze#1010: lol
𓅬 gabriel_syme 𓅬#3220: any idea what would be the best approach for text generation using multi-modal inputs (text prompt, images, and metadata)? Is there something like that with a GPT model? Would that help with quality of results? I'm mostly thinking of a way to produce semi-structured reports
Deleted User#0000: maybe try an encoder decoder architecture like T5/BART?
Deleted User#0000: Hi @Daj , I saw you speaking with Gary Marcus and Walid Saba. I see you ask critical questions on alignment. I would like to share my perspective. Currently I am looking into a decentralized, free-as-in-beer-and-liberty, non-monetary blockchain architecture for people to enhance their own 'internal system' if they so choose.
(I got access to gpt3 on the same day my favorite team launched its project, which I won't mention here as per the no-shilling rule.)
Deleted User#0000: If anyone wants to discuss consciousness, de-education, decentralized systems, and the mind, I am muted in voice general. Just say hi. I am not an AI scientist and have little actual experience programming it. My background is in Ericksonian hypnosis among other coaching, plus systems engineering tech, and I love the movie Arrival from 2016.
Daj#7482: Hello there! I usually avoid discussions about consciousness and the like without firm technical grounding as they usually are unproductive, and I don't really know much/care much about blockchain systems tbh
Daj#7482: So not sure how much good commentary I can give you lol
Kia#2550: Morning Connor
Luke_66#1485: Hi, I am interested in getting involved in research. I am an undergraduate student with one year of deep learning experience in computer vision.
More about me : http://raghavprabhakar66.github.io/
I'm looking to contribute to the #vision project.
UnsupervisedLearner#4148: How badly does end to end self attention perform without feedforward layers? I might play with it later in a colab just wondering if someone else tried it
CRG#8707: There was the all-attention transformer: <https://ai.facebook.com/blog/making-transformer-networks-simpler-and-more-efficient/>
CRG#8707: Also: https://discord.com/channels/729741769192767510/730095596861521970/821394631539163157
UnsupervisedLearner#4148: >On language modeling benchmark tasks, our all-attention network matched the state-of-the-art performances by Transformer networks, with a much simpler architecture. We hope this simplified architecture will open a path for better understanding and improving Transformer networks.
>2019
What happened?
CRG#8707: Attention is All You Need tried it and found it was slower.
nev#4905: all-PKM network? :thonk: |
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/838410780768468992/2e35b1485fadca524ab7dacdfa332615.png
UnsupervisedLearner#4148: Hmmm. This is cool. I always conceptualized attention as a projected neural network.
So FF makes it faster, I'm guessing this means per pass not faster to converge, but with comparable performance. Why not just stack a bunch of ff layers then... And if this doesn't work the same, why not?
CRG#8707: If you only have FF layers, the best you can do is directly predict the raw token frequency.
nev#4905: I think there was some paper where reordering layers so that there's only attention layers first and then just feedforwards worked just as well as normal transformers
nev#4905: sandwich transformer iirc
UnsupervisedLearner#4148: So without me getting out my scrap notebook.
What I read likened the FF layer to attention: sigma(x W1) W2 ~= softmax(q K^T) V
So why would omitting softmax just give you a token probability distribution?
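The analogy being gestured at here, made explicit in the all-attention / key-value-memory line of work, can be written out; this formulation is ours, with W1, W2 the FF weight matrices:

```latex
% Feed-forward sublayer as a softmax-free key-value lookup:
\mathrm{FF}(x) = \sigma(x W_1)\,W_2 = \sigma(x K^{\top})\,V,
\qquad K = W_1^{\top},\ V = W_2
% versus self-attention over the context:
\mathrm{Attn}(x) = \mathrm{softmax}(q K^{\top})\,V
% In FF the "keys" and "values" are fixed learned parameters, independent of
% position, so an FF-only stack can never look at other tokens in the context.
```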
Sid#2121: https://arxiv.org/abs/2009.04534
CRG#8707: What I meant is that you need the attention layers to focus on the context. Using only FF gives you huge Perplexity.
UnsupervisedLearner#4148: Oh yeah, because the FF is tokenwise
UnsupervisedLearner#4148: This line of inquiry is giving me a big thunk
UnsupervisedLearner#4148: I really need to go into hyperbolic time chamber with a million gpus and all the weird architecture research from 1980-2010 and play around
ThreeBagsFull#0426: Does EleutherAI plan to release a dataset comparable to GPT-3 Davinci?
EricHallahan#1051: Pile?
EricHallahan#1051: https://pile.eleuther.ai |
finetune#0907: davinci is a model, not a dataset
cognomen#6297: openai's dataset is anything they could get their hands on
cognomen#6297: the pile is slightly curated for quality
ThreeBagsFull#0426: Apologies I meant model
EricHallahan#1051: :yes:
EricHallahan#1051: yes
ThreeBagsFull#0426: Lol, anywhere I can read on this progress? The 2.7b release is amazing and seriously outperforms gpt2 in my testing.
Would love to know if we're getting close to one similar to davinci
EricHallahan#1051: I guess the website? It isn't particularly detailed though.
EricHallahan#1051: https://eleuther.ai/projects/gpt-neo
https://eleuther.ai/projects/gpt-neox
EricHallahan#1051: Dark mode coming soon™️
EricHallahan#1051: We do not expect a model at the scale of 175B for many months.
EricHallahan#1051: "Roadmap" is in the FAQ:
https://eleuther.ai/faq
EricHallahan#1051: Which GPT-2 were you using? XL?
bmk#1476: sounds backwards
bmk#1476: it's in the name - the pile is just a big pile of stuff we could find lol
bmk#1476: many = more than whatever number popped into your head when you first read this message |
EricHallahan#1051: It is still higher quality.
alstroemeria313#1694: "exponentially increasing learning rate schedule" wat
alstroemeria313#1694: oh, does this only work because of batch norm?
alstroemeria313#1694: > We introduced overparameterization by simply placing two matrices in succession instead of the matrix in each dense layer. With an addition of roughly 15% in number of parameters, optimization accelerated by orders of magnitude
lol...
CRG#8707: Looks similar to deep linear networks training different than shallow linear networks: <https://www.saxelab.org/assets/papers/Saxe2013a.pdf#page=2> https://cdn.discordapp.com/attachments/729741769738158194/838451033058705458/b7ff86be50af66bd0db6e978c6022915.png
EricHallahan#1051: CRG, always sharp.
Deleted User#0000: https://arxiv.org/abs/1810.02281 Arora investigated this too
Deleted User#0000: https://arxiv.org/abs/2010.00679
CRG#8707: It was on a citation from: https://discord.com/channels/729741769192767510/747850033994662000/821217138094112777
Deleted User#0000: Thanks for the honest and clear answer Connor!
What discussions have you found to be the most productive?
How can humans ask more productive questions?
And last: do you know anyone who cares to discuss consciousness without being part of a centralized spiritual organization such as a church, group, or religion?
Here is something I found interesting to me:
https://beta.openai.com/playground/p/t95SrdImvBVIfk8ZYgksdJAQ?model=davinci-instruct-beta
Cheers!
Daj#7482: I think there are productive ways to talk about consciousness and related questions, but they need to be grounded in technical understandings of computation, physicalism etc. It might be too math heavy for your taste, but one of my favorite things ever written on these kinds of topics is https://www.scottaaronson.com/papers/philos.pdf |
Daj#7482: If you're seriously interested in the question "how can humans ask more productive questions?" (which I think is a _phenomenal_ question), I would recommend my Nr1 favorite thing ever written, The Sequences (https://www.readthesequences.com/) which is a huge collection of short blog posts on rationality, logic, science, philosophy and more
Daj#7482: There is also this humorous piece about how discussing with "philosophers" can sometimes be unproductive lol https://philosophynow.org/issues/46/Newtons_Flaming_Laser_Sword
inox#5400: tbh neuroscientists always end up talking about consciousness if you buy them a drink
Louis#0144: Really I read this and thought two years
mgostIH#0245: To those having AGI timelines in < 50 years:
What is the best evidence / piece of information that convinced you?
Kharr#7888: Seeing artificial neurons in _specific_ architectures self organize like what I've seen in the brain. This doesn't happen in every architecture and is particularly interesting to me.
mgostIH#0245: Like neurons in CLIP?
Kharr#7888: A little bit, yes, but there's more cases as well. The crossover research is still in its infancy.
mgostIH#0245: When do you think we'll reach superintelligent AI?
Daj#7482: Neural Scaling Laws
Kharr#7888: I think the new wave of multimodal Transformers will be particularly interesting as it keeps advancing.
Also, if we're aiming to reach human-like intelligence, superintelligent AI is very far off. Experts might be great at one thing, but not many people are great at everything.
Daj#7482: also, Moore's Law
mgostIH#0245: From GPT-3?
Daj#7482: slash Joy's Law
Daj#7482: And the followup Kaplan papers
Daj#7482: Also a fun post: https://www.lesswrong.com/posts/rzqACeBGycZtqCfaX/fun-with-12-ooms-of-compute
mgostIH#0245: What did you think about this before GPT-2? |
Daj#7482: My timelines were probably 2-5x longer
Daj#7482: maybe 2-3x
mgostIH#0245: And what are they now?
mgostIH#0245: > Also if we're aiming to reach human-like intelligence super intelligent AI is very far off
You mean like 50 years?
mgostIH#0245: Or more?
Daj#7482: 50% probability mass in the 3-15 years timeframe
Daj#7482: Have you read Ajeya's Anchors report?
mgostIH#0245: I don't think so 🤔
Daj#7482: https://drive.google.com/drive/u/0/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP
This is _the_ document to read for timeline forecasts
Daj#7482: There's even google sheets to calculate probabilities
Daj#7482: My predictions are pretty close to the "aggressive" forecast
Kharr#7888: That depends which path humanity takes. If something becomes _too good too fast_ it will get regulated and suppressed. If it doesn't advance fast enough, we will see more AI winters. My bets are that it will happen behind closed doors decades before it hits the public eye.
mgostIH#0245: I personally have my distribution like
20% in 20 years
55% in the interval from 20 to 40 years
mgostIH#0245: Aye but I'd say that progress will always keep ongoing
mgostIH#0245: And AIs that can automate more and more will be very profitable
Daj#7482: Or it happens instantly and we all atomize into paperclips 🙃 |
Kharr#7888: I remember seeing the first wireless tablets with touch screens with tech allowing you to flick photos across to nearby people (in research labs). This was in the 1990s. Tech did not become mainstream until decades later.
Kharr#7888: The current race to stuff ML into everything is quite different than previous technology trends. Adoption is much higher than I would have guessed, and the primary bottleneck is knowledge at this point. When last year's SOTA is irrelevant today, it's a bit of an issue to find people who can understand and keep up/productize it.
Daj#7482: It's almost like there is some kind of acceleration going on
Daj#7482: Exponential acceleration, even
Daj#7482: One might even say it is headed towards some kind of "singularity"
inox#5400: didn't robin hanson write a book about how human-like AI collapses into superintelligent AI very quickly?
inox#5400: Age of Em?
Daj#7482: Yes, great book lol
Daj#7482: Book length treatise on an intricate complex world
mgostIH#0245: @Daj I asked because I often talk with people about this, but they don't really believe (rightfully so) the claim that we could become immortal in 40 years
Daj#7482: in like two paragraphs he mentions "btw this would probably only exist for like a year or two max before they figure out AGI and go foom"
Daj#7482: ¯\_(ツ)_/¯
Daj#7482: I'd recommend Ajeya's report
mgostIH#0245: Tooo looongggg
mgostIH#0245: Is there something shorter that I can tell them before that?
inox#5400: I admire his dedication to so much detail to such a depressing scenario
Daj#7482: "Hey what convinced you of X?"
"Here's the convincing facts about X."
"TLDR"
mgostIH#0245: Like assuming I was in a conversation |
mgostIH#0245: No silly I like reading them
Daj#7482: That perfectly summarizes most of Hanson's intellectual output lol
mgostIH#0245: But I can't throw papers of stuff to people asking me about it as the first thing :S
mgostIH#0245: I need to get them into it with convincing arguments that ideally don't require **too much** reference
Daj#7482: I mean, this is a question of inferential distance. You can point them to exponential curves in computing costs and if that doesn't work it's probably too far of a distance to span in one conversation
Daj#7482: Or show them the Schmidhuber video lmao
mgostIH#0245: Ayeee, say exponential curves in computing costs
mgostIH#0245: One fact that struck me was the thing you wrote here too about training CV models
mgostIH#0245: which one?
Daj#7482: Bonus points: Remind them how "no one got exponential curves right with COVID, so don't underestimate them" to strengthen your case
Daj#7482: The one about his "Omega Point" lol
Daj#7482: Don't actually show it except for the meme
mgostIH#0245: Like I'd like some hard hitting numbers, say "It used to cost 1000 bucks 10 years ago, now 1"
Daj#7482: Yea, like a single GPU nowadays has as much FLOP as the biggest supercomputer in the world 20 years ago
Kharr#7888: a single RTX 3090 goes :brr: on the same level as a cluster 10 years ago
mgostIH#0245: I often use the "Suppose a slower moore's law, every 2 years computation doubles"
mgostIH#0245: Because 20 years will still be a x1000 increase in compute power, and it's already a somewhat pessimistic estimate
mgostIH#0245: For now I am giving people examples through the bots in #the-faraday-cage-archive
Daj#7482: Oh yeah people react pretty well to modern bot outputs
Daj#7482: I find #art pretty convincing lol |
Daj#7482: but people habituate to that _fast_
mgostIH#0245: As in how just a decade ago computers didn't understand images at all, but now they can already generate them from sentences
mgostIH#0245: Some people are already very dubious of "Automation of all art"
mgostIH#0245: > i feel like art is best when it has human emotion behind it
> idk if a bot can replicate that
> maybe for art where you're not expecting human emotion
Daj#7482: https://miro.medium.com/max/625/1*p7R4nQtn8pghKZk-a6oEBg.jpeg
mgostIH#0245: Ayyyy, nice one!
mgostIH#0245: This is the things I am talking about owo
mgostIH#0245: Could aswell be pinned
Daj#7482: Pinned a message.
Daj#7482: It is fitting yea lol
Daj#7482: I wrote a moderately bad essay about this a few years bac https://medium.com/@NPCollapse/counting-consciousness-part-3-e53a1a97d48b
adamShimi#8350: From trying to write a review of Daniel's post (Fun with +12 OOMs of compute), which relies heavily on Ajeya's report, I do feel like the latter should be distilled further to be more accessible without investing a month of study.
inox#5400: it'd be fun to do a sociology study where you get researchers from different fields to classify real/generated excerpts from papers
Daj#7482: I like Ajeya's report for just how dense and complete it is. I'm a fan of having "reference work". But yeah maybe there is benefit to having distilled summaries (which, tbf, the report does have iirc)
mgostIH#0245: I am all for making these observations more accessible!
adamShimi#8350: I don't want to replace the report, just to have a middle step between reading the words "Ajeya's report" and reading the billions of pages of the actual report.
futurememe#2076: Have a question. I am using GPT-3 for my current project. Using it to make educational chatbots. Want to switch to GPT-Neo I think! Is there anywhere I can test it out without having to stand it up? I.e. give it prompts and get some responses back?
EricHallahan#1051: You can try this Colab notebook. |
futurememe#2076: thanks!!!
BoneAmputee#8363: `educational chatbots`
doin the lord's work 🙏
futurememe#2076: haha. It's needed. We gotta use AI to fix education!
futurememe#2076: 🙂
BoneAmputee#8363: yeah if you need any help, I'm probably not the person to ask but I would love to try to help that kind of project :skype_xd: I've finetuned conversational gpt bots before but making an effective teacher is going to be very difficult
EricHallahan#1051: Can you not use DialoGPT?
futurememe#2076: Thanks @BoneAmputee ! We are getting to that point. And OH wow @EricHallahan i didn't know that existed. Let me check it out
EricHallahan#1051: It seems like a better model for what you are doing, but obviously it isn't as knowledgeable as either of the GPT-Neo models.
futurememe#2076: So we have non profit and want to let kids talk to chatbots of albert einstein:)
futurememe#2076: https://cdn.discordapp.com/attachments/729741769738158194/838508943461580820/Image_51.png
futurememe#2076: we gonna generate AI responses but then let people collectively up vote, down vote answers
futurememe#2076: and edit
futurememe#2076: the answers to reach a consensus
futurememe#2076: using Rasa
futurememe#2076: we non profit for education:)
bmk#1476: as a general policy, we don't really help with downstream applications of our models
bmk#1476: especially since it's basically mostly just using huggingface
futurememe#2076: We're using GPT-Neo now:)
BoneAmputee#8363: took about 6 volleys for that web demo to forget its convictions ;~; |
futurememe#2076: no worries!
EricHallahan#1051: Well it is pretrained but not fine-tuned.
BoneAmputee#8363: yeah but like, it needs more attention
BoneAmputee#8363: even with finetuned chat bots I've made, 10 lines of short term memory is like, a lot for it to handle
BoneAmputee#8363: though I haven't made one in a while
futurememe#2076: when we have a demo we want to show you guys:) @bmk Just wanted to say hi and tell you we exist. We ideally just want to get more eyes on the work you guys are doing
bmk#1476: we don't really need more attention right now, unless you mean from engineers who can help us get stuff done
bmk#1476: the conversion rate of people who stick around and contribute from just general attention is miniscule
futurememe#2076: yes....that is the kind of attention. Exactly. I am trying to activate the educational community to using AI in teaching. Get those dads that are coders to help. So yes....engineers
Daj#7482: Well best of luck with your project!
futurememe#2076: this is the the other project i help fund. https://github.com/XRFoundation/XREngine
futurememe#2076: we want to talk to 3D avatars:)
futurememe#2076: https://cdn.discordapp.com/attachments/729741769738158194/838510750506418196/unknown.png
futurememe#2076: For XR multiplayer in browser:)
futurememe#2076: So talk to 3D Einstein
futurememe#2076: Thanks @Daj
BoneAmputee#8363: what 3d engine is that? 👀
futurememe#2076: it's the teams that i am part of
BoneAmputee#8363: nice!
futurememe#2076: it's all web based three.js |
futurememe#2076: full VR, AR, in browser multiplayer support
futurememe#2076: i am using it try to make educational metaverse
futurememe#2076: but want to fill it with bots that can teach kids
futurememe#2076: 🙂
Deleted User#0000: yeah i love that vision. I hope I can help it soon with my movement models
Deleted User#0000: and in other ways
futurememe#2076: Thanks man!
futurememe#2076: i think once all this tech comes together the world is going to become so cool:) Education definitely going to change
gwern#1782: does anyone have a good source on this 27b PLUG Alibaba BERT-like model? the PALM link is obviously not 27b, and https://mp.weixin.qq.com/s/PW0wZbts6ZpbKZSHyp8aVw is pretty sketchy when I try to read it in google translate... https://www.infoq.cn/article/EFIHo75sQsVqLvFTruKE seems a lot better but I still don't see any paper writeups
gwern#1782: > Now, Alibaba officially released PLUG, once again promoting the development of the Chinese community pre-training model. Next, PLUG will expand the parameter scale to 200 billion and further improve the quality of text generation. In addition to the PLUG (27 billion parameters) with Chinese as the core, Dharma Institute has also jointly released a new super-large-scale pre-training model "Wenhui" (11.3 billion parameters) for cognition in conjunction with Zhiyuan Research Institute and Tsinghua University. Tsinghua University released the ultra-large-scale multi-modal pre-training model "M6" (100 billion parameters).
Kia#2550: Wow like a Opensource project (or something like that I guess)
Kia#2550: Nonetheless Interesting Development :thonk:
gwern#1782: *is* it open source? I don't trust translations like this
gwern#1782: none of the other chinese models have yet been released for download that I am aware of
bmk#1476: lol the article claims it's the biggest chinese purely text model :thonk:
gwern#1782: it was then, I think
gwern#1782: didn't pangu-alpha come out after?
andyljones#7746: yeah, by a ~week
Kia#2550: It's probably a flex or they're just bragging to the West about this thing
gwern#1782: or they just mean 'announced' by 'released', which is what it means 80% of the time in the west too |
Kia#2550: That would be interesting
bmk#1476: yeah, 发布 ("fābù", release/publish) can mean just announce
gwern#1782: as I said, I don't trust machine translation for nuances of research like that
andyljones#7746: the way 99% of the material is in chinese should tell you something about the flex priorities of these research groups
Kia#2550: So True
bmk#1476: i bet google translate meant it in the sense of "released news about"
bmk#1476: and doesnt realize that with LMs specifically releasing news about the model and releasing the model are different things
Teemochu#8740: are you saying the cost of attention is quadratic?
Louis#0144: Sigh
stephen#2400: Hi - I've a quick question on the small gpt-neo model on huggingface, sorry if it's off topic or been covered already
I loaded the gpt-neo-125M model into Jay Alammar's Ecco library, which visualizes transformer model internals. One of the visualisations is based on nostalgebraist's observation in https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens that the final transform to vocabulary space can be applied at all layers, and that the model has often identified the next token within the first few layers. This doesn't seem to happen with the gpt-neo models of comparable size; the output tokens are only ranked highly in the final layer. I've an example of this in https://gist.github.com/stprior/de2b2bd4a98fafd5e26c4cbb99a2f2f4
stephen#2400: Is there any reason the gpt-neo model would work differently? I'm not sure if Ecco should be doing something different to get rankings of tokens in the gpt-neo model, it seemed to just work without modifying how it finds the internal weights or embedding matrix.
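(For context, the logit-lens trick being described applies the final LayerNorm and unembedding to every intermediate layer's hidden state. A minimal sketch, assuming the Hugging Face GPT-Neo module names; the prompt is arbitrary:)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/gpt-neo-125M"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(ids, output_hidden_states=True)

# hidden_states[0] is the embedding output; later entries follow each block.
# Project each one through the final LayerNorm + unembedding ("logit lens").
for i, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h))
    top = logits[0, -1].argmax().item()
    print(f"layer {i:2d}: top next-token guess = {tok.decode([top])!r}")
```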
EricHallahan#1051: First off, welcome back! I think we are very interested in stuff like this, so it is definitely not #off-topic.
cfoster0#4356: Does anyone recall if the models we trained have the output embedding matrix tied with the input one?
EricHallahan#1051: I don't know. ¯\_(ツ)_/¯
EricHallahan#1051: Maybe Sid knows?
stephen#2400: Thanks - I've been lurking all along
stephen#2400: I wondered about that - it looked to me like there was only one embedding matrix but I could definitely be mistaken
EricHallahan#1051: My initial reaction is that it is because 125M was trained so quickly, but have you tried other GPT-Neo models?
stephen#2400: I haven't - I think I ran out of space when I tried to - but that might have been when I was running on gpu |
StellaAthena#3530: @stephen Use colab
stephen#2400: I'll try again and see
bmk#1476: a heads up that 125M was trained much shorter than the larger models
Louis#0144: That’s genius
Louis#0144: Holy shit
Louis#0144: I’ve never heard of anyone doing this
Louis#0144: Citation?
bmk#1476: gpt2 does it lmao
CRG#8707: https://discord.com/channels/729741769192767510/730090096287547444/795698185569304596
Louis#0144: We should do it for the grounding project
CRG#8707: Untied worked better for T5
Louis#0144: It’s an experiment worth trying tho
Louis#0144: So we have #interpretability-reading-group if that interests you. I think I can assume that projects might come of that eventually
Louis#0144: Kinda semi related
CRG#8707: Relevant table from: <https://arxiv.org/pdf/2102.11972.pdf#page=8> https://cdn.discordapp.com/attachments/729741769738158194/838553683766542336/a2789f144efd0be8be478ab1b40fc10d.png
CRG#8707: Hm, for the T5 decoder tying the embeddings was better. But IIRC lucidrains mentioned that for AR untied embeddings were superior. :thonk:
stephen#2400: FWIW the pattern is the same for gpt-neo-1.3B. https://gist.github.com/stprior/ddf414ef62863d79109be7ec5609dfb8
It looks like the input and output embedding matrices are the same, and I think the output embedding is being applied correctly because the rank of output tokens decreases through the layers, and they are ranked close to 1 in the output layer.
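(A quick way to check the tying being described, assuming the HF GPT-Neo attribute names:)

```python
import torch
from transformers import AutoModelForCausalLM

m = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
# Same tensor object means input and output embeddings are truly tied.
print(m.lm_head.weight is m.transformer.wte.weight)
# Weaker check: same values even if stored separately.
print(torch.equal(m.lm_head.weight, m.transformer.wte.weight))
```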
cfoster0#4356: Interesting
alexyz#3459: I remember seeing someone recreate GPT-2 1.5B back when it wasn't released yet |
alexyz#3459: It was inferior to the actual 1.5B GPT-2, according to the creator
alexyz#3459: but it was amazing that someone was able to do it
alexyz#3459: there's also someone who trained a 1.25B GPT-2 on a Russian corpus
alexyz#3459: https://github.com/l4rz/gpt-2-training
alexyz#3459: i really like seing these precursors to GPT-Neo lol
alexyz#3459: I for the life of me can't find the 1.5B GPT-2 that someone personally made
alexyz#3459: can someone link it if they have a link?
bmk#1476: well that depends on which one you're talking about
cfoster0#4356: connor?
alexyz#3459: dunno
cfoster0#4356: <https://towardsdatascience.com/gpt2-counting-consciousness-and-the-curious-hacker-323c6639a3a8>
alexyz#3459: all I remember is:
1. i found it on Github
2. it was trained on multiple GPUs for like approx month
3. when I found it, it was crossed out using markdown saying it was inferior to the actual GPT-2 1.5B, but it was still there linked if someone wanted to use it
kurumuz#5695: yeah sounds pretty impressive
alexyz#3459: I think that might be it
alexyz#3459: because I also think it said that it wouldn't be released
alexyz#3459: but then they did after the actual one was released
bmk#1476: hmm that name "connor" sounds familiar |
bmk#1476: feel like I've seen it somewhere
alexyz#3459: but is there an actual release of that after GPT-2 1.5B was actually released? because I found it through github
cfoster0#4356: Yeah <https://github.com/ConnorJL/GPT2>
alexyz#3459: Thank you, that's the one
alexyz#3459: https://cdn.discordapp.com/attachments/729741769738158194/838569035758043146/unknown.png
Louis#0144: Idk a dork like that
Louis#0144: 🤷♂️
TheGantian#2451: What might be happening if transformers is giving me the following error when trying to create a EleutherAI/gpt-neo-1.3B pipeline?:
> ValueError: Unrecognized configuration class <class 'transformers.models.gpt_neo.configuration_gpt_neo.GPTNeoConfig'> for this kind of AutoModel: TFAutoModelForCausalLM.
> Model type should be one of BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig.
TheGantian#2451: (This is running the sample code on the HuggingFace page. Works on one computer but the other is giving me this)
StellaAthena#3530: @TheGantian update your version.
TheGantian#2451: I did run pip install --upgrade --force-reinstall transformers, I'll try again
EricHallahan#1051: Do you happen to only have TensorFlow installed?
TheGantian#2451: What else should I have? I might have missed adding something to requirements.txt
EricHallahan#1051: You do need PyTorch.
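(The error above lists only TF-compatible model classes, which is the telltale sign that transformers couldn't see PyTorch. A minimal sanity check plus the standard PyTorch loading path, using the model name from the thread:)

```python
import importlib.util
print("torch installed:", importlib.util.find_spec("torch") is not None)

from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")  # needs PyTorch
```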
CKtalon#7792: They only released it as in openai's playground. No weights
CKtalon#7792: And it was biggest until Huawei beat it a few days later
TheGantian#2451: Ah, that's probably it. I have pytorch on my dev machine but not the test machine
EricHallahan#1051: Yeah, on launch day someone thought it was an early April fools prank. |
EricHallahan#1051: It wasn't, and their problem was that they only had TensorFlow installed.
TheGantian#2451: That was it. The GPT-2 models were downloading without torch but the Neo ones werent. Thanks for the assist!
DoesThisUnitHaveASoul#7264: hey everyone
DoesThisUnitHaveASoul#7264: Does anyone have any idea why most transformers for autoregressive text tasks use word-level embeddings instead of character level embeddings? Is it literally that you have a shorter context length? Which makes training more feasible?
kindiana#1016: it also means you process more text for the same amount of tokens
Spy#9778: This is an ad hoc guess, but I suspect that the BPE tokenization step also gives you some statistical advantages since how the merges were chosen depends on the corpus statistics. ELMo had a character LSTM for its embeddings, which you could easily replace a BPE embedding table with, but all the followup work uses subword pieces instead.
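(A rough illustration of the context-length point: the same text costs far fewer BPE tokens than characters, so a fixed-size window covers much more text. Sketch using the stock GPT-2 tokenizer:)

```python
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
text = "Autoregressive transformers predict one token at a time."
print(len(text), "characters")                 # character-level sequence length
print(len(tok(text).input_ids), "BPE tokens")  # subword sequence length
```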
kurumuz#5695: https://medium.com/georgian-impact-blog/gpt-neo-vs-gpt-3-are-commercialized-nlp-models-really-that-much-better-f4c73ffce10b
kurumuz#5695: They say gpt-neo 2.7b is worse than 1.3b at SST-2, but they don't test the whole dataset, only randomly selected examples 🤔
kurumuz#5695: Still, interesting results
Kia#2550: It's probably Crappy testing
zphang#7252: feels like they should use some kind of standardized LM eval library, preferably in harness form
Deleted User#0000: anyone know if it's possible to do data parallelism accross nodes in a slurm cluster?
Deleted User#0000: say each node has just 4 GPUs, and I wanna use more
Deleted User#0000: ahm it seems like it can be done https://www.hpcworkshops.com/08-ml-on-parallelcluster/03-distributed-data-parallel.html
Sid#2121: sure, pretty sure deepspeed or torch distributed will work across a slurm cluster
Sid#2121: https://github.com/facebookincubator/submitit
Deleted User#0000: yeah. I found that pytorch lightning supports it easily. However, I'm not sure it's working right: batch_size * gpus * num_nodes = 4096, but the number of iters per epoch I observe seems to correspond to an effective batch size of batch_size * gpus
Deleted User#0000: so I guess it's not properly distributing accross the nodes?
kindiana#1016: you need to shard the dataloader as well I believe, not sure if ptl handles it automatically
Deleted User#0000: i think they say it does hmm |
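(For reference, the multi-node setup being discussed looks roughly like this in PyTorch Lightning's API of the time; a sketch, assuming `model` is a LightningModule and SLURM launches one task per GPU. `num_nodes` must match the sbatch allocation, and under DDP, Lightning wraps the dataloader in a DistributedSampler so each rank sees a different shard:)

```python
import pytorch_lightning as pl

trainer = pl.Trainer(
    gpus=4,             # GPUs per node
    num_nodes=2,        # must match your SLURM --nodes
    accelerator="ddp",  # DistributedDataParallel across all ranks
)
trainer.fit(model)      # model: your LightningModule (placeholder)
```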
Sid#2121: do you need to use slurm / lightning? deepspeed handles it pretty well
Deleted User#0000: my codebase is already using lightning, and slurm is installed on the cluster so
Sid#2121: if you're just doing dp it's really easy to change the code to deepspeed
Sid#2121: you literally just pass in your model and dataset to deepspeed.initialize
Sid#2121: and it distributes jobs using pdsh
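(The deepspeed.initialize pattern Sid describes, sketched; `model` and `dataset` are placeholders, the config keys are illustrative, and the loop assumes the model's forward returns a loss:)

```python
import deepspeed

ds_config = {"train_batch_size": 32, "fp16": {"enabled": False}}
engine, optimizer, dataloader, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    training_data=dataset,
    config_params=ds_config,
)
for batch in dataloader:
    loss = engine(batch)   # forward on this rank's shard of the data
    engine.backward(loss)  # handles the gradient allreduce
    engine.step()          # optimizer step + LR schedule
```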
Deleted User#0000: actually lightning has beta support for deepspeed
Sid#2121: never really used lightning before, what... is it? like what does it do?
Deleted User#0000: its just a high level library for pytorch that is supposed to implement all the modern DL optimizations
Deleted User#0000: its pretty cool, but it comes with the drawbacks of high level libraries
Deleted User#0000: (mainly that if it doesn't work, it's harder to know why)
kindiana#1016: its basically a big training loop impl lol
Sid#2121: coming from TF, pytorch is already high level enough that I don't really understand the need lol
Deleted User#0000: yeah basically
Sid#2121: so... it just calls .cuda and .backward for you? :berk: https://cdn.discordapp.com/attachments/729741769738158194/838769192186150912/68747470733a2f2f706c2d626f6c74732d646f632d696d616765732e73332e75732d656173742d322e616d617a6f6e617773.png
kindiana#1016: pretty much lol
Deleted User#0000: like to use multi-gpu or TPU, or any of this stuff, I literally had to add a flag and not touch anything else in my code
Deleted User#0000: or mixed precision, or gradient clipping, syncbatchnorm, etc
Louis#0144: Lightning is like huggingface for people who are scared of pytorch basically
Louis#0144: Or want TPU support
Deleted User#0000: or are lazy |
Louis#0144: True
CKtalon#7792: is the GPT2 tokenizer trained on the same corpus as GPT2 (i know.. stupid question), so roughly 40GB of text data?
EricHallahan#1051: IIRC, yes.
cognomen#6297: I suspect it's older than that
bmk#1476: idk, probably, but youd have to check the paper to be sure
CKtalon#7792: and gpt-neo is also using GPT2 tokenizer?
bmk#1476: i think GPT and GPT2 had different tokenizers but im too lazy to check
CKtalon#7792: or was something trained on ThePile
bmk#1476: same tokenizer as gpt2
CKtalon#7792: guess i'll have to dig into the paper and see how gpt2's tokenizer was trained
EricHallahan#1051: I think in the future we will need to train our own tokenizer.
CKtalon#7792: can't imagine training it on 800GB of text though
CKtalon#7792: haha
CKtalon#7792: the amount of ram needed is crazy
CKtalon#7792: any reason why they used 50+k token vocabs
CKtalon#7792: their paper doesn't really say much. just says this is what we did
CKtalon#7792: and like BPE good
EricHallahan#1051: What is the limit to the size of the vocabulary in tokenizers?
CKtalon#7792: i don't think there's a limit
CKtalon#7792: the number of char of your corpus lol |
bmk#1476: if we ever train a new tokenizer, we should save it for after we have multilingual data
Kharr#7888: And manually check it for garbage...
Kharr#7888: No way anyone looked at the GPT2 Tokenizer and thought "this vocab makes sense"
CKtalon#7792: i can imagine the vocab of this multilingual tokenizer would be over 100k in size
CKtalon#7792: considering japanese, korean, chinese, russian, and all the other weird ass characters
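(For anyone following along, training a fresh byte-level BPE vocab looks roughly like this with the `tokenizers` library; the file path and vocab size are placeholders, and RAM is the constraint being discussed:)

```python
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["corpus_shard_0.txt"],  # hypothetical shard of the training text
    vocab_size=50257,              # GPT-2-sized; a multilingual vocab would need more
    min_frequency=2,
    special_tokens=["<|endoftext|>"],
)
tokenizer.save_model("tokenizer_out")  # writes vocab.json + merges.txt
```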
neko#5937: Why didn't openai release salle even in api form
neko#5937: DALL-E
bmk#1476: well, we wouldn't know any better than anyone else outside OA lol
neko#5937: It seems ironic that Microsoft ai is different than openai and had to do it's own video version of openai
neko#5937: Oh I get it
neko#5937: Nvm
mkualquiera#3484: I think OpenAI is still not quite sure how it wants to handle things
mkualquiera#3484: I mean they aren't really a compute provider, they are a research company
mkualquiera#3484: but they also don't want to miss out on the dollars
mkualquiera#3484: so they are probably conflicted
CKtalon#7792: do take note that for Chinese, due to the lack of spaces, the BPE process takes like forever for a relatively large corpus (a few GB), even for a purely Chinese corpus
CKtalon#7792: not sure how a multilingual one would fare
bmk#1476: good thing we have a lot of cpus
bmk#1476: :chad: https://cdn.discordapp.com/attachments/729741769738158194/838823659517771836/unknown.png
Kharr#7888: The simplest solution for multilingual models is probably to have separate embedding layers per language and something that selects which one to use based on the input (kind of like the multimodal work). Having all languages within a single vocab is pointless since there is little overlap. |
CKtalon#7792: i think ram will be an issue
CKtalon#7792: 755GB is likely not enough 😛
bmk#1476: good thing we have multiple machines with this much cpu and memory
CKtalon#7792: lol, can it be distributed though
bmk#1476: idk probably
CKtalon#7792: sigh, my chinese tokenizer sucks
CKtalon#7792: i had to do jieba and then sentencepiece
CKtalon#7792: ran out of ram on a 256GB machine
bmk#1476: i mean jieba can def be distributed for example
bmk#1476: not 100% sure about sentencepiece but i'd bet you can merge partial sentencepiece vocabs with just a small amount of state
CKtalon#7792: i did jieba to split out the terms. from that sentencepiece became relatively fast/trivial
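(A sketch of that two-step pipeline: pre-segment the Chinese text with jieba so sentencepiece sees space-separated units; paths and vocab size are placeholders:)

```python
import jieba
import sentencepiece as spm

# Step 1: word-segment the raw corpus so BPE has "spaces" to work with.
with open("zh_corpus.txt", encoding="utf-8") as fin, \
     open("zh_corpus_seg.txt", "w", encoding="utf-8") as fout:
    for line in fin:
        fout.write(" ".join(jieba.cut(line.strip())) + "\n")

# Step 2: train sentencepiece BPE on the segmented text.
spm.SentencePieceTrainer.train(
    input="zh_corpus_seg.txt", model_prefix="zh_bpe",
    vocab_size=32000, model_type="bpe",
)
```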
CKtalon#7792: it has to do with the BPE process, and not having spaces really fucks up that process
CKtalon#7792: like i think it was not finishable in any time scale
CKtalon#7792: based on the corpus i had
CKtalon#7792: ~80GB
Sphinx#2092: You could probably just subsample.
Sphinx#2092: Unclear you really need 80 gb for a given language.
CKtalon#7792: that's what people say 😛 but i think a bigger corpus and the proper tokenizing of terms helps
þeremin#6617: Good morning - I'm a software engineer interested in AI alignment. I've worked with TensorFlow a bit, and I understand common machine learning techniques academically, but I've never built a large-scale machine learning application. One of the pinned posts mentions that you're bottlenecked on manpower; where should I look to contribute? Are there any needed tasks that I could do without deep knowledge of ML? Documentation, maintenance, first-drafting ideas?
EricHallahan#1051: First off, welcome! |
EricHallahan#1051: @bmk, do you have anything eval harness related?
bmk#1476: woah, awesome, more alignment people!
bmk#1476: @theremin so there's some stuff for eval harness that needs to get done; it's low-priority stuff that we point noobs to for first contributions, and that's what eric is referring to. though since you're interested in alignment, we could possibly find something there for you to do
bmk#1476: @þeremin
EricHallahan#1051: Yeah, alignment sounds more valuable than eval-harness lol
þeremin#6617: I'm perfectly happy to start with something basic to get my feet wet.
bmk#1476: how much do you know about alignment?
þeremin#6617: I think it is an important topic that I want to learn more about and help research, not one where I have any particular insight.
þeremin#6617: That said, I have read the Sequences on it from LW, and a bunch of stuff from MIRI.
þeremin#6617: I have a few pet ideas that are almost certainly wrong, but that I want to explore to figure out how.
bmk#1476: ok that's pretty good, yeah
bmk#1476: what directions are you most interested in?
þeremin#6617: Well, the idea that I'm most interested in is 'transferring' a preference ordering directly out of a human brain. Specifically, ML approximates a black-box function. It seems like it should be possible in theory to create a model that predicts what a human's 'moral intuition' or judgement about a situation would be.
þeremin#6617: I'm sure there's a nice concise term for that that I don't know.
þeremin#6617: In practice, doing that has obvious practical problems, as well as "what if one modern-day human is not the best judge for what the far future should look like".
bmk#1476: this sounds like value learning / inverse reinforcement learning
þeremin#6617: So the idea that I've been playing around with is trying to create an agent that has a goal to solve - some simulated game - and can also send questions about plans to a human.
þeremin#6617: That certainly sounds like the right words for it; I'll add that to my notes to research specifically.
Sid#2121: if you really want grunt work, some of the gpt-neox documentation needs updating. But I would prioritize any alignment work we have
EricHallahan#1051: I can do documentation lol |
þeremin#6617: Okay, I'll put that at the bottom of the list and see if there are any other suggestions.
bmk#1476: i'm not sure this solves that problem entirely - see CEV for more details but even if you could perfectly predict a human brain's moral judgement, "what we want", "what we say we want", and "what we would want if we were smarter, knew more, etc" are all separate things
þeremin#6617: Right - that's a good point.
Sid#2121: I just made a tool which autogenerates some :ultrazucc:
Sid#2121: but it will need a bit of integrating still
EricHallahan#1051: I kinda am just relaxing for the rest of the day after my final, so if you want to point me in a certain direction on what to update, I can do some of that.
Sid#2121: also are there any regex nerds here, my regex is not as robust as it could be
bmk#1476: i recommend reading about CEV https://intelligence.org/files/CEV.pdf
Sid#2121: ok, gimme a few mins
bmk#1476: > In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought
> faster, were more the people we wished we were, had grown up farther together; where
> the extrapolation converges rather than diverges, where our wishes cohere rather than
> interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.
bmk#1476: basically CEV is the ideal gold standard that we'd want to estimate
bmk#1476: yud explains in excruciating detail what he means by this in the paper
þeremin#6617: So I have some thoughts about that, and about how to respond to your point, but let me go read that PDF to see if they're already covered by CEV.
Daj#7482: (worth mentioning CEV is from literally 2006 and considered super outdated)
Daj#7482: (I'm not even sure Yud endorses it anymore)
bmk#1476: what would you recommend to supercede it?
bmk#1476: i find the idea of CEV at least in broad strokes super helpful |
Daj#7482: CEV has good symbolic value, but if you're interested in progress on value learning you could start with https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc, Stuart Armstrong's stuff (https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into) and maybe Abram Demski's normativity sequence (https://www.lesswrong.com/s/Gmc7vtnpyKZRHWdt5). For some interesting (critical) thoughts on human models from MIRI folk, see https://www.lesswrong.com/posts/BKjJJH2cRpJcAnP7T/thoughts-on-human-models
Daj#7482: Note this is generally pretty advanced stuff so don't feel like you need to read it all at once or anything lol
bmk#1476: i brought up CEV mainly because it's a good collection of direct counterexamples to the idea that we just need a model that can predict what a human would want or would say they want and everything works fine
Daj#7482: I do recommend reading CEV, I was just answering your question as to what I think supercede that document
bmk#1476: ah k
bmk#1476: that makes sense
Louis#0144: Must be good then
Daj#7482: :silencebird:
Imperishable_NEET#1969: Say, what math concepts do I need to know to understand how these algorithms work? I know there's a lot of linear algebra involved.
Daj#7482: You can get pretty far without deep math, but LinAlg is by far the most important yea
Daj#7482: And calculus
Daj#7482: All of NNs is basically taking the derivatives (calculus) of really big matrices (LinAlg)
Daj#7482: But in practice you just do `network.learn(data)` or whatever lol
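("Derivatives of really big matrices" in a few lines of PyTorch, for the curious:)

```python
import torch

W = torch.randn(512, 512, requires_grad=True)  # a weight matrix (LinAlg)
x = torch.randn(512)
loss = (W @ x).pow(2).sum()  # some scalar function of W
loss.backward()              # calculus: autograd fills in d(loss)/dW
print(W.grad.shape)          # torch.Size([512, 512])
```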
Imperishable_NEET#1969: I understood Calculus once in college, shouldn't be hard to get back up to speed.
Daj#7482: I'd recommend just diving directly into code and then learning math later as needed ~~at least before Stella or one of the other mathematicians catches you~~
Daj#7482: There's tons of good intro to ML courses and tutorials
zphang#7252: I think that as long as you've taken intro college calculus, intro linear algebra, and intro statistics at *some point*, you have enough to navigate a lot of modern work
Daj#7482: Oh yeah, learning statistics is just generally a good idea
Daj#7482: Naturally thinking in terms of probability distributions and the like is highly recommended
zphang#7252: and you can use phrases like "update my prior" |
Daj#7482: You need to read The Sequences to be allowed to say that
Dromarion#3383: I'm taking some ML courses on Udemy though you have to do some diy on the curriculum. I'm mostly following this roadmap.
https://whimsical.com/machine-learning-roadmap-2020-CA7f3ykvXpnJ9Az32vYXva
inox#5400: 99% of deep learning all you need is matrix multiplication and the chain rule
zphang#7252: arguably you don't even need to know the chain rule, you just need to know it exists
inox#5400: for sure I don't know the difference between forward mode and reverse mode automatic differentiation without checking
inox#5400: ok I checked: reverse mode is the chain rule and forward mode is the one I don't have time to learn
cfoster0#4356: They both apply the chain rule
guac#4716: whaaat you don;t know your vjps and your jvps
cfoster0#4356: *frowns in Pearlmutter*
inox#5400: dang you're right: I should've said one is schoolbook chain rule and the other isn't
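(Both modes are the chain rule applied in opposite orders; in JAX terms, forward mode is a jvp and reverse mode is a vjp. A toy sketch:)

```python
import jax
import jax.numpy as jnp

f = lambda x: jnp.sin(x) * x
x = jnp.ones(3)

_, tangent_out = jax.jvp(f, (x,), (jnp.ones(3),))  # forward mode: push a tangent through
_, f_vjp = jax.vjp(f, x)
(cotangent_in,) = f_vjp(jnp.ones(3))               # reverse mode: pull a cotangent back
print(tangent_out, cotangent_in)
```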
Louis#0144: New pfp
Louis#0144: Also can someone give Hayley the regular tag pls?
nz#9710: Isn't that also bilal's?
EricHallahan#1051: Nope.
EricHallahan#1051: Same as it has been.
inox#5400: oh nice thanks!
Louis#0144: I was wondering why it had been so long and yet u didn’t have it
Louis#0144: 🤷♂️
inox#5400: it's actually an old pfp 😏 I was trying out being a furry for a couple months |
tg#7159: Hello! First time poster here. I've been trying to reproduce basic image synthesis using the excellent dall-e library of @Deleted User . So far I'm not having much luck. I have a dataset of about 1 million selfies. Each image is 256x256 pixels. I've tried encoding using both a custom codebook and the off-the-shelf OpenAI and VQGanVAE1024 codebooks. The decoded output seems to be pretty good qualitatively. However, when I train the auto-regressive transformer my loss converges very quickly and then flatlines. When I plot the argmax predictions for each token, the model seems to be essentially predicting the same token everywhere (e.g. as if it was unable to attend to anything).
I'd be super happy to know if anyone has any debugging tips / suggestions? Are there some robust metrics to look at to see if the model is actually progressing? I've tried using a really slow warm up and it doesn't seem to make much difference... once the learning rate picks up the model converges quickly to this degenerate output. My transformer has about 8m parameters, and I can do a batch size of about 8 images (each being a sequence of 256 tokens).
Deleted User#0000: @tg hey! So there's actually a discord for the dalle-pytorch repo
tg#7159: Oh, and FWIW I was able to get the transformer to work well on a different (and much simpler) dataset. Have others found this sensitivity to the dataset?
tg#7159: Oh great, let me post there @Deleted User
Deleted User#0000: one of the contributors also noticed adamw doesn't converge
Deleted User#0000: we just switched it back to Adam this morning
Deleted User#0000: So you may want to retry
tg#7159: @Deleted User Is the discord invite-only?
tg#7159: (do you mean the one linked off the github repo?)
Deleted User#0000: Yup, the one on the repo
Deleted User#0000: I'm not sure, someone else actually heads the discord community
tg#7159: https://cdn.discordapp.com/attachments/729741769738158194/838895815584907294/unknown.png
cfoster0#4356: Try this https://discord.gg/YqEXUdjN
Deleted User#0000: @cfoster0 updated the discord link on the readme, ty ty
cfoster0#4356: O the link I posted will expire at some point just fyi
Deleted User#0000: i'll ask them to give me an unexpirable link
Louis#0144: Let’s say I wanted to do CLIP with two text heads that are massively different modes
Louis#0144: How hard would that be |
Louis#0144: And would it be stupid
cfoster0#4356: Modes?
Louis#0144: Like one head is stories and the other one is associated story reviews
Louis#0144: Using the reviews to guide the story generation by disentangling
cfoster0#4356: Oh so just like text-text matching instead of text-image marching?
Louis#0144: Yeah
cfoster0#4356: That might work, my only worry is the task could be too easy
EricHallahan#1051: I think the methods presented in CLIP and DALL-E are pretty universal.
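(The text-text variant being proposed would keep CLIP's symmetric contrastive loss and just swap the image encoder for a second text encoder. A minimal sketch with placeholder embeddings:)

```python
import torch
import torch.nn.functional as F

def clip_style_loss(story_emb, review_emb, temperature=0.07):
    # story_emb, review_emb: (batch, dim), assumed L2-normalized
    logits = story_emb @ review_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    # symmetric cross-entropy: stories -> reviews and reviews -> stories
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

stories = F.normalize(torch.randn(8, 256), dim=-1)  # stand-in story embeddings
reviews = F.normalize(torch.randn(8, 256), dim=-1)  # stand-in review embeddings
print(clip_style_loss(stories, reviews))
```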
Louis#0144: My hope is that if I do text to text matching with reviews I can initialize by giving a review that praises the lack of plot holes LMAO
Louis#0144: i tried doing it as prompt engineering with GPT3
Louis#0144: Didnt work
Louis#0144: This is just a toy idea in my head
Louis#0144: It’s also text to text grounding
Louis#0144: So I thought maybe I’ll throw it at the wall here
Louis#0144: @cfoster0 u laugh but LMs that know how to avoid plot holes is the core of my research interests
Louis#0144: Lmao
cfoster0#4356: I laugh because I think it'd be funny if this works
Louis#0144: LMAO
Louis#0144: it could
alexyz#3459: Does anyone know how many teraflops the TPUs on Colab have? |
EricHallahan#1051: Colab TPUs are v2-8s
EricHallahan#1051: Kaggle TPUs are v3-8s
kindiana#1016: 180 iirc
alexyz#3459: Ah ok
alexyz#3459: what types of TPUs does TRC provide?
Louis#0144: Kaggle has tpus??
Louis#0144: Wtf
kindiana#1016: A bunch of v3 and v2-8s mostly lol
alexyz#3459: ah ok
alexyz#3459: because I got access to TRC 🥳
alexyz#3459: that's fun
kindiana#1016: The email should say what you have access to
alexyz#3459: Ye
alexyz#3459: thanks 🙂
Rina#0391: guys
Rina#0391: i started my own gpt naval on spacehey
Rina#0391: she is for coding
Rina#0391: naval for coding
EricHallahan#1051: GPT-Neo is already very good at code.
Rina#0391: no no |
Rina#0391: i mean like
Rina#0391: fortune cookie-styled sentences
Rina#0391: on how to code better
Rina#0391: like naval
Rina#0391: a ai that teaches
Rina#0391: gpt naval
Rina#0391: but more personafied thats human like
StellaAthena#3530: I don’t see why you’d want that (I also don’t know what naval is) but yeah you can fine tune on that if it makes you happy
Rina#0391: o okay
Rina#0391: stella
Rina#0391: woah
Rina#0391: i saw your talk
Rina#0391: on ai
Rina#0391: can we chat stella
Rina#0391: im a huge fan of gpt3
Rina#0391: omg i just realized she's on my friend list
Rina#0391: i forgottt
Rina#0391: hows dall e neo going
Kia#2550: Dall e neo? |
Kia#2550: Never heard of that
ExMachina#1693: Hi folks, I'm new here, thanks for open sourcing gpt-neo! I've been trying to fine tune the 1.3B model via the huggingface interface on a multi-GPU single-node setup on AWS (g4dn.12xlarge, 4 x Tesla T4 with 16GB RAM each). Question: has anyone here managed to use deepspeed to fine tune gpt-neo-1.3B or 2.7B on a multi-GPU setup on AWS?
https://github.com/dredwardhyde/gpt-neo-fine-tuning-example I was using this for reference, but only one GPU ends up being used when I train using deepspeed. Works for the smaller Netflix dataset (fits on a single GPU), but OOMs for anything larger (i.e. average length > 200)
Sorry if this is a stupid question/wrong forum, any help appreciated
Louis#0144: You don’t need that much to finetune 1.3b
Louis#0144: There’s plenty of people who did it single GPU
ExMachina#1693: I'm trying to basically have a setup where I can experiment between the 2.7B and 1.3B model in the same system to compare results for a new benchmark I'm developing. The distributed part of the setup seems to not be working for me
ExMachina#1693: If anybody has gotten multi GPU deepspeed with gpt-neo to work reliably, do let me know
Louis#0144: LOL
Louis#0144: ;-;
Louis#0144: The horror
Louis#0144: you need to disable fp16 first
EricHallahan#1051: Well you can try it on integrated graphics, but you won't get far.
Louis#0144: You can’t use fp16
Louis#0144: It NaNs over a certain context size
Louis#0144: There’s a giant thread on transformers GitHub about this by yours truly
ExMachina#1693: Would you have a link to this? thanks!
Louis#0144: https://github.com/huggingface/transformers/issues/11076 |
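(Per that thread, the workaround is a DeepSpeed config with fp16 off; a sketch, with the other fields as illustrative defaults rather than a tested recipe:)

```python
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 8,
    "fp16": {"enabled": False},  # fp16 NaNs with GPT-Neo past a certain context size
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu"},  # trade speed for GPU memory
    },
}
```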
janus#0150: Any word on this? Who should I ask? I'm extremely interested in the logit lens stuff
kindiana#1016: from a quick look of the code it looks like weight tie is default and not disabled for the runs
janus#0150: If you keep digging into this please share! I found the logit lens idea very counter intuitive, but if its true its would be an *incredible* interpretability tool. It would be good to try to explicitly construct the network to allow this kind of interpretability without a huge performance cost.
cfoster0#4356: I think you could probably construct some kind of "early exiting" GPT variant if you really want to encourage that kind of structure
kindiana#1016: Yeah there's been a lot of work along those lines I've seen
janus#0150: Yeah, like the loss is based on the prediction of each intermediate layer as well?
kindiana#1016: You can also have aux objectives
janus#0150: Any keywords for me to search literature?
janus#0150: By the way, are you guys committed to the same tokenization strategy OpenAI used? I think neo would be much better if you tokenized digits individually
janus#0150: I guess thats kind of off-hand, but there is surely something to be learned from GPT-3's bpe problems
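(The digit issue is easy to see directly; this just prints how GPT-2's BPE happens to split numbers of different lengths, without asserting any particular split:)

```python
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
for n in ["17", "170", "1700", "17000"]:
    print(n, "->", tok.tokenize(n))  # merge boundaries vary with length
```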
janus#0150: even fairly conservative changes should help
kindiana#1016: https://arxiv.org/abs/2006.04152
kindiana#1016: https://arxiv.org/abs/2004.12993
kindiana#1016: https://arxiv.org/abs/1807.03819
kindiana#1016: https://arxiv.org/abs/2012.03837
janus#0150: 💯 Thank you!!
kindiana#1016: (sorry for spam lol, but some references to early exit/intermediate objectives in literature)
kindiana#1016: it uses the same tokenizer as gpt2 for drop in compatibility 🤷
janus#0150: Interesting that these are all from a capabilities standpoint
Rina#0391: Hi |
Rina#0391: any jobs here?
Rina#0391: My family thinks i sit here and play video games when I run code in my browser all day
Rina#0391: they think i do nothing.....
Rina#0391: now they want me to go to rehab
janus#0150: Yeah, thats a fair point. But Neo could define a new standard! Just think, GPT-4 in gpt-neo format for drop in compatibility
Rina#0391: Any internship
Rina#0391: janus can i dm you
Rina#0391: i have questio
janus#0150: lmao
janus#0150: sure
Rina#0391: add me
Rina#0391: friend list?
Rina#0391: i need to
Rina#0391: I only have 1 day
Rina#0391: to get a job
Rina#0391: before rehab starts
Rina#0391: on my freaking birthday
Rina#0391: i need a job soon
Rina#0391: not bestbuy
Rina#0391: I want to work here |
Rina#0391: i have been hardcore prompt engineering
Louis#0144: Uh
guac#4716: hey i hope you get the best help but this probably isn't the best discord to vent mate :/
Louis#0144: What is going on here
Louis#0144: @Rina everyone here is an unpaid volunteer
cst#9766: some of us are grad students, which is arguably worse
Louis#0144: I think a large portion of us are grad students
Rina#0391: oh
zphang#7252: I have some upcoming work on this as well, but more on BERT-style models and NLU tasks
Rina#0391: sorry
zphang#7252: for some reason it doesn't work on electra tho
Kia#2550: Um,Wish you're fine and safe...
Kia#2550: Uh
Kia#2550: Have a great day to
Louis#0144: I found NLI and electra gets spooky
Louis#0144: If that’s at all relevant
janus#0150: Whats the angle of the paper?
zphang#7252: it was an interpretability paper on fine-tuned models, and then just veers off into a weird direction
zphang#7252: actually I might as well preview the results here
zphang#7252: So CKA is this method for comparing similarities of representations, so I applied it to every layer for task-tuned BERT-type models, on their CLS token |
zphang#7252: https://cdn.discordapp.com/attachments/729741769738158194/838939868721709127/unknown.png
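(For reference, the linear variant of CKA from Kornblith et al. 2019, which is presumably close to what's being computed here; a minimal sketch where X and Y are (examples, features) activations from two layers:)

```python
import numpy as np

def linear_cka(X, Y):
    X = X - X.mean(axis=0)  # center features
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

X = np.random.randn(100, 768)  # stand-in layer activations
Y = np.random.randn(100, 768)
print(linear_cka(X, Y))
```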
zphang#7252: surprisingly, we found that RoBERTa models have this weird block-diagonal structure to the representations
this means that the early representations are fairly similar, then at some point there's a break and the later representations are very similar
zphang#7252: you can see some links to the logit lens results here
zphang#7252: It also implies that you could maybe just drop the later transformer layers and get the same performance?
zphang#7252: Anyway, for ALBERT we see something similar (e.g. see RTE), but is slightly weirder cause all its layers are tied
But for ELECTRA this just falls apart
zphang#7252: So anyway, we tried the thing where we skip the later transformer layers, and see how well we do on tasks (1) with further fine-tuning of the head (2) without further fine-tuning
zphang#7252: https://cdn.discordapp.com/attachments/729741769738158194/838940856417255459/unknown.png
zphang#7252: (ignore ALBERT/HellaSwag, it looks borked)
kindiana#1016: x is layer?
zphang#7252: yup
zphang#7252: basically yes, for RoBERTa and ALBERT, you can usually drop the top few transformer layers *without further tuning* and get comparable performance
janus#0150: Wow, interesting
zphang#7252: (with further tuning of the head you get somewhat better performance as well)
zphang#7252: again, ELECTRA is weird because ???
janus#0150: @adamShimi ^ re: our conversation about what layers before the ouput are doing
zphang#7252: but yeah, the results are empirical and messy so it's been hard to properly write up
janus#0150: Yeah there is a lot going on
janus#0150: Interesting that its so discontinuous on some tasks and an S curve on others |
zphang#7252: lol, the next experiment in the paper I do some weird attention swapping thing
and my advisor told me "...yeah you need to split this into 2 papers or something"
janus#0150: What would this kind of experiment look like for gpt-neo? Chopping off intermediate layers and training a new logit layer for each?
zphang#7252: lol I was gonna say, can't really do it on gpt-3
kindiana#1016: yeah its kinda difficult for ar models :thonk:
kindiana#1016: you don't have any cls tokens
zphang#7252: early exit would be great for AR models
zphang#7252: because you can vary the exit point for every token
kindiana#1016: but then you can't do batched inference lol
kindiana#1016: don't think it will be worth it
zphang#7252: tru
zphang#7252: depends on your inference setup I guess
kindiana#1016: it helps if you do bs=1 inference, but I don't think most people do that
zphang#7252: might be useful for the "bs=1 inference" hobbyists
janus#0150: Forgive my ignorance but I'm not familiar with the bert or electra architectures. What do they output? Class probabilities? Is there thus no way (or expectation) that we could interpret intermediate layers in natural language?
zphang#7252: BERT-types do MLM (probabilities over tokens), ELECTRA does probability over real/fake token
zphang#7252: so you could still do the same thing with using the MLM head on earlier layers, at least for BERT-types
kindiana#1016: have people tried combining electra and mlm objectives?
zphang#7252: I've not seen any others use the electra objective (ELECTRIC maybe?), probably cause it requires training the generator model as well so it's not as brainless as MLM go brrr
kindiana#1016: yeah but if you are doing electra objective you can do mlm for "free" |
zphang#7252: that's true, I've not seen it so far at least
dmvaldman#4711: has anyone used "shower thought: " as a prompt for GPT3/GPT-neo?
EricHallahan#1051: > shower thought: _I don't know. This is going to sound like a stupid question._
>
> "Are you guys ready?" he said.
>
> They nodded.
>
> He took them through it, from where he sat on the couch. When he finished, he turned to Amy.
Now they have.
dmvaldman#4711: well thank you. it was worth a shot 🙂
EricHallahan#1051: You can visit #the-faraday-cage-archive for some interesting stuff.
dmvaldman#4711: woah
EricHallahan#1051: It is always hopping in there now.
EricHallahan#1051: (Lame Bunny pun not intended)
inox#5400: wow batbot's getting really good
inox#5400: that channel moves so fast I have no idea what changes @BoneAmputee has made
Louis#0144: Many times
dmvaldman#4711: any winners?
Louis#0144: Nah |