alexandrost#2936: I was wondering, in case we don't have a GPU that has enough Video RAM to load the whole GPT-Neo model, is it possible to use two GPUs for that?
alexandrost#2936: I have thought that one solution would be to re-train it using model-parallelism , but I was wondering whether there is a way to load the already available trained model on two gpus
alstroemeria313#1694: have you tried fp16
EricHallahan#1051: You can try binary16 or bfloat16.
alexandrost#2936: @alstroemeria313 No I haven't. Forgive my ignorance , what is the resolution of the current model?
alstroemeria313#1694: i was able to use neo 2.7b in half precision
alstroemeria313#1694: for inference
EricHallahan#1051: 1.3B was trained at bfloat16, 2.7B at binary32.
alexandrost#2936: thanks @alstroemeria313 and @EricHallahan ! that's very helpful
EricHallahan#1051: (Blame Sid for the discrepancy, it was a mistake on his part.)
alexandrost#2936: @EricHallahan is it possible for a binary32-trained model to be loaded in binary16 mode? for inference?
Sid#2121: :guilty: the voices told me to
EricHallahan#1051: Yes, but when sampling you need to be careful.
mkualquiera#3484: if we get acausally invaded you all know who to blame
Sid#2121: I would use bfloat16 over binary16 if you can
EricHallahan#1051: There is a bug in PyTorch that means that `torch.multinomial` is broken at binary16, I haven't tested bfloat16.
alexandrost#2936: do you think that could be done through Huggingface? or should I turn to the Eleutherai repo?
Sid#2121: just cast the logits to 32
Sid#2121: (you should be doing that anyway)
EricHallahan#1051: No, I am exclusively referring to Transformers.
alstroemeria313#1694: can you just replace it w/ the gumbel-max trick
EricHallahan#1051: I'm saying that using HF naively will cause you problems with sampling.
alstroemeria313#1694: ah
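A minimal sketch of what the half-precision inference being discussed could look like, assuming the standard Hugging Face `transformers` API; the prompt, loop, and generation length are illustrative, and the fp32 cast (or the gumbel-max trick) is the workaround mentioned above:
```
import torch
from transformers import AutoTokenizer, GPTNeoForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B").half().cuda().eval()

ids = tokenizer("EleutherAI is", return_tensors="pt").input_ids.cuda()
with torch.no_grad():
    for _ in range(32):
        logits = model(ids).logits[:, -1, :]
        # cast to fp32 before sampling: torch.multinomial misbehaves at fp16
        probs = torch.softmax(logits.float(), dim=-1)
        # alternative (gumbel-max trick, avoids multinomial entirely):
        # next_token = (logits.float() - torch.log(-torch.log(torch.rand_like(logits.float())))).argmax(-1, keepdim=True)
        next_token = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_token], dim=-1)
print(tokenizer.decode(ids[0]))
```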
alexyz#3459: @bmk what's the openinstruct model, looks interesting
EricHallahan#1051: Training on eval harness stuff IIRC.
alexyz#3459: 👍
bmk#1476: test model please ignore
alexyz#3459: also, https://huggingface.co/lg/ghpy_40k doesn't have the tags for HF's hosting inference API to work with it
alexyz#3459: 👍
bmk#1476: nothing on my hf account has any guarantees
bmk#1476: in fact i make everything confusingly named on purpose
bmk#1476: to deter people from deciding for whatever weird reason to use any of it
bmk#1476: like literally who is downloading these models?? https://cdn.discordapp.com/attachments/729741769738158194/845081842649530378/unknown.png
bmk#1476: please stop downloading them
EricHallahan#1051: You should just not make them public.
bmk#1476: i can 100% assure whoever it is thats downloading these that it's not what theyre looking for
bmk#1476: but i need to use them
bmk#1476: and i dont wanna log into hf
EricHallahan#1051: ¯\_(ツ)_/¯
bmk#1476: also can i even make models private?
bmk#1476: isnt that a paid-only thing
EricHallahan#1051: It is.
bmk#1476: then why the heck did you suggest it
EricHallahan#1051: You don't need to put models on HF to use them with HF.
bmk#1476: it makes life significantly easier
bmk#1476: otherwise i need to store the models somewhere and move them around whenever i need them
alexyz#3459: isn't this the reason gpt-neo 125m got public
alexyz#3459: just to make it easier to do evals
bmk#1476: yes
bmk#1476: and everyone else has wanted to murder me for it since
alexyz#3459: what happened to 350M though?
alexyz#3459: wasn't it also "released"
bmk#1476: ok lemme set this straight
bmk#1476: the 125 and 350 models are not officially "released" in any capacity
bmk#1476: please dont use them
bmk#1476: there are no guarantees about the model
alexyz#3459: I know, i'm trying to portray sarcasm, i know
bmk#1476: there are no guarantees about any of the other models, either, but even less guarantees about 125 and 350
alexyz#3459: there's a reason they got no announcement
bmk#1476: ;------; https://cdn.discordapp.com/attachments/729741769738158194/845083748986519573/unknown.png
bmk#1476: who added this to the 125M model
bmk#1476: this is wrong
bmk#1476: @StellaAthena
alexyz#3459: how many steps was it trained for?
bmk#1476: it says this on all 3 models
bmk#1476: literally only the 2.7B model was trained for 400k steps
bmk#1476: the 1.3B model was trained for 380k steps
EricHallahan#1051: Why does it even have a model card?
alexyz#3459: guess someone just duplicated the model cards :berk:
bmk#1476: idk!
alexyz#3459: for some reason it's the 12th most downloaded model on Hugging Face for text generation
EricHallahan#1051: Because people take "GPT-Neo is open source GPT-3" far too seriously.
EricHallahan#1051: It pretty much is, but there are physical constraints on performance.
EricHallahan#1051: You could make the same claim about GPT-2.
bmk#1476: i dont remember how many iters we trained our small model for but it def wasnt 400k
bmk#1476: ugh
bmk#1476: i fucking hate documentation
EricHallahan#1051: So farm it out (to someone like me)
bmk#1476: @Sid do we have any ideas how many iters 125M was trained for?
bmk#1476: i cant find the original config file
bmk#1476: ok i think i have it
bmk#1476: 572,300 iters
bmk#1476: note to self: model dir is `gs://neo-d/models/GPT3_SMALL/`
Kharr#7888: So it was trained for more steps than the others?
bmk#1476: but smaller batch size
bmk#1476: i worked this all out at some point
EricHallahan#1051: You also have to remember that it is way less sample efficient too because it is a tiny model.
Kharr#7888: I'm just curious because its latent space is very well conditioned
bmk#1476: https://discord.com/channels/729741769192767510/827301113027756042/829104277489582101
𓅬 gabriel_syme 𓅬#3220: do you mean in downstream tasks? sry, I'm way behind on NLP terms
Kharr#7888: Yes, the model is actually really good for its size. One of the best for clustering text.
𓅬 gabriel_syme 𓅬#3220: thank you! will try it 🙂
bmk#1476: ok ive updated all the huggingface things with the right numbers of iters and tokens
DanHendrycks#8913: Hi all. @chilli , others, and I found that GPT-Neo is clearly the strongest code generation model.
https://twitter.com/DanHendrycks/status/1395536919774121984
kindiana#1016: have you seen @bmk 's models fine tuned on github?
DanHendrycks#8913: No, we fine-tuned GPT-Neo ourselves (it took about a week).
alexyz#3459: That's due to the Pile having Github in the dataset I assume
alexyz#3459: so it already has some text in it
kindiana#1016: oh cool
DanHendrycks#8913: Yes, though we pretrained GPT-2 models on 30GB of github as well.
alexyz#3459: Pretrained?
alexyz#3459: oh now I see
bmk#1476: Hm then why would pile models still do better?
EricHallahan#1051: Because it is bigger?
DanHendrycks#8913: Well, a guess is that (1) we trained on only 30GB, (2) Neo is bigger, and (3) the architecture dimensions are more tuned compared to GPT-2.
EricHallahan#1051: :morelayers:📈
kindiana#1016: I think an ablation of different beam params would be interesting
DanHendrycks#8913: Noted. We'll also look into top-10/best of 10 accuracy instead of top-1 accuracy.
bismarck91#5255: https://twitter.com/DanHendrycks/status/1395536919774121984
bmk#1476: NEO STRONK https://cdn.discordapp.com/attachments/729741769738158194/845099938475016203/unknown.png
bmk#1476: @kindiana you should totally give hendrycks a copy of the 6B model to see how it does
bmk#1476: also this creates incentive to write the jax to whatever other format code lol
bmk#1476: btw how many tokens is 6B at rn?
kindiana#1016: 250B
bmk#1476: ah ok so we're close
kindiana#1016: I think I'll wait til 350
kindiana#1016: so its closer to iso-flops
bmk#1476: can you at least save a copy at 300B?
kindiana#1016: I save a copy every 10b
bmk#1476: and you didnt discard any?
kindiana#1016: I checkpoint every 0.5b
kindiana#1016: but keep one in 20 checkpoints
bmk#1476: Ah
alexandrost#2936: are you training a 6B model?
kindiana#1016: I am :hap:
alexandrost#2936: 🤤
EricHallahan#1051: :gameryes:
alexandrost#2936: sounds exciting!
alexandrost#2936: see you later guys, time for sleep
alexyz#3459: is bigger better there?
Kia#2550: :hap: Nice
𓅬 gabriel_syme 𓅬#3220: yeah, they are percentage of success in tests I think
bmk#1476: I sure hope so
AI_WAIFU#2844: This means that EleutherAI is SOTA when it comes to recursive self improvement.
zitterbewegung#4846: hi
zitterbewegung#4846: I read some stuff by Louis Castricato about using modal logic to express narratives and using commutative diagrams
zitterbewegung#4846: are they on this discord it seems like they are
EricHallahan#1051: @Louis
Kia#2550: Yup the guys is here
Kia#2550: Also never knew the guy has a last name
zitterbewegung#4846: so they are honk or they goose
zitterbewegung#4846: @honk i've been using blenderbot to try to make a visual novel
zitterbewegung#4846: i studied some modal logic and category theory in college
Kia#2550: Ow You can probably talk about this in #multimodal or probably stay in this channel
Kia#2550: Probably Louis is asleep for the moment
zitterbewegung#4846: okay just wondering where i should be talking about this
EricHallahan#1051: You should probably send him a DM.
Kia#2550: This ^
Louis#0144: Hi
Louis#0144: I’ve never had someone join here looking for me LMAOO
Kia#2550: He's awake
Kia#2550: Never knew you actually released papers about these things
Louis#0144: What is blender bot?
Louis#0144: I tried doing this with CLIP
Louis#0144: It didn’t really work
EricHallahan#1051: blenderbot
Louis#0144: I posted it on twitter
Louis#0144: Let me find it...
zitterbewegung#4846: https://huggingface.co/transformers/master/model_doc/blenderbot.html
Louis#0144: https://twitter.com/lcastricato/status/1393361923660464146?s=21
zitterbewegung#4846: so right now i have a really simple game
Louis#0144: I tried animating an erupting volcano with clip
Louis#0144: By progressively increasing the size of the mask
Louis#0144: And amplifying the term “lava”
Louis#0144: Does that count?
Louis#0144: It worked ok
Louis#0144: I couldn’t get it to do more interesting stuff
zitterbewegung#4846: oh that looks cool
Louis#0144: The notebook is pinned in #art
Louis#0144: Feel free to experiment
Louis#0144: It’s all google colab
Kia#2550: What kind of game? Nonetheless we can talk about this in #off-topic, really seems interesting
zitterbewegung#4846: okay ill talk with honk on multimodal and kianne on offtopic
Louis#0144: I’m very busy rn tbh
Louis#0144: If you DM me questions I can get them whenever
Louis#0144: Just pls concise questions only
Kia#2550: Nonetheless you can talk to Louis for the moment, we can talk later
Kia#2550: Have a great day, both of you, and stay safe
alexyz#3459: blenderbot is supreme chatbot
alexyz#3459: there are 90M, 2.7B, and 9.4B models
alexyz#3459: it's much better than DialoGPT
Louis#0144: WOAH
alexyz#3459: like why do people use DialoGPT when it exists
Louis#0144: How good is it
Louis#0144: Did they release the 9.4b model
alexyz#3459: Yep, Facebook actually released it
Louis#0144: Damn
alexyz#3459: dunno why it's not on HF
alexyz#3459: but it's released
alexyz#3459: There's also 400M and 1.4B distilled models
alexyz#3459: But people still ignore it 😠
Kia#2550: It's probably not that great
alexyz#3459: I've used it
alexyz#3459: it's great
alexyz#3459: the 2.7B model can run in Colab
Kia#2550: Ow :thonk:
alexyz#3459: it actually chats pretty well
alexyz#3459: DialoGPT is horrible, and yet people download that one more
Kia#2550: That's nice to hear
alexyz#3459: like it'll go into repetitive loops and other weird things like that
alexyz#3459: Dunno why more people haven't heard of Blenderbot
alexyz#3459: There's even a feature where you can give it an identity, like "you are a friendly, nice, kind, tech support person" or something like that
alexyz#3459: you could put whatever you want, and it'd take it as its identity
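A minimal sketch of chatting with one of the released checkpoints through `transformers`, assuming the `facebook/blenderbot-400M-distill` weights on the Hub; a persona line like the one described above would just be prepended to the input text:
```
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

name = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

inputs = tokenizer("What do you like to do for fun?", return_tensors="pt")
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```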
𓅬 gabriel_syme 𓅬#3220: that's pretty cool
Kia#2550: So adorable
alexyz#3459: Of course, LaMDA looks like it'd blow it out of the water
𓅬 gabriel_syme 𓅬#3220: does it run in COLAB? 😄
alexyz#3459: but LaMDA will probably never be released open-source
alexyz#3459: Well, the 2.7B and lower models do
alexyz#3459: the largest one requires something like 32GB of VRAM to run tho
𓅬 gabriel_syme 𓅬#3220: oh, I figured it was just :morelayers:
Kia#2550: Google barely releases any papers about that...Nonetheless it's interesting but I want to read
alexyz#3459: well from the demos at Google IO it looks really capable
alexyz#3459: like it also has the identity thing
alexyz#3459: at the demos they spoke to the planet Pluto and a paper airplane
Kia#2550: We can probably do it with neo
alexyz#3459: Probably
Kia#2550: Probably yeah
alexyz#3459: but nobody's tried yet 🤔
Kia#2550: ~~Talking to a goosegirl~~
Kia#2550: Talking to a Tree would be nice to be honest
alexyz#3459: would be interesting
alexyz#3459: Hi.
**Hello, how are you doing today?? I'm good, how about you?**
I'm good too!
**What do you like to do for fun? I like to play video games.**
I like to play video games too!
**What is your favorite video game to play??? I love playing video games**
My favorite is Minecraft
**I've never played that one, but I've heard a lot about it.**
What's your favorite game?
**I don't really have a favorite, I like a lot of different types of games.**
What video game have you played last though?
**I haven't played a video game in a while, I've been too busy with school.**
Yeah, so have I, what's your least favorite course?
**I really don't have a least favorite, they're all fun in their own way.**
What courses do you have, then?
**I've taken a few art courses, and a few business courses. What about you?**
Ah, just this and that. What are your other interests, besides video games?
**I like to read a lot, especially science fiction and fantasy novels. You?**
alexyz#3459: Blenderbot 2.7B
alexyz#3459: bold is the bot
alexyz#3459: Honestly I'm surprised by how good it is to chat with, last time I chatted with Blenderbot was last year
alexyz#3459: guess i forgot how good it is
alexyz#3459: but it does seem like it evades some questions, like if I ask what its favorite video game is
alexyz#3459: I kept the convo going, and then it evaded a question about its favorite novel
alexyz#3459: dunno if I'm imagining it or not
alexyz#3459: Well, I like to read fantasy novels too!
**Do you have any favorite authors??? Any recommendations?**
Well I like the Harry Potter series, my brother likes the Wings of Fire series
**I haven't read that one, but I have heard of it. Is it good?**
Well Harry Potter is good! I really like it, but what book series would your recommend?
**I don't really have any recommendations, I'm not much of a reader.**
well you just said you like to read a lot though
**I do, I just don't have time to read as much as I'd like to.**
Well, what do you like to read when you have the time?
**I usually read non-fiction books. I like to learn about different topics.**
I'm asking about what is your favorite fiction novel
**Oh, sorry, I misinterpreted your question. My favorite is The Catcher in the Rye.**
alexyz#3459: Bold is the bot as like before
EricHallahan#1051: Can you please not spam #general? Maybe go to #the-faraday-cage-archive?
Jozef Poniatowski#7589: maybe dreams work like language models
Jozef Poniatowski#7589: recursively generating the dream states
Jozef Poniatowski#7589: in that sense current lms could be "constantly dreaming"
Jozef Poniatowski#7589: since they can't interact with any kind of external feedback
Jozef Poniatowski#7589: during inference
cfoster0#4356: Yeah I think this intuition is basically correct
cfoster0#4356: Dreaming as rolling out counterfactual world state transitions, not clamped by sensory inputs
EricHallahan#1051: PyTorch XLA, I suggest reading the docs.
kindiana#1016: not yet
kindiana#1016: its pretty buggy ime
kindiana#1016: very easy to fall off performance cliffs
Jozef Poniatowski#7589: yeah, im wondering if the "state transition rollout model" == the part of the brain that does dreaming == decoder-only LM
is something separate from an "interpreter",
like when you dream, it could be that the rollout model keeps simulating state transitions, but the interpreter is half asleep or even not awake
you could argue that the rollout model == decoder only model can and already does interpretation,
but maybe there's some unknown benefit to separating the two?
Jozef Poniatowski#7589: such as, if you let a separate intepreter inform the generator you can guide the generator in a way that it can't by itself. although i can't see why a decoder alone couldn't achieve the same effect
spirit-from-germany#1488: 🙂 https://cdn.discordapp.com/attachments/729741769738158194/845176965979832390/unknown.png
spirit-from-germany#1488: https://t.co/90HrYv4QW9?amp=1
Kia#2550: I saw a paper or a test earlier about this
Kia#2550: And wow
spirit-from-germany#1488: *Recent models such as GPT-Neo can pass approximately 15% of the test cases of introductory problems, so we find that machine learning models are beginning to learn how to code.*
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/845177550343110666/SmartSelect_20210520-235342_Chrome.jpg
bmk#1476: :ultrazucc:
bmk#1476: citation go brrrrrr
spirit-from-germany#1488: Congrats, guys. You made it into a UC Berkeley paper 🙂
spirit-from-germany#1488: I hope many more are to come 🙂
spirit-from-germany#1488: ( At some point GPT-Neo could write the code for you 😄 )
Kia#2550: Codenet: :ultrazucc:
ersatz#0001: is there a list of times GPT-Neo has been cited in a paper (even a preprint)?
Sid#2121: http://vm.eleuther.ai:8999/
Sid#2121: we only just added a citation field for it lol - most of the previous citations for neo we've directed toward the pile paper - since that's the main difference between neo and gpt2 and our main contribution https://www.semanticscholar.org/paper/The-Pile%3A-An-800GB-Dataset-of-Diverse-Text-for-Gao-Biderman/db1afe3b3cd4cd90e41fbba65d3075dd5aebb61e
ersatz#0001: Thank you for this
alexandrost#2936: if we wanted to train the GPT-Neo model with a higher number of parameters, do you think the Pile dataset would be enough? or would we require a much larger dataset as well?
Daj#7482: Pile is enough until well into the trillions of parameters
alexandrost#2936: oh I see, thanks @Daj
Homo Ludens#5962: Looks good, can be tried: https://bit.ly/3oAqA3N
StellaAthena#3530: It says I have zero papers 🤨
EricHallahan#1051: https://eleuther.ai/publications/
Homo Ludens#5962: Yea, it is relatively new :/
Homo Ludens#5962: Also, I thought the system could find a lot of contrasting articles, but only a few :/
chilli#5665: fwiw, ross wightman has recently been using it a lot
Daj#7482: https://twitter.com/ID_AA_Carmack/status/1395817564618596352?s=19
EricHallahan#1051: char GPT-Neo wen
AI_WAIFU#2844: I think someone has done something like this before, bit GPT-Neo wen
bmk#1476: whenever someone implements it in neo
bmk#1476: if you can implement something in neo and the tokenization script, i can run it easily
Daj#7482: He means sub char, literally only two input tokens
AI_WAIFU#2844: GPT-Neobit
EricHallahan#1051: `byte` = `char`
bmk#1476: *angry unicode noises*
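A minimal sketch of what a two-symbol (bit-level) token stream could look like — purely illustrative, not anything implemented in NeoX:
```
def text_to_bits(text):
    # every UTF-8 byte becomes eight tokens from a vocabulary of size 2
    return [(byte >> i) & 1 for byte in text.encode("utf-8") for i in range(7, -1, -1)]

def bits_to_text(bits):
    # regroup the bit tokens into bytes and decode
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="replace")

assert bits_to_text(text_to_bits("goose")) == "goose"
```
The catch is sequence length: 8x more tokens than byte-level, and something like 30x more than BPE.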
Daj#7482: Imagine what a nightmare for interpretability this would be lol
AI_WAIFU#2844: interpretability is wishful thinking
Daj#7482: so is all of alignment lol
Daj#7482: Worth a shot
AI_WAIFU#2844: fair
Daj#7482: Interpretability is at least unusually pareto efficient in the sense that it barely advances capabilities and could have huge upsides for alignment if it works
EricHallahan#1051: I thought it was already in NeoX for some reason?
Daj#7482: Most other alignment stuff is so deeply tied up with capabilities enhancement
EricHallahan#1051: But I can't find any reference.
EricHallahan#1051: ¯\_(ツ)_/¯
Daj#7482: We could make a bitNeo and send it to John lol
StellaAthena#3530: NeoX does support char level tokenization
Daj#7482: I'm sure he'd appreciate it
Daj#7482: bit is not char level
AI_WAIFU#2844: that's a good point, I just don't expect it to really work or scale, so it's a bit of a long shot
Daj#7482: Same 👍
Daj#7482: But it's not a longer shot than e.g. Agent Foundations
Daj#7482: imo
Daj#7482: Sufficiently advanced Interpretability is indistinguishable from Agent Foundations :berk:
AI_WAIFU#2844: funny enough, I feel like agent foundations has informed my views on prosaic AGI and prosaic alignment quite a bit, it's why I've refrained from immediately shitting on infrabayes as a waste of time
Daj#7482: Also same lol
inox#5400: train stacks of VQ-VAEs on the bitstream to compress and then train GPT on the tokens at different scales
AI_WAIFU#2844: isn't that what vq-vae already is?
EricHallahan#1051: \*cough\* \*cough\*
https://discord.com/channels/729741769192767510/832685145114935387/840010565799116861
https://discord.com/channels/729741769192767510/832685145114935387/840781331751239711
inox#5400: it has autoregressive discrete latent variables but you could train another VQ-VAE over a set of those from sequential blocks
AI_WAIFU#2844: right, but isn't that exactly what they do in the paper?
inox#5400: which VQ-VAE paper?
inox#5400: I think what I'm saying is a really dumb way to do https://arxiv.org/abs/2103.06089
AI_WAIFU#2844: https://arxiv.org/pdf/1906.00446v1.pdf
AI_WAIFU#2844: the good one
EricHallahan#1051: https://arxiv.org/abs/1906.00446
AI_WAIFU#2844: thx
inox#5400: > We demonstrate that a multi-scale hierarchical organization of VQ-VAE, augmented with powerful priors over the latent codes, is able to generate samples...
yeah you're right
inox#5400: is everyone agreed that the vector quantisation stuff in this paper can be replaced by a gumbel-softmax now after this paper and DALL-E? https://arxiv.org/abs/2007.07307
alstroemeria313#1694: @inox do they use a straight-through gumbel-softmax or a normal one?
alstroemeria313#1694: i haven't figured out what temperature is ok to use w/ soft gumbel-softmax for the dall-e vae
alstroemeria313#1694: i just know straight-through works
UnsupervisedLearner#4148: Has anyone successfully applied a derivation of RoPE for images?
inox#5400: I think it's just the regular gumbel-softmax and I think the temperature probably has to be scheduled? although, looks like the paper where they actually show that is http://bayesiandeeplearning.org/2017/papers/54.pdf and they don't schedule the temperature there
inox#5400: they set it to 0.5
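A minimal sketch of the variant under discussion, assuming a DALL-E-style codebook lookup; τ = 0.5 follows the setting mentioned above, and `hard=True` gives the straight-through estimator:
```
import torch.nn.functional as F

def gumbel_softmax_quantize(logits, codebook, tau=0.5, hard=True):
    # logits: (batch, n_codes); codebook: (n_codes, dim)
    # hard=True: discrete one-hot codes on the forward pass,
    # soft gumbel-softmax gradients on the backward pass
    weights = F.gumbel_softmax(logits, tau=tau, hard=hard)
    return weights @ codebook
```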
EricHallahan#1051: 2D RoPE?
EricHallahan#1051: You can make it toroidal.
UnsupervisedLearner#4148: That's the obvious way to apply it I guess just wondering if it's been done already
EricHallahan#1051: I don't remember but I think someone tried it?
𓅬 gabriel_syme 𓅬#3220: I guess lucid might have tried it? And also check in #vision, there might be some tests in there (was it guac?)
𓅬 gabriel_syme 𓅬#3220: here: https://discord.com/channels/729741769192767510/833024668780068894/844990748998762533
𓅬 gabriel_syme 𓅬#3220: seems to do RoPE things, i.e. be better pretty much everywhere 🙂
Deleted User#0000: https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/rvt.py
Deleted User#0000: @UnsupervisedLearner feel free to try it
UnsupervisedLearner#4148: Thanks.
Am I understanding correctly that adding more dimensions means splitting up the resulting embedding? Where, for two position coordinates, one half is for coordinate m/n, the other half is coordinate i/j (using notation in the blog) and the two halves are concatenated?
EricHallahan#1051: Yes, you need to cut your embedding into *n* dimensions.
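A minimal sketch of that axial split: rotate one half of each head dimension by the row position and the other half by the column position (a sketch of the idea, not lucidrains' exact implementation):
```
import torch

def rotate_1d(x, pos, base=10000):
    # x: (..., d) with d even; pos: (...) integer positions
    d = x.shape[-1]
    inv_freq = base ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)
    theta = pos[..., None].float() * inv_freq  # (..., d/2)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    # rotate each (x1, x2) pair; the output pair layout is permuted, which is
    # fine as long as queries and keys go through the same function
    return torch.cat([x1 * theta.cos() - x2 * theta.sin(),
                      x1 * theta.sin() + x2 * theta.cos()], dim=-1)

def rotate_2d(x, row, col):
    # concatenate a row-rotated half and a column-rotated half
    half = x.shape[-1] // 2
    return torch.cat([rotate_1d(x[..., :half], row), rotate_1d(x[..., half:], col)], dim=-1)
```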
UnsupervisedLearner#4148: This all seems strikingly similar to fourier features
EricHallahan#1051: It is very similar to Fourier features.
UnsupervisedLearner#4148: Probably some unifying theory that generalizes all this in a neat way someone way smarter than me is going to release a paper or blog on in the next few months or so
cfoster0#4356: Likely
Goop#8368: is my discord glitched or did lucid leave the server 🤨
AI_WAIFU#2844: yeah he left, we have a bot to give him back his role every time he shows up
EricHallahan#1051: You are not powerful enough to summon lucidrains
Goop#8368: ah
Goop#8368: AH |
Teemochu#8740: *summons lucidrains*
Goop#8368: his github stuff is great, so I'm planning on using it lol
EricHallahan#1051: Blast from the past for me:
https://discord.com/channels/729741769192767510/795089627089862656/805567895244636200
Louis#0144: *gathers a flock of geese and begins dancing around a loaf of bread*
Louis#0144: god thats a lot of geese
Louis#0144: this is truly a bird enthusiast server
EricHallahan#1051: 🦆
Louis#0144: get that out of here
Louis#0144: wtf
EricHallahan#1051: 🐰
Louis#0144: hes back now
Louis#0144: u did it
Louis#0144: congrats
Goop#8368: hey, you gathered the flock of geese!
Louis#0144: #teamwork
alexandrost#2936: 😄
cst#9766: this is a super unimportant question, but lucidbrains leaves often enough to put together a bot to re-assign his roles?
Louis#0144: yes
Louis#0144: sometimes multiple times an hour |
EricHallahan#1051: I give him a lot of credit for it.
EricHallahan#1051: It is really easy to become unproductive on Discord lol
cst#9766: Fair, I've noticed myself suffering from the same thing
cst#9766: case in point, kinda
EricHallahan#1051: (I suffer from it way too much.)
alexandrost#2936: I was wondering. If someone was to help with this project, which area would be most important to focus on?
EricHallahan#1051: Are you talking about GPT-Neo?
alexandrost#2936: yes, (or GPT-NeoX I guess?)
EricHallahan#1051: We have a lot of projects. :berk:
EricHallahan#1051: Is Sid around?
alexandrost#2936: @EricHallahan I guess I should take a look at the gpt-neo and gpt-neox repo tickets?
EricHallahan#1051: That is a good place to start.
EricHallahan#1051: Getting familiar with the code cannot be a bad thing.
alexandrost#2936: thanks. Regarding transformers know how, is there some resource you would suggest? (something a bit more developer-friendly than the initial research papers?)
alexandrost#2936: I'm taking these "LazyProgrammer"'s Udemy courses but I'd like to try others as well
EricHallahan#1051: *Attention Is All You Need* is actually surprisingly easy to read in comparison to most papers, I do recommend at least trying to check it out.
You can also look at The Illustrated Transformer, which is our go-to resource for beginners: http://jalammar.github.io/illustrated-transformer/
alexandrost#2936: thanks @EricHallahan , will give that a go!
UnsupervisedLearner#4148: No idea how good this suggestion is in practice, but in theory Huggingface docs should have some stuff for developer specific knowledge
alexandrost#2936: @UnsupervisedLearner thanks, I definitely also need to dig more into the Huggingface docs! |
EricHallahan#1051: HF docs can be a pain to navigate unless you know where to look sometimes, so beware rough edges when trying to look for things.
alexandrost#2936: thanks, will keep in mind
Sid#2121: I've been summoned?
EricHallahan#1051: Oh, do you have anything for him to do?
Sid#2121: adding downstream tasks to neox would be good
Sid#2121: https://github.com/EleutherAI/gpt-neox/tree/main/tasks there is this tasks folder we pretty much haven't touched since we started
Sid#2121: unfortunately lm thunderdome doesn't seem to play well with distributed code so i kinda gave up on that route
EricHallahan#1051: What was the problem?
Sid#2121: with lm thunderdome?
EricHallahan#1051: Yeah.
Sid#2121: idk, it sorta worked but leo told me to expect everything to break with distributed code
Sid#2121: https://github.com/EleutherAI/gpt-neox/commit/f1c076fd5164cb29f8e8ad0aa822803edc0ba152 here's my WIP
alexandrost#2936: will take a look, thanks!
Sid#2121: it was also pretty slow
inox#5400: Attention Is All You Need has an annotated version to make it even easier to read: https://nlp.seas.harvard.edu/2018/04/03/attention.html
alexandrost#2936: this is actually amazing
Teemochu#8740: I tried running this locally, incidentally, and got an error with the translation dataset due to the Google drive being removed [after having made a few changes to make it runnable on the latest versions of stuff]
inox#5400: friggin bit rot
Teemochu#8740: The offending part is `datasets.IWSLT` (which I believe I had to rename `legacy.datasets.IWSLT`)
Bruce23#6204: Hi, GPT neo is really awesome! I am wondering if it can already do stuff GPT-3 can do, in particular: Input: List of Slang Names About People Who Read Alot: GPT3 output: Bookworm, Book geek, Book worm, Book toting nerd, Book nerd, Nerd |
Kia#2550: :thonk:
Kia#2550: Well perfect timing
Daj#7482: It can theoretically do anything that is a text completion task, how good it will _actually_ be you have to try yourself
Bruce23#6204: Hm, I can't seem to tweak it that way
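For reference, a minimal sketch of attempting that kind of list completion with the HF pipeline — the prompt wording and sampling settings are illustrative guesses that would need tuning:
```
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")
prompt = "List of slang names for people who read a lot:\n1. Bookworm\n2."
out = generator(prompt, max_length=64, do_sample=True, temperature=0.8)
print(out[0]["generated_text"])
```
Framing the task as a partially filled list tends to help smaller models stay on format.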
Jozef Poniatowski#7589: is anyone familiar with how the longformer local attention is implemented?
paulbricman#2527: Is there any plan for a TFGPTNeoModel? Thinking of the best way of running GPT-Neo in a web environment and TF at least has tensorflow.js... Compiling libtorch to wasm looks like a huge headache. Edit: waait a second, it seems that you can convert a pytorch model to tf
EricHallahan#1051: Not on HF as far as I am concerned.
Kharr#7888: You can convert the model to any framework you want. The weights of a DNN are just a dictionary of floating point numbers. You'll just have to implement the functionality and load the weights.
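A minimal sketch of that point — dump the state dict to framework-neutral NumPy arrays (you still have to reimplement the forward pass on the other side):
```
import numpy as np
from transformers import GPTNeoForCausalLM

model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
weights = {name: t.detach().cpu().numpy() for name, t in model.state_dict().items()}
np.savez("gpt_neo_weights.npz", **weights)  # loadable from anything that reads npz
```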
EricHallahan#1051: We do not have any plans to make a TF version of the Neo models right now, especially because the current HF model is pretty poorly implemented. It would probably be worth converting the weights to use GPT2Model if you really wanted to use TF+HF to run Neo.
EricHallahan#1051: Besides, we don't maintain the implementation anyway.
EricHallahan#1051: (https://eleuther.ai/faq)
Louis#0144: It would be awesome if they could reimplement it with pipeline parallelism support
EricHallahan#1051: Or you could use our repository lol
Louis#0144: Or that yes
finetune#0907: you can convert hf gpt neo models to hf gpt2 models. if you have a way to convert those to tf based gpt2, might work
Bruce23#6204: I really enjoy GPTneo, but huggingface has been pretty disappointing. the model crashes and huggingface needs 12+ hours to reboot them
Bruce23#6204: Is there an alternative?
kurumuz#5695: not sure if 2.7B on tensorflow js is a good idea haha
kurumuz#5695: @Bruce23 their inference api?
Bruce23#6204: Yes |
kurumuz#5695: that sounds crazy, well you need to pay 500$ to get decent inference.
kurumuz#5695: what do you need the inference for and what is your scale
kurumuz#5695: from huggingface i mean
Bruce23#6204: i am currently on the Lab plan for $199. i would upgrade it but the models are down 99% of the time we need them
Bruce23#6204: its unreliable 😦
kurumuz#5695: @Bruce23 did you pin the model?
kurumuz#5695: if you're doing something big scale seriously, manage your own gpus
kurumuz#5695: huggingface is not worth
Louis#0144: Yeet
Louis#0144: HF doesn’t provide v good hosting yet
kurumuz#5695: @Bruce23 you have so many gpu providers you can source from
kurumuz#5695: its more work as you need automation
kurumuz#5695: but it should pay off
Bruce23#6204: I didnt pin the model yet. But the support told me that the model crashes and they need to manually fix this. I guess pinning wont help
kurumuz#5695: @Bruce23 vanilla gpt-neo?
kurumuz#5695: uh oh
kurumuz#5695: sounds like a fuckup on their side
Bruce23#6204: i dont mind setting this up myself. but it needs to be more reliable 😐 What GPU hoster would you recommend?
kurumuz#5695: as i said it depends on your scale
Bruce23#6204: maybe a few hundred requests per hour, max! |
kurumuz#5695: maybe wait a bit to see if huggingface can fix it
kurumuz#5695: @Bruce23 well, even one gpu would sleep most of the time then
kurumuz#5695: 24 hours × $0.85 × 30 days ≈ $612 is your monthly cost on a v100
Louis#0144: You don’t even need a v100
Bruce23#6204: is v100 the minimum?
Louis#0144: Finetune got it working for inference on a 2080ti
Louis#0144: By lazily loading chunks
kurumuz#5695: you dont need that
Louis#0144: I’m sure a 1080ti would work too
kurumuz#5695: you just need fp16
Louis#0144: No no no
kurumuz#5695: i got it working on 2080
Louis#0144: Do not use fp16
kurumuz#5695: why
kurumuz#5695: 1:1 evals
Louis#0144: It screws with inference
kurumuz#5695: good generation
Louis#0144: On longer sequences
kurumuz#5695: it does not
Louis#0144: A lot |
Louis#0144: Yes it does. We have confirmed this internally
Louis#0144: Not just the grounding project
kurumuz#5695: with hf transformers? i think they had a sampling bug
Louis#0144: Bmk confirmed this as well on the eval harness
asparagui#6391: how about bf16
Louis#0144: That should work fine
kurumuz#5695: well, we dont use more than 1024 tokens
kurumuz#5695: that might be the reason
finetune#0907: maybe they're running into the oom bug :berk:
Bruce23#6204: honk where do you host your model?
kurumuz#5695: OOH
kurumuz#5695: lol
Louis#0144: I can get it to produce nan logits using seq length 512
Louis#0144: I don’t
Louis#0144: I’m a researcher
Louis#0144: Well we have a model we’re about to release soon
Louis#0144: VQGAN on wikiart
kurumuz#5695: oh damn
Louis#0144: We’re figuring out where to host it rn
Bruce23#6204: So any concrete hostingcompany recommendation? |
Bruce23#6204: 😄
Louis#0144: No
Louis#0144: Sorry
Bruce23#6204: Ok
kurumuz#5695: just research on google haha
kurumuz#5695: one v100 is more than enough for your needs
Kharr#7888: Just wait for HF to fix their stuff -- they tend to eventually fix stuff
kurumuz#5695: eventually
finetune#0907: kinda funky. did quite a bit of testing with full length sequences and never saw any nans with fp16. finetuning worked fine for me too.
kurumuz#5695: *waits for the whole eternity*
Kharr#7888: Did you cast logits to fp32 or use them in fp16? PyTorch after v1.5.1 broke torch.multinomial for fp16 on all later versions
Louis#0144: What the fuck
Louis#0144: How
Louis#0144: I tried for a month to get finetuning working
Louis#0144: I couldn’t do it
kurumuz#5695: 🤔
Louis#0144: And this was with help from like
Louis#0144: Half of the servers main members
Louis#0144: LMAO
kurumuz#5695: on pytorch? |
finetune#0907: just ran modified run_clm.py with --fp16 and deepspeed zero level 2 on v100 and it worked 🤷♂️
Louis#0144: Yes
Louis#0144: Wtf
kurumuz#5695: it just works
kurumuz#5695: literally
Louis#0144: https://github.com/EleutherAI/visual-grounding
finetune#0907: ran with four v100 for about 20 hours, so should have had plenty of time to break
Louis#0144: If you wanna look at this
Louis#0144: And tell me what the f I did wrong
Louis#0144: Pls do
Louis#0144: LMAO
Louis#0144: I couldn’t get it working in fp16 at all
Louis#0144: Literally all of the activations were underflowing
Louis#0144: There’s an entire thread on HFs repo about this
Louis#0144: Like 60 comments
kurumuz#5695: @finetune what was the sampling hack you implemented?
kurumuz#5695: i remember us having some weird tokens generated
Louis#0144: No it isn’t even sampling
Kharr#7888: Something weird with HF. I finetuned both 1.3B and 2.7B in fp16 in custom framework
kurumuz#5695: @Louis yeah ik |
Louis#0144: The activations under flow
finetune#0907: cast logits to fp32 before torch.multinomial because it can pick zero probability tokens in fp16
finetune#0907: but this issue is def. different
finetune#0907: idk
kurumuz#5695: multinomial sounds broken
Kharr#7888: It is, you can either use Torch 1.5.1 or cast to fp32 in later versions
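A minimal repro sketch of that failure mode on a CUDA device — behavior depends on the PyTorch version, so treat it as illustrative:
```
import torch

probs = torch.zeros(50257, dtype=torch.float16, device="cuda")
probs[42] = 1.0
# on affected versions, fp16 multinomial can return a token whose
# probability is exactly zero; sampling in fp32 sidesteps the bug
maybe_wrong = torch.multinomial(probs, num_samples=1)
always_42 = torch.multinomial(probs.float(), num_samples=1)
```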
finetune#0907: wonder if it could be related to the cuda version? i was using 11.1
finetune#0907: the nan issue i mean
EricHallahan#1051: We were using 11.1 as well.
Louis#0144: I’d love to get this repo working without A100s
Louis#0144: lol
Kharr#7888: Try running it on older pytorch
kurumuz#5695: A100s, how do you even find them?
kurumuz#5695: lol
Louis#0144: It would mean we could literally train a clip text encoder now
Louis#0144: Based off of neo
Louis#0144: No waiting
kurumuz#5695: that sounds nice
Louis#0144: Blessed be coreweave
kurumuz#5695: well, we dont have a company yet and they're asking for a company name while registering. |
EricHallahan#1051: We used some of them that were earmarked for NeoX development because they happened to be free at the time.
kurumuz#5695: so cant get gpus from them yet.
finetune#0907: here's how i called the training script. think the main modifications to it i made were replacing the dataset reading code to use a numpy memmap instead of reading csvs
https://github.com/finetuneanon/gpt-neo_finetune_2.7B#run-the-training
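A minimal sketch of that memmap swap; the dtype, sequence length, and dict keys are assumptions, not the exact code from the fork:
```
import numpy as np
import torch
from torch.utils.data import Dataset

class MemmapTokenDataset(Dataset):
    # serves fixed-length windows of pre-tokenized ids from a flat binary file
    def __init__(self, path, seq_len=2048):
        self.tokens = np.memmap(path, dtype=np.uint16, mode="r")
        self.seq_len = seq_len

    def __len__(self):
        return len(self.tokens) // self.seq_len

    def __getitem__(self, i):
        chunk = self.tokens[i * self.seq_len:(i + 1) * self.seq_len]
        ids = torch.from_numpy(chunk.astype(np.int64))
        return {"input_ids": ids, "labels": ids.clone()}
```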
kurumuz#5695: maybe we should generate the company name with gpt-neo
Louis#0144: If it uses deepspeed custom trainer class that’s no go
EricHallahan#1051: That is why I can be sure that it was CUDA 11.1
Kharr#7888: This also means newer buggy PyTorch
finetune#0907: think it uses the regular trainer class
EricHallahan#1051: Well it also meant DeepSpeed.
Louis#0144: Anyway
Louis#0144: I’m going to give this project to someone else at this point
kurumuz#5695: fp16 ftw
Louis#0144: We have a grounding project that’s way more promising
kurumuz#5695: am i allowed to take it
kurumuz#5695: :berk:
Louis#0144: Yeah
Louis#0144: Honestly
Louis#0144: It’s yours
Louis#0144: Have fun |
kurumuz#5695: well actually im quite busy, jk
kurumuz#5695: lol
Louis#0144: Oh LMAO
kurumuz#5695: well sounds interesting to me as we want to do image generation too.
kurumuz#5695: if we launch and its still not completed, i can take it.
Louis#0144: If you wanna work on it at any point let me know
kurumuz#5695: okay
kurumuz#5695: sure
Louis#0144: Of course it would still be under the EAI umbrella
EricHallahan#1051: If you want to, seriously, if someone can fix/track down the problem, go for it.
kurumuz#5695: yeah.
kurumuz#5695: I will see what I can do. If I can implement the stuff I need for our scaler today, I can start looking into it tomorrow.
finetune#0907: did you try if you get nans with gpt2?
Louis#0144: No
Louis#0144: We didn’t
Louis#0144: But we wanted to use neo
EricHallahan#1051: We didn't try GPT-2, that might be something to try yes.
Louis#0144: Oh I did try
Louis#0144: In the very beginning
EricHallahan#1051: Oh. |
Louis#0144: But I only used the small GPT2
Louis#0144: Not the 1.5b
Louis#0144: The small one worked
finetune#0907: maybe it's the cursed neo implementation
Louis#0144: Maybe
EricHallahan#1051: It is cursed.
finetune#0907: maybe could try converting 2.7b to gpt2 and see if that works
finetune#0907: maybe comment out the model = model.half() here, otherwise should be good
https://github.com/finetuneanon/misc/blob/main/gptneo_to_gpt2.py
Bruce23#6204: is the GPT-Neo 125M less prone to crash compared to the 2.7b?
Louis#0144: Oh true
Louis#0144: Literally no one but Leo and I have gotten neo to crash
Louis#0144: LMAO
asparagui#6391: gcp has a100's
Louis#0144: 1.3 crashes the same amount as 2.7 tho
Bruce23#6204: do you work for huggingface 😄
Louis#0144: No I volunteer here full time
Louis#0144: I’m between jobs rn
Bruce23#6204: awesome
kurumuz#5695: IDK how they are crashing the models |
kurumuz#5695: sounds bad when you paid 200$ though.
Louis#0144: Oh
Louis#0144: that
kurumuz#5695: We literally paid 200$ and stopped using it LMAO
Louis#0144: I think their backend is crashing
Louis#0144: not the model
kurumuz#5695: gave them free money i guess
Bruce23#6204: where do you host your models now?
kurumuz#5695: everywhere
kurumuz#5695: that we can host them
Bruce23#6204: whats your favourite hoster 😄
kurumuz#5695: datacrunch is pretty useful
kurumuz#5695: can't say my favorite, good prices on-demand though.
Louis#0144: i know a lot of people that do hosting on vast.ai
Louis#0144: youre not supposed to I think
Louis#0144: but they do it anyway
kurumuz#5695: bad idea, some vast.ai machines are extremely slow
kurumuz#5695: even tho they have 3090s
Louis#0144: ive heard coreweave is pretty good
Louis#0144: YEAH |
Louis#0144: omg
kurumuz#5695: coreweave doesnt let you register
kurumuz#5695: if you dont have a company
Louis#0144: yesterday I got a 3090 from vast.ai
Louis#0144: literally slower FLOPS than a 1080
kurumuz#5695: wait, let me show my findings.
Louis#0144: LMAO
kurumuz#5695: ```
vastai
3090-fp16 -> 3.3s
2080-fp16 -> 2.3s
V100-fp16 -> 1.1s
2080ti-fp16 -> 3.3s
```
kurumuz#5695: I have the notebooks saved.
kurumuz#5695: 2.7B gpt-neo
kurumuz#5695: same context size, generating same number of tokens
Louis#0144: i think the people hosting just dont know what good cooling is
Louis#0144: my memory junction yesterday was 108-110
kurumuz#5695: oof |
Louis#0144: maybe like the card above mine was mining?
Bruce23#6204: is it hard to setup gptneo? 😄
Louis#0144: no
Louis#0144: i think vast.ai lets you use custom docker images too
Louis#0144: + a startup script
Louis#0144: so its entirely painless
Bruce23#6204: with datacrunch $0.85/h is the cheapest
kurumuz#5695: so based
kurumuz#5695: with google cloud you might get better prices for a whole month.
kurumuz#5695: but idk
Louis#0144: assumes ur service gets 24/7 use
Louis#0144: which it might not
Louis#0144: depending where ur user base is
kurumuz#5695: well, he was fine with paying 500$ to huggingface.
Bruce23#6204: yes, i am
Bruce23#6204: if it would work
kurumuz#5695: yeah, you can just get a v100 for 500$
kurumuz#5695: and its fine for your scale.
kurumuz#5695: extra work though.
Louis#0144: if it wasnt a gpu shortage |
Louis#0144: u could have bought a P100 for $500
kurumuz#5695: 😢
Louis#0144: before covid started I know someone who got 3 P100s for $650 each
kurumuz#5695: its a bad time
kurumuz#5695: to start a business like mine
kurumuz#5695: kek
alexyz#3459: just wait till those chinese mining GPUs come on the market
Louis#0144: yeah
alexyz#3459: "lightly used"
kurumuz#5695: never heard of those.
Louis#0144: im gonna build a rack
kurumuz#5695: oh
kurumuz#5695: lol
Louis#0144: last time that happened I built a 1080 rack
Louis#0144: lasted me a while too
AI_WAIFU#2844: Have you guys done the math for CPU instances?
kindiana#1016: latency would be big oof
Louis#0144: as usual ben is always watching
kindiana#1016: might be cheaper tho
AI_WAIFU#2844: Yeah, I think users can tolerate a bit of latency at the start. |
AI_WAIFU#2844: Hetzner is much cheaper.
kindiana#1016: it would be like 10x higher latency
kindiana#1016: so like 20 seconds lol
alexandrost#2936: what kind of business are you starting?
Louis#0144: probably something illegal
Louis#0144: jkjk
alexyz#3459: NovelAI
Louis#0144: an anime pfp, must be infosec
Louis#0144: or that
Louis#0144: yeah
Louis#0144: wait is it actually NovelAI?
kurumuz#5695: yes
AI_WAIFU#2844: just add a throbber
Louis#0144: lmao
Bruce23#6204: to conclude , v100 is the way to go right?
Louis#0144: no way in hell u can get away with CPU then
Louis#0144: players would literally kill u
kurumuz#5695: we're not doing GPUs haha
kurumuz#5695: our inference is pretty fast rn
Louis#0144: o.o |
kurumuz#5695: SORRY
kurumuz#5695: İ MEANT
kurumuz#5695: we're not doing CPUs
Louis#0144: LMAOOOO
kurumuz#5695: lol
alexyz#3459: just buy A100s, for 10k each
alexyz#3459: very cheap
kurumuz#5695: honk i seen you say novelai would fail 😢
Louis#0144: in the beginning
Louis#0144: u guys cleaned up a lot
kurumuz#5695: yeah hopefully
kurumuz#5695: it was really hard at the beginning haha
alexyz#3459: No NovelAI discussion here 😐
Bruce23#6204: NVIDIA V100 for PCIe for $0.60/hr
Louis#0144: btw NovelAI isnt allowed here
Bruce23#6204: would gptneo work on that well=
alexandrost#2936: @kurumuz looks like a cool project
Bruce23#6204: ?
Louis#0144: discussions of it
kurumuz#5695: yeah thats why i didnt want to start it. |
alexyz#3459: where?
kurumuz#5695: coreweave.
Bruce23#6204: coreweave
kurumuz#5695: you cant just get coreweave gpus.
kurumuz#5695: do you have a company?
Bruce23#6204: yes
kurumuz#5695: well, talk with them then.
Louis#0144: sorry wait
EricHallahan#1051: Just bring discussion about NovelAI to their server.
kurumuz#5695: also why 100 requests per hour max?
kurumuz#5695: that sounds pretty low.
Louis#0144: ye
Bruce23#6204: just wanted to check if that GPU makes sense for GPTneo
Bruce23#6204: before i chat with them
Bruce23#6204: likely more later, but right now we dont have a lot of req
AI_WAIFU#2844: If you haven't already I would incorporate. Even just because of taxes.
Bruce23#6204: how about NVIDIA Quadro RTX 4000 for $0.35
alexandrost#2936: 3090 has a worse performance than the 2080? how is that possible?
AI_WAIFU#2844: People don't cool their shit
alexandrost#2936: I was thinking of buying a 3090 just for transformers , mainly inference with GPT-Neo |
alexandrost#2936: it would be interesting to hear other people's experiences with rtx GPUs + GPT-Neo
EricHallahan#1051: Good luck finding one.
alexandrost#2936: yeah, I lost a couple of opportunities unfortunately
Bruce23#6204: Blazing fast spin-up times (as little as 5 seconds).
Bruce23#6204: does this mean gpus waking up from sleep mode?
Bruce23#6204: i can get the TESLA t4 on googlecloud for $0.379 hourly
Bruce23#6204: will that work for gptneo?
EricHallahan#1051: Should be plenty.
finetune#0907: might be slow
finetune#0907: should work tho, yea
Bruce23#6204: thanks
Bruce23#6204: Is 15 gb ram enough / too much?
alexandrost#2936: @Bruce23 T4's have 16GB of memory. When I tried with a 12GB it was iffy - sometimes it would run out. 16GB should be enough but, better check it, with large inputs as well
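An easy way to check on a CUDA box: reset the peak-memory counter, run your longest expected prompt, then read the high-water mark:
```
import torch

torch.cuda.reset_peak_memory_stats()
# ... run generation with your largest expected input here ...
print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 2**30:.1f} GiB")
```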
Louis#0144: use finetune's chunking script
Louis#0144: it works really well
Louis#0144: only for inference tho
alexandrost#2936: can you please give a pointer to that script?
Louis#0144: @finetune
EricHallahan#1051: And if you get FP16 working then it will be actually quite fast.
EricHallahan#1051: It is heavily optimized for FP16. |
Bruce23#6204: i can go up to 30gb ram if that makes more sense
Bruce23#6204: but thats not the ram of the GPU
Bruce23#6204: does it even mater
Bruce23#6204: matter
alexandrost#2936: yes, GPU memory is what you need
Bruce23#6204: what do you mean you tried with 12?
finetune#0907: got a transformers fork here that fixes oom issues and loads the model directly on gpu in half precision
https://github.com/finetuneanon/transformers
alexandrost#2936: amazing, thanks !
finetune#0907: for just the oom fix, you can check here :berk:
https://github.com/huggingface/transformers/pull/11630/files
finetune#0907: o the fork also casts the logits to fp32 to avoid the torch.multinomial bug
alexandrost#2936: that's great. And I can use this in the exact same way as the huggingface transformers package?
finetune#0907: yea
finetune#0907: haven't pulled in changes from upstream recently tho
finetune#0907: so if you need some function they added very recently, might not be in
Bruce23#6204: should i go with 7.5gb ram or 15gb? 😐 i dont even need 7.5 right? normally?
finetune#0907: system ram?
Bruce23#6204: yes
alexandrost#2936: thanks! I'll start playing around with it! |
finetune#0907: depends on if you're willing to be tricky with memory management
finetune#0907: 7.5gb sounds like might be hard
Bruce23#6204: ok
Bruce23#6204: 15 it is
Bruce23#6204: or even more?
finetune#0907: should be fine ig
Bruce23#6204: i can pick 30, it doesnt matter. just dont want to pay more than i should
alexandrost#2936: @finetune is fp16 the default mode in this fork? or should I specify it when building the pipeline ?
finetune#0907: it's default. might actually have to patch it to not go fp16 on the way :berk:
alexandrost#2936: @finetune great! I'll try it on the cloud. Thank you
Louis#0144: Fun story time. Did I ever tell you guys about the last girl I was seeing? She wanted to teach me Chinese. First thing she taught me was how to call an ambulance and how to tell someone I’m bleeding from a goose inflicted injury
Louis#0144: I forgot how to type both in pinyin
Louis#0144: Sorry
kurumuz#5695: LMAO
kurumuz#5695: so based
bmk#1476: 1. damn imagine having any attention from the opposite sex whatsoever
2. she's a keeper why did you leave her
Louis#0144: Covid distances
Louis#0144: Ontario lockdown
Louis#0144: Sad |
kurumuz#5695: sad
kurumuz#5695: well, not like i was talking to people before lockdowns, but i was able to go outside and run.
kurumuz#5695: and think
Louis#0144: Are u Ontario as well?
Louis#0144: I’m not in Ontario anymore
Louis#0144: NY now
alexandrost#2936: ah NY, that's a fun place to be
Louis#0144: I hate it
Louis#0144: It’s so dirty here
alexandrost#2936: that's the fun part
Louis#0144: Nyc has gotten friendlier over the years
Louis#0144: But there’s still a massive drug issue
Louis#0144: Needles everywhere around the 80s
kurumuz#5695: me? no im in turkey as i said
kurumuz#5695: istanbul
kurumuz#5695: :berk:
Louis#0144: OH YEAH
Louis#0144: Forgot
alexandrost#2936: I spent a whole summer in NY, and tbh, I don't think I could live there for too long. It's impressive though to experience
kurumuz#5695: when are we meeting in istanbul |
Louis#0144: Yeah I grew up here
Louis#0144: I’m rly sick of it tbh
Louis#0144: Tru
Louis#0144: Idk when I’ll be back
Louis#0144: One day
kurumuz#5695: one day
Louis#0144: Oh sorry wait I thought we were in off topic
bmk#1476: can't wait for this fucking pandemic to be over so I can finally become a digital nomad
Louis#0144: My bad
Louis#0144: Soon
bmk#1476: "I can't believe it's not #off-topic!"
Louis#0144: Sometime this year I think
kurumuz#5695: starlink
kurumuz#5695: should be nice for nomads
Louis#0144: The US is easing restrictions for vaccinated people. I can go an entire day now without a mask shockingly
bmk#1476: I mean I only need intermittent internet connection and I'll do everything bandwidth intensive on a colo with good internet speed
AI_WAIFU#2844: once starlink comes online fully I'm gonna go buy a cabin in the mountains by a ski hill, and plonk myself down there.
kurumuz#5695: I liked masks first because i dont like to show my face, but it became really annoying.
Louis#0144: I get claustrophobic rly easily... I’ve already had covid + both vaccines so I think it’s fine tho
Louis#0144: They make me anxious |
Louis#0144: I feel on edge wearing them
AI_WAIFU#2844: I like masks because it let's me avoid government surveillance.
Louis#0144: It never got better
bmk#1476: I don't think I could survive in a ski resort
bmk#1476: i require access to city facilities
kurumuz#5695: yeah i need to live in a big city.
bmk#1476: also I just like taking pictures of city stuff anyways
kurumuz#5695: first, i want a fucking hospital nearby
AI_WAIFU#2844: mountains >> tiny buildings
kurumuz#5695: second, i cant be bothered and i want everything close as im extremely lazy
bmk#1476: perfect rectangles >> whatever weird conicish shape mountains are
kurumuz#5695: this is why i automate everything
kurumuz#5695: :berk:
AI_WAIFU#2844: although I do have to admit that having a grocery store a block away is nice
bmk#1476: also im waiting for buildings larger than mountains
bmk#1476: that would be nice
AI_WAIFU#2844: I think we'll actually see the opposite happen. Distribution and working at a distance is getting better, so I would expect people to fan out more rather than cramming together into buildings that can house entire cities.
bmk#1476: :noo:
bmk#1476: but i like big buildings
kurumuz#5695: decentralization ftw |
bmk#1476: where do all the chonker building enthusiasts go
bmk#1476: I guess property values in city centers will drop a bunch
bmk#1476: so maybe I'll be able to afford an apartment in Manhattan
AI_WAIFU#2844: city hall where they can argue about zoning laws
thenightocean#6100: guess we will just build our dream cities in blender :berk:
thenightocean#6100: (I am actually doing that lol)
AI_WAIFU#2844: also tall buildings are effectively as close as you can get to a prototypical dick measuring contest
AI_WAIFU#2844: it's not actually practical
bmk#1476: we should go and convince the people running that new city thingy that was mentioned in SSC a while back to allow ginormous buildings
bmk#1476: *looks at LMs*
:guilty:
AI_WAIFU#2844: Yeah but LMs are less overtly phallic
bmk#1476: what's wrong with the construction of large phallic objects
Louis#0144: All that attention though
Louis#0144: They’re very egotistical
thenightocean#6100: probably right but I dont care. HK skyline at night is still the most beautiful thing in the world for me
AI_WAIFU#2844: That's a fair point. Some of my best pictures and views come from looking at Singapore at night. A genuine forest of glowing buildings as far as the eye can see
AI_WAIFU#2844: At least when viewed from the 30th floor or higher
thenightocean#6100: my fun theory utopia is cyberpunk megacity but without crime and suffering, just endless fun and monorails and ginormous arcologies covered with neon signs
bmk#1476: oh my god yes |
bmk#1476: you have to share some of your renders, now i wanna see lol
thenightocean#6100: sorry I am beginner still
thenightocean#6100: but one day..
bmk#1476: ah
bmk#1476: well I'm excited to see them
thenightocean#6100: or someone will make an image gpt that can generate it
thenightocean#6100: thats why i am checking multimodal channel from time to time, to see the progress
bmk#1476: I sometimes watch C2077 videos purely for the city views tbh
AI_WAIFU#2844: literally just walk over to the faraday cage and punch in the description
bmk#1476: my only complaint with the city in the game is it's not big enough for my taste
bmk#1476: tbh you know what would be cool? a cyberpunk city simulator game
bmk#1476: C2077 except minus all the combat and whatever
bmk#1476: literally all just open world city exploration
bmk#1476: and plus moar city
EricHallahan#1051: https://en.wikipedia.org/wiki/SimCity_(1989_video_game)
bmk#1476: no
kurumuz#5695: dude i just wanna be able to cook
kurumuz#5695: buy stuff from shops
bmk#1476: that's not even cyberpunk
kurumuz#5695: eat |
kurumuz#5695: be a merchant
EricHallahan#1051: Well you edited your comment.
kurumuz#5695: these stuff just feel extremely enjoyable to me
bmk#1476: C2077 except there's no storyline, it's just an enormous city that you can explore every inch of
kurumuz#5695: yes
kurumuz#5695: full of detail though
kurumuz#5695: good npcs
bmk#1476: I'd spend hours going to every single place in the city
bmk#1476: I feel like this is a totally untapped niche
Goop#8368: I just wanted to play the cool pachinko machines
bmk#1476: people were really disappointed with C2077 because the open world part was kinda meh
bmk#1476: you can't even go inside most of the buildings if they aren't part of the main storyline
Goop#8368: Can't even go back in some story buildings lol..
bmk#1476: yeah that too
cognomen#6297: Outside™
Goop#8368: Without getting stabbed :(
Goop#8368: Yakuza taps it fairly well imo
Goop#8368: I think Yakuza raised my elo by like 200 at shogi lol
thenightocean#6100: yes!!! God, if someone does that I will subscribe to Stadia just to play this mod
thenightocean#6100: I wonder if there will be more market for this sort of “experiential games” where people would pay just to hang out in this kind of worlds. Probably isnt a huge market for it but at least game designers could ignore all the combat and narrative elements which would make it easier |
Deleted User#0000: Hi
bmk#1476: I feel like if someone were to make a game like this, the development would be highly parallelizable since the majority of the work is adding in details, which can be easily outsourced
AI_WAIFU#2844: but that's also really fucking expensive
bmk#1476: this means greater utilization of parallel labor and the ability to hire cheaper labor, which would help stretch the budget much further
bmk#1476: I mean it would be the number one selling point of the game
bmk#1476: so it wouldn't be a wasted expense
thenightocean#6100: lets hope there is enough of us futurist city freaks to make it viable lol
bmk#1476: anyways if someone wants to do this, I'd love to contribute to the Kickstarter or whatever and hang around and do what I can
bmk#1476: wait, since it's highly parallelizable, open source would be a viable option
StellaAthena#3530: Second life tried to be this, basically
bmk#1476: i wasnt thinking necessarily multiplayer
bmk#1476: i guess that could be nice down the line but it wouldnt be mandatory at all
bmk#1476: i just wanna exploar
Deleted User#0000: so like no mans sky but instead of a procedural galaxy its a procedural city?
Deleted User#0000: that sounds like a nice idea actually
Deleted User#0000: why not both hmm
Deleted User#0000: reminds me of Dual Universe somehow hmm though thats more like minecraft on steroids
bmk#1476: why does it have to be procedural?
Deleted User#0000: it doesnt have to be. but i guess is the cheapest way to scale to large sizes
gwern#1782: I was reading about cyberpunk 2077 and it's just so obvious that the future is NN-generated content, perhaps lightly human-vetted
gwern#1782: the c2077 model and problems demonstrate incredibly clearly that the AAA model is just not going to keep scaling
bmk#1476: i dont really care what the development model is
Deleted User#0000: looking forward to knowing more about this
https://www.latentspace.co/
bmk#1476: i just want a really cool explorable cyberpunk city
thenightocean#6100: Whatever tech will let me wake up here, I am ok with it https://cdn.discordapp.com/attachments/729741769738158194/845747297496334376/erik-osvald-111final.png
UnsupervisedLearner#4148: I want a vr floor that automatically predicts your motions and adjusts under you to give a better sensation of space
Bruce23#6204: I finally have my AWS instance and installed transformers. Are there any instructions on how to apply this patch, finetune?
EricHallahan#1051: You install `transformers` from that repo instead.
Bruce23#6204: Ahh! So instead of `pip install transformers` I find a way to pip install from a repo? Alright!
EricHallahan#1051: ```sudo pip install git+https://github.com/finetuneanon/transformers```
EricHallahan#1051: Or something like that.
Bruce23#6204: Aww, thanks!! 🙂 Should I uninstall transformers first? I guess so?
EricHallahan#1051: No need, it is a package manager after all.
Bruce23#6204: Perfect!
Bruce23#6204: Hm, my Python script gets "Killed". Am I running into a resource issue? I have 14 GB of system RAM available. I am not reaching any of my checkpoints: https://pastebin.com/FZ8DwhBZ
Bruce23#6204: Maybe I shouldn't use the pipeline and should follow finetune's way 😄 I'll report back
Noa Nabeshima#0290: What are the things I should think about wrt transformative hardware
How important is inter-device bandwidth/how does that work
How do TPUs compare to GPUs, in specific terms |
How long does it take to develop various hardware technologies, both in terms of incremental improvements and in terms of larger (?) changes?
AI_WAIFU#2844: These are very broad questions with long, application, and time dependent answers
cognomen#6297: https://cdn.discordapp.com/attachments/729741769738158194/845772993929281636/EpoHgM9VgAAN8gk.jpg
kindiana#1016: as long as you don't use tf its fine lol
Teemochu#8740: I still think *any* card above yours will cause issues, period... you need airflow above your baseplate or your memory is going to get *hot*.
kindiana#1016: the !complete bot in #the-faraday-cage-archive is on tpu
cognomen#6297: this is a repost by the way
bmk#1476: yes, i remember when this was first posted
Teemochu#8740: Buildings with their own outdoor weather... as a Bay resident I could use a good balcony at 8000 feet every so often in winter.
Teemochu#8740: "balcony" meaning in practice "large rooftop park that's probably crowded af because everyone has the same idea"
Bruce23#6204: AttributeError: 'str' object has no attribute 'new_ones'
Bruce23#6204: How do I debug this? Is this a known issue?
finetune#0907: can you pm me the full error?
Bruce23#6204: Yes
Bruce23#6204: No, I can't send you pms
alexandrost#2936: @finetune Hi! I am trying out your fork of transformers with GPT-Neo - When I am building the pipeline with model='EleutherAI/gpt-neo-2.7B' , I am getting an "Unrecognized configuration class" error . Maybe I should be installing from a specific branch? I am using the "gpt-neo-dungeon-localattention1" branch
finetune#0907: haven't really tried with pipeline, but there's no reason it shouldn't work. if you check the repo, there's some code samples on model loading now
alexandrost#2936: I just found them literally right now! I think I was looking at an older readme file. Thanks!
finetune#0907: added them today :berk:
𓅬 gabriel_syme 𓅬#3220: ehm nope, and it's called urbanization in the West 🙂 |
EricHallahan#1051: They're the same picture.
zphang#7252: Computationally I'm assuming the first would be far faster
EricHallahan#1051: Yes, it will.
EricHallahan#1051: But functionally they are identical.
bmk#1476: congratulations, something that takes like 1% of the total run time now takes 0.5% of the total run time
bmk#1476: your code just got 0.5% faster, woohoo
bmk#1476: roll some dice
EricHallahan#1051: Runtime.
EricHallahan#1051: Because it becomes a lookup table.
gammascalpset#9792: I don't think this is the case though? Most devs are in denial, but I think it's pretty clear teams are seeing productivity plunges when going fully remote
gammascalpset#9792: especially for devs who are high on the adhd scale
gammascalpset#9792: I'm probably half as productive at home as at the office
𓅬 gabriel_syme 𓅬#3220: Would I have to fine tune smth like blenderbot if I wanted to have it be a domain specific assistant? Is a chatbot an overkill for digital assistants even?
𓅬 gabriel_syme 𓅬#3220: I'm thinking mostly providing suggestions and understanding directions in natural language
Kia#2550: A digital assistant chatbot?, that's interesting
alexyz#3459: You wouldn't (I think?)
𓅬 gabriel_syme 𓅬#3220: Not even sure a chatbot is what I'd need but I guess it's the closest
alexyz#3459: Blenderbot has a feature where you can describe what you'd be talking to, so you could describe something like "You are a friendly and helpful Planet Pluto" or something like that
𓅬 gabriel_syme 𓅬#3220: Like I'd like to tell it "add a bedroom on the North side of the house" and have it do smth in the UI
alexyz#3459: Ah |
𓅬 gabriel_syme 𓅬#3220: So I guess it is more embedded in a tool
𓅬 gabriel_syme 𓅬#3220: But is that even a bot?
𓅬 gabriel_syme 𓅬#3220: I guess yes if I want it to talk
𓅬 gabriel_syme 𓅬#3220: My problem is I don't know where to start :wojak_despair:
Kia#2550: Ow wow, That's literally a personal assistant
Kia#2550: I mean resources are available on the internet but actually making that is up to development or guesses
Louis#0144: Was drinking coffee outside and a spider landed in my mouth ama
alexyz#3459: :goose: 🕷️
Louis#0144: Ye
Kia#2550: Spit it out
Kia#2550: :goose7:
Kia#2550: Nonetheless morning
Gurkenglas#7362: even with this logit lens stuff?
EricHallahan#1051: What does this have to do with logit lens?
Gurkenglas#7362: if every layer happens to have a successively better guess at the output, wouldn't that be highly dependent on which "type" the tensors passing between each layer have?
alstroemeria313#1694: "the monument of the ancients by monet" https://cdn.discordapp.com/attachments/729741769738158194/846034867351584788/out_05000.png
spirit-from-germany#1488: Now you need a Robot to paint it and you could sell it for millions at an auction :D
alstroemeria313#1694: ahah
alexyz#3459: i'm pretty sure alstroemeria did something like that with NFTs lol, they got some AI to create thousands of images and mint them
Kia#2550: That's a printer tho:mittwoch: |
EricHallahan#1051: Time for me to rage at Glowforge again for their "laser printer".
https://glowforge.com/
EricHallahan#1051: \*sigh\*
Kia#2550: 3D laser printer...
alstroemeria313#1694: oh, i'm half-asleep still, i meant to post this in #art
EricHallahan#1051: I could tell.
Kia#2550: Take your time
EricHallahan#1051: But I was too lazy to call you out lol
alstroemeria313#1694: i'm still on a ritalin break
alstroemeria313#1694: which means i should have more coffee than usual
Kia#2550: Ritalin...:mittwoch:
Kia#2550: Ow
Kharr#7888: What are you using to generate these?
alstroemeria313#1694: VQGAN and CLIP
EricHallahan#1051: (#art and #the-faraday-cage-archive)
EricHallahan#1051: I'm really impressed by how strong the results are now.
Kharr#7888: It's really amazing stuff. Vision is a bit more forgiving but I can see this kind of thing transferring over to written stuff too (once coherence is better). e.g. "A story about a horse and a king" --> guided GAN
EricHallahan#1051: Well the hope was for the original visual grounding project to take the place of the CLIP text encoder.
EricHallahan#1051: That never really happened unfortunately.
Kharr#7888: I'm training something in between a GPT model and denoising auto-encoder for text right now (at character level). It writes coherent words and the sentences mostly make sense but like GANs, the overall coherence is not there yet. Still some things to figure out. |
EricHallahan#1051: GANs are always hacky but they work so ¯\_(ツ)_/¯
EricHallahan#1051: I would like to get into DDLMs soon though.
cst#9766: I have one of these in my apartment, got it for free. They work really hard to focus more on a market that is not tech-savvy, so everything is cloud based and stuff. It's actually a nice machine electromechanically at that price point but all the software locks make it not great
Sphinx#2092: So t5?
EricHallahan#1051: I have a heavily modified K40 across the room from me if that tells you anything.
alstroemeria313#1694: The GPU?
EricHallahan#1051: No, 40W laser cutter from china, originally designed for making rubber stamps.
alstroemeria313#1694: Oh
CRG#8707: NCA for text? 🤔
EricHallahan#1051: They are like the cheapest things you can buy, so not the highest quality. It has already paid for itself for us though.
alexyz#3459: Where did you get it from?
Kharr#7888: No, it's a bit more complicated than that. There is no encoder like in a traditional transformer. It's fully auto-regressive on the global scale like GPT, except it also performs iterative refinement. So... what if you could generate text with GPT using greedy sampling, feed it back into the model, and have it refine the output in a single step, or guide the model to produce various alternatives by sampling from a distribution and joining that with the input?
EricHallahan#1051: They are everywhere. I forget where we got ours specifically.
alexyz#3459: I got an interesting idea, a multimodal model that creates 3D models from text
alexyz#3459: would that even be possible?
Kharr#7888: Why wouldn't it be?
EricHallahan#1051: My problem with such a system is that for making physical models you need to consider design constraints that are hard to contextualize in text.
EricHallahan#1051: There is a reason that mechanical drawings exist.
alexyz#3459: I'm not really thinking of creating physical models out of them, but makes sense
EricHallahan#1051: Yeah, that is just where my mind goes first, that's all. |
mgostIH#0245: We need datasets of 3D thingies too
mgostIH#0245: Would be quite cool if we'd put the whole world into some sort of NERF model
EricHallahan#1051: There are plenty of them, but they tend to be difficult to get your hands on.
mgostIH#0245: Take photographs of the surrounding ambient and turn it into objects and stuff
mgostIH#0245: A single model storing **everything**
EricHallahan#1051: That is called photogrammetry lol
mgostIH#0245: It may accidentally solve for weather forecasting too
mgostIH#0245: One thing that surprised me about the NeRF paper was its storage efficiency compared to the usual methods
inox#5400: there's RL papers with world models in little doom-like environments encoded in NeRF-like ways
inox#5400: actually that's not NeRF at all it's more like SIREN I guess, but crap
mgostIH#0245: Yeee, whatever implicit representation works fine after all
mgostIH#0245: Has anyone tried SIRENs for the backbone of NERF?
Deleted User#0000: @mgostIH yes, i believe that's what they tried to do here https://arxiv.org/abs/2012.00926
mgostIH#0245: And is Siren an improvement?
cfoster0#4356: Nah, SIREN is just related
cfoster0#4356: In theory you could have a NeRF with sinusoidal activations but it isn't done much
Deleted User#0000: yea, there's actually a new improved SIREN out of Riken Japan i believe
Deleted User#0000: haven't read the paper in detail yet
cfoster0#4356: Oo which one?
cfoster0#4356: At first I thought you meant this https://ishit.github.io/modsine/ |
Deleted User#0000: that one is good! but i'm thinking of another (they did SIREN for a really high res gigapixel photo)
cfoster0#4356: Oh! ACORN? http://www.computationalimaging.org/publications/acorn/
Deleted User#0000: yess
Deleted User#0000: 👍 👍
Deleted User#0000: my mistake, not from Riken, its from Stanford lol
Deleted User#0000: i don't know what i'm thinking of
Shay#2039: how much more time until we can make high-quality comics with this AI?
EricHallahan#1051: ¯\_(ツ)_/¯
Shay#2039: but am I dreaming too high?
EricHallahan#1051: I don't know, because I consider comics to be a visual medium.
EricHallahan#1051: I think there are a few papers that do manga generation, though I have no idea where to point you to find them.
Shay#2039: oh, I know of crypko and thisanimedoesnotexist, thiswaifudoesnotexist as well
Shay#2039: also artbreeder
Shay#2039: but I wanted also to be able to create american comics, and european comics
Shay#2039: just any style
Shay#2039: that would be so awesome, I would never get bored again
alexyz#3459: comics requires multiple things:
language comprehension
alexyz#3459: image synthesis
cfoster0#4356: Yeah it's pretty cool. The only tricky part is the adaptive block decomposition, which requires solving an integer linear program periodically during training |
Shay#2039: GPT-neo has it
alexyz#3459: and the ability to create a structure from all of that
alexyz#3459: that actually makes sense
Shay#2039: the faraday cage has it
Shay#2039: we would just need to mix the two and make it cohesive in style and everything
Shay#2039: a big challenge
alexyz#3459: It's using VQGAN+CLIP, and the results aren't cohesive enough for a comic strip
alexyz#3459: DALL-E might be able to do that
Shay#2039: yeah, I know, but I always think in the future
EricHallahan#1051: That is why I point you in the direction of that research.
EricHallahan#1051: It is an active area of development, but I would say that we are not really the ones to ask about what is SOTA right now and where the field is heading.
Shay#2039: thank you
CRG#8707: Re: Manga creation, it feels like something dall-e might be able to do, or be close to.
cfoster0#4356: Hold up. Rereading and they actually improve on SIREN even when ablating their decomposition. Probably worth exploring this more
Deleted User#0000: Hi guys
dmvaldman#4711: what are people's dev environments like? are you using colab exclusively, do you have your own GPU workstation, multi-gpu?, do you use a cloud instance on some provider
bmk#1476: pyfra is my dev environment
EricHallahan#1051: I use Colab for personal development.
dmvaldman#4711: does the latency or debuggability become a nuisance? have you thought about owning some dedicated hardware? |
alstroemeria313#1694: i use my bf's gpu a lot
alstroemeria313#1694: he gives me a docker container to ssh into
alstroemeria313#1694: w/ some persistent storage
alstroemeria313#1694: i also use datacrunch.io sometimes
UnsupervisedLearner#4148: I had an idea that might solve something I've been thinking about since at least last year
dmvaldman#4711: hadn't seen this before. pretty cool that eleuther is investing in infra
chilli#5665: interesting way to put it haha
EricHallahan#1051: Well I don't have the budget for dedicated hardware so I don't really consider it. I would rather buy a Colab Pro subscription than hardware for myself.
bmk#1476: yes, we are investing a very large percentage of the total amount of money we invest across all areas in infra
bmk#1476: undefined%, in fact
chilli#5665: bmk
chilli#5665: I will pay you 1$
bmk#1476: ono
chilli#5665: and you can reinvest into Pyfra
bmk#1476: :ultrazucc:
chilli#5665: to turn that into 100% of our investment
gwern#1782: I will buy 1% of your stake for $1, which I believe implies a market cap of $10b
UnsupervisedLearner#4148: So you know how many functions are recurrent? How resnets and transformers actually implement a sort of recurrence?
And you know how nets seem to be getting deeper and deeper? (My suspicion, based on circuits discovered by Olah et al. at Distill, is that it's related to recurrent processing)
And it seems like my own 'thought process' is recurrent, and I bet yours is too.
But the problem is when do you *stop* processing? When has the information been processed enough that you don't run through another time?
UnsupervisedLearner#4148: It's very similar to the halting problem, and I was thinking about all these RL schemes that might be able to sort of find a happy medium of accuracy and computation time and 'trainability'
dmvaldman#4711: so you are buying 1% of $1 for $1? talk about recurrence!
gwern#1782: our LPs demand to see a healthy return on our infracryptotoken fund
UnsupervisedLearner#4148: And it hit me, you know how when making image samples and text samples, it gets caught in an obvious basin. Where it cycles around (repeats itself in text, sort of shifts the pixels back and forth like a mirage for image generation)
gwern#1782: this lets us report a high IRR this year regardless of our losses on BTC/ETH/DOGE 😦
chilli#5665: lol, you know
UnsupervisedLearner#4148: That seems to me to be a geometric property of the attention matrix, which has these basins of attraction.
So I could have my net recurse until it spirals into a stable basin
chilli#5665: the recent crypto downturn has really revealed the personalities of some people
UnsupervisedLearner#4148: And train it to learn stable basins
bmk#1476: elaborate?
bmk#1476: this is like the 3rd crypto downturn ive been through and i havent noticed anything different from last time or the time before it
chilli#5665: Like, there's just some people on twitter/FB I know who've really lashed out about the crypto downturn
chilli#5665: and have gotten really mad at friends who hyped bitcoin and stuff
bmk#1476: o.O |
cst#9766: more people (esp. "normies") are involved this time
cst#9766: In my uninformed opinion
bmk#1476: thats been true literally every single time
chilli#5665: eh
chilli#5665: I think it's definitely been more mainstream in some sense
cst#9766: Well, sure, but my perception is that it's more this time?
cst#9766: I don't have any data on that though
chilli#5665: from the dogecoin/gamestop stuff
chilli#5665: to elon musk
UnsupervisedLearner#4148: Bought the top, sold the bottom:blobsad:
chilli#5665: etc.
bmk#1476: well, yes, it's been more this time than last time
bmk#1476: and last time was more than the time before it
bmk#1476: and the time before it was more than the time before that
bmk#1476: that's just how this works
bmk#1476: it's not really different imo
bmk#1476: anyways I have nothing at stake with this stuff, i just like to watch from the sidelines and see everything happen over again every 4 years like clockwork
bmk#1476: I have no idea what the heck kisma is, you're in the wrong server
dmvaldman#4711: my theory is that Satoshi is from the future, traveled back in time and like all time-travellers would want to figure out how to make the most money possible. so they re-invent bitcoin which was obvi the thing you would do if you were from the future where all of fintech is crypto. being from the future they would also only sell at peak price. since their original coins haven't moved, we're not at peak yet. so everyone should chill.
cst#9766: I feel like Satoshi has to have another wallet, right? |
bmk#1476: consider this an official mod warning: fuck off
gwern#1782: I find it entertaining just how much like all the earlier times it is, every single time since 2011 that I've been watching. "War. War never changes. It's always hell."
bmk#1476: the obvious solution is that gwern is Satoshi, problem solved
gwern#1782: :guilty:
UnsupervisedLearner#4148: This time we actually have scalable and somewhat working tech though. We're not long away from internet money being better than visa even for tiny payments
bmk#1476: my favorite part is how everyone on my TL is conveying the exact same High Wisdom about how they were right all along, etc
gwern#1782: what if satoshi's master plan was to become ultra-wealthy and then spend it all on a last-minute sprint for AGI
gwern#1782: (it's simpler than musk's master plan)
UnsupervisedLearner#4148: Also have new zk proof schemes, better ways to secure datafeeds, etc etc. It has grown tremendously since 2011
bmk#1476: people who have literally never said anything about Bitcoin ever are now suddenly experts in exactly why Bitcoin could never have worked in the first place
bmk#1476: ~~or, maybe spend it all on GANs for producing waifus~~
dmvaldman#4711: it's the waifu maximizer we should be concerned about. consuming all the resources of the universe
kurumuz#5695: yes
kurumuz#5695: quite sad to see only a few people are probably working on waifu maximizers
bmk#1476: actually yeah this is the strongest evidence for gwern not being satoshi
bmk#1476: if gwern were Satoshi, waifu maximizer would already exist
dmvaldman#4711: maybe the solve here (that launches us into our apocalyptic future) is a waifucoin that is mined by GAN-generation of waifus that match some embedding that starts with N-zeros?
dmvaldman#4711: okay back to work
kurumuz#5695: looking at all the people having issues with the HF inference API, I'm happy we didn't go with it lol
kurumuz#5695: sounds horrible |
alexyz#3459: this one's bigger, it's the largest crypto downturn since 2017
alexyz#3459: anyway crypto discussion is def not on-topic
𓅬 gabriel_syme 𓅬#3220: woah, what was the drama about?
cst#9766: just some troll
Teemochu#8740: wait the ligma guy asked something in #gpt-neox-devs too?
cst#9766: I only saw him in here, but it looks like he's been removed so I guess he must have done it more than once, yeah
StellaAthena#3530: Correct.
alexyz#3459: is arxiv-sanity down?
James#6892: It’s so funny but so true
James#6892: It’s crazy how much history repeats itself
cst#9766: arxiv-sanity.com isn't loading for me
swcrazyfan#2478: @bmk By the way, your help pointing me the the right direction for converting my model worked perfectly!
bmk#1476: awesome!
yash!#5124: Hey
Imperishable_NEET#1969: Is the Venturebeat article about GPT-Neo inaccurate?
Imperishable_NEET#1969: Specifically this bit https://cdn.discordapp.com/attachments/729741769738158194/846290694632505344/ThisShit.png
Imperishable_NEET#1969: GPT-3 Ada I've been told is smaller.
Sid#2121: the sizes aren't public ¯\_(ツ)_/¯
chilli#5665: why is GPT-neo bolded
chilli#5665: lol |
chilli#5665: my researcher brain tells me that should imply that it's the best
Kia#2550: I mean that's probably the point of the paper, to test things
Kia#2550: So It should be *Bolded*
Kia#2550: Or the paper is specifying "this X-model can do better than that X-model", so a "theme" of some sorts
zphang#7252: *Models in **bold** indicate that the authors subjectively think they are the best.*
cognomen#6297: models in regular print are for losers
Kia#2550: *Bold* Is dominance
alexyz#3459: Some people have done some benchmarks on it I think, they got something like 350M
Louis#0144: 350M?
Louis#0144: O.o
Louis#0144: That makes no sense
Louis#0144: As in they’re lying about the param count by a solid order of magnitude?
Sid#2121: they never released the param count
StellaAthena#3530: They never stated the amount. People just assumed that DaVinci was the largest (175B) and that each smaller model was one step down
Louis#0144: Oh wtf
Louis#0144: I didn’t realize
Louis#0144: I wonder why they never released param count at all
finetune#0907: bmk evaled the api models a while back
https://discord.com/channels/729741769192767510/729741769738158194/843203144871313439
finetune#0907: comparing with the paper, looks like it's two steps down for each |
Louis#0144: Omg
alstroemeria313#1694: they aren't priced in such a way as to give the compute requirements away?
Louis#0144: No
alexyz#3459: well, the curie model is 1/10th the price of davinci
alstroemeria313#1694: ah
alexyz#3459: which would lead you to believe it's 13B
Louis#0144: No but I think the margins are huge
alexyz#3459: but evals make it look like 6.7B
alstroemeria313#1694: ah
Louis#0144: Yeah it’s pretty misleading tbh
alexyz#3459: It's not misleading
alexyz#3459: OpenAI didn't say anything about it
alexyz#3459: people just made up assumptions, and then went from there
EricHallahan#1051: Because it gives them a competitive advantage?
Louis#0144: That’s true
StellaAthena#3530: Performance is non-linear anyways. There’s no sense in which a 175B model is ten (or thirteen) times better than a 13B model
Louis#0144: It’s 10x better at having number of parameters
gwern#1782: wouldn't the cost be pretty linear? the main cost is capital, and you need _x_ times more GPUs on average for _x_ times more params, no?
Bunzero#2802: That's where they're probably very happy to have fat wads of cash from micro$oft, allowing them to have the pricing be as made up as they want.
EricHallahan#1051: Actually, if I am remembering correctly, no money changed hands. |
EricHallahan#1051: It was Azure compute credits.
nev#4905: currency of the future
gwern#1782: well, it was specified as money + compute credits, and zero information about what the ratio was
gwern#1782: so hypothetically it could be like $1 and $999,999 azure credits
EricHallahan#1051: Thank you for the correction, you probably know this more than I do.
gwern#1782: the GPT-3 licensing deal didn't contain any mention of cloud credits, so that was presumably all-cash, but no amount there
Bunzero#2802: Tbh their costs are near entirely compute credits so it doesn't make any difference
gwern#1782: yeah, as long as they can turn $1 of azure credit into $2 of API billing on average, then they're fine
gwern#1782: but if they are doing that, you have to wonder what they are constrained by?
gwern#1782: raw GPU/datacenter shortage? manpower to keep riding herd on the API?
AI_WAIFU#2844: I could totally see the GPU shortage killing their expansion
kindiana#1016: where are all the a100s going if not ms lmao
Gurkenglas#7362: I expect this code change changed my program's data but not its control flow. How do I export all the locals() and their recursive attributes to strings so i can diff them? Ideally these strings would not include the gigabytes of numbers. (I mean, I guess I can just turn the model knobs so it's only bytes of numbers, but surely there is prior work)
alexyz#3459: just wait for those *lightly used* chinese GPUs to go on the market
apolinario#3539: Is there any channel here ppl are discussing the EleutherAI Dall-E (replication efforts, etc.)?
alexyz#3459: #multimodal
apolinario#3539: thanks!
nev#4905: https://arxiv.org/abs/2105.04779
nev#4905: if this is so good, why are there no experiments with training?
EricHallahan#1051: Because it only works at inference. |
Louis#0144: for now
EricHallahan#1051: As far as we know.
nev#4905: 🤔
nev#4905: my question is more like
nev#4905: can you train with it?
nev#4905: so that there's no need to convert
EricHallahan#1051: I haven't looked closely enough to determine what constraint prevents them from doing that.
Sid#2121: it wouldn't make sense to train with it - it's a replacement for caching
nev#4905: 🤔
nev#4905: :thonk:
nev#4905: isn't the part with key-value sharing possible for training?
Sid#2121: well, sure, but that's a separate thing that's been done for ages
Sid#2121: that's not exactly the main innovation of the paper
nev#4905: and I'm guessing this parametrization doesn't perform as well
nev#4905: does anyone know which server this was? https://discord.com/channels/729741769192767510/729741769738158194/831981646785413120
Bruce23#6204: Hi, how can I keep GPT-Neo loaded on the GPU for subsequent requests? I tried to save the model in a variable, but of course that would require a lot of RAM and is unlikely to be the way to go 😄
nev#4905: has there been much research on what patch embeddings in ViTs learn?
Bruce23#6204: i am using pytorch. should i save the model state to disk and load from there on every request?
nev#4905: just load it
nev#4905: then `.cuda()` |
nev#4905: saving in a variable is the way to go
nev#4905: since you'll use the ram anyway
nev#4905: and loading from disk is very inefficient
Bruce23#6204: Hm, doesn't seem right.
Bruce23#6204: I am asking for inference / subsequent requests. I don't want to load the model on each request
Bruce23#6204: And I dont want to save the model in my system ram
finetune#0907: with .cuda() it'll be in vram
Bruce23#6204: Yes, i am loading it with .cuda() but i dont want to load the model on every request
Bruce23#6204: it should stay loaded 😐
nev#4905: what do you use to handle requests?
nev#4905: which framework?
nev#4905: in general, just load the model at the beginning
nev#4905: outside of request handling
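A minimal sketch of nev's advice, assuming the stock HF `transformers` API (the endpoint shape and sampling settings here are illustrative, not from the discussion):
```python
# Load the model once at module import, outside any request handler,
# so it stays resident in VRAM across requests.
import torch
from fastapi import FastAPI
from transformers import AutoModelForCausalLM, AutoTokenizer

app = FastAPI()
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")
model = model.half().cuda().eval()  # half precision to reduce VRAM usage

@app.post("/generate")
def generate(prompt: str):
    ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
    with torch.no_grad():
        out = model.generate(ids, max_length=200, do_sample=True, top_p=0.9)
    return {"text": tokenizer.decode(out[0], skip_special_tokens=True)}
```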
Bruce23#6204: i am using fastapi, once the server runs, "free -m" reports 650mb used ram. once i run my code that loads the model, "free -m" reports ~3 gb used. so i am blocking system ram where i shouldn't
finetune#0907: don't del the variable :berk:
Bruce23#6204: or maybe i am getting something totally wrong
finetune#0907: either you still have the checkpoint loaded or you need to wait for gc
finetune#0907: or run it manually
Bruce23#6204: Ah, ok I try
finetune#0907: this kinda issue probably doesn't really belong here tho ..... |
Bruce23#6204: Ok
nev#4905: 🤔
nev#4905: I had a :smallbrain: moment
nev#4905: if attention is fast weights
nev#4905: (:schmid: )
nev#4905: there's stuff like compressive transformer, adaptive span, expire span etc
nev#4905: attention as it is can't be in the brain
nev#4905: since it can do infinite context length theoretically
nev#4905: but there can be an approximation
nev#4905: https://arxiv.org/abs/2103.13076 and basically all linear transformers
nev#4905: the advantage of transformers is that they're parallel right
nev#4905: then there's this :thonk: reddit comment
nev#4905: about GPT with an infinite context length
nev#4905: and how it could meta-learn everything without updating weights
nev#4905: can't find it
nev#4905: anyway
nev#4905: there's things like https://arxiv.org/abs/2010.00710 and https://openreview.net/forum?id=HklBjCEKvH
nev#4905: which I happen to be fond of
nev#4905: and you can make a connection
nev#4905: knn can be extended into softmax attention |
nev#4905: I don't have enough brain power to finish this train of thought
nev#4905: knn-lm parallels with attention
nev#4905: it should be more parallel than sequential learning
nev#4905: I think this seems to be what facebook is doing
nev#4905: first PKMs, then various long-range transformers with spans
nev#4905: tldr KNN-LM simp post
rom1504#5008: Cool ideas. I'm also interested in how to use fast knns in model trainings and inferences
cfoster0#4356: The Mods will not hesitate to ban you
nev#4905: :berk:
cfoster0#4356: Not you, rom1504
Immaculate#3168: jeez I just joined and saw the announcement from april fools. Got me almost 2 months later 🙂
nev#4905: is using KNN a bannable offence
alexyz#3459: what happened?
cfoster0#4356: Another low effort troll
alexyz#3459: lol
nev#4905: I can't tell how many layers of irony you're on
alexyz#3459: i'm doing well, hbu?
alexyz#3459: @cfoster0 troll
alexyz#3459: Thank you
cfoster0#4356: Thanks for pinging 👍 |
alexyz#3459: arxiv-sanity is back up 🥳
mange#7530: 👍
Kharr#7888: Has anyone played around with measuring how perplexity of GPT-Neo changes for tokens representing longer vs shorter subwords (in terms of character length)? I'm curious if length matters.
EricHallahan#1051: I haven't.
Kharr#7888: I'm working on some character level modeling with pooling and the model has a significantly more difficult time learning as compression increases (as you would expect). Should be similar for BPE so I was curious if someone had tested it.
juanjoshe18#1042: I have a question. Can GPT-Neo create a story with a length of 900,000?
Kharr#7888: No way to keep a sequence that long in memory, but you can use a sliding context window of size 2048 and generate forever. The model will only reference the latest 2048 tokens if you do this.
juanjoshe18#1042: how do I do a context window?
Louis#0144: Neo can also do stuffing btw
Louis#0144: Either doing knowledge graph stuffing which is summarizing and representing the summary as a KG
Louis#0144: or it could conceivably also do dense stuffing with prefix tuning
Louis#0144: But I haven’t tried yet
Louis#0144: That doesn’t buy you much extra space
Kharr#7888: Just truncate your tokens going into the model to 2048-1
Louis#0144: At most another few thousand tokens
juanjoshe18#1042: on min_length?
Kharr#7888: I don't use HF so not sure if they handle it out of the box. It's pretty easy to write your own sampling method with sliding window if they don't.
EricHallahan#1051: They should.
Louis#0144: Sliding windows have issues btw
Louis#0144: You rapidly start losing coherence |
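For reference, a bare-bones sketch of the sliding-window loop Kharr describes, assuming an HF `ForCausalLM`-style model (token budget and sampling are illustrative; as Louis notes, coherence degrades as early context slides out of the window):
```python
import torch

def generate_long(model, tokenizer, prompt, new_tokens=10000, window=2048):
    ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
    for _ in range(new_tokens):
        context = ids[:, -(window - 1):]  # keep room for the next token
        with torch.no_grad():
            logits = model(context).logits[:, -1, :]
        probs = torch.softmax(logits.float(), dim=-1)  # cast up before sampling
        next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
    return tokenizer.decode(ids[0])
```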
EricHallahan#1051: I don't ever pass 2048 tokens though.
juanjoshe18#1042: I'm new, I don't know how to do things with this model
juanjoshe18#1042: so I need help
EricHallahan#1051: First, welcome!
juanjoshe18#1042: thank you, you are very polite
EricHallahan#1051: Second, you should read our FAQ.
https://eleuther.ai/faq
juanjoshe18#1042: ok what should I do next?
Kharr#7888: Probably look through HF documentation if that's what you're using.
juanjoshe18#1042: what is a HF documentation?
Kharr#7888: HF = Hugging Face. What are you using for the model/how are you loading it? Is it something else?
juanjoshe18#1042: I'm using the google colab that other person provide me
Kharr#7888: 🤷 Not sure what you're working with in that case. https://huggingface.co/transformers/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin.generate is the documentation for generation methods in Transformers. That's the best I can do for you 🙂
finetune#0907: they just break with index oob error for max_length above 2049 :berk:
EricHallahan#1051: HF, ever janky.
𓅬 gabriel_syme 𓅬#3220: do the parameters in generate matter much? I literally just started using Neo models with HF, so not sure if quality of outputs depends a lot on those
𓅬 gabriel_syme 𓅬#3220: btw I love the fact it took me 2mins of setting up to generate stuff
EricHallahan#1051: Uhh... *yeah*, they make a huge difference.
EricHallahan#1051: Are you using finetune's patch?
𓅬 gabriel_syme 𓅬#3220: oops 🙂 is there a write up someplace to learn more? |
𓅬 gabriel_syme 𓅬#3220: nop, just started vanilla HF. I do plan to use his repo eventually
EricHallahan#1051: https://huggingface.co/blog/how-to-generate
𓅬 gabriel_syme 𓅬#3220: woah that's great Eric
𓅬 gabriel_syme 𓅬#3220: so much stuff I did not know
𓅬 gabriel_syme 𓅬#3220: also, seems easy to use
EricHallahan#1051: finetune's repo also has tail-free sampling as well.
𓅬 gabriel_syme 𓅬#3220: cool, will try it in the morning!
𓅬 gabriel_syme 𓅬#3220: I do love I can fit the 1.3B on my meager GPU as well 🙂
EricHallahan#1051: https://trentbrick.github.io/Tail-Free-Sampling/
EricHallahan#1051: What GPU?
EricHallahan#1051: If you can I do recommend using finetune's patch and then running 2.7B at binary16.
EricHallahan#1051: It is a huge step up IMO.
Kharr#7888: Have you tried this?
EricHallahan#1051: I haven't but @finetune has.
𓅬 gabriel_syme 𓅬#3220: I got a 2080, the 1.3B ran quite well (inference though, not finetuning)
𓅬 gabriel_syme 𓅬#3220: will try that, thanks!
EricHallahan#1051: Yeah, try FP16, you will get a nice performance bump regardless of which model you use.
𓅬 gabriel_syme 𓅬#3220: sanity check, it's xirider correct?
finetune#0907: find it kind of hard to tell a difference. overall i feel it works a bit better than top_p, but some people said it works worse for them so idk
𓅬 gabriel_syme 𓅬#3220: can I use your repo with the 1.3B model btw? |
finetune#0907: yeah
𓅬 gabriel_syme 𓅬#3220: I'd like to test offline
𓅬 gabriel_syme 𓅬#3220: cool 🙂
𓅬 gabriel_syme 𓅬#3220: cause I have an 8gb gpu here. I'll try to fine tune and see
finetune#0907: wonder if 8gb are enough for finetuning 1.3b. could work
𓅬 gabriel_syme 𓅬#3220: not sure at all, I know it used about 5gb to inference (more or less I guess) if that matters
𓅬 gabriel_syme 𓅬#3220: but without fp16
finetune#0907: finetuning 2.7b with 16gb vram works with fp16 and deepspeed zero 2
𓅬 gabriel_syme 𓅬#3220: cool!
𓅬 gabriel_syme 𓅬#3220: hey then I can fine tune it in the server I have access too, especially if deepspeed is enabled
finetune#0907: but cuda always eats around 1gb, so an 8gb card will have relatively less space
𓅬 gabriel_syme 𓅬#3220: cool thanks! Time to create a dataset 😄
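For anyone following along, a sketch of the kind of DeepSpeed setup finetune describes: fp16 plus ZeRO stage 2, which shards optimizer state and gradients across workers (the values here are illustrative, not a tested recipe):
```python
# DeepSpeed config expressed as a Python dict; pass it to deepspeed.initialize.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
}
```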
finetune#0907: what did you mean here btw?
cfoster0#4356: That exclusive license finally being put to use https://twitter.com/jjvincent/status/1397205895499689985?s=19
aze#1010: took them long enough
ethan caballero#6044: is there a paper/msft_blog?
cfoster0#4356: 🤷♂️
𓅬 gabriel_syme 𓅬#3220: I can see it now
𓅬 gabriel_syme 𓅬#3220: "The end of stackoverflow is nigh"
𓅬 gabriel_syme 𓅬#3220: I can finally do all my regex with this tool |
ersatz#0001: is that finetuned or just achieved with a custom prompt
Clint_Hercules_Aptera#1913: I wonder if we could do something similar with GPT-Neo using IBM's recent release of Project CodeNet earlier this month? https://bdtechtalks.com/2021/05/17/ibms-codenet-machine-learning-programming/
EricHallahan#1051: No way it is IMO.
cst#9766: Microsoft is doing something similar, afaik
cst#9766: But they seem to be relying on the full GPT-3 model
bmk#1476: ghpy time
EricHallahan#1051: We need a better tokenizer.
bmk#1476: we should make a code-specific tokenizer
EricHallahan#1051: Is there a thing where you partition the number of tokens for each modality?
bmk#1476: idk lol
EricHallahan#1051: Me neither.
EricHallahan#1051: The idea being, if not clear already, that you would explicitly allocate say 10% of your tokens to code and then your remaining stuff for standard text.
bmk#1476: no i want a tokenizer just for python code
bmk#1476: and i want to train a model using that tokenizer on just python
StellaAthena#3530: @bmk NeoX supports training a tokenizer
EricHallahan#1051: I am talking for general models, not pure code. It would be dumb to do that for pure code.
bmk#1476: well, i'd be using neo
bmk#1476: but i want a pure code model
bmk#1476: what's wrong with making a pure code model
EricHallahan#1051: I know that. |
EricHallahan#1051: Nothing, I am all for it.
EricHallahan#1051: But the regular Neo models also need a better tokenizer too.
StellaAthena#3530: Sure, but Neo doesn't have that functionality. Once you train the tokenizer you can use it wherever
EricHallahan#1051: I was just considering that concept for a moment.
bmk#1476: doesnt have what functionality?
StellaAthena#3530: Training a tokenizer?
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/846829616022814751/unknown.png
StellaAthena#3530: Oh, ok
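For the record, a sketch of how one might train such a code-specific tokenizer with the HF `tokenizers` library (the corpus file, vocab size, and output path are made up):
```python
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["python_corpus.txt"],        # hypothetical corpus of Python source
    vocab_size=32000,
    min_frequency=2,
    special_tokens=["<|endoftext|>"],
)
tokenizer.save_model("code_tokenizer")  # writes vocab.json and merges.txt
```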
Eleiber#8347: Has anyone trained or finetuned GPT-NEO with Spanish text?
StellaAthena#3530: Someone talked about doing this but we haven't heard much back
EricHallahan#1051: A couple people have asked, but nobody has followed through with one as far as I know.
alexyz#3459: Wouldn't it be a good idea to train a tokenizer along with it if someone were to do something with Spanish text?
EricHallahan#1051: No, not if you are planning on tuning.
alexyz#3459: as the Hugging Face one is for English (i think)
alexyz#3459: Not tuning, training
alexyz#3459: for tuning you couldn't
EricHallahan#1051: Yes, because it is derived from GPT-2.
EricHallahan#1051: ` SolidGoldMagikarp`
Eleiber#8347: I tried to finetune GPT-2 in Spanish some years ago, but the result was pretty bad because I used 300,000 messages from a Discord server I manage and I didn't format the dataset, so there were a lot of emojis, grammar errors, and incoherent text.
alexyz#3459: Why would you use Discord data? |
Eleiber#8347: I thought it would be a good idea, lol
alexyz#3459: it'd be a good idea for a chatbot
alexyz#3459: would be interesting
EricHallahan#1051: ~~SMS transformer wen~~
alexyz#3459: What is an SMS transformer?
Eleiber#8347: So yeah, it was something like this: https://cdn.discordapp.com/attachments/729741769738158194/846840845612744744/unknown.png
Eleiber#8347: Chaos basically
alexyz#3459: perfection
Eleiber#8347: That was my error for not replacing custom emojis and mentions with their names, so it was passing the mentions as literal strings.
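The kind of cleanup Eleiber describes can be done with a couple of regexes; a sketch (the user-ID-to-name mapping is assumed to come from your own export):
```python
import re

def clean_discord(text, id_to_name=None):
    id_to_name = id_to_name or {}
    # Custom emojis: <a:name:1234> or <:name:1234> -> :name:
    text = re.sub(r"<a?:(\w+):\d+>", r":\1:", text)
    # Mentions: <@1234> or <@!1234> -> @username if known, else @user
    text = re.sub(r"<@!?(\d+)>",
                  lambda m: "@" + id_to_name.get(m.group(1), "user"),
                  text)
    return text
```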
EricHallahan#1051: I spun a concept of having a model that worked at the char-level on seven-bit ASCII with a maximum context length of 160 characters.
alexyz#3459: Sounds interesting
EricHallahan#1051: Just for the pure uselessness of such a model.
Eleiber#8347: But yeah, I think I will ask for help in another server I'm in (a Spanish one) to make a coherent dataset so we can finetune GPT-Neo for Spanish.
Sphinx#2092: Could just grab the relevant subset of mc4.
Sphinx#2092: then rebuild the tokenizer, re-use embeddings in the intersection and keep going.
alexyz#3459: Is a multilingual Pile being worked on? On the Github it says it's a goal after Version 1 Pile
Bruce23#6204: Hi, I heard rumors about a new 6.7B model
Bruce23#6204: Are you releasing something like this?
alexyz#3459: It's being trained right now
Bruce23#6204: Wow, awesome! 🙂 |
Bruce23#6204: May I ask, do you plan to make it available in June?
Bruce23#6204: (i dont know how long such process takes)
EricHallahan#1051: As per our policy as mentioned in our FAQ (https://eleuther.ai/faq), we do not provide estimates on completion or release.
Bruce23#6204: You anticipated that question 😄
EricHallahan#1051: I was writing it long before you asked.
Bruce23#6204: I know
cfoster0#4356: It's also Ben's model, so he calls the shots on when it is done and/or released
cfoster0#4356: Not actively, no
alexyz#3459: thanks 👍
Bruce23#6204: Alright! 🙂 Thank you
aronax#9141: Will the 6.7B model be available on the HuggingFace inference API once it is released?
bmk#1476: it will if someone decides to figure out how to do that
bmk#1476: idk if anyone will
bmk#1476: so idk maybe?
Bruce23#6204: @finetune i studied your examples and saw parameters like "repetition_penalty_range" and "repetition_penalty_slope". I couldn't find them inside the Eleuther or HF docs. Would you enlighten me one more time?
finetune#0907: they are optional parameters i added to allow no/reduced penalties for less recently generated tokens. range specifies the number of tokens included in penalization and slope the penalty falloff curve
Bruce23#6204: Oh, so they are specific to your fork? Thanks! 🙂
finetune#0907: yes
Bruce23#6204: awesome 🙂
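A usage sketch; note that these two kwargs exist only in finetune's fork, not in upstream `transformers`, and the values here are illustrative:
```python
output = model.generate(
    input_ids,
    do_sample=True,
    max_length=1024,
    repetition_penalty=1.2,
    repetition_penalty_range=512,   # only the 512 most recent tokens are penalized
    repetition_penalty_slope=3.33,  # how sharply the penalty falls off with distance
)
```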
finetune#0907: should make some prs to hf some time, but there's not even been any feedback on my first one so far |
Bruce23#6204: They will blog about GPT-Neo 2.7B next week.
Bruce23#6204: To introduce the model to their client base
EricHallahan#1051: "They"?
Bruce23#6204: HuggingFace
EricHallahan#1051: Well their current implementation is incredibly buggy so they may want to fix it first.
Bruce23#6204: They did tweaks over the weekend
Bruce23#6204: "We now feel like it's in a stable and performant state and are preparing a blog post to demo it next week."
EricHallahan#1051: Where is this information?
Bruce23#6204: I was chatting with a rep
Bruce23#6204: I wanted it to work before I booted my own instance on AWS 😄
finetune#0907: don't see any changes made to the neo implementation recently
Kharr#7888: Maybe they are not merged yet?
Bruce23#6204: That info is from an hour ago
Bruce23#6204: Give them some time 😄
finetune#0907: maybe, but i also don't see any relevant pr
finetune#0907: other than mine
EricHallahan#1051: It kinda needs an entire rewrite lol, some of the design choices suck. They insisted that the `GPT2Model` class was totally inadequate for/incompatible with GPT-Neo, and then proceeded to write a flawed implementation as `GPTNeoModel` that is inconsistent and buggy.
Kharr#7888: Well, it's not hard to get the model stable so I'm sure they figured it out/will figure it out. I finetuned a version and released it internally to see how it stacks up before I decide to start lobotomizing it 🧑🔬 |
Kharr#7888: This is really funny because it's just a few lines of code to make it compatible (local masking in attention matrix and adjusting MLP)
EricHallahan#1051: Exactly.
EricHallahan#1051: We literally pleaded with them to do it that way and they refused.
Kharr#7888: That's unfortunate :blobsad:
EricHallahan#1051: It was mostly Leo's push but he was entirely justified in pushing for a `GPT2Model` implementation.
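For reference, the "few lines" Kharr mentions amount to something like a banded causal mask; a sketch assuming GPT-Neo's local attention window of 256:
```python
import torch

def local_causal_mask(seq_len, window=256):
    i = torch.arange(seq_len).unsqueeze(1)  # query positions
    j = torch.arange(seq_len).unsqueeze(0)  # key positions
    # Position i may attend to j iff j is not in the future (j <= i)
    # and j lies within the local window (i - j < window).
    return (j <= i) & (i - j < window)
```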
Bruce23#6204: Do these flaws belong to their inference API, or also to their transformers code for GPT-Neo?
finetune#0907: after changing the local attention part, there is still the difference that the neo implementation casts to fp32 inside attn
finetune#0907: that's not in gpt2
Kharr#7888: They are the same. The API just runs the transformers code.
Bruce23#6204: So my GPTneo is flawed 😐
Kharr#7888: Did you find that you needed this? I saw 0 dfference.
finetune#0907: i never checked tbh
finetune#0907: but it's a difference :berk:
Kharr#7888: I have it as an optional param in config and toggled on/off, couldn't see a difference after finetuning it in fp16
finetune#0907: interesting
finetune#0907: might be interesting to eval 2.7b in the regular neo code vs loaded into gpt2 to check if there's any noticeable differences. saw very small deviation in results of layers at fp16 between the two
finetune#0907: not sure if it's enough to actually make a difference tho
Kharr#7888: I'll do some more testing I guess. The final distribution seemed almost identical for a bunch of prompts. Once you start sampling, it washes away 🤷
alexyz#3459: Is reverse distillation possible: basically making the model bigger? Dunno why anyone would ever want that, but 🤷♂️ just wondering
Deleted User#0000: Yup, https://arxiv.org/abs/1911.04252 |
alexyz#3459: woah that's cool
Teemochu#8740: Depends how long it takes
Teemochu#8740: and whether the result can be easily trained further
hGI.unsure#2032: Hi, does anyone know how to disable the computational graph feature in pytorch?
chilli#5665: What does that mean
hGI.unsure#2032: I want to run computations completely without autograd
hGI.unsure#2032: It is interfering with the ram->vram bandwidth in some strange way.
chilli#5665: `with torch.no_grad()`?
hGI.unsure#2032: doesn't work properly
hGI.unsure#2032: .detach() gets my bandwidth from 6gb/s to 11gb/s
but it's not working on some other machines.
chilli#5665: How are you measuring bandwidth
hGI.unsure#2032: https://colab.research.google.com/github/arrmansa/Gpu-bandwidth-benchmark/blob/main/Gpu%20memory%20bandwidth.ipynb
chilli#5665: I'll take a look in a bit
hGI.unsure#2032: thanks a lot
Bruce23#6204: Is there a resource/list available that suggests "good parameters" (e.g. temperature) for specific use-cases?
Louis#0144: No
Louis#0144: Typically you don’t wanna go higher than like 1.0 or so though
Bruce23#6204: Aha
Bruce23#6204: Hm yea sometimes I get good results with 0.5, sometimes with 0.9 |
Bruce23#6204: Still trying to figure that out
alexyz#3459: What happens with TRC after the first month ends? Just want to know 🙂
Kharr#7888: You might benefit from looking at what temperature does in the code. top_p, top_k and temperature all change how you sample from the distribution and it can be confusing to understand what's happening.
EricHallahan#1051: ^
𓅬 gabriel_syme 𓅬#3220: yep good post
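A condensed sketch of what those three knobs do to the next-token distribution (simplified; the real HF implementation handles the nucleus cutoff slightly differently):
```python
import torch

def sample_next(logits, temperature=0.9, top_k=50, top_p=0.95):
    logits = logits.float() / temperature  # temperature sharpens or flattens
    if top_k > 0:                          # top-k: keep only the k best tokens
        kth = torch.topk(logits, top_k).values[..., -1, None]
        logits = logits.masked_fill(logits < kth, float("-inf"))
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, idx = probs.sort(dim=-1, descending=True)
    drop = sorted_probs.cumsum(-1) > top_p  # top-p: keep the smallest set of
    drop[..., 0] = False                    # tokens covering top_p probability
    sorted_probs[drop] = 0.0
    probs = torch.zeros_like(probs).scatter_(-1, idx, sorted_probs)
    return torch.multinomial(probs / probs.sum(-1, keepdim=True), 1)
```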
𓅬 gabriel_syme 𓅬#3220: I went to bed and the discussion is still the same, but informative 🙂
asparagui#6391: you write an email documenting all the amazing things you did with it + links to your blog posts/presentations/code and they give you scads more access
Bruce23#6204: Very good article for a beginner like me, thanks! 🙂
alexyz#3459: A Titan RTX on vast.ai for $0.60/hour, good or bad deal?
alexyz#3459: and 8 3090s for $8/hour
𓅬 gabriel_syme 𓅬#3220: it's alright you can get V100 for that price too I think
Louis#0144: I have yet to see a v100 there
Louis#0144: Lmao
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/846931984362373140/image0.png
StellaAthena#3530: The cut off one at the top is A5000
Kia#2550: How much per hour?
alexyz#3459: isn't CoreWeave only for companies?
alexyz#3459: it's literally in the image
Kia#2550: A5000
Kia#2550: I'll search it |
StellaAthena#3530: That is also in the image. It’s the one cut off at the top, where you can see the price but not the name
StellaAthena#3530: I don’t know.
EricHallahan#1051: ($1.15)
𓅬 gabriel_syme 𓅬#3220: Really? Used to be pretty common
Kia#2550: $1.15
StellaAthena#3530: Their pricing page is: https://www.coreweave.com/pricing
bmk#1476: I don't think coreweave have any titan rtxes
StellaAthena#3530: It tends to be similar to what Google charges, from what I’ve seen
𓅬 gabriel_syme 𓅬#3220: I also thought it was for companies when I saw it but did not try harder to create and start an account
Kia#2550: There V100 is like a 1$?
Kia#2550: Interesting
StellaAthena#3530: AWS V100s are also $1 I think
alexyz#3459: i mourn the loss of gpu.land 😦
alexyz#3459: was a nice service
StellaAthena#3530: I’m not in the market of buying cloud compute, but in the limited research I’ve done, CW seems to be on par with or slightly cheaper than AWS and GCP
Kia#2550: That's interesting
𓅬 gabriel_syme 𓅬#3220: Is there any point to cloud compute vs owned infrastructure for large companies?
𓅬 gabriel_syme 𓅬#3220: Would it be just training too big models or inference and deployment?
𓅬 gabriel_syme 𓅬#3220: Or is inferencing also more cost efficient on your own infra?
AI_WAIFU#2844: Security, cost, and data locality mostly. |
AI_WAIFU#2844: I wouldn't be even slightly surprised if AWS/GCP/Azure had backdoors and could snoop on what everyone was doing.
AI_WAIFU#2844: Especially for anyone who looks like a direct competitor.
𓅬 gabriel_syme 𓅬#3220: yeah that's a huge issue and with these companies very few companies are not competitors lol
𓅬 gabriel_syme 𓅬#3220: but what about cost?
𓅬 gabriel_syme 𓅬#3220: I always thought cost is less at 'medium' scales, like idk 4-8 GPU stuff
AI_WAIFU#2844: depends on usage
𓅬 gabriel_syme 𓅬#3220: not sure when I need to inference across 2000 people
𓅬 gabriel_syme 𓅬#3220: (as an example of a typical medium AEC company)
AI_WAIFU#2844: and egress is the worst, that's how they get you
𓅬 gabriel_syme 𓅬#3220: yeah that's another one I learned hard from GCS lol
𓅬 gabriel_syme 𓅬#3220: Like I feel for training 100% it's more cost efficient to buy 12 3090s or idk a DGX machine
𓅬 gabriel_syme 𓅬#3220: just not sure about inference. let's say I have a GPT Neo 2.7B design assistant and there's 5000 employees that might use it at any point. I guess that needs a cloud infra? (sorry I'm very bad at this)
tylerlastovich#3263: At a company I worked at we ended up buying a small 8-GPU rig for longer training cost savings. (GANs, not NLP, though.) Inference was still in the cloud though.
kindiana#1016: what does the company use gans for :thonk:
𓅬 gabriel_syme 𓅬#3220: war
𓅬 gabriel_syme 𓅬#3220: (sry)
AI_WAIFU#2844: you'd have to run the numbers, and make sure that what your using is not important enough that google can't take it from you. Also latency might be an issue depending on location
𓅬 gabriel_syme 𓅬#3220: thanks! that kind of makes sense and is what I was thinking
𓅬 gabriel_syme 𓅬#3220: cool thanks! I need to understand how inference happens at scale anyways
tylerlastovich#3263: Facial photos and synthetic datasets (generated.photos) |
𓅬 gabriel_syme 𓅬#3220: but yeah AEC companies totally paranoid and most (largest ones) use their on-site servers
𓅬 gabriel_syme 𓅬#3220: especially since 50% of the work is competitions for new projects
bmk#1476: when dedicated eleuther colo
AI_WAIFU#2844: Synthetic datasets are a big one. I knew a mining company that was augmenting simulator output with GANs so they could train classifiers for mining applications
bmk#1476: you know how supercomputers sometimes have stuff printed on the racks
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/846942107809742849/1200px-Titan2C_2013_284991280252129.png
𓅬 gabriel_syme 𓅬#3220: synthetic datasets are huge for us as well, in design. In a way, could be a silly application of my DALLE model
bmk#1476: i want an eleuther colo with the eleuther logo and then a goose :goose:
AI_WAIFU#2844: You know, I was thinking about that, if you rent out your compute when you're not using it, the price of GPUs isn't *that* high.
tylerlastovich#3263: I was a team lead at Cray 😄
AI_WAIFU#2844: You just need a bigfuck business loan.
bmk#1476: woah that's awesome
𓅬 gabriel_syme 𓅬#3220: I'll take one thx 🙂
𓅬 gabriel_syme 𓅬#3220: noob question: how many concurrent requests would a 3090 handle for like a Neo 2.7B?
tylerlastovich#3263: This is really untapped still and interesting. Not to hijack this channel, but I did a lot of work on this so DM me if you want.
AI_WAIFU#2844: No that's what this channel is for, tell us more.
chilli#5665: the issue is that you aren't doing `param.data = param.data.pin_memory()`
chilli#5665: once you do that I think the bandwidth is the same
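A sketch of chilli's point with a stand-in module (the actual benchmark is in the linked notebook):
```python
import torch

model = torch.nn.Linear(8192, 8192)  # stand-in for the real model

# Pin each parameter's host memory so host->device copies can use DMA
# and run at full PCIe bandwidth instead of going through pageable RAM.
for param in model.parameters():
    param.data = param.data.pin_memory()

gpu_params = [p.data.to("cuda", non_blocking=True) for p in model.parameters()]
torch.cuda.synchronize()  # wait for the asynchronous copies to finish
```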
Kyle Rose#8362: is there a channel / place for simple Q&A
chilli#5665: depends on how simple |
AI_WAIFU#2844: !faq
Kyle Rose#8362: read that
Kyle Rose#8362: it's when running the GPTNeo_example_notebook - https://colab.research.google.com/drive/1mH8kdtnmpWbMLcDVd7MI5w_xDj3YKJWs?authuser=1#scrollTo=sf_5E4fHFQIh&line=1&uniqifier=1
Kyle Rose#8362: i've followed all the steps, it's just not finding the dataset
AI_WAIFU#2844: > We welcome beginners to lurk and talk about topics they are knowledgeable of, but this is not the place to get intro-level resources or answers to basic questions.
Kyle Rose#8362: yeah
Kyle Rose#8362: hence the question of where would be a good spot
Kyle Rose#8362: sorry, thank you all
tylerlastovich#3263: At a high-level the amazing part of synthetic dataset production was how many companies immediately admitted having horribly biased internal datasets. Even huge companies. Gap-filling using conditional GANs was the most common ask. Second interesting point was how terribly murky the legal space is for AI-created works; tons of work to sort through it all.
tylerlastovich#3263: I agree here. As Neo goes more mainstream the questions will increase dramatically. Having a channel where end-*users* help each other will be valuable.
𓅬 gabriel_syme 𓅬#3220: this is so true. The diversity of generated designs in architecture is quite terrible with current parametric models. It was very insightful to simply plot the designs I was generating across a couple of metrics to see how small the coverage of the space was, and how biased it was to certain areas.
𓅬 gabriel_syme 𓅬#3220: One of the reasons why I started looking into QD and older generative methods, but they are hard to make work well :/ So, DL would help. Especially smth like DALLE
cfoster0#4356: If there's a need for that, there's nothing stopping folks from starting it
cfoster0#4356: Open source FTW
AI_WAIFU#2844: Yeah, we've discussed it amongst ourselves several times, but the thing is that we're *extremely* bottlenecked by engineering time, so we just decided to outright not do anything that looks like tech support.
StellaAthena#3530: That said, this is a research collective. We don’t have bosses, and everything is done by force of personality. **Be the change you want to see in the world:** if people show an interest in end-user experiments and dedication to spend the time to do good work its easy to get compute resources and a channel.
StellaAthena#3530: Just because it’s not what I think is the optimal use of my time doesn’t mean it’s not a good use of *yours*. We wouldn’t dream of telling people they can’t work on downstream stuff, but the most active users are more interested in upstream stuff
Kyle Rose#8362: specific to the discord I believe an admin has to create channels
StellaAthena#3530: If there’s a demonstrated community of interest in #general and #research that is committed to working on cool problems, I will personally make you a dedicated channel. |
StellaAthena#3530: What’s a use-case you care about?
tylerlastovich#3263: Sounds good! I just think it is a good protective measure and an easy place to direct questions. More as a protective measure against the core team's time than anything. I bet there will be plenty of people like myself that will be doing interesting fine-tuning as models get bigger.
tylerlastovich#3263: Fine-tuning for financial market knowledge, and advanced prompt design/consistency/randomness.
StellaAthena#3530: Can you elaborate? I’m not really sure what you have in mind here.
StellaAthena#3530: Like, do you want GPT-Neo to pick stocks?
tylerlastovich#3263: Ha, no. More like teaching Neo about what each company does by feeding it earnings report transcriptions, 10-Ks and S-1s. I run a financial information startup.
AI_WAIFU#2844: I was gonna guess 10-K summarization and data extraction.
StellaAthena#3530: Uh okay. I mean, I don’t get it because I know nothing about finance but *\*shrug\**
bmk#1476: finance? that's the thing that ensures my binary options are priced undutchbookable right
bmk#1476: (usually™)
AI_WAIFU#2844: no that's the thing where people with too much shit lend it out to people who claim to know what to do with it
AI_WAIFU#2844: in the service of getting more shit
tylerlastovich#3263: Tough crowd haha
StellaAthena#3530: @tylerlastovich @𓅬 gabriel_syme 𓅬 @Kyle Rose A large, unfilled niche is developing a general framework for fine-tuning. Ideally it would hook into GPT-NeoX and the Eval Harness as well. The best way to get a downstream applications channel is to start building one. If you can get some people together and show that you’re more than just talk, it’s pretty easy to get resources from EAI.
tylerlastovich#3263: My work is more about explaining how investing works, using GPT-3/Neo to bust industry jargon for normal people to understand.
Kyle Rose#8362: I figured out the error, it was a problem with the notebook's code. Thanks all.
bmk#1476: can you outline what you envision such a framework would look like?
bmk#1476: i have a hard time conceptualizing what it would do that would be nontrivial in complexity
bmk#1476: like shouldnt that trivially be part of neox
bmk#1476: just a `--resume_from` flag or something |
StellaAthena#3530: I don’t have something specific in mind, though the number of people who ask for help doing it has led me to assume it’s non-trivial. Perhaps I overestimate the competency of the typical person passing through, though 😛
bmk#1476: surely you know enough about what finetuning is to assess how hard it is, though
StellaAthena#3530: Some downstream applications involve adding additional layers to the network.
tylerlastovich#3263: How hard is the Python fine-tuning you're doing? I had fun playing with that today.
AI_WAIFU#2844: So sort of like rephrasing existing documents in a more layman understandable way?
bmk#1476: it takes about 20 lines of code with pyfra
EricHallahan#1051: `pyfra` stronk
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/846953692561145876/unknown.png
bmk#1476: (this is for a different tuning run but basically the only difference is the file names)
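For reference, here is a minimal sketch of the same kind of fine-tune in plain Transformers rather than pyfra. The model name, file path, and hyperparameters are placeholder assumptions, not the setup from the screenshot:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo's tokenizer has no pad token
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

# Hypothetical corpus: one document per line in train.txt.
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    # mlm=False gives the standard causal (next-token) LM objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```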
StellaAthena#3530: You’re just resuming training though. This would not be a way to train a part-of-speech tagger or do named entity recognition.
bmk#1476: just use huggingface then?
bmk#1476: it's pretty easy to do
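A rough sketch of what bmk is pointing at: Transformers' `AutoModelForTokenClassification` bolts a fresh classification head onto a pretrained trunk, which covers the "adding additional layers" case. The base model and label set here are made-up examples:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "B-ORG", "I-ORG"]  # hypothetical NER tag set
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels))

# The pretrained encoder loads as-is; the token-classification head on
# top is freshly initialized and gets trained on your labeled data.
inputs = tokenizer("EleutherAI trains language models", return_tensors="pt")
logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)
```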
tylerlastovich#3263: Pretty much, with some hopes to make a Q&A system too. It seems very needed in the investing market based on what I can see.
AI_WAIFU#2844: Yeah, there's a huge influx of retail investors, so that makes sense.
𓅬 gabriel_syme 𓅬#3220: I want to note that in the NLP / LM space I don't expect to be of great help developing frameworks, but I can and will test things out on downstream tasks. I can help more in multimodal/visual stuff I guess.
𓅬 gabriel_syme 𓅬#3220: That said, I do love how simple it was to use Neo, so now I'm going to learn how to fine tune it as well. It's fun getting into NLP after forever
UnsupervisedLearner#4148: Isn't it the job of the one seeking investment to market to potential investors?
StellaAthena#3530: I’m saying that if people get together and do cool things we will give them resources. I’m not naming a specific detailed plan to do a cool thing
UnsupervisedLearner#4148: When you write a speech you consider whom you're addressing
bmk#1476: im saying the particular example you give is confusing to me because it's totally unclear what the scope of the project would be |
StellaAthena#3530: It’s totally unclear what the scope of any project would be. Choosing a scope is step 1
cfoster0#4356: There was some idea floated about a space for Neo users to help one another/answer one another's questions and I'm not sure if this is meant to be that or something else
𓅬 gabriel_syme 𓅬#3220: I do have one idea, trying to get the dataset first before I propose 🙂 not really easy to do that
StellaAthena#3530: Forget it, it’s not important. What is important is that we are happy to provide space and resources to people who make progress on cool problems.
StellaAthena#3530: That’s the bottom line, regardless of the quality of my half-assed example
tylerlastovich#3263: I think I track what you are saying, and I believe it would be very valuable for many to have a clear, documented way to fine-tune, plus example projects that show off the prowess of Neo after being fine-tuned. It should probably be separate from an offer of resources though IMO. That should likely be done sparingly or else the world will start asking. 🙂 I will think if I have anything interesting enough that a wider audience would care about it.
Bruce23#6204: i think i read about it before but just can't find it. can i return only new tokens? (remove inputs from the result)
Bruce23#6204: or should that be subject to my postprocessing
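One common way to do this with Transformers' `generate` (assuming that is the API in question) is to slice the prompt off the output yourself; the model and prompt below are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

prompt = "In a shocking finding,"  # placeholder prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=input_ids.shape[-1] + 40,
                        do_sample=True)

# generate() returns prompt + continuation, so slice the prompt off.
new_tokens = output[0][input_ids.shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```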
CKtalon#7792: @tylerlastovich I think it can be done as a Q&A system, but it won't be of much use to the financially illiterate, since the answers will probably be the same financial jargon.
CKtalon#7792: https://cdn.discordapp.com/attachments/729741769738158194/846967309364101150/unknown.png
CKtalon#7792: as someone who knows what these are, the answer is perfect, but for someone who doesn't, they will be confused. In the case of American options, new option sellers might think that once the price hits their strike, someone will assign the options, etc. (quite common misconception)
Kyle Rose#8362: what's the cost to train neoX today vs when the chips are done?
CKtalon#7792: https://cdn.discordapp.com/attachments/729741769738158194/846968067422158878/unknown.png
CKtalon#7792: what you want is something that produces like an ELI5 answer
EricHallahan#1051: I am very confused by this question, can you clarify?
CKtalon#7792: Here's a real question on a covered calls subreddit. lol to the answer. https://cdn.discordapp.com/attachments/729741769738158194/846969503966298152/unknown.png
tylerlastovich#3263: Exactly. This is what I would be looking to train for, summarization + facts. GPT-3 knows the wiki definitions just fine. I care more about individual companies though than generic concepts. "What does XYZ company produce?" is closer to what I am looking to find. It is a supplement, not a replacement for hard data.
CKtalon#7792: you will need to supplement it with stuff from like seeking alpha
CKtalon#7792: and continuously refinetune it
CKtalon#7792: you don't want something from 10 years ago to influence your model's knowledge of a company |
CKtalon#7792: MSFT 10 years ago is very different from the MSFT today for example
CKtalon#7792: for AI which gets outdated almost immediately..not sure finance is a good field to apply it (especially knowledge/news)
CKtalon#7792: if you want to create a spam blog for investing, i guess it's easy. but if you want to create something of value..not easy
James#6892: On the topic of things getting outdated, how would you train a model to not get current events confused with past events
James#6892: wouldn't it have trouble with conflicting information from present and past?
James#6892: like how do you even solve this problem
James#6892: at scale
CKtalon#7792: don't think it's solved or solvable as of now
James#6892: Like GPT-4, if trained on 2020 data, will be about COVID, but then if combined with later data, will not have COVID
CKtalon#7792: like it's easy for an article to just write down some historical facts (it's already done in finance i believe). Like stock X moved Y% after reporting earnings a year ago. Or what kind of movement happened after it beat expectations in the same way, etc
James#6892: how will it consolidate the two worldviews
CKtalon#7792: don't think the technology is there yet: constant scraping the internet and finetuning its models
CKtalon#7792: too expensive
CKtalon#7792: that's what scifi does, right? browse through the internet and learn stuff
tylerlastovich#3263: Agreed here. Absolutely not creating a spam blog. This work isn't the core of my project, that is already a normal data API. I just love the research being done here and am curious how a Neo model fine-tuned on Edgar data will perform. I am making no promises that it will be world-changing.
James#6892: Hmmm, so it has to continually browse through and re-train, and have timestamps
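None of this is settled practice, but the simplest version of the timestamp idea is to prepend each document's date during training so the model can condition on when the text was written. A toy sketch:

```python
from datetime import date

def add_timestamp(doc_text, doc_date):
    """Prefix a training document with its date so the model can learn
    to treat knowledge as conditional on when it was written."""
    return f"[{doc_date.isoformat()}] {doc_text}"

print(add_timestamp("Company X announced record earnings...", date(2021, 5, 26)))
# -> [2021-05-26] Company X announced record earnings...
```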
CKtalon#7792: but the amount of work to preprocess the data, clean it, etc isn't trivial yet. haha
James#6892: I feel like we're on the edge of many world changing ideas/products but still not quite there yet with current technology limitations
James#6892: is going to be very interesting times
CKtalon#7792: once we are at quadrillions of parameters, and you can train a 10B token dataset in like a few seconds, probably.. but i don't think we are anywhere close to that in like 30 years
EricHallahan#1051: Open question.
James#6892: Haha I see, good to know
bmk#1476: doesnt seem that hard to me, just that nobody's actually gotten around to doing it irl with a real model
EricHallahan#1051: And presented it successfully.
James#6892: I wonder how many use cases actually require retraining every few seconds, vs every week, every month, etc.
CKtalon#7792: well, you want to know breaking news?
bmk#1476: personally i don't think it's the most interesting thing to work on but if someone can make the dataset ill run it on neo
CKtalon#7792: so that thing needs to scan the web. once it hits a threshold that it's a big deal, it retrains itself
James#6892: Sounds like something perfectly up Google's alley
James#6892: They are probably already running some experiments on this
EricHallahan#1051: Nah, they already have a great retrieval system.
EricHallahan#1051: And it doesn't even need to be trained, they just need to align people to do SEO for them!
Kyle Rose#8362: are there timelines for smaller models working towards the 175B parameter model
James#6892: Maybe not most interesting technically, but feels like something that is very useful in practice to know your model is being constantly updated
James#6892: I agree lol, they are crowdsourcing their indexing already
bmk#1476: well, i'll leave it to the people who are selling LM systems
EricHallahan#1051: It is our policy to not provide timelines or roadmaps as described in the FAQ.
bmk#1476: it's not very useful for the kinds of research i want to do with LMs
James#6892: i don't think the google search system will actually get replaced by some self-updating LM
James#6892: not anytime soon at least |
James#6892: Haha true
tylerlastovich#3263: Your code generation models are already more interesting. An area worth focusing on!
bmk#1476: well, glad that someone finds it useful, lol - i didn't actually filter the data, AI_WAIFU did, and that was prolly the bulk of the work since my share of the effort is amortized over all pyfra experiments
bmk#1476: so i guess it's a teamwork™
bmk#1476: anyways i dont have concrete plans to do more code generation stuff, that was just a random thing to test pyfra on
tylerlastovich#3263: It was giving me some pretty good answers, I was impressed. Better than what I could get with GPT-3.
hGI.unsure#2032: Hi, thanks for pointing out that just .pin_memory() works as well.
I've decided to continue using `param.data = param.data.detach().pin_memory()` since that seems to be working even if `torch.no_grad` is not used.
GPT-Neo 2.7B seems to be working at reasonable speeds on PCIe 3.
https://github.com/arrmansa/Basic-UI-for-GPT-Neo-with-low-vram
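For context, a rough sketch of the pattern being described: pinning weights in page-locked host RAM so CPU-to-GPU transfers over PCIe are faster when layers are streamed in on demand. This is an illustration of the idea, not the linked repo's exact code:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")

# Move every parameter into pinned (page-locked) host RAM. Pinned memory
# permits faster, asynchronous host-to-GPU copies over PCIe.
with torch.no_grad():
    for param in model.parameters():
        param.data = param.data.detach().pin_memory()

# A layer's weights can then be streamed to the GPU right before use:
# weight_gpu = param.data.to("cuda", non_blocking=True)
```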
StellaAthena#3530: Really? This seems heavily alignment-related to me
StellaAthena#3530: @bmk This is something that I think is very interesting. If you have a suggestion for the approach to take, I will make the dataset happen.
Sphinx#2092: You could probably just do something like what they did in the pitfalls paper
Sphinx#2092: If you can flatten those curves, or at least reduce the slope, that would be nice.
Louis#0144: @bmk I thought we were convinced da Vinci wasn’t 175b
Louis#0144: The results you showed me were convincing
Louis#0144: What happened
StellaAthena#3530: @Louis I believe the eval harness results indicated that da Vinci is 175B and that each smaller model is *two steps* smaller rather than one step
Louis#0144: This is da Vinci https://cdn.discordapp.com/attachments/729741769738158194/847104084883931176/image0.png
Louis#0144: It’s still lower than the 175b |
Louis#0144: No?
Louis#0144: It’s a step below
Louis#0144: Did we rerun eval
StellaAthena#3530: It's between the two
Louis#0144: It’s also not the numbers that we have on the research logs
Louis#0144: We must have rerun eval ?
StellaAthena#3530: 175B piqa = 81, but 13B piqa = 78
Louis#0144: But this is 80
Louis#0144: On the dot
Louis#0144: Where’s the other eval run?
Louis#0144: It’s not 81
StellaAthena#3530: The winogrande number is actually better than what GPT-3 reports for 175B
Louis#0144: Oh weird
Louis#0144: Maybe it’s just a different 175b checkpoint
StellaAthena#3530: This is each model in increasing order of size left to right https://cdn.discordapp.com/attachments/729741769738158194/847105591783456793/Capture.PNG
Daj#7482: For anyone interested, @StellaAthena and me will be on the Montreal AI Ethics panel here in 30min:
https://twitter.com/mtlaiethics/status/1396849052088668163
thenightocean#6100: just saw this: https://openai.com/fund/
alexyz#3459: 🤔
thenightocean#6100: 🤔 they seem to be moving from 'groundbreaking AI research lab' into a boring SaaS company |
bmk#1476: ~~let's ask them for compute for eleuther~~
thenightocean#6100: theory: they were scaling towards GPT-4 and hit a wall, and now when they discovered Scaling hypothesis is wrong, they have no choice but to pivot.
inox#5400: careful you'll give the alignment people hope
StellaAthena#3530: You should have listened to @Daj on the MAI panel. He was full of hope lol.
thenightocean#6100: totally random thought, I have no clue what I am talking about ... 😛
thenightocean#6100: will there be video somewhere?
inox#5400: @Daj is our shonen hero he can't give up
inox#5400: I believe in him
StellaAthena#3530: Yea! I’ll share it when it’s out
Daj#7482: I can make an update from 95% chance of doom to 80% sound like the huge amount of hope that it is!
AI_WAIFU#2844: god I hope so, now we just need to release something competitive with GPT-3 and that will kill their primary revenue stream.
thenightocean#6100: I surely hope not, I kinda want scaling to continue! Can we get a AI that does really cool things, and then temporarily hit diminishing returns just in time so we can solve the alignment, and then we restart the progress afterwards? Like, jeez, shouldn't be that complicated, right?
Daj#7482: That sure would be convenient if scaling worked that way :thonk:
Daj#7482: Like it was part of a preplanned plot :thonk:
Daj#7482: A training situation, or one might even say "simulation" :thonk:
Daj#7482: ~~no spoilers~~
Daj#7482: ~~I haven't read it yet~~
thenightocean#6100: also if anyone knows how this simulation ends please dont write any spoilers. K?
thenightocean#6100: (spoiler, we are in this fable: https://astralcodexten.substack.com/p/book-review-arabian-nights)
StellaAthena#3530: @Louis https://blog.eleuther.ai/gpt3-model-sizes/ |
alexyz#3459: that's very good proof of those model sizes 🙂
ethan caballero#6044: https://pbs.twimg.com/media/E2VTa9AWUAIqUrf?format=jpg
bmk#1476: @DanHendrycks I think you might be interested in this too
Teemochu#8740: wait if we're in a training situation then we need to deceive our trainers somehow
Teemochu#8740: it's in the Holy Book of Miles
kinoc#5731: And not just like solve the problem? er.... I know, I know. Where's the fun in that 😉
Teemochu#8740: We solve the problem once we're out of the sandbox
kinoc#5731: plus think of all the missed drama narrative points.
triggerhappygandi#0001: Yeah wtf is this
triggerhappygandi#0001: It feels so wrong
triggerhappygandi#0001: I half think they're going to try to make a new Tenor with the smallest videoGPT and possibly even their own streaming service with bigger models in the future.
triggerhappygandi#0001: And flush all alignment down the shitter.
Teemochu#8740: Meanwhile likely with a heavy emphasis on censoring usecases
triggerhappygandi#0001: I am _somewhat_ fine with it ngl
bmk#1476: also seems to mean that OA doesn't care that much about alignment anymore
bmk#1476: kinda sad
triggerhappygandi#0001: F
Teemochu#8740: turning into paperclips is fine but turning into slightly-morally-different humans ("slightly" being much broader than you'd think, ie I'd call the practices of any civilization of the past 500 years "slightly" different) isn't
bmk#1476: ~~time to make Eleuther the number one alignment group :tribalism:~~
triggerhappygandi#0001: :tribalism: |
triggerhappygandi#0001: Models corrupt. Big models corrupt bigly.
Teemochu#8740: All hail the corruption
bmk#1476: *taps sign*
Teemochu#8740: *Aligned* corruption though, so it's papers and CLIP clops instead of paperclips
EricHallahan#1051: > *That's funny, I don't feel corrupt. In fact, I feel pretty good.*
bmk#1476: "clip clop" is the perfect name for a clip model trained on pones
triggerhappygandi#0001: Don't acknowledge it.
triggerhappygandi#0001: I take it this means friendship ended with ~~mudasir~~ OpenAI before it even started?
EricHallahan#1051: - GLaDOS, *Portal 2*
triggerhappygandi#0001: Wheatley was chad.
bmk#1476: stalemate detected. stalemate resolution associate, please press the stalemate resolution button in the stalemate resolution annex
tylerlastovich#3263: The fund seems fine? Their production approval gate offers usage-verified deal flow for early-stage VCs. Outside capital simply aligns behind that.
triggerhappygandi#0001: They're a research group. Why are they doing vc stuff
bmk#1476: because Sam Altman
triggerhappygandi#0001: This will attract even more tech bros than before
bmk#1476: if anyone can do VC correctly it's altman
triggerhappygandi#0001: Why not Greg Brockman too
triggerhappygandi#0001: Ah
tylerlastovich#3263: Are they though? I think that ship has sailed.
triggerhappygandi#0001: Wdym |
triggerhappygandi#0001: Before this grant what did OpenAI do that felt like non-research group aligned to you
AI_WAIFU#2844: Yeah that's what everyone is mad about.
alexyz#3459: the API?
triggerhappygandi#0001: The API doesn't warrant a "ship has sailed" comparison imo
bmk#1476: time to edit wikipedia
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/847215089069064212/unknown.png
EricHallahan#1051: I agree.
alexyz#3459: time to ~~edit~~ get banned from wikipedia
TheAloof#8651: Wasn’t it one of Sutskever’s arguments that we (as in, people in general) will start to pay more attention once AI moves the needle on GDP? This makes sense to me, and the Fund is a move in that direction, too.
tylerlastovich#3263: I think they can do both, like most companies. But they are no longer pure research.
triggerhappygandi#0001: The API pays for Microsoft funding. But the grant is going out of the way
alexyz#3459: how about them switching from nonprofit to forprofit
triggerhappygandi#0001: In my mind they were like Deepmind; they do money making stuff but only so that they can focus on research. But this vc stuff is too cringe.
bmk#1476: > deepmind
> money making
:ConfusedDog:
triggerhappygandi#0001: Their research powers Google apps no?
bmk#1476: no thats gb |
triggerhappygandi#0001: They don't actively make money, just help google
AI_WAIFU#2844: 20% datacenter cooling savings go brr
triggerhappygandi#0001: I remember Sundar speaking about how Google maps is now 40% more accurate in poorer countries because of the efforts of Deepmind and brain.
bmk#1476: mostly gb
Teemochu#8740: Our datacenters need to be about 20% cooler :paperclop:
bmk#1476: id guess
triggerhappygandi#0001: Deepmind definitely powers Google products.
triggerhappygandi#0001: Muzero is used in youtube algo
triggerhappygandi#0001: Or will be soon if it isn't already
triggerhappygandi#0001: Now I wonder if they're doing videoGPT _solely_ to create AI Netflix.
tylerlastovich#3263: On the flip side, the VC stuff might let them do better research without having to charge users more for access or sell exclusive use contracts.
Teemochu#8740: As a power user (zero internal info here) I suspect some change to the recommendation model was made in mid-2020; a lot of the 2018-ish changes that made rabbitholing harder became more inconsistent around that time, the iron wall becoming more of a chain-link curtain.
triggerhappygandi#0001: I have noticed nothing but that doesn't say anything. I'll take your word for it.
triggerhappygandi#0001: I hope, but this is looking bad on their part nonetheless.
Teemochu#8740: (Basically it's really hard to get the algo to recommend things with low view count ever since that announcement after they came under fire for Elsa videos... it's now more hit or miss)
alexyz#3459: um what?
triggerhappygandi#0001: Like, creating a video based on the text?
triggerhappygandi#0001: That's lucrative if works.
Teemochu#8740: Believe me Netflix wouldn't be the first usecase for that lol
triggerhappygandi#0001: _ohhh_ |
Teemochu#8740: Think about it, if you only need a few minutes at most and it's ok if the story is a bit inconsistent...
tylerlastovich#3263: Can confirm. They are very interested in legality too.
Teemochu#8740: No one cares about the actual pizza, it's just a plot setup
alexyz#3459: A video generative thing that could generate literal films will not be here this year
alexyz#3459: because that's a combination of so many tasks
triggerhappygandi#0001: I didn't say it will, but that if that's where the head's at
alexyz#3459: and yeah, it wouldn't be the first usecase
alexyz#3459: like you could do so many more things with it
Teemochu#8740: Music is a good bet for something that would be lucrative though
triggerhappygandi#0001: They already did put out a music gen transformer.
alexyz#3459: Someone needs to make a better Jukebox
triggerhappygandi#0001: But the copyright stuff makes jukebox 2 near impossible
alexyz#3459: Imagine being able to listen to music that would go closer and closer to your tastes
alexyz#3459: Why?
alexyz#3459: You can't copyright a *voice*
alexyz#3459: you can copyright the music itself
alexyz#3459: the lyrics
EricHallahan#1051: No, you pretty much can lol
alexyz#3459: you can?
triggerhappygandi#0001: You can't? Then what happens to the transformative music that the model spits out? |
Teemochu#8740: inb4 some randos from (wait how do I redact the name of the tracker without saying the name) publish a sota generative model
triggerhappygandi#0001: I'm willing to bet all record labels will push back so hard you will not even be able to open source it
triggerhappygandi#0001: If the model is too good.
EricHallahan#1051: Anything in that domain is cursed to be confronted by IP law and the recording/music industry.
triggerhappygandi#0001: I played around with a colab notebook to generate music with jukebox. It was already too impressive even with noisy background.
Teemochu#8740: Jukebox sounds like dream music
alexyz#3459: Jukebox 5B_lyrics
triggerhappygandi#0001: Also, remember all the videos copystriked on youtube because they use 1 second of copyright music
alexyz#3459: It's 5 seconds
triggerhappygandi#0001: Nintendo has a history of banning videos.
Teemochu#8740: Like, it gets the feel of exactly how music sounds in a dream just right
EricHallahan#1051: It makes me facepalm every time I see someone use 15.ai
triggerhappygandi#0001: What's that
alexyz#3459: It's a voice cloning thing
alexyz#3459: which is only online like 15 days a year
Teemochu#8740: Text to speech. Lots of ponies.
Teemochu#8740: Joint collaboration from MIT and /mlp/ (not joking btw)
EricHallahan#1051: Because they are literally inches away from driving total copyright supremacy.
alexyz#3459: What do you mean by that?
alexyz#3459: There are other services which do the same thing, there's no lawyers after them |