chirp#4545: @𓅬 gabriel_syme 𓅬 i did implement the Geva paper just enough to verify it was legit, but it's not too useful for explaining individual example inputs, because for real inputs there are thousands of memories that are highly activated
chirp#4545: @bmk looking at individual units is tricky, because each Transformer feedforward layer is pretty densely activated. You need to reduce the dimensionality somehow
See https://twitter.com/nostalgebraist/status/1345115593964355584
chirp#4545: ^ that's basically what i am trying to solve
chirp#4545: and yes i am doing a logit lens thing
chirp#4545: high level idea:
- start with a GPT-Neo model and an example input (anything you want)
- show logit-lens visualization to show "where the action is happening"
- build a fast "dataset example lookup service" so you can see *what* is happening at each important location in the model
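A minimal sketch of the logit-lens half of this idea, assuming a Hugging Face GPT-Neo checkpoint (illustrative code, not chirp's actual implementation):
```python
# Logit lens: project each layer's hidden state through the final layer norm
# and the unembedding matrix to see the model's "best guess" at that depth.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B").eval()

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states[0] is the embedding output, then one entry per block
for depth, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h))
    top = logits[0, -1].argmax()
    print(f"layer {depth:2d}: {tok.decode(top)!r}")
```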
chirp#4545: My hope is that the "dataset example lookup" functionality will make the logit lens stuff a lot more useful
bmk#1476: ah that makes sense
bmk#1476: and that also explains why jay alammar was looking at SVD'd stuff
chirp#4545: ooh is he doing something similar?
bmk#1476: yeah it came up in the conversation
chirp#4545: ah nvm he's doing neuron visualizations
chirp#4545: or is he doing something new?
bmk#1476: https://jalammar.github.io/explaining-transformers/
chirp#4545: ah yeah i just looked at that today 🙂
bmk#1476: but yeah this is something im super interested in and if we can show similar effects of single neurons/whatever activating for many related concepts that would be :ultrazucc:
bmk#1476: definitely keep me posted if you find anything interesting
chirp#4545: https://cdn.discordapp.com/attachments/729741769738158194/851320893858381854/unknown.png
chirp#4545: ^ qualitative example -- random layer, small dataset, and it already gives pretty decent results
bmk#1476: what's going on here
chirp#4545: main problem atm is it is SUPER slow
bmk#1476: I assume square brackets is last token
bmk#1476: is the number showing which token activates the strongest at a layer?
chirp#4545: ah no the number is just the index in my dataset
bmk#1476: oh
chirp#4545: i am curious though if you find this useful
bmk#1476: I'm not sure how to interpret this image
chirp#4545: "other sentences which produced a similar activation at this layer"
bmk#1476: ohh
bmk#1476: so this is all sentences that have a token that has an embedding at layer n that looks close to the embedding for this sentence's last token
chirp#4545: yes!
chirp#4545: well
bmk#1476: and said token is outlined with [] and by varying n you can look at what different layers are doing
chirp#4545: same activation at FF layer, which is the *change* in the embedding in that layer - should be better at picking out what that layer specifically is doing
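A rough sketch of the lookup service being described (all names invented for illustration): precompute the per-token *change* each FF block makes to the residual stream over a dataset, then retrieve the dataset tokens whose delta at the same layer is most similar to the query's.
```python
import numpy as np

def build_index(ff_deltas):
    # ff_deltas: (n_dataset_tokens, d_model) array of FF-layer output changes
    norms = np.linalg.norm(ff_deltas, axis=1, keepdims=True)
    return ff_deltas / np.clip(norms, 1e-8, None)

def nearest_examples(query_delta, index, k=5):
    # cosine similarity against every indexed token, top-k dataset indices
    sims = index @ (query_delta / np.linalg.norm(query_delta))
    return np.argsort(-sims)[:k]
```
A brute-force dot product like this is presumably part of why it's slow; an approximate-nearest-neighbor index would be the obvious speedup.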
bmk#1476: here's an idea for something to try
chirp#4545: i do worry a little that it's just picking dataset examples with the same next word, which would be boring
bmk#1476: find something that has two totally different ways to phrase a similar idea
bmk#1476: and pick a bunch of examples for each
bmk#1476: and then look at pairwise similarity between these two sets of examples *as you change the layer number*
bmk#1476: or other related concepts
bmk#1476: pick something like say "NNs" and "GPUs"
bmk#1476: find sentences that contain nns
bmk#1476: find sentences that contain gpus
bmk#1476: pair them up arbitrarily
bmk#1476: my hypothesis is as layer depth increases, similarity will slowly climb, and then drop again near the end
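A sketch of that experiment under one obvious setup (mean-pooled hidden states per sentence; padding handled crudely):
```python
import torch
import torch.nn.functional as F

def layerwise_similarity(model, tok, set_a, set_b):
    tok.pad_token = tok.pad_token or tok.eos_token  # GPT-style tokenizers lack one
    def pooled(texts):
        enc = tok(texts, return_tensors="pt", padding=True)
        with torch.no_grad():
            hs = model(**enc, output_hidden_states=True).hidden_states
        # mean-pool tokens per sentence at every layer: (n_layers+1, batch, d_model)
        return torch.stack([h.mean(dim=1) for h in hs])
    a, b = pooled(set_a), pooled(set_b)
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    # mean pairwise cosine similarity between the two sets, one value per layer
    return (a @ b.transpose(1, 2)).mean(dim=(1, 2))
```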
triggerhappygandi#0001: Interesting comparison between tpu-v3 and V100 in the "diffusion>GAN" paper by openai
> We convert their TPU-v3 estimates to V100 days according to 2 TPU-v3 day = 1 V100 day.
bmk#1476: sounds about right if they mean tpu cores
triggerhappygandi#0001: I hate this {accelerator-name}-{time} notation for flops
EricHallahan#1051: Yes, it is 100% unacceptable IMO.
bmk#1476: just wait till we have swarm
bmk#1476: "100 v3-8-days"
bmk#1476: which is decidedly NOT the same thing as 25 v3-32-days
bmk#1476: the bigger the pods, the harder it is to reserve
triggerhappygandi#0001: :zucc:
triggerhappygandi#0001: Pain
bmk#1476: and the faster the interconnect and therefore more mp you can do
EricHallahan#1051: We should use AUh lol
𓅬 gabriel_syme 𓅬#3220: wait is a V100 2x faster, what does this mean?
EricHallahan#1051: It means nothing because it is a useless measurement.
𓅬 gabriel_syme 𓅬#3220: you can directly generate images in #the-faraday-cage-archive or use the pinned notebooks in #art
kindiana#1016: I mean, accelerator-time is better than flops imo
EricHallahan#1051: Welcome back, unfortunately the bot is down right now.
𓅬 gabriel_syme 𓅬#3220: (oh did not see it's down)
𓅬 gabriel_syme 𓅬#3220: but you can use colab easily with the notebooks!
EricHallahan#1051: Yeah just use the Colab.
BoneAmputee#8363: it's back now
EricHallahan#1051: It has come a long way since its origins.
triggerhappygandi#0001: probably means one v3-8 is 4x more perf
𓅬 gabriel_syme 𓅬#3220: ohhhh the opposite
𓅬 gabriel_syme 𓅬#3220: wtf, what? that's such an unintuitive way to say it no?
kindiana#1016: I mean
kindiana#1016: its saying 1 v100 = 2 v3 cores
𓅬 gabriel_syme 𓅬#3220: it would help if they had the word cores in there, at least for me
𓅬 gabriel_syme 𓅬#3220: but yeah now it makes more sense
chilli#5665: I think they’re both fine lol
chilli#5665: They just tell you different things
chilli#5665: Just report both
kindiana#1016: efficientnet: :blobsad:
chilli#5665: Well, you can’t tell how flop-efficient it is without knowing the flops
kindiana#1016: yeah, just making a joke about how overly optimizing for flops can be bad
chilli#5665: + lower flops can still be valuable even if it’s wall clock slower
chilli#5665: Due to things like inefficient kernels or specialized hardware to make it faster
нυηтєя#0156: Lmao I got this answer to a simple question `Hey` with aitextgen module and GPT Neo model
нυηтєя#0156: 20.9.2015:
Seth Jansen, former director of the Office of Women’s Issues in the United States, criticized the Senate for failing to investigate an
18.3.2015:
Pelosi responded, as the current majority
13.4.202015:
Rebecca Brown, who served in the House
8.12.2015:
Seth V. Drouville, who also served
7.10.2015:
Nancy McElhone and Rebecca Brown, a
8.13.2015:
Paula P. DeLeon, who served
7.11.2015:
John W. Adams, who
7.12.2015:
Seth Jansen, the current
7.13.2015:
Quinn, who served
7.14.2015:
Seth Jansen, who
7.13.2015:
Randy Binder, who
7.14.2015:
Rachael Rottman-Garr, who
7.14.2015:
None
Kia#2550: It's already a thing(-ish)
EricHallahan#1051: I don't see why this is particularly interesting.
нυηтєя#0156: Well, I was just testing the aitextgen module of Python, and idk why, that got printed ¯\_(ツ)_/¯
krigeta#6645: Hello, I am very excited to see this awesome GPT-Neo. Could somebody please help me find a colab/guide on "how to use GPT-Neo to predict a story's future from a given story as input"?
EricHallahan#1051: Welcome! I suggest you read our FAQ (https://eleuther.ai/faq) and the #rules.
To answer your question, I think a simple web search would ironically probably do you better than asking here, or alternatively try one of the communities in the #communities channel.
gammascalpset#9792: loving this though
> This seems to be a path to making an AGI which cares about people to the same extent and for exactly the same underlying reasons as people care about other people. After all, we would have the important ingredients in the algorithm, we can feed it the right memes, etc. In fact, we can presumably do better than "intelligence-amplified normal person" by twiddling the parameters in the algorithm—less jealousy, more caution, etc. I guess I'm thinking of Eliezer's statement here that he's "pretty much okay with somebody giving [Paul Christiano or Carl Shulman] the keys to the universe". So maybe the threshold for success is "Can we make an AGI which is at least as wise and pro-social as Paul Christiano or Carl Shulman?"... In which case, there's an argument that we are likely to succeed if we can reverse-engineer key parts of the neocortex and subcortex.
krigeta#6645: Hello sir,
1. I read the FAQ and learned that my PC won't be able to run it, and there is no other explanation of how to do what I am looking for.
2. Googled for the past month but was not able to find anything like it other than a chatbot (got horrible results as well).
3. Hopping over to the communities now.
EricHallahan#1051: Huh, I know there are a bunch of them out there, not that I pay particular attention to them.
gammascalpset#9792: like, given how interested we are in AI alignment, it's weird that we're not interested in the wiring details that make some humans altruists - which seems like a complex objective that isn't easily captured in one utility function (not saying it's not, but if it is, no human would be able to write that function atm)
triggerhappygandi#0001: @kindiana a v3-8 is the actual physical thing right?
triggerhappygandi#0001: And the bigger pods are made by stacking multiple of these
EricHallahan#1051: Yep, with a high-speed interconnect.
triggerhappygandi#0001: So a single one of these is about 4 GPUs worth... nice
EricHallahan#1051: Yeah, TPUs are cheap compute.
EricHallahan#1051: They are so enticing until you snap back to the reality of them.
Daj#7482: fwiw, I don't think humans are aligned and I consider "handing the keys to Paul Christiano" to be a last resort option lol. The problem is that I think humans aren't _inner aligned_ (he has some later posts talking about this https://www.alignmentforum.org/posts/DWFx2Cmsvd4uCKkZ4/inner-alignment-in-the-brain). I basically expect human reward circuits to not at all generalize to the kind of situations a powerful AGI will encounter (and probably wirehead). So I think there's a fundamental difference between "a powerful system that models human values and implements our considered judgement/reflective equilibrium" and "literally human values", and I expect the latter to _not work_. I have talked and disagreed with Steven about some of the implications for alignment he talks about in his posts, but I also think it's worth having someone investigate.
gammascalpset#9792: already gave this one a read
gammascalpset#9792: I agree with the "probably wirehead" statement, but since I can't think of any better pointers atm, I think I'll allocate some of my time to neurosci
Daj#7482: I'm more and more becoming convinced myself that inner alignment is the true alignment problem
Daj#7482: and that outer alignment might really be solvable by https://www.alignmentforum.org/posts/Nwgdq6kHke5LY692J/alignment-by-default / pointers type ideas
Daj#7482: which may involve directly modelling or understanding neuroscience circuits
Daj#7482: but in that case it seems worth studying the brainstem/hypothalamus. The way I see it the brain has a number of inner alignment mechanisms that are roughly strong enough to withstand optimization pressure like we saw in the ancestral environment, but will pathetically fail in the near future (and are already failing)
Daj#7482: So the hard problem is developing new _robust_ inner alignment mechanisms. Relaxed Adversarial Training? Some kind of hyper advanced transparency and oversight tools?
gammascalpset#9792: part of the midbrain's reward algo (or at least what seems to work somewhat like a reward algo) is not just wired to external inputs but also to parts of the neocortex
gammascalpset#9792: which only makes the plot thicken, cause we'd have to give up the assumption that the cortex is a blank slate (or would we?)
Daj#7482: Well yeah, the model that Steven proposes is that the neocortex is kind of like an "amplification operator" on the hardcoded reward circuits in the brainstem/hypothalamus
gammascalpset#9792: my point is that the midbrain gives rewards for things that it can't possibly understand, so it has to be wired to parts of the neocortex that have at least some level of hardwiring towards recognizing certain things
gammascalpset#9792: think of complex social situations
Daj#7482: Some more posts lol:
https://www.lesswrong.com/posts/szeKeZwuQhFxirfBY/is-rl-involved-in-sensory-processing
https://www.lesswrong.com/posts/jNrDzyc8PJ9HXtGFm/supervised-learning-of-outputs-in-the-brain
https://www.lesswrong.com/posts/zzXawbXDwCZobwF9D/my-agi-threat-model-misaligned-model-based-rl-agent
Daj#7482: Steven has pretty simple explanations for this
Daj#7482: No hard wiring required
Daj#7482: Just have the world model predict the future reward
Daj#7482: Oh wait that might be in a post he hasn't published yet
Daj#7482: Which I read a draft of
Daj#7482: soon then lol
gammascalpset#9792: that would imply that there are no complex social situations that are innately rewarded, only rewards that are learnt when the social situation leads to some kind of pleasure
Daj#7482: yep
gammascalpset#9792: I don't want to claim this is true or false with the current mind budget I can give it but it seems like a bold statement
Daj#7482: seems pretty plausible to me tbh
Daj#7482: You can still have moderately complex things hard coded
Daj#7482: like "things that look like you = reward looking at those"
Daj#7482: I think we still underestimate how much complex behavior is just emergent from interacting with a complex environment
gammascalpset#9792: true
gammascalpset#9792: the ancient part of our brains is still pretty huge compared to our largest ANNs
Daj#7482: I actually don't remember ever reading good estimates of how big the brainstem is in neurons/synapses
gammascalpset#9792: I thought I had counterproof of this in an intro neurosci book I was reading https://cdn.discordapp.com/attachments/729741769738158194/851377779278217236/Screenshot_2021-06-07_at_10.31.13.png
gammascalpset#9792: but I googled "ventral pallidum" and it's part of the basal ganglia 😦
Daj#7482: Keep a look out for Steven's upcoming post, there's some mindblowing stuff in there about how the brain does RL
gammascalpset#9792: I don't have a LW account yet 😛
Daj#7482: I'm sure I'll spam it everywhere here when it comes out lol
chinesesoup#6725: Btw have you guys ever thought about making gptneo do math? Might sound like a silly idea but I kinda noticed it can kinda solve some math especially when you let it break it down in steps
chinesesoup#6725: In principle you could make a calculator module so gptneo just uses an output to put a formula in a calculator, then it gets calculated and it continues with the input
chinesesoup#6725: Might be a little too specific tho to implement in the general model
gammascalpset#9792: has already been done with other transformers with good results, no time to link the paper rn
user91010#6777: "answer this 1st grade math question" is historically a great way to suss out chatbots, gonna be fun when that's no longer true
quinn#9100: https://github.com/deepmind/mathematics_dataset
gammascalpset#9792: Question: do you need an IQ of 200 in order to come up with a question for an IQ test that is accurate at around ~200?
gammascalpset#9792: Think of a test for which you can take a long time to answer (much longer than 15 minutes) but not quite unlimited time
gammascalpset#9792: Otherwise the answer is obviously no if you just leverage the test taker's slowness (which you can't for AIs)
gammascalpset#9792: Thinking about how smart *you* would have to be to suss out a chatbot that can do a little bit of math/logical reasoning
gammascalpset#9792: Most humans don't seem to have good math/logical reasoning skills when you take them too far from the training distribution (tribal environment, chasing animals, what they studied at uni for 4 years etc.)
user91010#6777: fwiw gpt-3 passes the Sally Anne test (with the temperature lowered, and names changed to prevent it from cheating)
gammascalpset#9792: To give some credit to most humans, they do pass this test
gammascalpset#9792: To take that credit away again, I'm not sure I'd call it "outside their training distribution"
chinesesoup#6725: Yea but wouldn't that also be a really cool extra feature for gptneo? Maybe by triggering it by saying solve 19 / 5, you can use a calculator
chinesesoup#6725: To which it responds calculator: 19/5 or something and then the result gets calculated and gptneo can use that result in its context
chinesesoup#6725: You would just have to generate a calculator dataset so it understands it can use a built in calculator thats not in the neural net
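A toy sketch of that loop (the marker format and helper names are invented): scan generated text for a calculator request, evaluate it safely, and splice the result back into the context before continuing generation.
```python
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def safe_eval(expr):
    # Evaluate plain arithmetic without eval(), rejecting anything else
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def expand_calculator_calls(text, marker="calculator:"):
    out = []
    for line in text.splitlines():
        out.append(line)
        if line.strip().lower().startswith(marker):
            out.append(f"result: {safe_eval(line.split(':', 1)[1])}")
    return "\n".join(out)

print(expand_calculator_calls("calculator: 19 / 5"))  # -> result: 3.8
```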
gammascalpset#9792: I think as a feature it's easy enough for users to write that it doesn't need to be written into the lib
chinesesoup#6725: You could just make a flag to pass it and allow that. Some people might only want to use it to try and get their math homework done without creating a dataset and training the model themselves xd. It's just another capability that probably gets progressively more useful as the project progresses, with no (or almost no) additional overhead
gammascalpset#9792: *Some people* :thonk:
chinesesoup#6725: I'm just saying this might actually be worth considering. Remember, gpt3 has also been done already so its a really weak argument
chinesesoup#6725: As for "its easy enough for users to write"... yea easy enough for some users, not all. It does add an extra feature that some people might like quite a lot and would actually consider using and running using something like openchat
Louis#0144: Gm my goslings
Louis#0144: This is a good dataset
chinesesoup#6725: I could probably write something to generate math problems that require a bit of reasoning
chinesesoup#6725: You know like the guy who buys a whole cart of melons at the store? Lol
chinesesoup#6725: Or someone who tries to fit 67 liter in 1.5L bottles
gammascalpset#9792: I wonder if it's that trivial? I'm not even sure we could reliably generate math problems that wolfram alpha can't solve with some heuristics and a huge search
AI_WAIFU#2844: This is funny because to me it's the opposite. I think we can come up with asymptotically inner aligned agents fairly easily, and deal with deviations from the ideal with corrigibility + impact regularization + scaling. Getting outer alignment on the other hand seems doable with pointer stuff, but the details are unclear to me.
Daj#7482: cool lets combine our half formed pseudo ideas into one whole formed pseudo solution :berk:
StellaAthena#3530: @chinesesoup This would be a genuinely useful thing to create.
gwern#1782: no. consider reaction time, working memory, or crystallized intelligence measures such as vocab tests
krigeta#6645: What would be the best format for a story dataset?
StellaAthena#3530: @Louis
Louis#0144: Hi
Louis#0144: What kind of story dataset
Louis#0144: There’s many kinds
krigeta#6645: its superhero type: Dragon Ball Z
Louis#0144: lmao
Louis#0144: Not what I meant
Louis#0144: What are you trying to accomplish
krigeta#6645: lol, Actually I want to use GPT-Neo to predict the future events based on the previous story, plot and characters
krigeta#6645: and the only thing which I can use is google colab
Louis#0144: Oh
Louis#0144: So
Louis#0144: That’s a very open problem
Louis#0144: You probably want to use COMET
Louis#0144: In which case you should look into storing knowledge graphs
krigeta#6645: Actually I am new to this, it would be awesome if there is a guide for it or something like that?
Louis#0144: https://arxiv.org/abs/2104.05837
Louis#0144: This is an easy introduction
krigeta#6645: so that means GPT-Neo is not what I am looking for?
Louis#0144: Probably not
Louis#0144: If you want to predict next events in stories
Louis#0144: Rather than predicting text
Louis#0144: Predicting text isn’t that useful here
Louis#0144: Language models write a lot of filler
Louis#0144: Neo is no exception
Louis#0144: GPT3 stories kinda suck too
krigeta#6645: filler would be nice as well if it is possible to do that
Louis#0144: That isn’t predicting events then
Louis#0144: You just want to write stories
Louis#0144: Those are very different tasks
Louis#0144: They share almost none of the same framework
krigeta#6645: indeed it is, but the paper you sent me would take me a million years to make something out of; looking for something like a colab notebook or something on github
Louis#0144: Para comet has a nice api
Louis#0144: It’s on GitHub
Louis#0144: It takes 30min to set up
krigeta#6645: let me check
Louis#0144: Para comet is for five sentence stories
Louis#0144: However
Louis#0144: Summarizing + sliding window is good
krigeta#6645: its parallel comet?
Louis#0144: Paragraph
Louis#0144: https://github.com/skgabriel/paracomet
krigeta#6645: thanks, don't know why it wasn't coming up when I searched on github
krigeta#6645: one more thing like you said this
krigeta#6645: how can I train a model on colab to do this for filler writing?
Louis#0144: Just finetune neo
Louis#0144: That isn’t really storytelling though
Louis#0144: You’re just training neo to bullshit
stella#0420: there is a rodent among us https://cdn.discordapp.com/attachments/729741769738158194/851470693777539092/image0.png
stella#0420: what a creature
Louis#0144: @Daj
TaiAurori#6781: crypto scam bots
stella#0420: if any mod wants to deal with them their id is "851411153317920808"
TaiAurori#6781: hooray
rikuwu#0001: 851411877345886239 https://cdn.discordapp.com/attachments/729741769738158194/851470806352396319/unknown.png
HuffGLaDTem#3584: i got one from a different account
Daj#7482: ugh
stella#0420: mmm rv
Daj#7482: Alright we're on it
krigeta#6645: any colab to do that?
TaiAurori#6781: @Deleted User is the one that dmed me
stella#0420: raid mode 👍
Louis#0144: Not that I’m aware of
pebbles#7130: ye I get one this time too
HuffGLaDTem#3584: https://cdn.discordapp.com/attachments/729741769738158194/851470905603653670/unknown.png
rikuwu#0001: :whenthe: might be time to pre-emptively make a report channel like the guys at Solana did
pebbles#7130: https://cdn.discordapp.com/attachments/729741769738158194/851470934988292157/unknown.png
Louis#0144: @Daj we need captchas
HuffGLaDTem#3584: @Deleted User dm me
Daj#7482: Doesn't work against this kind of attack
stella#0420: you guys got a join log i'm assuming?
Haxxardous#9240: i received the scam as well, hopped in here to mention it
krigeta#6645: which would be the best possible model to work with 12gb ram of colab?
Louis#0144: 1.3b
krigeta#6645: thank you :harold:
MrDragonFox#1766: different user now - same scam
Louis#0144: Wtf
Impaeling#0890: Who should I report the bot to?
Louis#0144: @Daj can you @ everyone and tell them to disable receive messages from users
MrDragonFox#1766: https://cdn.discordapp.com/attachments/729741769738158194/851471451353514014/Screenshot_2021-06-07_at_15.43.23.png
stella#0420: who up building a model to classify those lole
Louis#0144: For the time being
Impaeling#0890: https://cdn.discordapp.com/attachments/729741769738158194/851471470391197707/20eef0e0799ba5765b3d7ee77cbdfc61.png
alexyz#3459: Just make an announcement saying "be careful, nobody's gonna give you free crypto lmao"
mr_seeker#1337: Talking about models: trying to fine-tune a 2.7B gpt-neo model on 2 GPUs but getting a constant date with the OOMKiller. Anyone who knows how many GB it requires before it shuts up?
MrDragonFox#1766: ya its multiple users
hGI.unsure#2032: Hi, does anyone know the peak vram requirement and model size (in GB) of the 6B model?
stella#0420: at this point just send their id it's the same message
stella#0420: not tryna minimod but yea
stella#0420: :aolman:
Louis#0144: No one has it running in an environment you’d like to finetune it in yet
alexyz#3459: Just use TPUs
Louis#0144: Unless you want to use Jax
Daj#7482: We should have banned all the bots, tell us if you get anything from this point on
Daj#7482: They all join at the same time
Daj#7482: like amateurs
Daj#7482: in blocks of ~40
Daj#7482: very annoying and Discord doesn't get you the tools to handle it
stella#0420: yeah
mr_seeker#1337: Got a script that automatically disconnects when finished? Hate to have it run overnight with nothing to do...
stella#0420: there are a few bots with decent raid protection
Daj#7482: Would an announcement to tell people to turn off their messages be useful? Feels more like a disturbance
Daj#7482: It's not a raid that's the problem
Daj#7482: They can instantly get the user list and DM
stella#0420: oh yeah you mean that
Daj#7482: Even if we isolate and ban them quickly
Louis#0144: Yes tbh
alexyz#3459: Just say that you should be careful
pebbles#7130: or maybe put something about these attacks in the join message / information / FAQ ?
bmk#1476: we can use an airlock like that one pony discord
alexyz#3459: because nobody's giving you $20k of ETH for free
Louis#0144: Yeah an airlock would be useful
bmk#1476: Optimalverse I think
stella#0420: i mean most people would probably just ignore it, doubt many would read the announcement to begin with
EricHallahan#1051: Can we revoke all invites?
Daj#7482: Did we check if that actually works? If so yeah
bmk#1476: they said it works
EricHallahan#1051: And just recreate?
stella#0420: sounds like a bad idea haha
Louis#0144: It might be the perma invite on our site
hGI.unsure#2032: I just want to know the max vram needed for inference in 16 bit
Daj#7482: Then we should probably set up an airlock
StellaAthena#3530: It's easy to make an airlock that works
stella#0420: idk where the stuff is referenced
alexyz#3459: Maybe it can't get the userlist if it's in an airlock
Daj#7482: yeah if this is the case we should just do that
Daj#7482: Mild inconvenience for new joins but that's not a big problem
alexyz#3459: Maybe the best solution is just to not use Discord, as all these raids are tailored for Discord
Daj#7482: Fun fact: The bots joined yesterday at the _exact_ same time too lmao
bmk#1476: ~~ok we're switching to urbit~~
Daj#7482: This is so amateurish
alexyz#3459: They don't care about amateurish
stella#0420: hold on guys let me make a SLACK channel really quick
alexyz#3459: they go on a server
alexyz#3459: and then dm
Daj#7482: I'm setting up ICQ
stella#0420: that's like the major thing we've got on openai
alexyz#3459: and immediately leave
bmk#1476: ew
stella#0420: the creatures using slack
alexyz#3459: Why not Element?
Xseleon#1545: Targeting techy discords is also stupid.
stella#0420: mnnrnrg
Daj#7482: Yahoo Messenger
Louis#0144: Guys we’re moving to MSN messenger
Louis#0144: Or a big MySpace group chat
alexyz#3459: They're targeting people who know enough about crypto to fall for the scam
stella#0420: let us target the one group of people who will know it's a scam 100%
alexyz#3459: but not enough to not fall for the scam
Louis#0144: Or BIM
alexyz#3459: there's a very specific niche
Daj#7482: Who is savvy enough to know how to send Ether but would fall for a scam like this lol
Xseleon#1545: It's a bad scam if it's so specific lmao
stella#0420: yeah exactly
Louis#0144: A kid with their moms credit card
alexyz#3459: Well it works apparently
Louis#0144: Kids are tech savvy but stupid
alexyz#3459: otherwise they wouldn't be doing it
Xseleon#1545: True, the scammers are making more money than me
stella#0420: thing is that wouldn't be an issue if they had like a guide on "how to get crypto"
bmk#1476: a pretty big percentage of ethereum users
stella#0420: but at the bottom of their message https://cdn.discordapp.com/attachments/729741769738158194/851473050549354496/image0.png
stella#0420: they've got that accounted for too!
stella#0420: stupid
chinesesoup#6725: Yea, but bmk just said you guys aren't looking for any more data for the pile currently
Louis#0144: Oh yeah most ethereum users are brain dead
Louis#0144: That’s true
alexyz#3459: ...
Louis#0144: Leo said to
Louis#0144: Said it too*
Louis#0144: 😛
stella#0420: WHAT ANOTHER STELLA
stella#0420: HELLO
StellaAthena#3530: So? Doesn't mean we can't do other interesting things with it. It won't be added to the Pile, but at a minimum I will personally use it
stella#0420: oh i have nick perms
Louis#0144: @chinesesoup we have multiple data collection projects going on rn
Louis#0144: None of them are going into the pile
StellaAthena#3530: Hey!
stella#0420: mitosis
stella#0420: there is another
chinesesoup#6725: Yea I'll make a framework, then you would be able to add different objects (bottles, pipettes, buckets, glasses, ...) with an amount. The objects would be grouped in specific categories depending on how they could be used, and then it's just a matter of adding more objects for the generation. I have a math question book with some pretty tricky questions and I will definitely look into that to get a lot of different ways to formulate the questions
bmk#1476: https://arxiv.org/abs/2103.03874
chinesesoup#6725: Im gonna add a few objects of every type to start with
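A minimal sketch of that kind of templated generator (the object list and template are invented for illustration):
```python
import random

CONTAINERS = [("bottle", 1.5), ("bucket", 10.0), ("glass", 0.25)]

def container_problem(rng):
    name, size = rng.choice(CONTAINERS)
    total = round(rng.uniform(5, 100), 1)
    question = f"How many {size}L {name}s do you need to hold {total} liters?"
    answer = int(-(-total // size))  # ceiling division
    return question, answer

rng = random.Random(0)
question, answer = container_problem(rng)
print(question, "->", answer)
```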
pebbles#7130: unfortunately people do fall for these scams, it's really annoying. If they were 0% effective we'd never see them
Daj#7482: I honestly doubt this
Daj#7482: I think even if they were 0% effective they would still exist
Daj#7482: I used to hang out in hacking forums, and there was a common scam where you would sell "get rich quick" books
Daj#7482: The tricks in them only _sound_ feasible, but don't work at all
Daj#7482: So the grift is the smart scammer sells the book or the spam bot or whatever, the stupid scammer then tries to use the (non-functional) scam to make money, but just goes broke
Daj#7482: It's very easy to use the conjunction fallacy to convince some chump that your discord spam bot trick will totally make them $$$
bmk#1476: second order scams :bigbrain:
Daj#7482: exactly lol
inox#5400: wow it's like parasite ecologies
gammascalpset#9792: all good points, but these happen to be the ways of measuring IQ that might be least applicable to AIs
bmk#1476: metatumors
gammascalpset#9792: also, aside from working memory, you could argue that by measuring reaction time or crystallized intelligence you're not really measuring g, you're measuring stuff that is easier to measure and relying on the correlation they have with g in humans
gammascalpset#9792: which imho is not a very good correlation because as a decently clever 21 year old I have a very shitty reaction time and my vocab is not extraordinary cause I had very little interest in reading for most of my life
bmk#1476: honestly, scamming scammers is a valuable public service
pebbles#7130: ooh, this I did not consider, good point
gammascalpset#9792: ok, I got pretty decent vocab, and adhd hinders my reaction time considerably
gammascalpset#9792: i might have rushed judgement ( did I mention I have adhd yet )
inox#5400: it's interesting that evolution run long enough always ends up with parasites and parasites that feed on the parasites etc
pebbles#7130: even our own DNA seems to have parts which basically jump about the genome copying themselves, seemingly without doing anything "useful"
gammascalpset#9792: I love how the concept of demons in AI alignment basically formalizes the common opinion that ~~some~~ middle managers are parasites
Daj#7482: Sufficiently advanced rationalism is indistinguishable from mythology
gammascalpset#9792: you could even argue that capitalism is a parasite (not with a political intent, just in the sense that it's an optimization demon and is not perfectly aligned with human goals)
pebbles#7130: The main AI safety problem is basically just King Midas
gammascalpset#9792: might have to delete this comment lol, don't want to start no politrib shit
Daj#7482: Calling the market economy a misaligned optimizer is standard canon around here I think lol
pebbles#7130: [also no politrib] I'd argue it's kind of the other way around. We're like the single cells that make up the organism of capitalism.
Daj#7482: related: https://twitter.com/RokoMijicUK/status/1338564636938006529
pebbles#7130: Some bees that are not quite eusocial will kill any bee (other than the current queen) that starts to become fertile. This way they keep each other in check
Daj#7482: Inner alignment at work!
Daj#7482: Apoptosis is also an inner alignment mechanism
inox#5400: it does get muddy, like are our gut bacteria parasites?
gammascalpset#9792: you might want to read The Selfish Gene, it explains why this is in the interest of individual bees
inox#5400: what about the fish parasites that bite off the tongue and then do the job of the tongue?
chinesesoup#6725: Some are and some aren't
pebbles#7130: yeah, these are super similar, especially if you look at insect colonies as superorganisms
gammascalpset#9792: so I wouldn't call this a demon
Daj#7482: What if it's just a better tongue, that would be dope
Daj#7482: (and I know it's not)
inox#5400: I thought it was a slightly better tongue?
Daj#7482: is it?
inox#5400: but it eats some of what you eat?
Daj#7482: if so that's really funny
inox#5400: it drinks blood I think
Daj#7482: My tongue also eats some of my food
Daj#7482: Coincidence, my tongue also gets its nutrients from my blood
inox#5400: yeah but everyone feels weird about it when the tongue has its own nervous system and organs
gammascalpset#9792: tl;dr each bee has incentive to keep the queen alive and *force her* to reproduce as much as possible
offspring of the queen carries 3/4 of the worker bees' genes, whereas most living beings that use sexual reproduction only ever get to produce offspring with 1/2 gene resemblance
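The arithmetic behind that 3/4 figure (standard haplodiploidy relatedness, not from the chat itself): drone fathers are haploid, so full sisters share all of their paternal genes and half of their maternal genes on average.
```python
paternal = 0.5 * 1.0   # half the genome is paternal and identical between sisters
maternal = 0.5 * 0.5   # the maternal half is shared 50% on average
print(paternal + maternal)  # 0.75
```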
Daj#7482: Our gut has its own nervous system too
Daj#7482: humans are basically hivemind creatures
Daj#7482: don't think about it too much lol
gammascalpset#9792: also super interesting that it's the workers enslaving the queen, if you want to put it that way, not the other way around
Daj#7482: Yes but each _individual_ bee has an incentive to mate over the queen, since they would pass on 1/1 of their genes
Daj#7482: It's a prisoner's dilemma
Daj#7482: so alignment is necessary
chinesesoup#6725: Sometimes they literally "hug" the queen to death lol
pebbles#7130: yeah, a lot of people think the queen is in charge, but that's not the case, it can be kinda 50/50
inox#5400: or the fungus in a leafcutter colony
gammascalpset#9792: not really, they would pass 1/2 of their genes
gammascalpset#9792: you only pass 1/2 of your genes to your children
Daj#7482: oh maybe I have to reread that book
pebbles#7130: sometimes worker ants will physically drag the queen to a new nest if she doesn't want to leave
Daj#7482: It's been like...years
Daj#7482: Humans were domesticated by wheat
pebbles#7130: the relationship is mutal 😤
gammascalpset#9792: not sure if it's 50/50, I also don't remember the book exactly, but maybe each bee has an incentive to try and not become the queen
gammascalpset#9792: but once it does become the queen, its best strategy is complying
pebbles#7130: what I meant was that the whole colony can be thought of as the organism, and it's the colony which reproduces, so different parts of the superorganism can make other parts do things they don't really want to do
pebbles#7130: I wouldn't be surprised if you could find cases where the workers force the queen to do something, and vice-versa
pebbles#7130: and ants can basically stroke the chin of another ant to make it regurgitate food
gammascalpset#9792: sure, but remember that each agent works only at their own levels and any high-order structure is more of a convenience that could be thrown away at any time
Daj#7482: I mean the same applies to our bodies
Daj#7482: This is how tumors happen
Daj#7482: It's alignment all the way down
gammascalpset#9792: indeed
pebbles#7130: hmm, not sure I quite agree tbh
gammascalpset#9792: the selfish gene argues that selection doesn't happen at the level of individuals, but genes competing for their frequency in the gene pool
pebbles#7130: yeah, it's all genes competing
pebbles#7130: but the most effective genes can be ones which cause cells to work together to form organisms, and sometimes for those organisms to work together to form super-organisms
pebbles#7130: I do get what you're saying though
gammascalpset#9792: and brains are machines built by DNA with the best heuristics it can encode in them (sounds obvious to AI people, but I think most bio people would realize this for the first time when reading the book)
inox#5400: not the best heuristics just good enough
gammascalpset#9792: the best heuristics it can find
inox#5400: sure idk why I'm being pedantic
gammascalpset#9792: cause people would be annoyed irl but here we love it
inox#5400: horizontal gene transfer makes all this stuff so wild, especially in organisms like fungi where individual species are hard to distinguish
pebbles#7130: horizontal gene transfer is wild. Like imagine as a computer program, just finding some random code on the internet and running it no questions asked
(not quite a fair analogy, because the random code would itself have to be from something which could self-replicate, so fully malicious instructions are rare)
inox#5400: ...yeah that would be weird and totally not the normal way to install stuff on windows in 2021
Daj#7482: This is basically how online learning works too lol
Daj#7482: Well I guess not exactly
Daj#7482: Exchanging weights between NNs would be equivalent
pebbles#7130: the point is there's no trusted source or trying to get code which does something specific. It's just "hmm, this looks like code, let's see what it does"
pebbles#7130: yeah, I actually haven't tried horizontal gene transfer in any of my evolutionary NN stuff, but I have inheritance and crossover
pebbles#7130: and actually, mini-rant, but I see the inheritance and crossover done so badly (in evolutionary algorithms). I don't know if what I'm doing is super-inefficient, but it seems to work, and is much more biologically analogous
CRG#8707: Well... <https://en.wikipedia.org/wiki/Karyoklepty>
pebbles#7130: also I've found that inheritance by itself basically guarantees fatal genetic diseases, and it's a huge problem, but the fix is just crossover, it's really cool
CRG#8707: Ciliates are wild: https://cdn.discordapp.com/attachments/729741769738158194/851484149512405002/Cb7nqsx.png
gammascalpset#9792: what do you think is bad about how crossover is usually done?
gammascalpset#9792: I've seen that crossover in evo algos is usually done at random spots, which biologically is a big LOL afaik
gammascalpset#9792: I mean, if it was really done like that, wouldn't the vast majority of offspring just fail to develop?
pebbles#7130: I've seen the crossover implemented as taking the value for one parameter and swapping it with the value of another
gammascalpset#9792: the disadvantage is that new stuff gets invented less often, and natural evolution solves this problem by... not solving it, being slow, and waiting for random mutation to change the crossover markers to something that still works
pebbles#7130: which is the equivalent to saying "let's swap how active these two enzymes are", which is biologically infeasible
gammascalpset#9792: lol
gammascalpset#9792: at least natural DNA is huge, so it can get a lot of variety even only by tweaking gene activations
pebbles#7130: if your method of creating a child from two parents, is to randomly decide which parameter is inherited from which parent, then that isn't very biologically analogous afaik
pebbles#7130: but that's the most common way afaik, and sure it works, and tbh I actually haven't officially tested if mine is much better / even worse
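For concreteness, the two styles being contrasted might look like this (my framing, not pebbles' actual code): per-parameter uniform swapping versus contiguous one-point crossover, which is closer to biological recombination.
```python
import numpy as np

def uniform_crossover(a, b, rng):
    mask = rng.random(a.shape) < 0.5  # each parameter inherited independently
    return np.where(mask, a, b)

def one_point_crossover(a, b, rng):
    point = rng.integers(1, len(a))   # contiguous chunks, like chromosomes
    return np.concatenate([a[:point], b[point:]])

rng = np.random.default_rng(0)
p1, p2 = rng.normal(size=8), rng.normal(size=8)
child_a = uniform_crossover(p1, p2, rng)
child_b = one_point_crossover(p1, p2, rng)
```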
gwern#1782: every measurement of g is 'measuring stuff that is easier to measure and relying on the correlation'... there is no ground truth, although brain imaging *might* be getting there
pebbles#7130: biological evolution uses inheritance and crossover for most of the variation, instead of mutation
pebbles#7130: the mutation rate that most EAs use is much higher than irl mutation rate
gwern#1782: I've expanded my CYOA proposal with a lot of details, if anyone is interested
Deleted User#0000: really any measure of anything other than raw sense data is "measuring stuff that is easier to measure and relying on the correlation". sometimes the influence of the measured property on the proxy is just more consistent and less noisy than other times
hGI.unsure#2032: Can someone tell me the 6B Vram requirements for inference and model size at 16 bit?
kurumuz#5695: thanks, will reread it
kurumuz#5695: try it yourself lol
UnsupervisedLearner#4148: I am very very very much interested. I skimmed and did not see a *how you can help* section
What part of the overall design process are you at now? How can I contribute?
gwern#1782: how can you help? do it, I guess
gwern#1782: I don't want to do it because the NN and stats is so trivial and it's all about building the website and database and that's all stuff I hate @_@
gwern#1782: if I am doing that I'm not reading papers or having more cool ideas like 'decision transformers for preference learning'
UnsupervisedLearner#4148: Hah, okay that's fine with me. I need to fluff my CV
UnsupervisedLearner#4148: I have this other thing that is time sensitive but if someone else expresses interest around here please ask them to dm me
UnsupervisedLearner#4148: I don't know about trivial though, sampling from LM is still unsolved IMO
bmk#1476: ~~ok mr "spends literally months polishing his website up"~~
gwern#1782: that's *why* I know I want to avoid more webdev lol
gwern#1782: @kurumuz is pretty interested
kurumuz#5695: we pretty much have the whole thing setup already, need to separate it as a product though
gwern#1782: ah, but I just solved *that* too! see #prosaic-alignment about Decision Transformers https://www.reddit.com/r/reinforcementlearning/comments/nqp9nh/decision_transformer_reinforcement_learning_via/h0xyia4/
gwern#1782: (today and yesterday have been very good days)
kurumuz#5695: you solved sampling?
gwern#1782: for CYOA purposes, I think so
kurumuz#5695: O.o
UnsupervisedLearner#4148: who is we and where is it set up?
kurumuz#5695: novelai.net
kurumuz#5695: well that is who we are
ethan caballero#6044: how do you condition it with "+inf" reward/return? All the upside-down rl stuff still seems to struggle with conditioning on returns greater than the returns observed during training.
gwern#1782: you use a large number. DT does seem able to predict useful rewards larger than seen
gwern#1782: I think they have some charts about that?
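For context, return-conditioned sampling in a Decision Transformer looks roughly like this (the returns-to-go bookkeeping follows the paper; the `model.predict` interface here is invented):
```python
def rollout(model, env, target_return, horizon):
    rtg = [target_return]            # returns-to-go: the conditioning signal
    states, actions = [env.reset()], []
    for _ in range(horizon):
        action = model.predict(rtg, states, actions)  # hypothetical API
        state, reward, done, _ = env.step(action)
        actions.append(action)
        states.append(state)
        rtg.append(rtg[-1] - reward)  # decrement by the reward achieved so far
        if done:
            break
    return states, actions
```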
CRG#8707: > One effect of this type of modeling is that we perform conditional generation, where we initialize a trajectory by inputting our desired return. Decision Transformer does not yield a single policy; rather, it models a wide distribution of policies. If we plot average achieved return against the target return of a trained Decision Transformer, we find distinct policies are learned that can reasonably match the target, trained only with supervised learning. Furthermore, on some tasks (such as Qbert and Seaquest), we find Decision Transformer can actually extrapolate outside of the dataset and model policies achieving higher return! https://cdn.discordapp.com/attachments/729741769738158194/851544944362520576/E24vEZ0UcAgI7K6.png
ethan caballero#6044: ^@gwern these results seem mostly negative; right?
gwern#1782: but not entirely |
gwern#1782: plus, of course, I envision this being in a feedback loop. it will bootstrap up in quality
gwern#1782: it's unclear to me whether any preference learning approaches satisfy that criterion, anyway. at least, no one has shown the PPO approach does that
gwern#1782: (somewhere where tree search works could probably do that... like a model searching a go game tree should be able to increase reward far past demonstrations. sadly, tree search doesn't work for GPT)
ethan caballero#6044: cries in @chilli
aze#1010: is there a codebase/example code on how to actually load and infere it available anywhere?
Louis#0144: Ye
Louis#0144: There’s an eval script
Louis#0144: In Ben’s repo
aze#1010: i see ty
aze#1010: i assume it only works w/ v2/v3-8? is there support for cpu
45#2247: is that "learning how to play AI dungeon from human feedback"
45#2247: where human ranks his preferences? paying EAs to rank their moral estimation of each scenario
45#2247: haven't looked much into it but wtf is that target return
Louis#0144: Oh uh
Louis#0144: Idk
Louis#0144: I don’t think that will work
Louis#0144: Going to be entirely honest with u
Louis#0144: There’s a lot of work into playing text adventures
Louis#0144: Everything needs to be feature engineered
Louis#0144: Non feature engineering hasn’t gotten far yet
Louis#0144: Typically you measure how well these models perform by how far they can get into Zork
Louis#0144: No one has beaten zork
Louis#0144: We’ve gotten very far
Louis#0144: But no one has beaten it
45#2247: wait i thought gwern was just saying in his comment to rank different AI dungeons scenarios
Louis#0144: even Turing can’t beat zork. I’m pretty sure OAI tried zork too. I don’t think they got far
Louis#0144: Yes but zork is a much easier scenario
Louis#0144: Much much easier than AID
Louis#0144: I saw gwerns post
Louis#0144: My advisor did too
Louis#0144: I’m skeptical but I think it’s worth trying
alexyz#3459: Zork?
Louis#0144: I think he did atleast (?)
Louis#0144: Ill go send it to him
45#2247: MIT hardcore game according to wikipedia
EricHallahan#1051: Do you not know what Zork is?
45#2247: "learning to morealize from human feedback" ™️
45#2247: *moralize
alexyz#3459: Read the wikipedia article, I heard of it, I just didn't know the name
Louis#0144: Zork is the grand daddy of all text adventures
Louis#0144: It isn’t the First
Louis#0144: But it’s by far the most famous
45#2247: eliza is open-source
45#2247: zork when
EricHallahan#1051: *Adventure*
Louis#0144: > “But it occurred to me while thinking about a Choose Your Own Adventure version of GPT that Decision Transformer is the right way to optimize your model for games to do finetuning on fiction text / learning to rank possible completions / learning to generate high-scoring completions all in a single model, using just supervised learning, in a single training run, with no new algorithms.”
@gwern I agree with you here
Louis#0144: We are collecting a dataset for this in #carp
Louis#0144: To use with a clip model
Louis#0144: Our original approach was also PPO ranking
Louis#0144: I think this is promising
Louis#0144: But I do not think it would be very good for this use case
Louis#0144: I think it would basically be a sentence level discriminator
Louis#0144: Also idk what u guys were thinking, having a discussion about storytelling and not tagging me
Louis#0144: I’m like our domain expert in this area
Louis#0144: Lmao
Louis#0144: TLDR though is this is 100% worth a try
Louis#0144: I don’t think it would do particularly well tho
Daj#7482: What? You don't even use LMs for everything, you're clearly an amateur :^)
Louis#0144: LMAO
Daj#7482: Imagine using Knowledge Graphs
Daj#7482: look at grandpa over here
gwern#1782: I think solving Zork is, no matter what preference learning arch you use, quite difficult, and not a preference learning setup at all. I think preference learning will work for CYOA AID because it's very 'local' but for zork, you have those crazy long range dependencies and bizarre puzzles, which is difficult for any approach period
Louis#0144: Yes
Louis#0144: That was my reasoning too for the dataset we are collecting
gwern#1782: if you had a *lot* of trajectories solving Zork, maybe, but...
Louis#0144: PPO will work locally
Louis#0144: Not globally
Louis#0144: You’re right
gwern#1782: whereas for CYOA/AID, it's much more of a local 'I know it when I see it' sort of thing
Louis#0144: But I think this method of decision transformers might also work globally
Louis#0144: I think that’s the main advantage of an approach like this
gwern#1782: it is naturally a preference learning problem. there's no way to 'solve' AID. there's no right answer. you just want a wide diversity of high quality stories coming out of it, no matter what the user throws at it. it's improv and esthetics, not solving a convoluted puzzle
Louis#0144: My thoughts too
Louis#0144: I would be hesitant to call it a choose your own adventure though
Louis#0144: I guess that’s what’s concerning
Louis#0144: Choose your own adventure needs long term dependencies to be fun
Louis#0144: You need to be able to reference something from 20 decisions ago
Louis#0144: Otherwise it’s not really a choose your own adventure
Louis#0144: It’s an endless maze
Teemochu#8740: not sure about finetuning but inference around 14-15gb iirc
Teemochu#8740: assuming you can get it running (there's a decent amount of custom code needed for now from what I hear, supporting rotary and all that)
Teemochu#8740: one thing to be aware of as well is 6B has decoupled encoder/decoder embeddings
Louis#0144: In that regard I think the puzzle component of zork might be of significant value
gwern#1782: yes, but it's much weaker than the fiddly puzzle logic. as long as you have the broad strokes right. GPT can handle that pretty well, IMO. it's not perfect, but it doesn't need to be. user choice of actions can help enforce consistency, and they will do so subconsciously, discarding poorly consistent choices
kurumuz#5695: Isn't this the problem we're trying to solve anyway?
Louis#0144: Don’t get me wrong though I totally think it’s worth trying
Louis#0144: You should do it
Louis#0144: I’d play the fuck out of it
Louis#0144: LMAO
kurumuz#5695: Like, AID has no structure to it and can't go quite far back, hence it can't even do things CYOAs can do
Louis#0144: Reasonable
Louis#0144: Yes
Teemochu#8740: given where "I know it when I see it" comes from, this is a quite masterful pun even if you didn't intend it
Louis#0144: That’s why you ground + use KGs
gwern#1782: you can't rewind?
gwern#1782: humans have their preferences...
kurumuz#5695: It only remembers the last 700 tokens or so
kurumuz#5695: making any long range dependent stuff impossible
gwern#1782: yeah but there's no reason they couldn't log the history and let you rewind to an arbitrary point's 700 tokens
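A toy version of that logging-plus-rewind idea (window size and structure are illustrative):
```python
class Story:
    def __init__(self, window_tokens=700):
        self.turns = []               # full transcript, never discarded
        self.window = window_tokens

    def context(self, tokenize):
        tokens = tokenize("\n".join(self.turns))
        return tokens[-self.window:]  # the model only ever sees the tail

    def rewind(self, turn_index):
        self.turns = self.turns[:turn_index]  # jump back to any earlier point
```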
Louis#0144: I think your method would work super well if you just include a tiny knowledge graph that keeps track of decisions (not even generated text, solely the counter factuals of the decisions you made)
Louis#0144: It would only need to be a few dozen vertices
kurumuz#5695: I think its also much easier to feature engineer
Louis#0144: I think if you spent a few days on that it would be totally kick ass
kurumuz#5695: so you can do quite a lot of stuff with this approach
Louis#0144: Naively might not be fun once the “oh wow ai is so cool” framing fades away
Louis#0144: @gwern did u try implementing this
hGI.unsure#2032: Thanks.
I'm assuming that eventually it's going to come to pytorch. I just wanted to know if it would fit in 16 GB of ram/shared gpu memory. Hopefully it should run on low vram at a bit more than 15 GB (model size) / 11 GB/s (gpu bandwidth) ≈ 1.4 s/token.
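Sanity-checking that estimate with the numbers from the message (both are assumptions, not measurements):
```python
model_gb = 15        # fp16 weights for ~6B parameters plus overhead
bandwidth_gb_s = 11  # effective rate at which weights can be streamed per token
print(model_gb / bandwidth_gb_s)  # ~1.36 s/token if fully bandwidth-bound
```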
Louis#0144: Is there a prototype
bmk#1476: gwern always intends to pun
Louis#0144: Yeah gwern is the best at puns
Louis#0144: Fr
gwern#1782: no, i literally came up with this idea yesterday and the DT preference-learning about 2 hours ago
bmk#1476: ~~also gwern doesn't really implement things, presumably because he detests working with anything that's not haskell~~
Louis#0144: Lmao
kurumuz#5695: lol
kurumuz#5695: I would totally try to implement this now if my team wouldnt kill me for not focusing on the launch
kurumuz#5695: also getting 6B to hf is more ~~painful~~ fun
Louis#0144: NO
Louis#0144: KURU
Louis#0144: FOCUS
bmk#1476: y'all got progress on that?
Louis#0144: It’s almost ready
Louis#0144: Last steps
bmk#1476: I can't wait for 6B on HF
Louis#0144: They just needed a break
gwern#1782: https://tenor.com/view/chappelles-show-dave-chappelle-chappelles-tyrone-tyrone-biggums-gif-4958017
Louis#0144: Me too
gwern#1782: y'all got any more of those distractions
Louis#0144: 6b only has global attn right?
StellaAthena#3530: Yeah
Louis#0144: Ok
Louis#0144: @finetune
Louis#0144: Important for u
kurumuz#5695: we already kinda confirmed that lol
StellaAthena#3530: Who is doing the HF port? Louis and Kuru?
Louis#0144: (For reference I don’t work with kuru I just converted him to a minion of the knowledge graphs)
Louis#0144: Finetune
Louis#0144: Kuru and I just talk about KGs
Louis#0144: lmao
kurumuz#5695: lmao
kurumuz#5695: finetune does all the work, we just talk
Louis#0144: LMAO
Louis#0144: no u do work too
Louis#0144: Don’t undersell yourself
finetune#0907: reassuring, got that part right
EricHallahan#1051: Do you really think Louis would spend his time on that? :3berk:
kurumuz#5695: he is busy building all those vertices
Louis#0144: I’m busy writing grants and papers
Louis#0144: 😦
Teemochu#8740: what's the status on your pr for the memory bug btw
Louis#0144: I have no time to do my own research anymore
Teemochu#8740: to hf proper
finetune#0907: no progress https://github.com/huggingface/transformers/pull/11630
EricHallahan#1051: I don't think HF really cares lol
Teemochu#8740: just be sure the 6B code is descended from the fork so they are incentivized to pull it all in at once if they want a ready-made solution
Daj#7482: Is finetune actually making a PR to HF?
kurumuz#5695: :berk:
finetune#0907: maybe they'll write a new gptjax class
kurumuz#5695: oh god
Daj#7482: I'm pretty sure HF internal is working on an implementation too, no?
kurumuz#5695: idk
kurumuz#5695: we're just doing it for ourselves
kurumuz#5695: ig
Daj#7482: fair
Daj#7482: good luck lol
kurumuz#5695: i think we're pretty close
kurumuz#5695: but we will see
finetune#0907: the jax codebase does a few things a bit differently from what i've seen
Daj#7482: JAX is nice too tho
kurumuz#5695: well most of our inference is GPU
EricHallahan#1051: *I would not be surprised if it is better than HF*
bmk#1476: fork HF transformers and apply all the fixes
Teemochu#8740: I think you already know this but 6B has decoupled encoder/decoder token embeddings, if that's by any chance what's throwing you off @finetune
bmk#1476: call it transformests
bmk#1476: ill use it lol
kurumuz#5695: @finetune do the model in GPT2
kurumuz#5695: so they wont fucking create a new class
kurumuz#5695: LMAO
bmk#1476: while youre at it try putting gptneo into the existing gpt2 class too
EricHallahan#1051: JAX is becoming more attractive by the day to me.
finetune#0907: so far modified the neo class
kurumuz#5695: finetune has that code already
bmk#1476: ah nice
kurumuz#5695: so yeah put that in while at it
kurumuz#5695: destroy the neo class
EricHallahan#1051: Who cares about backwards compatibility?
EricHallahan#1051: The Neo class was pretty much useless anyway.
finetune#0907: might port it over to gpt2 some time, but results are slightly different when running thru that
finetune#0907: yea
kurumuz#5695: kinda useful for deepspeed inference
kurumuz#5695: as neo doesnt work with gpt-2 on there
bmk#1476: cynical half-joking speculation in violation of hanlons razor: HF made a separate neo class because it serves as better marketing to frame neo as totally different rather than just something in the same class as gpt2
kurumuz#5695: that would make sense tbh
finetune#0907: could imagine that actually ye
Daj#7482: more reasonable is it was literally done by an intern lol
Daj#7482: Which we know is true
bmk#1476: i mean i did hedge my statement
kurumuz#5695: ye but creating a new class is more work 🤔
Daj#7482: imagine touching legacy code
Louis#0144: We need GAX. Geese are... uh... xylophone? Xenomorphs? Idk
kurumuz#5695: Geese are fast model runners
Louis#0144: Ye
kurumuz#5695: GFMR
kurumuz#5695: our inference library
Louis#0144: Name the inference library goosefoot
Louis#0144: And the logo is a picture of a goose sprinting
Teemochu#8740: yes but internships are partially luxury-class interviews
Teemochu#8740: the employer gets a lot of information and the candidate gets ramp-up time at a discounted rate to the employer
kurumuz#5695: with the neural net in its mouth
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/851561389159874640/angry-goose.png
kurumuz#5695: i approve this
Louis#0144: Ye
bmk#1476: or maybe this one https://cdn.discordapp.com/attachments/729741769738158194/851561537804959774/89236f955086a0fd36d38e1f9cfcc85a.png
kurumuz#5695: man i also have to do the backend after 6B
kurumuz#5695: lol
kurumuz#5695: ye that is better
bmk#1476: lol i just found this https://www.cbc.ca/news/canada/manitoba/white-canada-goose-spotted-in-winnipeg-park-1.3255129
Daj#7482: god why do geese always look like this
Daj#7482: majestic
Daj#7482: I recently discovered that canada geese are small af
Daj#7482: We have some really hench geese down by the river
Daj#7482: they bully the tiny canada geese
bmk#1476: wait, youve never seen a canada goose irl before?
Daj#7482: Not up close until recently
bmk#1476: ah
Daj#7482: and they're tiny
Daj#7482: compared to the hench european geese that were nearby lol
Daj#7482: dunno what species it was
Sid#2121: someone make a virgin canada goose v chad europe goose meme
Daj#7482: They heckled us for our fries
Daj#7482: https://cdn.download.ams.birds.cornell.edu/api/v1/asset/162799271/1800
Daj#7482: these absolute lads
Daj#7482: I think
Sid#2121: truly chad posture https://cdn.discordapp.com/attachments/729741769738158194/851563089899421716/2Q.png
bmk#1476: you sure thats not just a europe thing
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/851563116947832862/unknown.png
Louis#0144: Yeah Canadian geese are huge |
Louis#0144: They’re not swans
Louis#0144: But they’re huge
Sid#2121: @Daj told you they were just far away
Daj#7482: lmao
Louis#0144: Awww Connor and Sid had a romantic riverside walk
Sid#2121: threesome, actually
Louis#0144: oh
Louis#0144: With the goose???
Louis#0144: Oh god
Sid#2121: yes
Daj#7482: Giant canada goose is a different species
bmk#1476: its a subspecies
Daj#7482: The geese came up to haze us for food lol
Louis#0144: I thought sid was in the uk still
Daj#7482: anyways I thought they'd be bigger
Louis#0144: Or is that jprester
Daj#7482: We're flatmates lol
Louis#0144: LOL
Sid#2121: naw i got out of there before brexit
Louis#0144: Do u even speak German |
bmk#1476: wait, brexit happened already?
Sid#2121: ich lerne
Sid#2121: yeah, end of last year
bmk#1476: i had always just assumed that brexit would never actually happen
Sid#2121: we brexited
bmk#1476: huh
EricHallahan#1051: It was a hard deadline.
Daj#7482: btw Sid you're full of shit
bmk#1476: thats what they said every other time
Daj#7482: Chamomile tea is nice
Sid#2121: connor is old man confirmed
bmk#1476: animeland has best tea
Daj#7482: It smells nice in hot water you're just weird
Daj#7482: Green tea still superior obviously
bmk#1476: i tried some earl grey once and it was horrible
Daj#7482: But this is nice too
Sid#2121: stop insulting my heritage
Sid#2121: earl grey is all we have
Daj#7482: _You_ don't even drink earl grey
Sid#2121: (it's not even ours we stole it) |
bmk#1476: might have just been the brand but the citrus flavor was absolutely suffocating
Sid#2121: I will admit weeb tea is superior
bmk#1476: i felt like i was drinking the pure distilled essence of orange scented shampoo spiked with an indetectable amount of tea
Sid#2121: earl grey shouldn't be *that* citrussy lol
bmk#1476: maybe it was the brand
bmk#1476: "twinings"
Sid#2121: twinings is ok ¯\_(ツ)_/¯
Sid#2121: evidently, you are simply uncivilized 🧐
bmk#1476: the same brand did have something labelled "english breakfast" which was really nice though, but afaict its just pure black tea or something
Sid#2121: lmao at english breakfast tea being a foreign concept to people
bmk#1476: i dont know anything about tea
Sid#2121: the average english person drinks like 5 cups of that a day
Sid#2121: i don't even think i'm exaggerating
bmk#1476: well its good shit
Sid#2121: it is
bmk#1476: no citrus shampoo flavor at all
bmk#1476: i should totally go buy some more at some point
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/851565563519696896/Screenshot_from_2021-06-07_22-57-18.png
kurumuz#5695: TPU VMs are so good
kurumuz#5695: totally a game changer imo |
bmk#1476: also Arizona iced tea is amazing even though it's not really tea anymore with all the stuff they add
Louis#0144: Lipton
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/851566569926033438/Arizona_Green_Tea_with_Ginseng_and_Honey_23.png
bmk#1476: the stuff is dirt cheap too, 1.29 cad for a can or a round 1 usd
gwern#1782: "brexit" is hatespeech. they prefer to say they "yeeted the eu"
pebbles#7130: I am in the UK and I apologise for brexit ;-;
pebbles#7130: I was underage when the initial vote happened
kurumuz#5695: 🤔
kurumuz#5695: what are you apologizing for exactly
pebbles#7130: I don't want to start any politrib
kurumuz#5695: okay
Jonnathan#1234: Isn't Arizona green tea basically just sugar water?
bmk#1476: only the most amazing sugar water ever
bmk#1476: but yes technically all soft drinks are basically sugar water
bmk#1476: but it's still delicious as hell
Jonnathan#1234: Yea it's pretty good. I drank it often as a kid when my dad was buying a ton of it. Pretty sure I got fat off that and cereal.
bmk#1476: arizona far outclasses my next top choices for sugary drinks (Canada dry, brisk, and pepsi/coca cola)
Jonnathan#1234: Frosted flakes is basically crack 😤
gwern#1782: (hm. I thought the definition of 'soft drink' was being carbonated, in which case there are soft drinks which aren't sugar water, iirc, germany is notorious for drinking lots of just plain carbonated water? but WP seems to suggest that it is in fact defined as a sweetened non-alcoholic drink, which merely is usually carbonated)
bmk#1476: pardon my inaccurate wording, Arizona isn't carbonated |
bmk#1476: though I did hear that alcohol-containing Arizona exists somewhere
bmk#1476: ~~now to find that place~~
UnsupervisedLearner#4148: just add everclear
bmk#1476: that works too but I meant like it's an official thing
kinoc#5731: My AI sense is tingling something fierce, like I'll have to download something big or update some code or create an NFT. Anyone around here about to release an update or something momentous in the near future?
bmk#1476: I am about to release the world's first 2.7 parameter neural network
bmk#1476: here is parameter 1: 0.158295
bmk#1476: stay tuned for the other 1.7 parameters
kinoc#5731: The last 0.7 is the cliff hanger
StellaAthena#3530: I thought the point was that the parameters were tuned for me :thonk: :thonk: :thonk: :thonk: :thonk: :thonk:
gwern#1782: I just revolutionized the AID industry and also preference learning ~nyo~run~
kurumuz#5695: well now someone needs to implement it
kurumuz#5695: lol
gwern#1782: details! I expect my annus mirabilis of 2021 to be cited, however :schmid:
kinoc#5731: please enlighten and explicate
bmk#1476: 2021 is also going to be the annus mirabilis of eleuther
bmk#1476: 2020 was just ramp up
gwern#1782: https://sites.google.com/berkeley.edu/decision-transformer https://www.reddit.com/r/reinforcementlearning/comments/nqp9nh/decision_transformer_reinforcement_learning_via/h0xyia4/ https://www.reddit.com/r/GPT3/comments/ntvqw6/cyoa_aid_proposal_collaborative_storytelling_on/
𓅬 gabriel_syme 𓅬#3220: 661 messages 👀
bmk#1476: unrelated but im pretty sure gwern is the only remaining living person to use score to refer to 20 of something |
bmk#1476: in normal usage
gwern#1782: there are dozens of us who use it every fortnight! dozens! sometimes I use it and realize I said it a sennight ago
𓅬 gabriel_syme 𓅬#3220: lol
bmk#1476: hey i still like fortnight as a time term
𓅬 gabriel_syme 𓅬#3220: too old for the game?
𓅬 gabriel_syme 𓅬#3220: it's ok it happens to all of us
bmk#1476: damn those gamers for coöpting it
bmk#1476: also https://cdn.discordapp.com/attachments/729741769738158194/851581166372519987/wcMBeNTP721iAAAAABJRU5ErkJggg.png
bmk#1476: for all the people who complain about french numerals, including me
𓅬 gabriel_syme 𓅬#3220: smh I never considered the four scores could be french, and it's so obvious
chirp#4545: https://twitter.com/mark_riedl/status/1401989845870792706?s=21
chirp#4545: Emoji request ^
Louis#0144: Anyway if anyone implements gwerns system
Louis#0144: It probably makes more sense to cite storium
Louis#0144: Over a discord message
Louis#0144: 🤷♂️
gwern#1782: what's a storium?
Louis#0144: Preference learning for storytelling
Louis#0144: Using collaborative writing annotations
Louis#0144: It was learning to summarize |
Louis#0144: Before OAI did it
gwern#1782: unless they use Decision Transformer, then there's no point in citing them
Louis#0144: But that’s such a small extension
gwern#1782: it's a radical revision with many advantages
𓅬 gabriel_syme 𓅬#3220: so I was thinking, could we not do the same fine tuning with a DALLE model? I've been thinking of few shot learning for my DALLE models and if it will be possible. Example, a new layout type becomes available and the model learns to do it from being fine tuned on a few samples. Can we also teach it preferences?
Louis#0144: I just interpreted it as better scaling
Louis#0144: And also being really good for multitask
gwern#1782: I'm not sure. the GPT part is just predicting image tokens. what would correspond to training on multiple ranked options or even 'rewards'?
𓅬 gabriel_syme 𓅬#3220: if I had a downstream evaluation of the output, could that be the reward?
𓅬 gabriel_syme 𓅬#3220: although I don't know how to answer the first part of the q
gwern#1782: oh. then yeah, that's just regular DT
gwern#1782: prefix the reward of an imge to the VAE tokens
𓅬 gabriel_syme 𓅬#3220: oh woah! that's even nicer / easier
gwern#1782: so instead of being [VAE token #1, #2, ... #n], it predicts '[reward, VAE token #1, #2, ... #n]'
𓅬 gabriel_syme 𓅬#3220: I have not read DT yet sry if this is silly, but do they discuss multimodal uses?
gwern#1782: (the thing about DT for AI dungeon is that you want to predict *completions*, not from scratch samples, so you need to stick the reward 'in the middle' so you can condition appropriately left-to-right)
CRG#8707: CLIP for reward might work :thonk:
𓅬 gabriel_syme 𓅬#3220: yeah, although my dataset is architecture (training my own CLIP though). but in fact my reward will be performance-based. So each layout will have a thermal comfort, daylight, energy performance
gwern#1782: I don't believe so but I could be wrong. but really, this is all DT is. [reward, output, output, output, ...]
Louis#0144: This is what we are doing in #carp |
bmk#1476: lemme get this straight - the core idea of DT and UDRL is you train the model to predict actions conditional on reward, and then just ask it "lol gimme an action that gets 99999 reward", right?
𓅬 gabriel_syme 𓅬#3220: cool thx, reading the paper today
Louis#0144: For story completions
Louis#0144: Ye
bmk#1476: what the fuck tho
Louis#0144: Which is less useful than what we’re doing with CARP
Louis#0144: CARP let’s u constrain
bmk#1476: thats like .. whaT
Louis#0144: Against reviews
bmk#1476: it seems way too good to be true
Louis#0144: It’ll totally work
gwern#1782: it would be way too good to be true if you used a dumber model than GPT 🙂
Louis#0144: I’d bet money gwerns idea will work
Louis#0144: But idk
Louis#0144: I kinda worry about long term dependency stuff like I mentioned above
bmk#1476: if you asked me, i'd design a system where you predict reward from s and a and then search over a to maximize reward
Louis#0144: Maybe it’s circumventable
Louis#0144: This is CARP
Louis#0144: This is literally CARP
Louis#0144: lmao |
gwern#1782: yeah, so literally just encode each of those numbers into a BPE, and prefix all of them to their image's VAE encoding list. then you can control the output. 'give me a room with X% efficiency, Y square feet, Z daylight'
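sketch of the encoding, all the numbers are made up (8192-token VAE vocab, 100 bins per score), not from any real codebase:
```py
import torch

# assumption: each score gets its own 100-token id range past the VAE vocab
VAE_VOCAB, NUM_BINS = 8192, 100

def score_token(score, which):  # score in [0, 1]; which = 0, 1, 2, ...
    return VAE_VOCAB + which * NUM_BINS + int(score * (NUM_BINS - 1))

vae_tokens = torch.randint(0, VAE_VOCAB, (256,))  # stand-in for a real encoding
prefix = torch.tensor([score_token(s, i)          # efficiency, sqft, daylight
                       for i, s in enumerate([0.9, 0.5, 0.7])])
seq = torch.cat([prefix, vae_tokens])  # train the GPT left-to-right on this
# at sampling time, feed the prefix you *want* and decode the rest
```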
bmk#1476: but literally just asking the model for an action with high reward?? that seems.. just.. wrong
gwern#1782: like muzero or some sort of *chump*
gwern#1782: 'get in loser, no more planning, we're going shopping.'
kinoc#5731: Why either/or when you can have both/and (and amp your system with ...)
gwern#1782: I don't think anyone has established if you can plan over a DT tree yet
gwern#1782: it may have the same flaws as regular GPT trees, in degeneration
gwern#1782: this is surely something people are researching right now, though, as the obvious next step
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/851587603772735488/VPxXj9T2NEdUPHIxtQAAAAAElFTkSuQmCC.png
CRG#8707: Does the beam search from trajectory transformer count? https://trajectory-transformer.github.io/ https://cdn.discordapp.com/attachments/729741769738158194/851587806249484318/3e63779c856b30360db88063186dd124.png
𓅬 gabriel_syme 𓅬#3220: exactly my goal! thanks for this, will definitely try it
𓅬 gabriel_syme 𓅬#3220: really nice way to fine tune as well, close to what I was thinking but not exactly lol
kinoc#5731: I have an "intuition" that you can get transformers and tree search to work together if you pick the right chunking level between the two.
kinoc#5731: just like adding the "reward-evaluation" to the whole output is at a different "level" than individual tokens.
gwern#1782: ah, I forgot about that. I didn't read Trajectory Transformer closely
UnsupervisedLearner#4148: Any recommendations on software for producing graphics like this? https://cdn.discordapp.com/attachments/729741769738158194/851589616280403968/fig-1-2x.jpg
Louis#0144: Ok you’re gonna laugh
Louis#0144: I’m willing to bet
Louis#0144: That that graph was made in power point
Louis#0144: It looks like it was |
Louis#0144: 100%
EricHallahan#1051: I bet it is PowerPoint.
Louis#0144: I don’t know what kind of psychopath uses PowerPoint for this
Louis#0144: Draw.io is really good
𓅬 gabriel_syme 𓅬#3220: hey, careful how you speak about the MS Office collection
𓅬 gabriel_syme 𓅬#3220: it's our last line of defense vs the robots
Louis#0144: We need that IQ curve. From left to right: PowerPoint, tikz, draw.io
Louis#0144: @bmk free meme material
𓅬 gabriel_syme 𓅬#3220: I would 100% use InDesign or in my case Affinity Publisher for that
𓅬 gabriel_syme 𓅬#3220: I actually did that for my paper when I needed a flow chart like that (kind of)
orthocenter#5689: Hey there. Has anyone considered using naqt questions (https://www.naqt.com/samples/ict.pdf) for training data
UnsupervisedLearner#4148: Thank you guys
sweg#8920: so my playing chess against gpt thing is working out
sweg#8920: and describing the game beforehand affects the game
sweg#8920: i.e. if i say "the following is a game between two novices. Observe how one player blunders their queen immediately"
sweg#8920: and the gpt3 controlled player blundered its queen
sweg#8920: gpt3 is a general intelligence confirmed?
gwern#1782: huh. you should post those transcripts
sweg#8920: idk how i would do that cause i have it setup in a way thats visual
sweg#8920: i might make a youtube video out of it lol |
sweg#8920: if i do ill share that
gwern#1782: ...visual? it's GPT-3. it generates text
bmk#1476: my guess is he wired it up to a chess interface
Louis#0144: https://twitter.com/yoshitomo_cs/status/1402053492202635268?s=21
Louis#0144: Looks promising
sweg#8920: yep
sweg#8920: its not that good tho tbh
sweg#8920: i tried "The following is a demonstration of the bong cloud opening, in which the king is moved on the second move"
sweg#8920: and it didnt do the bong cloud
sweg#8920: lel
bmk#1476: the bongcloud isnt really popular enough for it to show up in the data id hazard
gwern#1782: not as if the training corpus would prioritize chess. although dumping chess PGNs in FEN format would be an amusing addition to The Pile
bmk#1476: it only really blew up recently, after gpt3s data was already finalized
bmk#1476: like it was a thing before but not like super popular
bmk#1476: also it's one word
EricHallahan#1051: Multilingual tokenizer wen
kindiana#1016: tokenizer that doesn't waste context on tokenization wen
bmk#1476: if you can get someone to scrape those files into a nice tar.gz, i can fine tune 2.7B on it
gwern#1782: mm. shawwn scraped a lot of PGNs, but they weren't in FEN format, which we blamed for the poor chess performance - no information about board state
bmk#1476: lmk if you get someone to figure that out |
gwern#1782: could probably just ask on a computer chess forum. I bet it already exists
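untested sketch with python-chess, in case anyone wants to try (placeholder path):
```py
import chess.pgn

# emit the FEN board state next to every move, so the model sees positions
with open("games.pgn") as f:
    while (game := chess.pgn.read_game(f)) is not None:
        board = game.board()
        for move in game.mainline_moves():
            print(board.fen(), board.san(move))  # state, then move in SAN
            board.push(move)
```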
bencooper#7768: What route would you guys go for self hosting some sort of API for GPT-Neo (using it in a web app)? Would you use GCP or a Docker instance on AWS? Hugging Face's Inference API seems too expensive.
kurumuz#5695: umm
kurumuz#5695: depends on your scale. how many users will you serve?
kurumuz#5695: what are you gonna do with the api etc
bencooper#7768: Would be used for a web app, hopefully at some point with 100s of daily users
bencooper#7768: So not just for experimenting
kurumuz#5695: you need to calculate your costs.
kurumuz#5695: you cant be naive with this kind of thing, it might mean you going bankrupt.
kurumuz#5695: scaling gpus isnt easy either, so you might want to consider inferkit or huggingface
bencooper#7768: I really appreciate it!
bencooper#7768: Seems like more of an upfront cost just to get going with hosting GPUs etc, but eventually would maybe be worth it with enough users. So think I'll start with hugging face, and eventually if necessary self host. Also seems like a lot more monitoring and ops time with hosting GPUs
Exocamp#8255: am training gpt-mlp-jax from lucidrains for fun, to see how it works/if it works/if it does good at working
𓅬 gabriel_syme 𓅬#3220: nice! what kind of data?
Exocamp#8255: imported it to colab, made no changes to the train.py code or data other than adding a (very bad) matplotlib graph for loss.
Here's up to step 250 https://cdn.discordapp.com/attachments/729741769738158194/851655320006426624/f39fa943e7084669567b730ae9fcae43.png
Exocamp#8255: Compressed enwiki8
Exocamp#8255: a section of Wikipedia
Exocamp#8255: Honestly after this I might try doing enwiki9, which is an order of magnitude bigger than enwiki8
Exocamp#8255: by the looks of this thing, the training curve has already rather flattened out. 400/what was supposed to be 100,000 steps https://cdn.discordapp.com/attachments/729741769738158194/851656094287790110/495fb006574986e4a6439ff469d73edd.png
Exocamp#8255: Ah I found the problem?
Exocamp#8255: num_tokens was 256 lmao
Exocamp#8255: github readme examples had 20000
Exocamp#8255: Let's see if this fixes things
Exocamp#8255: also set attn_dim to 64
gwern#1782: ~curriculum / progressive training~
Exocamp#8255: ?
Exocamp#8255: sounds like something i'm very interested in
Exocamp#8255: Okay I forgot what mistake I exactly made in setting params but it appears to be a big one. https://cdn.discordapp.com/attachments/729741769738158194/851660488010235924/6a3c4c3dca731944d794721eebb8c00d.png
Exocamp#8255: oh nvm, just training magic https://cdn.discordapp.com/attachments/729741769738158194/851660627915309056/3f83d256c0999d184d3bef6325a4b620.png
kurumuz#5695: man its only 100 steps haha
rs#2093: hey im new here was wondering what type of interpretability/fairness stuff eleuther does (obviously i see the reading group in the sidebar, but anything else?)
Exocamp#8255: I am impatient ADHD-addled man who has no true idea what he's doing, please understand
kurumuz#5695: oh so you are me
Exocamp#8255: yes
Exocamp#8255: hello, clone
kurumuz#5695: lol
Exocamp#8255: i have looked into this.
Exocamp#8255: ah.
Exocamp#8255: this is |
Exocamp#8255: *very* useful thank you
Louis#0144: Welcome. Enjoy the geese
Louis#0144: You’ll feel right at home coming from Waterloo
rs#2093: how many other waterloo ppl are here?
Louis#0144: Like four
Louis#0144: Maybe five
Louis#0144: Lots of Georgia tech tho
Louis#0144: About 20 people from GT
rs#2093: : o
Louis#0144: @Sahl @kiwi
Louis#0144: and one other
bmk#1476: hey yeah so we've been looking to spin up some alignment-relevant interpretability projects, atm we're still trying to work out how to organize our stuff to hand out tasks; for now you can look around in #alignment-general, #prosaic-alignment, #agent-foundations to see what kind of thing we're interested in
bmk#1476: #deleted-channel is another alignment relevant project you might be interested in
Louis#0144: Oh yeah
Louis#0144: @rs eegi might srsly interest you
Sahl#0630: @rs waterloo gang
Sahl#0630: welcome
rs#2093: tyty everyone
45#2247: waterloo trigger for french pple
𓅬 gabriel_syme 𓅬#3220: any advice on how to properly preprocess this kind of text information? |
https://codes.iccsafe.org/content/IRC2021P1/preface#IRC2021P1_FmPREFACE_FMSecDevelopment
𓅬 gabriel_syme 𓅬#3220: I'm a bit at a loss, although I do vaguely remember tabular data extraction in someway
finetune#0907: very high loss in first step. if that's 2.7b, maybe check if your num_heads is 20 in the config
gammascalpset#9792: stackoverflow is down
gammascalpset#9792: the attack has started
gammascalpset#9792: hug your loved ones
Daj#7482: 🖇️
rom1504#5008: All your papers will soon be properly organized
rom1504#5008: (that was the prompt for the end of the world "please help me organize my papers, with clips maybe")
𓅬 gabriel_syme 𓅬#3220: are 1200 pages of text decent for fine tuning dataset?
CKtalon#7792: that's about 500k words. wouldn't say it's a lot
𓅬 gabriel_syme 𓅬#3220: thanks! more it is 🙂
Daj#7482: https://twitter.com/elicitorg/status/1401983419479781379?s=19
Haven't looked at it yet but seemed potentially of interest to people here
gwern#1782: > https://www.freepatentsonline.com/y2021/0158162.html is it just me or has google been patenting an awful lot of DL/DRL stuff lately?
quinn#9100: Someone once told me that google's patent strategy is entirely to defend against patent trolls and that there's a low risk of them enforcing against individuals or small companies, going one further and saying that it's in the public interest for google to hold patents because it keeps everyone resilient against patent trolls. Anyone have a good sense of if this is true?
n.kh.l#5814: when im finetuning the gpt neo model with the colab, it shows me this message `Skipping training since max_steps has already saved.` repeated a lot of times. i looked at the faq and github issues and i cant seem to find anything. can i just stop it or should i wait for it to complete?
ari#9020: I've seen that message come up earlier, Stella tried to help someone who got it earlier at https://discord.com/channels/729741769192767510/729741769738158194/847298202268467230 but I'm not sure whether that actually fixed things; maybe @swcrazyfan knows
n.kh.l#5814: ok yeah i suspected it had something to do with the max steps... because last time i finetuned it did this and i just stopped it and when i tried generated it didnt look like finetuned output
n.kh.l#5814: so should i just comment out the whole while loop |
n.kh.l#5814: ```py
# Else, just train
while current_step < params["train_steps"]:
# Else, don't stop and restart
estimator.train(input_fn=partial(input_fn, global_step=current_step, eval=False), max_steps=params["train_steps"])
```
ari#9020: I have no idea, my knowledge of gpt-neo code consists entirely of what I've picked up from lurking on this server
DanHendrycks#8913: Wu Dao API: https://api.wudaoai.cn/Api/1373532973227487232
bmk#1476: if anyone needs help writing prompts in Chinese I can always help
BeatriceBernardo#5504: We might be resilient against patent trolls, but we will become more vulnerable against google.
gwern#1782: it's possibly a coincidence that these patents are coming as DM's attempt to get more legal autonomy has been quashed
n.kh.l#5814: hi. i had a similar problem (skipping training since max_steps has already saved). i commented the while loop and it just stops very quickly (maybe its doing 1 step but im not sure)
gammascalpset#9792: Don't know US patent law well, is there no hope once a patent is granted, or could you still fight them in court by trying to argue that the patent is too generic if they try to enforce it?
sheggle#6841: Anyone got a TL;DR of the user agreement when downloading WuDaoCorpora2.0?
StellaAthena#3530: I have a fork that may fix the problem, try out the code at www.github.com/stellaathena/gpt-neo
If that doesn’t work for you, feel free to DM me and I’ll continue to work on it.
**Edit:** it does not in fact do so, but at least I now know exactly what’s going wrong and can fix it later today
Dohn Joe#2433: Does anyone here have experience with pointer networks, or hierarchical pointer networks? |
I’m looking to get a sense of how many indices they can juggle.
gdawg16#0493: Gpt-neox?
Sid#2121: yes
gdawg16#0493: Thank you
𓅬 gabriel_syme 𓅬#3220: I call bullshit.
inox#5400: I provide this service for people's wallets against regular bridge trolls
bmk#1476: >bridge trolls
>troll bridge
sighs, pulls out DT
EstebanSir#2189: YES
EstebanSir#2189: WOOO
Kia#2550: Congrats EleutherAI 🥳
Kia#2550: Really really surprising for the realeased
GrimSqueaker#8837: I have a partial list of the huge lists of datasets (with annotations/categories/domain/data type) I gathered in my old job as data Czar:
https://docs.google.com/spreadsheets/d/1Nq8VAoZZo1yABAi4E9zR3Z6gcE2GW1s6c2ECjuyUAyc/edit#gid=0
https://docs.google.com/spreadsheets/d/1knGJBU_vZtkfhkvsYhSIKp0C3RM4G00RLT4eE-Yib0U/edit#gid=0
gammascalpset#9792: thing is even if it was true, all it takes a change of leadership to someone who realizes they can/want to use the patents disney-style
gammascalpset#9792: by disney-style I don't mean anything specific, just evil |
chris_myzel#9645: If I'd go to train on a language different than english, I'd be better off fine-tuning the released models rather than starting from scratch, would you agree?
Daj#7482: Unless you have the compute to train from scratch (which is a lot), yes
chris_myzel#9645: Ok thx - just listened to your podcast with Jim - looking forward for part 2
Daj#7482: Thanks, glad you enjoyed :)
swcrazyfan#2478: To be honest, I've simply stopped it during the loop, and I've gotten okay output. I'm not sure if it's as good as it'd be if it was able to actually complete correctly.
Fando#5805: Hello, I would like to use the gpt-neo model for text generation based on certain keywords. Does anyone have experience with that? Thank you a lot for your help 🙂
pebbles#7130: !faq [bot not working??]
EricHallahan#1051: It only works for privileged members.
EricHallahan#1051: !faq
Carl-bot#1536:
EricHallahan#1051: See?
pebbles#7130: ah ok, that makes sense
EricHallahan#1051: If you haven't already, I suggest you read the FAQ. `:)`
pebbles#7130: maybe it'd make sense for anyone to be able to use that one command
pebbles#7130: yeah, that's exactly what I was trying to say `:)`
Fando#5805: okey, thank you a lot 🙂
chirp#4545: Is there a good way to build a data pipeline that makes it easy to access the artifacts from Colab?
Louis#0144: !faq
Louis#0144: Damn
Louis#0144: Lmao |
alexyz#3459: Would a rebooting of the Pile project be out of the question?
gwern#1782: what would be the point? compute-constrained, not data
alexyz#3459: A larger, multilingual Pile
Daj#7482: It's out of the question in the sense that "the original authors are burnt out on that kind of work and have other projects they're doing, and as gwern said, we're not data constrained"
alexyz#3459: ah, 👍
Sid#2121: if you wanted to head it up, though, no one's gonna stop you
AI_WAIFU#2844: Yeah we've had this question asked multiple times, so if all those people want to get together and pickup the torch. Be our guest.
n.kh.l#5814: oh yeah i tried it with the hackernews and it works pretty well
n.kh.l#5814: im not sure how im supposed to format my data though
n.kh.l#5814: like 1 entry per file
n.kh.l#5814: or seperate by \n
Louis#0144: If you wanna head up visual grounding
Louis#0144: That’s also an option
Louis#0144: LMAO
n.kh.l#5814: im trying to tokenize my dataset for gpt neo... its 6M lines in a zstd compressed jsonl file and when i tokenize with colab, its super slow compared to tokenizing the hackernews dataset
n.kh.l#5814: is there a special way i can format my data or something to make the tokenization faster?
alexyz#3459: would be a fun idea
Leo Sanders#1157: Hey 👋 I just reached out to Connor Leahy and Stella Rose on Twitter and they redirected me to the Discord.
We are prototyping on GPT2-L and would like to move to your awesome GPT-NEO 1.3B!
I have a question I cannot find the answer to: approximately how long does inference take on a high-end GPU like an Nvidia T4 for a small output of 60+ tokens? Do you have any examples of inference speed? I'm looking for ms/token.
Whatever you would have will help a huge deal!! 😊 thank you so much!
Daj#7482: Hello! I think @kurumuz had benchmarked some numbers for that
Leo Sanders#1157: Hey buddy!
kurumuz#5695: I have benchmarks for 2.7B.
AI_WAIFU#2844: I'm sure we can extrapolate
Leo Sanders#1157: I’m interested in whatever you have
kurumuz#5695: It also depends on if you're going to do batching
kurumuz#5695: okay, sure.
kurumuz#5695: ```
seq_len max_len runtime
128 168 1.2413259412000002s
256 296 1.3484386238999833s
384 424 1.5182151628999805s
512 552 1.6499565551000046s
640 680 1.7703169692000074s
768 808 1.892524761200002s
896 936 2.0653174241999865s
1024 1064 2.19975038069997s
1152 1192 2.3780867653000426s
1280 1320 2.53249043699999s |
1408 1448 2.6793070617000128s
1536 1576 2.856790712399993s
1664 1704 3.0497268097999837s
1792 1832 3.2173556434000035s
1920 1960 3.4154131358000086s
```
T4 gpt-neo 2.7b fp16
kurumuz#5695: you might want to use deepspeed inference, its faster.
EricHallahan#1051: 1.3B should perform nearly identically to GPT-2 XL when it comes to throughput.
kurumuz#5695: Yea, should be pretty similar, actually I can compare them.
kurumuz#5695: i will benchmark between GPT-2 L, XL and GPT-Neo
Leo Sanders#1157: Seq_len is the count of input tokens. And max_len the count of output tokens?
EricHallahan#1051: ~~Yep~~ I read the question wrong lol
kurumuz#5695: max_len-seq_len
kurumuz#5695: is output tokens
kurumuz#5695: it always generates 40 tokens.
StellaAthena#3530: 40 tokens in 2-3s is pretty good
Leo Sanders#1157: That’s what I will do: 45 or 65 tokens output and up to 750 token inputs
kurumuz#5695: with T4 right
Leo Sanders#1157: On GPT2 T4 (g4dn EC2) I have this |
kurumuz#5695: okay, let's estimate some stuff
kurumuz#5695: with GPT Neo 2.7b, for 750 token input and 45 token output it should be around 1.8 seconds
kurumuz#5695: for 65 token output it should be around 2.6
kurumuz#5695: If you don't hit a VRAM bandwidth bottleneck, GPT-Neo 1.3B should be half of those pretty much
kurumuz#5695: You probably will hit a bandwidth bottleneck though
kurumuz#5695: If you want to get faster, use deepspeed inference
kurumuz#5695: it's pretty simple to use
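roughly this, kwargs from memory so check their docs:
```py
import torch
import deepspeed
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")
# swap in deepspeed's fused inference kernels
model = deepspeed.init_inference(model, mp_size=1, dtype=torch.half,
                                 replace_method="auto")
```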
chilli#5665: why is deepspeed inference faster?
kurumuz#5695: they have optimized cuda kernels
Leo Sanders#1157: https://cdn.discordapp.com/attachments/729741769738158194/852282767172960346/image0.jpg
chilli#5665: and how much faster is it?
kurumuz#5695: I have data, gimme a sec
Leo Sanders#1157: These are GPT2 M and L on T4 Pytorch
Leo Sanders#1157: Input token count for each row
Leo Sanders#1157: Output token count each column
Leo Sanders#1157: All in ms
kurumuz#5695: ```
2.7b fp16:
-------------------------------------------
dsi, bs=5, 0.37s \ 1000 tok context |
hf, bs=5, 0.65s \ 1000 tok
dsi, bs=4, 1.01s \ 2000 tok
hf, bs=4, 1.29s \ 2000 tok
6b fp16:
-------------------------------------------
dsi, 1.10s \ 1000 tok
dsi, 1.48s \ 2000 tok
``` @chilli
kurumuz#5695: so, its a lot faster.
chilli#5665: I can't really read this lol
kurumuz#5695: yea its not readable
Leo Sanders#1157: Yeah actually lot faster than GPT2
chilli#5665: actually, I guess I can?
kurumuz#5695: lol
chilli#5665: 0.37 for DS vs 0.65 for HF?
kurumuz#5695: ye
kurumuz#5695: 5 batches
kurumuz#5695: 1000 token context
kurumuz#5695: this is a V100
kurumuz#5695: mind you |
Leo Sanders#1157: That looks super fast
Leo Sanders#1157: Amazing! I think I’m in business with GPT-NEO 🤣
kurumuz#5695: yea i think you can optimize the neo a lot with deepspeed inference
kurumuz#5695: you should totally go with it
Leo Sanders#1157: Also quality wise it looks so much better than GPT2
kurumuz#5695: though I don't know your use case.
Leo Sanders#1157: I will use for storytelling
cfoster0#4356: 👀
Leo Sanders#1157: we’re building a secret world full of dragons, creatures and strange encounters in the dark corners of an all-mighty AI 🧠
EricHallahan#1051: GPT-2 kind of sucks lol
Leo Sanders#1157: 🤣🤣
kurumuz#5695: How interesting we're doing the same thing.
kurumuz#5695: haha
kurumuz#5695: well, similar.
Leo Sanders#1157: Seriously?
kurumuz#5695: yeah
Leo Sanders#1157: I’m a big fan of AI Dungeon
kurumuz#5695: we're novelai
kurumuz#5695: if that means anything
Leo Sanders#1157: But so many things I dont like about it |
Louis#0144: Welcome to the club
Louis#0144: I’m a storytelling researcher
Leo Sanders#1157: Yeah I heard of NovelAI
kurumuz#5695: Are you a researcher?
Leo Sanders#1157: Although I never tried it. Did you guys have an app yet?
Louis#0144: soon
kurumuz#5695: Our open beta is soon.
Leo Sanders#1157: Got it
Leo Sanders#1157: Nop I’m co founder of an app company
Leo Sanders#1157: Also tech engineer, corporate banker and many other job I had depending on country I’ve been 🤣
Leo Sanders#1157: But most seriously I love TTRPG, RPG and would really really enjoy have a clear storyline in an AI Dungeon type of app with DnD style, gamebook universe
Louis#0144: https://moonshotquest.com/ this?
kurumuz#5695: yea, ofc
Leo Sanders#1157: Yes
Leo Sanders#1157: Do you work on NovelAI Honk also?
Louis#0144: No
Louis#0144: I’m a researcher at Georgia tech
Louis#0144: Doing work into storytelling
Leo Sanders#1157: Sounds amazing!
Louis#0144: I advise kuru but I don’t work for NAI |
Leo Sanders#1157: I think the future of storytelling has to go through AI
Louis#0144: Eh
Louis#0144: Too broad of a statement
Louis#0144: Doesnt rly mean much
Leo Sanders#1157: True
Daj#7482: For context: Louis (Honk) thinks _symbolic methods_ have value
Daj#7482: So disregard all opinions
Daj#7482: (jk ofc :berk:)
Daj#7482: (Or am I? :morelayers: )
AI_WAIFU#2844: The only symbols I care about are bfloats
Leo Sanders#1157: 🤣
Daj#7482: The future is AI everything else is a distraction
kurumuz#5695: the future is AI waifus
AI_WAIFU#2844: :ultrazucc:
kurumuz#5695: That live inside our brain, hopefully
Daj#7482: They have pills for that
Leo Sanders#1157: That would be a nightmare for me but surely a dream for some
Daj#7482: Curing that, that is
kurumuz#5695: lol
kurumuz#5695: Why would we cure it |
kurumuz#5695: :smug:
Daj#7482: This is for your own good
Leo Sanders#1157: I need to read one of your papers Honk Honk
kurumuz#5695: I don't wanna take my pills!
Louis#0144: They’re all on my site
Louis#0144: https://www.louiscastricato.com/papers
Leo Sanders#1157: The only goose I knew until now is the Mighty Vancouver Goose. I need to cure my fear of it 🤣 https://cdn.discordapp.com/attachments/729741769738158194/852287565004406784/image0.webp
Louis#0144: Fear us
Louis#0144: :ultragoose:
kurumuz#5695: :gooseknife:
Leo Sanders#1157: I will check this out thanks so much for sharing
Leo Sanders#1157: When you see the goose tongue, and hear the hissing - you know you’re in trouble!
Louis#0144: The tongue is just to distract you from the revolver they keep under their wing
Leo Sanders#1157: 🤣
Leo Sanders#1157: Nice meeting you all! I’m on twitter: http://twitter.com/LeoLovesAI
Leo Sanders#1157: DM me I will follow you, Im not sure of your twitter @
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/852291145643327548/whereisthebread.png
Leo Sanders#1157: https://cdn.discordapp.com/attachments/729741769738158194/852291855444869170/image0.png
matt222222#4805: Anyone know how to use a GPU instead of CPU when using/fine-tuning with HappyTransformer?
matt222222#4805: I'm getting a note 'happytransformer.happy_transformer - Using model: cpu |
matt222222#4805: how do I get it to use my GPU?
bmk#1476: what the heck is happytransformer
matt222222#4805: it's a wrapper on top of the transformers library for using models like neo
chilli#5665: isn't this that meme award library
chilli#5665: lol
bmk#1476: we cant provide help with it cause we have nothing to do with it
bmk#1476: go find whoever made it
matt222222#4805: will do, thanks
bmk#1476: why not just use transformers directly tho
bmk#1476: youll have better support
matt222222#4805: ease of learning, need to get something up and running quickly
bmk#1476: transformers is about as easy to use as it gets
matt222222#4805: any good noteboks or tutorials for training a neo-2.7B?
bmk#1476: i dont think it's possible to wrap transformers and make it simpler
tylerlastovich#3263: That depends on the intended audience, no? You could make it much simpler for the layman by using smaller, less ml jargony words. Or build in prompts, actions, etc (something I worked on last year).
bmk#1476: hf isn't really jargonny
EricHallahan#1051: HF is literally like six lines of code to inference a model.
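e.g. (model name just as an example):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
ids = tok("EleutherAI is", return_tensors="pt").input_ids
print(tok.decode(model.generate(ids, do_sample=True, max_length=40)[0]))
```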
kurumuz#5695: yea lol
tylerlastovich#3263: Exactly. 4 lines of pure excess, waiting to be abstracted.
kurumuz#5695: then for complicated stuff |
kurumuz#5695: you need to umm
bmk#1476: overabstracting is bad interface design
chilli#5665: I've come up with the easiest API around:
```
from easy_inference import i
i() # only 3 characters of code!
```
kurumuz#5695: "No need to learn math, its only 3 characters!"
tylerlastovich#3263: You need to know what inference means still though. I suggest easiest_ai as the package name.
I agree that over-abstraction is bad, but that has not stopped no-code from becoming a thing. Business users will still like to make 'flows' and processes with tools like HF once they realize they exist.
Louis#0144: I’m reading the happy transformers docs
Louis#0144: Dude
Louis#0144: This is weird
Louis#0144: It’s not any shorter than HF
Louis#0144: And it’s impossible to know what it’s actually doing
Louis#0144: Literally just use HF @matt222222
Louis#0144: You aren’t gaining anything
Louis#0144: It’s easier to understand and better |
bmk#1476: i can do you one better
```
import ai
```
gwern#1782: unfortunately, this tutorial assumes that peace is an option
Leo Sanders#1157: I found charging it while screaming worked every time. But I prefer to keep my distance so everyone can enjoy peaceful coexistence. 🤣
ersatz#0001: why are people posting about geese in this server
ersatz#0001: some guy is even bringing that to the novelai server
ersatz#0001: I don't get it
kurumuz#5695: :gooseknife:
n.kh.l#5814: im still working on finetuning the gpt neo models so i cant test it yet but just as an estimate, would there be enough data with something like 50 songs from an artist to be able to decently generate music like them?
mkualquiera#3484: Long story, just embrace the geese
n.kh.l#5814: why are people talking about AI in this server?
Louis#0144: please stop asking for tech support every other day
Louis#0144: we aren't your personal engineers
gwern#1782: "Why are you all in stupid goose-suit avatars?" "why are *you* in a stupid man-suit avatar?" 👯♂️
n.kh.l#5814: sure my bad i was just wondering but its fine
aze#1010: anyone here work with jax often? what is the to-go troubleshooting if jax cant detect my cloud vm TPU? (tensorflow works flawlessly)
Teemochu#8740: Is this perfectly linear in # tokens generated? |
kurumuz#5695: seems to be pretty linear, yea
kurumuz#5695: why?
Teemochu#8740: haven't looked into the way inference works much, so it doesn't do any kind of batching/caching or whatever, that's good to know
swcrazyfan#2478: Did you get the fix working?
StellaAthena#3530: I believe what’s happening is that when you set the number of steps to fine-tune the model, the model’s step counter is incremented by the same amount. So it thinks it’s finished training, even though it’s not. Haven’t had a chance to look at how to fix that yet.
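If that's right, the fix is presumably just offsetting `max_steps` by the counter restored from the checkpoint, something like this (untested guess; `estimator`/`input_fn`/`params` as in the snippet quoted above):
```py
# untested: advance max_steps past the restored global_step counter
current_step = int(estimator.get_variable_value("global_step"))
estimator.train(
    input_fn=partial(input_fn, global_step=current_step, eval=False),
    max_steps=current_step + params["train_steps"],
)
```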
jekbradbury#2280: jax uses a different cloud tpu setup called “cloud tpu vms”, look for documentation on that
aze#1010: i got it working, had to send a post request on port 8750 and choose a tpu driver version
gwern#1782: a classic bug with stylegan too, btw!
bmk#1476: :tribalism: https://cdn.discordapp.com/attachments/729741769738158194/852385836679036928/unknown.png
bmk#1476: eleuther stronk
Kia#2550: Lovely 😄
Louis#0144: Eric is who
Louis#0144: Micpie?
StellaAthena#3530: Eric is @HypnoPump17
bmk#1476: eleuther has the best publication page of any discord server
bmk#1476: and man we're pumping out research like crazy
guac#4716: dang that's not a neural radiance field paper :sadge:
bmk#1476: yeah ikr the naming is confusing
bmk#1476: even the same capitalization
Aran Komatsuzaki#5714: stella managed to be a co-author of every paper except for the single-author paper by me lol |
guac#4716: fooled me :/ lol will skim though but probably out of my wheelhouse
Louis#0144: @bmk once Stella gets back from vacation we’re gonna finally finish the speedrun paper
Louis#0144: We just need to finish writing
Louis#0144: Then CARP is going smoothly too
Louis#0144: I don’t see issues there
bmk#1476: is there anything that needs help with for the speedrun paper
bmk#1476: I can help wire infra up
bmk#1476: also after this we're writing up a negative result thing for multimodal grounding right
Louis#0144: Yes
Louis#0144: Absolutely
Louis#0144: Infra for carp would be useful actually
Louis#0144: If you wanna help on that
bmk#1476: what do you need
Louis#0144: I’ll talk tmrw
Louis#0144: I’m too tired rn
bmk#1476: k
StellaAthena#3530: There’s several things soon-to-be-out that I’m not on, but yeah that’s kinda funny for the current moment.
StellaAthena#3530: It’s about efficiently converting between intrinsic and extrinsic coordinates when working with protein chains
guac#4716: ah thanks for clearing that up haha 2-3 OOM speedup is very nice good work ya'll 👍
gdawg16#0493: Congrats to everyone and especially myself for this achievement |
Louis#0144: What
gdawg16#0493: Honk
Louis#0144: You dare speak the language of the divine!
Louis#0144: Heather
Louis#0144: Heathen
Manny96#3437: Open Source and efficient business models aren't mutually exclusive -- that's the current zeitgeist. To the contrary -- efficient business models are FOSS.
Not stewarding FOSS is grossly unethical -- that's the current dissonance at OpenAI; properly stewarding FOSS would be a differential advantage. Technological efficiency doesn't imply the best in ethics.
bmk#1476: are you trying to suggest we do something about it?
Manny96#3437: Essentially, perform commercialisation of FOSS AI to conscientious enterprises and/or consumers.
bmk#1476: we don't plan on doing commercialization
Manny96#3437: The inverse of FOSS only internally value maximises; doesn't grow the pie at large.
Manny96#3437: commercialisation doesn't have to be for-profit
Manny96#3437: It just means that there is a go-to-market strategy
StellaAthena#3530: This is false, and even if it wasn't false we wouldn't care
StellaAthena#3530: We do not have any ambitions ot bring any kind of product to any kind of market
Manny96#3437: Yeah, I see
Manny96#3437: There doesn't have to be a profit; but, creating enterprise adoption is very important
AI_WAIFU#2844: And that's happening.
AI_WAIFU#2844: But we personally have no plans of doing that. |
AI_WAIFU#2844: Since it would create perverse incentives
Manny96#3437: There are commercial products that aren't for-profit
Manny96#3437: Yeah, I see
Manny96#3437: Zero marginal cost
Manny96#3437: No cost function for reproduction of work
bmk#1476: well, others can create enterprise adoption for us
Manny96#3437: Yep
Manny96#3437: Although, code-base contributors that work on enterprise adoption aren't excluded from contributing code (under GPL license)
Manny96#3437: But, it's good to have contributors that not have close ties with the commercial entities (perverse incentives, indeed)
Manny96#3437: don't have*
Manny96#3437: Business models aren't mutually exclusive from FOSS
StellaAthena#3530: If you want enterprise adoption, by all means go do it. None of us will stop you. Heck, I'll applaud you. I just don't care about companies profiting off of my work and won't help them do it.
What exactly is your goal here? To convince us to commercialize?
bmk#1476: we prefer not having a business model
Manny96#3437: No, please stick to your principles - it's another edge to not focus on commercialisation
StellaAthena#3530: Okay, so what's your endgoal here? It seems like you're lecturing us on why we should commercialize
Manny96#3437: Perverse incentives like you said
Manny96#3437: I didn't mean for it to seem that way, my apologies; the project would fail if it seemed that way
Manny96#3437: You need contributors that focus strictly on research |
Manny96#3437: Conflict of interest, indeed
bmk#1476: that's 100% of our contributors rn
Manny96#3437: I see
bmk#1476: I think it would alleviate a lot of confusion if you described what you view Eleuther as right now from your perspective
Manny96#3437: Yep
Manny96#3437: The organisation EleutherAI strictly focuses on FOSS AI algorithms, research
Manny96#3437: For the indefinite future!
Manny96#3437: Every algorithm I've developed in my life--has GPL licensed
Manny96#3437: I refuse to work on anything that isn't something like the GPL license
Manny96#3437: Rather, would go without a job than to work on anything that isn't FOSS
Louis#0144: Stallman has a discord alt
Louis#0144: lol
Sahl#0630: EleutherAI, or as I like to call it, Eleuther + AI,
Louis#0144: Stallman, or as I like to call it, Linus + leech
Louis#0144: GPL is a meme
Louis#0144: Tbh
Louis#0144: Copyleft is great
Louis#0144: GPL and FOSS is weird
Louis#0144: Mostly because Stallman is a leech tho
Louis#0144: A leech who wanted to legalize child abuse stuff |
Louis#0144: GNU kinda sucks anyway
Louis#0144: OpenBSD>GNU
Louis#0144: FreeBSD too
Manny96#3437: To the massive disappointment of many AI researchers, “OpenAI” has closed-sourced their best-performing natural language processing algorithm “GPT-3”; offering exclusive rights only to the Microsoft Azure cloud platform. The company “OpenAI” is no longer serving open AI.
Arguably, this interferes with anti-trust laws, as the open source community trusted the company to develop open research; but, in exact contradiction to the founding charter ethos, the company is closed-sourcing key components. There is a silver lining: “GPT-2” source is still available, and the "EleutherAI" project tries to be an open source alternative to the "OpenAI" organisation; there is mention that “GPT-3” is just a linear scale-up of parameters (computational complexity) from “GPT-2”. There is a clause in the “GPL” open source licence: that is, you can't copyright a copyleft IP.
Open Source; reproducibility, transparency, and freedom of productive work.
Louis#0144: Who are you talking to
Louis#0144: lol
Sahl#0630: what’s wrong with those
Sahl#0630: I’m informed on neither
Louis#0144: I don’t like the cults surrounding them
Louis#0144: The ideas are solid
Louis#0144: The communities are weird
𓅬 gabriel_syme 𓅬#3220: I see what you did there
Louis#0144: I’m still confused what your point is @Manny96
Louis#0144: You went from bunnies plans to foss with no transition
Louis#0144: I’m leaving that typo
dmvaldman#4711: i think we're getting joosed
Louis#0144: Yeah wtf
Louis#0144: I pointed that out and then everyone silence bird’Ed me |
EricHallahan#1051: To be clear, it is OpenAI's right to maintain GPT-3 as a closed-source product. They may license it as they please and nobody can force them to do otherwise. Even if it is against their charter, that doesn't mean there aren't other reasons why they chose not to openly license.
Louis#0144: They are also still making ai available to a lot of people that wouldn’t otherwise have it
Louis#0144: Which is not negligible
Louis#0144: I still think they are going with their original charter
Sahl#0630: It’s a complicated tradeoff
Louis#0144: They just have financial demands
Louis#0144: They aren’t run by Facebook for instance, they don’t have infinite money from another source
Louis#0144: OpenAI is independent mostly
Louis#0144: Besides Microsoft
Louis#0144: But even then
Manny96#3437: Block-chain will revolutionise FOSS based Fin-Tech
Louis#0144: oh god
EricHallahan#1051: Has their vision become murky lately? Absolutely. Do we have a problem with that? Absolutely not. It is their decision as an organization.
Manny96#3437: Create, financial pathways for FOSS
Manny96#3437: Proof of stake blockchain (open stake and transparent and reproducible)
StellaAthena#3530: I have a problem with it in the sense that if I were in charge I would act differently out of moral obligation. But I'm not going to go after them with a knife and try to coerce them to change.
Louis#0144: I’m pretty confident manny is just a troll
Louis#0144: Tbh
EricHallahan#1051: Exactly.
ethan caballero#6044: "we got way more clarity" - Wojciech Zaremba |
Louis#0144: The Eleuther vision is to maximize alignment memes per teraflop
Louis#0144: Paper clip maximizer but more dank
ethan caballero#6044: https://www.google.com/search?q=%22we+got+way+more+clarity%22+-+Wojciech+Zaremba
LaTrissTitude#0433: hello, found this server looking for an AI research community; trying to find researchers to discuss with about preference elicitation / NLP. this server seems to be mostly deep learning focused, isn't it?
EricHallahan#1051: We have a vocal minority of server members that are very keen on pushing for GPL or modified licenses thereof. We have had extensive discussions and have always come to the conclusion that permissive licenses are the way to go for us.
Louis#0144: Indeed
𓅬 gabriel_syme 𓅬#3220: welcome!
𓅬 gabriel_syme 𓅬#3220: there's a ton of people working on NLP here yes, and we've recently had discussions about PL as well in that context
EricHallahan#1051: Welcome! If you haven't already, please read our FAQ.
https://eleuther.ai/faq
LaTrissTitude#0433: read before posting :p
cfoster0#4356: Depending on what you mean by preference elicitation, #deleted-channel may be of interest
LaTrissTitude#0433: I'm learning a model of implicit preferences for categorization purposes from direct user feedback. According to what I read, preference learning seems to be the term when learning an order between items, and preference elicitation when learning a categorizer... but the terms are unclear, that's why I'm looking for some more experienced researchers for their take on the matter ^^
LaTrissTitude#0433: (soon starting my phd)
EricHallahan#1051: Unfortunately if you read the "Get Involved" page it is almost entirely out of date by this point. `:P`
EricHallahan#1051: Not that it is wrong, but projects have progressed that it effectively needs a rewrite. I'll have to rewrite it sometime before the end of the week so I can forget about it for another three months.
Eric Fillion#2038: I just read some comments regarding Happy Transformer within this chat, and I want to clarify a couple of things. I agree that Hugging Face's transformers library inference functionality is quite simple. But, its training functionality requires a fair bit of expertise to use. With Happy Transformer, you can train models, like GPT-Neo, with just a few lines of code.
Kia#2550: Guys
Kia#2550: We Hit 5k :mittwoch:
Kia#2550: Congrats 🎉 |
Manny96#3437: Stars?
Manny96#3437: GIT?
Kia#2550: Members
Kia#2550: But that would be nice to
Manny96#3437: GIT?
Kia#2550: Discord 😄
Manny96#3437: Nice!
Louis#0144: Not really. There’s a Linux command that lets you finetune neo with HF in one line
Louis#0144: The place happy transformers sits in is that it should be codeless
Louis#0144: If it’s trying to capitalize on people with no CS knowledge
Louis#0144: Otherwise there is never a reason to not use HF
Louis#0144: Using HF with training scripts that they provide or with a trainer has no required expertise beyond just knowing what a token is or what a generate function is
Louis#0144: Which can both be explained in 30 seconds
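like, the entire finetune with the trainer is roughly this (paths/sizes are placeholders, untested):
```py
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          GPTNeoForCausalLM, TextDataset, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
ds = TextDataset(tokenizer=tok, file_path="train.txt", block_size=512)
Trainer(model=model,
        args=TrainingArguments(output_dir="out"),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()
```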
Louis#0144: The target audience is poorly thought out tbh
guac#4716: bro it's 3 a.m. relax lmao
EricHallahan#1051: Wait, it is going to be 5 o'clock somewhere in five minutes.
Manny96#3437: Aus
Louis#0144: LMAOO
Louis#0144: I can’t sleep
LaTrissTitude#0433: 9 o'clock in a few minutes here |
guac#4716: we're all on E.T. stop hiding
Louis#0144: My knees are in so much pain
Kia#2550: Sleep
Kia#2550: Ow yeah
EricHallahan#1051: I pretty much gave away my exact location once here lmao
EricHallahan#1051: So I am not hiding much.
guac#4716: yes i remember you essentially triangulating your position lmao
EricHallahan#1051: Good times.
Louis#0144: Eric is European I’m imagining
Manny96#3437: Get this guys - don't use smartphones for that exact paranoia lmao
Manny96#3437: Triangulation
Manny96#3437: lmao
guac#4716: eric is the quintessential quaker
Louis#0144: Oh
Louis#0144: Penn
Louis#0144: I see
EricHallahan#1051: It depends upon which Eric.
Louis#0144: I wanna join the pen15 club but I heard they are really elite
Louis#0144: 3am Eleuther
Louis#0144: Eleuther after hours |
Louis#0144: Language models gone wild
Louis#0144: I should really sleep
Kia#2550: Sleep now
Kia#2550: :goose6:
EricHallahan#1051: Same.
𓅬 gabriel_syme 𓅬#3220: I think the hardest part for someone without knowledge is not really training, I think it's what to train on. It's easier to get some repo or codebase and figure it out (nowadays) than actually identify, source (scrape, download, create), preprocess, and feed your data to a model
𓅬 gabriel_syme 𓅬#3220: This is why the 1,000,000 MNIST tutorials I read when first coming into DL were both great and terrible
𓅬 gabriel_syme 𓅬#3220: Like I kind of know how to finetune a GPT Neo and I'm nowhere close to a CS person. But I'm absolutely stuck in preparing the data I've found. That's why things like The Pile are amazing, and datasets is one of the core things I'm working on in my domain
HypnoPump17#9322: Nerf here stands for "Natural Extension of Reference Frame". Not as cool as the DL stuff, but it's mainly infra for Proteins/Alphafold2/3D networks such as an SE3 transformer. @bmk @Aran Komatsuzaki @StellaAthena wrt the name: the first paper talking about this algorithm (a non-parallel version) dates back to 2005 lol. Not in our interest that people get confused, but naming things differently in a field which used the name first doesn't seem like the way to go
GrimSqueaker#8837: Data is 80% of the work.
If you also need to do problem formulation in defining the data, that becomes more like 98%
𓅬 gabriel_syme 𓅬#3220: I agree 100% 🙂
𓅬 gabriel_syme 𓅬#3220: in my case I also need to design data generation processes.
GrimSqueaker#8837: They're terrible. They also establish a terrible baseline for all tools and models, wherein they're not expected to provide an example of how to use it on new data. (e.g. that's not already formatted as "from .data import mnist_Train, mnist_Test". ). NVM the capability to get predictions on new data (not just training)
𓅬 gabriel_syme 𓅬#3220: That line exactly, I hate it so much
GrimSqueaker#8837: Umm, Recommenders / Implicit recommendation is what you want
𓅬 gabriel_syme 𓅬#3220: still do, although things are getting better
GrimSqueaker#8837: nah
GrimSqueaker#8837: there's just more stuff, and a small amount is usable. So there's more of it
GrimSqueaker#8837: the % remains minute |
𓅬 gabriel_syme 𓅬#3220: that's true, maybe I've learned where to look or found the right people (this discord is a great example)
𓅬 gabriel_syme 𓅬#3220: also, I struggled a lot to go through that hump alone at some point
LaTrissTitude#0433: recommenders are mostly based on multi users approaches (collaboration based algorithms), my use case is for a single user only, on one time use data
GrimSqueaker#8837: The last really big steps for practical industry stuff (for the 99% , not FAANG stuff), in my view remains relatively unchanged -
SKLearn. Pandas. Catboost/XGB. Keras. +- Spacy.
TF (vs theano) - borderline
𓅬 gabriel_syme 𓅬#3220: sounds about right
𓅬 gabriel_syme 𓅬#3220: CatBoost rocks btw
GrimSqueaker#8837: Still sounds like a recommender. Sequence/session learning. (I did a competition on that recently: WSDM, booking.com.)
https://github.com/ddofer/Booking-Challenge
https://www.bookingchallenge.com/
GrimSqueaker#8837: I'm in love with it. I especially nerded out on it when I was doing interviews a half year back. It swallows simple flat mixed datasets easily (they even theoretically support text/BoW, although that was buggy when I tried it).
It's no SparkBeyond, but it is super convenient. I like it much more than XGBoost or LGBM. I don't care if it's some % slower, it's just easier to use and has lots of convenient stuff baked in, and it's easy to feed it categorical data, or sklearn api. joy.
LaTrissTitude#0433: I see, makes sense, I'll check this out tonight
𓅬 gabriel_syme 𓅬#3220: Yeah I was using that and LGBM back then. I think the differences in performance were minor but I wasn't doing deep industry stuff. But it did feel better to use, and also their documentation felt nice.
GrimSqueaker#8837: sessions / recommenders - What won the competition (and spanked me), was just a transformer BTW. (Some with an LSTM for getting positional embeddings).
The sessions there are all short, most length 4-5 , users very rarely repeat.
𓅬 gabriel_syme 𓅬#3220: Although, I never quite got the way they did categorical if I remember correctly. But they had some fancy stuff going on
𓅬 gabriel_syme 𓅬#3220: around that time I switched to categorical embeddings, which imo was even easier lol |
GrimSqueaker#8837: the WSDM conf had a write up of winners approaches.
(BTW, my embedding + pooling model on the repo outdid the ~top-8 model [a transformer model]. A shame I didn't submit it during the competition :P)
𓅬 gabriel_syme 𓅬#3220: and quite competitive
LaTrissTitude#0433: hmmm.... still going to pose a challenge though, deep learning is a no go on my side, too few users to learn from
LaTrissTitude#0433: how large was the training set?
chinesesoup#6725: Have you guys thought about scraping pdf files or something then extracting text and filtering the text with AI?
Daj#7482: PDF->Text is an absolute nightmare
Daj#7482: quality is very bad
Daj#7482: We tried, extensively
Daj#7482: Also we're not data bound atm
chinesesoup#6725: Yea I know, that's why you need to extensively filter it I guess
chinesesoup#6725: Even tables etc won't show up properly
chinesesoup#6725: But if you read the pdf directly you can put the tables in a usable text format
chinesesoup#6725: Then just a way of finding out if the text refers to images and if it's not gibberish
chinesesoup#6725: Then you should end up with decently clean data no?
Daj#7482: Not worth the effort
Daj#7482: at our scale
Daj#7482: Also no one wants to do it since it's boring as hell lol
chinesesoup#6725: Hmm I'm gonna try and do it some time later, currently I'm working on a chess dataset
Daj#7482: If anyone can get good PDF->Text to work, we'd be _super_ interested in that |
Daj#7482: Since a ton of great data is locked up in PDFs
chinesesoup#6725: Yea exactly my thought
Daj#7482: it's just an extremely soul crushing thing to work on lol
chinesesoup#6725: That's why I wanna try that
chinesesoup#6725: My soul has already been crushed by programming, I should be fine 🤣
GrimSqueaker#8837: How many users, how many targets, how many events, how much metadata?
GrimSqueaker#8837: If you can do that, then you have a company
LaTrissTitude#0433: 1 user per dataset, very few events (a dozen at most, those expert users are very time-limited) per session, huge search space (millions of possible feedbacks), the datasets can be absolutely unrelated to each other, each dataset is comprised of a few million time series including a few other dimensions I'm not sure I can disclose, amidst some other data
𓅬 gabriel_syme 𓅬#3220: I will invest in that one
chinesesoup#6725: I'll keep you guys posted then xd
user91010#6777: anyone have a link to the "Chance" model mentioned on the github
user91010#6777: seems p lightweight
GrimSqueaker#8837: what are the millions of TS? (is the issue selecting a feedback, with millions of possible feedbacks?)
If you're talking about dozens, then avoid ML. I'd do the heuristic of "return the K most used" (defined by business logic, or sum(count(event)) over your dozen examples), +- basic word matching search if it makes sense
alstroemeria313#1694: hey is there like... some way to weight a cross-entropy loss function, if you have some sort of measure of how bad it would have been to choose an incorrect category, given the actual category?
alstroemeria313#1694: With normal cross-entropy, only the probability assigned to the correct class counts; assigning probability to slightly-off classes doesn't count for anything
alstroemeria313#1694: i.e. i have a cost matrix for my classes
alstroemeria313#1694: and want to make use of it
alstroemeria313#1694: I've already tried things like minimizing the expected cost if you sampled from the output distribution and the model collapsed to nearly always just predicting one class |
alstroemeria313#1694: Or do you like... just expect a model trained with cross-entropy to pick up the costs implicitly from the distribution of the training data
Kharr#7888: Have you looked at label smoothing loss? Normally you smooth uniformly outside of the correct class, but you can certainly weight it
alstroemeria313#1694: i want loss=0 to still be always predicting 100% for the correct class though?
Kharr#7888: Can't have that since you are predicting a distribution.. if loss=0 for a single class, that's just normal cross-entropy
alstroemeria313#1694: Like the optimum in the limit of memorizing the training set should still be the same
alstroemeria313#1694: "but if you can't achieve that optimum, it is better to make these kinds of errors than these other kinds of errors"
Kharr#7888: Maybe add a secondary loss? You could do cross entropy + weighted alternatives and give each loss a different weight when you add them together. (e.g. loss = 0.7 * cross_entropy + 0.3*weighted_alternatives)
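A minimal sketch of that combination, for what it's worth. All names and the 0.3 weight are illustrative (nothing here is from anyone's actual code), and `cost_matrix` is assumed precomputed with zeros on the diagonal, so a model that puts 100% on the correct class still reaches zero for the cost term:

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, targets, cost_matrix, cost_weight=0.3):
    """Cross-entropy plus the expected cost if you sampled from the model.

    cost_matrix[i, j] = cost of predicting class j when the truth is i,
    with zeros on the diagonal, so 100% probability on the correct class
    still gives an expected cost of zero.
    """
    probs = F.softmax(logits, dim=-1)            # (batch, num_classes)
    per_class_cost = cost_matrix[targets]        # (batch, num_classes)
    expected_cost = (probs * per_class_cost).sum(-1).mean()
    return F.cross_entropy(logits, targets) + cost_weight * expected_cost
```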
alstroemeria313#1694: maybe
alstroemeria313#1694: Like, I'm training an autoregressive model on sequences of VQGAN tokens
alstroemeria313#1694: And it treats all of the possible tokens as independent
alstroemeria313#1694: But I actually have a measure of how visually 'close' one token is to another
alstroemeria313#1694: i.e. the Euclidean distances between their VQGAN embeddings
alstroemeria313#1694: (Which is how VQGAN works, the encoder outputs continuous embeddings which are then vector quantized to the closest embedding in the codebook, according to Euclidean distance)
alstroemeria313#1694: And you can in fact do gradient descent in VQGAN embedding space to optimize an image for a particular loss
Kharr#7888: I have no idea off the top of my head, I haven't played around with such a setup yet 😦
alstroemeria313#1694: But there are like... super complex interactions between adjacent and spatially close VQGAN tokens
alstroemeria313#1694: Repeating the same token over and over nearly always produces a flat color output, for instance, but VQGAN is capable of encoding very complex and realistic textures and edges
Kharr#7888: It's a general problem with AR models -- repetition often allows the model to reduce loss since there are local correlations in text and images
alstroemeria313#1694: well, it works fine with AR
Kharr#7888: I mean as a general problem.. AR models like to repeat themselves, even when they are billions of parameters |
alstroemeria313#1694: Real VQGAN token sequences don't really repeat much so it doesn't learn to output repeats
alstroemeria313#1694: IDK, I'm guessing the statistics of sequences of VQGAN tokens are just different from text
alstroemeria313#1694: But then I haven't tried greedy decoding yet so
alstroemeria313#1694: *shrug*
alstroemeria313#1694: All my demo grids are sampled.
alstroemeria313#1694: Like. The problem with text is that repeats are actually higher likelihood, *in the actual training data*, than individual non-repetitive sequences
alstroemeria313#1694: Like if you had a biased coin, p(heads) = 0.6, the most likely sequence, and the one you'd get with greedy decoding, is all heads
alstroemeria313#1694: (I suspect, but can't prove, that the actual highest likelihood sequence of characters for any length over a minimum is all spaces)
alstroemeria313#1694: i... will actually code up greedy sampling and try it now
alstroemeria313#1694: on my partly trained AR model
alstroemeria313#1694: so i can verify whether this is the case for VQGAN tokens too
𓅬 gabriel_syme 𓅬#3220: is it the case that only one specific category is the 'correct answer' each time?
𓅬 gabriel_syme 𓅬#3220: I was wondering if you could try smth like semantic loss. It wouldn't guarantee you that you select the right class but it would push the model to select *one* class as the right answer. No idea why this came up, just curious if it would help with collapse
𓅬 gabriel_syme 𓅬#3220: it's a weighted loss btw, added to your standard (typically)
alstroemeria313#1694: yes, i'm training it to predict real sequences so the correct answer is the next real token
alstroemeria313#1694: hm
alstroemeria313#1694: i could just add in my "expected squared Euclidean distance if sampled" loss
alstroemeria313#1694: at a rly low weight
alstroemeria313#1694: Since if it assigns 100% to the right answer then its expected cost is zero
alstroemeria313#1694: by definition |
alstroemeria313#1694: so it doesn't change the optimum in the limit of memorizing the training set
alstroemeria313#1694: ...Wait, is the problem that expected cost if sampled *assumes you're sampling*
alstroemeria313#1694: Could I just minimize the expected cost if you did greedy decoding instead
alstroemeria313#1694: Or would that be worse...
alstroemeria313#1694: I can't because I'd have to take the argmax and that isn't differentiable.
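For reference, one generic relaxation exists for exactly this obstacle (not something anyone above is actually using): a straight-through Gumbel-softmax, which takes a hard one-hot selection in the forward pass but backpropagates through the soft distribution. All tensors below are stand-ins:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 1024, requires_grad=True)  # stand-in model outputs
targets = torch.randint(0, 1024, (8,))
cost_matrix = torch.rand(1024, 1024)               # stand-in token costs

# hard=True: one-hot in the forward pass, soft gradients in the backward pass.
one_hot = F.gumbel_softmax(logits, tau=1.0, hard=True)
cost = (one_hot * cost_matrix[targets]).sum(-1).mean()
cost.backward()  # gradients flow despite the hard selection
```

Strictly this relaxes sampling rather than argmax, but the same straight-through trick is often applied to a plain argmax as well.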
alstroemeria313#1694: yeah, greedy sampling of VQGAN tokens repeats too
chinesesoup#6725: I just discovered what a pain it is to parse pdfs. Anyone got experience with that? I need a way to get the elements and not just the plain text
bmk#1476: welcome to the dark side
bmk#1476: there isn't really any good way of parsing pdfs that isn't also proprietary
bmk#1476: if you decide to build a good pdf parser, please, *please* let me know
bmk#1476: pdfs are basically to be treated as just vector images since there's absolutely no guarantee that the layout of things on file has any relation to the layout on the page whatsoever
Louis#0144: Only good way I know to manage PDFs is ocr
Louis#0144: And even then
Louis#0144: It’s kinda ehhh
Daj#7482: I warned you lol
chinesesoup#6725: I thought about parsing it myself but the iso spec is more than 750 pages lol
EricHallahan#1051: I feel like there is so much information locked away in them that is entirely not accessible.
chinesesoup#6725: Gonna try pdf to xml
chinesesoup#6725: And then from xml to text
Kharr#7888: Give Tika a try -- it's not perfect but it works the best I've seen for a canned solution. |
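For reference, the `tika` pip package (a thin client that spawns or connects to an Apache Tika server; requires Java) is about as canned as it gets. A minimal sketch, assuming the API hasn't changed:

```python
from tika import parser  # pip install tika

parsed = parser.from_file("some_document.pdf")
print(parsed["metadata"])  # title, author, content-type, etc., when present
print(parsed["content"])   # extracted plain text
```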
chinesesoup#6725: How do they even make the format so inaccessible lol
chinesesoup#6725: Why, just why
chinesesoup#6725: The xml seems to work but I still have to see if there is anything useful formatting in it
Kharr#7888: A proper parser will preserve formatting like paragraphs, bullet points, headers, etc
chinesesoup#6725: Seems like everything is still there
chinesesoup#6725: Even images in base64
chinesesoup#6725: The problem mostly is that every text element etc
chinesesoup#6725: Gets drawn on specific coordinates
chinesesoup#6725: Described in cm 😭
chinesesoup#6725: This is gonna take a while
chinesesoup#6725: It's completely unstructured lol
chinesesoup#6725: It's just coordinates with svg, text, or images
chinesesoup#6725: And the font defined
Daj#7482: It's funny every time we see another person have this experience when first encountering PDFs lol
UnsupervisedLearner#4148: Just compile a giant pdf dataset and do supervised training with a gpt
Pdf source -> actual document
chinesesoup#6725: You mean like train a gpt on the xml of pdfs?
Louis#0144: @Daj OAI writing about normativity now?
Louis#0144: That’s what I got from the finetune blog |
Daj#7482: When have they not?
Louis#0144: They usually do AI safety
Louis#0144: Which isn’t normativity
Louis#0144: Normativity is all about extracting norms from data
Daj#7482: Yeah but if you want safe AI it better behave normative
UnsupervisedLearner#4148: I have not even attempted actually thinking about this
I'm just memeing about GPTs for everything
Louis#0144: True
chinesesoup#6725: I mean gpt works for svg files so I guess it also works on xml files? It would probably work if they had a much larger context window
chinesesoup#6725: It would be able to figure out the relations between the text locations and fonts I guess
LaTrissTitude#0433: unfortunately not an available heuristic, no business logic available (various areas of expertise are possible, our approach is general), to answer your question, a single point of data has multiple possible views (time series), the time series are diverse af (semantic, binary, numeric, ...), some views can be seen as imagery, others as other kinds of representations.. huge search space, extremely poor amount of feedback, oh and it's an iterative process on top of this :D
My main problem is that I don't know the name of this kind of... "state of the art category", can't seem to find anything on this kind of problems amidst preference learning and recommender systems sota papers
UnsupervisedLearner#4148: I was harping about this last night. Having such a fixed context window when scaling so massive is just weird
Might be why they aren't doing dense GPT 1.7T
UnsupervisedLearner#4148: (Besides all the other reasons. )
chinesesoup#6725: Yea that contextwindow is a pretty tough problem
chinesesoup#6725: Have they ever tried to train a small model with a much larger context window yet? |
UnsupervisedLearner#4148: There's lots of stuff on 'efficient transformers' yeah
UnsupervisedLearner#4148: They talk about it a lot in here
https://discord.gg/kPE22Qmw
Because gene sequences are long
bmk#1476: @chinesesoup pdfs are basically vector graphics that happen to have text in them
bmk#1476: treating them as anything but that will just cause you pain
chinesesoup#6725: Yea I realised that now
GrimSqueaker#8837: proteins are long, genomes are ridiculous
CKtalon#7792: if you have pdf in non-English characters, good luck too
CKtalon#7792: you'll get rubbish generally
Louis#0144: @gwern how does it feel to get cited by OAI
CRG#8707: And deepmind
Louis#0144: oh damn
Louis#0144: I need to email a deepmind researcher
Louis#0144: he wants to collab w stella and me
Louis#0144: eleuther + deepmind
Louis#0144: 😉
UnsupervisedLearner#4148: You should be super pretentious about it and act like you're reaching down to help such a plucky little lab |
UnsupervisedLearner#4148: "I do it for the little people, you know"
chinesesoup#6725: 😂😂😂 lmao
StellaAthena#3530: We are. I work for a company with an order of magnitude more employees and multiple orders of magnitude more revenue 😛
bmk#1476: > multiple orders of magnitude more revenue
wait, DM makes money??
StellaAthena#3530: I said revenue, not profit
bmk#1476: wait, DM makes revenue??
bmk#1476: is any of that revenue not just google supplying it with money to burn
StellaAthena#3530: In 2019, DM had 266 million pounds of revenue
StellaAthena#3530: and a net loss of 477M
UnsupervisedLearner#4148: Take it a step further and brag about being an American with an oom more geography and GDP
bmk#1476: how much of that revenue is not from google
bmk#1476: is *any* of it not from google
Louis#0144: pounds of what?
UnsupervisedLearner#4148: Neurons
bmk#1476: feathers
StellaAthena#3530: No idea @bmk
StellaAthena#3530: > And DeepMind is not alone. OpenAI, DeepMind’s implicit rival, has been facing a similar identity crisis, transforming from an AI research lab to a Microsoft-backed for-profit company that rents its deep learning models.
StellaAthena#3530: Big OOOF |
StellaAthena#3530: I can only assume this is exactly the PR OAI doesn't want lol
AI_WAIFU#2844: They brought this upon themselves tho
Samin#4651: at the end of the day someone's gotta pay up to nvidia
tg#7159: What do folks use these days for cloud GPU compute? EC2? Lambda?
tg#7159: I think I'm mostly compute constrained... and would ideally like to scale up to 16+ GPUs.
tg#7159: (PyTorch workflow, dataset is maybe 4 GBs)
guac#4716: 16+ gpus for 4gbs of data seems a bit much lol
StellaAthena#3530: What are you actually looking to achieve? Specifically?
tg#7159: model is pretty fat and it seems to keep improving after training for 5 days on my RTX 3090 (which is 500 epochs or so)
StellaAthena#3530: What is the model?
tg#7159: auto-regressive transformer fitting 1024 VQ-VAE image sequences
bmk#1476: how many params
tg#7159: Right now I'm training on 200k images or so and I want to scale up to larger dataset and ideally reduce the training to be under a day
tg#7159: Right now it's ~1b, but I was thinking of scaling that up as well
AI_WAIFU#2844: I think azure does a good job with this stuff.
tg#7159: I've intentionally scaled things down while I'm training on my workstation
StellaAthena#3530: None of that makes any sense to me.
tg#7159: which part?
StellaAthena#3530: How big are you images
tg#7159: 512 x 512 |
StellaAthena#3530: Unless my math is way off (always a possibility when doing arithmetic) you’re pretty far away from optimal compute / data trade off
cfoster0#4356: Are you saying the model is too big for the dataset?
StellaAthena#3530: Yeah
tg#7159: Okay, let me see if I can be more precise
StellaAthena#3530: Even if we say that each pixel is a byte of information, the dataset can only hold a total of 50GB of information.
tg#7159: 1. I want to reduce the wall time. I'm using a batch size of 64 right now, model is about ~1b params. It does about 1-2 it/s on the RTX 3090, no accumulation.
2. After training for several days, my eval loss continues to improve, as do qualitative samples.
tg#7159: I can increase my dataset arbitrarily, but I haven't found any issues with overfitting using even 1000 epochs.
StellaAthena#3530: You’ve done 1,000 epochs on 200k images with a 1B model and nothing weird happened? And validation loss kept going down?
tg#7159: My thinking was that the simplest way to improve the 1-2 it/s would be to use more GPUs and expect near-linear scaling.
CRG#8707: According to one of the dall-e authors, compute-efficient training created blurry images for the small models.
CRG#8707: The 13B model was trained to convergence
AI_WAIFU#2844: They have much more data than parameters.
tg#7159: Random crop & horizontal flip & color jittering augmentation
tg#7159: but... I can scale up my dataset arbitrarily as needed...
StellaAthena#3530: 512^2 bytes * 200,000 = 52 GB, right? Or am I being an idiot?
tg#7159: my model is larger in terms of bytes than my dataset yes
tg#7159: it has the power to fully encode the entire dataset
AI_WAIFU#2844: wait in what world is a 1B model > 52GB. Or are these compressed?
tg#7159: (the dataset is compressed, each image is on average 30KB) |
AI_WAIFU#2844: Have you played around with the hparams at all?
tg#7159: like, I mean _theoretically_ you could encode the entire dataset into ~XGBs using JPEG compression where X is like 4
StellaAthena#3530: How is each image 30 KB when compressed? 512^2 = 26,000
tg#7159: because they're JPEGs?
AI_WAIFU#2844: rgb
tg#7159: https://cdn.discordapp.com/attachments/729741769738158194/852602475260936222/unknown.png
AI_WAIFU#2844: but still 6GB of data vs a 1B model.
tg#7159: I'm a little confused. I thought you were suggesting that my dataset was too small or something. I was pointing out that the model size is in roughly the same ballpark as my dataset when compressed.
tg#7159: Regardless, my wall time is days and I want to make it less than that...
Samin#4651: 512 * 512 is 262,144
AI_WAIFU#2844: I would start with a multi-gpu instance on any of the cloud providers
AI_WAIFU#2844: things become progressively more painful as you rack up GPUs.
StellaAthena#3530: GCP is probably the quickest from start to training and reasonably cheap. I’ve never used Azure but I’ve heard bad things
AI_WAIFU#2844: The benefit of Azure IMO is that they've got good clusters.
AI_WAIFU#2844: What with the whole OAI training
StellaAthena#3530: If generating more images is cheap, it would be worthwhile to double the size of your dataset and train it for 10 epochs. Compare held-out loss to the same model trained on the original dataset for 20 epochs (so, same total number of images). I would expect that the larger dataset for fewer epochs does better on an independent test set.
I know the heuristics for text and images are different, but your numbers aren’t adding up in my head.
tg#7159: What is the heuristic that you're going by here so I can better understand your confusion? Is it something like... X = dataset size, Y = model size... X > c * Y or something ?
tg#7159: GCP > Azure |
tg#7159: Thoughts on Lambda's cloud offerings or EC2?
tg#7159: I haven't used any of these before so I'd mostly be picking from a hat
tg#7159: one other thing that might be helpful to know is that without data augmentation (e.g. random crop), the model definitely overfits the dataset fairly quickly and the loss at some point rapidly drops towards zero
tg#7159: but again... my hope was to reduce wall time per iteration simply to speed up the wall time to convergence, and I was wondering what cloud solutions people on here would vouch for
marmiteCloud#5923: GROBID is fantastic for scientific papers / reports... You can also use something like LayoutLM or Detectron2 to detect text areas and get pretty good OCR results with tesseract using the segmented instances.. It's a problem I work on for a company, so keen to hear if you develop a better approach.
marmiteCloud#5923: maybe training pptx-->pdf mappings could work somehow, though I doubt it due to formatting of pdf. Maybe pptx--> png from pdf?
chinesesoup#6725: you could "simply" read out the file if you follow the pdf ISO specification, or just convert it to xml or any other format. xml seems the most useful from the stuff I came across. Even images are in there using base64. You would get all the text in xml format, the only problem is that every element just has coordinates and contents, it almost has no structure. It's like editing a file in photoshop or something, all the elements just get placed on a specific location and there is no info about tables etc. Most editors will also break text down into multiple text elements to align them properly
chinesesoup#6725: I can send you an example xml if you want, it is structured but only barely
chinesesoup#6725: you could probably use machine learning to figure out which texts belong together using the coordinates and width/height
chinesesoup#6725: https://cdn.discordapp.com/attachments/729741769738158194/852629249152385084/c015fa2f328c18d7e1649888e642c8d9.png
chinesesoup#6725: this is how it looks, even though its a single block of text. pdf to text parsers just read all of this chronologically and usually it makes sense lol
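Before reaching for ML, a simple coordinate heuristic gets surprisingly far: bucket elements into lines by y coordinate, then sort each line left-to-right by x. A sketch over a made-up element format (the real XML fields will differ):

```python
from collections import defaultdict

def elements_to_text(elements, y_tolerance=2.0):
    """elements: iterable of dicts like {"x": 105.4, "y": 707.2, "text": "..."}."""
    rows = defaultdict(list)
    for el in elements:
        # Elements on (almost) the same baseline land in the same bucket.
        rows[round(el["y"] / y_tolerance)].append(el)
    lines = []
    for key in sorted(rows):  # top-to-bottom (or reversed, per the PDF origin)
        line = sorted(rows[key], key=lambda el: el["x"])  # left-to-right
        lines.append(" ".join(el["text"] for el in line))
    return "\n".join(lines)
```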
marmiteCloud#5923: yep, it would be cool to develop something to do that, across the diversity of pdfs, into xml. right now, if you aren't using scientific papers where something like GROBID pre-exists, a minority of the time the concatenated text output will be garbage unfortunately... and often you lose info on what is a title/heading/footer etc.
I'm suggesting you could use training data of rows of pptx files (PowerPoint XML, essentially) and their PDF outputs (and deliberately mix the outputs up a little) to train pdf --> pptx. It might transfer to non-pptx-origin PDFs.
chinesesoup#6725: Yea I was thinking the same thing, but with html or word. I'm not sure if it would transfer tho
chinesesoup#6725: Would probably be better to just train directly on something like the xml
chinesesoup#6725: The problem would be the context window
alstroemeria313#1694: you got yours to overfit? so jelly
alstroemeria313#1694: I'm processing MS COCO into VQGAN tokens rn
tg#7159: Are you training the VQ-GAN or using one for like an auto-regressive model or something? |
alstroemeria313#1694: i'm using the pretrained 1024 token imagenet one rn
alstroemeria313#1694: and will train an autoregressive model once it's done encoding
tg#7159: the taming one? I didn't know that they released an imagenet model yet
alstroemeria313#1694: there are two
alstroemeria313#1694: they are only vqgans, no autoregressive model
marmiteCloud#5923: Oh, nice. Yeah unclear really. Maybe have a preprocess classifier to split into common pdf styles first. I can send you a little detectron2 model that classifies text, headers and images, if you'd like. I'm not sure the nature of XML and closing tags will work well with GPT architecture. But if it does...
tg#7159: Oh right.
tg#7159: Yeah, the first thing I tried actually was pre-computing the sequences and then training a transformer directly on those sequences
tg#7159: but that led to overfitting
tg#7159: so now I do the discretization as part of the training loop
tg#7159: so I can augment the images
alstroemeria313#1694: i'm going to train an autoregressive model conditioned on a CLIP text embedding and a score of how well the CLIP text embedding fits the decoded output
StellaAthena#3530: Oh, I forgot about the augmentations. NVM, ignore everything I said then D:
alstroemeria313#1694: i noticed that it spent *the majority of its time* encoding and decoding images with VQGAN
alstroemeria313#1694: my model is much smaller than yours though rn
alstroemeria313#1694: i want to like... justify this CLIP conditioning scheme
alstroemeria313#1694: quickly
tg#7159: I only encode when training the transformer
alstroemeria313#1694: and then scale/train a better one
tg#7159: also, I use torch.no_grad around the encoding |
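The pattern being described, roughly (every name here is a stand-in for whatever the pretrained model and data pipeline actually expose):

```python
import torch
import torch.nn.functional as F

for images in dataloader:                  # raw images, augmented on the fly
    with torch.no_grad():                  # frozen VQGAN stays out of the graph
        tokens = vqgan_encode_to_indices(augment(images))  # (batch, seq_len)
    logits = transformer(tokens[:, :-1])   # predict each next token
    loss = F.cross_entropy(logits.transpose(1, 2), tokens[:, 1:])
    loss.backward()
```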
alstroemeria313#1694: i have to decode so i can feed it to CLIP
alstroemeria313#1694: for the CLIP score
tg#7159: gotcha
alstroemeria313#1694: if i fed the original image to CLIP instead, the CLIP score wouldn't actually correspond to anything the transformer could possibly output
alstroemeria313#1694: it would be correlated but
alstroemeria313#1694: i can just get it exact by decoding
chinesesoup#6725: The thing is I'm not really good with machine learning or anything. I'm just a coder that was interested in gpt neo so I figured I could help making datasets xd So thats what I'm trying to do now
chinesesoup#6725: But grobid or detectron2 do seem interesting tools
tg#7159: What is the expectation with the score? Are you trying to transfer some idea of "confidence" to the transformer? Like, how accurate the text is for a given image?
alstroemeria313#1694: yeah, kind of... it's a simplified Decision Transformer type idea
alstroemeria313#1694: CLIP score is the reward, the transformer learns "this sequence of outputs corresponds to this reward" then you prompt it with a high reward and sample a policy
alstroemeria313#1694: Simplified because there's no state and the reward only comes at the end of the sequence
alstroemeria313#1694: So it comes down to prompting it with a CLIP text embedding and a good CLIP score and sampling VQGAN tokens.
alstroemeria313#1694: (A full Decision Transformer takes intermediate rewards and states at each timestep too)
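Concretely, the conditioning layout might look something like this (a sketch with made-up dimensions and projection names, not anyone's actual code):

```python
import torch
import torch.nn as nn

d_model = 768
text_proj = nn.Linear(512, d_model)   # hypothetical: CLIP text embed -> model dim
reward_proj = nn.Linear(1, d_model)   # hypothetical: scalar CLIP score -> model dim
token_embed = nn.Embedding(1024, d_model)

clip_text = torch.randn(1, 512)       # CLIP embedding of the caption
clip_score = torch.tensor([[0.87]])   # reward: CLIP score of (caption, decoded image)
vq_tokens = torch.randint(0, 1024, (256,))

# Condition-first layout: [text, reward, tok_1, ..., tok_n]. At sampling time,
# prompt with the text embedding plus a deliberately high reward and decode.
seq = torch.cat([text_proj(clip_text),
                 reward_proj(clip_score),
                 token_embed(vq_tokens)], dim=0)  # (258, d_model)
```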
alstroemeria313#1694: it's done, GPUs go brrr
alstroemeria313#1694: :brr:
gwern#1782: (arguably, there is state and intermediate rewards if you zero out unavailable tokens)
gwern#1782: (this is not even necessarily a pedantic point - think about SPIRAL, or systems using Fourier transforms. perhaps you *should* generate images progressively with rewards at every 'timestep')
alstroemeria313#1694: idk how to get the rewards though, I only have a CLIP score for the full sequence
Teto#0001: What's the most cost effective gpt model |
Teto#0001: :LeDogTripoloski:
alstroemeria313#1694: effective how
Teto#0001: Low cost
alstroemeria313#1694: oh
alstroemeria313#1694: the smallest lol
Teto#0001: True
Teto#0001: But is the performance loss worth it
alstroemeria313#1694: i don't know
Teto#0001: 1.3b is the smallest right?
alstroemeria313#1694: i usually just use the biggest that i can get my hands on that will fit into gpu memory but i'm not generating mass quantities of text or serving an api
EricHallahan#1051: That is highly dependent upon your application and preference.
Sid#2121: https://6b.eleuther.ai/ :bigbrain:
Teto#0001: Just a chat bot ai
Sid#2121: cost = 0, effective = big
EricHallahan#1051: *Get in while supplies last!*
Teto#0001: Was this
Teto#0001: Lemme check
Sid#2121: it's the 6B param gpt model we (well, Ben) just released
Teto#0001: My goal is to create a virtual AI assistant
Sid#2121: if you have some technical competency, and sign up to TRC, you could conceivably run this at a very low cost |
Sid#2121: @iobot in the faraday cage is running it
iobot#4286: in the what?
Sid#2121: get back in ur cage
gwern#1782: well, it might not work for the current VAE given that it seems to be mostly all-or-nothing, but my point is there are lots of archs which give you images for subsequences, and those images can be scored, and the difference of those scores used as rewards
Teto#0001: What is TRC
gwern#1782: tfrc
Sid#2121: https://sites.research.google/trc/
alstroemeria313#1694: i can decode partial sequences if the sequence is a multiple of the number of tokens per line, i'm just not sure how to score it with CLIP yet
alstroemeria313#1694: since CLIP takes square images
alstroemeria313#1694: i guess i could just resize it to square and score it
alstroemeria313#1694: but... this might distort things a bit.
gwern#1782: if you pad it out with black/white pixels... hm. might make it too easy... on the other hand, that's sort of a constant bonus for getting to pixel _n_, and RL is about maximizing so it doesn't matter if you have constant bonuses
alstroemeria313#1694: huh
alstroemeria313#1694: It'll slow training down though because the VQGAN part is actually more expensive than the transformer part
alstroemeria313#1694: i could cheat and take the fully decoded image and mask it off with black at different points
alstroemeria313#1694: and just feed those all to CLIP
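That cheat in sketch form (`vqgan_decode`, `clip_score`, and `text_embed` are stand-ins; 16 px per token row assumes an f=16 VQGAN like the 1024-token ImageNet one):

```python
import torch

decoded = vqgan_decode(vq_tokens)               # (3, H, W); decode once, total
px_per_row = 16                                 # f=16 VQGAN: one token = 16x16 px
rewards = []
for rows_done in range(1, decoded.shape[-2] // px_per_row + 1):
    partial = decoded.clone()
    partial[:, rows_done * px_per_row:, :] = 0.0   # black out ungenerated rows
    rewards.append(clip_score(partial, text_embed))  # one reward per prefix
```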
gwern#1782: from a RL perspective, it doesn't necessarily change anything compared to an end-to-end loss. like in a chess context: you could add rewards to each turn as reward-shaping, or you could provide only the true terminal loss. they ought to be equivalent in terms of the final optimal policy. however, the reward-shaped one can be *much* easier to learn
alstroemeria313#1694: ah
alstroemeria313#1694: vqgan token sequences are supposed to be modelable autoregressively in the first place
alstroemeria313#1694: hm |
alstroemeria313#1694: i'm just adding the terminal reward to the model
gwern#1782: like the question of how much state to provide as observations. if the state can be computed from the history, in theory, your RNN or transformer or whatever doesn't *need* the state as an input, it can just calculate as much as it needs. however, it sure can make learning easier
Louis#0144: OH NO HES ESCAPED
iobot#4286: what?
Louis#0144: Are u here to turn me into a paper clip
iobot#4286: no, I'm here to turn you into a paper clip
Louis#0144: Oh no
iobot#4286: yes.
alstroemeria313#1694: i'm not sure what i'd *use* as state
gwern#1782: so in a DT/CLIP/DALL-E context, you could imagine a setup where the transformer gets data encoded as tuples of 'immediate reward, image to date'
gwern#1782: this would be much much larger input than simply [reward, tokens]
gwern#1782: but the incrementality *might* make reward much easier, in the same way that dumping an entire chess board state + move value estimate is easier than just '1. k2; 2. f5; 3. E2 (!)'
Deleted User#0000: Small rant: a company called "OpenAI" locking GPT-3 under an invite-only paywall is so hypocritical. We're the developers of a programming language called Kind. We'd like to experiment using GPT-3 for algorithm and code auto-completion in our language. We've been patiently waiting for almost a year already, but I guess our application hasn't even been seen yet. We have so many ideas that could benefit everyone, we have a team to work on them, we have money to pay whatever they want. We just need access! :( Is there anything we can do at this point?
gwern#1782: that sounds pointless. how would GPT-3 even know your language?
Deleted User#0000: They should definitely rebrand as ClosedAI /sighs
gwern#1782: the standard joke is 'ClopenAI' fwiw
AI_WAIFU#2844: wait for us
Deleted User#0000: @gwern it doesn't need to, I think. Anyway, only by experimenting would we be able to tell
UnsupervisedLearner#4148: Neo Davinci when :ultraberk:
EricHallahan#1051: ¯\_(ツ)_/¯ |