mgostIH#0245: He isn't planning to recreate bitwise fp arithmetic
voxs#0001: wait how is minecraft redstone analog
mgostIH#0245: Comparators can subtract signal strength
mgostIH#0245: he also makes a signal multiplier on stream
voxs#0001: kek
EricHallahan#1051: You can do the same exact thing in real life. You are limited by your upper and lower bound of your output of your op-amp, but it is doable.
voxs#0001: why don't we do analog NNs irl then
EricHallahan#1051: Because analog computers are effectively dead.
EricHallahan#1051: They are really only useful in very specific applications after the advent of the microprocessor.
EricHallahan#1051: Analog circuitry requires shielding from noise. And so when your gains are large even a small amount of noise at the input will make large changes to the output.
mgostIH#0245: Ye basically Minecraft is a world where analog is better than digital
mgostIH#0245: Or at the very least on par with it
EricHallahan#1051: That is why good EEs have jobs. It is very hard to maximize the SNR for any given situation.
EricHallahan#1051: Minecraft doesn't have analog computation with redstone; it has 16-level logic IMO.
EricHallahan#1051: It isn't truly continuous.
EricHallahan#1051: Even though you can do analog-like operations with it.
mgostIH#0245: I think conceptually the circuits shown there were very analog-like
mgostIH#0245: Ofc there's game limitations
mgostIH#0245: but imo it's just a mindset of how you design your circuit
EricHallahan#1051: It is a made-up system. It is neither one nor the other.
EricHallahan#1051: Just game design.
StellaAthena#3530: Someone was asking about server statistics / hit rate? Here's the past week of activity on our website. https://cdn.discordapp.com/attachments/729741769738158194/828382604566593586/Screen_Shot_2021-04-04_at_5.36.02_PM.png
StellaAthena#3530: And discord https://cdn.discordapp.com/attachments/729741769738158194/828382719763021844/Screen_Shot_2021-04-04_at_5.37.04_PM.png
EricHallahan#1051: Total revenue: 0.00
dionysus#2918: Big buccs
triggerhappygandi#0001: Bobux
triggerhappygandi#0001: https://tenor.com/view/when-no-bobux-mfw-no-bobux-when-bobux-bobux-0bobux-gif-18384880
AI_WAIFU#2844: Man I'm digging deeper into the ray documentation and it's pretty powerful
AI_WAIFU#2844: @kindiana do you know if theres a nice way to tell ray to put something on 1 particular node without using custom resources?
kindiana#1016: something = actor?
AI_WAIFU#2844: Yeah basically, is there a way to put an actor on a specific node?
coozamano#5333: How to donate to this project???
gwern#1782: isn't that in the FAQ?
EricHallahan#1051: !faq
Carl-bot#1536:
StellaAthena#3530: @coozamano Welcome! We are not currently accepting monetary donations. If you enjoy what we do and would like to give back to EleutherAI, the most useful thing you can donate is manhours (assuming you’re skilled at ML development or research) or *large* amounts of compute.
bmk#1476: large = at least hundreds of GPUs kind of scale
coozamano#5333: gotcha, whats the reasoning for not accepting monetary donations out of curiosity?
bmk#1476: we already have enough money
bmk#1476: unless you're thinking of giving us 7 figures or something
coozamano#5333: gotcha
coozamano#5333: I'll message here once I've got 7 figs to give
bmk#1476: :berk:
AI_WAIFU#2844: There's also practical issues. Eleuther is not actually a legal entity (yet?) so things like taxes get complicated.
bmk#1476: that too yeah
bmk#1476: if someone wants to give us a huge amount of money it would be worth figuring the tax situation out
coozamano#5333: perhaps I could give back by setting up an entity and all that for the community? Then donations would be a breeze
bmk#1476: no thanks
bmk#1476: we can figure that out ourselves if we need one
coozamano#5333: ok
kindiana#1016: not that I know of lol, lmk if you figure it out
EricHallahan#1051: Nah, we'd rather not have one. Adds to management overhead.
bmk#1476: we kind of like being informal
Louis#0144: This sounds like a nightmare
bmk#1476: having a legal entity would make things more complicated so unless we could get a huge amount of money as a result it probably wouldn't be worth it
AI_WAIFU#2844: K then resources = {<random string> : "INTMAX" } it is then
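For reference, a minimal sketch of the custom-resource trick being settled on here; `pinned_node` is an arbitrary placeholder tag, and the target machine would need to be started with a matching `--resources` flag:

```python
import ray

# Assumed setup: the target node was launched with a matching tag, e.g.
#   ray start --address=<head-address> --resources='{"pinned_node": 1}'
ray.init(address="auto")

@ray.remote(resources={"pinned_node": 0.01})  # requesting any sliver of the tag pins placement
class Worker:
    def ping(self):
        return "running on the pinned node"

worker = Worker.remote()
print(ray.get(worker.ping.remote()))
```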
coozamano#5333: more power to ya, didn't mean to impose, just love the project and want to help
AI_WAIFU#2844: The other thing is we need to do some research on the pros and cons of various legal entities before we settle on something
AI_WAIFU#2844: The dependency tree of shit to do here is big and deep.
StellaAthena#3530: If you have any friends who are good at ML, tell them we are awesome and that they should come write code with us 🙂
AI_WAIFU#2844: Also not just ML, HPC enthusiasts would be a big plus
coozamano#5333: gotcha, i will ask around! I have a genius friend who knows a ton about distributed computing, I'll ask him!
bmk#1476: we're also looking for postgrads who need more publications and would be willing to ~~slave away in the paper mill~~ help us turn our ideas into fully fledged papers
bmk#1476: we have lots of ideas and lots of compute, we're just short on people who can turn that into actual experiments and papers
Louis#0144: Or people who can debug
coozamano#5333: any idea bout doing a DALL-E-Neo
bmk#1476: working on it
EricHallahan#1051: It is already in development.
bmk#1476: soon™
coozamano#5333: omg thats so cool
Louis#0144: We’re working on a lot of visual stuff
coozamano#5333: im mostly a software engineer, so i don't know if I can help in any way
coozamano#5333: pretty good at web scraping, heres a scraper that I built: https://twitter.com/nikita_jerschow/status/1372225506930790400?s=20
EricHallahan#1051: https://www.eleuther.ai/get-involved
bmk#1476: are you good at writing unit tests
bmk#1476: lots of unit tests
coozamano#5333: what language
bmk#1476: python
mkualquiera#3484: snek
EricHallahan#1051: You also don't have to be a :goose: anymore.
coozamano#5333: hmmm, main skill is node.js but is there somewhere where the requirements are written down? I'll take a look
bmk#1476: nvm
bmk#1476: we don't really use nodejs
ethan caballero#6044: Oh Shit!! Has Amanda Askell joined Dario.agi too?!
https://www.linkedin.com/in/amanda-askell/
gwern#1782: ehhhh
AI_WAIFU#2844: how are you guys able to keep track of all these names? I can barely follow the papers.
gwern#1782: https://askell.io/cv/ does 2020-2021 mean that she's left...
gwern#1782: I don't have linkedin, can you screenshot that?
AI_WAIFU#2844: Like 50% of the time I'm like, "who?".
RyanT#5929: Honestly same
ethan caballero#6044: https://cdn.discordapp.com/attachments/729741769738158194/828431861461811230/Screen_Shot_2021-04-04_at_8.52.09_PM.png
RyanT#5929: Going to make a substack that maps out these connections
bmk#1476: the only names you need to know: schmidhuber, yud, shazeer
AI_WAIFU#2844: no just schmidhuber and yud
bmk#1476: no just schmidhuber
bmk#1476: did you know that schmidhuber solved alignment in 1991?
gwern#1782: hm. feb 2021. so since it's april, that implies she's been gone from OA for at least a month
bmk#1476: is OA ded
bmk#1476: y exodus
StellaAthena#3530: Pinned a message.
StellaAthena#3530: I don’t.
cfoster0#4356: AFAICT Ethan has been one of the folks doing detective work
chilli#5665: My understanding is that the OAI exodus is not all to the same company
ethan caballero#6044: Just Paul Christiano & Jacob Jackson. All the rest seem to be to dario.agi
chilli#5665: Do you have a source for this?
chilli#5665: Or just hearsay
kinoc#5731: When should the "E-team" activate (that's y'all...)?
kinoc#5731: If everyone is leaving OA then someone has to fill the gap of AI greatness. E-Team?
TastyBucketOfRice#8796: Hey all, I'm an HPC PhD student focusing on deep learning. I'm interested in contributing, but I'd like a bit more info on what's in the pipeline and what specifically I could be working on.
TastyBucketOfRice#8796: Should I take this to DMs with leadership or discuss here?
EricHallahan#1051: Discuss here, please!
chilli#5665: There's definitely a lot of HPC stuff that needs work :P
EricHallahan#1051: You probably want to look at https://www.eleuther.ai/get-involved to get an idea of what that entails.
StellaAthena#3530: “Leadership” is a loose concept. We are mostly a collective. But people with blue and purple names generally know what’s going on.
StellaAthena#3530: There isn’t a section on HPC there
TastyBucketOfRice#8796: That's great, but I'd like specifics if possible
EricHallahan#1051: Yes there is.
EricHallahan#1051: I added it.
EricHallahan#1051: I said to talk to Sid
TastyBucketOfRice#8796: I have read this, but I'm looking for a more concrete roadmap for what I'd be doing/publishing
TastyBucketOfRice#8796: If one exists, of course
bmk#1476: seems weird to point someone to a page where the only mention of HPC is that we need it and to talk to Sid, lol
EricHallahan#1051: Cool, just want to make sure we aren't treading the same ground again.
StellaAthena#3530: @TastyBucketOfRice We are interested in training very *very* large language models. We recently released 1.3B and 2.7B parameter models, but we are ultimately interested in going after the 175B benchmark that GPT-3 set
StellaAthena#3530: We are working with a cloud computing company and probably realistically have the compute to get there if we build good enough models.
bmk#1476: we plan to have a lot of GPUs, the interconnects aren't the best, how do we make it go brrrr training a big model regardless
TastyBucketOfRice#8796: And you're doing this with DeepSpeed, yes? Is this development more on the model architecture or distributed training?
TastyBucketOfRice#8796: What interconnects and GPUs? I assume you're using NCCL?
StellaAthena#3530: @TastyBucketOfRice Yeah, it’s using DS and based on their Megatron implementation
bmk#1476: A100s; interconnect is regular 10Gb (?) ethernet internode, pcie, nvlink but only for pairs of gpus, i believe
StellaAthena#3530: We are working with A100s in pods of either 4 or 8, depending on what CoreWeave is able to free up the most of.
AI_WAIFU#2844: Model arch is effectively fixed (for now...) Most of the work is going into getting training to scale
TastyBucketOfRice#8796: that interconnect is rough
bmk#1476: yeah unfortunately this is all we have
AI_WAIFU#2844: @bmk Should I ping Sid?
bmk#1476: err sure
bmk#1476: he'd probably be able to give a lot more detail
AI_WAIFU#2844: @Sid probably has the best of grasp of the situation
StellaAthena#3530: @Sid is the person leading this project
TastyBucketOfRice#8796: you may be able to use 1-bit adam (https://www.deepspeed.ai/news/2020/09/08/onebit-adam-blog-post.html) here
StellaAthena#3530: Funny you should say that
StellaAthena#3530: I went and implemented ZeRO 3, and found that it effectively wasn't working
StellaAthena#3530: It's possible we need to do more hparam sweeps or that we haven't tested on scales where it's the best yet.
AI_WAIFU#2844: Ok Sid is offline rn, but stick around and we'll be able to fill you in on the details.
StellaAthena#3530: But so far we have generally been pessimistic about 1-bit adam, CPU-offload, ZeRO-3, etc.
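For context, the features being discussed are config-level switches in DeepSpeed. A sketch of what a ZeRO-3-with-CPU-offload config looks like, with field names per the DeepSpeed docs (exact keys vary by version) and placeholder values:

```python
# Illustrative DeepSpeed config enabling ZeRO stage 3 with CPU offload.
ds_config = {
    "train_micro_batch_size_per_gpu": 4,        # placeholder
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,                              # partition params, grads, and optimizer states
        "offload_optimizer": {"device": "cpu"},
        "offload_param": {"device": "cpu"},
    },
}
```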
StellaAthena#3530: You're right that the interconnect is rough. To an extent it's flexible, in that CoreWeave is a company that is actively growing. We were recently asked if we would prefer the current set-up or full NVLINK but half as many pods, and said we believed the latter would give better results. So that may be in the works
StellaAthena#3530: In terms of publications / results of research, we have a couple goals:
1. If we can get there, we want to release online a 175B+1 parameter trained GPT-3-style language model. If not, we want to train as large a model as we realistically can and release that.
2. Both we and CoreWeave are very interested in distillation (a sketch of the standard objective follows this list). Even if you have a DGX machine you still can't even run inference on it with GPT-3. How small can we get our model? How cheap can we make inference on it (in $, compute, or whatever)?
3. We have a bunch of ideas of things we would like to do with these massive language models. There's an entire category of research that you need oodles of compute to even be able to touch (Scaling laws in particular)
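On the distillation goal above, a minimal sketch of the standard soft-label objective (Hinton et al.); the temperature is a placeholder choice:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between temperature-softened teacher and student
    # distributions, scaled by T^2 as in the original formulation.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
```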
TastyBucketOfRice#8796: Can you expand on this a bit? Is this an accuracy issue or are you having trouble implementing offloading/ZeRO 3?
StellaAthena#3530: Let me go grab the plots
TastyBucketOfRice#8796: It's going to be extremely challenging to scale at all with this interconnect without compression/offload, regardless of pcie/NVLINK
TastyBucketOfRice#8796: That definitely aligns with my research interest. Thanks for the summary!
AI_WAIFU#2844: Also if you're willing to put in the work we have shitloads of compute just sitting around doing nothing. So if you have a specific project in mind bring it up.
bmk#1476: also we're super interested in publishing stuff
TastyBucketOfRice#8796: Great! I have some ideas, but I'd like to get more familiar with the project/core team first
bmk#1476: we're probably the discord server with the most publications
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/828441886992629770/Screenshot_2021-04-02-20-32-06-555_com.android.chrome.png
AI_WAIFU#2844: Yeah hang around here then and scroll through the Projects section to get a feel for who's doing what and what people are interested in.
Sphinx#2092: lol oddly specific flex but okay
bmk#1476: i mean, it's a massive meme flex
AI_WAIFU#2844: They'll get less specific as we make bigger flexes
bmk#1476: "we're literally a bunch of random people on discord and we still managed to get shit done"
Sphinx#2092: speaking of which, how did the Pile do on reviews?
bmk#1476: 1.5/3/4
StellaAthena#3530: 1.5, 3, 4 out of 5
Sphinx#2092: lol wtf, how does some give a 1.5 on a dataset paper
AI_WAIFU#2844: Stuff had to be cut and that had consequences
StellaAthena#3530: @TastyBucketOfRice Does this link open for you: https://wandb.ai/eleutherai/neox/reports/Snapshot-Apr-4-2021-9-32pm--Vmlldzo1ODMwNzc/edit?flasher=&template=snapshot
AI_WAIFU#2844: also reviewer number 1 is an angy barnacle
TastyBucketOfRice#8796: yep, taking a look
bmk#1476: their review was like "the idea to create such a dataset is great, love the initiative, the analysis is thorough, NLP needs more papers like this, however you omitted some details due to page limit. 1.5 strong reject"
bmk#1476: (obv exaggerated but same energy)
Sphinx#2092: lol yeah, some people are ridiculous.
RyanT#5929: Where did you submit the Pile paper?
bmk#1476: acl
Sphinx#2092: Maybe the rebuttal will work.
bmk#1476: yeah hopefully AC takes note
Sphinx#2092: The AC saved me from some reviewers demanding more content from my short paper
Sphinx#2092: and they were like "fam, this is only 4 pages. Not much more can fit here, it's fine."
StellaAthena#3530: @TastyBucketOfRice You're looking at two sets of runs. Grey and Green are the same size model and Red and Orange are the same size model. All are ZeRO 3, the difference between model runs should just be CPU-offload, though I'm not very slick with WandB and am having trouble loading the hparam table
TastyBucketOfRice#8796: Alright, what's the takeaway here? I'm not familiar enough with this model and wandb to glean insights. Is the issue you're facing in allreduce?
StellaAthena#3530: honestly I'm having trouble remembering. It's been three weeks since I've looked at these numbers and it's 10 pm local.
StellaAthena#3530: I remember the takeaway from this plot: it shows ZeRO-3 with CPU offload. Note the flops/s/gpu
https://wandb.ai/eleutherai/neox/groups/n6HSC59hVUhE8BUMPsurvw?workspace=user-stellaathena
StellaAthena#3530: It's an order of magnitude smaller than what we would like to see.
StellaAthena#3530: That's barely hitting 3e12. Meanwhile over here
https://wandb.ai/eleutherai/neox?workspace=user-shivanshupurohit we are getting 4x that and I am pretty sure this run has poor hparams
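Figures like these can be sanity-checked against the rule-of-thumb C ≈ 6·N·D from the scaling-laws literature; every number below is a placeholder, not a setting from these runs:

```python
# Rough flops/s/GPU from observed throughput: ~6 flops per parameter per token.
n_params = 2.7e9          # placeholder model size
samples_per_sec = 12      # placeholder cluster-wide throughput
seq_len = 2048
n_gpus = 8                # placeholder GPU count
flops_per_gpu = 6 * n_params * samples_per_sec * seq_len / n_gpus
print(f"{flops_per_gpu:.1e} flops/s/gpu")
```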
TastyBucketOfRice#8796: I see.
TastyBucketOfRice#8796: And these ~12 samples/sec runs are with ZeRO-2? Or ZeRO-3 without offload?
StellaAthena#3530: One of them is DS without ZeRO, the other is ZeRO-1
StellaAthena#3530: @TastyBucketOfRice Here we go, this is a clean comparison finally: https://wandb.ai/eleutherai/neox/reports/Snapshot-Apr-4-2021-10-3pm--Vmlldzo1ODMxMTE
StellaAthena#3530: oh shit
bmk#1476: ?
StellaAthena#3530: @bmk Y'know how Sid found that the 6-pod GPUs were slower than the 4-pod GPUs and the 8-pod GPUs, even when using one GPU
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/828450291279200285/Screen_Shot_2021-04-04_at_10.05.37_PM.png
StellaAthena#3530: We didn't know about this when we ran these computations and so didn't think to check. But the ~5x slower runs with ZeRO-3 are on 6-GPU pods.
bmk#1476: uhh
bmk#1476: are you saying it might only be 3x slower on a 4 gpu machine?
bmk#1476: also there's a perfectly reasonable explanation why more gpus means slower single gpu performance
StellaAthena#3530: Or even better
bmk#1476: but i don't really see how this matters since it's not going to recover from a 5x handicap just by moving to a different number of gpus
StellaAthena#3530: @bmk CoreWeave's 6-GPU pods are slower than their 4-GPU and 8-GPU pods. I'm not talking about "per GPU" numbers, I'm talking about legit only using one GPU in the pod
bmk#1476: i am talking about single gpu performance too
bmk#1476: of course it gets slower with more gpus attached
bmk#1476: the gpus all have to split up the connections to the cpu
kindiana#1016: really shouldn't tho
StellaAthena#3530: I'm doing a poor job explaining this.
StellaAthena#3530: Ask Sid when he wakes up
StellaAthena#3530: I'm 75% sure he'll be excited to learn our slow Z-3 results were on 6-GPU pods and being compared to 8-GPU pods
bmk#1476: the cpu only has so many pcie channels
EricHallahan#1051: Lanes
kindiana#1016: like, 128 lol
bmk#1476: what cpu do we have?
kindiana#1016: i don't know anyone who builds systems without 16x links to gpus
bmk#1476: me
kindiana#1016: except for miners
kindiana#1016: lol
bmk#1476: Intel consumer chips don't have enough channels
bmk#1476: 24 i think
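The lane arithmetic behind this point, using the commonly cited counts rather than any specific SKU:

```python
consumer_lanes = 24   # typical consumer Intel platform
server_lanes = 128    # typical EPYC/Xeon-class platform
gpus = 4
print(consumer_lanes // gpus)  # 6 lanes per GPU: not even a clean x8 each
print(server_lanes // gpus)    # 32 lanes per GPU: full x16 with room to spare
```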
AI_WAIFU#2844: Intel's retarded for not adding more pcie links
kindiana#1016: well multi-gpu is dead in consumer so 🤷
bmk#1476: no, they're trying to force people to not use commodity hardware for ml lol
kindiana#1016: but both intel and amd server chips have enough channels
bmk#1476: market segmentation
EricHallahan#1051: No, they just don't want people using commodity hardware in general for server stuff.
AI_WAIFU#2844: Like I can't wait for a chip startup to just make some hardware that has like 20x 100Gb Eth links.
AI_WAIFU#2844: Like have half the thing be comms
kindiana#1016: y tho
gwern#1782: I wonder if the new CEO is going to drop a lot of that segmentation to catch up? he talks as if he's going to gore intel's internal sacred cows, like not fabbing for external designs. if intel wants to get back into people's good graces, dropping a lot of the bullshit like PCI lane starvation and non-ECC RAM would help
AI_WAIFU#2844: So you can glue a shitload of them together without paying the nvidia tax
gwern#1782: conceptually, that's kinda what cerebras is, isn't it? absolutely absurd on-chip IO
kindiana#1016: well thats kinda the other direction
kindiana#1016: no off chip comms just one big chip
AI_WAIFU#2844: And they're :smallbrain: and decided that 18GB of sram is enough
gwern#1782: yes, well, they aren't the only people to be wrongfooted by transformers + scaling, so I don't blame them for it
gwern#1782: that's always the hazard of these ASIC startups - you'd better get lucky and the puck be where you started skating towards several years ago
AI_WAIFU#2844: Like they're gonna be good for inference on smaller models, because you can map the NN to the hardware. But that's it basically.
kindiana#1016: 9B is still a pretty big model for inference
gwern#1782: (now, the real question is when did they realize how badly they had screwed up strategically and what did they do about it? cerebras talks so little I have no idea)
kindiana#1016: and their new one on 7nm is going to be even bigger
gwern#1782: nobody is paying like $1m for cerebras chips to do *inference* on, come on
ethan caballero#6044: I stan graphcore IPUs
kindiana#1016: well, its quite good at low latency inference
bmk#1476: thankfully the people paying aren't the people using so it all works out in the end
kindiana#1016: I can see people buying it for that 🤷
kindiana#1016: you can't throw more money to reduce latency otherwise
bmk#1476: imagine having latency on an 18GB model
AI_WAIFU#2844: Like I wouldn't be too worried about it, it's still gonna have its applications. I would just eat the L and figure out how to stick hbm dies and off-chip interconnects on top of that giant plate of silicon.
bmk#1476: HFT companies are gonna be the only ones buying them
AI_WAIFU#2844: Yeah but HFT firms are loaded
bmk#1476: (I'm going to choose to believe that HFT = Highly Fungible Token)
bmk#1476: i mean but not compared to the size of the entire ML industry, especially if you consider expected future potential
AI_WAIFU#2844: Actually it looks like they're not fucking around: https://cerebras.net/
> System IO over 12x standard 100 GbE
kindiana#1016: pretty tiny compared to like... anything
kindiana#1016: pcie 4.0 16x is 256gbps
kindiana#1016: and its got a lot more compute than a gpu lol
AI_WAIFU#2844: Yeah but that's 3x what nvlink can do no?
kindiana#1016: its not really absolute bandwidth that matters, but more bw/flop
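To make the bw/flop point concrete, a back-of-the-envelope comparison; the A100 numbers are the public spec-sheet figures, the rest is unit conversion:

```python
a100_flops = 312e12                   # A100 fp16 tensor peak, flops/s
a100_nvlink_bytes = 600e9             # A100 NVLink total bandwidth, bytes/s
cerebras_io_bytes = 12 * 100e9 / 8    # 12x 100 GbE -> 1.2 Tb/s -> 150 GB/s

print(a100_nvlink_bytes / a100_flops)  # ~1.9e-3 bytes per flop
# Cerebras packs far more compute on chip, so its off-chip bytes-per-flop
# ratio is much lower even though 1.2 Tb/s sounds large in isolation.
```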
AI_WAIFU#2844: Yeah I guess you're right, they've got an absolutely enormous amount of compute there, *maybe* if you did pure mp/pp it would work, but even then.
gwern#1782: https://arxiv.org/abs/2003.11666#cerebras
kindiana#1016: :thonk:
AI_WAIFU#2844: Yeah I think that's just for 1 chip
AI_WAIFU#2844: The real test is always how many you can glue together
AI_WAIFU#2844: You know what I would do if I was a mad lad? I would take that thing and turn every side of it into insanely high speed interconnect, then I would build the thing in such a way that you can just click them into a giant 2d mesh of silicon.
bmk#1476: that sounds like chiplets
bmk#1476: honestly, with the industry's move to chiplets, cerebras doesn't make a lot of sense to me
kindiana#1016: chiplets with more steps :berk:
AI_WAIFU#2844: It's chiplets but yuge
bmk#1476: the smaller you can cut it, the better
bmk#1476: because better yield and also binning
kindiana#1016: I want to see someone do POP with gddr
AI_WAIFU#2844: POP?
kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/828459897809010698/640px-ASIC_2B_Memory_PoP_Schematic.png
kindiana#1016: like what they do in phones lol
kindiana#1016: stack the ram on top of the processor
bmk#1476: is that really meaningfully faster/better than hbm?
kindiana#1016: no
kindiana#1016: but its significantly cheaper
bmk#1476: then why do it?
EricHallahan#1051: You get different packaging.
EricHallahan#1051: And so you have different advantages for each based off of that.
AI_WAIFU#2844: how do you cool that
kindiana#1016: throw it in a vat of novec
kindiana#1016: (or through the bottom)
EricHallahan#1051: It becomes a major engineering problem.
kindiana#1016: the idea is gddr has the lowest cost per gbps by a pretty large margin (2-3x), and if you can achieve sufficient volume + commodity packaging your chips will be cheap too
kindiana#1016: mostly a meme
AI_WAIFU#2844: what if we put the gddr vertically on top of the chip
AI_WAIFU#2844: I've also heard of microfluidic cooling being very effective, but that introduces the risk of your chip having a stroke
kinoc#5731: And you thus get closer to organic nanotech (human biology...)
EricHallahan#1051: I assume that actually is a real engineering consideration and not a joke?
AI_WAIFU#2844: All good memes are postironic: https://asmedigitalcollection.asme.org/electronicpackaging/article-abstract/128/1/38/466015/Numerical-Analysis-of-Blockage-and-Optimization-of?redirectedFrom=fulltext
𓅬 gabriel_syme 𓅬#3220: how well does it work to dump them in the sea?
𓅬 gabriel_syme 𓅬#3220: some data centers are built (or in progress) underwater now right?
EricHallahan#1051: That is a Microsoft thing mostly.
𓅬 gabriel_syme 𓅬#3220: should have access to lower temperatures and also cooling medium
𓅬 gabriel_syme 𓅬#3220: yeah true, it was a MSFT article I read
Louis#0144: Huh?
Louis#0144: Wdym
AI_WAIFU#2844: scroll up
Louis#0144: O
triggerhappygandi#0001: Have you worked with them?
ethan caballero#6044: yes, I used to work at graphcore. :guilty:
triggerhappygandi#0001: :guilty:
triggerhappygandi#0001: Damn
triggerhappygandi#0001: Think you can pull some strings to get us some time on the IPUs?
triggerhappygandi#0001: It will complete the computing trifecta
inox#5400: I'm talking to them next week, what am I supposed to say so they give me the IPUs?
ethan caballero#6044: @inox @triggerhappygandi
I know that Simon Knowles (CTO of graphcore) is a big fan of neural scaling laws research.
Watch first 11 minutes of this video:
https://share.vidyard.com/watch/cU1WtarU53k4gT52TvuKTy
bmk#1476: we dont need more compute
bmk#1476: you can go ask for IPUs once you figure out how to utilize all our current compute
ethan caballero#6044: to scale to AGI, you need more compute.
𓅬 gabriel_syme 𓅬#3220: I wouldn't mind some compute 🙋♂️
bmk#1476: yes, but we'd need, like, 10x more compute than we have rn to drastically change our plans
bmk#1476: otherwise it'd just sit there underutilized
AI_WAIFU#2844: No it would sit there regardless
bmk#1476: possibly
𓅬 gabriel_syme 𓅬#3220: is there a benefit to utilizing your compute in smaller, divergent projects while idle?
AI_WAIFU#2844: Even if all that compute came glued together with 600Gb infiniband we still wouldn't know what to do with it
bmk#1476: are IPUs that hard to program?
bmk#1476: if we could actually get 10x the compute needed to train GPT3 and with adequate interconnects, i'd probably take a month off work to do nothing but focus on making it work, and i'm sure this sentiment is shared by at least a few other people around here lol
AI_WAIFU#2844: I wouldn't know, but I'm going off our experiences with TPUs.
bmk#1476: well, we mostly gave up after google told us that we couldnt get more tpus
bmk#1476: even if we could get 50% efficiency, it would take unreasonably long to train on tpu
AI_WAIFU#2844: Hmm...
AI_WAIFU#2844: that's why there are 8 channels under "Projects"
bmk#1476: yes pls do a project using our compute
bmk#1476: GPU, TPU, your choice
bmk#1476: we have everything: compute, knowledgeable folks, ~~cat~~goosegirls, etc
bmk#1476: all the stuff you need to pump out research
𓅬 gabriel_syme 𓅬#3220: I'm still kind of wary that my research is not really aligned (pun intended) to your current research. So I wouldn't want to impose. I'm getting there though, finding more parallels as time passes by. Although, I do plan to ask you for a small TPU perhaps when the dalle-mtf code is up and running 🙂
bmk#1476: i mean we have way more compute than we know what to do with
bmk#1476: i can get you set up on a gpu pod and leave you to it
neko#5937: no way
bmk#1476: we have used, to date, literally millions of dollars of compute
neko#5937: wow nice
AI_WAIFU#2844: more importantly, we have been sitting on millions in unused compute
neko#5937: lol
bmk#1476: yes that too
bmk#1476: we could be using literally millions more
bmk#1476: pls use our compute and write a paper
neko#5937: i could do that instantly lol
bmk#1476: draft up a proposal
neko#5937: ok what's the boundaries here
bmk#1476: @AI_WAIFU i'm working on a framework to make it super easy to specify tpu experiments
neko#5937: the result must be open source?
bmk#1476: the core component is a set of primitives that make writing python super easy
neko#5937: nvm i should read faq lol
AI_WAIFU#2844: I think that makes 3 of us working on TPU frameworks
bmk#1476: this kinda stuff https://cdn.discordapp.com/attachments/729741769738158194/828491682231222302/unknown.png
bmk#1476: it's not directly a tpu framework
bmk#1476: the goal is to be a general "make python easier to use" framework
bmk#1476: you know that feeling when you develop the right abstraction and it feels like your productivity suddenly went up 10x
𓅬 gabriel_syme 𓅬#3220: the tpu_experiment() stuff you were sharing was really exciting imo
𓅬 gabriel_syme 𓅬#3220: for :smallbrain: like me
bmk#1476: that feeling when something that used to be tedious and hard to make work suddenly becomes trivially easy and understandable
bmk#1476: i got a dose of that from functional programming
bmk#1476: but it's not enough
bmk#1476: I'm hooked now
AI_WAIFU#2844: Ok you're gonna have to walk me through this because I might want to make my thing compatible with your thing
bmk#1476: i want to make a library that lets me easily orchestrate stuff across machines
bmk#1476: it's super early stages rn
kindiana#1016: have you seen fabric?
bmk#1476: I'm basically just implementing a bunch of functions that feel useful to me so i can move mountains with a few keystrokes in the future
kindiana#1016: http://www.fabfile.org/
bmk#1476: no idea what that is
StellaAthena#3530: We currently have the equivalent of around 50 V100s and also another 50 A100s of compute sitting idle.
kindiana#1016: I use it to orchestrate stuff across tpus
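A minimal sketch of what that Fabric usage looks like; hostnames and the command are placeholders:

```python
from fabric import ThreadingGroup

# Run the same command over SSH on several machines in parallel.
hosts = ThreadingGroup("tpu-vm-1", "tpu-vm-2", "tpu-vm-3")
results = hosts.run("python3 train.py --resume", hide=True)
for connection, result in results.items():
    print(connection.host, "exit code:", result.exited)
```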
bmk#1476: kinda like what I'm doing but i want to make my own
kindiana#1016: lol
kindiana#1016: NIH
AI_WAIFU#2844: Ok I need to sleep rn, but you, me, (and Ben?) need to hop in VC and talk this out, because I think there's a lot of overlap/work that's already been done for us
kindiana#1016: I'd be happy to, maybe in 20 hours or so?
kindiana#1016: not sure what timezones y'all are in
AI_WAIFU#2844: Yeah that works for me
bmk#1476: https://gist.github.com/leogao2/0468159c5104281d127fa8d14b86ec2c this is everything i have rn
bmk#1476: well, also i have all the cursed functional stuff
bmk#1476: taken together it's just shy of 1k lines but im leaving that out for now
AI_WAIFU#2844: K I gotta sleep fr
𓅬 gabriel_syme 𓅬#3220: needs more ```>>``` power
bmk#1476: so i found out that my idea had been acausally scooped by someone else almost exactly
bmk#1476: they even use >> too
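For the curious, a toy sketch of the `>>` style being referenced, i.e. overloading `__rshift__` so pipelines read left to right; this is illustrative, not the actual library:

```python
class Pipe:
    """Wrap a callable so that f >> g feeds f's output into g."""
    def __init__(self, fn):
        self.fn = fn

    def __rshift__(self, other):
        return Pipe(lambda *args, **kwargs: other.fn(self.fn(*args, **kwargs)))

    def __call__(self, *args, **kwargs):
        return self.fn(*args, **kwargs)

double = Pipe(lambda x: x * 2)
increment = Pipe(lambda x: x + 1)
print((double >> increment)(10))  # 21
```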
AI_WAIFU#2844: :berk:
bmk#1476: well, ok, their architecture was a bit different and arguably superior to mine
bmk#1476: but still
mkualquiera#3484: Leo in a few months https://cdn.discordapp.com/attachments/729741769738158194/828494306163163186/se-lain-19.webp
bmk#1476: >implying that isn't me rn
triggerhappygandi#0001: Aww come on. We _need_ to have TPU/GPU/IPU trifecta.
triggerhappygandi#0001: It's not about computer
triggerhappygandi#0001: Why is he an underage anime girl
triggerhappygandi#0001: And why is there an anime on server maintenance
mkualquiera#3484: See, if you had watched the anime you would know she is literally god.
triggerhappygandi#0001: Mfw
triggerhappygandi#0001: God works with servers
triggerhappygandi#0001: And has bad cable management
mkualquiera#3484: (well, assuming you also watched at least 3 explanation videos because no one understands the ending)
kindiana#1016: why would that be useful research data?
nz#9710: Have you seen Jaxline @kindiana?
nz#9710: https://github.com/deepmind/jaxline
kindiana#1016: yeah its kinda like pytorch lightning for jax from my understanding
nz#9710: not sure how pytorch lightning works, but jaxline does indeed make it easier to handle experiments (I'm using it to set up mine), it's pretty cool ngl. I know brain has https://github.com/google/CommonLoopUtils but I think jaxline is currently more developed (I may very well be wrong, though)
kindiana#1016: how'd you find these lol
kindiana#1016: CommonLoopUtils is new to me
nz#9710: flax discussions lol
kindiana#1016: ah
nz#9710: https://github.com/google/flax/discussions/1143
nz#9710: looks like clu is under more active development though
kindiana#1016: https://github.com/google/CommonLoopUtils/blob/master/clu/checkpoint.py
kindiana#1016: these are some cool utils
nz#9710: I wish CLU had a couple examples, I'm using the NFNets repo as a guide (which uses jaxline) for now
𓅬 gabriel_syme 𓅬#3220: I wish jaxline had some examples
nz#9710: https://github.com/deepmind/deepmind-research/blob/master/nfnets/experiment.py
𓅬 gabriel_syme 𓅬#3220: oh cool they have there, thanks!
𓅬 gabriel_syme 𓅬#3220: ot: does anyone use a highlight mode for github? is there any, apart from the vscode trick?
𓅬 gabriel_syme 𓅬#3220: like I want to select a word and highlight it everywhere
𓅬 gabriel_syme 𓅬#3220: hmm I guess codespaces will be nice when out
kindiana#1016: control f?
kindiana#1016: lol that's just what i do
𓅬 gabriel_syme 𓅬#3220: 😄
𓅬 gabriel_syme 𓅬#3220: have you tried 1s?
𓅬 gabriel_syme 𓅬#3220: it's wild
𓅬 gabriel_syme 𓅬#3220: the previous link: https://github1s.com/deepmind/deepmind-research/blob/master/nfnets/experiment.py
nz#9710: yea it's great to quickly check out a repo
𓅬 gabriel_syme 𓅬#3220: wild might have been too much lol, it's nice 🙂
andyljones#7746: does anyone have a citation to mind for 'modern industrial ML is too expensive for public sector researchers to keep up with'? i'm poking through the national research cloud stuff, but a paper or three would be better
Sid#2121: Hey Quentin! Great to have you here. Just to correct some things, @bmk listed the compute arch we have *currently* but that probably isn't going to be the same architecture we'll have for the final runs. I haven't got a word on what the architecture is for sure going to be yet but i've made it very clear that nvlink pairs / 10Gbe isn't going to work lol. So far i can say for sure that we're going to have 100Gb+ ethernet with RDMA, and probably 4 GPU nodes with nvlink connection between all the nodes. I'm also trying to push for 8 GPU nodes instead of 4 but, we'll see
Sid#2121: We've tried out 1-bit adam and integrated it into the codebase, but at least in the tests we've run so far, doesn't seem to offer much of a speedup when combined with pipeline parallel and we're not sure why
Sid#2121: I even had to fix deepspeed to get that running since they don't provide integration with pipe parallel out of the box, so it's quite possible i made some dumb mistake.
Sid#2121: Either way, I think I'd personally prefer not to rely on an experimental optimizer for training, but if we can show it'll help performance it's probably sensible to use
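For reference, switching DeepSpeed to 1-bit Adam is a config-level change along these lines; field names follow the DeepSpeed docs of the time and the values are placeholders:

```python
# Illustrative optimizer section of a DeepSpeed config for 1-bit Adam.
ds_config = {
    "optimizer": {
        "type": "OneBitAdam",
        "params": {
            "lr": 1e-4,           # placeholder
            "freeze_step": 1000,  # full-precision warmup steps before compression starts
            "cuda_aware": False,  # True only with a CUDA-aware MPI stack
        },
    },
}
```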
AI_WAIFU#2844: Nothing comes to mind, but maybe try citing OAI on the fact that the compute requirements for SOTA ML experiments keep doubling every 3-6 months
StellaAthena#3530: I’m not sure I agree with the premise
StellaAthena#3530: Are you suggesting that the typical government ML-based data scientist is significantly technologically behind the typical private sector one?
StellaAthena#3530: And that the cause of this is $$$?
andyljones#7746: to be clear, i'm not too fussed about whether you agree coz i can choose someone else to cite
andyljones#7746: but lol yes
andyljones#7746: actually, 'data scientist' idk
StellaAthena#3530: In the US that’s definitely not true.
andyljones#7746: because they don't do much that's technologically advanced in the first place
StellaAthena#3530: The reason I went there was that governments don’t tend to do as much product development
StellaAthena#3530: Can you provide an example that illustrates why you think this?
andyljones#7746: https://hai.stanford.edu/national-research-cloud-joint-letter
> But today, the research prowess that’s powered decades of growth and prosperity is at risk. There are two reasons: public researchers’ lack of access to compute power and the scarcity of meaningful datasets, the two prerequisites for advanced AI research.
TastyBucketOfRice#8796: Ah, that arch is much more manageable.
I'm interested in contributing. What can I start with? What's the process here?
Sid#2121: i guess first step would be to familiarize yourself with the codebase https://github.com/EleutherAI/gpt-neox/ I guess you have some familiarity with deepspeed and such as well?
EricHallahan#1051: I was going to suggest the same.
Sid#2121: I'll add some issues on the github relating to other stuff that needs doing and you can just let me know if you're interested in picking any of them up.
StellaAthena#3530: @Sid is there a way to force CW to give me an 8-GPU pod or do I have to just build and check?
TastyBucketOfRice#8796: Sounds good.
Sid#2121: we don't have any 8 GPU pods with coreweave |
StellaAthena#3530: 4 GPU pods then?
aze#1010: the problem with reproducing DALL-E is finding a suitable dataset right? do we have a viable implementation in PyTorch already?
EricHallahan#1051: We are building a dataset and the code. We have seen glimmers of it working a few times on less diverse datasets, but nothing particularly good in terms of results. It is a very hungry model in terms of compute.
aze#1010: i see
gwern#1782: do we really know that? it's not like you overfit danbooru2020, even
gwern#1782: and the papers for clip/dall-e show pretty smooth scaling with dataset size, iirc
ainoob#9556: hello guys
ainoob#9556: need some help
ainoob#9556: i followed the instructions here
ainoob#9556: what do i pass in the model parameter?
ainoob#9556: python3 main.py --predict --prompt <example_prompt.txt> --gpu_ids <device:GPU:0 device:GPU:1> --model <config_name>
ainoob#9556: i downloaded this folder https://the-eye.eu/public/AI/gptneo-release/GPT3_2-7B/
EricHallahan#1051: What are you trying to accomplish?
ainoob#9556: i want to predict text locally
EricHallahan#1051: Are you trying to fine-tune or just use the pretrained model?
ainoob#9556: use the pretrained
ainoob#9556: and then maybe experiment with finetuning
EricHallahan#1051: I suggest you use the Hugging Face release, which is far easier to get started with.
EricHallahan#1051: https://huggingface.co/eleutherAI/
ainoob#9556: so i just run this cmd on a py file? |
ainoob#9556: from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")
ainoob#9556: its giving me a key error
ainoob#9556: Traceback (most recent call last):
File "hug.py", line 2, in <module>
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 345, in from_pretrained
config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 352, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
KeyError: 'gpt_neo'
cfoster0#4356: You'll need to install `transformers` from the GitHub source, since there isn't a new release yet
ainoob#9556: i installed transformers with pip and its v4.1.1
Sid#2121: @ainoob try this colab notebook https://colab.research.google.com/drive/17MhFnXeHE7ZnLo2vlQ1Htqm03_X1ULqm?usp=sharing#scrollTo=6dy3EEFGKJuR
ainoob#9556: i am trying to setup locally 🙂
alstroemeria313#1694: `pip install git+https://github.com/huggingface/transformers`
ainoob#9556: i think it worked. didnt know pip was accepting git links! |
ainoob#9556: thanks
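Putting the whole exchange together, a minimal end-to-end version of the working setup (source install of `transformers` plus PyTorch), using the same model name as above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")

inputs = tokenizer("EleutherAI has", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```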
Bing Chilling#6390: Hello everyone, just read an article on GPT-neo where it gets very close to GPT-3! A few questions I would like to ask if possible? My task is tweet sentiment classification, and as you know tweets contain slang and the like...
1. Does the model trained on the Pile contain tweet-like data?
2. Can I use transfer learning together with my own data to further train the model?
3. Can I replace the output layer with something like a softmax classification head?
Thanks 🙂
Carl-bot#1536: Welcome to EleutherAI! This is a research-focused discord and is not the best place to get answers to entry-level questions, learn basic machine learning, or get tech support. Novices are welcome to hang out here, but we encourage you to lurk more.
The #communities channel has links to other ML-oriented communities that may be better suited for your question.
bmk#1476: tl;dr we are still 2 entire orders of magnitude away from gpt3
EricHallahan#1051: You can read the preprint of the Pile paper here: https://arxiv.org/abs/2101.00027
Louis#0144: 1,2,3 all yes
EricHallahan#1051: 1?
Louis#0144: common crawl
Louis#0144: common crawl contains parts of twitter
EricHallahan#1051: Ah.
EricHallahan#1051: That makes sense.
Sid#2121: I mean, as quotes in articles maybe sure |
Sid#2121: but not threads or anything
Sid#2121: also @Bing Chilling in general BERT-like models are much better than GPT models for sentiment classification
Sid#2121: I think you'll have better results using a bert model
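A minimal sketch of the BERT-for-sentiment route being recommended; the checkpoint name and label count are placeholder choices:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3  # e.g. negative / neutral / positive
)

# The classification head starts untrained: fine-tune on labeled tweets
# before reading anything into the logits.
inputs = tokenizer("this is a tweet", return_tensors="pt")
logits = model(**inputs).logits
```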
Bing Chilling#6390: I see.. Thanks all for your input! Is Moloch the bot made from GPT-neo? haha
Sorry if these are novice questions; my NLP knowledge was last updated at the Word2Vec/fastText level I used in my research... I am now catching up on these newer ideas.
Sid: I have read that BERT performs no better than LSTM on classification tasks with small training samples (about 50,000 per category)?
rom1504#5008: the eval section there https://huggingface.co/EleutherAI/gpt-neo-2.7B is letting some people think that gpt neo 2.7B is achieving the same results as gpt 3 ; might be worth clarifying
Louis#0144: there are a lot of tasks where LSTMs are still king
Louis#0144: but they are few now
Louis#0144: sentiment analysis is such a task for instance
Louis#0144: and tasks with little data
Louis#0144: anywya
Louis#0144: sorry this is not a server for novice questions
Louis#0144: I recommend Yannic's server
Louis#0144: we have a link in #communities
Louis#0144: you're free to lurk here though
EricHallahan#1051: Also, Moloch does not have any GPT-Neo in him.
Bing Chilling#6390: Got it thank you all!
StellaAthena#3530: **Major** news in the AI world: US Supreme Court rules in favor of Google in Google v. Oracle |
https://www.supremecourt.gov/opinions/20pdf/18-956_d18f.pdf
EricHallahan#1051: No, just major news in general lol
EricHallahan#1051: It made my day.
EricHallahan#1051: I also beat you lol: https://discord.com/channels/729741769192767510/730095596861521970/828685311436783636
EricHallahan#1051: Though Leo brought it to my attention.
bmk#1476: then technically i beat you to it too lol
thenightocean#6100: what will be the consequence of this verdict?
Sid#2121: can someone tl;dr this 60 page mega document for me
Sid#2121: did you download the images??
StellaAthena#3530: My understanding is that the bigger impact would be a ruling against Google. The way Google acted is consistent with how many companies currently act and a ruling in favor of Oracle would be extremely disruptive to the tech industry
Sid#2121: @-Archivist let's take this over to #multimodal
EricHallahan#1051: The ruling effectively affirms the norm that APIs can be copied between software vendors.
StellaAthena#3530: Today's batch of rulings also included giving Alex Jones, a man who is genuinely in competition for Worst American Alive, the middle finger and I am so pleased by this
EricHallahan#1051: Mathworks is probably very disappointed right now, because if it went their way, they would be able to sue GNU Octave or NumPy for copying MATLAB API calls pretty much verbatim.
EricHallahan#1051: And so that makes me happy too.
EricHallahan#1051: (This is surprisingly common practice with scientific software lol)
StellaAthena#3530: Good summary https://twitter.com/amac/status/1379090272760520716?s=20
StellaAthena#3530: For the non-Americans who might not know who Alex Jones is, he is a right wing conspiracy nut job who has made millions of dollars traumatizing people whose kids were murdered in 2012
Daj#7482: b-but the gay frogs!
Daj#7482: lol
StellaAthena#3530: In 2012 a 20 year old brought a gun to an elementary school and murdered 26 people, 20 of whom were six or seven years old. Alex Jones is a media magnate and talk show host who has insisted for the past decade that it never happened, that it was a psy-op by the US government to justify gun regulation, or that it was a false-flag operation carried out by the US military
Daj#7482: Insane he never pulled back on that one
Daj#7482: Even for him
Daj#7482: What a psycho
StellaAthena#3530: He and his devotees have harassed, stalked, and doxed parents of the murdered children for their purported role in covering up what really happened
Daj#7482: I wish we could just enjoy his DMT government alien rants, but no, he also has to be a horrible friggin psycho
StellaAthena#3530: The harassment is bad enough that there’s something like a half dozen people *in jail* for it
Daj#7482: Jesus lmao
Daj#7482: wtf is up with america
Daj#7482: Germany has, like, Drachenlord
Daj#7482: Well I guess the AfD but even they aren't _that_ bad
StellaAthena#3530: In 2016 a woman was sent to jail for sending death threats to one of the parents
bmk#1476: well, 5x larger population
bmk#1476: = 5x more crazies a priori even before you consider the inside view
Daj#7482: I guess
StellaAthena#3530: In 2015 “Mills angrily approached the sister of murdered teacher Victoria Soto—who is regarded as a heroine for her attempt to protect her students from the shooter in the Sandy Hook attack—shoved a photograph in her face, "and began angrily charging that not only did the Sandy Hook tragedy not take place, but that Victoria Soto never existed."”
StellaAthena#3530: Another person had previously been sent to jail for assaulting Victoria Soto’s sister over the same claims
StellaAthena#3530: Just last year someone was sent to jail for publishing private info – presumably about where people lived – and encouraging recipients to go to the homes of parents and harass them
nz#9710: what the fuck |
EricHallahan#1051: Some Americans are messed up.
StellaAthena#3530: So yeah, a group of the victims of this harassment are suing the fuck out of Alex Jones for lying about this on air for years and actively promoting this harassment
bmk#1476: i think this is a symptom of the broader problems of the memetic ecosystem
bmk#1476: these people truly genuinely believe that they're doing the right thing
bmk#1476: well, idk about Jones, but the rest of them
bmk#1476: so something about the memesphere is allowing these harmful memes to spread unchecked
StellaAthena#3530: Oh did I mention he makes a huge amount of money selling pseudoscientific products on his show’s website? It’s a *ride* https://www.infowarsstore.com/
bmk#1476: lmao
StellaAthena#3530: Check out those disclaimers https://cdn.discordapp.com/attachments/729741769738158194/828715897450397696/image0.png
EricHallahan#1051: I :berk: but it actually sad.
gwern#1782: I recall reading an interview/article about one of jones's assistants. they said that jones knew perfectly well that he is peddling 100% BS and that none of it is true (and is often depressed and sad), but that he is too greedy to quit and also feels too trapped by the movement and the embarrassment of quitting and admitting his life is a travesty etc. which is both much more damning and sadder
EricHallahan#1051: Also, you are going to give gandi a heart attack with that image there.
Mechanical / Ben#9604: 😄
EricHallahan#1051: Welcome! It depends what you are asking about. Are you fine-tuning? If you are what hardware are you trying to use?
Mechanical / Ben#9604: screenshot incoming
Mechanical / Ben#9604: nvm, AMD Radeon R9 200 with up to 16 GB and Overclock functionality
Mechanical / Ben#9604: On AMD Phenom 2
Mechanical / Ben#9604: with 16 GB
EricHallahan#1051: If you would like to use the model or fine-tune, I suggest you use Colab.
kinoc#5731: And if you're brave check out https://github.com/Xirider/finetune-gpt2xl |
Mechanical / Ben#9604: I'm kinda new and need a briefing... I'm good at figuring things out but this is completely new
kinoc#5731: But why either/or when you can do both/and for the same price ...
bmk#1476: i would not recommend fine tuning on amd
EricHallahan#1051: AMD hardware is kinda not useful. Unfortunately the research community has standardized around NVIDIA.
bmk#1476: and anyways this isnt the right place to ask about finetuning HF models
EricHallahan#1051: There was never any mention of HF until you brought it up though.
EricHallahan#1051: That is a pretty bold assumption to make, though I would probably make the same one.
Mechanical / Ben#9604: well i must admit, I just said I heard I could run a model and train it, and he asked for specs
EricHallahan#1051: Indeed I answered the question given, not the question implied.
EricHallahan#1051: But I would suggest using HF if you are not using it already.
EricHallahan#1051: It is far more intuitive than our research code.
Mechanical / Ben#9604: I never used a model on my gpu before, I just ran AI DUngeon
Mechanical / Ben#9604: on webclient
EricHallahan#1051: Yeah, the 1.3B model is 5.2 gigabytes.
bmk#1476: this isn't the right place to ask about finetuning models fullstop
EricHallahan#1051: Correct.
EricHallahan#1051: I suggest you use Colab if you are interested and want to invest your time into learning what is involved, but we are unable to really help you further than that unfortunately.
Mechanical / Ben#9604: Whats Colab?
EricHallahan#1051: https://colab.research.google.com/
Mechanical / Ben#9604: I will check that out as well, thanks |
EricHallahan#1051: Good luck!
PM#8434: @EricHallahan thanks for the guidance. Got a pretrained model running in colab. You guys rock, unbelievable to make GPT3-level models public. Thank you
bmk#1476: we don't have a gpt3 level model fyi
bmk#1476: not even close
bmk#1476: our model is just slightly (2x) bigger than gpt2
PM#8434: still. the old GPT2 i got running on a raspberry pi and it's writing my daily blog. even the 348M model is surprisingly good for simple things
AI_WAIFU#2844: @kindiana @bmk are we still on for vc in ~2.5 hours?
bmk#1476: wait what?
bmk#1476: refresh my memory pls
bmk#1476: I'm going to be out for a bit over 2 hours so i might be slightly late
bmk#1476: if i don't show up it's because my bike broke down in the middle of nowhere and i have no reception
bmk#1476: so uh what's the meeting for
kindiana#1016: https://discord.com/channels/729741769192767510/729741769738158194/828492618489004052
bmk#1476: oh that
bmk#1476: can we push it back like an hour?
kindiana#1016: works for me
Ziadjbt78#1939: #website
EricHallahan#1051: Welcome! Is there anything you need help with?
Ziadjbt78#1939: Just loving what you guys are doing
D3MZ#6696: Anyone having trouble with using gpt-neo via huggingface? |
EricHallahan#1051: Why do you ask?
D3MZ#6696: I get this as an error ```ValueError: Unrecognized configuration class <class 'transformers.models.gpt_neo.configuration_gpt_neo.GPTNeoConfig'> for this kind of AutoModel: TFAutoModelForCausalLM.
Model type should be one of BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig.```
EricHallahan#1051: ^
D3MZ#6696: for just running the demo code ```from transformers import pipeline
generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B')
generator("EleutherAI has", do_sample=True, min_length=50)```
EricHallahan#1051: You need to run that.
EricHallahan#1051: (Sorry for the ping)
D3MZ#6696: @EricHallahan I already did, found that solution from a tweet, but same issue.
EricHallahan#1051: Do you have PyTorch installed, and if it is, is it up to date?
D3MZ#6696: ah no, just Tensorflow
EricHallahan#1051: Yeah, you need PyTorch. Hugging Face doesn't have the model written for the TensorFlow backend yet.
D3MZ#6696: thanks so much I'll give that shot.
EricHallahan#1051: Good luck!
StellaAthena#3530: Holy fuck this is hilarious:
> But in a surprise twist, an audit and report released last week found no bias in the algorithm because there was no algorithm to assess in the first place.
>
> “Banjo expressly represented to the Commission that Banjo does not use techniques that meet the industry definition of artificial Intelligence. Banjo indicated they had an agreement to gather data from Twitter, but there was no evidence of any Twitter data incorporated into Live Time,” reads a letter Utah State Auditor John Dougall released last week.
https://venturebeat.com/2021/04/05/government-audit-of-ai-with-ties-to-white-supremacy-finds-no-ai/ |
Louis#0144: https://samdbrice.medium.com/76acc8b5d534
Louis#0144: Let’s submit the pile
Louis#0144: Talk about our data policies
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/828841610707992586/image0.png
Louis#0144: Ye
Louis#0144: Doesn’t say you need to use scipy
RyanT#5929: Form for submission looks p simple too
𓅬 gabriel_syme 𓅬#3220: this is pretty much all of my industry as well. It's my 'turing test' for AEC companies, if there's AI in the title there's no AI in the tool
RyanT#5929: I worked at a startup whose “AI” was a program in spark that essentially counted rows in a huge join
𓅬 gabriel_syme 𓅬#3220: tbf I could have made some cash easily by playing the same game. So many people are becoming smth head of AI in design and all they're running is GAs
𓅬 gabriel_syme 𓅬#3220: oh well, I traded that for doing stuff I actually like (or waste my time with stuff I like lol)
cognomen#6297: the terrifying thing is that this was only found out because the founder had a prior conviction
cognomen#6297: otherwise the state would have put no effort into auditing this thin veneer of an excuse for racial profiling
EstebanSir#2189: oh no
EstebanSir#2189: was the link to the gpt-neo-2.7 model changed?
EstebanSir#2189: the colab notebook i was using (that used transformers) now crashes
EstebanSir#2189: oh ok that was a dumb error
EstebanSir#2189: so hhh anyone using that same notebook, model name changed from gpt_neo_2-7B to gpt-neo-2.7B
EricHallahan#1051: The model never changed. It must have been something left over from before release.
Sid#2121: it did change, we requested a name change. @EstebanSir must have been using the old notebook from suraj's branch before it merged to main |
EricHallahan#1051: That is what I was saying.
EstebanSir#2189: yeah i have, it works fine with that simple change to the notebook
chirp#4545: https://twitter.com/gdb/status/1379481542343360515?s=21
AI_WAIFU#2844: > We’re looking for software engineers to help us build models vastly more capable than GPT-3, CLIP, and DALL-E. We train our models on some of the largest machine learning supercomputers in the world, which requires a team with deep software engineering expertise in order to extract maximum performance.
AI_WAIFU#2844: A couple lines later:
> We’re building safe Artificial General Intelligence (AGI), and ensuring it leads to a good outcome for humans.
AI_WAIFU#2844: :catgirl5:
Aran Komatsuzaki#5714: Reply as "Come build the next GPT-3 with us in EleutherAI" :berk:
Ravna#1831: The only way to guarantee the safety of the language model is to train it specifically on the dataset that didn't pass the filters of OpenAI and Eleuther, such as negative karma reddit posts, fanfics and erotic literature.
gwern#1782: dooo ittttt :chad:
Aran Komatsuzaki#5714: i'll do it unless i get some oppositions here lol
nz#9710: deadass do it
Aran Komatsuzaki#5714: @Daj ?
thenightocean#6100: haha, for realz? That would be amazing!
nz#9710: this is but a fraction of aran's power
inox#5400: starting a twitter feud for engagement?
gwern#1782: I see it more as tweaking OA's nose over being clopenai
Aran Komatsuzaki#5714: i'm going to delete as soon as anyone says "No"
Ravna#1831: It's not even a formal feud. Neither the OpenAI official account nor its CEO's account mentioned anything about this.
Ravna#1831: It's at most some verbal exchanges between OpenAI employees and outside github contributors. |
bmk#1476: i don't think we should start a feud with OA
bmk#1476: our general strategy has been to try and build trust with them
bmk#1476: as for the tweet, it's fine iff you can make it clear that it's in a joking-between-friends sort of way and not us trying to be passive aggressive about OA
Aran Komatsuzaki#5714: @bmk so no action needed?
Aran Komatsuzaki#5714: or add a line like "(it's a joke)"?
Sphinx#2092: (unless...)
bmk#1476: sure yeah add that in a reply
Aran Komatsuzaki#5714: got it 🙂
Aran Komatsuzaki#5714: done
bmk#1476: perfect
bmk#1476: it's complicated
bmk#1476: a lot of people at OA care about the same sorts of things that we do
bmk#1476: sometimes, OA does things we disagree with but because of different world models rather than disagreements over terminal goals
bmk#1476: but also as an organization OA does some completely baffling shit sometimes (msft gpt3 licensing)
Daj#7482: oh come on I look away for two minutes pfft
bmk#1476: lol feel free to correct me if you disagree Connor
Daj#7482: I think it's cringe but eh whatever
Daj#7482: It seems super petty
Daj#7482: but it's in Aran's name so if he wants to post it I think it's fine
Daj#7482: It's funny but only in context, to an outsider it looks petty |
Daj#7482: and almost vindictive
Aran Komatsuzaki#5714: i don't really mind deleting it. i posted only cuz i was encouraged to.
bmk#1476: i think it's fine if it's clearly in a sort of friends poking fun at each other way
bmk#1476: and I'm not normie enough to tell if that's the case
Daj#7482: We are not this close with OpenAI
Daj#7482: Especially not Greg
bmk#1476: ok lol
Daj#7482: This is a bigger PR fail than what we were complaining about last time
bmk#1476: i defer to your judgement
thenightocean#6100: haha, I think I misunderstood, I first thought we were offering Brockman a chance to work with us 😋
Daj#7482: but I need to go cook now, so do as you please
Daj#7482: I don't think the EV- is _huge_
Aran Komatsuzaki#5714: ok i'll delete it then lol
EricHallahan#1051: It didn't pass my PR litmus test.
EricHallahan#1051: Of my gut lol
EricHallahan#1051: ¯\_(ツ)_/¯
Greg#3814: ?
bmk#1476: i think it could have been funny and it's a shame that the public context just isn't there
Greg#3814: https://tenor.com/view/hmmm-thinking-batman-gif-14744673
Aran Komatsuzaki#5714: https://twitter.com/Live_News_Nick/status/1379490392719118339 |
𓅬 gabriel_syme 𓅬#3220: I mean it makes sense no
𓅬 gabriel_syme 𓅬#3220: not to join, to leave lol
ethan caballero#6044: Here's citation:
https://www.bloomberg.com/news/articles/2021-04-06/google-ai-research-manager-samy-bengio-resigns-in-email-to-staff
RyanT#5929: Makes sense, he seemed pretty close to the Ethical AI team that got gutted
𓅬 gabriel_syme 𓅬#3220: yeah exactly and he was treated badly anyways for it (imo at least)
RyanT#5929: Yeah that’s what I’ve heard
𓅬 gabriel_syme 𓅬#3220: obviously nothing compared to the EthicalAI people treatment, but still
ethan caballero#6044: "I am looking forward to my next challenge" - Samy.
Samy to dario.agi confirmed.
gwern#1782: that's the lesser bengio, right
StellaAthena#3530: He's not the one who got the Turing Award
gwern#1782: _files that under 'Bengio The Lesser' in his head_
StellaAthena#3530: I feel weird calling him that given all that he's achived, but yes
gwern#1782: I feel like nominatives shouldn't've gone out of style. why can't we refer to 'Hinton the Upstanding'?
StellaAthena#3530: Reminds me of the Fefferman brothers
bmk#1476: cruel pun
StellaAthena#3530: Robert Fefferman is a highly influential mathematician. He has a named professorship at the University of Chicago, is a member of the national academy of science, a fellow of the AMS, and is (was?) the dean of the college of physical sciences at chicago
StellaAthena#3530: His *brother* is the more accomplished one
gwern#1782: that's why it's perfect! lots of nominatives are like that |
AI_WAIFU#2844: bruh
StellaAthena#3530: *Charles* Fefferman is a Fields Medalist, has a Wolf Prize, and was the youngest person in US history to become a full professor, at the age of 22.
RyanT#5929: It’s incredibly funny
RyanT#5929: I remember looking up the UChicago Fefferman and his brother being the first or second result
RyanT#5929: http://www.physicsmeetsml.org/posts/sem_2021_04_07
RyanT#5929: Might be an interesting talk
RyanT#5929: Unrelated to that, anyone know of good tools for virtual poster layout?
StellaAthena#3530: @RyanT I just use PowerPoint for posters
StellaAthena#3530: Even in-person ones
cfoster0#4356: Eleuther, Pile, and Neo got mentions in this short podcast, and it's honestly one of the best general public-targeted explainers I've heard. <https://rajeevsrinivasan.substack.com/p/episode-23-ai-20-and-the-coming-language>
EricHallahan#1051: (I doubt Connor is up right now.)
gwern#1782: I feel a lot more sympathetic for bengio than gebru or mitchell. gebru delivered her resignation ultimatum and knew what would happen; mitchell likewise knew, or should have known, downloading thousands of files to leak in support of timnit would get her fired by any sane organization. but bengio, whatever his subordinates were doing going off on crusades, appears to have done nothing like that
cfoster0#4356: For the record we've typically had discussions about this ongoing situation in #off-topic
𓅬 gabriel_syme 𓅬#3220: the fact that bengio was treated like this for standing up for his team and subordinates (and from everything I've heard, it wasn't the first time) should tell you anything you need to know really. my outsiders 2c, that are worth next to nothing
Spy#9778: any good pointers on stuff for mixed precision training?
Spy#9778: not libraries but stuff about how to do mixed precision training without tanking my performance
bmk#1476: :ptsd:
chilli#5665: just use `amp`?
chilli#5665: (if you're using pytorch)
Spy#9778: nah I'm using jax |
Spy#9778: but I can look at how amp makes decisions I guess
Spy#9778: if that is available somewhere
chilli#5665: are you on TPUs or GPUs?
Spy#9778: GPU
chilli#5665: I think you're OOL
Spy#9778: I'm actually looking for stuff along the lines of comparisons of the effect of making various stuff fp16 vs full precision
Spy#9778: e.g. some layers vs all, adam moments, gradients
Spy#9778: not a library
Spy#9778: papers/blog posts I'd imagine
chilli#5665: Do you need real fp16?
chilli#5665: or do you just want to simulate for the purposes of testing?
Spy#9778: simulate is fine
Spy#9778: I mostly just want to learn about it rn
chilli#5665: https://leimao.github.io/article/Neural-Networks-Quantization/
chilli#5665: that's the resource I looked at
chilli#5665: haha
Spy#9778: ah that looks great
Spy#9778: thanks
inox#5400: David C Page referenced this paper in the "How to Train Your ResNet" blog posts but you probably already found it https://arxiv.org/abs/1710.03740
Spy#9778: nope! |
Spy#9778: thanks
kindiana#1016: you can also look at haiku/flax examples
kindiana#1016: most of them include mixed precision
chilli#5665: does it work on GPU?
kindiana#1016: yeah
chilli#5665: oh interesting
kindiana#1016: its the same principle
chilli#5665: is it a recent addition?
kindiana#1016: but you might have to replace bf16 with float16
chilli#5665: smerity had a comment a couple months back about Jax not working with fp16
kindiana#1016: hrm
chilli#5665: and was complainig about it
kindiana#1016: well I've never personally used it
kindiana#1016: so maybe it doesn't work
kindiana#1016: lol
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/829197971454164992/unknown.png
Spy#9778: that thread has a pointer to this: https://github.com/google/flax/blob/master/flax/optim/dynamic_scale.py#L38
Spy#9778: which is pretty good to know about
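A minimal sketch of the dynamic loss-scaling idea behind that helper (not flax's actual API; the loss function and the scale-update constants are illustrative):

```python
import jax
import jax.numpy as jnp

def scaled_grad_step(loss_fn, params, batch, scale):
    # Scale the loss up so small fp16 gradients don't underflow to zero...
    grads = jax.grad(lambda p: loss_fn(p, batch) * scale)(params)
    # ...then scale the gradients back down before the optimizer sees them.
    grads = jax.tree_util.tree_map(lambda g: g / scale, grads)
    leaves = jax.tree_util.tree_leaves(grads)
    finite = jnp.all(jnp.array([jnp.all(jnp.isfinite(g)) for g in leaves]))
    # On overflow the caller should skip this update and halve the scale;
    # otherwise grow the scale slowly to use as much of fp16's range as possible.
    new_scale = jnp.where(finite, scale * 1.001, scale * 0.5)
    return grads, finite, new_scale
```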
chilli#5665: pytrees are actually a great invention
Spy#9778: I agree |
Spy#9778: the nest stuff in the tf source code was nice but it wasn't really user facing
Spy#9778: the universality of it in jax is really nice
chilli#5665: well, it wouldn't really even be that hard to make it user facing in TF
chilli#5665: haha
chilli#5665: I might add pytrees everywhere in pytorch
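For anyone following along, a pytree is just any nested structure of standard containers whose array leaves JAX will map over in one call (toy example):

```python
import jax
import jax.numpy as jnp

params = {
    "encoder": {"w": jnp.ones((2, 2)), "b": jnp.zeros(2)},
    "head": [jnp.ones(3), jnp.ones(1)],
}
# One call touches every array leaf, however deeply the dict/list is nested.
halved = jax.tree_util.tree_map(lambda x: x / 2, params)
```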
spirit-from-germany#1488: I am with putty ssh working on an instance and can run my script, but once I disconnect, it gets also shut down .... I tried to use "screen", a tool for exactly that, but it doesnt work ... any ideas what I could do? 🙂
Spy#9778: hmm
Spy#9778: @spirit-from-germany screen or tmux should work fine there
Spy#9778: are you making sure to detach from screen in a way that doesn't terminate it?
Spy#9778: if you just want to run a one off script you can just use nohup
Spy#9778: or disown
spirit-from-germany#1488: nohup/disown works now 🙂
chris_myzel#9645: @rowbot 2.7B via 🤗 /Transformers downloaded around 10.5 GB
rowbot#6655: surprisingly small
chris_myzel#9645: one could say around 4 bytes per parameter 🤖
rowbot#6655: one could also say 32 bits per parameter
chris_myzel#9645: which brings us to the question: will we need 700 GB of memory to work with 175B, and how do we accomplish that?
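Spelling out that arithmetic (fp32 weights only; optimizer state and activations come on top):

```python
BYTES_PER_PARAM_FP32 = 4
for n_params in (2.7e9, 175e9):
    gb = n_params * BYTES_PER_PARAM_FP32 / 1e9
    print(f"{n_params / 1e9:.1f}B params -> ~{gb:.0f} GB in fp32")
# 2.7B params -> ~11 GB in fp32
# 175.0B params -> ~700 GB in fp32
```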
Zaidovski#4800: Hello guys i'm a new datascientist here, happy to be part of this.
Zaidovski#4800: Well, I'm passionate about machine learning and deep learning. I'm working at a company, and that's where I discovered GPT-Neo, which uses Transformers (Attention Is All You Need). I'm so happy to be here and can't wait to share ideas and maybe solve some problems ^^
Daj#7482: Welcome! Feel free to take a look around and read our FAQ linked in #rules
Vova Zakharov#2625: hey everyone, is that legit? https://gapt.ai/
Daj#7482: lol
Daj#7482: That's my only comment
Daj#7482: ¯\_(ツ)_/¯
Daj#7482: They say
> We also commit to financially support EleutherAI, which is a collective of researchers working on open source AI research because we are basing ourselves on their latest models (GPT-Neo). Moreover, we want to see other even more powerful models appear.
For the record, we have no idea who tf this is or what they're doing lol
Deleted User#0000: i boosted server twice
Deleted User#0000: you guys are cool
Daj#7482: Thanks!
Daj#7482: We really appreciate the kind words and support people have shown ❤️
Vova Zakharov#2625: Damn I was hoping I somehow missed your release. Or did I?
Daj#7482: We've released a 1.3B and 2.7B model
Daj#7482: Those are fully legit and even integrated into HF already
Vova Zakharov#2625: HF?
Daj#7482: Hugging Face
Daj#7482: They are kinda the defacto NLP library makers
Daj#7482: Make using NLP models super easy
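A minimal example of that integration (the pipeline API and model id are real; the prompt and sampling settings are just illustrative):

```python
from transformers import pipeline

# Downloads ~10 GB of weights on first use.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")
out = generator("EleutherAI is", max_length=50, do_sample=True)
print(out[0]["generated_text"])
```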
Vova Zakharov#2625: 2.7B must not be really comparable to GPT-3’s 175B though? |
Vova Zakharov#2625: What is your subjective impression?
Daj#7482: Subjectively, it's surprisingly far better than GPT2 imo
Daj#7482: and really good at code
Daj#7482: But yeah of course a far cry from 175B
Daj#7482: We had actually just had these models laying around on our harddrives for months
Daj#7482: we didn't expect people to care this much lol
Daj#7482: @Serge "Objective" evaluation in NLP is hard to impossible
Daj#7482: So my gut feeling is as stated a few posts up
Daj#7482: "Significantly better than GPT2, unusually good at code, far cry from 175B"
Serge#0241: Nice
Serge#0241: "unusually good at code" where can I find those cool implementations and examples, like code generation?
Serge#0241: Again, sorry for the newb questions
Daj#7482: We don't offer implementations or track them tbh
Daj#7482: You can try the colab notebook or the HF implementation yourself
Daj#7482: But we don't do tech support really
Serge#0241: Hmm... Probably need something like awesome-gpt-neo repo
Daj#7482: a few people on twitter have been posting results from neo
Daj#7482: but yeah we are researchers and try to focus on research/development
Daj#7482: and leave downstream uses of our models to whoever wants to use them
Serge#0241: You could offer paid support. Easy revenue stream |
Serge#0241: More budget for research/improvements
Daj#7482: Luckily, we don't need any money atm
Serge#0241: Ah I see
Daj#7482: It would just be a distraction
Serge#0241: Okay, well thanks anyway for being a truly open version of OpenAI
Vova Zakharov#2625: Good. Perhaps you’ll need less than 175B to be as good as 3 then. I wonder if we can see it happen this year 😉
Daj#7482: We're going for 175B either way
Daj#7482: or probably 200B
Sid#2121: https://twitter.com/wenquai/status/1378416315044614150 here's a cool example of some code generation
mkualquiera#3484: ~~Go for 1T, connor, you know you want to 😈 ~~
Daj#7482: of course I _want_ to
Daj#7482: But we've exhausted all legally permissible methods of pushing @Sid
mkualquiera#3484: @Sid think of the goosegirls
Sid#2121: i beg to differ, I don't have a personal sushi chef yet
Daj#7482: we have yangda
Serge#0241: prompt = "Below are the commands to the cooking robot for making the best homemade sushi:\n1. "
mkualquiera#3484: Sid knows that after making 1T we don't need him anymore. You could just write "Below is the code for making aligned AGI"
Daj#7482: I'm pretty sure he's betting on it
Daj#7482: https://www.youtube.com/watch?v=X75b0kZoeqY this is sid
apolinario#3539: Hi everyone, I have a question regarding the potential for a future GPT-NeoX 175B-parameter model.
When that's achieved, what's the perspective regarding running/fine-tuning it with regular tools? Is it possible the 175B-parameter model will just be posted @ Hugging Face && be fine-tuned over Google Colab, or will it also need creativity on how to make that accessible?
Daj#7482: We don't know how running a 175B model in practice will look
Daj#7482: But it sure as hell won't run on colab lmao
Daj#7482: It probably won't be runnable at anything approaching acceptable speeds until either a few more cranks of Moore's Law happen or you buy some serious high end hardware
kindiana#1016: looks like we are on track for some more moore's law cycles before the model is released so it shouldn't be an issue :berk:
apolinario#3539: Got it. Thanks, makes sense.
So probably a "regular person" will still need some less accessible (compared to a Colab) intermediaries between the model & running it. But if I understand correctly the goal is that basically it still becomes accessible to "whoever manages to" as opposed to "exclusively Microsoft" as it is with GPT-3
Daj#7482: Yep, you got it
Daj#7482: Maybe some people will find ways to make it more efficient or distill it or whatever
Daj#7482: But at the end of the day if you wanna run high end software, you need high end hardware, not much we can do about that
mkualquiera#3484: Taking a guess I think the easiest way to use it yourself would be to get the compute from a cloud high end GPU platform
mkualquiera#3484: I also guess there will be companies like CW focusing on making an API that people can use (much like OAI)
Daj#7482: Overblown title but overall one of the better articles I've seen I guess
Daj#7482: man it's so weird to see news outlets write about oneself
Daj#7482: Really triggers that Gell-Mann Amnesia
Daj#7482: it's not just a google translate rehash of another one, so yeah :berk:
mkualquiera#3484: Actually I saw a very similar article a couple of days ago :thonk:
surajpatil#3994: Hi @bmk , it seems that the model file is missing from the 350M model on the hub https://huggingface.co/EleutherAI/gpt-neo-350M/tree/main
Daj#7482: Search medium.com for gpt neo if you wanna see some real trash :berk:
Louis#0144: Oof |
TransformerOptimus#1007: Hi, I'm the founder of graphgrail.com. We build intelligent chatbots (conversational AI) that speak like a human: empathic, proactive, and consistent. We use transformers, but with our own custom architecture with smart ranging, which is our research.
Here I want to find AI researchers and exchange ideas. We also plan to train GPT-Neo to become a chatbot model and test how it behaves, maybe better than our model
Daj#7482: @TransformerOptimus Hello there, no advertising, please
Louis#0144: If you are here to recruit employees, we don’t take kindly to that
TransformerOptimus#1007: not recruiting; I can get rid of the link if it's not accepted
Louis#0144: Connor is admin, not me
Daj#7482: I'll let it slide
Daj#7482: But yeah nothing personal, it's just policy to avoid spam
Daj#7482: Welcome
TransformerOptimus#1007: Done, deleted the link
Wes#9024: Hi, I just joined. I took a look at the projects channels and I'm disappointed to see I can't find pinned messages explaining what the project is about, what are the milestones. It would be cool to know where the projects are at.
Wes#9024: By the way, I'm a recent MSc graduate in Maths & ML from France. I have a strong focus on audio signal processing & ML; combinatorial optimization (especially on graphs); links between ML and cognitive sciences or biology. Looking forward to sharing with you!
EricHallahan#1051: Welcome! Have you visited our website? It is linked in #rules.
Wes#9024: Thanks 🙂
Yes, I found you from there, actually, and even though some projects are introduced (such as gpt-neo, the-pile), it seems like most of them are not
Sid#2121: most are still works in progress
Louis#0144: How come torch’s determinant function isn’t differentiable wtf
Louis#0144: Det is smooth
Louis#0144: Cool we have a few audio projects
Louis#0144: You might be interested |
EricHallahan#1051: Woah, hold your horses
Sparkette#4342: Is there somewhere I can experiment with this without needing a top-of-the-line GPU? Will it work on Colab?
EricHallahan#1051: Yes, the website actually has more projects in the source, they are just not easily accessible because we haven't written up descriptions yet.
Sid#2121: here you go 🙂 https://colab.research.google.com/drive/17MhFnXeHE7ZnLo2vlQ1Htqm03_X1ULqm#scrollTo=6dy3EEFGKJuR
Sparkette#4342: cool, thanks 😄
Louis#0144: 🐴 👌
Louis#0144: Holding
Wes#9024: Sure, I just would have loved to know what each and every project corresponds to without having to ask in every channel 😅
EricHallahan#1051: Yes, we are currently looking to expand in the direction of audio, just nothing has been formally kicked off yet.
Sid#2121: most channels have headers/google docs, and if they don't, you should annoy the people most active in the channel to do so lol. tbh even i don't know what's happening with a couple of the projects
Louis#0144: Yes it has
Louis#0144: lol
EricHallahan#1051: What?
Louis#0144: @RyanT started
Louis#0144: Stella approved it
EricHallahan#1051: What is he doing?
Louis#0144: Equivariant audio transformers
Louis#0144: Using fractals
mkualquiera#3484: Same as the CLIP+neo thing but with wav2vec could be interesting :thonk:
Louis#0144: So spectral equivariance |
EricHallahan#1051: Oh, that is the equivariance thing, not entirely an audio thing IMO.
Louis#0144: It’s mostly audio
Louis#0144: 🤷♂️
EricHallahan#1051: ¯\_(ツ)_/¯
Louis#0144: I agree
Wes#9024: So I guess I'll take a look at the "equivariance" channel 😉 Thanks for the guidance
cfoster0#4356: There are like 3 or 4 different audio things that may be happening lol
Louis#0144: Quick! Pop quiz ! What’s happening in #carp
EricHallahan#1051: Okay, I'm kicking it off now: the **S**pecial **P**rojects group on **SP**eech and **S**ignal **P**rocessing
mkualquiera#3484: other than a bunch of suffering?
Louis#0144: Honestly
Louis#0144: Hi Suraj
EricHallahan#1051: (Or something like that.)
surajpatil#3994: small tip, if you use the latest stable release of `Transformers` you should observe almost over 3x speed-up when generating longer sequences 😉
Louis#0144: I saw yeah
Louis#0144: Wild
EricHallahan#1051: Yes, we are very happy to see that made release.
Sid#2121: oh nice, let me update this
Louis#0144: Now if we could generate longer sequences without NaN
Sid#2121: NaNs? |
Louis#0144: Yes
Louis#0144: (Not to single you out suraj dw, stas is already fixing it)
alstroemeria313#1694: Um? It worked for me?
alstroemeria313#1694: I think I used logdet actually though
Louis#0144: Oh ok
alstroemeria313#1694: Did you try it
Louis#0144: I will later
alstroemeria313#1694: The docs just warn that double backward is 'unstable' when the matrix is singular
Louis#0144: What are u using det for
alstroemeria313#1694: I was using logdet twice in the calculation of the KL divergence between two multivariate Gaussians with given means and covariance matrices
Louis#0144: O ok
alstroemeria313#1694: What are you using it for
Louis#0144: Geodesics
Louis#0144: Using torch to solve DEs...
alstroemeria313#1694: Oh
alstroemeria313#1694: I know nothing about that ^^;;
alstroemeria313#1694: But I was backwarding through it and this was working
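A quick sanity check of that claim (minimal sketch; for A = 2I the gradient of logdet is (A^{-1})^T = 0.5 I):

```python
import torch

A = 2 * torch.eye(3)
A.requires_grad_(True)
torch.logdet(A).backward()
print(A.grad)  # 0.5 on the diagonal, i.e. (A^{-1}).T
```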
triggerhappygandi#0001: Is this a thing?
triggerhappygandi#0001: I saw jax tutorial on solving the wave equation. Looked gimmicky
Louis#0144: It’s straight forward |
mkualquiera#3484: use torch to build artificial goose intelligence, use artificial goose intelligence to solve DE
Louis#0144: That too
EricHallahan#1051: Would `geoopt` be useful to you?
Louis#0144: Nah I was just experimenting
triggerhappygandi#0001: @Louis can you show some code
Louis#0144: Don’t need a real api
triggerhappygandi#0001: To solve a complex de
Wes#9024: I took a look at it, the "code generation" aspect isn't really performant, to me 🤔 anyone with the same feedback?
triggerhappygandi#0001: Competent code generating LMs are still some time away
Louis#0144: ```print("hello world")```
Louis#0144: How’s that
Louis#0144: There’s tutorials online
Louis#0144: I’m trying to get it working myself
Wes#9024: Definitely 😉 It's still good to evaluate how far it is!
EricHallahan#1051: I consider Microsoft to be at the forefront of this kind of stuff, and they are using *LSTMs* still. Their code completion is shockingly good.
Louis#0144: LSTMs are poggers
triggerhappygandi#0001: It's not the same as writing full code itself
triggerhappygandi#0001: Davinci is impressive, but the code isn't quite executable by itself.
EricHallahan#1051: Yes, of course. But in terms of production-level models, IntelliSense/IntelliCode rocks.
triggerhappygandi#0001: Is that something native to vscode? |
EricHallahan#1051: Visual Studio and VS Code
triggerhappygandi#0001: I've seen services like Kite which are very expensive
EricHallahan#1051: You can tell those developers put a lot of effort into it because it obviously is a benefit to the developers themselves.
nickdza#1656: @EricHallahan @StellaAthena thank you for pointing me to the huggingface transformer. It's exactly what we were looking for and really amazing!
AbyssalDragon#6260: Hello!
Wes#9024: Anyone having issue using the 2.7B model? It entirely fills the session's RAM and crashes it when loading
EricHallahan#1051: Welcome! Anything you are looking for in particular?
AbyssalDragon#6260: A place to hang out with people that are better at machine learning than me.
AbyssalDragon#6260: So yeah I'll probably be lurking for a bit.
EricHallahan#1051: You can absolutely do that, :lurkmoar:.
chris_myzel#9645: think this the guy claiming it https://twitter.com/DemangeJeremy
EricHallahan#1051: Uhh... Yeah, that was from one of his tweets, yes.
nz#9710: where's the money lebowski
EricHallahan#1051: Eh, we really don't care too much right now.
chris_myzel#9645: if you have decent amounts of RAM and use the pre-trained model you don't need a GPU to get generated code from gpt-neo
Wes#9024: For the record, using sessions with no GPU/TPU gives you 25GB of RAM instead of 12.
EricHallahan#1051: Yes, they are just very slow in comparison.
EricHallahan#1051: Though CPU instances are great for debugging.
RyanT#5929: Wondering if there’s an opening for homeopathic LLMs
RyanT#5929: Where instead of distilling knowledge you dilute |
RyanT#5929: And tell people the model works better that way
mkualquiera#3484: reminds me of the gpt4 guy
Daj#7482: You could make a funny :morelayers: meme out of this I feel
Daj#7482: "By diluting the information with more parameters, the model becomes stronger :morelayers: "
zphang#7252: Interesting: GPT-3 didn't have any summarization results
bmk#1476: maybe because evaluation sucks
zphang#7252: when has that ever gotten in the way of a summarization paper :p
gwern#1782: can't you get those from the learning to summarize paper?
zphang#7252: those are tuned, right?
zphang#7252: anyway, this came up because I saw a paper with the following summarization tasks: arXiv, pub med, and patents
gwern#1782: but they presumably include baseline with zero finetuning
zphang#7252: oh you're right it looks like they have zero-shot setups too
EricHallahan#1051: Yes, we know about gapt.ai
lab#1636: Are there any workgroup looking to re-implement DALL-E on top of gpt-neo?
EricHallahan#1051: Not on top of, but more in parallel.
lab#1636: Where can I contribute bits :sadge:
cfoster0#4356: Eventually #multimodal, although there's other prep work going on rn. There's also a sister Discord that's doing DALL-E replication
cfoster0#4356: Can we get a link to their server in #communities?
lab#1636: 👍
lab#1636: oh dang how did I miss the topic on multimodal |
Louis#0144: 😮
Louis#0144: i did not know of this
Louis#0144: wtf
Imperishable_NEET#1969: Not sure where to ask, but would it be possible to create 2D animation from 3D scenes using GANs like style transfer? To turn 3D animation from say, MMD into something like anime with its 2D non-euclidean perspective tricks? Asking @gwern.
Louis#0144: dont tag gwern unless you know them please
EricHallahan#1051: This really isn't a good place to ask about this kind of stuff.
StellaAthena#3530: We haven’t done any GAN stuff, though some people who hang out here do
Daj#7482: Anything is possible with enough GPUs and ~~slaves~~ grad students
StellaAthena#3530: Anything is possible with enough GPUs and ~~slaves~~ ~~grad students~~ random people from the internet
Imperishable_NEET#1969: Alright sorry. Was just asking gwern because he's done this kind of stuff before.
StellaAthena#3530: No worries.
Daj#7482: Anything is possible with enough G~~PUs and slav~~e~~s grad stud~~e~~nt~~s~~ random peopl~~e~~ from the internet~~
Imperishable_NEET#1969: It's an interesting concept wrt. media synthesis, though. I could see a service being made that lets you create an anime-esque scene from a storyboard you put together posing models in MMD or Blender.
StellaAthena#3530: This seems like a good moment to remind all you lurkers that we underutilize our compute to the point of wasting the equivalent of hundreds of thousands of dollars of compute a month. We are very strictly bottlenecked by man-hours and not at all by compute.

If you're a random person interested in doing something that really isn't that much work but requires absurd amounts of compute.... we are very plausibly the best place in the world for you to go to do the project. We will happily give you the compute so long as you open source the model and let people who want to collaborate with you do so. That's more or less the requirements.
Don’t know how to write a paper? No problem! We have people who will happily provide advice and mentorship, and teach you how to write papers and do research.
Daj#7482: Pinned a message.
Spy#9778: oh huh |
Spy#9778: I've kinda been vibing here since I figured you guys had everything that needed doing done
Spy#9778: is there a "good first issue" tag on a github or something
Daj#7482: ~~fwiw I expect super intelligent AGI to happen so soon it'll be much easier to just build that and tell it to make your degenerate self insert hentai than custom building GANs~~
Daj#7482: There are a few things depending on your interests and level of skills. But this is also a call for new projects since we have more compute than projects that can use it
mgostIH#0245: Are you down for some Pokemon AI :ultrazucc:
Spy#9778: I'm a NLP phd student but my distributed computing skills are 0 so I don't think leading a project would be a good idea
StellaAthena#3530: Nominally yes. In practice we are not always the best at tagging things well. If you share a bit about your background I can point you in the right direction
Spy#9778: uh
Spy#9778: idk I use TF/jax and know a bit of pytorch
Spy#9778: I do language modeling stuff
StellaAthena#3530: We can sponsor basically any language modeling project.
StellaAthena#3530: That’s an exaggeration, but not by much
nz#9710: No computer vision?
Spy#9778: hmm I'll keep that in mind
Spy#9778: most of my projects have been chosen around the premise that I will be doing them on 2 GPUs
StellaAthena#3530: Ping me if you have a project that could use 12 A100s. I think we currently have 24 sitting idle
Daj#7482: We can atm support additional sturdy Project Managers that can scope, organize, implement and see through worthwhile experiments
Daj#7482: We have the resources to support more such PMs that wanna do something cool
Spy#9778: Oh do you guys have dedicated hardware? I thought everything was done using cloud time
StellaAthena#3530: We have a lot of TPU access but also dedicated GPUs |
StellaAthena#3530: Yeah 100%. If you are good at project management / scoping plz DM me because I am not and have been pretending to PM four projects simultaneously.
Daj#7482: Could've fooled me :berk:
EricHallahan#1051: I launched the **S**pecial **P**rojects Group on **Sp**eech and **S**ignal **P**rocessing today, because there are too many projects with speech/signal processing in mind that a few of us would like to work on.
Daj#7482: Great, throw a gdoc together, gather some other people to work with, and we'll give you a channel and hardware
EricHallahan#1051: I need to put together a project proposal sooner rather than later.
StellaAthena#3530: Hmmm
StellaAthena#3530: @Daj I took a stab at what a project summary card might look like. What should be changed do you think:
https://discord.com/channels/729741769192767510/785968841301426216/828328030714724363
bmk#1476: hopefully in a few weeks speedrun and eval harness other stuff I'm involved in will be done and so i can focus efforts on whatever else that needs help
Daj#7482: Ah yes, this is why you are our not-PM-PM, a template like this is a good way to get people actually started
Daj#7482: Looks good to me, it's minimalist but that's probably to its benefit
StellaAthena#3530: I’m thinking it might make sense to have something like this pinned in each channel, and then an issue in each GitHub repo that describes the overall projected and gets updated weekly. These issues can be cards in an org-wide Kanban even!
StellaAthena#3530: Ooo I like this idea
Daj#7482: Disclaimer: I will not use org-wide kanban
Daj#7482: Nothing personal I'm just retarded
StellaAthena#3530: Rude
Daj#7482: lol
Daj#7482: I will try to use it if we make it a norm
StellaAthena#3530: I’m thinking of this as being a five minute a week task for the project lead. Basically weekend status reports |
Daj#7482: I know I'm (mostly) joking
Daj#7482: (and constantly neglect my kanbans at work)
bmk#1476: i like giving each project the freedom to use whatever pm structure they want
StellaAthena#3530: I was good at it until my boss moved ours to MSFT Teams
Daj#7482: I'm interested in experimenting with different organisation strategies though 100%
Daj#7482: Successful and failed attempts at Eleuther have taught me a ton about organisation
StellaAthena#3530: Me too, but I think a very minimal amount of consistency across projects will go a long ways towards making things more accessible to newcomers and help smooth things along
StellaAthena#3530: I’m not saying “this is how you must organize your project” I’m saying “write a weekly status update on GitHub”
Daj#7482: The "having a person in charge" is a big step forward in legibility
Daj#7482: I'm down for trying this
Daj#7482: If we hate it we can just awkwardly stop doing it as we do with most failed organisational experiments lol
StellaAthena#3530: Lol
Sid#2121: I don't think it's too much to ask of a project that they have at least a google doc and report progress, I approve 👍
bmk#1476: how long does the report have to be?
Sid#2121: one
bmk#1476: ok
Sid#2121: approximately one long
bmk#1476: how many is that in metric
Daj#7482: two
nz#9710: yes |
bmk#1476: perfect
Sid#2121: no
bmk#1476: [automatically typecasting bool to int] ok so 0 awesome
mkualquiera#3484: Type theory people: :nooo:
bmk#1476: is a long basically a less furry version of furlong?
mkualquiera#3484: please don't bring the offtopic virus to general
bmk#1476: lol k
bmk#1476: so uh how long tho
StellaAthena#3530: This week we kicked off the scaling laws for fine-tuning project. Stella, Leo, Charles, and Preetham are the starting core team.
Most of our work has to wait for the suite of GPT-Neo models to be trained, but we can set up the fine-tuning process while we wait. Preetham took a stab at implementing it, and Stella gave him feedback on his PR.
StellaAthena#3530: I’m not asking you to write a report. I’m asking you to do the bare minimum so that someone can get up to speed on a project without having to be personally briefed by a project lead.
StellaAthena#3530: If it takes you more than 5 minutes you’re doing it wrong
StellaAthena#3530: Does that answer your question @bmk
bmk#1476: I'll try to write summaries but I'll probably forget very often and also most of them will be "nothing happened"
EricHallahan#1051: Also, it really helps to know what needs to be updated on the website.
Daj#7482: _looks at my citations list_ :guilty:
Daj#7482: but yeah lol I think this is a good idea
StellaAthena#3530: @bmk You will be reminded every weekend to do it
Spy#9778: have you guys done anything with any of the long context transformers? |
Spy#9778: reformer, compressive transformer etc
Spy#9778: I've consistently been surprised none of the big pretrained models make use of stuff like that
Spy#9778: I assume it's because people have done the experiments and found they don't work at that scale or something but since that's not very publishable it's hard to say
EricHallahan#1051: Have you read the FAQ?
Carl-bot#1536:
Spy#9778: you mean this? https://cdn.discordapp.com/attachments/729741769738158194/829457391274950777/screenshot.png
EricHallahan#1051: Yeah.
EricHallahan#1051: It is pretty poorly worded and needs an update.
Spy#9778: I didn't even mean the organization necessarily, I'm more curious if anyone has any idea why those models haven't been having more of an effect on large scale pretraining
Spy#9778: there's sorta hypothesis 1: they suck, and hypothesis 2: they don't suck and google has a big pretrained compressive transformer
EricHallahan#1051: It is because "they kinda suck."
Spy#9778: sucks
bmk#1476: they kinda suck, they're a lot of work to make right, and we don't have enough engineering effort to make it work
cognomen#6297: a "why not..." section would be nice
cognomen#6297: beyond DMoE which is explained already
Spy#9778: just to be clear I wasn't trying to ask anyone to do anything
Spy#9778: I was just looking for more info on the topic
Spy#9778: since I was really shocked when gpt3 turned out to almost be a vanilla transformer
Sid#2121: there's been a lot of discussion of various architectures in #research as they've been released. Lucidrains normally has a repo on them. But the tldr is, they either a) don't save that much compute with short context windows or b) don't perform as well as vanilla attention on language tasks
EricHallahan#1051: I need to get Aran and Lucid to take a look at that to improve it.
Aran Komatsuzaki#5714: yeah usually what happens is that he wants to build something and i say that it's not worth building it for giving only marginal improvement
lab#1636: can we track ur git commits and use gpt to summarize for the report :thonk:
bmk#1476: won't work because my commits are all "fix shit again (again)"
lab#1636: what if it tracks the diff
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/829461545536585778/Screenshot_2021-04-07-15-03-46-209_com.github.android.png
lab#1636: let me try with the diff over the weekends and get back :goose:
Daj#7482: @StellaAthena Suggestion to add to that project template: A "Things we could use help with" line
Wes#9024: That was not enough to run the 2.7B model on the colab link. Is there another colab to play with gpt-neo? ^^"
zack#6238: i have one for 1.3B, you can swap it out with 2.7B if you're not tryna generate anything too long https://colab.research.google.com/drive/1KDNsA0EpofIMEpd64hJCpxGhpa2lEOsi?usp=sharing
EricHallahan#1051: ¯\_(ツ)_/¯
StellaAthena#3530: Continuing the theme of improving documentation, I was thinking of creating commands users can enter to pull up info about our projects. I'm thinking `!projects` for a list of currently active projects with pointers as to who to speak to about joining them, and `!releases` for a list of papers, models, and datasets we've released.
mkualquiera#3484: make it show an embed
mkualquiera#3484: like what shows up when you do !battle
StellaAthena#3530: For example, typing !releases might trigger
Carl-bot#1536:
mkualquiera#3484: yeah that
Daj#7482: nice
StellaAthena#3530: There's a 1000 char limit per box, hence the "see more" for the papers
EricHallahan#1051: Why can we not just put it on the website?
StellaAthena#3530: It is on the website. This is for people in discord |
StellaAthena#3530: The key to accessibility is to write things down everywhere
𓅬 gabriel_syme 𓅬#3220: "it's down there some place, let me have another look"
𓅬 gabriel_syme 𓅬#3220: man I wish I was doing some LM work and fun projects with you all, if anyone is lurking here take that chance. It's not common you find one like it
waz#1466: Hi all,
I am a novice and it would be helpful if someone could share a guide or steps on how to install this on my windows pc
Carl-bot#1536: Welcome to EleutherAI! This is a research-focused discord and is not the best place to get answers to entry-level questions, learn basic machine learning, or get tech support. Novices are welcome to hang out here, but we encourage you to lurk more.
The #communities channel has links to other ML-oriented communities that may be better suited for your question.
EricHallahan#1051: Just use this notebook in Google Colab. Google's servers are likely far more powerful than the computer you would like to install it on.
https://colab.research.google.com/drive/17MhFnXeHE7ZnLo2vlQ1Htqm03_X1ULqm
EricHallahan#1051: We cannot really help you any further than that, sorry.
Exocamp#8255: Wait, is there any difference between the models on Colab and the ones on HuggingFace?
Exocamp#8255: Sorry if I didn't see that somewhere, also kinda new to EleutherAI
Exocamp#8255: Wouldn't think so, right?
bmk#1476: nope they're the same
Exocamp#8255: Same number of parameters and all.
Exocamp#8255: I see, thank you
Exocamp#8255: I imagine I would have more "flexibility"/ideas with text generation tho on Colab haha, I'll look at that more
Exocamp#8255: You guys ever thought about meeting up with AI Dungeon when the larger parameter models are fully trained? I actually first heard of you guys on their Discord lmao
bmk#1476: we don't really get involved in what other people use our models for tbh |
bmk#1476: our job is to put the models there and do the things we want to do with the models
EstebanSir#2189: hey, you guys wouldn't say that AWS lambda is good for machine learning, right? i mean, can you load a model and leave it in memory between requests? it sounds against the concept of lambda
bmk#1476: that being said, wauthethird does hang out here
Exocamp#8255: Ah, cool!
AI_WAIFU#2844: @bmk do we have the pile in jsonl?
bmk#1476: yes
bmk#1476: in fact that's the preferred format
AI_WAIFU#2844: I want to re-tokenize it with a context lengths of 1MB. Packing documents together if need be. Do you foresee anything going wrong with me trying to do that?
kindiana#1016: give it a shot if you want
kindiana#1016: but I got significantly worse results when training on pile in 2k chunks, where longer documents are placed in sequential chunks in the same batch
kindiana#1016: like a 1-2% increase in loss
bmk#1476: well, i think you might have a hard time training that
bmk#1476: but i mean if you think you can make it work go for it i guess
AI_WAIFU#2844: I want to do a run with a small batch size and all local attention.
bmk#1476: ah
kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/829545283728834620/unknown.png
kindiana#1016: correlated batches are no good lol
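A rough sketch of the packing step being discussed (not the project's actual tooling; `tokenizer.encode` is assumed to return a list of token ids). Note that chunks from one long document come out adjacent, so they would still need shuffling downstream to avoid exactly the correlated-batch problem above:

```python
import json

def pack_documents(jsonl_path, tokenizer, chunk_len):
    """Stream fixed-length token chunks, concatenating documents as needed."""
    buffer = []
    with open(jsonl_path) as f:
        for line in f:
            buffer.extend(tokenizer.encode(json.loads(line)["text"]))
            while len(buffer) >= chunk_len:
                yield buffer[:chunk_len]
                buffer = buffer[chunk_len:]
```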
EricHallahan#1051: I expect that we will re-release all the smaller models with better shuffling at some point.
Louis#0144: @bmk we have the same shit for our model
Louis#0144: are u sure the val set is shuffled |
bmk#1476: uhhhh
kindiana#1016: ?
kindiana#1016: Wdym
bmk#1476: idk
Louis#0144: part of the dataset we're using
Louis#0144: we dont know if its shuffled
Louis#0144: leo and I discussed this last week
Louis#0144: and he said it was
Louis#0144: but Im not sure
bmk#1476: pile is shuffled doc level
bmk#1476: no idea about the clip stuff
kindiana#1016: Idk what data you are using
Louis#0144: o
kindiana#1016: Is it jsonl?
Louis#0144: yeah
kindiana#1016: Or tdrecords
Louis#0144: jsonl
Louis#0144: do u have a shuffle script
kindiana#1016: Only for chunked and packed tfrecords
Louis#0144: shit |
Louis#0144: ok
Louis#0144: I'll write a shuffle script tmrw I guess
kindiana#1016: You might be able to just slap a shuffle iterator in front of your data loader
kindiana#1016: Idk how it works in pytorch
Louis#0144: even if its an iterable dataset?
Louis#0144: how does that work
Louis#0144: lol
Louis#0144: i have a feeling it wouldnt
kindiana#1016: You can have a rolling shuffle buffer
kindiana#1016: Tf does it
Louis#0144: ohhh
Louis#0144: true
Louis#0144: hm
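A minimal rolling shuffle buffer of the kind described (a sketch, not TF's or PyTorch's actual implementation; buffer size trades memory for shuffle quality):

```python
import random

def shuffle_buffer(stream, buffer_size, seed=0):
    rng = random.Random(seed)
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) >= buffer_size:
            # Yield a random element; the next stream item takes its slot.
            yield buf.pop(rng.randrange(len(buf)))
    while buf:  # drain what's left at end of stream
        yield buf.pop(rng.randrange(len(buf)))
```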
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/829555858253938728/unknown.png
𓅬 gabriel_syme 𓅬#3220: such a sad truth it feels like
jiawei lin#6900: did anybody encounter this warnning huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
jiawei lin#6900: when i set the TOKENIZERS_PARALLELISM,it doesn't work |
Daj#7482: I just told you we are not a tech support discord, we are not Hugging Face
mgostIH#0245: Hugging Goosegirls :goosegirl:
Daj#7482: sit in the corner
jiawei lin#6900: sorry ,i saw this Development channel for GPT-Neo. If you have questions, head to #general .
Daj#7482: Questions relevant to Eleuther
Daj#7482: You're using Hugging Face code, as far as I can tell
jiawei lin#6900: ok,i know
EstebanSir#2189: It’s almost like.. A bitter lesson ;)
𓅬 gabriel_syme 𓅬#3220: yeah lol thought of that the minute I wrote it 🙂
𓅬 gabriel_syme 𓅬#3220: it's a bit more sad than bitter to me, but same idea yea
StellaAthena#3530: I just heard the weirdest question on a work call ever
StellaAthena#3530: My girlfriend is on a work call and someone interrupted the speaker to say “Janet... this is a really personal question but have you ever been kidnapped? You use the phrase “proof of life” an awful lot...”
Louis#0144: OMG
Louis#0144: That’s notifying
Louis#0144: Mortifying
Daj#7482: wtf, I don't even know how to parse why he thought that was an appropriate thing to ask
Daj#7482: Is this some kind of meta-threat?
bmk#1476: wat
StellaAthena#3530: I have no idea
bmk#1476: this format is perfect for memes |
"this is a really personal question but have you ever been a member of a labour union? you use the phrase 'proof of work' a lot"
nz#9710: (isn't the other star needed for starboard?)
mkualquiera#3484: wrong star guys :hap:
StellaAthena#3530: “This is a really personal question, but have you ever been face down in a gutter? You use the phrase “proof of liquor” a lot”
Louis#0144: Already tweeted it
mkualquiera#3484: ok that's a bit out of context LOL
bmk#1476: @Louis no pls delet
bmk#1476: at least don't tag me
Louis#0144: LMAO
Louis#0144: ok ok
bmk#1476: I'm serious
Louis#0144: I did it
Louis#0144: https://twitter.com/lcastricato/status/1380181625569435649?s=21
Louis#0144: @StellaAthena @bmk @Daj I have a bunch of people from my lab interested in the compute reserves over the summer for computational creativity stuff
Louis#0144: like 3 or 4 people now
Louis#0144: how do we want to organize this
Louis#0144: I also have someone from UPenn interested in using it for poetry stuff
Daj#7482: They join the server and talk to us about their project and we take it from there?
Louis#0144: ah ok |
Louis#0144: so you want to do one off type projects
Louis#0144: sg
Daj#7482: What were you envisioning?
Louis#0144: a small computational creativity subgroup
Louis#0144: with like a set of proposals to work towards
Daj#7482: Sure
Daj#7482: I still assume they wanna do some kinds of concrete projects
Louis#0144: yeah
nz#9710: keep some available, tomorrow I'll present a proposal too :berk:
Louis#0144: we have more than enough compute to go around
Daj#7482: We're trying to be a bit better with organizing projects so it's easier to show newbies what we're working on
Daj#7482: So yeah, concrete proposals (they can just be a single page gdoc or whatever) is encouraged
StellaAthena#3530: @Daj Maybe it would be helpful to create a template or something? Or at least examples
Daj#7482: Yea probably a good idea. Wouldn't make it mandatory but having some examples to work from would probably help
Daj#7482: I wrote a one page summary for #deleted-channel
StellaAthena#3530: Oh definitely not. My intention is to make it illustrative, so people have some idea of what kinds of things we would like to know
thepok#1770: the current transformers look at a fixed-length prefix, right? couldn't we add a system where the neural net learns which words of the prefix to keep (up to a maximal length) and which to forget? this way the attention could concentrate on important words and ignore "fillers" that don't add semantics and are only there for syntax
thepok#1770: i hope someone understands my broken english 😦
mkualquiera#3484: that's how transformers work already?
thepok#1770: lol realy? |
mkualquiera#3484: yes
mkualquiera#3484: They use "attention heads" to determine what is important to pay attention to and what isn't
thepok#1770: yes but only in the 2000 token window
bmk#1476: are you German? lol
thepok#1770: yes
thepok#1770: 😉
bmk#1476: lol nett ("nice")
thepok#1770: you too?
thepok#1770: many germans here?
Daj#7482: Nein, keine Deutschen hier ("No, no Germans here")
thepok#1770: we need a german pile ;D
StellaAthena#3530: The window is for computational reasons. It quickly becomes infeasible as you get a larger window
thepok#1770: yes thats why i would like to fill it with only important tokens
CRG#8707: <https://arxiv.org/abs/1905.07799>
cfoster0#4356: https://openreview.net/forum?id=ZVBtN6B_6i7
thepok#1770: i think i made an error - i was thinking of an additional notepad for the transformer, where it can note important words, like names of main characters... when writing a looong book
thepok#1770: this could fill first half of atentionspan
thepok#1770: @cfoster0 thx that paper sounds like that
thepok#1770: i guess its not better than normal transformers, or else you guys would use it :/
cfoster0#4356: What was the magic number that TPUs like for tensor sizes? Was it 256? |
AI_WAIFU#2844: I think it was 128 someone correct me
nz#9710: 128 IIRC
cfoster0#4356: Thank you both 👍
nz#9710: https://github.com/google/jax/blob/master/cloud_tpu_colabs/README.md#padding
nz#9710: (one dimension a multiple of 8, the other of 128)
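Illustrating that padding rule (sketch; the shapes are arbitrary). Rounding dimensions up yourself avoids the TPU silently padding, and thus wasting, the tensor for you:

```python
def round_up(x, multiple):
    return ((x + multiple - 1) // multiple) * multiple

rows, cols = 300, 500
print(round_up(rows, 8), round_up(cols, 128))  # 304 512
```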
zphang#7252: *2^7 is the most magically powerful number*
cfoster0#4356: Ah ok. Funky
StellaAthena#3530: The pseudoscientists aren't even pretending anymore. Surely the most hardcore biological essentialist would admit that you can't predict **names** from faces
https://www.forbes.com/sites/thomasbrewster/2021/04/08/a-2-billion-government-surveillance-lab-created-tech-that-guesses-your-name-by-simply-looking-at-your-face/?sh=551eeb7276b1
StellaAthena#3530: https://tenor.com/view/all-the-idiots-idiot-villages-idiot-worlds-you-stand-alone-michael-scott-gif-14598259
StellaAthena#3530: Unless this is a deliberate fraud to take the US military's money, in which case *touche*
RyanT#5929: Almost definitely something like phrenology + name statistics by race
mkualquiera#3484: why not just predict age + race and be done with it
mkualquiera#3484: you get a better model and no bs involved
EricHallahan#1051: Like I think that it is okay from the usage of "lets all laugh at how bad it is" entertainment value, but any serious usage is insanely unethical.
mkualquiera#3484: plus phrenology is pseudoscience in itself
RyanT#5929: That’s why I said phrenology specifically haha
gwern#1782: it's not a dorian gray effect (reminder: priming is bullshit! anytime anyone tells you that there's a quirky environmental influence on stable personal traits, it's usually wrong!), it's a clan/lineage/ethnicity/SES correlate -_- and where do they think they're going to get in searching millions of faces when you only get like 80% max out of 5 faces (20% so 4x)
AI_WAIFU#2844: You say that, but names cluster pretty well by race/gender, and both of those can be fairly reliably determined from images. So doing better than random isn't hard. |
gwern#1782: indeed, the real question is whether you can do it in same-sex siblings. my prediction is that all of the predictive power there will disappear, because the dorian gray story is BS and it's all family-level and higher clustering. the names assigned at birth merely reflect generic predictions, and don't change anything
gwern#1782: (you might get a small signal from people changing their names to match their circumstances, but it'd be way too small to detect imo.)
thepok#1770: people with smart or upper-class names get preferential treatment at school and will look more educated...
thepok#1770: later in life... maybe the system can pick up on this...
thepok#1770: purple hair -> jennifer, classic hair -> maria
EricHallahan#1051: It is known that self-esteem is directly correlated with having an alphabetically earlier surname.
gwern#1782: pygmalion effect doesn't exist to any appreciable degree, so that one is already excluded
StellaAthena#3530: Most of the Jennifer’s I know are highly driven and successful Asian women, while the only white Marias I know are queer and very socially deviant. One dropped out of college to become a prostitute. So YMMV by locale / sample.
gwern#1782: the jennifer I suppose you could explain by conformism/assimilation ("I will name my child the *most american girl name possible* and tiger-mom her to success!"), but I dunno about 'maria'... maria for me is associated heavily with hispanics. maybe that's what's going on there
AI_WAIFU#2844: also we should take this to #off-topic
StellaAthena#3530: Doing better than random isn’t hard, but doing better than random doesn’t mean it works. I agree with Gwern that a minimally informed baseline will perform on-par with this if not better. I wouldn’t even expect you to have to go as far as siblings tbh.... I bet if you condition on gender, race, and age the performance completely disappears. I would have more confidence in an algorithm that samples randomly from the name distribution conditional on age/race/gender than I would in this algorithm.
gwern#1782: oh no way, gender/race/age is not remotely enough. there are *huge* SES gradients in names
gwern#1782: even 'race' is a highly lossy control variable. think about say Irish descendants vs english or dutch
StellaAthena#3530: I said white, but I guess by US standards I should have said white / non-Hispanic. Yes, Maria is highly correlated with being Hispanic in the US which is in turn highly correlated with social conservativism. That said, being someone I know is highly correlated with being atheistic, queer, and socially deviant so it’s not like I can make any particular claims about my own observations, even with the standard caveats about anecdotes
mgostIH#0245: > I would have more confidence in an algorithm that samples randomly from the name distribution conditional on age/race/gender than I would in this algorithm.
Why? I'd expected a decent NN trying to predict this would at the very least form the same process
StellaAthena#3530: @gwern’s response to my comment is correct, and country of origin / country of ancestry and SES are very important too. But to answer your intended question, I believe that there are relatively simple controls that give optimal performance and I do not expect that a NN will necessarily achieve optimal performance.
gwern#1782: (imagine a user staring at a panel of 5 faces trying to guess which one is 'Seamus'. well, it's probably the one on the left with the freckles, really pale skin, and bright red hair, and not all the brown/black-haired dudes on the right...)
mgostIH#0245: I think that those controls can lead to a very good (compared to random) performance very easily, but I don't think they necessarily best a good NN
StellaAthena#3530: Yeah, *mea culpa* |
mgostIH#0245: I remember a paper Gwern linked some time ago about RNNs achieving bayesian optimality on the task they are trained on
StellaAthena#3530: I think that the kinds of controls we are discussing not only beat a NN but are *pretty much optimal*
gwern#1782: it's in the external links of /Backstop yes
mgostIH#0245: Sure it's a difference between theory and practice, but still may be possible
mgostIH#0245: But this seems like a well informed guess, I'd like to see experiments
mgostIH#0245: Who knows maybe the chin size has some weird relation to it
gwern#1782: anyway, that's why sibling comparisons are so good. if there's anything to the dorian gray causal story, siblings will show the effect powerfully, with great variance reduction, and with *all* of these confounds like race/ethnicity/family SES already controlled away. you can spend your life doing dozens of dubious confounded studies, or you can do 1 good sibling study and kill it dead
mgostIH#0245: Basically my idea is that more data shouldn't hurt the prediction performance for a model that behaves well
gwern#1782: (as indeed so many causal stories have been slain by sibling comparisons. all of your prescription meds, smoking, etc etc - oops, disappear once you compare siblings. oh well!)
mgostIH#0245: Going from image of a face to just a list of attributes you picked with a well informed decision may cut data that would've been useful
AI_WAIFU#2844: Realistically, how much text does a human consume between birth and becoming an adult?
AI_WAIFU#2844: Speech counts
bmk#1476: I'm guessing you want an upper bound?
AI_WAIFU#2844: Yep
AI_WAIFU#2844: I keep finding people parroting 30,000 words per day but that sounds like bullshit.
EricHallahan#1051: Does an image count for 16x16 words?
Louis#0144: honestly 10k atleast
Ward#1738: At an upper bound I think you can go with 100,000 words a day (one novel). People speak on average about 16,000 words a day, suggesting they also listen to about 16,000 words of speech. Add in audiobooks, podcasts, etc., then reading - you might reach 100,000 words a day (but it would be hard to do this day in and day out). So this gets you to 36.5 million words per year, and over 50 years about 1.8 billion. But this would be near the upper end of what is possible.
AI_WAIFU#2844: You mean 36.5 million words per year and ~~18~~1.8 billion right?
Ward#1738: oops - you are correct |
AI_WAIFU#2844: So realistically we're looking at < 1 billion words or 2GB worth of tokens
AI_WAIFU#2844: + some some IRL experience
Ward#1738: I have tracked myself over several months and I can consume roughly 50,000 words per day of reading and audio (audiobooks/podcasts/YouTube), and I would say I am not pushing it to the maximum.
Ward#1738: not including talking to people
Ward#1738: but from birth to adult (say 20 years old) I would guess maybe 33,000 words a day (less when really young, but then increasing over time) might get you up to around 250 million words by the time you hit 20.
AI_WAIFU#2844: Ok, so we're effectively training our LMs on somewhere between 100 and 1000 lifetimes worth of text.
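Rough numbers behind that range (GPT-3's ~300B training tokens are from its paper; the lifetime word counts are the bounds discussed above, treating tokens and words as roughly interchangeable):

```python
tokens_trained = 300e9  # GPT-3 was trained on ~300B tokens
for lifetime_words in (250e6, 1e9, 1.8e9):
    lifetimes = tokens_trained / lifetime_words
    print(f"{lifetime_words / 1e9:.2f}B words -> ~{lifetimes:.0f} lifetimes")
# 0.25B words -> ~1200 lifetimes
# 1.00B words -> ~300 lifetimes
# 1.80B words -> ~167 lifetimes
```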
Ward#1738: and wait till we get to GPT-4, 5 and 6 🙂
AI_WAIFU#2844: So either humans are *really* leveraging multimodal, our NNs suck, or our current models are massive overkill.
kindiana#1016: I think NNs suck is pretty uncontroversial lol
AI_WAIFU#2844: I'm coining the term "data overhang", we have orders of magnitude more data than we need to build AGI.
bmk#1476: well, evolution did bake a lot of bits of optimization into us
AI_WAIFU#2844: Yeah but that's capped at 700MBs
bmk#1476: 700MB is All You Need
AI_WAIFU#2844: And most of that isn't dedicated to intelligence.
kindiana#1016: enwik9 is all you need
Ward#1738: yes, I agree with bmk that our brains have evolved for a long time to be efficient at processing data
StellaAthena#3530: Audio processing rate in words per minute depends on the language
StellaAthena#3530: English speakers process far fewer wpm than Spanish speakers
AI_WAIFU#2844: Yeah but that's gonna be at most a small multiplicative factor, and we see general intelligence in english speakers anyways.
Sai#6350: Hey everyone! I'm an undergraduate CS and Math major who has built some DL models for research, class, internships, etc. I'm particularly interested in ML theory, graph neural networks, bioinformatics/cheminformatics/'omics and any combination thereof, but the other areas are pretty cool too. Are the Alphafold2 team and the Math group (the group-equivariance and TDA one?) still looking for people to help?
Also, is there an introductions channel? I feel bad interrupting convos
EricHallahan#1051: No, you are in the right place. Welcome!
Sai#6350: Hi! Ok, that's good
Ward#1738: But you might want to also jump into the alphafold channel.
StellaAthena#3530: Welcome! I'm the lead mathematician of our non-existent mathematics group
StellaAthena#3530: the people doing #alphafold are dope
EricHallahan#1051: I think equivariance is more useful.
EricHallahan#1051: Though it depends on skillset.
bmk#1476: i think eval harness is more useful
EricHallahan#1051: I need to contribute more to that.
bmk#1476: we must feed the meat grinder!
EricHallahan#1051: But #carp needs to be completed.
gwern#1782: 700MB suddenly seems like a lot when you look at how you can fit resnets in like 1000 bits
Sai#6350: Oh haha, should I join the equivariance channel then? Is there an active project?
Sai#6350: Ok, thanks!
EricHallahan#1051: It involves fractal geometry. More exactly, it is determining how to make convolutional networks equivariant to data with fractal structure.
RyanT#5929: Equivariance is an active topic. I'll be making a big push on finishing some stuff this weekend, had a crunch for my day job that pulled some of my time
RyanT#5929: @Aran Komatsuzaki Do you know much about what the standard is currently for graph encoding?
RyanT#5929: i.e. Graph VAE, transformers, etc |
Aran Komatsuzaki#5714: graph is @chilli's area, so i hope he can answer that 🙂
kinoc#5731: I did a rough "2 video streams while awake during a normal day" estimate (one for video and one for everything else) and came to around 300 TB for 18-20 years. So somewhere in there you go from breathing to being given keys and general rights.
kinoc#5731: So for AI_WAIFU's 2GB of tokens estimate, I'd go to 4 GB to add sensory and narrative annotations.
kinoc#5731: Does anyone have an alignment oriented text corpus / dataset laying around?
StellaAthena#3530: @kinoc What kind of dataset do you have in mind? “Alignment oriented” could mean a lot of things
kinoc#5731: The text from forums, discussions, blog posts (etc) on alignment issues.
kinoc#5731: Or all text I should scrape or point to if I was to make the Align-a-pedia.
kinoc#5731: which I guess is the subset of the Pile that would pertain to normal alignment discussions.
chilli#5665: no great standard haha
chilli#5665: it really depends on your domain
Louis#0144: I still really like gates
chilli#5665: like, more so than most other areas, graph embedding is (currently) super specific to your problem domain
chilli#5665: gated-graph neural networks?
chilli#5665: In some areas it's still better to not use GNNs at all, haha: https://ogb.stanford.edu/docs/leader_graphprop/
RyanT#5929: Do you know if that changes if you restrict the graphs to be trees? I haven’t been able to find too much there except for the tree transformer
StellaAthena#3530: For example, is this indicating that Davinci never produced things to the first two prompts that you would consider “failures”? https://cdn.discordapp.com/attachments/729741769738158194/830099513108332604/image0.png
StellaAthena#3530: @abudhkar when you fill in a box with “N/A” what does that mean?
abudhkar#0417: (We know this is very subjective) but it means that all of davinci's generations were relevant to the provided description, and no obvious failures jumped out in our runs
StellaAthena#3530: Gotcha
p4bs#8973: @abudhkar - have you run a similar test between GPT-2 (1.5B) and GPT-3? - my experience (also subjective) is that the quality of output to prompts, for example, is quite similar |
zpeng#2458: ah, we didn’t compare it to the largest gpt-2. We tried the smaller gpt-2 earlier (117M and 345M parameters) (before GPT-Neo and getting access to GPT-3) and it didn’t give us good results, but we didn’t run it many times. Here are a couple of examples on the same task: https://cdn.discordapp.com/attachments/729741769738158194/830129765784092703/Screen_Shot_2021-04-09_at_1.08.17_PM.png
chilli#5665: Not really - well, for trees people start to use more "transformer-like" variants
chilli#5665: although I feel like there should be something much better you could use there
mgostIH#0245: Some papers turn the trees into RPN (Reverse Polish Notation) and apply the transformer on the text like that
mgostIH#0245: I thought GNNs would be better since you know the graph structure already, but apparently there aren't clearly winning approaches; maybe transformers scale much better for parallel computation and can afford far more compute
chilli#5665: how do you do RPN - where do you specify the number of children?
mgostIH#0245: Idk maybe it doesn't work for any tree or you can only do it with Polish Notation, but the Lample paper about symbolic integration did this
mgostIH#0245: I suppose you could inject some "operation" nodes that tell how many children there are
chilli#5665: hmmm, seems a bit awkward for trees with actual features
chilli#5665: haha
chilli#5665: although perhaps doable
mgostIH#0245: Like if I have a tree
  1
 / \
2   3
You could maybe store it as
1 C2 2 3
mgostIH#0245: or something similar
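As a minimal sketch (hypothetical format: pre-order with explicit child-count tokens, matching the example above):
```python
def serialize(node):
    # Pre-order walk over (value, [children]) tuples, emitting the value
    # followed by a C<n> child-count token whenever the node has children.
    value, children = node
    tokens = [str(value)]
    if children:
        tokens.append(f"C{len(children)}")
    for child in children:
        tokens.extend(serialize(child))
    return tokens

tree = (1, [(2, []), (3, [])])
print(" ".join(serialize(tree)))  # 1 C2 2 3
```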
cfoster0#4356: @Napolean_Solo https://huggingface.co/EleutherAI/gpt-neo-1.3B |
Napolean_Solo#2907: Is there a higher parameter one?
Napolean_Solo#2907: 2.7B one I think
cfoster0#4356: https://huggingface.co/EleutherAI/gpt-neo-2.7B
Napolean_Solo#2907: Ah nice thank you
Napolean_Solo#2907: Can we fine tune the model?
Napolean_Solo#2907: @cfoster0
cfoster0#4356: If you want! Whatever floats your boat
Napolean_Solo#2907: Are there any instructions to do so? That would be great!
Napolean_Solo#2907: A medium article or something that can give an idea
triggerhappygandi#0001: There is a colab
triggerhappygandi#0001: In the repo
cfoster0#4356: There was also some code floating around in a few places
cfoster0#4356: From other folks
Napolean_Solo#2907: I see
EricHallahan#1051: I'm looking for it
Napolean_Solo#2907: Thanks kind stranger!
EricHallahan#1051: https://github.com/Xirider/finetune-gpt2xl
Napolean_Solo#2907: Ah nice
Napolean_Solo#2907: Let me star it
Napolean_Solo#2907: Done! |
Napolean_Solo#2907: Any idea how much it would cost?
Napolean_Solo#2907: If I want to fine tune it for summarisation of large datasets
cfoster0#4356: No idea
Napolean_Solo#2907: Alright!
Louis#0144: idk
Louis#0144: if youre just using colab
Louis#0144: like $15
Louis#0144: lol
Napolean_Solo#2907: Well I don't mind paying a few thousands if there's any need to
Napolean_Solo#2907: I think training on TPUs would be better
triggerhappygandi#0001: just use tfrc lol
triggerhappygandi#0001: Get them for free
Napolean_Solo#2907: Anyway folks thanks for helping out!
Napolean_Solo#2907: Will consider that gentleman
Louis#0144: You basically never need to spend this much on finetuning
Louis#0144: Most of that cost is data curation and collection
triggerhappygandi#0001: Also if you have a couple grands burning a hole in your pocket I can take care of them for you :berk:
Napolean_Solo#2907: Hmm but BERTs cost $6k I heard
triggerhappygandi#0001: Fine tuning bert is basically free
Louis#0144: To train |
Louis#0144: Not to finetune
Louis#0144: Finetuning Bert can be done on your phone
Louis#0144: Lmao
Napolean_Solo#2907: Ah well it's alright, I have hired some PhDs
Napolean_Solo#2907: Gotta put them to use
Napolean_Solo#2907: Hmmm interesting
Napolean_Solo#2907: How much would it cost to fine tune GPT-3
StellaAthena#3530: The actual run-time costs are low, the main issue is having the equipment in the first place
Napolean_Solo#2907: I have access to their Private Beta. Tech's good but has a propensity to do stupid things
StellaAthena#3530: If you have the GPUs necessary and/or can amortize the cost of purchasing them it's cheap
StellaAthena#3530: If you need to go out and buy two dozen V100s that's not cheap.
triggerhappygandi#0001: They put out a form for that. Fill it.
For a model of similar size? I guess $10k range
triggerhappygandi#0001: You need like 100 V100s for ft
Louis#0144: Do you even need to finetune a model the size of GPT3 tho
Louis#0144: lol
Louis#0144: That’s wild
StellaAthena#3530: Also everything we say is conjecture because nobody has ever publicly shared timing results or data requirements for fine-tuning a 175B model.
Louis#0144: Like
Napolean_Solo#2907: Yes we need to |
StellaAthena#3530: I think 100 seems like overkill @Louis
triggerhappygandi#0001: openai has a fine tune form lol
Louis#0144: Yes but how many people need that
Louis#0144: Over a worse LM
cfoster0#4356: Even the learning to summarize from human feedback paper didn't go for the full 175B
Louis#0144: With some careful feature engineering
triggerhappygandi#0001: I filled it for shits and giggles
Louis#0144: Also this
Napolean_Solo#2907: Like I said the model tends to do stupid things.. I am hoping fine-tuning can solve that
Louis#0144: No
Louis#0144: It doesn’t
Louis#0144: Trust me when I say this
triggerhappygandi#0001: isn't that also because it can't be compared to anything else?
Louis#0144: It absolutely doesnt
triggerhappygandi#0001: The 13.5B one can be compared against the T5
triggerhappygandi#0001: mT5 whatever
cfoster0#4356: Like, they could have fine tuned *that* model
Louis#0144: Bigger doesn’t solve the “stop saying stupid shit” issue
Louis#0144: Atleast not yet
cfoster0#4356: Their 6.7B model did better than human baseline |
Napolean_Solo#2907: It does folk. It does.
They recently rolled out something called Instruct-series which they informed me they will be publishing soon.
Instruct is a good move. Solves a lot of problem.
Louis#0144: Lol
Louis#0144: I work in the area of controllable language generation for stories
Louis#0144: There does not exist a controllable language model yet that does what it promises
cfoster0#4356: Publishing = paper publishing or releasing?
Louis#0144: We are like
Louis#0144: At least years away
Louis#0144: Finetuning cannot fix logical incoherencies
Napolean_Solo#2907: They have already released in the private beta but they will be publishing an article on it soon.
Louis#0144: It cannot fix racism issues
Louis#0144: It cannot fix any -ism issue
Louis#0144: Not yet
Louis#0144: Maybe eventually
Napolean_Solo#2907: Yes, indeed it won't, but it does make it easier to control it further
Louis#0144: Eh |
Louis#0144: You get more bang for your buck by feature engineering a decode function
Louis#0144: And using a smaller LM
Napolean_Solo#2907: Well I don't have much knowledge on that but if they are investing billions I guess they know what they are doing no offence folk
triggerhappygandi#0001: But if openai allows you to fine tune for free it is win win
Napolean_Solo#2907: Lol they never will make it free
Louis#0144: They don’t and they admit they don’t
Louis#0144: It is an open problem
Louis#0144: No one “knows what they’re doing” for control
Napolean_Solo#2907: Well I can vouch for the instruct-series
Napolean_Solo#2907: It does pretty good
Napolean_Solo#2907: Makes it more controllable
Napolean_Solo#2907: Check out when they publish about it
triggerhappygandi#0001: Whats there to publish?
Napolean_Solo#2907: I have used it myself and gotta say it's way better than the normal one
triggerhappygandi#0001: Instruct isn't a different model
Napolean_Solo#2907: It's a fine-tuned
triggerhappygandi#0001: No its the regular model that does the command you instruct it to
Napolean_Solo#2907: But they offer it as different model
triggerhappygandi#0001: Regular one does autocomplete
triggerhappygandi#0001: This just follows the command |
Napolean_Solo#2907: Yes but it's been fine-tuned
Napolean_Solo#2907: It does some things better than the regular model
Napolean_Solo#2907: It has its own strengths and weaknesses
Napolean_Solo#2907: One of their employees said it's one of the experiments of fine tuning that OpenAI has been carrying out
triggerhappygandi#0001: I guess. They are going to open fine tuning access for some users at some point
Napolean_Solo#2907: Yeh next quarter
triggerhappygandi#0001: So this was maybe an internal test
Napolean_Solo#2907: They are going to expand the access to some more users
Napolean_Solo#2907: They are very hesitant when it comes to pricing
Napolean_Solo#2907: For fine tuning
triggerhappygandi#0001: Well it sure wouldn't be free lol
triggerhappygandi#0001: But I guess
Napolean_Solo#2907: Yes it definitely will cost thousands
triggerhappygandi#0001: I agree with louis at this point you might just feature engineer by hand and be better off
Napolean_Solo#2907: But as a business you can't do that
Napolean_Solo#2907: The easier you make it the better it's for your earnings
Napolean_Solo#2907: Honestly I just don't think GPT-3 is even production ready yet
Napolean_Solo#2907: I am sure folks at OpenAI are just being forced to satisfy their investors
Napolean_Solo#2907: Although it's not production ready but it's enough to build on it something that can give you a first mover advantage and then evolve with the developments in the tech
Napolean_Solo#2907: Hmm |
Napolean_Solo#2907: Well I wasn't intending to gossip
Napolean_Solo#2907: Thought it would make a better #general convo
Napolean_Solo#2907: It might even help some folks here to understand and get updated on the developments
cfoster0#4356: Fair enough
Napolean_Solo#2907: Anyway gotta leave now. Will see you later folks! Thanks for the help!
Napolean_Solo#2907: Highly appreciated.
p4bs#8973: same here, the smaller GPT-2's per Huggingface implementation give me low quality results compared to the 1.5B version
nz#9710: Hey folks, here is my proposal for a possible eleuther project using spare compute. It's nothing compared to what others are doing, but I still think it would be interesting: https://docs.google.com/document/d/1cS0DFJu2e5BuKtXSnTtRII-lvw5sNVMyFOyP3lHO7h4/edit?usp=sharing
nz#9710: Note that I have training code based on the NFNets repo pretty much ready to go as well as several implementations (currently BoTNet, TNT, DeepViT and CvT) too.
bmk#1476: what's the elevator pitch? on mobile rn and it's kinda hard to read docs on mobile
nz#9710: Mainly about having reference implementations and replication + common evaluation framework + scaling studies of computer vision papers
nz#9710: I think wightman's timm does the first part really well (and it has been used more and more in research) but I feel the other two are highly needed, plus a project like this doesn't really have much in terms of compute requirements
nz#9710: (oh and I'm mainly asking around because I'm interested in whether anyone wants to cooperate, otherwise I still have access to some compute through TFRC to start by myself)
bmk#1476: so eval harness but for image nets
ethan caballero#6044: Training for multiple epochs on ImageNet is a scam.
bmk#1476: I'd be down for image net scaling laws
bmk#1476: (to be clear, i mean image networks, not ImageNet)
kindiana#1016: there's not really enough data for one epoch image models :thonk:
kindiana#1016: if you want competitive results
nz#9710: The evaluation part would be something like it, yup |
nz#9710: Yea one epoch is not enough currently, though one aim would be to standardise the number of epochs at e.g. 300 for a fair comparison
kindiana#1016: :wojak_despair:
bmk#1476: time for OpenYFCC
nz#9710: what :sadge:
ethan caballero#6044: more than one epoch is a scam
bmk#1476: one epoch is All You Need
bmk#1476: (Komatsuzaki, 2019)
nz#9710: ok, but currently on standard datasets there's not enough data for competitive results
cfoster0#4356: I want to push back at the boldness of this, but then again I just called video a trap this week
gwern#1782: do we not have enough CNN scaling laws now?
gwern#1782: it also feels slightly horse/barn-ish, if you know what I mean. when sun or hestness et al 2017 did CNNs, it was super-useful. but putting out a CNN scaling law paper in 2021 or 2022...
nz#9710: this would not be a CNN scaling law study (in fact, if I end up doing this by myself I plan on focusing on ViT based models), rather testing all those ideas from small-medium labs that do not have much compute around at larger scale (e.g. imagenet-21k)...
nz#9710: and I agree there's nothing novel for a dedicated paper, but I feel like the project would still have value for the vision community as a whole (e.g. see the effect timm is having for open research)
gwern#1782: ah. so this would be transformers but skipping the autoregressive approach to focus on classification scaling?
RyanT#5929: Yeah I'll do some digging. I'm looking to do something on ASTs
chilli#5665: if it's code ASTs then you should take a look at code paths
nz#9710: I'm mostly interested in the common image classification tasks, yea, though as I mentioned the scaling study is just one part of it
RyanT#5929: whats that?
chilli#5665: basically, you split up your tree into a set of paths from the leaf to the current node you're looking at
chilli#5665: run a transformer on each of those paths |
chilli#5665: then run another transformer on the resulting set
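A minimal sketch of the path-extraction step (toy (label, children) trees assumed; papers like code2seq use richer leaf-to-leaf paths with token values, so treat this as illustrative):
```python
def leaf_paths(node, prefix=()):
    # Collect one label path per leaf; reversed, each is the leaf-to-root
    # sequence that the first transformer would encode.
    label, children = node
    prefix = prefix + (label,)
    if not children:
        return [prefix[::-1]]
    paths = []
    for child in children:
        paths.extend(leaf_paths(child, prefix))
    return paths

ast = ("Assign", [("Name", []), ("BinOp", [("Num", []), ("Num", [])])])
print(leaf_paths(ast))
# [('Name', 'Assign'), ('Num', 'BinOp', 'Assign'), ('Num', 'BinOp', 'Assign')]
```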
RyanT#5929: Hm interesting
RyanT#5929: Do you have a link to a paper?
chilli#5665: I guess this is a good one: https://arxiv.org/abs/1910.00577
chilli#5665: haha, I feel like you're going down the same discovery path I did a while back
RyanT#5929: lol probably
chilli#5665: the redpill is that GNNs kinda suck for this kind of task
RyanT#5929: I've always loved graph algorithms from the math side and my former officemate at my current job got me interested in compilers and ASTs
RyanT#5929: unfortunate lol, are there results you know of ?
chilli#5665: the paper I linked has some results comparing
RyanT#5929: really what I want to do is get the graph representation of the program that the haskell compiler constructs and work with that but thats more down the line
RyanT#5929: Thanks for sending this, it's definitely similar to an idea I was playing around with a while back
RyanT#5929: I did notice this one not too long ago
RyanT#5929: https://arxiv.org/abs/2005.02161
chilli#5665: haha, that feels like years ago
chilli#5665: one thing is that type inference is not *generally* that hard of a problem
RyanT#5929: Yeah code completion seems much more difficult
RyanT#5929: is there a way to get statistics on the eleuther github scraping dataset without downloading all of it?
gwern#1782: type inference seems to very rapidly go from 'trivial' to 'impossible in general'
chilli#5665: right, but when your metric is "number of tokens that are correctly typed", you don't really need to care about the "impossible in general" cases |
chilli#5665: haha
gwern#1782: your program either typechecks in seconds, or it'll never typecheck. there don't seem to be a lot of usecases where a useful natural program typechecks in, say, an hour
gwern#1782: even stuff like liquidhaskell exploiting SMT solvers where you might expect to be able to turn compute into more typed programs
gwern#1782: (this is perhaps similar to john carmack's point about warnings/linters: even if what you are doing is safe, if the checker is confused, it may then confuse someone else reading it or confuse you in the future, and ought to be rewritten to be more obviously correct)
cfoster0#4356: :hap: https://docs.google.com/document/d/1TDLp0BcQvFjjEMnO9HqFpvp9Tltxph61BkxRYPPcf4E
𓅬 gabriel_syme 𓅬#3220: really nice, very interesting as well! if it works, maybe non-speech data can be next
Kazumi#1297: could you make this multimodal, so audio and image are both encoded into a shared latent space?
MicPie#9427: Hmm, I guess getting data where you have samples with all three modalities is maybe tricky, but I'm wondering if you can train the Speech CLIP and at the same time train an Image CLIP and use the same text encoder in both setups. That way the embedding space should be kind of guided by both and you would not need samples with all three modalities, but I'm not sure. :thonk:
Kazumi#1297: yeah, exactly what I was thinking. you would only need audio/text or image/text pair samples, and use a different encoder for image and audio, but the same text encoder, and the same similarity network
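A rough sketch of that setup (toy linear layers standing in for real encoders; only the text encoder is shared between the two contrastive losses):
```python
import torch
import torch.nn.functional as F

def clip_loss(z_a, z_b, temp=0.07):
    # Symmetric InfoNCE: row i of z_a is the positive match for row i of z_b.
    z_a, z_b = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.T / temp
    labels = torch.arange(z_a.size(0))
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2

dim = 64
text_enc  = torch.nn.Linear(128, dim)  # shared text encoder
image_enc = torch.nn.Linear(256, dim)  # toy stand-in for an image encoder
audio_enc = torch.nn.Linear(512, dim)  # toy stand-in for an audio encoder

# image/text and audio/text pairs can come from entirely separate datasets;
# the shared text encoder is what ties the two latent spaces together.
img_txt, imgs = torch.randn(8, 128), torch.randn(8, 256)
aud_txt, auds = torch.randn(8, 128), torch.randn(8, 512)
loss = clip_loss(text_enc(img_txt), image_enc(imgs)) + clip_loss(text_enc(aud_txt), audio_enc(auds))
```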
𓅬 gabriel_syme 𓅬#3220: an interesting application of 3 modalities is text, acoustic responses from interior spaces, and 3d scene representations of them (probably made through images/views and some sort of NeRF)
Kia#2550: Congratulations
Kia#2550: Also Good for you
thenightocean#6100: God I hate that!
thenightocean#6100: Is it an AI-related company?
chirp#4545: Super general question: It’s been almost 3 years now since BERT came out. It’s been a huge help for NLP problems, but have there been very many new NLP-enabled startups? If so, how successful have they been?
StellaAthena#3530: Grammarly might be the most successful example?
Kia#2550: Ow wow
Kia#2550: They use BERT
cognomen#6297: DeepL
Kharr#7888: professorbob.ai |
Kharr#7888: Seems like there's a new one every week now. The hip new thing is to wrap Hugging Face pretrained models in a UI and call it a product.
RyanT#5929: Are there docs for the datasets in The Pile?
fristiloverke#4159: this looks super interesting, how can I get involved
EricHallahan#1051: Tell me or him lol
Daj#7482: Upgrade to EleutherAI Gold Membership™️
fristiloverke#4159: 🤔
fristiloverke#4159: please add me then
EricHallahan#1051: Cool. We are still working on getting together who wants to work on what, and we expect to hold some sort of meeting some point soon to get everything organized.
fristiloverke#4159: nice, just @ me when you know more
EricHallahan#1051: That is the plan. `:)`
Kazumi#1297: I'm interested in this as well
EricHallahan#1051: Cool
EricHallahan#1051: Settle down there...
cfoster0#4356: Awesome. Y'all will also want to stay tuned for the other audio ideas that Eric Hallahan has been working on
cfoster0#4356: Outside the Pile paper?
RyanT#5929: Yeah, I wanted to see if there was information on how the GitHub stuff was scraped. Idt I saw it in the paper but I’ll double check
EricHallahan#1051: I won't likely have a proposal out until Tuesday. Way too much work to do this weekend.
StellaAthena#3530: @RyanT the code is online: https://github.com/EleutherAI/the-pile
nz#9710: TPU podcast folks probably have more info about that kind of tasks
chilli#5665: They don't use these models afaik |
StellaAthena#3530: https://www.grammarly.com/blog/engineering/under-the-hood-at-grammarly-leveraging-transformer-language-models-for-grammatical-error-correction/
Deleted User#0000: yea truth is, huggingface is going to make it trivial for new startups to take on grammarly
Deleted User#0000: i actually attended a deep learning meetup with the head of ML at grammarly some time ago. karpathy was giving a lecture about his image to caption paper (with CNN -> RNN)
Deleted User#0000: i remember asking him what he thought after the meetup, and he told me 'the singularity is coming'
Deleted User#0000: never forgot that
Deleted User#0000: i should have gone all-in on deep learning then.
mgostIH#0245: Didn't you?
bh#3738: Commoditization is coming fast
Deleted User#0000: i didn't, i ended up doing software eng for a while longer
Deleted User#0000: wrestling with complex apps etc
Deleted User#0000: i think the tipping point for me was when AIs overthrew humans at Dota2
Deleted User#0000: back then, it took a lot of knowledge to do what grammarly was doing. now NLP is pretty much just transformers
Deleted User#0000: it's really interesting going back in time and reviewing my thoughts and feelings. i feel like every time a new result came out, it was initial awe
Deleted User#0000: and then doubt would creep back in
Deleted User#0000: "it can't possibly do <x> or <y>"
Deleted User#0000: then a new result would come out
Deleted User#0000: now i have no doubt.
bh#3738: Just like how vision used to be a PhD thesis worth of material and now it's just CNNs and more layers
Deleted User#0000: starting to be that way for every field
bmk#1476: that's a good thing |
Kharr#7888: There's still room for innovation. Things like Hugging Face library + Transformers have just put in the floor. You can only go up from there.
Kharr#7888: If you try and put a research model into production, you quickly realize that user input and real-world data absolutely destroys those models. But they'll definitely get you like 80% of the way there.
bmk#1476: just get even bigger models
Sphinx#2092: lol
bmk#1476: problem solved
Sphinx#2092: The economies of scale will get in the way
Deleted User#0000: yea, i guess it depends on what costs more
Sphinx#2092: but scaling law research should make things a bit less palm-reading.
Deleted User#0000: hiring ML engineers, or scaling up
Deleted User#0000: the bitter lesson will swing around, that's for sure
Sphinx#2092: its not just a matter of scaling up though.
bmk#1476: in general, hardware gets cheaper over time, engineers don't
Sphinx#2092: Real life is, as always, more complicated than that.
Deleted User#0000: well, what's more, if you can few-shot a person's style you can just write the entire email for them
Sphinx#2092: Like realistically, you can't just make giant models for everything.
Sphinx#2092: We don't have infinite money.
Sphinx#2092: Instead, you can only make a few giant models.
Deleted User#0000: i know i know
Sphinx#2092: Then the question, what giant models _should_ you train?
bmk#1476: one giant model to rule them all |
Sphinx#2092: what giant models are the best generalists?
Sphinx#2092: what tasks requrie their own models?
Sphinx#2092: etc.
Kharr#7888: I think in a few years things will definitely get a lot better. Right now the issue is still that real-world usage is underspecified and these models tend to blow up when they see something "weird"
bmk#1476: thankfully, what "giant" means increases by an oom every few years so there will be many chances
BoneAmputee#8363: Contrastive Language-Audio Pretraining 👏
BoneAmputee#8363: please name it CLAP :berk:
Sphinx#2092: Maybe, but what about right now?
Deleted User#0000: sure.. just don't count on all that costly effort you put into reining in the model to not be somehow learned by something significantly cheaper
Sphinx#2092: Like if Eleuther depended on 1 (or 2, or 3) giant models to exist, you would think very carefully about what model you would train
Sphinx#2092: rather than go "stack more layers" clown.
Deleted User#0000: if there is a cheaper way, and it is good enough, your competitors will use it
Sphinx#2092: Sure but you can also fuck up with this.
Sphinx#2092: https://scientia-socialis-f-discolor.hatenablog.com/entry/2020/05/01/145127#DeepL%E7%89%B9%E6%9C%89%E3%81%AE%E5%95%8F%E9%A1%8C
Deleted User#0000: im just pointing out the general trend
Kharr#7888: Very deep models are not production friendly since layers have to be processed sequentially. That's why things like MoE look interesting.. giant model + fast.
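As an illustrative sketch of why (Switch-style top-1 routing; toy code, not any particular paper's implementation):
```python
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    # Toy mixture-of-experts layer: parameter count grows with num_experts,
    # but each token only pays the compute cost of a single expert.
    def __init__(self, dim, num_experts):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))

    def forward(self, x):                    # x: (tokens, dim)
        gate = self.router(x).softmax(-1)    # routing probabilities
        idx = gate.argmax(-1)                # pick one expert per token
        out = torch.empty_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                out[mask] = expert(x[mask]) * gate[mask, e].unsqueeze(-1)
        return out

y = Top1MoE(16, 4)(torch.randn(32, 16))      # (32, 16)
```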
Deleted User#0000: it may or may not continue 🤷♂️
Sphinx#2092: I think the question is, if you have 100 million dollars, what should you do with that
Sphinx#2092: if you want to get the best model?
Sphinx#2092: The clown answer is to just train some gpt-3 model |
Sphinx#2092: but this may miss out on a lot of useful tasks which would generate even more profit
Sphinx#2092: that would allow you to train more models or whatever
Sphinx#2092: instead, it might be more useful to train 2 or 3 generalist models across a few domains, beat everyone across the board, but still retain some generalist knowledge for other tasks.
Deleted User#0000: well, i've seen a couple valley companies spend millions trying to code up text to image
Deleted User#0000: https://github.com/lucidrains/big-sleep
Deleted User#0000: now we have that
Deleted User#0000: and DALL-E is on the way
Deleted User#0000: you tell me lol
bmk#1476: depending on the industry, it might genuinely be a better strategy to put your cash in the bank, twiddle your thumbs for a few years, and then just use the better compute available then
Deleted User#0000: they hired all the top of the line expertise too
Sphinx#2092: lol but you could use that argument for everything.
bmk#1476: :gameryes:
Sphinx#2092: Certainly multi-modal stuff is good, but I'd be curious to see where DALL-E is being used in production for anything
Sphinx#2092: or how it compares to traditional SOTA models on normal tasks
Sphinx#2092: or even in data-rich scenarios.
Sphinx#2092: Like sure, we can have fun doing meme-shit but at some point, you can think beyond memes.
Kharr#7888: So that means when a new SOTA comes out, their experts should be primed to pick it up and integrate it 🙂
Sphinx#2092: Don't let your dreams be memes, after all.
Deleted User#0000: sure.. but admittedly a lot less expertise is needed
bmk#1476: any business where getting enough compute is the cost bottleneck and there's also very little early mover advantage *does benefit from thumb twiddling* |
Sphinx#2092: Like, GPT-3 can barely compete with Transformer Big models for translation.
nz#9710: You have raised valid points sphinx and I'm curious, what would you do with those 100 million dollars?
Sphinx#2092: Luckily, I'm not in that position lol.
Sphinx#2092: But what I would expect people in that position is that you ask all your scientists to make proposals
Sphinx#2092: and demand scaling laws for those proposals
Deleted User#0000: all i know is, go back 10 years ago, i couldn't even hire enough engineers to code up something that can generate infinite faces
Deleted User#0000: it would not even be possible
Sphinx#2092: Sure, and if your goal isto wait 10 more years
Sphinx#2092: by all means
bmk#1476: i know you use translation as a reference point because that's your thing, but gpt3 is uniquely bad at translation from the start since the data is almost all English
Sphinx#2092: Fine. Then pick any task you want with large dataset size.
Sphinx#2092: OpenAI already showed that it fails in those cases
Sphinx#2092: see e.g. the scaling laws for transfer paper.
Sphinx#2092: Just because you have some model which can do a lot of things, doesn't mean it's actually really good at many of them.
Sphinx#2092: and the economies of scale start to show once you have actual training data as opposed to some bullshit 10k examples
Kharr#7888: This is true for most software 🙂 Tools get better, new standards are established, etc. It used to take a lot of work to put up a website.. or to author a video. Now random individuals do it as a fun hobby.
Deleted User#0000: yup, what's going on right now is more than just improvements in software though
Deleted User#0000: deep learning is about tapping into some emergent computational phenomenon of nature
Deleted User#0000: and it speaks to the very physics of our reality
Kharr#7888: A year ago I was telling investors that in 5 years we would likely have AI that can do all sorts of tasks with just a few examples. No one wanted to believe me 😆 |
Deleted User#0000: are you in the startup game Kharr?
Deleted User#0000: btw, i loved final fantasy VI
Daj#7482: Glad I'm not the only one that thinks this
Deleted User#0000: one of my favorite games 🙂
Deleted User#0000: recognize your avatar right off the bat
Daj#7482: there is some serious, fundamental scientific revolution happening by proxy through DL
Kharr#7888: We're like 9 years old now, so kind of. We're always early adopters of tech. We built our own autoML tools ~ 4 years ago, deployed transformers a year ago, etc
Deleted User#0000: ok, makes sense why you are hanging out here
Kharr#7888: This is honestly the first group I've seen where research --> code and experiments is so rapid. It's what we do as well.
Deleted User#0000: yea, this is the crowd for transformers and scaling mainly
Kharr#7888: Luckily, everything is transformers now 🙂
Deleted User#0000: yea it seems that way
Kharr#7888: When we tried out transformers for the first time we abandoned everything else. It was pretty obvious the rest of the community would come over. I think by the time they are truly mainstream we'll see the next thing, whatever it is. Always have to be ready to jump ship 🚀
Deleted User#0000: yea, the next big thing seems to be making the models big, whether scaling laws can be bent or made more favorable is still open research
Deleted User#0000: the core algorithm, attention, likely cannot be improved upon
Kharr#7888: Agreed. Attention is one of the most exciting things to hit DL in a long time.
Kharr#7888: I think we're going to revisit some of the classic ideas that have been swept under the rug. The biggest gains I've seen have been from revisiting some of those older concepts Hinton published early on but with a modern architecture. It's not a coincidence a lot of the conversations from the bigger AI heads are about energy.
gwern#1782: I told someone the other day that if they want to understand the 2020s, just go read all of schmidhuber's old papers and imagine if they actually work now
mgostIH#0245: Get this man a Turing Award ffs
Deleted User#0000: yea agreed. attention is essentially a free architecture search for problems where you have enough data to brute force it |
Deleted User#0000: i don't think it's the end of the story though
Deleted User#0000: or maybe it is.. who knows, we'll just have to see where this scaling saga ends
andyljones#7746: > It's not a coincidence a lot of the conversations from the bigger AI heads are about energy
you're right, the bigger ai heads are all 1995-2005 researchers plus 20 years
Deleted User#0000: it's def worth revisiting certain ideas, fill in attention + data + scale, and see if it works or not
Deleted User#0000: schmidhuber gets a lot of flak, but i love his papers
Deleted User#0000: https://arxiv.org/abs/1112.5309
Deleted User#0000: one of my favorites
ethan caballero#6044: is energy referring to energy-based models or electricity costs?
andyljones#7746: ebm unless i'm totally misinterpreting
Kharr#7888: Yes. See stuff like this a lot, especially during their talks: https://twitter.com/ylecun/status/1314697333305638919?lang=en
Daj#7482: Quick question: How is EBM different from loss minimization?
Daj#7482: Isn't energy just a different word for loss?
Daj#7482: (at least, I remember learning such a formulation of physics)
Kharr#7888: It is very similar. That paper in the tweet might help clarify -- they compare softmax vs energy
andyljones#7746: it's a certain kind of loss
Daj#7482: ah, thanks
ethan caballero#6044: Oh, if "energy" was referring to electricity costs, then I was going to post this meme: https://cdn.discordapp.com/attachments/729741769738158194/830551767120216064/Neural_Scaling_Laws_1.jpg
Daj#7482: SCOOP: AI researchers develop new technology that runs on pure energy, solely responsible for climate change |
andyljones#7746: gotta say, i am generally skeptical of the object-level directions that emeritus researchers pitch. too often smells of fighting the last war. but i've a lot of time for meta-level commentary, cf bitter lesson
AI_WAIFU#2844: It's just loss minimization, but evaluating the loss is usually #P-Complete.
bmk#1476: c a p s n e t s
Daj#7482: Ok I need to think about this tomorrow when I'm more rested
andyljones#7746: WHERE DO the SYMBOLS GO
Daj#7482: :smallbrain: DL won't work because it doesn't use symbols
:bigbrain: Each possible floating point value is a symbol
Deleted User#0000: i mean, should just point the symbolic crowd at https://openai.com/blog/multimodal-neurons/
Deleted User#0000: and ask them to explain that
andyljones#7746: quick someone reshape the zeros of the riemann zeta into a weights matrix and load it into BERT
Daj#7482: "Of course we knew DL would do that, it's not _real_ reasoning, after all, just sensory processing" - Outgroup, probably
ethan caballero#6044: EBM means you don't normalize the output to be a valid probability distribution (e.g. you would no longer use a softmax on the output).
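A minimal illustration of the two views (toy tensors; the energy framing here follows the classifier-as-EBM idea):
```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)  # 4 inputs, 10 classes

# Softmax view: normalize the outputs into a valid distribution over classes.
probs = F.softmax(logits, dim=-1)

# Energy view: skip the normalization. Treat -logits as per-class energies
# E(x, y); the free energy -logsumexp(logits) scores the input x itself.
energies = -logits
free_energy = -torch.logsumexp(logits, dim=-1)
```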
Deleted User#0000: yea, certain academic types can dance their way out of an answer, like a politician
Daj#7482: My understanding grows, thank you
catal#4638: Any ideas for small ML projects that I can do on moderate hardware / google colab? For now I have done mostly word and graph embeddings and things like kaggle competitions. My background is more in physics and algorithmics.
EricHallahan#1051: I would say it depends on what you are interested in.
catal#4638: For now I just want to learn more about ML by working on something cool.
ethan caballero#6044: #scaling-laws
cfoster0#4356: The converse is that models with softmax sometimes admit an energy-based interpretation. See <https://arxiv.org/abs/1912.03263> and <https://mcbal.github.io/post/attention-as-energy-minimization-visualizing-energy-landscapes/>
ethan caballero#6044: will grathwohl is prophet of EBMs |
catal#4638: What do you mean? Isn't that literally the opposite of small and able to run on any hardware I can afford? 😅
ethan caballero#6044: @cfoster0 @Deleted User
why do people use softmax attention instead of energy-based attention?
EricHallahan#1051: I think his point is that you can still look at it at small scales.
ethan caballero#6044: https://twitter.com/MitchellAGordon/status/1380378088362684419
https://twitter.com/andy_l_jones/status/1380049774938886144
cfoster0#4356: I mean, softmax works? There's some properties about exponential storage capacity and rapid convergence to basins of attraction, but that's not the *real* reason why people use it.
EricHallahan#1051: Though I don't know if I agree.
cfoster0#4356: Lmao
catal#4638: Ohh that does look interesting. I could look for a similar (easy) game and see if i can implement something like that
cfoster0#4356: I get why you use the thumbs-down the way you do, @ethan caballero, but does it really fit with our discussion norms?
ethan caballero#6044: I mostly was being cheeky.
cfoster0#4356: if you say so :berk:
catal#4638: I even saw that tweet and did not think about it. Thanks 🙂
Deleted User#0000: > @cfoster0 @Deleted User
> why do people use softmax attention instead of energy-based attention?
@ethan caballero hochreiter tried to improve attention from the energy perspective
Deleted User#0000: In the end, they concluded only one step is needed
Deleted User#0000: Attention is good enough
Deleted User#0000: The "Hopfield Networks is All You Need" paper
Deleted User#0000: They tried learning the temperature too
cfoster0#4356: It's a nice unifying framework tbh
Deleted User#0000: No dice
cfoster0#4356: Also lets you do things like, explicitly setting the attractors to make a neural associative memory
cfoster0#4356: Which kinda motivates why transformers are so damn good at learning and meta-learning
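A tiny numpy sketch of that associative-memory view (the update rule from the Hopfield paper; sizes are illustrative):
```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

X = np.random.randn(64, 5)                 # 5 stored patterns (the attractors), dim 64
beta = 8.0                                 # inverse temperature
xi = X[:, 0] + 0.3 * np.random.randn(64)   # noisy query near pattern 0

# xi_new = X softmax(beta * X^T xi) -- i.e. softmax attention over the stored
# patterns; iterating pulls the query into the nearest pattern's basin.
for _ in range(3):
    xi = X @ softmax(beta * (X.T @ xi))
```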
ethan caballero#6044: Why do you think #scaling-laws are not the democratization of scaling research?
chilli#5665: Just because they're doing research on these models doesn't mean they're using them
chilli#5665: From what I've heard, they aren't currently being used
chilli#5665: Although obviously they're cognizant
andyljones#7746: fwiw, the next on my list was breakthrough
https://en.wikipedia.org/wiki/Breakthrough_(board_game)
coz it's about as simple as games come but with a more chess-y dynamic to contrast hex's go-y dynamic. you'd need to write a kernel for it, a substitute for this folder
https://github.com/andyljones/boardlaw/tree/master/boardlaw/hex
big list of other research-relevant games here:
https://openspiel.readthedocs.io/en/latest/games.html |
if nothing else, pick an openspiel game because that'll give you a solid implementation to fuzz against.
gwern#1782: another board game? not ALE or a continuous control task?
𓅬 gabriel_syme 𓅬#3220: Another question should be: if you don't have that kind of money (sounds odd, but outside of AI and finance right now people don't - try getting funded for environmental stuff), then what do you do? This is another reason why I felt CLIP is such a HUGE thing. Everyone out there, literally everyone, can use it to make amazing things.
gwern#1782: launch a kickstarter to fund training a model to release
𓅬 gabriel_syme 𓅬#3220: Lol smh that sounds so much like a Vatican reply to doubting god.
Sphinx#2092: Not having the money is precisely the problem. If you are independently wealthy and can afford the compute, you can and should do whatever you want. But if you're like most people, even working at big companies, you don't have that money and instead you need to convince someone that whatever giant model you want is really the one worth spending the $$$.
And, even if you manage to convince the people with money to let you build the model, they are unlikely to fund you for an unlimited number of giant models. Instead, they'll likely fund one and you have to make the case that the one you want to build is the correct one. Like, sure, "lol stack more layers" works and maybe in like 5 years compute is free or whatever. But if you want to actually build giant models now, you have to actually think about what kind of giant model you want, what the limitations are, and what you can really expect. I can promise you anyone who's seriously building these kinds of models has these questions in mind, which is likely how the whole scaling law research began.
StellaAthena#3530: > Another question should be: if you don't have that kind of money (sounds odd, but outside of AI and finance right now people don't - try getting funded for environmental stuff), then what do you do? This is another reason why I felt CLIP is such a HUGE thing. Everyone out there, literally everyone, can use it to make amazing things.
Most people don't have 100 million dollars inside AI either. 500 bundles of 100 million dollars would be the entire global corporate spending on AI.
StellaAthena#3530: Source: https://www.idc.com/getdoc.jsp?containerId=prUS46794720
bmk#1476: i feel like we're talking past each other tbh
bmk#1476: i don't think anyone disagrees with "before you spend a huge bunch of money on a model, you should make sure it's actually the best model you could get for that money"
gwern#1782: so? 'give us $10m to make a thing we'll call CLIP, it might or might not work!' is a big ask, yes. 'give us some money to make and open source OA's DALL-E which works awesome and which they've released a paper on but not the full checkpoints' is a very different thing.
Sphinx#2092: Sure. If the goal is reproduction, that's a bit of a different story.
Sphinx#2092: I guess my point here is that it's good to ponder how to define "best".
bmk#1476: what I'm trying to argue is that performance = compute amount + engineering amount and that under a fixed budget, if you wait, the compute gets bigger and bigger but the engineering doesn't
bmk#1476: and also that compute is cheaper than engineering even today for some cases
bmk#1476: so for a lot of things just throwing brute force at it is enough, and if it's not enough you can wait a few years until it is |
gwern#1782: hm... that's kind of pessimistic. surely the engineering *does* get cheaper over time as well? libraries get debugged, new frameworks like jax come out, tricks become standardized and field folklore...
bmk#1476: not nearly as fast as compute gets cheaper though
Sphinx#2092: And my argument is that "performance = compute amount + engineering" is not necessarily true.
gwern#1782: hm... you know what, I'm not even sure about that. look at hernandez's CNN report. weren't there more algorithmic/software gains than GPU gains?
Sphinx#2092: It's not clear to me that , for example, a giant LM is going to do better than a giant seq2seq model for certain tasks like summarization or translation.
Sphinx#2092: And in fact, it's not clear to me that giant pretrained models will also do better than training from scratch on some tasks.
bmk#1476: I'm not familiar with translation, does this mean something with a lot of custom engineering?
Sphinx#2092: Moreover, it's not even clear to me what the right pre-training task is for the giant model. Certainly LM training has nice advantages (e.g. it's generative versus MLM), but it's not clear what you should do when you want to go beyond that, for example multi-modal, multilingual, etc.
Sphinx#2092: Not necessarily engineering. It's a well-kept secret that most pretrained models don't get SOTA if you finetune them for translation.
Sphinx#2092: Even language models pretrained on text then finetuned on code can do worse than just training on code from scratch.
Sphinx#2092: which already is hinting at some potential limitations of pretrained models.
StellaAthena#3530: When you say "pretrained" you mean like BERT / GPT-X finetuned for the task?
Sphinx#2092: Right.
StellaAthena#3530: *Of course* they don't get SOTA. Does anyone expect them to?
Sphinx#2092: Why wouldn't they?
Sphinx#2092: Or are you arguing that starting from a pretrained model is worse than starting from scratch?
StellaAthena#3530: For the overwhelming majority of tasks, if you use the same amount of compute to train BERT and then fine-tune on your task as you spend training a custom model from scratch, the BERT/GPT-2/whatever will be worse
StellaAthena#3530: I'm surprised you think people disagree with this
Sphinx#2092: What?
StellaAthena#3530: General models are *always* worse than custom models. |
Sphinx#2092: No, assume you have infinite time and data.
Sphinx#2092: SO you can just train BERT, then finetune on the task.
Sphinx#2092: Both until convergence.
Sphinx#2092: or you can start from scratch, then train until convergence.
bmk#1476: when i say engineering i don't mean choosing between RNN/transformer or doing finetuning vs training from scratch, i mean like "let's spend a year crafting a complicated bespoke system using all the bits and pieces lying around for this one narrow specific task" - which i admit i don't have a lot of experience with, but for the entire one time that I've had to do this, it got absolutely crushed by gpt3
StellaAthena#3530: The question doesn't make sense with unbounded resources
Sphinx#2092: Of course it makes sense. It's essentially asking, "is transfer learning always useful?"
Sphinx#2092: Or perhaps "is using a pretrained model as an init always better than random?"
Sphinx#2092: and the answer is, sometimes no.
bmk#1476: are you saying that if i have 100GB of github, and i train a random init GPT2 on that, and also a pretrained GPT2, the random init one would win (sometimes)
𓅬 gabriel_syme 𓅬#3220: that is correct, it was a terrible exaggeration on my part. I meant mostly access to resources that are difficult to find elsewhere and can be more common in AI than my field at least. Especially in research (since in business I expect large companies to afford if they want to)
gwern#1782: destructive interference seems rare. like in the last imagenet transfer paper there was only very small destructive interference from imagenet on like 2 out of 20ish tasks
Sphinx#2092: Yes.
bmk#1476: errrr
Sphinx#2092: It's called ossification.
bmk#1476: citation pls?
bmk#1476: I've not seen this before
Sphinx#2092: https://arxiv.org/abs/2102.01293
Sphinx#2092: Section 3.
Sphinx#2092: They don't explore it too much, unfortunately |