data
stringlengths 115
7.61k
|
---|
rom1504#5008: Do you know that it does ?
alstroemeria313#1694: Nope
alstroemeria313#1694: I have no clue.
rom1504#5008: Can you connect to anything else in that network through other means ?
alstroemeria313#1694: Nope
rom1504#5008: Ok, can you send the output of `route -n` ? I find it a bit clearer
alstroemeria313#1694: I don't have that command.
rom1504#5008: Ok, do you have iptables ?
alstroemeria313#1694: Yes
alstroemeria313#1694: the main route table on AWS https://cdn.discordapp.com/attachments/729741769738158194/918273159596228628/Screen_Shot_2021-12-08_at_2.49.36_PM.png
alstroemeria313#1694: the private subnet route table on AWS https://cdn.discordapp.com/attachments/729741769738158194/918273206811492382/Screen_Shot_2021-12-08_at_2.49.46_PM.png
alstroemeria313#1694: So I have the /16 and then different subnets sliced out of it
rom1504#5008: <https://www.ipaddressguide.com/cidr> seems good to go from /16 or /18 or /20 to a range
rom1504#5008: from these 3, only /16 contains 172.31.128.67
rom1504#5008: I guess try changing to /16 with that ip route command
alstroemeria313#1694: Yes it is in a different subnet sliced out of the /16
alstroemeria313#1694: It can't reach it directly
rom1504#5008: I think it's something in <https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html>
rom1504#5008: I'm not sure you can do it with linux commands on the box
rom1504#5008: looks like it something aws side
|
alstroemeria313#1694: Yes
rom1504#5008: <https://docs.aws.amazon.com/vpc/latest/userguide/vpc-getting-started.html> yeah really looks like it's something you need to click in the ui
rom1504#5008: creating a "vpc" and then putting the right settings to include your 2 subnets
alstroemeria313#1694: I have a VPC
alstroemeria313#1694: I posted the route tables
alstroemeria313#1694: Is there actually something wrong with the networking on the other box.
alstroemeria313#1694: How do I get into it
rom1504#5008: <https://docs.aws.amazon.com/vpc/latest/userguide/vpc-subnets-commands-example.html> looks good
alstroemeria313#1694: Since I can't SSH from the jump host.
rom1504#5008: in particular `Create a security group in your VPC using the create-security-group command.`
alstroemeria313#1694: Right but I have security groups
rom1504#5008: even in the other box ?
alstroemeria313#1694: That allow all traffic on the private subnet
alstroemeria313#1694: And inbound SSH + all outbound on the public subnet
rom1504#5008: like maybe the other box doesn't have any ssh permission
alstroemeria313#1694: Can I connect to it somehow, like is there a serial console
rom1504#5008: is igw-2ecfd225 the gateway of the subnet ?
alstroemeria313#1694: that's the internet gateway and it is for all the public subnets.
alstroemeria313#1694: There are six public subnets, bc six availability zones in us-east-1.
rom1504#5008: I think you want something that looks like that <https://docs.aws.amazon.com/vpc/latest/userguide/gwlb-route.html>
|
rom1504#5008: you need to make the internet gateway point to the subnet gateway
alstroemeria313#1694: I am only using us-east-1a for this, the public one for it and a private one I made in it.
alstroemeria313#1694: So I could technically do something dumb like add another network interface to the jump host that is in the private subnet
alstroemeria313#1694: Bc *they are both physically in us-east-1a*
alstroemeria313#1694: what subnet gateway
alstroemeria313#1694: I don't know how to make a gateway lol
rom1504#5008: maybe that's nat-0b..
alstroemeria313#1694: ...
alstroemeria313#1694: Wait
alstroemeria313#1694: That NAT gateway is not in the private subnet
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/918278866987012126/Screen_Shot_2021-12-08_at_3.12.20_PM.png
alstroemeria313#1694: OK I made a new one that is in the subnet
alstroemeria313#1694: well, it's still creating
rom1504#5008: ok then you need to in the internet gateway add a route with range containing 172.31.128.67 point to that gateway
rom1504#5008: that way when you try to reach 172.31.128.67 it will send traffic to the subnet gateway
rom1504#5008: which will send that to 172.31.128.67 hopefully
alstroemeria313#1694: I can't add routes "in the internet gateway", it has no options
alstroemeria313#1694: So this NAT gateway has a private IP address, how do I put it on the private subnet box
alstroemeria313#1694: When I can't reach the private subnet box or get a shell on it
alstroemeria313#1694: Oh wait I can ping it now
|
alstroemeria313#1694: But only on one of its four private IPs?
rom1504#5008: so much work just do access a few a100 ๐
alstroemeria313#1694: it is going to be 32x A100
alstroemeria313#1694: At first.
rom1504#5008: yeah I followed that ๐
rom1504#5008: it's cool
alstroemeria313#1694: So each 8x A100 box has four network interfaces
alstroemeria313#1694: For speed
rom1504#5008: ah yeah I see
alstroemeria313#1694: and i have brought one of the four up so i can make an image for the box
alstroemeria313#1694: Aha I got in with SSH!
rom1504#5008: nice
alstroemeria313#1694: ...It can't access the internet
alstroemeria313#1694: Well ofc, it doesn't know about the NAT gateway
rom1504#5008: yeah I guess then you need to add a route in there pointing back to the internet gateway
kurumuz#5695: what are you guys trainin
alstroemeria313#1694: eheh
alstroemeria313#1694: it's for my diffusion models
kurumuz#5695: cool
alstroemeria313#1694: i am trying to bring a p4d ultracluster up on AWS
|
kurumuz#5695: infiniband between machines?
alstroemeria313#1694: it's some Amazon special thing
kurumuz#5695: ic
kurumuz#5695: seems like you are training a big model then :berk:
alstroemeria313#1694: but it does gpudirect and stuff
rom1504#5008: I feel there should be a button "setup my a100 cluster"
alstroemeria313#1694: so only the primary network interface works somehow
rom1504#5008: but I guess that's exactly what you're building now
circuit10#0158: Hello rom1504
rom1504#5008: hey circuit, small world
rom1504#5008: you need routes for all the interfaces
alstroemeria313#1694: ```
ubuntu@ip-172-31-172-67:~$ ip route list
default via 172.31.128.1 dev ens32 proto dhcp src 172.31.172.67 metric 100
default via 172.31.128.1 dev ens65 proto dhcp src 172.31.130.202 metric 200
default via 172.31.128.1 dev ens130 proto dhcp src 172.31.152.197 metric 300
default via 172.31.128.1 dev ens163 proto dhcp src 172.31.167.2 metric 400
172.31.128.0/18 dev ens65 proto kernel scope link src 172.31.130.202
172.31.128.0/18 dev ens130 proto kernel scope link src 172.31.152.197
172.31.128.0/18 dev ens163 proto kernel scope link src 172.31.167.2
|
172.31.128.0/18 dev ens32 proto kernel scope link src 172.31.172.67
172.31.128.1 dev ens32 proto dhcp scope link src 172.31.172.67 metric 100
172.31.128.1 dev ens65 proto dhcp scope link src 172.31.130.202 metric 200
172.31.128.1 dev ens130 proto dhcp scope link src 172.31.152.197 metric 300
172.31.128.1 dev ens163 proto dhcp scope link src 172.31.167.2 metric 400```
alstroemeria313#1694: They all have local routes
alstroemeria313#1694: I can't get to it via SSH except through one of the four IPs and can't ping anything but that IP.
alstroemeria313#1694: Namely the first one, 172.31.172.67
alstroemeria313#1694: They are all in the 172.31.128.0/18 subnet
alstroemeria313#1694: So their VPC internal gateway is 172.31.128.1.
alstroemeria313#1694: How do I make it use the NAT gateway
rom1504#5008: I think that's a button in the ui again
alstroemeria313#1694: ...
alstroemeria313#1694: The buttons don't actually work
rom1504#5008: when you mean you can connect to 172.31.172.67 but not 172.31.130.202 ; that's from the internet gateway ?
alstroemeria313#1694: At least anything that claims to set a bunch of stuff up for you
alstroemeria313#1694: It's from the jump host
rom1504#5008: yeah right jump host
rom1504#5008: so I would think it's the jump host routes that are not reaching 172.31.130.202
ne0#8965: anyone have any cheap A100 instances they can recommend?
|
rom1504#5008: cheap and a100 don't usually go together
alstroemeria313#1694: I am just going to make another box
alstroemeria313#1694: datacrunch.io
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/918283537369673738/Screen_Shot_2021-12-08_at_3.30.53_PM.png
alstroemeria313#1694: yeah it can't get to the internet
rom1504#5008: you can check with traceroute how far it can go
alstroemeria313#1694: The box does not have traceroute
alstroemeria313#1694: And I cannot install it bc I do not have access to a package repo.
alstroemeria313#1694: > The NAT gateway must be in a public subnet with a route table that routes internet traffic to an internet gateway.
alstroemeria313#1694: -.-
rom1504#5008: this kind of stuff would be easier with a config file rather than button pushing
alstroemeria313#1694: ok let me remake the NAT gateway again
alstroemeria313#1694: and wait for it to come up.
rom1504#5008: maybe using the aws command line would be more informative
alstroemeria313#1694: ugh
alstroemeria313#1694: Only if I have like, a list of commands to run
alstroemeria313#1694: I have it set up on this laptop but never really use it
alstroemeria313#1694: yay it works now!
alstroemeria313#1694: so ok. the other three network interfaces still don't work.
alstroemeria313#1694: trying to install CUDA now
|
alstroemeria313#1694: well at least that worked and I can see the A100s
alstroemeria313#1694: ```
RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 802: system not yet initialized
```
alstroemeria313#1694: ;_;
alstroemeria313#1694: Oh wait I forgot to ldconfig
alstroemeria313#1694: still broken.
peps#9152: anyone have any luck with NVIDIA MIG on linux with the A100s?
peps#9152: seems to be iffy with me
alstroemeria313#1694: gonna reboot
alstroemeria313#1694: rebooting didn't help
chirp#4545: Iโve heard AWS support is quite good, maybe they can help lol
alstroemeria313#1694: torch.cuda.is_available() returns False
fengoku#4038: hey everyone! just a reminder about our CtrlGen controllable generation workshop (https://ctrlgenworkshop.github.io/) taking place at NeurIPS next Monday. Feel free to attend if you have some time! We feature an exciting line-up of speakers, a live QA + panel discussion, poster presentations of several interesting works, creative demos of controllable generation systems, and networking opportunities.
also, we are soliciting questions beforehand for our speakers and panelists. if you have any questions related to controllable generation (for images, text, or other modalities), feel free to add them to the following Slido link: https://app.sli.do/event/rmabxoqx
๐
ฌ gabriel_syme ๐
ฌ#3220: has anyone used MAUVE yet? I'm going to be using it for the layouts data soon, wonder if ppl had thoughts on more typical language domains
bmk#1476: pyfra 0.3.0 is out!
Louis#0144: py-what?
Louis#0144: Never heard of it
Louis#0144: Oh u mean that thing that keeps breaking backwards compatibility?
|
Louis#0144: Jkjk
Louis#0144: I'll check it out
kurumuz#5695: cutting edge or bust
bmk#1476: see here's the thing right
bmk#1476: using pyfra makes me a lot more productive
bmk#1476: which gives me a big competitive advantage against all non pyfra users
bmk#1476: so I don't really have an incentive to make pyfra stabler (as opposed to improving things whenever I can) to make it easier to learn, because that would simultaneously make me less productive and everyone else more productive
Louis#0144: Same argument transformers used
kurumuz#5695: no, you dont understand. our transformers fork is just better
bmk#1476: I mean it's not like any of you are paying me to use pyfra
kurumuz#5695: why would we upgrade :berk:
EricHallahan#1051: Transformers isn't research code.
EricHallahan#1051: pyfra, by definition, is research code.
bmk#1476: I have a proclivity towards making things public and stuff so I don't intentionally make pyfra hard to use for everyone else but the incentives just aren't there to make it easy
bmk#1476: I still write docs because that costs me less than it costs me to not make breaking changes
bmk#1476: and so the open source proclivity wins out there
bmk#1476: but not making breaking changes is a *huge* limitation
bmk#1476: breaking changes good actually, etc
kurumuz#5695: yea
bmk#1476: tldr my goodwill extends far enough to make pyfra the way it is right now but not far enough to make it stable
|
bmk#1476: anyways there are a number of breaking changes that have been a really good idea in retrospect
kurumuz#5695: I mean if they want it to not change they can literally stay on a specific commit
bmk#1476: yeah
kurumuz#5695: and fork stuff etc
bmk#1476: anyone is free to fork pyfra at any time
bmk#1476: it's MIT
bmk#1476: pyfra might eventually become more stable naturally just due to me running out of new ideas
bmk#1476: but now is not that time
bmk#1476: I'm on a quest to make pyfra idempotentpilled right now
bmk#1476: the new idempotency decorator is *wild*
kurumuz#5695: I dont even know what that word means.
ilovescience#3282: this is us https://cdn.discordapp.com/attachments/729741769738158194/918344891677945866/5x6uay.png
kurumuz#5695: lol
ari#9020: After you've learned what idempotency means once, learning it again won't change anything
kurumuz#5695: That is why I'm keeping back.
bmk#1476: oh god.. *it works*
bmk#1476: anyone wanna try it out
bmk#1476: @kurumuz ?
ilovescience#3282: try what?
bmk#1476: https://github.com/EleutherAI/pyfra/tree/idempotent install this branch
|
kurumuz#5695: so i need to google what idempotency is
bmk#1476: no no
bmk#1476: ok so make a file, test.py
bmk#1476: and do this:
```py
from pyfra import *
@cache
def my_function():
print("hello world")
my_function()
```
bmk#1476: and run it
bmk#1476: and have your mind blown
ilovescience#3282: why in the world would you need to cache the printing of hello world
bmk#1476: it's a demo
EricHallahan#1051: why not
kurumuz#5695: ok trying
|
ilovescience#3282: if you're doing caching, maybe some sort of calculation would be a better demo, no?
bmk#1476: if it's cached it won't print hello world again so you'll know the caching worked
bmk#1476: oh no the caching itself isn't the main point
bmk#1476: (I just pushed an update to make blobfile not a mandatory dependency, if you were running into that error)
kurumuz#5695: yep :berk: installed it tho
bmk#1476: lol
kurumuz#5695: yeah this is pretty cool
bmk#1476: did you run it
kurumuz#5695: yep
kurumuz#5695: was surprised definitely lol
StellaAthena#3530: $x$ is idempotent if $x\cdot x=x$. For example, $\begin{bmatrix}1 & 0\\0&0\end{bmatrix}$ is idempotent with respect to matrix multiplication
kurumuz#5695: cool party trick :berk:
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/918348924807417896/193204646687408129.png
bmk#1476: oh no it's not just a party trick
bmk#1476: there's a real legit reason to do this
kurumuz#5695: sure
kurumuz#5695: even if i delete that hash though, it still seems to use that cache
bmk#1476: if you want to clear the cache, you increment the v0
kurumuz#5695: ohh
bmk#1476: there are several reasons why making it clear the cache when you delete the hash would be a bad idea
|
kurumuz#5695: what if it also restored my code when I roll back to v0
kurumuz#5695: :thonk:
bmk#1476: that.. hmmm
bmk#1476: lemme think on this
bmk#1476: oh my god that's insane I love it
kurumuz#5695: not exactly easy to remember what the function was doing at 10 verseions ago
kurumuz#5695: :berk:
bmk#1476: yeah that's a problem
bmk#1476: also there's an important reason the version incrementing isn't automatic
bmk#1476: even though I could pretty easily make it be
bmk#1476: sometimes you want to change the function without rerunning
bmk#1476: and sometimes you want to rerun without changing the function
bmk#1476: this compromise allows you to do both
kurumuz#5695: yeah pretty good
kurumuz#5695: can you make this work with, "with cache:"
bmk#1476: sorry that's actually impossible
bmk#1476: the python spec makes it impossible to skip a context manager
bmk#1476: yes i checked
kurumuz#5695: I see
bmk#1476: anyways the implications of this are immense
|
bmk#1476: still gotta polish it up a bit first but this could make life so much easier it's crazy
bmk#1476: gotta wire it back up to all the Env stuff like `@stage` was doing
bmk#1476: getting idempotency right is *hard*
bmk#1476: this is the third iteration of this general shape of thing
bmk#1476: first time was `@once`
bmk#1476: second time was `@stage`
bmk#1476: and now, `@cache`
nev#4905: will you make a pyfra streamlit?
bmk#1476: uhh
bmk#1476: I probably won't but someone should
Atsu#1282: @StellaAthena
How can I get permission to run the gpt-neox training code base on CoreWeave ?
jacquesthibs#6131: I'm going to be working with text for fine-tuning that adds up to ~10 GBs. How should I store it?
StellaAthena#3530: For what purpose?
Atsu#1282: @StellaAthena My intent of the above questions is as follows. README of gptneox has an explanation "If you are an EleutherAI member and using the Kubernetes cluster, the eleutherai_cluster.yml config should be used instead.", and if Kubernetes cluster is in CoreWeave I am not sure about the way to get permission to it and a person who has the authority.
EricHallahan#1051: We are asking why you are asking for access.
StellaAthena#3530: Yes, I understood that is why you pinged me. My question is why do you want access? Are you working on something and lack the compute to test it locally? I typically don't give people I don't know access to million-dollar pieces of hardware simply because they ask for it.
Atsu#1282: @EricHallahan @StellaAthena
Thank you for your responses.
I am working on building language models for Asian languages like Chinese, Japanese, and Hindi etc.
|
The corpus contains OSCAR and other crawled sentences.
Usually, I use colab pro plus in 24 hours, but gpt2 or later seems not to be trained in this colab.
I have read FAQ on your organization, but I am not sure whether this situation is appropriate for your organizationโs purpose.
Atsu#1282: Is your organization's scope a language model building only in English or other European languages? If so, my situation does NOT suit, and sorry for disturbing you. ๐
Sid#2121: We're mildly interested in doing stuff for other languages, but all of our training infra is currently occupied rn + we generally don't just hand out the compute (since it's not really ours to hand out). Coreweave is generally quite open to using their compute for interesting projects with us facilitating, but I think in terms of just training LMs, we're not particularly interested in going beyond what we're currently doing rn.
Sid#2121: Generally we vet projects that we work on quite closely, and typically prefer to work with people who have contributed to some projects here previously, and/or have convinced us with a really cool proposal
Sid#2121: We're also trying to pivot more into alignment / interpretability focused projects these days
nev#4905: kurumuz trained Genji-6B, you might be interested in that
Atsu#1282: I do understand that my situation does NOT suit.
> We're also trying to pivot more into alignment / interpretability focused projects these days
What kind of experience is required to join this field?
I wrote my introduction at here and I have experience with pytorch and tensorflow but I do not have publications on these fields.
Sid#2121: if you can do the work, we don't care what experience you have. But if you know nothing about alignment, here are some good resources https://discord.com/channels/729741769192767510/730451873613611079/836001120513163335
naclbbr#9203: In this server there have been open discussions about the other languages (such as French, Korean, and Japanese which I have been working for opensourcing pre-trained model and datasets) but EleutherAI founders and core members I believe are mostly working on English-based projects. As for CoreWeave if you have active project(s) you can contact them for opening a new account.
naclbbr#9203: They have excellent support and resource availability compared to other larger cloud services such as GCE/AWS
Atsu#1282: Thanks! Indeed, I have read some of these because I am interested in RL based interactions between humans and LMs
Artia#1759: Some say thereโs a 20b model training, is it real?
|
Quill#9732: it's a mystery
bmk#1476: by policy, we don't provide timelines or roadmaps
Sid#2121: our wandb is public though, lol
Quill#9732: also eleutherai has public wandb reports which would show metrics for such a run if any exists/existed
cfoster0#4356: Would take a lot of work to fake wandb reports, wouldn't it? ๐ค
Quill#9732: nah, known scaling laws + lognormal noise
Artia#1759: So you mean itโs not true? Why not just say it
Quill#9732: confirming or denying could be interpreted as inviting additional questions regarding timelines or roadmaps
Artia#1759: ... I can feel the vibe like... a mega corporation
cfoster0#4356: Wait what
Quill#9732: (ssh!)
Artia#1759: Thought itโs an open source project made by developers around the world
Quill#9732: try discord search
cfoster0#4356: It's what it says on the tin
cfoster0#4356: No bs
Quill#9732: https://wandb.ai/eleutherai/gpt-thicc/reports/20B-Pretraining--VmlldzoxMTk3NjEy metrics are here
cfoster0#4356: Aside from that,
>>> by policy, we don't provide timelines or roadmaps
Sid#2121: i don't know how much more open we can be, we literally log everything we do out in public, all our code is public lol
|
Artia#1759: Wait so it exists
Quill#9732: no comment
Artia#1759: Thanks for the leak
Sid#2121: We're avoiding giving a straight answer because generally if we say "X model is training and will be done in Y time" that a) sets expectations and b) makes the work feel like an obligation. No one who does work for eleuther should be obligated or feel pressured by deadlines. We do everything in our free time, and stuff will be done when it's done
Sid#2121: if something goes wrong with some training job, we reserve the right to take as long as we want to fix it and delay release indefinitely, because no one's paying us for this stuff
Quill#9732: I had a *ton* of expectations for when the O(100B) model would be released and blame eleutherAI, directly and specifically, for breaking a verbal contract that they never made in particular and the global semiconductor shortage in general
cfoster0#4356: https://cdn.discordapp.com/attachments/729741769738158194/918601455328456784/5xaddb.jpg
Artia#1759: Understandable, although I donโt think anyone would blame if something goes wrong when training the AI in this project
Quill#9732: me_irl https://cdn.discordapp.com/attachments/729741769738158194/918601833746952253/flat750x075f-pad750x1000f8f8f8.png
Sid#2121: yes, but people may be disappointed
Sid#2121: can't be disappointed if you don't set expectations
Sid#2121: https://tenor.com/view/smart-thinking-thoughts-think-ponder-gif-7713620
naclbbr#9203: You can also train up to 40B I believe with the current MTJ V2 codebase if you want to
naclbbr#9203: We trained a 20B model for about 1/3 but not sure if I would continue with TPUs :optofiber:
Sid#2121: who's we? ๐
naclbbr#9203: Oh, it's the company I'm running
Sid#2121: Bit192?
naclbbr#9203: yeah
Sid#2121: where does an experimental game design company get the budget to train a 20B param model lol
Sid#2121: also interested to hear how you're using LMs in your games
|
naclbbr#9203: TRC granted us multiple v3-128 accesses before
naclbbr#9203: originally I just wanted to have an interactive chatbot in the game
StellaAthena#3530: That's not enough to train a 20B model tho...?
naclbbr#9203: but I've just been maintaing public inferencing site for the past months
naclbbr#9203: not quite enough, yeah
naclbbr#9203: only had a small dataset atm
StellaAthena#3530: tbh I would finetune GPT-J on your data then
naclbbr#9203: our model is Japanese so finetuning wasn't in the choice
Sphinx#2092: Just finetune anyways.
StellaAthena#3530: People have had a lot of success finetuning GPT-J into other languages
naclbbr#9203: tokenization is a hell
Sphinx#2092: Just switch tokenizers and align common embeddings.
StellaAthena#3530: https://huggingface.co/NovelAI/genji-jp
StellaAthena#3530: https://huggingface.co/VietAI/gpt-j-6B-vietnamese-news
naclbbr#9203: I think finetune can explain it better (he shared the experience with me in this server) but GPT-2 Tokenizer is extremely inefficient w/ Japanese
StellaAthena#3530: Sure
Sphinx#2092: So switch tokenizers?
StellaAthena#3530: So use a different tokenizer
naclbbr#9203: We used SentencePiece to train a new tokenizer
StellaAthena#3530: Great, so finetune GPT-J with it?
|
Sphinx#2092: ^
naclbbr#9203: hmmm
naclbbr#9203: I just didn't find much incentive to use GPT-J for Japanese base because tokenization is entirely different. We already made a progress with wip pre-trained model from scratch anyway, and preparing for additional training rn
Sphinx#2092: I mean, it costs you nothing lol
Sphinx#2092: it's just a different init, if you would.
naclbbr#9203: sure thing
Atsu#1282: No. sentence piece is not a must for languages with space delimiters.
But some languages have no boundary of words.
Could you imagine recovering all space delimiters for a given English sentence whose space is replaced with empty stings ?
Genji does not recover correct boundary.
naclbbr#9203: SentencePiece also does a lot of normalization work
EricHallahan#1051: The pile of linear algebra has just been stirred a bit already.
naclbbr#9203: just for example: we (Japanese) use 1-byte ! and 2-byte ๏ผ randomly by writer's preference
naclbbr#9203: and many different kind types of dots incl. ./.../ใป/โฆ/๏ผ๏ผ๏ผ/ใ etc.
Atsu#1282: I think that byte-T5 has an universal ideas to tokenize cross lingual ones.
Atsu#1282: https://arxiv.org/abs/2105.13626
StellaAthena#3530: Yes, I could probably do this in english
naclbbr#9203: I thought byte-T5 may work with Japanese as we have a lot of 1-char words (like cat/็ซ dog/็ฌ)
StellaAthena#3530: I'm not an expert in tokenizers or Japanese, but if the tokenizer means you can't transfer learn GPT-J into Japense that means youneed a better tokenizer
naclbbr#9203: and there are many commonly used dialect speaks in dialogues which typically uses morphology at the end of each sentence
|
StellaAthena#3530: Not that you can't do what you want with GPT-J
naclbbr#9203: I can't think of a better tokenizer than T5/SentencePiece for Japanese rn >_>
Sphinx#2092: lol
Sphinx#2092: Just build your tokenizer with sentencepiece, make it the same size as GPT-J. Then align the embeddings so whatever is in the intersection has the same index.
Sphinx#2092: ANd start from GPT-J.
Sphinx#2092: That's the entire suggestion.
Sphinx#2092: The only step required is the aligning, otherwise nothing changes in your model.
Atsu#1282: Yes, human could easily recover it, but machine is different from ourselves. Chineses are also struggling to tokenize raw strings.
Sphinx#2092: and you get to start from a pretrained model.
Sid#2121: with japanese i doubt there's much intersection at all tbh, aside from byte level stuff
Sphinx#2092: This even works with new alphabets
Sphinx#2092: The core thing is the main body of the transformer is still useful.
naclbbr#9203: yup, that's my concern + (likely) no normalization in The Pile
Sphinx#2092: The thing is, it's not even a concern. Unless you think starting from GPT-J is worse than from scratch.
Sid#2121: but yep @naclbbr this is the point - even for other modalities
naclbbr#9203: I think there was a French finetune model for J recently released that worked pretty well?
naclbbr#9203: which I think used the exact same (GPT-2) tokenizer
Sphinx#2092: Somewhat related: https://arxiv.org/abs/2012.11995
Sphinx#2092: People also had success with the strategy I recommended for MT, even for languages with unique alphabets.
naclbbr#9203: I do understand what you telegraphed here
|
naclbbr#9203: and btw I don't think starting from GPT-J would have been worse. It would at the very least have retained single-character token level English knowledge
naclbbr#9203: although Google C4 Multilingual still contained a lot of foreign languages so some will be overwritten with our datasets
Atsu#1282: I think that this has important point. While there is not mathematically proved theorem, we might be able to obtain an universal compressing function which outputs an embedding matrix for given text corpus, even if the given text corpus is not human readable ones like bytes data of exe files.
Atsu#1282: By-T5 contains these universal compressing ideas.
Sphinx#2092: Don't over think it. Just big model go brr
naclbbr#9203: that's true
naclbbr#9203: although GPT3 DaVinci really underperforms with Japanese from my experience
naclbbr#9203: GPT-J performed better
naclbbr#9203: (dataset representation?)
naclbbr#9203: DaVinci was "oh no it's not even looking coherent at all" bad w/ jp
Sid#2121: OAI filtered non-english data from pretraining, we didn't
naclbbr#9203: ahh
Well it still spews 5ch(Reddit counterpart)-like stuff when some Japanese is fed
Atsu#1282: Because this boundary recovering problem is NP-hard, not only CRF and approximation algorithm with Viterbi decoder but also large dictionary of entity nouns is required to preserve the boundary of entity. Human could easily recover the boundary because they remember the entity dictionary as a prior knowledge. Like German, some language including Chinese and Japanese creates novel nouns by concatenating two or more words. Tokenization by space boundaries creates this unnecessary costs to update dictionary for solving NP-hard optimization.
So, we could break down all sequences into byte sequences (or binary ) and then should compress into sequences of 32k code symbols as an information theoretic manner.
naclbbr#9203: If memory and training/inferencing cost allows byte-to-byte tokenization might even be better than having long tokens
naclbbr#9203: There are situations GPT-2 Tokenizer acts a bit weirdly thanks to space boundaries (and inclusion of space in tokenization)
Atsu#1282: In ACL 2019, an UK student said "Why don't Koreans and Chinese and Japs speak English ? Indians do!! Then, we get a more self-supervised data !!!"
He is not wrong in the sense of pure capitalism.... ๐
|
naclbbr#9203: btw Korean GPT-J has recently been released by KakaoTalk which I think is also a pre-training from scratch:
https://github.com/kakaobrain/kogpt
StellaAthena#3530: If this is GPT-J / mesh-tensor-jax, the complete lack of citation is extraordinarily shady
naclbbr#9203: I believe this is GPT-J as parameters look identical and provided tokenizer_config.json says mesh-transformer-jax
naclbbr#9203: not sure why there is no citation for MTJ
๐ค
Atsu#1282: Yes, Korean characters are like Japanese Hiragana, that is, one sound unit corresponds to one character.
Japanese people might be harder to read Hiragana only sentences, but tokenizer is completely different from humans.
StellaAthena#3530: Oh, I'm not doubting you. I'm politely accusing them of plagiarism
Atsu#1282: Did they use only config values ? or did they use code base ?
naclbbr#9203: inferencing code imports GPTJForCausalLM
StellaAthena#3530: Where? The HF page doesn't, and lists it as a GPT-2 model
naclbbr#9203: https://github.com/kakaobrain/kogpt/blob/main/kogpt/inference.py
StellaAthena#3530: Yeah, this is low-quality plagiarism
naclbbr#9203: Mmmngh. Kakao is a large company so why they did this
StellaAthena#3530: Probably because the license that they're putting it under is a violation of the MTJ license
naclbbr#9203: Good find.
Atsu#1282: They even does not differentiate huggingface inc and the other organizations to feed models and datasets.
Atsu#1282: If this is a copy-left like GPLv3 the problem is serious. There are a lot of GPL court cases ๐
MrSenator#4844: So what are ya'lls favorite image generation models/implementations right now? StyleGAN2? ruDalle/Dalle? Diffusion?
|
EricHallahan#1051: lol the readme is completely ripped from model hub.
EricHallahan#1051: I know because I'm the one who did the fancy formating.
StellaAthena#3530: Copy and pasted so hard it breaks the GitHub UI
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/918628672729153586/Screen_Shot_2021-12-09_at_5.22.18_PM.png
StellaAthena#3530: lol
EricHallahan#1051: The math in the hyperparameter section is a dead giveaway.
StellaAthena#3530: yeah
EricHallahan#1051: https://cdn.discordapp.com/attachments/729741769738158194/918629006843191327/unknown.png
EricHallahan#1051: So yeah, they totally didn't even try.
EricHallahan#1051: Here is the commit in Model Hub to prove it.
https://huggingface.co/EleutherAI/gpt-j-6B/commit/d3d295606169fbaf668141935a19865740c0ce20
EricHallahan#1051: (I spent way too much time on it lol)
EricHallahan#1051: (but it looks nice)
EricHallahan#1051: `:)`
naclbbr#9203: it does ๐
Atsu#1282: Wow, unbelievable ! Did they just copy the MD source of huggingface's hub into their README ?
EricHallahan#1051: Seems like it.
alstroemeria313#1694: It's a fine-tune, not just the original weights?
Atsu#1282: Nope, Korean language usually requires an entirely different manners to tokenize.
alstroemeria313#1694: Ah
|
alstroemeria313#1694: So it was from scratch?
EricHallahan#1051: Just replace the embedding matrix then.
kindiana#1016: easy to test by downloading it and checking cosine similarity with J weights
KJSMAN#9839: Korean has japanese-like grammar structure and made by combinations(ใ
+ ใ
+ ใด => ์)
mike_landis#4103: You should look up unsupervised word segmentation. There's a robust line of research in the computational linguistics community. Back in the 2000s when hierarchical bayesian models were the rage, there were models to simultaneously segment and cluster word senses from scratch. Pretty cool stuff as it relates to first language acquisition
naclbbr#9203: I'm guessing that Korean is easier to transfer given that they still use spaces
mike_landis#4103: In particular Sharon Goldwater at the University of Edinburgh
mike_landis#4103: https://homepages.inf.ed.ac.uk/sgwater/publications.html
KJSMAN#9839: Hrm
KJSMAN#9839: They licensed their weights as CC BY-NC-ND
Atsu#1282: Yes, I do know the line of research but predictions often does not preserve boundary of nouns of entities. So, HBM is not widely used in japanese NLP...
KJSMAN#9839: You know, `ND` is 'No derivative'
KJSMAN#9839: So seems that I can't finetune and upload that model :/
naclbbr#9203: yea it's kinda pointless you have an open model and you aren't allowed to modify it
naclbbr#9203: though you can still prompt soft-tune it, I guess
StellaAthena#3530: @naclbbr you canโt even use their model, as the tokenizer they uploaded to HF has a dependency on another file that isnโt included
StellaAthena#3530: The path to said file is listed as `"/home/curtiskim/curtis_dev/mesh-transformer-jax/tokenizer/v1.5_b/`
naclbbr#9203: lol I saw that
naclbbr#9203: it looks like it still works without that dir
naclbbr#9203: (it's sentencepiece)
|
finetune#0907: spent some time trying out the model on ai-novelist btw. definitely notice the difference in using a more appropriate tokenizer compared to genji-jp in generation quality. curious how a gpt-j finetune with better tokenization will do now
StellaAthena#3530: @naclbbr ahhh. I just saw that the online inference was borked, didnโt try to run it locally
naclbbr#9203: I think someone did a run of our wip model and genji-jp side by side
naclbbr#9203: (I regret a lot of decisions about the tokenizer we made for the wip model as there are many things I never come to aware of until I actually was able to run a lot of real-life inferencing tests, such as character efficiency for dialogues weren't as good as I was hoping for)
naclbbr#9203: one issue we had with the current wip is that because we have relatively many "aaaaaa" 's or "eeeeee" ("whaaat?!") 's as in excl. such sequence spawns same-token loops a lot
naclbbr#9203: could have been better with a set of common "aaaaaa" 's
EricHallahan#1051: aaaapilled
bmk#1476: aaaaa
naclbbr#9203: It's not even aaaa*h* so once it loops it never stops
bmk#1476: are you doing beam search or anything
finetune#0907: don't think so. probably just actually common in the dataset which makes the general aaaa tendency worse
finetune#0907: got about 2.2k occurences of 17+ long ใ sequences in about 6gb of japanese text data
bmk#1476: ใใใใใใซ
naclbbr#9203: this reaaaaaally shucks
naclbbr#9203: the damage can be controlled somewhat by regex'ing in pre- and post-processing phase but obviously, this is where a better tokenization will help
Sid#2121: even the best tokenization is not going to stop aaaa
๐
ฌ gabriel_syme ๐
ฌ#3220: Wild https://twitter.com/tg_bomze/status/1468586352291819527?t=bwImJJDW18WaY89-61uRxg&s=09
Kernow#3794: What type of image generation models/implementations does @BATbot use in #the-faraday-cage-archive?
Kia#2550: VQGAN+CLIP And CLIP Guided Diffusion model
Kernow#3794: Excellent, thank you!!! Do you know the process of making getting that set up with the same bot on another server?
|
StellaAthena#3530: Ask @BoneAmputee
Kia#2550: No sorry,Im not sure if there's public code yet For Image Generation Discord bots yet
Kernow#3794: Dang
mega b#6696: hi yalls, so i've been wanting to build a model that automatically grades a short answer prompt (ASAG) that reliably grades the student's answer based on the reference answer and question. I've tried attempts on GPT-2 medium but with very miniscule accuracy, should I try simply prompting GPT-J, or should I try training a T5, or similar text to text, model?
TY#4345: it sounds like an encoder type model (e.g. BERT) is more suitable
mega b#6696: interesting, why would that be so?
TY#4345: since you are not generating any text in this task, right?
mega b#6696: grade, as in PASSING or FAILING but otherwise nope
TY#4345: so it is a binary classification task, and it is much more common to use an encoder to implement a classifier.
TY#4345: there are also recent practices to use very large LMs for zero-/few-shot classification tasks, but it might be easier to start with the encoder appraoch IMO
mega b#6696: thanks for the helpful info, will look into this! ๐
mega b#6696: ZAMN ๐
Atsu#1282: The generated strings like "saaaacks" have very low Kolmogorov complexity which measures repetitions and redundancy of finite strings.
I always survey about text generations with less redundancy or repetitions but no paper directly solve this problem as far as I know.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/918740348304519188/20211209_224604.jpg
Atsu#1282: That's mad but you can say that again๐
nev#4905: that uses this https://github.com/mttr2021/MTTR
nev#4905: each frame individually looks ok but together it's messy https://cdn.discordapp.com/attachments/729741769738158194/918763584157253632/Screen_Shot_2021-12-10_at_10.18.16.png
nev#4905: conditioning it on previous frames would solve that
zach#3463: Hi all, I'm Zach. My background is in social neuroscience, but I work as a data scientist now. I work with high level NLP tools, but I am not an NLP researcher.
|
I am planning on lurking in here. Maybe I can contribute to a project at some point.
EricHallahan#1051: Welcome!
navidre#4804: Hello, I am Navid. Doing my PhD in University of Alberta in Canada. I learned about this community and I am interested to get involved. May I know where/how I can start?
bmk#1476: oh a fellow albertan
bmk#1476: welcome
bmk#1476: I'm curious who's your advisor?
navidre#4804: Thanks ๐ ECE department. Dr. Reformat
EricHallahan#1051: Welcome! If you haven't already, read the #rules and the FAQ (https://www.eleuther.ai/faq), there is some good information in there. After that, check out the project board at https://board.eleuther.ai to get a sense of what we are up to.
navidre#4804: Thanks Eric. I will do that
bmk#1476: ah, not familiar sorry
IGg#7871: hi, could anyone think of how to create 3d objects from text and initial image? they are very intelligent I know you can get it if you concentrate
bmk#1476: stop clickbaiting pls
Asca#3513: https://twitter.com/mangata_art/status/1469391437028634628?s=21
Isnโt this your bot being used for this, or am I tripping?
Asca#3513: Because if this isnโt one of you this dudeโs profiting off your bot you let everyone use for free
Asca#3513: It looks a little different but Iโm not totally sure
BoneAmputee#8363: vqgan+clip code is pretty widespread. that looks like it's from NerdyRodent's repo
BoneAmputee#8363: maybe :cat_thonk: hard to say for sure
|
Kia#2550: Just search it
Kia#2550: Most easiest way to tell by looking at the log,They don't really have the the perm to delete it
Asca#3513: Bet ๐
Louis#0144: @Sid how thicc
Alexander B.#0332: Hi everyone! Regarding the model size in our demo: what happened there is that we took a model with 1.1B params (64 layers with d = 1024 + embeddings) and shared the weights of some layers reducing that to 125M params. Still, the resulting model does the same amount of computations (and has the same number of activations) as the 1.1B model.
Why did we share the weights? That's because typical Internet connections are orders of magnitude slower than networks in HPC clusters, so we should use all means to reduce the communication size (including weight sharing, gradient compression, etc.) to make training over the Internet practical.
Does weight sharing hurt the model performance on downstream tasks? Yes of course. However, if increased, the model with shared weights still needs less params to achieve the same performance! This is investigated in the ALBERT paper by Google (https://arxiv.org/pdf/1909.11942.pdf, Table 2). There, for instance, ALBERT (= BERT with shared weights) with 60M params beats BERT with 334M params on 3 of 5 downstream tasks **while having 80% less params!** Thus, weight sharing is a way to trade extra computations for reduced communication (necessary for us, since communication is a bottleneck in our setup).
**Is it fair to compare our model to the standard 125M model without sharing?** I don't think so, since the ALBERT paper shows that models with sharing are much more powerful than models without sharing in case of the equal number of params.
Surely, we can't say that our model is a 1.1B model as well - it's somewhere in the middle. Weight sharing makes it impossible to compare models just by the number of params, we need to consider the amount of computations as well.
Kia#2550: @kurumuz
Sid#2121: https://arxiv.org/pdf/2111.09839 did you see this recent work from colin raffel using fixed sparse masks? It should be useful for federated learning by greatly decreasing communication volume - you can distribute a subset of the data to each user and have them only update the parameters that would be most important wrt that subset of data.
Is something similar to this what you're already doing? Or are you just distributing all parameters to all users, then averaging weight updates?
Sid#2121: also - do you have any measures to prevent adversarial attacks? What would stop me from e.g sending a super large gradient update that would break training
Sid#2121: or just sending the inverse of my gradients, or something
Sid#2121: welcome, btw ๐
|
Alexander B.#0332: That's a great question! One possible defense is replacing the naive averaging of the peers' gradients with an aggregation technique that is robust to outliers. An example of a suitable technique is CenteredClip (described in https://arxiv.org/abs/2012.10333). What's good about this paper is that the authors prove that such aggregation does not significantly affect the model's convergence and is robust to a certain share of attackers. However, it assumes that aggregation (the CenteredClip procedure itself) is performed by a trusted server, so it's not enough for our case (our system is fully decentralized, every peer takes part in aggregating gradients and we can't trust all of them).
That's why we had to come up with a new robust aggregation protocol for decentralized system that does not require this assumption. We recently published the resulting paper here: https://arxiv.org/abs/2106.11257 This protocol uses CenteredClip as a subroutine but is able to detect and ban participants who performed it incorrectly (thanks to several cryptography techniques).
Both of these papers test the attacks you've mentioned! E.g., sending the inverse gradients is called "bit-flipping" in the 1st paper and "sign flipping" in the 2nd one.
Unfortunately, the algorithm in the 2nd paper is rather complex and we didn't implement all parts of it in our system yet. Hopefully, we will finish it if we ever scale up to collaborations with thousands of peers where the risk of attacks is serious ๐
For now, while we're testing our system with smaller collaborations, we're just using authentication via Hugging Face + model checkpointing (assuming that if someone attacks us, we can determine their username from logs, ban them manually, and revert the model to the latest checkpoint unaffected by the attack). We also have an option for using the original CenteredClip (it helps with attacks involving wrong gradients and hardware errors).
Alexander B.#0332: Thanks for the link! We didn't try this particular technique yet, though we're really interested in everything improving communication efficiency.
> are you just distributing all parameters to all users, then averaging weight updates?
We don't have a central server receiving all updates and distributing parameters (it would be a bottleneck) but use Butterfly All-Reduce (where i-th peer becomes responsible for aggregating i-th part of the gradient vector).
However, peers indeed average the full gradient vector at the end of each step! So we use large batches to make this happen as rare as possible, overlap computations with sending/receiving gradients, etc.
Alexander B.#0332: Thank you! Sorry for the long messages, I hope it doesn't hurt discussions in this channel ๐
Sid#2121: not at all, thank you for the detailed response!
Sid#2121: do you have an ETA for the DALL-E model you're training currently? Can you give some data on what average % of the training time is occupied with allreducing grads vs. forward/backward passes?
Sid#2121: cool work btw, we get a lot of questions here about whether we can use a distributed approach like this, and the answer we generally give is 1) slow communications and 2) not robust to adversaries - so it's good to see work going toward fixing these issues
|
Sid#2121: hopefully one day it's robust enough to work on a very large scale
Alexander B.#0332: Currently, allreduce takes ~20 sec and forward-backward passes take ~120 sec. The latter becomes faster when more peers join (one can observe that in logs while joining via Colab).
Actually, you can do allreduce and forward-backward passes simultaneously if you allow delayed parameter updates (a 1-step delay doesn't usually hurt model quality much), though we've decided to minimize risks and disabled this feature in the NeurIPS demo.
As for the ETA, we expect convergence somewhere at loss ~4.5 (the current loss is 4.77), however, we don't have much experience with DALL-E yet and we're not sure how long it will take. Overall, our plan is to conduct several experiments with DALL-E (e.g., test different variants of weight sharing) and then move on to the full-fledged collaborative run with a larger model.
> cool work btw
Thank you! I'm happy to answer any questions about what we're doing ๐
Utlagi#0001: any quick resources for fine-tuning CLIP ? I've got it working quite well and made a nice performance analysis dashboard and stuff....but I'd like to train it for my specific domain
mega b#6696: works fabulously ๐ more testing needs to be done on the current model but overall accuracy is almost on point
mega b#6696: sucks not alot of ASAG datasets are present
suh#2879: is there a discord or channel for noob ml questions?
bmk#1476: see #communities
bmk#1476: this server is not for noob questions
suh#2879: ok thanks
nev#4905: is it possible to regain TRC access after not replying to the feedback form for several months? asking for a friend :guilty:
๐
ฌ gabriel_syme ๐
ฌ#3220: You might ye. I would make sure to write a detailed email of what you did and plan to do, maybe send some examples (code, outputs, etc.). They seemed like a nice bunch then but not sure if things changed
nev#4905: ah cool, do you just email them?
|
Ajay sahu#2540: Hello, does anyone has code to train open Ai CLIP ?
Ajay sahu#2540: I mean fine-tune it
naclbbr#9203: Hi! Thanks for sharing that, very interesting re: Byzantine generals issue with de-centered training. I was wondering how the experiment was safeguarding against potential aggregation.
๐
ฌ gabriel_syme ๐
ฌ#3220: Sry I'm in and out today. Yeah just send to the trcsupport email directly
anthony_fuller#1075: Has anyone trained vq-vaes? Seems a bit trickier than I expected. Trying to quantize image patches via a FFN, using Lucid's repo...
cfoster0#4356: What's your setup? I've had an easier time with discrete VAEs than with regular VQVAEs
alstroemeria313#1694: what's the difference exactly?
alstroemeria313#1694: do you mean gumbel quantization vs vector quantization?
cfoster0#4356: Mostly yeah
alstroemeria313#1694: Ahh
alstroemeria313#1694: Yeah my own experiments have been all Gumbel
cfoster0#4356: You could still do Gumbel based on the codebook distances, which is a weird middle ground
anthony_fuller#1075: basically 512x512 image -> 16x16 patches -> FFN -> Lucid's vq-vae -> FFN
dim = 256, codebook_size = 512, decay = 0.8, commitment_weight = 1.0
anthony_fuller#1075: and loss = vq_loss + reconstruction_loss
cfoster0#4356: Ah makes sense. Are you seeing problems with that setup?
anthony_fuller#1075: yeah, its not even converging. I may be making a noob mistake. I'm not explicitly copying gradients from decoder to encoder, I assume lucid's vq-vae function handles that, but I'm not sure.
anthony_fuller#1075: let me play around with someone's example notebook. I'll report back ๐
anthony_fuller#1075: seems like better data normalization + replacing the FFN with a tiny CNN worked
Dashiell#8739: Does anyone ever do model parallel convolution? Like if I have a v3-8 TPU and wanted to split up the calculation of a model that wouldn't fit on a single node? On the one hand it doesn't seem like it would be hard to implement myself, but a cursory search didn't turn up any libraries that do it and just the fact that it isn't used that often is giving me pause
|
chirp#4545: IIRC convolutional models usually arenโt that big
Dashiell#8739: @alstroemeria313 is the expert, but it feels like diffusion convolutional U-Nets really rack up the parameters
Dashiell#8739: Like, I'd like to train a model for 512x512 images, but it'd probably need to be close to a billion parameters, which wouldn't fit in a TPU memory
Dashiell#8739: Especially not optimizing w/ Adam etc etc
tpapp157#3643: Full convolution kernels can have a lot of parameters, which of course get multiplied by the optimizer. But often even worse for memory is resolution because layer activations must also be held in memory for the backward pass. If you're working with high resolution images (512+) then layer activations can take up huge chunks of memory. This is why many convolutional architectures will downsample the resolution pretty aggressively in the early layers. Unets get it even worse because they need to scale back up to full res again at the end.
Dashiell#8739: do you know if anyone does model parallel convolution? Or in the big-vision-model regime is it just all vision transformers?
tpapp157#3643: No I don't really have much experience with distributed training of large models.
AI_WAIFU#2844: I think jax can do it if you use pjit/gsmpd
tpapp157#3643: Something I sometimes find useful, though, when running into memory constraints is pretraining with large batches of small crops, and then finetuning with small batches of larger crops.
kindiana#1016: yeah pjit can do it
kindiana#1016: it uses the tpu spatial partitioning stuff for activation sharding
kindiana#1016: if you split on a spatial dim
tpapp157#3643: Reminder of this technique: https://arxiv.org/abs/1907.05550 . Recently came in handy for me again when preprocessing was too slow.
Dashiell#8739: whoa, I was not really aware of pjit
Dashiell#8739: that looks practically like magic
Dashiell#8739: and I don't even have to worry about placing x_i parameters on i device?
kindiana#1016: yeah and when it breaks you need to be a sorcerer to fix it
Dashiell#8739: lol gotcha
kindiana#1016: as long as you make your init function and inference/training functions take the same shard specification, you don't have to worry about how they are placed
kindiana#1016: see: https://github.com/kingoflolz/mesh-transformer-jax/blob/master/mesh_transformer/transformer_shard.py#L389
|
Dashiell#8739: I've been meaning to really dig into your MTJ code--I guess now is the time to do it!
chilli#5665: I think a decent chunk of people use Zero-3 type techniques too
chilli#5665: but generally data-parallel gets you a lot further with convolution networks
chilli#5665: due to the proportion of activations vs. parameter memory
kindiana#1016: if you want high resolution you might need to split the activations for a single example across multiple tpus
kindiana#1016: which fortunately tpus support pretty nicely
chilli#5665: yeah that's the main case
chilli#5665: btw, unrelated
chilli#5665: but can you use tensor cores for other operations?
chilli#5665: Like, let's say you wanted to do a max reduction instead of a summation
kindiana#1016: I don't believe so
kindiana#1016: but I'm not tooo familiar with it
chilli#5665: what about on TPUs?
chilli#5665: like, what are TPU peak flops for non-matmul operations?
kindiana#1016: I believe it has some vector units
kindiana#1016: would make sense if it was about the same throughput as input/output to mxu?
kindiana#1016: so like 256 wide
kindiana#1016: I'm not sure if it has any special cases for reduction
chilli#5665: hmm
chilli#5665: like, let's say a TPU was just performing an operation like
|
chilli#5665: iterated multiplication
chilli#5665: ```
def f(x):
return x*x*x......x
```
chilli#5665: any idea about the peak flops it could hit?
chilli#5665: so high compute intensity
chilli#5665: just not a matmul
kindiana#1016: I think it will need to be pretty wide
chilli#5665: I'm not sure what that means ๐ค
kindiana#1016: like, x might need to be a wide vector for good throughput
chilli#5665: wide => "big"?
kindiana#1016: yes
chilli#5665: lol
chilli#5665: sure, let's assume it's arbitrarily large
kindiana#1016: but I think they might have optimized for this type of workload because of rnns
chilli#5665: I don't think TPUs could hit anywhere near peak flops on this
kindiana#1016: no
chilli#5665: right?
kindiana#1016: at least a 256x slowdown
|
kindiana#1016: lol
chilli#5665: oh really?
chilli#5665: interesting
chilli#5665: I wonder how much Nvidia GPUs suffer on this
chilli#5665: they hit 20 non-tensor core flops
chilli#5665: but I'm not actually sure if they have other matmul-dedicated hardware units
chilli#5665: I guess this is testable
kindiana#1016: I think you could get pretty close to 100% non-tensor core flops on gpus with this
kindiana#1016: or 50% because not fma I guess
kindiana#1016: lol
chilli#5665: oh lol, good point
chilli#5665: sure
chilli#5665: interesting, so I guess GPUs are quite a bit faster than TPUs
chilli#5665: for non-matmul ops
kindiana#1016: yeah
chilli#5665: hmm
chilli#5665: intriguing
kindiana#1016: most of the alus in a tpu are wired up in the fixed function matrix unit
kindiana#1016: doesnt help if you are not doing matmuls
chilli#5665: I think you might actually end up running into issues here with recomputation
|
chilli#5665: on TPUs
chilli#5665: I mean that recomputation often leads to higher compute intensity
chilli#5665: and that since TPUs hit much lower flops on non-matmul operations here
chilli#5665: it's easier to recompute enough that you become compute bound
kindiana#1016: I don't get it lol
kindiana#1016: in my mind most of your recompute is either memory bound or matmuls
kindiana#1016: memory bound runs at ~same speed on both
chilli#5665: if you're recomputing enough
kindiana#1016: same for matmuls
chilli#5665: then you can become compute bound
chilli#5665: or really, it's not the recomputation amount that matters here
chilli#5665: it's the size of the graph you can fuse together
kindiana#1016: wdym?
kindiana#1016: like, if you fuse a single layer vs a whole network
kindiana#1016: you have different performance characteristics?
chilli#5665: as in, on GPUs you can create a larger fusion group (of say, pointwise ops) without suffering any performance hit
chilli#5665: compared to TPUs
kurumuz#5695: @rom1504 Hey, I was curious about why jit is not used in your clip-retrieval code. Is it because it's actually slower or some other reason?
kindiana#1016: if you are alu bound on pointwise ops, you must be doing quite a lot of pointwise ops lol
kindiana#1016: but I feel like this only applies to pointwise ops right?
|
kindiana#1016: as soon as you fuse a matmul in there
chilli#5665: reductions too
chilli#5665: but yeah, it doesn't matter once you get to a matmul
chilli#5665: hmm
kindiana#1016: do most networks actually have that many pointwise/reduction ops in a row?
kindiana#1016: I feel like thats kind of a weird edge case
chilli#5665: I don't know tbh
chilli#5665: things like trig ops seem to quickly fill up your quota
kindiana#1016: people put trig ops in their networks?
kindiana#1016: maybe nerf I guess lol
chilli#5665: well, gelu
chilli#5665: approximation
chilli#5665: does
kindiana#1016: you could also use the faster gelu approximation ๐
kindiana#1016: but I don't think that really matters if your matmuls are a reasonable size
kindiana#1016: the throughput of your matrix unit needs to scale with d^2 but the vector unit with d
chilli#5665: I also don't have a great mental model
chilli#5665: of what's actually fusable
chilli#5665: ๐ค
chilli#5665: Like, pointwise ops are trivially fusable
|
chilli#5665: but once you throw in reductions
chilli#5665: and broadcasting
chilli#5665: figuring out which of these bandwidth bound ops are actually fusible gets trickier
kindiana#1016: I don't think there's too many fusable things in transformers
chilli#5665: haha
chilli#5665: that's where you're wrong ๐
kindiana#1016: ๐ค
chilli#5665: even say, any combination of dropout + residual + bias + activation
chilli#5665: is fusable
kindiana#1016: well yeah
kindiana#1016: but those are not particularly fancy
chilli#5665: sure
kindiana#1016: and should be fused already
chilli#5665: well...
kindiana#1016: well
kindiana#1016: maybe not the residual
chilli#5665: not in PyTorch lol
kindiana#1016: especially not the dropout
kindiana#1016: but who needs dropout xP
chilli#5665: then there's stuff like layer norm
|
kindiana#1016: well that's really not trivial to fuse haha
kindiana#1016: considering you need to reduce over d_model
chilli#5665: I think forward isn't so bad
chilli#5665: backwards is more difficult
chilli#5665: it's a lot easier if you split it into 2 kernels iirc
kindiana#1016: for systolic array based accelerators the memory access patterns are all wrong I believe
kindiana#1016: so its pretty hard
chilli#5665: for forwards or backwards?
kindiana#1016: forward
kindiana#1016: for backwards you can cheat and save the stats so its pointwise
chilli#5665: not sure what's that hard about it ๐ค
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/919516409208377394/unknown.png
kindiana#1016: getting late, continue chatting tomorrow?
chilli#5665: sure
chilli#5665: ๐
chilli#5665: I haven't totally thought about the backwards
chilli#5665: but Philippe mentioned at some point that there's a patttern like
chilli#5665: ```
x.sum(dim=0)
x.sum(dim=1)
|
```
chilli#5665: and it's tricky to fuse those 2 reductions together
kindiana#1016: yeah
kindiana#1016: I'll elaborate tomorrow ๐
chilli#5665: I have some stuff to do that i've been procastinating for weeks on
chilli#5665: so I'll go do that
chilli#5665: lol
annahahn#7168: Is anyone familiar with any tools that allow for re-lighting of an image? I have been looking at googleโs total relighting https://hackernoon.com/introducing-total-relighting-by-google-2t4p24g1
But I havenโt found anything publicly available that can help to match lighting conditions between two images / change lighting direction etc
chilli#5665: on the nose btw, I reached about 9.3 teraflops on GPU for that kind of function
chilli#5665: and I seem to reach about ... 17 gigaflops(?) on a TPU-v2
chilli#5665: This is actually kind of a neat way of measuring how many ops a primitive is worth
chilli#5665: for example, I can reach 2.3 teraflops of `x.cos()` ops, which means a `cos` must be about 4 ops on an A100
chilli#5665: (well, at least with NVFuser)
chilli#5665: this, btw, does seem to imply that doing about 16 multiplications is enough to get you compute bound on a TPU
naclbbr#9203: may I use this pic on my blog post? It's hilarious but very much on point
tpapp157#3643: Not off hand. Lighting is really hard. It requires calculating a lot of very precise information about the objects and light sources in a scene and their interactions, many of which are outside the camera field of view.
tpapp157#3643: :blobsad: The next morning when you realize your code waited until it was sure you went to sleep before throwing an error and slacking off.
bmk#1476: yeah go ahead, just make sure to credit me by linking to my original tweet with the image
|
bmk#1476: https://twitter.com/nabla_theta/status/1465772781572812803
EricHallahan#1051: Alternatively, you could embed the tweet.
naclbbr#9203: Thanks! will do
bmk#1476: also post the blog post here when it's up, I wanna read it
bmk#1476: more aaaaapill content is always a good thing
naclbbr#9203: I am trying to create a blog post for common (noob) questions and ideas for Japanese audience because the information is extremely scarce over here and misinformations are galore
naclbbr#9203: like "GPT-3 was trained with 45TB corpus"
naclbbr#9203: ๐ค
naclbbr#9203: or "everyone should do text gan instead of ar"
EricHallahan#1051: Is it really that hard to come by?
naclbbr#9203: for general public, unfortunately yes
bmk#1476: to be fair this is also really common on the english internet
bmk#1476: also aaaaapill isnt common knowledge in the english internet either
bmk#1476: people still keep acting like logprobs are a quality metric all the time and it drives me insane
nacnud#3491: not sure if #infrastructure is a better channel to post this but it seems https://the-eye.eu/ is down
nacnud#3491: any chance there is a backup hosting platform that y'all wouldn't mind moving to in the meantime?
nacnud#3491: in this specific case I'm interested in: https://the-eye.eu/public/AI/models/512x512_diffusion_unconditional_ImageNet/512x512_diffusion_uncond_finetune_008100.pt but generally it might allow for more reliability of serving the weights
Teemochu#8740: Would you say it makes you say aaaaaaaaaaaaaaaa?
nacnud#3491: solved my own problem https://discord.com/channels/729741769192767510/730484623028519072/912226497534775296
nacnud#3491: but still worth considering a more reliable weight pinning service imo
|
EricHallahan#1051: Yes, the Eye has notified us that they expected to be down as they do some juggling with their infrastructure.
StellaAthena#3530: In particular, there were several dozen severe tornadoes in the mid west USA yesterday that disrupted a lot of peoples lives.
alstroemeria313#1694: How do you do a linear regression again ^^;;
I have not done one since college decades ago.
StellaAthena#3530: scipy.stats.linregress
bmk#1476: train a NN with only a single linear layer
alstroemeria313#1694: I could do that but I have forgotten the really simple closed form for it
StellaAthena#3530: Or as an equation https://cdn.discordapp.com/attachments/729741769738158194/919713929574363197/IMG_8583.png
alstroemeria313#1694: ooh ty
alstroemeria313#1694: I want to throw various regression types at this
alstroemeria313#1694: Some of them will require optimizing a thing.
alstroemeria313#1694: I am getting a "too many open files" error when trying to compute the CLIP embeddings for this thing :/
alstroemeria313#1694: What is leaking FDs.
alstroemeria313#1694: oh it is a shared memory leak i bet
alstroemeria313#1694: ok i got the embeddings computed
alstroemeria313#1694: for train and val sets
rom1504#5008: because jit=True requires pytorch 1.7 whereas jit=False has no such requirement
and I did not notice any speed difference
EricHallahan#1051: That isn't true.
EricHallahan#1051: Because I have had JIT work perfectly fine in 1.10.
|
alstroemeria313#1694: I think they may have fixed it at some point.
alstroemeria313#1694: It was the case back when we first were using CLIP.
Louis#0144: oh no
Louis#0144: :/
EricHallahan#1051: There are some oddities because the model acts differently when JITed, but it works pretty much the same.
rom1504#5008: ok I guess I'll try again with jit=True
EricHallahan#1051: (When I say "works differently", I mean that variables that are there when run normally do not exist when JITed.)
rom1504#5008: anyway the code loads the model with torch.jit.load even if jit=False so not sure what's really being run
alstroemeria313#1694: ooh
Some Point Process#3793: The basic intuition is that you have your data (design) matrix (X), which are row vectors, where each row corresponds to a data sample (i.e. as feature vectors that you're trying to regress into the y vector). Assume the y's are scalar regressands. Then the y is a data vector or scalar samples.
-> The equation is then Xb = y
Since X is not a square matrix, the "trick" is to multiply both sides X^t.
-> The equation is then X^t(X)(b) = X^t(y)
On the LHS, X^t(X) happens to correspond to your data covariance matrix, which is also symmetric (not just a square matrix)
If you again multiply both sides of the equation, this time by (X^t(X))^-1, you get the equation for b that stella posted, which are exactly the coefficients that minimize the mean squared error. This can be thought of as minimizing euclidean distance onto the regressors but I just mention the algebraic solution for now
alstroemeria313#1694: the cosine sim between the two vectors i got from my linear regression for the train set and val set is 0.9812
alstroemeria313#1694: x.shape is [229980, 512]
alstroemeria313#1694: y.shape is [229980]
alstroemeria313#1694: x is CLIP embeddings
|
alstroemeria313#1694: y is the mean human rating for the image (on a scale of 1-10)
EricHallahan#1051: It would be really cool to see this work.
alstroemeria313#1694: the val set has 12690 items
alstroemeria313#1694: it is this dataset https://github.com/mtobeiyf/ava_downloader
alstroemeria313#1694: using the pre-filtered csvs for it from here (which exclude bad images) https://github.com/kentsyx/Neural-IMage-Assessment
alstroemeria313#1694: and its data loader code
StellaAthena#3530: Thatโs absurdly high
StellaAthena#3530: Thatโs *val* correlation?
alstroemeria313#1694: I wanted to see if I could find a direction in CLIP latent space that corresponded to image quality so we could just add it to the target we optimize for and get higher quality images
alstroemeria313#1694: it is the cosine sim between the result of the regression on the train set and the result of the regression on the val set.
StellaAthena#3530: What is the r^2 for the regressions themselves though
alstroemeria313#1694: How do I get those
chilli#5665: @Deleted User do you know the peak flops for TPUs for non-matmul ops?
StellaAthena#3530: Are you using scipy or home-brewing it
alstroemeria313#1694: home-brewing lol
StellaAthena#3530: $$r^2 = 1-\frac{\sum (y_i - \hat{y_i})^2}{\sum (y_i -\overline{y})^2}$$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/919721408232292363/193204646687408129.png
StellaAthena#3530: $y_i$ is actual, $\hat{y_i}$ is predicted, $\overline{y}$ is mean of actual
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/919721490197409792/193204646687408129.png
alstroemeria313#1694: it is 0.4918
|
alstroemeria313#1694: for train. 0.5054 for val
StellaAthena#3530: Thatโs pretty good.
StellaAthena#3530: Very good, even
EricHallahan#1051: Yeah that's higher than I expected.
alstroemeria313#1694: So this data actually has the distributions of human ratings for the individual 1-10 values
alstroemeria313#1694: I computed the means from them.
StellaAthena#3530: The way you interpret that number is to consider the function *f* that maps input images to quality scores. We have a hypothesis that if we move in a specific direction (the one given by the OLS) quality improves.
Those scores indicate that half the image-to-image variation in the value of *f* is explained by our hypothesis, and half is explained by other factors.
alstroemeria313#1694: I could try logistic regression
alstroemeria313#1694: Like, or whatever it is when you try to output logits and minimize the KL divergence between those logits and the ground truth distributions.
rom1504#5008: <https://github.com/rom1504/clip-retrieval/pull/92> looks like jit=True is working indeed
will check in some other envs and the speed, if it works will merge
it was failing a long time ago but I don't quite recall the exact conditions, thanks for the remark
StellaAthena#3530: Thatโs a really strong signal all things considered
alstroemeria313#1694: That is logistic regression if the ground truth distributions are one-hots.
alstroemeria313#1694: Now the question is what happens if we shift a CLIP text embedding in this direction and optimize an image for it
alstroemeria313#1694: Vs with the original text embedding.
StellaAthena#3530: This is a dataset-wise average. For some specific inputs we donโt see as big of an impact and for others we see a bigger one. But on average 50% of the variation is explained
alstroemeria313#1694: *nods*
|
StellaAthena#3530: If this works itโs nuts
alstroemeria313#1694: Ehehe
cfoster0#4356: *just tell the model to make it look good*
alstroemeria313#1694: This dataset was used to train a linear layer on top of a VGG16 feature extractor
alstroemeria313#1694: So this is kinda the same thing except using linear regression and a CLIP feature extractor rather than logistic and a VGG16 feature extractor
alstroemeria313#1694: Or maybe it was two linear layers, I forget
alstroemeria313#1694: The embeddings are for ViT-B/16
StellaAthena#3530: What dataset? The image quality ratings?
alstroemeria313#1694: Yeah
alstroemeria313#1694: downloaded the saved embeddings to my laptop
alstroemeria313#1694: the dataset is like 32GB, I torrented it on a random GPU box
alstroemeria313#1694: Except the repo I posted got the dataset preparation wrong for their VGG feature extractor
alstroemeria313#1694: They didn't normalize by the means and stds of the VGG training set.
alstroemeria313#1694: So they kind of mistrained their model.
alstroemeria313#1694: I told them about this several months ago in a GitHub issue but they seem to have not done anything about it.
alstroemeria313#1694: `r^2: 0.49334` with normalizing the embeddings first
alstroemeria313#1694: It seems to not affect the goodness of the fit much
alstroemeria313#1694: We do actually need to normalize the embeddings first if we are going to add that direction to a *text* embedding
alstroemeria313#1694: Bc it came from image embeddings and we can't mix without normalizing.
alstroemeria313#1694: So like.
|
alstroemeria313#1694: Since the ratings were on the scale 1-10.
alstroemeria313#1694: If we add this direction then should it try to bump the rating of the result up by one?
alstroemeria313#1694: ...also there is no intercept in this model do i need one
Some Point Process#3793: If you normalize then generally that maps to "removing scale information", etc. So you're regressing onto angles or directions in the feature space since that's all the information you now have
alstroemeria313#1694: CLIP embeddings are normalized before comparison so
EricHallahan#1051: Wait how are you interpolating the embedding? Rotating it in embedding space? Adding the direction then normalizing?
alstroemeria313#1694: I am not doing this yet
EricHallahan#1051: I'm just trying to think about how the topology plays into this.
alstroemeria313#1694: I was probably going to add it and then normalize or smth
alstroemeria313#1694: the norm of the direction is 44.28
alstroemeria313#1694: That is huge
alstroemeria313#1694: This may be a problem
alstroemeria313#1694: Since the actual embeddings are norm 1.
EricHallahan#1051: Yeah I think you need to identify a rotation rather then a linear direction.
alstroemeria313#1694: hm how do i do that.
EricHallahan#1051: Or alternatively a start and an end vector across the range.
EricHallahan#1051: I guess you could try the spheremean code?
EricHallahan#1051: IDK
rom1504#5008: Why not just train f(clip) = score
And then you add/substract f(clip) in your clip guiding loss
|
EricHallahan#1051: You could, but it is more complex than it probably needs to be.
rom1504#5008: Why is it more complex? Seems more simple to me
alstroemeria313#1694: Is the direction so big bc it has to be to predict the rating, which is range 1-10, from a CLIP embedding with norm 1.
EricHallahan#1051: Maybe?
EricHallahan#1051: That's probably part of it.
alstroemeria313#1694: Also do I need to add an intercept.
EricHallahan#1051: Can you run it with one and see what changes?
alstroemeria313#1694: How do you add one
EricHallahan#1051: Oh right you're homebrewing.
alstroemeria313#1694: i think scipy.stats.linregress() may be for the 1D case?
alstroemeria313#1694: Or do I need to transpose both matrices first
EricHallahan#1051: scikit-learn would probably do the trick.
https://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares
Some Point Process#3793: Then why did I see that clip scores (dot product) was > 1? In any dimensional space the maximum inner product must be 1 if vectors (presumably reshaped?) are normalized
alstroemeria313#1694: it can't be > 1
alstroemeria313#1694: oh fun it is now building scikit-learn from source
alstroemeria313#1694: I am on an M1 Mac
alstroemeria313#1694: It had to build pandas too (It works though)
alstroemeria313#1694: ```
The recently introduced macos/arm64 platform (sometimes also known as macos/aarch64) requires the open source community to upgrade the build configuration and automation to properly support it.
|
At the time of writing (January 2021), the only way to get a working installation of scikit-learn on this hardware is to install scikit-learn and its dependencies from the conda-forge distribution, for instance using the miniforge installers:```
alstroemeria313#1694: oh no
alstroemeria313#1694: wait it built
EricHallahan#1051: It seems like it was recently implemented.
EricHallahan#1051: https://github.com/scikit-learn/scikit-learn/issues/19137
EricHallahan#1051: ... I actually don't know if this is relevant.
EricHallahan#1051: Anyway, scikit-learn is pretty useful to have around.
alstroemeria313#1694: @EricHallahan it works~
alstroemeria313#1694: I got an R^2 of 0.4972
EricHallahan#1051: So it barely changed.
alstroemeria313#1694: the norm is now 26.433716
alstroemeria313#1694: and it found an intercept of 4.554493
alstroemeria313#1694: 44.260273 when I don't fit an intercept.
alstroemeria313#1694: and R^2 0.493368
alstroemeria313#1694: So it looks v similar and it is probably just down to like, floating point rounding.
alstroemeria313#1694: the cosine sim of the direction w/o the intercept and the direction with the intercept is 0.82518697
rom1504#5008: Might be worth trying to divide your scores by the average
alstroemeria313#1694: Ah
alstroemeria313#1694: The direction with the intercept is probably better
|
alstroemeria313#1694: Since the 'default' rating is clearly not 0
alstroemeria313#1694: Oh.
alstroemeria313#1694: Right.
alstroemeria313#1694: Instead of like, adding this to the target embedding.
alstroemeria313#1694: I can add a loss based on it
rom1504#5008: Yeah
rom1504#5008: Telling the guiding process to make image that have a high score
rom1504#5008: The benefit is if doing that your transformation of the clip embedding can be arbitrarily complex
rom1504#5008: It's interesting to be able to transform the query though
alstroemeria313#1694: wow gradient descent on this is kinda slow
Sid#2121: is there a straightforward way to train *part* of a parameter in pytorch? say i only wanted to train [:256, :] of a 512x512 matrix
alstroemeria313#1694: Multiply the .grad by a mask before each optimizer step
alstroemeria313#1694: Um, mb not if you are using weight decay
alstroemeria313#1694: Bc it will apply to the parts with zero grad.
Sid#2121: but then i still have all the params in memory
alstroemeria313#1694: The good way to do it is to take the part you want to train and make it its own param, and then during the forward pass stick it into the old param (which does not require grad now) and use the result in the forward pass
alstroemeria313#1694: Then gradients will propagate back to the subset
alstroemeria313#1694: This will protect the parts of the old param tensor you aren't updating from weight decay.
Sid#2121: huh yeah that seems like the way to do it ig
Sid#2121: thanks
|
Sid#2121: a little hacky, i was hoping torch would have some API for sparse masking like that
alstroemeria313#1694: you might be able to do it with the new parameterizations thing but idk how to use it
Sid#2121: what's that?
alstroemeria313#1694: it lets you do things like symmetric matrix constraints on weights
alstroemeria313#1694: https://pytorch.org/tutorials/intermediate/parametrizations.html
Sid#2121: ah, this looks exactly like what i want
Sid#2121: i think
alstroemeria313#1694: Like I think you can do the substituting the part that requires grad into the old weights in one of those.
alstroemeria313#1694: like register the old weights as a buffer on the parameterization.
alstroemeria313#1694: or smth.
linadina#3184: Hello
EricHallahan#1051: Welcome!
onealeph0#1502: It seems like 'https://the-eye.eu/public/AI/pile/' has stopped working, any chances it will be up again soon? any known mirrors?
EricHallahan#1051: ^
onealeph0#1502: thank you!
chilli#5665: @kindiana there's no real reason systolic arrays have to be used to compute matrix multiplies, right?
kindiana#1016: no
chilli#5665: Like, you could swap the add and the multiply
kindiana#1016: but as implemented in the tpus that's the only thing they can compute I believe
kindiana#1016: its pretty fixed function afaik
|
chilli#5665: or GPUs
chilli#5665: but to a lesser extent
chilli#5665: I could only get like...
chilli#5665: 20 GFlops of compute
chilli#5665: out of a TPUv-2
kindiana#1016: yeah I'd believe that
chilli#5665: lol
chilli#5665: that's actually terrible
chilli#5665: and it's not very hard to saturate 20 GFlops
chilli#5665: I wonder if I'm somehow benchmarking weird
kindiana#1016: what is your benchmark?
chilli#5665: mmm
chilli#5665: it disappeared
chilli#5665: but it was just something like
chilli#5665: ```
def f(x):
for _ in range(repeat):
x = x * x
return x
```
|
chilli#5665: and then I was measuring flops
chilli#5665: across increased sizes of repeat
chilli#5665: for a 2**26 tensor
kindiana#1016: with jit?
chilli#5665: ofc lol
chilli#5665: and with warmup and stuff
chilli#5665: and `block_until_ready`
kindiana#1016: maybe you can try a scan
chilli#5665: :thonk:
chilli#5665: why would that change things
chilli#5665: this is as pure as it gets
chilli#5665: (unless XLA fails to fuse it for some reason)
chilli#5665: I did try some stuff like `x = x*2`
chilli#5665: but pretty sure XLA just optimized those expressions together
chilli#5665: I was a bit worried XLA was gonna optimize these multiplies into a pow
chilli#5665: but no way it'd be this slow if so
kindiana#1016: well xla is just going to unroll the whole loop
chilli#5665: yeah
chilli#5665: well, it's not even XLA unrolling the loop
chilli#5665: it's just jax's tracing
|
kindiana#1016: yeah
kindiana#1016: I mean theoretically it should be the same thing
kindiana#1016: what does plot look like for different sizes of x?
chilli#5665: not that different iirc
chilli#5665: obviously it improves more for higher x
chilli#5665: oh, another thing I thought was interesting
chilli#5665: seems like fp16 non-tensor core flops
chilli#5665: and fp32 non-tensor core flops
chilli#5665: were identical for A100s
chilli#5665: hmm
chilli#5665: actually
chilli#5665: nvm
chilli#5665: this might be an NVFuser thing
chilli#5665: where most fusers upcast their intermediate values to fp32
chilli#5665: because you're usually not compute bound on pointwise ops in real life
chilli#5665: lol
kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/919801308997894164/unknown.png
kindiana#1016: yeah it should certainly be able to do double rate fp16 in the sm
chilli#5665: yeah, I'm pretty sure it's just an NVFuser thing
chilli#5665: since you almost always want to compute your intermediates in fp32
|
chilli#5665: but yeah
chilli#5665: I thought it was pretty cool to actually see the FLOPs I'm paying this GPU to achieve
chilli#5665: ๐
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/919801619367997480/unknown.png
kindiana#1016: I suspect you can make the throughput go down if you increase the size of x by a lot
chilli#5665: why?
chilli#5665: you're computing everything locally anyways
kindiana#1016: well
kindiana#1016: idk if the compiler is that smart
chilli#5665: i dont' think it's the compiler
chilli#5665: it's just the CUDA runtime model
chilli#5665: you split into blocks that go onto your SMs as fast as they can
kindiana#1016: yeah
kindiana#1016: but if your intermediates do not fit on chip
kindiana#1016: then you need to reorder the computation
chilli#5665: but if you're fusing it
chilli#5665: why would your intermediates ever not fit on chip
chilli#5665: you're only computing a thread-block size of memory at a time
chilli#5665: and you're basically just doing
kindiana#1016: oh yeah you are right
|
chilli#5665: ```
for (int i=start; i<start+1024; i++) {
val = A[i];
for (int j=0; j<repeat; j++) val *= val
out[i] = val
}
```
chilli#5665: I did learn that NVFuser uses `use_fast_math` on CUDA though (by default)
chilli#5665: which is ... somewhat suspicious sometimes
chilli#5665: like, it makes `cos` take 4 ops
chilli#5665: instead of the ~10 it usually does
chilli#5665: oh yeah, are you gonna explain the layer norm thing :^)
chilli#5665: part of the problem is that I haven't actually figured out what the backward of layer norm actually is
kindiana#1016: basically the problem is that the memory hierarchy of tpus is i believe pretty limited
kindiana#1016: there's a hbm, scratchpad sram and basically that's it
kindiana#1016: the vector unit can either be fed from scratchpad/hbm, or output of systolic array, and can output to either scratchpad/hbm or input of systolic array
kindiana#1016: if you want to fuse bmm(ln(x))
kindiana#1016: that requires you to do block_bmm(block_ln(x))
kindiana#1016: but you can't compute the ln of a block of a matrix
kindiana#1016: you need to see the whole row
|
chilli#5665: Hmm
chilli#5665: I wasn't even talking about fusing matmul into layer norm
chilli#5665: Just layer norm by itself
chilli#5665: But it sounds like what you're saying isn't specific to LN, right?
kindiana#1016: yes
chilli#5665: Like, this wouldn't work for mm => sum
kindiana#1016: well
kindiana#1016: for some types of reductions it would work
chilli#5665: Hmm
chilli#5665: Mean
kindiana#1016: its the fact that you need reduction and then modify the values based on reductions thats problematic
kindiana#1016: in a regular gpu you have very high speed local memory (register file) that has O(alus) size and speed
kindiana#1016: but in a tpu the pipes are only O(sqrt(alus))
chilli#5665: Hmm
chilli#5665: What advantages do TPUs have hardware wise
chilli#5665: Lol
chilli#5665: Other than interconnect
chilli#5665: Or are those inherently connected to their disadvantages
kindiana#1016: you get more alus for your buck
kindiana#1016: and save a decent amount of power
|
qui#7777: I've been lurking and chatting occasionally in here for a few months now. I thought you guys might like this
https://youtu.be/3mr1AdUUehg
Deleted User#0000: what advantage other than their main advantage :blobsad:
Deleted User#0000: I got a red notification but now I cant see anything, did you ping me on the TPU debate?
MicPie#9427: this outlines it nicely: https://stackoverflow.com/questions/57668368/how-to-create-torch-tensor-object-and-to-update-only-some-of-its-elements
(see also the update/last comment with simple concat instead of fancy masking or indexing)
Sid#2121: I actually found a really nice method to do it with torch.parametrize which doesn't require you store another tensor of equal size in memory: (topk is just an example, I want to use it to replicate this https://arxiv.org/abs/2111.09839)
Sid#2121: ```python
import torch.nn as nn
import torch
import torch.nn.utils.parametrize as parametrize
class TopKMask(nn.Module):
def __init__(self, orig_module, k, attr_to_parametrize='weight'):
super().__init__()
self.k = k
orig_param = getattr(orig_module, attr_to_parametrize)
assert orig_param is not None, f"Module does not have attribute {attr_to_parametrize}"
|
if orig_param.ndim == 2:
topk_values, topk_inds = torch.topk(orig_param.flatten(), k=k)
elif orig_param.ndim == 1:
topk_values, topk_inds = torch.topk(orig_param, k)
else:
raise NotImplementedError("Only 2D and 1D tensors are supported")
self.topk_values = torch.nn.Parameter(topk_values)
self.register_buffer('topk_inds', topk_inds)
orig_param.requires_grad = False
parametrize.register_parametrization(layer, "weight", self)
def forward(self, X):
# put top_k inds back into X
if X.ndim == 2:
orig_shape = X.shape
X = torch.scatter(X.flatten(), 0, self.topk_inds, self.topk_values)
X = X.view(orig_shape)
elif X.ndim == 1:
X = torch.scatter(X, 0, self.topk_inds, self.topk_values)
|
else:
raise NotImplementedError("Only 2D and 1D tensors are supported")
return X
layer = nn.Linear(3, 4)
mask = TopKMask(layer, k=3)
print(f"\nInitialized weight:\n{layer.weight}")
optim = torch.optim.Adam([i for i in layer.parameters() if i.requires_grad])
for _ in range(10):
optim.zero_grad()
y = layer(torch.randn(1, 3))
loss = torch.sum(y)
loss.backward()
# take a step
optim.step()
print(f"\nAfter step {_}:\n{layer.weight}") # only the top 3 values are updated!
```
MicPie#9427: looks very clean, thanks for sharing!
|
guess I need to read the `torch.parametrize` docs in detail then ๐
StellaAthena#3530: Whatโs wrong with the code that they released?
Sid#2121: wait what lol
Sid#2121: it's literally on the first page of the paper lol ๐คฆ
Sid#2121: ok well i've found the problem, it's literally a whole transformers clone, i'd rather rewrite it than find where the code actually is
Kharr#7888: This is the new way to obscure research code -- write a tiny module within a massive codebase and say "we included the code" in the paper.
Sid#2121: right?? Like i don't even understand it. It's much harder to slot your research work into a bloated codebase than just write one from scratch
Sid#2121: also, it looks like their fisher mask isn't actually sparse, so I maintain my code is better :berk:
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/919960162410856488/IMG_8588.png,https://cdn.discordapp.com/attachments/729741769738158194/919960162620559400/IMG_8589.png
StellaAthena#3530: ????
StellaAthena#3530: Wow itโs hilariously nontrivial to unravel whatโs going on here
kurumuz#5695: uh, so its not there?
StellaAthena#3530: `run_glue.sh` calls `run_glue.py`, but the second is in a different directory than the first.
Sid#2121: Fisher stuff is in (the folder that isn't transformers that I can't remember the name of)/fisher.py
StellaAthena#3530: There are three different `scripts` directories. The want you to run `examples/text_ classification/scripts/run_figure_2.sh` (note the `cd` after the install at the top).
Sid#2121: I'll try and release my code as a lucidrains readability level pip package
chilli#5665: I was asking how many flops TPUs have for non matmul ops
chilli#5665: are TPUs even more power-efficient than GPUs though? ๐ค
chilli#5665: yeah, I think torch.parametrize is pretty cool ๐
chilli#5665: well, in my mind, interconnect seems orthogonal to the actual compute units
|
chilli#5665: but perhaps that isn't true
Deleted User#0000: well some of the comments especially about hardware memory hierarchy etc in the conversation above were referring to now very dated harware and whatever conclusions drawn from that will be quite wrong.
Deleted User#0000: hmh. It's important to also think about deployment contexts and scheduling. Scheduling variably sized tpu slices from small to super large is much nicer than scheduling equivalent numbers of GPUs as the GPUs you will have to think hard about the actual specific topology and how to generate reduction strategies and the actual device placement of ops
chilli#5665: while with TPUs it's much more uniform?
Deleted User#0000: that alone is such a huge advantage when you run a datacenter fleet of accelerators that is very difficult to catch up to because you need to run lots of optimization per specific model to get the same performance on an aggregate basis, so instead you can focus on next level optimizations
Deleted User#0000: that's the interconnect mesh, yes
Deleted User#0000: approximately homogeneous and now you can flexible subslice pods to jobs knowing the placement wont be that sensitive
chilli#5665: and when you're talking about reduction strategies you're talking about reductions across devices now
chilli#5665: like allreduces
Deleted User#0000: yep
chilli#5665: what about this one?
chilli#5665: or this confidential info somehow
chilli#5665: my benchmarking got me a number like ... 20 Gigaflops
chilli#5665: but that seemed ludicrously low
chilli#5665: well, I kinda doubt this info is actually confidential tbh
chilli#5665: since it's publicly accessible hardware...
chilli#5665: and you can tease this stuff out through microbenchmarks
chilli#5665: well, I'll explain my actual microbenchmark, and perhaps you can tell me if I'm using TPUs wrong ๐
chilli#5665: it was just a benchmark like
chilli#5665: ```
|
def f(x):
for _ in range(repeat):
x = x * x
return x
```
chilli#5665: to arbitrarily increase compute intensity (for a fixed bandwidth cost)
chilli#5665: and then applying a fuser like `jax.jit`
chilli#5665: to see what kind of peak FLOPs I can get
chilli#5665: On an A100 this kind of setup got me to ~9.3 teraflops (which is what's listed on their spec sheet - 20 FMA teraflops)
chilli#5665: but on a TPU this only got me to the aforementioned 20 gigaflops
chilli#5665: on a V2-8
alstroemeria313#1694: what size tensor are you feeding to the function
chilli#5665: 2**26
alstroemeria313#1694: What shape.
chilli#5665: just flat
alstroemeria313#1694: Oh
alstroemeria313#1694: It's gonna pad and slow down
chilli#5665: :thonk:
chilli#5665: what is it gonna pad to?
alstroemeria313#1694: You need to like, have a multiple of 8 and a multiple of 128 in the first two dims of the shape
|
alstroemeria313#1694: Rest can be whatever, I think.
chilli#5665: :thonk:
alstroemeria313#1694: This might happen even for stuff like elementwise ops?
chilli#5665: I'm quite skeptical
chilli#5665: lol
chilli#5665: maybe this is true for matmuls
chilli#5665: but for pointwise ops the layout really shouldn't matter
alstroemeria313#1694: Well it wants that padding even for stuff like average pooling, so
chilli#5665: I think a flat vector with a power of 2 shape
chilli#5665: is pretty sensible
chilli#5665: ๐ค
alstroemeria313#1694: OK but actually try a shape like I said
chilli#5665: wait also, I just realize
chilli#5665: for benchmarking jax code
alstroemeria313#1694: Oh no.
chilli#5665: should I be using `x.block_until_ready()` on every iteration?
alstroemeria313#1694: hm probably not
alstroemeria313#1694: um
chilli#5665: ok, well luckily, it didn't seem to matter much
alstroemeria313#1694: You ran the JITted function before doing the benchmark right?
|
chilli#5665: obviously
chilli#5665: yeah, I figured out a thing to do
chilli#5665: I was originally doing something like
```
for _ in range(iters):
z = f(x)
z.block_until_ready()
```
and I was worried that one computation might run ahead of the other one
chilli#5665: so we might finish even before all of the iterations are done
chilli#5665: ```
for _ in range(iters):
x = f(x)
x.block_until_ready()
```
chilli#5665: but just switched to that ^
chilli#5665: to have the requisite dependency chain
Deleted User#0000: you may want to re-use the output to avoid this, like a train step would be params = train(params)
Deleted User#0000: yes precisely
|
bmk#1476: vaguely reminiscent of IO monads
chilli#5665: wait, so is that a real concern?
chilli#5665: also, why doesn't my TPU work
Deleted User#0000: uh yes, it's async dispatch
chilli#5665: :thonk:
chilli#5665: I OOMed
chilli#5665: and now I keep on getting
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/919980993530695742/unknown.png
alstroemeria313#1694: unfortunately this is colab so
chilli#5665: I already factory reset and stuff
alstroemeria313#1694: On a TPU VM you could kill the process hogging the TPU.
alstroemeria313#1694: Which exited uncleanly.
alstroemeria313#1694: Actually. Are you getting low flops numbers bc you are not on a TPU VM.
chilli#5665: right
chilli#5665: well, I mean, my suspicion is that I'm getting low flops numbers since I'm not doing a matmul
alstroemeria313#1694: Ah.
alstroemeria313#1694: Bc it has a special matmul unit that's not getting used?
chilli#5665: yeah
chilli#5665: it's like on GPUs
chilli#5665: a similar experiment gets me to about 10 teraflops
|
chilli#5665: which is what I would expect
chilli#5665: since the spec sheet for an A100 says 20 teraflops is the non-tensor core flops
chilli#5665: so then you divide by 2 since it probably can't use FMA instructions here
Deleted User#0000: I mean a tpu is basically a matmul unit
chilli#5665: right
chilli#5665: actually
chilli#5665: I'm quite suspicious my benchmarking setup is wrong
chilli#5665: or maybe it's just the GPU that makes it start at a slower number
chilli#5665: or maybe just XLA overhead or something
Deleted User#0000: what overhead
chilli#5665: nah
chilli#5665: it's just hardware
chilli#5665: well, anyways, same benchmarking setup on the colab K80 gets to 1.6 teraflops
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/919983396149669898/unknown.png
alstroemeria313#1694: @chilli i seem to have gotten 4.4 teraflops from a single TPUv3 core
chilli#5665: with this kind of benchmarking setup?
alstroemeria313#1694: ```
In [21]: def f(x):
...: for _ in range(128):
...: x = x * 1.001
|
...: return x
```
alstroemeria313#1694: `f_jit = jax.jit(f)`
chilli#5665: oh
chilli#5665: I don't believe that
alstroemeria313#1694: then i run it
chilli#5665: I think it's optimizing it to exponential
alstroemeria313#1694: Oh?
chilli#5665: I think if you increase that to 1 million
chilli#5665: it'll still be the same speed
chilli#5665: yeah, I tried that previously
alstroemeria313#1694: it is the same speed at 512 repeats.
chilli#5665: yeah, I believe that
chilli#5665: lol
chilli#5665: you can try my benchmarking code
chilli#5665: ```
def mul(x):
acc = 1
for i in x:
acc *= i
|
return acc
for repeat in [2**i for i in range(0, 15)]:
sz = (8, 128, 2**18,)
iters = 100
def f(x):
for _ in range(repeat):
x = x * x
return x
f = jax.jit(f)
x = jnp.ones(sz)
for _ in range(3):
x = f(x)
x.block_until_ready()
begin = time.time()
for _ in range(iters):
x = f(x)
|
x.block_until_ready()
flops = mul(sz) * repeat/1e9
time_taken = (time.time()-begin)/iters
print(f"{repeat}: {1/(time_taken)*flops} {time_taken*1e6}")
```
alstroemeria313#1694: 800 GFlops if I do x = x * x.
chilli#5665: huh really?
chilli#5665: I can't even get to 800 GFlops
alstroemeria313#1694: This is TPUv3 on a TPU VM
chilli#5665: can you try my benchmarking code and post the results?
alstroemeria313#1694: kk
alstroemeria313#1694: it's going
alstroemeria313#1694: What number of repeats does it go up to
chilli#5665: well, you can stop it if it's saturating
chilli#5665: but it goes up to 2**15
alstroemeria313#1694: ```
1: 29.96352712293228 8958.740234375
2: 86.27059936522059 6223.104000091553
4: 171.97240664285277 6243.686676025391
8: 343.6361853732948 6249.294281005859
|
16: 684.4651991866689 6274.924278259277
32: 955.0027598450732 8994.669914245605
64: 926.7670612687041 18537.418842315674
128: 927.0656240273166 37062.89768218994
256: 893.3504542394643 76923.31314086914
512: 893.8663172201038 153757.83920288086
1024: 927.2587668246972 296441.42150878906```
alstroemeria313#1694: so far
chilli#5665: huh
chilli#5665: intriguing
chilli#5665: yeah, maybe the TPU VM is making some big difference
chilli#5665: or maybe there were some significant hardware changes between V2 => V3
alstroemeria313#1694: It is probably the TPU VM.
alstroemeria313#1694: I can fire up a TPUv2 VM if you want.
alstroemeria313#1694: And run it there.
chilli#5665: yeah, that'd be neat
chilli#5665: but this amount of hardware difference is more expected I think
alstroemeria313#1694: Also it is using only one of eight cores
chilli#5665: hmmm
chilli#5665: are you sure?
|
alstroemeria313#1694: Yes
alstroemeria313#1694: You need to pmap it instead of jitting it
chilli#5665: err
chilli#5665: do you mean
alstroemeria313#1694: To use eight cores.
chilli#5665: 2 of the 8 cores?
alstroemeria313#1694: One of eight.
chilli#5665: since iirc
chilli#5665: there's 4 units
chilli#5665: and 2 cores per unit
alstroemeria313#1694: Right but they are addressed individually
chilli#5665: hmm
chilli#5665: I see
chilli#5665: is that right @Deleted User ?
chilli#5665: if that's true then you multiply by 2
alstroemeria313#1694: running a pmap version now
alstroemeria313#1694: changed to `sz = (8, 8, 128, 2**15,)` and `f = jax.pmap(f)`
alstroemeria313#1694: ```
1: 239.7791762965144 1119.5111274719238
2: 678.541131530333 791.2135124206543
|
4: 1355.077366442153 792.384147644043
8: 2709.967191902168 792.4389839172363
16: 5404.033139088224 794.7707176208496
32: 7496.659790295418 1145.8349227905273
64: 7354.954659051657 2335.822582244873
128: 7382.832960944569 4654.0045738220215
256: 7132.487361131886 9634.714126586914
512: 7142.3425443415845 19242.839813232422
1024: 7413.572974631168 37077.65579223633
2048: 7554.891478893679 72768.19467544556
4096: 7627.1051212624225 144158.44678878784
8192: 7663.721613720384 286939.34440612793```
alstroemeria313#1694: That is eight cores.
chilli#5665: right, that's only a bit worse than a single A100
chilli#5665: probably about comparable to a single V100
chilli#5665: so if we do the calculations
chilli#5665: I'd guess that the spec sheet for a V3-8
chilli#5665: is about 8 teraflops of non-matmul FLOPs
chilli#5665: (vs 420 teraflops of matmul FLOPs)
chilli#5665: I wonder what it looks like for a V2-8
|
alstroemeria313#1694: spinning one up to try.
chilli#5665: a slice is 2 cores?
Deleted User#0000: a 8 TPU2 slice = 4 devices with each 2 cores. One device is what you'd compare to a gpu
alstroemeria313#1694: ok it is running on the v2-8
fengoku#4038: Hey everyone, our CtrlGen workshop (https://ctrlgenworkshop.github.io) at NeurIPS is starting now! If you're registered for NeurIPS, attend here: https://neurips.cc/virtual/2021/workshop/21886
We are beginning with Jason Weston's talk: "Control in Dialogue: When does it work?"
alstroemeria313#1694: @chilli ```
1: 163.07463672005795 823.0447769165039
2: 418.64197234435215 641.2053108215332
4: 868.9041317846717 617.8712844848633
8: 1756.5426215415953 611.2813949584961
16: 3533.759368645687 607.7051162719727
32: 5465.832026470494 785.7847213745117
64: 5425.598751146592 1583.2233428955078
128: 5280.960281391501 3253.171443939209
256: 5301.630937802563 6480.975151062012
512: 5314.741249933605 12929.975986480713
1024: 5516.581996472818 24913.787841796875
2048: 5624.09677640165 48875.03147125244
4096: 5678.408990635932 96815.11402130127
|
8192: 5706.415777501586 192679.90112304688```
alstroemeria313#1694: and i did `sz = (8, 8, 128, 2**14,)`
alstroemeria313#1694: to make sure it still fit into memory, idk if i had to do this
chilli#5665: cool
chilli#5665: ok, so seems like the main issue was the TPU VM stuff
alstroemeria313#1694: oh it still fits with 2**15
chilli#5665: and seems like TPUv2-8 went from about 6 teraflops to 8 teraflops in V3-8
alstroemeria313#1694: the results are the same though.
chilli#5665: out of curiosity, can you try removing the 8/128 stuff at the front
chilli#5665: lol
alstroemeria313#1694: ok
chilli#5665: I will say, that judging by these results, it looks like TPUs have quite a bit more bandwidth from global DRAM to SRAM @Deleted User
alstroemeria313#1694: kk going
chilli#5665: since it saturates the compute much quicker
chilli#5665: than on GPUs
alstroemeria313#1694: i did `sz = (8, 8 * 128 * 2**15,)`
alstroemeria313#1694: bc you need the first 8 in for pmap.
alstroemeria313#1694: But it is running as a 1D array on each core.
chilli#5665: right
alstroemeria313#1694: it's the same as without the special shape.
|
chilli#5665: cool
alstroemeria313#1694: Guess you really do only need it for matmuls
alstroemeria313#1694: ```
1: 239.7817295725764 1119.4992065429688
2: 680.380168625174 789.0748977661133
4: 1364.578296722326 786.8671417236328
8: 2730.8859658975744 786.3688468933105
16: 5361.252610623457 801.1126518249512
32: 7502.638821978483 1144.9217796325684
64: 7409.970788760238 2318.4800148010254
128: 7451.39077499512 4611.184597015381
256: 7244.2389902492505 9486.086368560791
512: 7252.334804448815 18950.99401473999
1024: 7476.927468845238 36763.484477996826
2048: 7587.211937246438 72458.21237564087
4096: 7643.747192844747 143844.58303451538
8192: 7672.17061511713 286623.3515739441```
chilli#5665: well, it's possible you need it for reductions too
alstroemeria313#1694: *nods*
chilli#5665: but it's hard to say
|
chilli#5665: but yeah, neat
chilli#5665: 8 teraflops of non-matmul FLOPs for a TPUv3-8
chilli#5665: up from 6 on a v2-8
alstroemeria313#1694: yeah always use a TPU VM if at all possible
chilli#5665: definitely much worse than a GPU on this kind of stuff
chilli#5665: but only about 5x worse
chilli#5665: I guess this does show that fusion is arguably even more important for GPUs
chilli#5665: than it is for TPUs
Em Elle#8886: Hey guys I am playing around with GPT2 and I wrote a context and I was expecting it to reference that context because itโs auto regressive and all and it does that maybe 3 out of 10 times and is non deterministic is there a way to control this ?
chilli#5665: btw, you can do other kind of cool stuff with this approach
chilli#5665: like, if you swapped out the `x = x*x` for an `x = jnp.cos(x)`
chilli#5665: you can probably figure out how many arithmetic ops a single `cos` is worth
alstroemeria313#1694: *nods*
chilli#5665: oh actually
chilli#5665: I just realized
chilli#5665: we should probably double the GPUs advantage
chilli#5665: since I'm doing things in fp32
chilli#5665: while on TPUs everything is in bf16
chilli#5665: hmm
chilli#5665: although maybe XLA is fusing intermediate operations into fp32 registers?
|
bmk#1476: are you testing pmap
chilli#5665: me?
chilli#5665: no, I was just curious about what the flops of TPUs were
chilli#5665: for non-matmul operations
bmk#1476: oh
bmk#1476: is it true that they take a trivial amount of time compared to the matmuls
alstroemeria313#1694: nope
alstroemeria313#1694: fp32.
chilli#5665: oh sure, for most architectures
alstroemeria313#1694: afaik
alstroemeria313#1694: i can try explicit bf16
chilli#5665: but tbh, I think that this stuff is a really good example of "hardware constraining our architectures"
chilli#5665: There's no fundamental reason on our hardware that matmuls are fast
chilli#5665: as opposed to
chilli#5665: log-space matmuls
chilli#5665: https://twitter.com/srush_nlp/status/1466100828868071426
chilli#5665: it's just that there are hardware units that are specialized for doing matmuls
chilli#5665: Like usually, when people are debating "hardware lottery" stuff
chilli#5665: people are usually talking about sparse ops or something
chilli#5665: and that stuff is kinda unclear, since dense ops are fundamentally always going to be faster
|
chilli#5665: and easier to do from the hardware side
alstroemeria313#1694: ```
1: 471.0799429476595 569.8299407958984
2: 1305.6409711337521 411.1933708190918
4: 2666.8875220555906 402.6198387145996
8: 5255.9953636815035 408.57791900634766
16: 7361.108231919218 583.4674835205078
32: 7528.742455117328 1140.9521102905273
64: 7359.521894739589 2334.3729972839355
128: 7397.82164945597 4644.575119018555
256: 7148.783598512648 9612.751007080078
512: 7162.637639215601 19188.315868377686
1024: 7426.070971965266 37015.254497528076
2048: 7562.218764413613 72697.68714904785
4096: 7630.785701990793 144088.91439437866
8192: 7665.574073614781 286870.00274658203```
alstroemeria313#1694: with `x = jnp.ones(sz, dtype=jnp.bfloat16)`
chilli#5665: mmm
chilli#5665: and previously you were explicitly setting it to float32?
alstroemeria313#1694: it is by default if you just make an array.
|
alstroemeria313#1694: This does not mean it was not using bf16 on the inside of the computation for intermediates I think
alstroemeria313#1694: Either that or bf16 is only faster for matmuls and stuff.
chilli#5665: really?
chilli#5665: I thought the default in Jax was bf16
chilli#5665: hmm
chilli#5665: maybe hat's only for conversion from numpy
fengoku#9000: Discussion panel happening right now for CtrlGen workshop! Join us at https://neurips.cc/virtual/2021/workshop/21886 and add questions to our slido at https://app.sli.do/event/rmabxoqx/live/questions
ethan caballero#6044: Sam Bowman gives detailed answer in replies:
https://twitter.com/ethancaballero/status/1470532845319991306
kiyoshi matsumoto#5637: Does anyone know how this was made by Singlezer0. It's unbelievably beautiful and would love to experiment https://cdn.discordapp.com/attachments/729741769738158194/920194483969212436/XfPJ56u98RdDBgK3.mp4
EricHallahan#1051: I would ask #art.
kiyoshi matsumoto#5637: ok will do thank you
๐
ฌ gabriel_syme ๐
ฌ#3220: @chilli do you know or have any thoughts on this? I don't really understand it but I do see the table and I'm curious if it can be used with such big impact
https://marisa.moe/dtr.html
nev#4905: also curious, how is XLA rematerialization different from this?
chilli#5665: this is dynamic
chilli#5665: and doesn't depend on a graph
chilli#5665: also, I'm not sure if XLA has ever talked about the actual algorithm they use
chilli#5665: lol
nev#4905: yeah that's in the name lol
|
chilli#5665: I mean, that answers your question then
chilli#5665: :thonk:
nev#4905: ๐ค
chilli#5665: XLA is static
nev#4905: ah I thought it was doing similar magic
chilli#5665: ๐ค
chilli#5665: I'm not sure what you mean
chilli#5665: like, XLA needs to capture a static graph
chilli#5665: before it can do any optimizations
chilli#5665: while this is able to do it in eager-mode
chilli#5665: without capturing any kind of static graph
๐
ฌ gabriel_syme ๐
ฌ#3220: cool thx for your thoughts, do you think it is promising?
๐
ฌ gabriel_syme ๐
ฌ#3220: it's pretty cool how much bigger bs they can do
Sid#2121: huh, any pytorch implementation of this yet? looks neat
chilli#5665: hmm
chilli#5665: well, they do have a pYTorch implementation
chilli#5665: lol
chilli#5665: not sure it's production ready
chilli#5665: but I'm also implementing some automatic checkpointing stuff with AOTAutograd
Sid#2121: do you need beta testers
|
Sid#2121: lol
chilli#5665: just implemented my max-flow "optimal" strategy today ๐
Sid#2121: oh? can you elaborate
chilli#5665: basically
chilli#5665: you can think of recomputation
chilli#5665: as finding the minimal amount of memory to transfer between the forwards + backwards
chilli#5665: well, with some simplifications
chilli#5665: so, if you think through this problem, you basically end up with a max-flow/min-cut problem
chilli#5665: where the min-cut defines the place to cut to minimize the amount of memory you need to save for your backwards pass
Sid#2121: huh, interesting
Sid#2121: AOTAutograd sounds pretty neat
Sid#2121: still think you need a better name though ๐
Sid#2121: how production ready is this stuff?
chilli#5665: well...
chilli#5665: we'll see
chilli#5665: or well
chilli#5665: the correct answer
chilli#5665: "not very production ready"
Sid#2121: this is all happening in functorch?
chilli#5665: but
|
chilli#5665: for certain use cases
chilli#5665: I'd say it's probably pretty good
chilli#5665: Like, if you're just trying to speed up some specific module/function
chilli#5665: and you're willing to test it to make sure it didn't screw something up
chilli#5665: for ex, Ross Wightman had some EvoNorm implementation
chilli#5665: that this stuff sped up by like ... 6x
chilli#5665: (vs. 3x with Torchscript)
chilli#5665: so I think it'd make sense to use this for that
Sid#2121: the thing i'm curious about DTR for is more memory critical than speed critical
Sid#2121: i mean, i'm concerned about speed to the extent that i don't want to do everything on cpu lol
chilli#5665: right
chilli#5665: well, I assume you don't want to recompute stuff like matmul here?
Sid#2121: hm, probably not? I'm not really sure what kind of slowdowns you would be looking at there
chilli#5665: (or do you)
Sid#2121: i guess about 100% lol
Sid#2121: or close to
Sid#2121: i'd be open to it if the memory saving was dramatic
chilli#5665: well
chilli#5665: I mean
chilli#5665: this is standard large transformer training, right?
|
chilli#5665: you checkpoint between every single layer
Sid#2121: yeah
chilli#5665: so there, you're already computing your entire forwards in your backwards
chilli#5665: so there's not really any savings to be had there
chilli#5665: (well, mostly...)
Sid#2121: i feel like my understanding of gradient checkpointing is kinda lacking, because i'm not really sure how the granularity affects it
Sid#2121: like in deepspeed you have the option to gradient checkpoint every 'n' layers
Sid#2121: and a smaller n means more memory savings
Sid#2121: so i'd assumed if you could go even more granular than a single layer, you could eke out even more savings
chilli#5665: mm
kindiana#1016: there's a tradeoff
chilli#5665: smaller n doesn't really mean more memory savings
kindiana#1016: between single layer activation size
kindiana#1016: and residual checkpoint size
chilli#5665: I don't understand that terminology
chilli#5665: although I can guess what you mean
chilli#5665: basically, the larger your `n` is, the less memory you're saving from your forwards pass for your backwards pass
kindiana#1016: yeah I made up those terms
chilli#5665: lol
kindiana#1016: but your peak memory usage is basically layers / n * residual size + n * temporary_for_one_layer * n
|
kindiana#1016: so depends on the ratio of residual size vs temporary_for_one_layer
chilli#5665: :thonk:
chilli#5665: I think just giving the intuition is clear enough
chilli#5665: like, if you save stuff less often
chilli#5665: then you obviously use less memory
chilli#5665: (that's what you're calling residual_size here I believe)
chilli#5665: but in the limit, imagine you only saved the input
chilli#5665: then in the backwards, you're recomputing your entire forwards + backwards anyways
Sid#2121: https://arxiv.org/pdf/1904.10631.pdf reading this paper and they have a strategy where they store all activations except the ones that are trivial to recompute (norms, relus etc), kinda interesting
Sid#2121: reduces activation memory by ~50% in their example but only a 1% increase in computation
chilli#5665: that's kind of related to what I'm doing now
Sid#2121: yeah i was gonna ask, sounds pretty similar
chilli#5665: except that with fusion you can usually get improvements in computation
Sid#2121: would be nice to have a model agnostic way to do this stuff though
chilli#5665: wdym
chilli#5665: https://colab.research.google.com/gist/Chillee/c7667dd02fa08fb62edf1b77285f0374#scrollTo=G7CGthC7cLJo
chilli#5665: this is an example btw
Sid#2121: I mean, just switching between neox/deepspeed and huggingface, they use different checkpointing strategies
Sid#2121: and additionally you generally need to implement it into the model yourself, and i'm lazy, so it would be great to have an autograd like strategy where it just does it for you lol
Sid#2121: but i'm not really sure how feasible that is, engineering wise
|
chilli#5665: where simply recomputing the entire forwards + a fuser both lets you use way less memory
chilli#5665: as well as gets you like ... a 2x speedup
Sid#2121: still waiting for torch to install :berk:
chilli#5665: lol
chilli#5665: yeah it's kind of annoying
chilli#5665: ๐ฆ
Sid#2121: is jit compilation really adding that much overhead?
Sid#2121: or am i not understanding where the speedup comes from in this case
chilli#5665: the speedup here is coming from the fact that we're writing and reading out less to global memory
chilli#5665: to steal some from my TVMCon slides
chilli#5665: lol
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/920239598716731452/unknown.png
Sid#2121: did i miss your TVMCon talk, or has it not happened yet
chilli#5665: Like, do you understand why fusion helps in general?
Sid#2121: yeah
chilli#5665: hasn't happened yet
chilli#5665: basically, you're trying to minimize global memory reads/writes
chilli#5665: so, imagine you're calling `x.sigmoid().sigmoid().sigmoid()`
chilli#5665: autograd would normally save every value after the sigmoid
chilli#5665: so, for your forwards pass, you would end up doing one global memory read, and then 3 global memory writes
|
Sid#2121: and i guess for an activation function the memory writes are pretty significant compared to the time spent in the function itself?
chilli#5665: for pointwise functions, they are pretty much *completely* spent in memory writes
chilli#5665: that's why people often call these ops "bandwidth bound" operations
chilli#5665: You might be getting like ... 400 gigaflops on them
chilli#5665: (when your GPU should have 10 teraflops)
chilli#5665: So essentially, with pointwise operations, adding more pointwise operations into your fusion group is essentially free
Sid#2121: I don't really have a good image of how much time is spent in these pointwise ops /memory rws in say, a transformer forward pass
chilli#5665: haha
chilli#5665: that's a different discussion
Sid#2121: remind me when your talk is btw? I'm gonna set a calendar reminder
chilli#5665: it's on Thursday I think?
chilli#5665: I mean, it's just recorded
chilli#5665: https://www.tvmcon.org/schedule/
Sid#2121: ah ok, so it's not going out live or anything
chilli#5665: yeah
Sid#2121: I'm still in pandemic mode where "talks" mean a zoom call
chilli#5665: but yeah, to close the loop on this, if you recompute your bandwidth bounds in your backwards, you can often do *less* global memory reads/writes
chilli#5665: while saving less memory
chilli#5665: (even though you're technically doing more flops)
Sid#2121: damn
|
```
eager
2911.428213119507
Torchscript
973.2153415679932
Op Authoring
401.857852935791
Native
288.06614875793457
```
chilli#5665: yeah, you can try changing around the pointwise ops
chilli#5665: and you should get similar results
Sid#2121: tried it out with softmax and damn, why is functional so much slower
Sid#2121: ```
eager
2922.1510887145996
Torchscript
1359.839916229248
Op Authoring
864.3395900726318
|
Native
5199.648141860962
```
chilli#5665: lol, code?
chilli#5665: I'm a bit skeptical
chilli#5665: lol
Sid#2121: ```python
from functorch.compile import memory_efficient_pointwise_fusion
import time
import torch
from torch.profiler import profile, record_function, ProfilerActivity
from functools import partial
def gelu(x):
return x * 0.5 * (1.0 + torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x)))
def softmax(x):
return torch.exp(x) / torch.sum(torch.exp(x), dim=-1, keepdim=True)
|
compiled_softmax = memory_efficient_pointwise_fusion(softmax)
def bench(f, sz):
iters = 1000
repeat = 10
warmup = 5*repeat
inp = torch.randn(sz, device='cuda', requires_grad=True)
for _ in range(warmup//repeat):
x = inp
for _ in range(repeat):
x = f(x)
x.sum().backward()
def run():
torch.cuda.synchronize()
begin = time.time()
for _ in range(warmup//repeat):
x = inp
for _ in range(repeat):
|
x = f(x)
x.sum().backward()
torch.cuda.synchronize()
print((time.time()-begin)*1e6/iters)
run()
sz = 2**25
print("eager")
bench(gelu, sz)
print("Torchscript")
with torch.jit.fuser("fuser2"):
bench(torch.jit.script(softmax), sz)
print("Op Authoring")
with torch.jit.fuser("fuser2"):
bench(compiled_softmax, sz)
print("Native")
bench(partial(torch.nn.functional.softmax, dim=-1), sz)
```
Sid#2121: all the other implementations probably use the numerically stable version lol
Sid#2121: so maybe it's unfair
|
chilli#5665: hmm
chilli#5665: yeah, not sure
chilli#5665: lol
chilli#5665: one possibility might be
chilli#5665: that this is simply the softmax issue that Triton discovered
chilli#5665: where if you can fit your entire row into shared memory
chilli#5665: then Torch's softmax kernel is pretty suboptimal
chilli#5665: yeah, I strongly suspect it might be this: https://oneflow2020.medium.com/how-to-implement-an-efficient-softmax-cuda-kernel-oneflow-performance-optimization-sharing-405ad56e9031 @Sid
mosmos6#6847: Hello everyone! I'm new to here. Nice to see you. The-eye.eu is said to be under maintenance until 13th Dec but it's still unaccessible. I tried to slim step_383500 by myself but it lacks meta. That'd be great if anyone knows alternatives. Thanks!
ethan caballero#6044: Looks like Tsinghua University has the most most ml compute of any university in the world.
Tsinghua University operates Sunway TaihuLight supercomputer:
https://www.google.com/search?q=Tsinghua%2C+Sunway+TaihuLight+supercomputer
https://www.nextplatform.com/2021/02/10/a-sneak-peek-at-chinas-sunway-exascale-supercomputer/
https://twitter.com/ethancaballero/status/1470548715979083778
https://twitter.com/RotekSong/status/1470596837438816260
m_wAL99#1923: https://ieeexplore.ieee.org/document/9220761
m_wAL99#1923: https://crad.ict.ac.cn/CN/10.7544/issn1000-1239.2021.20200967
janus#0150: How does this compare to corps/govs
xloem#0717: https://syncedreview.com/2021/12/14/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-165/
CRG#8707: > addresses the concerning quadratic time and space complexity of transformer architecturesโ self-attention mechanisms.
|
Wrong, the time complexity is still O(n^2)
StellaAthena#3530: @CRG can you name a situation where the space improvement would matter? I was unable to come up with one
CRG#8707: Low budget?
StellaAthena#3530: By low budget you mean "don't even have a V100" right
CRG#8707: Yeah
xloem#0717: if somebody can save money, or more people can contribute, i hope it's helpful. but it's true, i don't really understand it.
StellaAthena#3530: It does decrease the space complexity, the issue is more that it doesn't really matter in any realistic usecase
Sid#2121: sure it does
Sid#2121: e.g I want to finetune a large model on a small gpu
StellaAthena#3530: ...
StellaAthena#3530: oh, duh
Kharr#7888: We've seen a lot of this with GPT-J already -- even though it is opensource, not everyone can access a big TPU to tune it
StellaAthena#3530: Yeah, I just had a brain fart and was only thinking about *pretraining*
Kazumi#1297: with any training I can do with my resources, memory is the limiting factor, not the training or even inference time
onealeph0#1502: Any news on when Pile will be available again?
HanakoMasaki[Cactuar]#0015: What do you guys make of InfoLOOB? Can anyone explain in an understandable way way it seems to do so much better than condclip
alstroemeria313#1694: condclip?
alstroemeria313#1694: when compared to infonce it doesn't saturate so easily, infonce is mostly satisfied once it can classify a positive pair correctly vs the negative distractors and the gradient scale for that pair drops way down
HanakoMasaki[Cactuar]#0015: so level of how much it discerns is the difference ?
HanakoMasaki[Cactuar]#0015: why didn't condclip just start that way ? was there a reason for it to require more to reach what it considers satisfaction of conditions ?
|
HanakoMasaki[Cactuar]#0015: I guess I'm asking if there is some disadvantage to InfoLOOB I'm not seeing
alstroemeria313#1694: higher variance of gradients, trains less well, apparently.
alstroemeria313#1694: this is why CLOOB is a thing
HanakoMasaki[Cactuar]#0015: interesting and thanks for trying to keep it simple enough for me to understand
HanakoMasaki[Cactuar]#0015: CLOOB is absolutely new ground for me
HanakoMasaki[Cactuar]#0015: doesn't seem like I'll be seeing the last of it anytime soon either
alstroemeria313#1694: also i think InfoLOOB came out after InfoNCE, being inspired by it
jordiae#4107: do we know how they are fine-tuning GPT-3? https://twitter.com/OpenAI/status/1470804063251927040
๐
ฌ gabriel_syme ๐
ฌ#3220: adapters? ๐
Kharr#7888: Guaranteed. I expect they are using the MS version https://arxiv.org/abs/2106.09685
EricHallahan#1051: Nah, more likely magic.
EstebanSir#2189: openai davinci is so expensive
kurumuz#5695: yeah i think its this one too. we couldnt get good results with LoRA but im sus of our implementation
Kharr#7888: I've had better success with just tuning the bias terms https://arxiv.org/abs/2106.10199
kurumuz#5695: @finetune
finetune#0907: not sure what could be wrong with it
mistobaan#2737: Does anyone know a vocabulary dataset? when each word is defined from previous words? I
CRG#8707: The gopher finetuning results were interesting: https://cdn.discordapp.com/attachments/729741769738158194/920708692935053352/d20fda5a303d2b03d1ef2db842c2b844.png
HanakoMasaki[Cactuar]#0015: why does InfoLOOB have the issue of variance to begin with in simple terms if possible. I can now appreciate what CLOOB is gonna bring to the table
HanakoMasaki[Cactuar]#0015: I've tried reading the PDF writeup on it time and time again but it just kinda isn't clicking for me on the variance part
|
alstroemeria313#1694: i don't understand it fully tbh
Sid#2121: Does anyone know what optimizer OAI use lol? I'm interested to know whether Adam will work well for the PALMs experiments they did (i.e finetuning on really small data), or if you'd have to use a different optim maybe less reliant on momentum
Sid#2121: in the PALM paper all they say is "we use the finetuning API" ๐คฆ good science
bmk#1476: I thought the palms paper at least mentioned the lr and bs
kindiana#1016: idk I feel like you should be telling us
Sid#2121: don't you both work at OAI lmao
Sid#2121: yeah it only mentioned LR and BS
StellaAthena#3530: Yeah but it's \~\~super secret\~\~
bmk#1476: I mean, can't you just try all the optimizers
StellaAthena#3530: This is a joke, right
bmk#1476: ..no? how hard would it be to try SGD and 3 different common configurations of Adam
kurumuz#5695: hmm, our tuning on minimal data did work just by tuning the whole model with adam
kurumuz#5695: not the most efficient probably :berk:
Sid#2121: how minimal is minimal?
kurumuz#5695: its 20x8x2048 tokens
kurumuz#5695: worked quite well with our visible thoughts dataset. we stole their LR schedule too
kurumuz#5695: lol
kurumuz#5695: from the PALMS paper
Kharr#7888: Adam works for really small datasets. I've played around a lot with finetuning in 100 steps.
๐
ฌ gabriel_syme ๐
ฌ#3220: I never heard of the PALMS paper till now
|
Kharr#7888: On toy data like MNIST you can get 82% accuracy in 10 steps with bs of 10 (100 examples total) starting from random init. That's why I started disregarding papers which test on MNIST :berk: (this is a really fun experiment to try for newbies since the goal is to get the model to learn features which generalize to the rest of the data from only 10 examples of each class).
๐
ฌ gabriel_syme ๐
ฌ#3220: wait which one is the PALMS paper? is it the adapting models for society?
bmk#1476: yeah
๐
ฌ gabriel_syme ๐
ฌ#3220: aha thx! was looking for OAI and couldn't find it in there
igoro#7477: anyone know how the LoRA / BitFit approaches perform compared to P-Tuning?
igoro#7477: i've seen the claim that P-Tuning can be comparable to finetuning
Sid#2121: they can all be comparable to finetuning, if you pick your datasets in the paper right ๐
Kharr#7888: Depends on model size and amount of data. The bigger the model the less params you have to shift and the less data you need
igoro#7477: Hmm, right. From a quick glance at the papers, it seems to me that LoRA tunes 0.01% of parameters, Bit Fit tunes 0.04%, and P-Tuning tunes 0.1-3%. Does that sound about right?
Kharr#7888: Looks close off the top of my head. So something like LoRA might work really well for models like GPT-3 but less well for small models with 100M parameters.
igoro#7477: Makes sense. Thanks
tpapp157#3643: The CLOOB paper is a bit tricky because the actual code of what they're doing doesn't really match what they're saying on a surface level. Basically they're regularizing the latent vectors via a weighted averaging over the local neighborhood. You can kind of think of it as a soft k-means/centroid adjustment using cosine similarity as the distance metric. It in effect applies a soft gaussian regularization to the latent space at a local level.
tpapp157#3643: The lower variance comes from the regularization encouraging a simpler overall data distribution in the latent space and because each latent vector is adjusted to the centroid of the local neighborhood prior to loss calculation. These two effects are two sides of the same coin.
HanakoMasaki[Cactuar]#0015: well thanks for trying to explain eh heh I'd be lying if I said I followed most of that
tpapp157#3643: The basic math of what they're doing can also be seen as self-attention without any projection matrices. `softmax(X * X.T) * X`
tpapp157#3643: The attention matrix calculates a relative distance between every pair of vectors and that's used to update those vectors via a weighted average over all other vectors.
tpapp157#3643: But again it's worth noting, this is the math that's happening in their public code but this is not what they describe in their paper/blog.
alstroemeria313#1694: self-attention and cross-attention both, yeah
alstroemeria313#1694: w/ the attention applied along the batch axis.
alstroemeria313#1694: I had not seen anything like it before.
|
tpapp157#3643: Right, you're attending across the batch because each sample has been reduced to a single vector. That's not too surprising in a contrastive setup where your overall loss is calculated relative to the batch composition as well. That's effectively how things like recommender systems often work where you attend a query across a back catalogue of reference vectors (KNN with cosine similarity as the distance metric).
HanakoMasaki[Cactuar]#0015: so I've gotten to see InfoLOOB in action on its own I wonder how much longer before we'll see CLOOB available to take a stab at
HanakoMasaki[Cactuar]#0015: I can just imagine how much better it's gonna get
HanakoMasaki[Cactuar]#0015: I've been team VQGAN for so long that I'm just now getting to see the benefits of diffusion and they're impressive
tpapp157#3643: You can also think of it as a soft form of VQ in the sense that VQ is often just K-means by a different name.
HanakoMasaki[Cactuar]#0015: ah that I didn't know
alstroemeria313#1694: I got the RN50x4 CLOOB model trained on yfcc working for generation
alstroemeria313#1694: It was better than I expected from an RN50x4
alstroemeria313#1694: But I really want a big ViT model trained on a super diverse dataset.
kurumuz#5695: some convolutions would be nice
alstroemeria313#1694: gwern just sent me this link, thoughts? https://www.lesswrong.com/posts/iNaLHBaqh3mL45aH8/magna-alta-doctrina
cfoster0#4356: I found it quite a bit more confusing and difficult than the previous post, on the brain as a universal learning machine
alstroemeria313#1694: how come he calls entropy "proportional to weight magnitude"
alstroemeria313#1694: I have considered regularizers like an extension of von Neumann entropy to non-spd matrices
alstroemeria313#1694: like, calculating this value and penalizing it
alstroemeria313#1694: but it required doing an SVD of each weight matrix each iteration so it is slow
alstroemeria313#1694: it was not actually proportional to weight magnitude, though.
alstroemeria313#1694: the way weight decay is.
HanakoMasaki[Cactuar]#0015: Is there a colab I can poke around with yet of that?
alstroemeria313#1694: it wasn't great enough for me to make one
|
alstroemeria313#1694: however it did actually work
HanakoMasaki[Cactuar]#0015: Not great enough?
alstroemeria313#1694: yeah it was not as good as OpenAI CLIP
alstroemeria313#1694: like, did not know about as much stuff
alstroemeria313#1694: training set was much smaller
kurumuz#5695: when are we training a big clip
kurumuz#5695: with a good text encoder
HanakoMasaki[Cactuar]#0015: That's the open source CLIP yeah?
kurumuz#5695: CLIP is already open source
HanakoMasaki[Cactuar]#0015: Oh well ok what's the big thing with OpenAI CLIP
StellaAthena#3530: It's big
HanakoMasaki[Cactuar]#0015: Bigger than CLIP as I've known it?
HanakoMasaki[Cactuar]#0015: Pretty sure everything I've used up until now has been CLIP
HanakoMasaki[Cactuar]#0015: Might need to track down a colab that uses that while I'm at it that OpenAI one
HanakoMasaki[Cactuar]#0015: How much bigger we talking?
HanakoMasaki[Cactuar]#0015: As I was pretty sure CLIP was big as it is
cfoster0#4356: They did not release the biggest CLIP model IIRC
cfoster0#4356: Which is roughly the same size as BERT Large?
HanakoMasaki[Cactuar]#0015: First I've heard of this BERT
alstroemeria313#1694: They did not.
|
HanakoMasaki[Cactuar]#0015: Bet we can't play with that one either or it'd already be out there
HanakoMasaki[Cactuar]#0015: No incentive most likely
HanakoMasaki[Cactuar]#0015: To allow us access to a bigger CLIP
HanakoMasaki[Cactuar]#0015: Is that the resnet one that I've seen mentioned?
cfoster0#4356: ~~No the largest CLIP model is another ViT~~
cfoster0#4356: ~~I'm pretty sure~~
cfoster0#4356: https://cdn.discordapp.com/attachments/729741769738158194/920808121339875368/Screenshot_20211215-164218_Adobe_Acrobat.jpg
HanakoMasaki[Cactuar]#0015: Wow 2 RN50s I've never even heard of same with the ViT
cfoster0#4356: It's actually kinda hard to tell from these tables :thonk:
kurumuz#5695: wait what is the difference between b32 and b16
kurumuz#5695: pretty sure that resnet50x64 is bigger lol
HanakoMasaki[Cactuar]#0015: Weird to think of resolution in terms of text input
guac#4716: patch sizes?
cfoster0#4356: B vs L is like BERT Base vs BERT Large, and the numbers are patch sizes IIRC
HanakoMasaki[Cactuar]#0015: Not that it likely matters as we're never getting those other ones
kurumuz#5695: dude look at the RN50x64 though
HanakoMasaki[Cactuar]#0015: Could P100 even handle them even if we could?
kurumuz#5695: that is definitely bigger than the biggest VIT they have
cfoster0#4356: That's pretty huge
kurumuz#5695: yes
|
HanakoMasaki[Cactuar]#0015: Well there's that at least
kurumuz#5695: like that resnet50x64 seems to actually have a decent text transformer...
HanakoMasaki[Cactuar]#0015: I was finding it hard to get excited about something that might have been too beefy for the resources most of us have access to
bmk#1476: sad brrrr noises
HanakoMasaki[Cactuar]#0015: I kinda wish it was too beefy as now I'm sad knowing it would work just fine if they'd just let us have it
alstroemeria313#1694: The ViT-L are probably better than it
HanakoMasaki[Cactuar]#0015: This is just like how they won't let us have DALL-E
HanakoMasaki[Cactuar]#0015: At least with that one the community kinda built their own
HanakoMasaki[Cactuar]#0015: I'm guessing building our own CLIP to rival the full one would be too much though
cfoster0#4356: Honestly we've likely learned more along the way than we would've had they just released it
HanakoMasaki[Cactuar]#0015: I don't see the logic of even letting us have the CLIP we have now though
HanakoMasaki[Cactuar]#0015: What made it an acceptable release but not the full one
cfoster0#4356: They've been doing "staged releases" for a lot of their models
cfoster0#4356: Like starting with GPT-2
HanakoMasaki[Cactuar]#0015: So we could in fact still see the full thing eventually
HanakoMasaki[Cactuar]#0015: We'll get it as they perfect some even better one xD
Deleted User#0000: I got a RTX 3060Ti and Tesla P4 in the same machine
Deleted User#0000: is this combination any good for gpt-j/gpt-neo (i know gpt-neo-1.3b runs on the 3060Ti)
Deleted User#0000: not using nvlink or SLI
Deleted User#0000: or neox if thats out
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.