but how much vram is needed to generate texts with gpt-neo?
Slack#2746: im considering to buy rtx3060ti with 8g vram
its enough?
gabriel_syme#3220: depends on the model really, there's a bunch of neos
Slack#2746: ah yeah
Slack#2746: im considering to use 1.3B model
Slack#2746: if possible, i'll use 2.7b
Orz#3023: ohh
then 3080ti should be more than enough for 1.3B
Slack#2746: wait 3080ti isnt enough?
Orz#3023: :thinkies:
Orz#3023: It is enough
Slack#2746: oh https://cdn.discordapp.com/attachments/729741769738158194/892356382928547880/unknown.png
Slack#2746: so 2.7b needs more than 8gb?
gollark#3909: It should probably fit, try it and see.
ilovescience#3282: this is relevant: https://cdn.discordapp.com/attachments/729741769738158194/892599945432014868/Strip-Les-specs-cest-du-code-650-finalenglish.png
Louis#0144: #off-topic pls
Louis#0144: This is gonna get lengthy
CKtalon#7792: ok
nshepperd#2316: uh, i should probably be gradient clipping with my rl model huh
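For reference, gradient clipping in PyTorch is one call between the backward pass and the optimizer step. A minimal sketch, with placeholders standing in for the actual RL setup:
```python
import torch
import torch.nn as nn

# Placeholders standing in for the real RL model and training data.
model = nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
batch = torch.randn(4, 8)

loss = model(batch).pow(2).mean()
loss.backward()
# Rescale gradients so their global L2 norm is at most 1.0, bounding the
# size of any single update and damping spikes from outlier batches.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()
```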
gabriel_syme#3220: increasing LR in smaller models seems to help, up until it creates those 'catastrophic' learning events
https://wandb.ai/production/2ff677a71/gabriel_syme/GPTJ-Architext-SL_large/reports/3M-params-256-batch-size--VmlldzoxMDY3NDkz
nev#4905: https://cdn.discordapp.com/attachments/729741769738158194/892748479296831498/IMG_20210929_152333.jpg
nev#4905: a spike zoomed in
nev#4905: so it's not just solved by gradient clipping
nev#4905: I wonder what samples it's trained on when it fails
CRG#8707: what's your adam b2?
gabriel_syme#3220: let me check (although these are common in higher LR models)
gabriel_syme#3220: I don't think I changed it but let me make sure (this is the gpt-j codebase)
gabriel_syme#3220: I have to check that since there is some crazy consistency there (although I did shuffle properly this time)
gabriel_syme#3220: hmm I think it's the default in the code, so my guess is 0.999. I did not see that; I remember the discussion was that 0.95 was a better value, right?
CRG#8707: Yeah, I think it's supposed to cause less spikes
gabriel_syme#3220: I'll give it a shot next time, too late now lol
gabriel_syme#3220: I'm 110 runs in
CRG#8707: megatron/GPT-3 use it
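In PyTorch terms, the change under discussion is just the `betas` argument to Adam (defaults are `(0.9, 0.999)`). A sketch with a placeholder model:
```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # placeholder model

# Lowering beta2 from the 0.999 default to 0.95 shortens the second-moment
# averaging window, so the optimizer adapts faster when gradient scales
# shift; this is the setting reported above to reduce loss spikes.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.95))
```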
Louis#0144: Wait
Louis#0144: .goose
gabriel_syme#3220: yeah I remember, I had it also on DALLE
BATbot#1024: https://cdn.discordapp.com/attachments/729741769738158194/892754666318860298/goose.jpg
Louis#0144: Oh it works here too
Louis#0144: Ok
Louis#0144: I thought it was only off topic
gabriel_syme#3220: because of that discussion that sid and you and others had
kurumuz#5695: man 0.95 is so much better
kurumuz#5695: like holy shit
kurumuz#5695: lol
gabriel_syme#3220: ye sucks, wonder if it breaks the whole experiment if I change it lol
gabriel_syme#3220: I guess..final loss might be the same? Although these spikes very often result in catastrophic loss of performance (the model simply never comes back completely)
kurumuz#5695: i had really different loss curve
kurumuz#5695: and converged at a better loss faster ig
kurumuz#5695: yeah
gabriel_syme#3220: hmm I'll try it on the smaller runs I think
gabriel_syme#3220: those take like 1h each
gabriel_syme#3220: after I'm done, I'm at the bigger models now and it hurts
StellaAthena#3530: π₯³ π₯³ π₯³ π₯³
https://twitter.com/huggingface/status/1443246197779664903
Deleted User#0000: nice, they fixed the "model is too big for inference" error π₯
EricHallahan#1051: Yes, they did! Transformers v4.11.0 fixed a lot of lingering things that made working with our models in Transformers more painful than it should have been.
kurumuz#5695: π
Zippy#1111: omg .. how many vrams does that need to run?
Zippy#1111: aka would it run on a 3090 :overfloosh:
Zippy#1111: nvm I guess I'll just try it
kurumuz#5695: yes
Zippy#1111: omg that's awesome. π
Zippy#1111: Are you sure? The model itself is like 22.5gb
kurumuz#5695: at fp16 yeah.
Zippy#1111: ah ok
Sid#2121: How did it work out with serving / checkpoint loading in the end? do they serve in fp32?
EricHallahan#1051: There are technically two checkpoints, but restrictions on how Model Hub works mean that there are actually three branches.
There are really three arguments to `.from_pretrained()` that are important to consider when using GPT-J from Transformers: `revision`, `torch_dtype`, and `low_cpu_mem_usage`.
- `revision` represents the model branch that Transformers will pull from Model Hub. By default it will pull from the *`main`* branch (which contains a single-precision checkpoint), but the checkpoint precision can be explicitly defined using `revision="float32"` or `revision="float16"`. It is extremely important to note that these have near-identical downstream performance, so unless you really need the full single-precision checkpoint for academic reasons it is recommended to use the half-precision checkpoint to reduce download times and storage requirements.
- `torch_dtype` is a `torch.dtype` that sets the precision of the model when loaded. If you have a GPU with support for half precision, explicitly set `torch_dtype=torch.float16`. If your hardware does not support half-precision computation (almost all CPUs), it will raise an exception and you will need to load the checkpoint at single precision. This decision is fully independent from the `revision`.
- Unless it gives you issues, use `low_cpu_mem_usage=True` to prevent OOMs on resource-constrained systems.
Zippy#1111: :peeka: you are an epic super human. https://cdn.discordapp.com/attachments/729741769738158194/892831073959047218/unknown.png
Zippy#1111: dang fp16 only uses like 13 gb.
Zippy#1111: But I'm assuming that the cpu has to do the fp16 conversion before use?
Zippy#1111: Every single time?
Zippy#1111: Or maybe I can convert and then "save pretrained".. and load from that directory?
Zippy#1111: I just noticed that it took a while before anything happened on the gpu.
Zippy#1111: Just a lot of cpu ram usage.
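For what it's worth, the convert-then-save idea above maps onto standard Transformers calls. A sketch (the local path is hypothetical):
```python
import torch
from transformers import AutoModelForCausalLM

# One-time conversion: load, cast to fp16, and save locally.
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
model.half()
model.save_pretrained("./gpt-j-6B-fp16")  # hypothetical local path

# Later loads read the already-converted weights and skip the cast.
model = AutoModelForCausalLM.from_pretrained(
    "./gpt-j-6B-fp16", torch_dtype=torch.float16, low_cpu_mem_usage=True
)
```
(As discussed below, the hosted *`float16`* branch already makes the one-time conversion unnecessary.)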
EricHallahan#1051: I had actually proposed in <https://github.com/huggingface/transformers/pull/13022#issuecomment-905128951> adding preambles to the model cards explaining the difference between the branches. Unfortunately, Model Hub will only show the model card in *`main`* and it is impossible to preview model cards on other branches, so the concept was shelved.
*`float32`*
> This checkpoint of GPT-J 6B is stored in single precision and is most suitable for academic and research applications that require downstream performance as close to the original as possible. This 23.4 GiB checkpoint can be readily cast to lower-precision formats such as half precision and bfloat16 after loading. Given that there is no statistically significant difference in downstream performance when GPT-J 6B is run with reduced precision, it is recommended to use the alternative half-precision checkpoint in prototyping and production applications.
*`float16`*
> This checkpoint of GPT-J 6B is stored in half precision and is most suitable for prototyping and production applications where speed and resource constraints are critical factors. This 11.7 GiB checkpoint can be readily cast to other floating-point formats for use on hardware that does not support half precision, a usage that saves both time and storage space over a higher-precision checkpoint. Half precision comes at the cost of slightly different performance in downstream tasks, and it is recommended to use the alternative single-precision checkpoint in academic and research applications where this is not acceptable.
Zippy#1111: Ah interesting
EricHallahan#1051: This is the clearest language I know of that explains the difference and intent.
EricHallahan#1051: Unfortunately 99% of users of the Transformers port of GPT-J will never know that there are multiple checkpoints. :grimberk:
Zippy#1111: I'm actually unsure how to choose a specific branch using the huggingface library.
Zippy#1111: would it be like.. `EleutherAI/gpt-j-6B/tree/float16`
Zippy#1111: Or something like that
Zippy#1111: Or do you have to use the git interface
Sid#2121: ^ Eric just said
Zippy#1111: Oh I'm sorry.. I didn't notice that. I just read from the docs how to use the fp16 model.
Zippy#1111: Via torch_dtype=...
EricHallahan#1051: If you have a GPU with half precision support and don't need to care about perfect accuracy (which nobody except for academics should care about) my recommended code is
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16, low_cpu_mem_usage=True)
```
if you need all the accuracy you can get for academic work and have the memory to spare, use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float32", torch_dtype=torch.float32, low_cpu_mem_usage=True)
```
Zippy#1111: Thanks!! This should be in the docs.
EricHallahan#1051: Every use case is different and I could write out an *extremely long* article explaining every single possible use case and the arguments to use for each.
EricHallahan#1051: And I really don't feel like doing that.
Zippy#1111: lol nice it can already do code generation... I gave it:
```py
prompt = """
# Function generates hough-line transform on an image using opencv
def generate_hough_line_transform(input_image):
"""
```
and it gave me..
```py
# Function generates hough-line transform on an image using opencv
def generate_hough_line_transform(input_image):
# read image in YCbCr form
# convert it to RGB
rgb_image = cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB)
# scale range
min_x = img_width / 2
max_x = img_width / 2 + img_width
min_y = img_height / 2
max_y = img_height / 2 + img_height
range_x = max_x - min_x
range_y = max_y - min_y
range = range_x * range_y
for x in range(0, range_x):
for y in range(0, range_y):
# get
```
EricHallahan#1051: Well it is GPT-J so... what else did you expect. :berk:
Zippy#1111: Of course.. I just mean for the revision aspect.. everything else just requires knowledge about ML stuff.. but the revision stuff is transformers specific.
EricHallahan#1051: If you have already downloaded the checkpoint from *`main`* or *`float32`* just use that and don't worry about trying to switch to *`float16`*.
ilovescience#3282: how Google is using MUM for Lens and Search:
https://blog.google/products/search/how-ai-making-information-more-useful/
ilovescience#3282: Is there any information on the MUM model itself?
Zippy#1111: Of course.. I just noticed that when using the 'main' model and giving it `torch_dtype=torch.float16`, it spent a pretty long amount of time just prepping the model to send to the gpu.. I guess it was trimming the params to fp16? @EricHallahan
Zippy#1111: I meant fp16 sorry
EricHallahan#1051: Yeah, I guess.
EricHallahan#1051: I have not tried the release yet so I really don't know how it performs versus the dev versions.
Zippy#1111: So I've downloaded 33 gb worth of gpt-j this morning already haha
Zippy#1111: I'll find out.. about to run the fp16 model.
EricHallahan#1051: Also today marks exactly six months since the release of the GPT-Neo port to HF.
Zippy#1111: This was from the 'main' branch converted to fp16.
Zippy#1111: Congrats π You guys move quick!
Zippy#1111: It seems like the fp16 model and the 'main' model take about the same amount of time to load.
Zippy#1111: I wonder what it's doing..
Zippy#1111: Even with the fp16 model, it's a lot faster to use it without `torch_dtype=torch.float16`, but it will run as fp32.. Sort of like it doesn't know that it doesn't need to perform any type of conversion and can just run with those presets?
EricHallahan#1051: You're running into a restriction of the HF model loader, unfortunately.
EricHallahan#1051: It is not particularly well optimized.
Zippy#1111: Yeah I guess π¦
Zippy#1111: I kind of wish pytorch / transformers would do a rust backend implementation similar to the huggingface fast-tokenizers library. The speedup is pretty incredible.
Zippy#1111: Although I do know that rust does not have good gpu support.
alstroemeria313#1694: https://archive.is/IOfii
alstroemeria313#1694: This is trending on Twitter rn
Zippy#1111: But like.. AI are so idiot savant right now.. and maybe will always be.
bmk#1476: im going to be charitable and assume that it's only cringe because the news got their hands on it and that he actually has reasonable alignment takes
bmk#1476: if someone ends up reading his book for some reason please update us on what his alignment takes are
Daj#7482: He apparently denies evolution according to his Wikipedia
Daj#7482: Or rather, posits intelligent design
Zippy#1111: Oh jebus.
alstroemeria313#1694: Wait what.
Daj#7482: ¯\\_(ツ)\_/¯
Daj#7482: Seems this was a marketing business guy at Google, not Tech
Zippy#1111: I think AI will never be 'general' until an AI can learn without it being a part of a contest to best itself or some other attempt to achieve some goal.
Daj#7482: So that plus the giga cringe article I'mma say I'm not optimistic
Zippy#1111: My very super naive take
bmk#1476: lmao
Daj#7482: It's best not to argue too much about what "real" intelligence is or what it needs
Daj#7482: Very unproductive
Zippy#1111: True
Zippy#1111: I'm just thinking in terms of parallels with how human intelligence works.
Zippy#1111: Aka learning from other people without it being a contest.
Zippy#1111: I mean idk
Zippy#1111: I'm naive
Daj#7482: You seem not super familiar with how modern AI systems work, best to probably read up on the literature and watch the discussions. "Contest" is a poor way to phrase things
Zippy#1111: Yeah I don't really have to vocab to describe my idea, and I'm likely very wrong, sorry.
Daj#7482: No worries, don't take my bluntness the wrong way lol not trying to be rude
Daj#7482: We're just explicitly not a beginner discord
Zippy#1111: Oh I know.. I'm not a beginner with respect to programming *at all* .. but I am in terms of AI.. I sort of want to learn from the best though :overfloosh:
Zippy#1111: Feel free to tell me to shush / that I'm wrong with respect to some assumption. *Its sort of what I want because I don't know what I don't know*
alstroemeria313#1694: having some sort of goal is what gives it a direction... not sure we can do anything at all w/o optimizing for *something* even if it's an unsupervised objective
alstroemeria313#1694: Like for our generative language models, the objective is to predict the next token (word or word part) given a series of previous tokens.
alstroemeria313#1694: And we just feed in a ton of text during training.
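Concretely, that objective is plain next-token cross-entropy. A minimal sketch, with random stand-ins for the model output and token ids:
```python
import torch
import torch.nn.functional as F

vocab, batch, seq = 100, 2, 16
tokens = torch.randint(vocab, (batch, seq))  # stand-in input token ids
logits = torch.randn(batch, seq, vocab)      # stand-in language model output

# Each position is trained to predict the *next* token: predictions at
# positions 0..n-2 are scored against the tokens at positions 1..n-1.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab),
    tokens[:, 1:].reshape(-1),
)
```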
Zippy#1111: Yeah and that's sort of what I mean.. With an AI we have to give it some purpose, but humans don't really have any drives other than to eat, sleep, etc.. and we figure out for ourselves what we should be improving at, instead of someone else deciding what we should do / what we should focus on. It's like if a person was born into school, and instead of being taught by a parent who *wants* to teach us how to do things via mutual interest, we are just tested over and over and over until we start to do well on that test.
alstroemeria313#1694: we have a ton of drives i think
alstroemeria313#1694: Like relating to curiosity and social status and stuff like that?
Zippy#1111: Yeah, which gives us a drive to become better at something, whether it be school or work or interpersonal relationships.
alstroemeria313#1694: Yeah. But there are abstract drives underlying the specific things, it's not *nothing*.
Zippy#1111: Yes of course.. and that's what I mean.. It's hard to quantify an abstract drive.
alstroemeria313#1694: ahh
Zippy#1111: An AI's goals tend to be very singular, while a human's goals can often have hundreds or thousands of facets.
Zippy#1111: So while an AI gets good at one thing, so it can achieve one thing, a human is doing one thing, and getting better at one thing, because it feels it may further hundreds or thousands of different partial goals.
Zippy#1111: Which I naively feel could be why AI tend to be idiot savants.. because they can only really apply their knowledge to that one thing they were trained for, and a human might be able to take its one learned thing, and apply it to multiple possible scenarios in completely different contexts. like.. I learned how to do a derivative, so I can now use it in conversation with people who know about derivatives, to further my interpersonal relationships, also it helps feed my curiosity about the complexity of the world, or it might help me get a job, or it might help me with physics... aka integrating this piece of learned knowledge into many different contexts that are already learned.
SecondMover#8029: Ah, but in a diverse environment an AI can learn a ton of different skills because they all turn out to be useful for this one goal at some point
Zippy#1111: True
Zippy#1111: but idk about the multi-goal part.
Zippy#1111: like for an AI, they may have learned those skills because they achieved a singular goal, while a human can learn because it helps achieve many goals.
circuit10#0158: https://youtu.be/8AvIErXFoH8
circuit10#0158: Humans do have goals
circuit10#0158: The goals we decide are what are called instrumental goals
circuit10#0158: Not sure if that video covers that
Zippy#1111: Oh that's cool!
circuit10#0158: But I know this one does: https://youtu.be/hEUO6pjwFOo
circuit10#0158: The video title sounds boring but it's actually really interesting
Zippy#1111: I'm going to watch these π Thanks
circuit10#0158: I'm definitely not experienced with AI things but I really enjoy that person's videos
Zippy#1111: I've definitely felt like this before https://cdn.discordapp.com/attachments/729741769738158194/892855261847515166/unknown.png
Zippy#1111: aka some of my preferences may not be entirely transitive
alstroemeria313#1694: we keep finding more ways to repurpose models trained for a specific thing for other things
alstroemeria313#1694: And sometimes we can find general goals that make the model learn a whole bunch of useful things, like the autoregressive sequence modeling objective (predict the next token) for large LMs.
alstroemeria313#1694: But yeah they're still really narrow in comparison.
Zippy#1111: That's a good point.
alstroemeria313#1694: The autoregressive objective makes the models learn all kinds of stuff bc there are all kinds of relationships in the giant text datasets we train them on that it can learn to make its loss go down.
Zippy#1111: I'm thinking about an AI where it does something that improves it at mulltiple things, because the AI knew it would improve at all of those different goals.
Zippy#1111: Yeah true.
Zippy#1111: I guess it's hard for me to verbalize my idea.. basically like.. maybe an AI that could generate its own tests and loss functions in order to tailor itself to whatever it's trying to accomplish. I feel like *that* would be a pretty decent GAI type entity.
alstroemeria313#1694: So if it can generate its own loss functions why would it not just generate an easy one and then fulfill it?
Zippy#1111: Because of it's innate 'abstract goals' that we were talking about earlier.
alstroemeria313#1694: ah
bmk#1476: imagine not being a monist utilitarian
Zippy#1111: like... the AI knows it's AI girlfriend is going to some college, and for the AI to get into that college, it knows that it needs to pass english 2 (or whatever), so it figures out that it must learn how to pass english 2, and generates a loss function specific to that problem, even though "being with your girlfriend" and "passing english 2" are completely separate goals.
Zippy#1111: And not even in remotely the same context.
bmk#1476: have you read about terminal and instrumental goals yet
Zippy#1111: Well I did watch that youtube video :overfloosh: but no.
bmk#1476: the youtube video should be enough
Zippy#1111: Also yeah I feel like my example is bad.
Zippy#1111: Since it's talking about a singular goal.
Zippy#1111: :Sadge:
bmk#1476: basically you can have as many instrumental goals as you want but monist utilitarians can only have a single terminal goal
Zippy#1111: Yeah
bmk#1476: and pluralist utilitarianism literally does not make any sense whatsoever so everyone is monist
Zippy#1111: So basically everyone's singular goal is essentially dopamine then?
Zippy#1111: I mean actually yeah that makes sense
bmk#1476: i never said that
alstroemeria313#1694: what if the brain uses a sort of architecture where one part is trained to get reward and another part is trained to detect various correlates of inclusive fitness or some such and dole out reward
bmk#1476: all im saying is if you want to be a utilitarian it really only makes sense to be monist, not making any claims about what that one goal is, or whetehr humans are actually utilitarians
alstroemeria313#1694: In that case the reward is just part of the architecture and is not actually a terminal goal of the organism, if it were the organism would just max out reward trivially and not actually do anything ever.
Zippy#1111: I know you didn't.. it's sort of me just trying to figure out what that one thing that everyone is essentially trying to achieve, is.
bmk#1476: that's.. difficult
bmk#1476: to say the least
bmk#1476: (that's a slight understatement)
bmk#1476: here's some relevant posts as to why it's so hard
bmk#1476: https://www.alignmentforum.org/posts/5bd75cc58225bf06703754e8/humans-can-be-assigned-any-values-whatsoever
Zippy#1111: Well yeah, I mean we have biological "goals" that are essentially just abstract suggestions that we procreate, and that surviving is better than not surviving.
bmk#1476: https://www.lesswrong.com/posts/DsEuRrsenZ6piGpE6/humans-aren-t-agents-what-then-for-value-learning
bmk#1476: https://www.lesswrong.com/posts/KCg7NeKQ7MycXWpYd/our-values-are-underdefined-changeable-and-manipulable
Zippy#1111: Interesting
Zippy#1111: I'll read those
bmk#1476: basically:
1. monist utilitarianism is essentially the only thing that makes sense
2. humans arent really monist utilitarians and even if they were it's impossible to tell which thing they actually care about and which things are just cause they're being dumb
Zippy#1111: I mean, everything we do essentially either influences our brain to push the dopamine button or it doesn't. Sometimes things that are generally considered to be good things result in that release, sometimes *doing* the things that other people consider to be good causes that release, and sometimes doing the opposite. I think if we can figure out how the brain decides when we deserve the happy juice, then we can make decent AI.
bmk#1476: why dopamine and not serotonin
bmk#1476: everything we do essentially either influences our brain to push the serotonin button or it doesn't
EricHallahan#1051: Here is an explanation from Stas for further clarity:
https://twitter.com/StasBekman/status/1443298698578960384
https://twitter.com/StasBekman/status/1443299346968039427
bmk#1476: why doesnt it just do that by default
bmk#1476: why would you want the high cpu mem usage
Zippy#1111: Okay then happy juice. The abstract juice that we don't really get to hold the trigger for.. and when we do, it's generally a result of drug abuse.
EricHallahan#1051: It is technically not a stable feature IIRC.
bmk#1476: so then let's just pump heroin into everyone's brains to have the maximum amount of happy juice
bmk#1476: problem solved
EricHallahan#1051: Though I doubt it will induce problems, the idea is that you really don't want to break the critical operation of model loading in production applications.
bmk#1476: let's turn the world into one big fully automated heroin factory, connect everyone up to it, and then we have completely solved the problem, the end, everyone lives happily ever after
wabi-sabi#5811: I'm not my brain, though. I am my gut microbiome, brain is just along for the ride.
Zippy#1111: Well the main problem with comparing it to heroin is that most people don't start.
EricHallahan#1051: And then keep pumping to be extra sure it's filled to the max.
bmk#1476: we can fix that
Zippy#1111: :overfloosh:
bmk#1476: I mean this is what you wanted, right? maximum happiness juice
bmk#1476: problem solved
Zippy#1111: I mean, if the long term goal is to be happy, then we humans know that doing heroin will make us very happy, but it will likely end up resulting in a state void of any happy juice.
bmk#1476: what do you mean
bmk#1476: if your goal is for people to be happy then connect them to the infinite heroin machine
Zippy#1111: It's about long term goals vs short term goals.. If we only ever cared about short term goals, then yeah, heroin is the way to go.
Some Point Process#3793: I think there's more to it than "neurotransmitters" and being optimized to be "happy". This is because I think there are anti-wireheading mechanisms making it so that a tonic ("baseline") increase in pleasure chemicals doesn't amount to more happiness etc in the long run. Instead, we're evolved to seek interesting experiences (such that any constant pleasure stimulus either wears off, increases the baseline level of stimulus required to maintain the same amount of pleasure, or both). Maybe this helped for survival or something, idk.
**The only thing I'm saying with any confidence** tho is that we're evolved somewhat with a 'novelty constraint' (metaphysically, I think we're sort of hardwired in a good way to make the world more interesting than to be happy with technological stagnation)
bmk#1476: ok let's have the AI solve aging for us too so we can have everyone wired up for a billion years to the heroin machine
cfoster0#4356: :gameryes:
bmk#1476: problem solved
EricHallahan#1051: There is nothing to distinguish long-term vs short-term here.
bmk#1476: imagine a billion years of pure bliss from enormous amounts of heroin
bmk#1476: fully automated
bmk#1476: seems like the perfect solution, no?
gollark#3909: I can't say I'm very *effective* at maximizing future expected [WHATEVER NEUROCHEMICAL].
Zippy#1111: well ok.. do you get the happy juice when you think about doing heroin? Or do you get more happy juice imagining that you come up with the next breakthrough in AI? @bmk
gollark#3909: Considering hyperbolic discounting and whatever else.
bmk#1476: I get the happy juice when the heroin is in my veins
Zippy#1111: the idea of doing heroin doesn't necessarily result in more happy juice.
bmk#1476: it absolutely does, during the time that you're doing it
bmk#1476: and if we can fully automate heroin production and solve aging we can wire up everyone forever
Zippy#1111: oh god
gollark#3909: Why does it keep models in main memory at all? Surely only the GPU needs them.
EricHallahan#1051: HF is designed to be simple, not performant. This philosophy only became an issue within the past six months or so.
bmk#1476: I mean this is what you're asking for when you want to maximize happy juices
Zippy#1111: OR we can turn humans into AI by wiring them all with heroin in their brain, and monitoring them all the time, and when they do better at some task, we give them a shot of the happy heroin juice, and if they fail, no happy juice.
bmk#1476: you monster, happy juice is the sole utilitarian objective! how dare you not give people the most happy juice possible!?
Zippy#1111: Because I only care about my happy juice, since that the only goal that I have.
bmk#1476: yeah so we should maximize happy juice
Zippy#1111: But yeah, I don't think this is a good example of why my 'happy juice' theory is wrong :blaze:
CRG#8707: Relevant viral comic from a while back: https://twitter.com/Merryweatherey/status/1185636106257211392
Zippy#1111: Because we don't get to hold the happy juice trigger.
Zippy#1111: And even after heroin addiction, the happy juice gets less and less effective over time.
Zippy#1111: SO if we are trying to increase the amount of overall happy juice, heroin is not a good idea.
gollark#3909: Just make better heroin, then.
Some Point Process#3793: I don't find anything wrong with researching what is causing the effectiveness of pleasure chemicals (including opiates) to decrease so much over time, tbc. There are obviously people who have a much lower hedonic tone than average, and figuring out how to increase that tone will mitigate a lot of suffering for a lot of people
bmk#1476: shit hedonic utilitarians say
Zippy#1111: I mean, it's sort of the way the brain works though :overfloosh: .. if the reward center of the brain gets too stimulated, it takes more and more stimulation to get the same level of happiness.
Zippy#1111: It actually grows in size.
Some Point Process#3793: Also the average person could be well served by being a lot happier because maybe their contributions will increase. Maybe the tendency towards war/violence etc will decrease, provided that they don't need escalating doses etc
bmk#1476: why not just skip that and create artificial humans made entirely of reward center
Zippy#1111: :Smart:
bmk#1476: all that pesky neocortex doing all the thinky things
cfoster0#4356: Yes
bmk#1476: what a waste of brain matter, it doesn't even do the pleasure
cfoster0#4356: Bite the bullet
EricHallahan#1051: I was going to say that the solution is to just get more humans.
cfoster0#4356: It's quite tasty
bmk#1476: actually the true solution is rats on meth but one step at a time
Zippy#1111: We need to select the happiest individuals from every generation, and only let them procreate, and eventually we will have an entirely happy human race.
bmk#1476: how slow
bmk#1476: just fill the universe with reward centers
bmk#1476: who needs limbs or a torso or a neocortex anyways
Zippy#1111: Yes, replace muscle with reward center
bmk#1476: the AI is keeping it alive anyways
cfoster0#4356: One man's reductio is another's paradise
bmk#1476: so might as well ditch all the deadweight
alstroemeria313#1694: oh I was looking for that (I RTed it) to post it but couldn't find it
Zippy#1111: lol nice.. yeah that is relevant.
bmk#1476: I may not be a hedonic utilitarian, but I respect people who bite the bullet and say :gameryes:
Zippy#1111: heroinic utilitarian u mean?
Zippy#1111: well actually
Zippy#1111: same idea
Zippy#1111: :shrug:
Zippy#1111: happy juice isn't necessarily hedonic though.. We can feel good about things that dont bring us instant gratification.
bmk#1476: HU in denial
alstroemeria313#1694: oh you just figure out what causes *that* feeling and induce it artifically *too* π
alstroemeria313#1694: And you just do this for all the different types
Zippy#1111: Well yeah, then you would have someone who is always trying to make things better down the road.. studying some topic so that in several years they may be able to achieve some goal.
circuit10#0158: Another one of the videos mentions exactly that
circuit10#0158: Well not exactly that
circuit10#0158: But gives the example of if curing cancer was a terminal goal, going to university would be an instrumental one
alstroemeria313#1694: You could induce it without needing the person to perform the corresponding action though!
circuit10#0158: I think this is the one with that example: https://m.youtube.com/watch?v=ZeecOKBus3Q
bmk#1476: :goose7:
Zippy#1111: Well I feel like it's the same feeling as the instant gratification type of pleasure, just with a different purpose.. looking ahead instead of looking at the present.
cfoster0#4356: *My brain is a neurochemical synthesizer and I'mma play it like Stevie Wonder...*
Zippy#1111: It's kind of sad how many people just get stuck trying to cheat the reward center.
Some Point Process#3793: I think for humans at least there's a built in discount factor for the (estimated expected) reward
Some Point Process#3793: I personally don't have that long of a "time horizon" at least
Zippy#1111: Yeah true
Some Point Process#3793: (Unrelated shout-out for David Pearce: https://www.hedweb.com/)
Some Point Process#3793: He's part of the Qualia Research Institute trying to find out the neural basis of pleasure (and consciousness). Both he and Scott Alexander are among the board of advisors
Zippy#1111: interesting
Some Point Process#3793: This was also interesting (it was all over the place but got nominated for curation a lot). <https://www.lesswrong.com/posts/zcYJBTGYtcftxefz9/neural-annealing-toward-a-neural-theory-of-everything>
But (unlike neurochemical stimuli) this has to do with other environmental stimuli like observations in your environment. One of the main things was that, given that the brain is a 'predictive processing' (PP) system, its level of stimulation naturally wears off as it habituates (anneals) to an environment. IIRC this is because our conscious percepts (hence level of stimulation) correspond to the residuals (errors) of (sensory) predictions. PP makes the environment more predictable but habituation is bad for mental health if sustained for too long (since we need novelty for some reason). It proposes some ways that novelty can be increased to stay healthy.
Zippy#1111: Yeah makes sense π
Zippy#1111: wow gpt-j is pretty amazing... I'm very impressed.
lc#8952: What would be the time difference if I tried to code a specific neural network from scratch using a non-python language vs. learned+used tensorflow? I've got an upcoming work project I have to do, but am interested in starting from the ground up for pedagogical purposes, and also really dislike python. Also think it might be cool to do some HPC work without worrying about whether or not I'm forgetting to shell out any hard parts to the C code
EricHallahan#1051: Why TensorFlow?
EricHallahan#1051: Of all the popular frameworks it is the least flexible.
kurumuz#5695: well, you'd need to learn backprop, SGD and the specific optimizer you want, then forward and backward ops for your modules
kurumuz#5695: a lot of work ig
kurumuz#5695: python is a good fit for neural networks after all (and python3 is the best programming language around)
lc#8952: Why is python a good fit for neural networks?
EricHallahan#1051: Python is just the glue that holds everything together.
bmk#1476: because everyone else decided that python is a good fit for neural networks
lc#8952: thats what I assumed
kurumuz#5695: It's a good C glue and community grew really big
bmk#1476: which is a self fulfilling prophecy that really does make python the best
bmk#1476: :schellingpoint:
kurumuz#5695: @EricHallahan Takes one to code in C to understand the beautiful things python gives you, and code in assembly to understand the things C gives you
kurumuz#5695: I love the whole stack there
kurumuz#5695: After all, not crazy enough to write whole programs in x86-64 ISA
kurumuz#5695: patches are fine though :berk:
lc#8952: You should try Zig, it's a bit lower level than C but modernized
lc#8952: no char pointers for arrays, etc.
kurumuz#5695: hmm, will take a look
genetyx8#7543: Ever heard of Julia? :thinkies:
genetyx8#7543: Rust might be good too for what you want
lc#8952: This is mostly my take on other 'high performance' languages: https://drewdevault.com/2020/01/04/Slow.html
genetyx8#7543: NNs are significantly more complex than hello world tho. The programmer time it takes to write, debug and modify your program is also important, which is why high level languages exist in the first place
lc#8952: In this case it's the GPUs that are expensive. The programmer time is less than minimum wage
genetyx8#7543: considering how well paid ML engineers and researchers are, I wouldn't be so sure. Besides, from a business/research standpoint, you want to be the first to market/first to publish. If implementing the latest hot NN architecture is order of magnitudes faster in Python than in C (which it is for the average programmer, and even for very good programmers), then you implement it in Python first, because otherwise you get sniped by competitors who do.
umami#2304: http://karpathy.github.io/neuralnets/
umami#2304: You can take a look at this, although it's in python it only uses numpy so it's easy to port to anything
umami#2304: I would use something like torch though which also has a decent C++ API
umami#2304: Going without a framework is significantly more work
nev#4905: rn another chat I'm in is trashing python
gabriel_syme#3220: some people can be really..idk
gabriel_syme#3220: like I wish they had constructive criticism you know. Like spend their time infusing the discussion with ideas on how to improve things
gabriel_syme#3220: I saw this post (obv a LI post) even using the word scam wrt GPT-J, which annoyed the hell out of me
p4bs#8973: Have you seen this? Some Professor Dr. has written quite a nasty post on the work EleutherAI does and GPT-J HuggingFace implementation...
p4bs#8973: https://cdn.discordapp.com/attachments/729741769738158194/893130578512216104/unknown.png
Daj#7482: Just ignore these kinds of people lol
Daj#7482: Not worth your time
Daj#7482: They always come out of the woodwork to gather a bit of attention
Sphinx#2092: https://cdn.discordapp.com/attachments/729741769738158194/893131103852957736/unknown.png
p4bs#8973: * A statue has never been erected in honor of a critic.* - Jean Sibelius
p4bs#8973: you are right, better not to feed them
Daj#7482: I also like the saying "When a respected elderly professor says something is possible, they're probably right. If they say something is impossible, they are definitely wrong."
Daj#7482: :berk:
p4bs#8973: π
Daj#7482: I get tweets like this on my timeline all the time, not worth getting worked up about. "Don't rely on someone understanding something if their salary depends on them not understanding it." Just do good work and pass them by
p4bs#8973: π so many good quotes Connor
Louis#0144: her cv is such a meme
Louis#0144: but yeah
Louis#0144: best to ignore
Louis#0144: ~~point of advice, anyone in CS who refers to *themselves* as Dr. is best to avoid~~
StellaAthena#3530: Eh, IDK about that. A lot of people (esp. women) get condescended at on Twitter a *lot*. I certainly understand the impulse to put a title in your Twitter name, esp. if you're using it primarily for professional purposes
Daj#7482: Putting titles in your twitter bio is cringe no matter what :berk:
Louis#0144: Ah thats a good point
Louis#0144: did not consider that
EricHallahan#1051: The true solution is to be like Ben and not use Twitter.
EricHallahan#1051: (But your point is valid.)
gabriel_syme#3220: that's the one I'm talking about right above. I couldn't help myself there though
Kia#2550: They act like Children... Nonetheless Not worth your time
Kia#2550: Do have a great day and morning:hap:
AI_WAIFU#2844: Don't forget to short their entire existence before you do so tho.
bmk#1476: but the market can stay irrational longer than you can stay solvent, etc
Kharr#7888: Current NLP solutions are in the uncanny valley. Remember when CG in movies was there? "It looked totally fake." Now many movies are 80-90% CG and people show up with popcorn and eat it all up.
Louis#0144: tbf
Louis#0144: practical effects are still really good
Louis#0144: lol
Louis#0144: a lot of directors use a ton of practical effects still
Kharr#7888: There's always going to be old-school folks. My point was just about the uncanny valley and the doubt that normally comes with it. Meanwhile Avengers is raking in $$$ with mostly green screen acting now that we're further out of the valley.
Kharr#7888: I'm already getting spam robo calls with much more realistic sounding voices. It's going to be really annoying in the future.
Daj#7482: What a lot of people don't realize is that the bulk of CGI is just boring background stuff
Daj#7482: It's incredible how much of movies that don't look like they have any CGI is actually CGI
Louis#0144: Oh yeah I had one that was like notably made by tacotron
Daj#7482: Robocalls are only a US thing it seems
Daj#7482: Like most bad things
Kharr#7888: And Canada! (where I am)
Daj#7482: USA lite
Daj#7482: And the UK is USA: Europe flavored
Daj#7482: Recently I talked to someone about whether I'd consider moving to the bay area
Daj#7482: And I just said the truth that doing so would feel like moving to a second/third world country
Daj#7482: compared to where I live
Daj#7482: lmao
Louis#0144: Honestly German politics are worse tho
Louis#0144: :berk:
Daj#7482: No, they really, _really_ aren't
Louis#0144: But yeah I see your point
Daj#7482: Like I cannot describe how much the worst German politics is not as bad as average USA politics
bmk#1476: i recently found a youtube channel about urban planning that took my vague dislike of north american cities and put it in explicit words
bmk#1476: *car dependence*
Daj#7482: American cities are dystopias
bmk#1476: the bay area is infinitely better than edmonton but still car centric as hell
Daj#7482: bmk I think you would be so happy if you just moved to Europe lol
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/893154269136556072/IMG_6682.png
bmk#1476: i'm like a big fan of walkability but also a big fan of skyscrapers so i think id love frankfurt or london or something
Daj#7482: Frankfurt is the worst city in Germany but I think you'd still love it over literally anywhere in NA :berk:
bmk#1476: also trains are awesome
bmk#1476: seriously?
Daj#7482: lol I dunno I don't know anyone that likes Frankfurt
bmk#1476: i, uh, can think of several candidates for worse cities
Daj#7482: it's pretty boring and high crime
Daj#7482: By German standards
Zippy#1111: Shame on Dr. Prof. Mrs. Person Science AI lady. I have had gpt-j running on my 3090 with a FastAPI route so I can test params and do text generation for like 24 hours lol.. Normally with transformer models I try a couple things and get bored.. gpt-j-6B is the only open source model that caught my attention and I've kept running.
bmk#1476: what about karl-marx-stadt
bmk#1476: or, uh, chemnitz, as they call it these days
Daj#7482: Never been, but it's small
Daj#7482: 200K
Daj#7482: I assumed you wanted a big city
bmk#1476: yeah fair point
bmk#1476: the problem with europe is that for some reason the bay area has become the schelling point for, uh this particular brand of weird
Daj#7482: Unfortunately yes, but London has a big EA hub too
bmk#1476: so while european infrastructure is better, wages are terrible compared to the bay and it's in general just harder to build a community
bmk#1476: yeah london sounds really cool
Daj#7482: Living costs are so high in the bay if you're not a top tech worker you'll live much better in Europe
bmk#1476: that's not really true
Daj#7482: Imagine just getting to go to a doctor whenever you want
Daj#7482: Imagine clean streets
Daj#7482: No violent crime
Daj#7482: Seriously you don't understand how much the USA is not a first world country lol
bmk#1476: an entry swe at google saves more money after taxes and rent than wages in europe for a comparable job *before taxes*
Daj#7482: > At Google
Daj#7482: That is top tier
bmk#1476: i mean, why would i go to the bay to work for a non top tier company
Daj#7482: Ask the millions of people that do
bmk#1476: i mean "i" in the literal sense
bmk#1476: i personally would not go to the bay for anything short of a top tier job
Daj#7482: Crime, pollution, political climate, insurance, to name a few
bmk#1476: (if considering just work)
Daj#7482: Not saying bay isn't worth it
Daj#7482: It's just not a free lunch lol
bmk#1476: i get there are problems with the bay and i'd love it if we could take the entire bay rat sphere and transplant it to like amsterdam or something
bmk#1476: but like that isn't happening
Daj#7482: Maybe you're just more compatible with rats than I am :berk:
bmk#1476: maybe the rat sphere that I'm interacting with is different from the one you are
Daj#7482: Too much creepy autism cult cuddle puddles for me :berk:
Daj#7482: Well if either of us moves there we can find out
bmk#1476: I did not witness a single cuddle puddle in an entire week
Daj#7482: ¯\\_(ツ)\_/¯
Daj#7482: Maybe I'll move to the bay one day too who knows, I don't really have a horse in this race
bmk#1476: I'm definitely going to live in Europe at some point, I'm not learning French and German and probably also Dutch at some point for nothing
Louis#0144: Yeah doesn't swe at google start at 200
Louis#0144: Lmao
bmk#1476: even after you take into account rent and taxes and food it's *still* way more worth it in the bay
bmk#1476: kind of sad that it has to be this way but it is
Louis#0144: Yeah but there aren't as many Canadian geese in the bay
Louis#0144: So Edmonton > bay
thenightocean#6100: I think the US is the ultimate high-variance country. Depending on your talent and ambition it can be the best place if you are in the top percentage, and terrible if you just want to have a comfortable middle-class life. I am too much of a midwit to live there and compete with the top talent without any safety net. But I understand why someone like BMK would benefit from living there
Louis#0144: Would u really risk your mental well being for money
thenightocean#6100: its no accident that the most innovative ideas and companies come from the US, and Europe is a sort of museum where you go to chill after you're past your prime. It is a very nice museum though!
bmk#1476: really wish it were possible to have the best of both worlds
Louis#0144: I think most people in this chat are very far towards the upper end though
Louis#0144: Lol
Louis#0144: Know thy audience
thenightocean#6100: I know, I specifically mentioned myself
Zippy#1111: idk pretty much anyone can get a job in webdev, and webdev in the us pays quite a bit.
Zippy#1111: Although if you're going for more of a research role then yeah, it's tough.
ersatz#0001: I still don't understand how basic webdev is >100k in the US
ersatz#0001: that's unreal from Europe
Zippy#1111: I mean, living in the bay area is kind of dumb in my opinion :overfloosh: .. It's completely unreasonable. I mean we work on computers, so we shouldn't need to live near where we work. And deciding to live in an area where someone making 100k a year has to live in a closet just doesn't make sense in any way.
StellaAthena#3530: @ersatz It's not, almost anywhere in the US
Zippy#1111: I mean it kind of is. I got a remote offer for over 100k from a company in florida.
StellaAthena#3530: The Bay specifically is absurdly expensive
StellaAthena#3530: How much experience do you have?
StellaAthena#3530: I (perhaps falsely) read the comment as about entry level
Zippy#1111: Well in terms of programming, I have ~10 years, but in terms of work I have like 2 years.
StellaAthena#3530: Or maybe I'm just even more underpaid than I think xD
bmk#1476: it's not *that* expensive, the number one expense is still federal taxes
bmk#1476: though maybe this is pandemic time pricing
Zippy#1111: Also helps that I was a co-founder with a project manager of the business I was hired at.
ersatz#0001: really? I know 4 people making that much in the US doing fullstack node/react stuff
StellaAthena#3530: The COL adjustment for the Bay Area is 1.86
ersatz#0001: the 4 are remote btw
dmayhem93#3202: Developer salaries really blew up this year, but 100k for webdev was pretty common unless you're in low cost of living areas
bmk#1476: yeah but food is cheap (i.e even if it costs twice as much it still doesn't make up much of your expenditures) and rent has come down during the pandemic
StellaAthena#3530: I live in an expensive area and it's 20% more expensive than where I live (DC)
Zippy#1111: I'm in a low cost of living area :yodaburn: .. literally vermont.. I don't know any developers in my area LOL
bmk#1476: you make a lot more than 20% more though so it balances out
StellaAthena#3530: I make 100k/year
Zippy#1111: What do you do? @StellaAthena
bmk#1476: swes in the bay make a lot more than 120k
ersatz#0001: is it true that your internet is bad and expensive in the US or is it a rumor? I heard you have limits on the amount of data you can use for example?
Zippy#1111: Well depending on where you live, it can be really bad..
bmk#1476: I have never heard anyone from any country think that their country's internet is better than average lol
Zippy#1111: The issue is that for a lot of places, there aren't many options for ISP's.. so you are sort of required to go with one service provider and they have no incentive to improve their service because of lack of competition.
StellaAthena#3530: I do AI research for hire. Orgs with data scientists and social scientists that aren't big enough to have a research division hire me to do research for them. I design and evaluate applied ML tools for them to use for their business contexts
Zippy#1111: Oh cool! But yeah that sounds like a role that should pay a bit more. :overfloosh:
ersatz#0001: I have never heard anyone complain about fiber here and it has become the standard in all major cities
bmk#1476: it's standard here to use cable or dial up
bmk#1476: I have never used fiber
StellaAthena#3530: Do you live in a country where broadband and telecom companies run the government org that's nominally in charge of oversight?
ersatz#0001: that's like €25 (~$30) for 1000 down / 500 up, also a landline with free unlimited calls and ~100 TV channels
ersatz#0001: something like that
ersatz#0001: cable is like 300 down or something?
bmk#1476: that would cost like $115 here
bmk#1476: just the internet
bmk#1476: no landline or tv
StellaAthena#3530: I pay ~80 for gigabit internet only
ersatz#0001: you have limits on data?
Louis#0144: I pay $60 here
bmk#1476: probably even more if you live somewhere with no fiber connection like me
Louis#0144: Ah actually
Louis#0144: I pay $60 for 1.5 gigabit
ersatz#0001: I don't think >300 is that useful to be honest
StellaAthena#3530: This has some very useful info about how telecom companies operate in the US
https://youtu.be/fpbOEoRrHyU?t=434
ersatz#0001: 300 is already more than enough 60fps/4K on YouTube/Twitch
ersatz#0001: and I don't see what would require more speed for 99.9% of people
bmk#1476: downloading Linux isos
ersatz#0001: at 300 the bottleneck is the ftp mirror
inox#5400: I want the piracy tech to catch up with fast connections
inox#5400: like torrenting was optimised for 2000s-era connection speeds, arguably it's less popular now because people are just streaming from sketchy sites instead
inox#5400: but that kind of streaming is uninspiring, like streaming torrenting could be a fun way to abuse 300+ connection speeds https://webtorrent.io/
bmk#1476: the problem with torrents is a) asymmetric upload speeds b) very few seeds for anything that's not super popular
bmk#1476: makes it impossible to torrent obscure goose anime
gollark#3909: The UK has a weird situation with internet connectivity where some arbitrary places have very fast and cheap fibre connections and everywhere else gets "fibre" VDSL. It's very annoying.
Zippy#1111: Well you could make a site dedicated to torrenting obscure goose anime and rally goose anime lovers to your cause.
inox#5400: that's not why they're unpopular now, it's the delayed gratification
inox#5400: imagine instant decentralised youtube clone for anything
inox#5400: that's what modern internet speeds and streaming torrenting could do
gollark#3909: Aren't there several of those?
gollark#3909: There's definitely videos on IPFS.
gollark#3909: But much of the value YouTube provides for video creators isn't just hosting but ad revenue and people actually seeing your videos.
inox#5400: yeah there's definitely some stuff appearing but nothing's exploded yet
inox#5400: for piracy it might happen as the paid streaming services become less useable
wabi-sabi#5811: I don't trust John Oliver videos, it seemed like his research team misrepresented their citations pretty often when I fact checked a couple on CW topics in the past.
uwu1#4864: I'm not sure if people still use private trackers but back then there you would get good seed speeds usually as you needed to maintain a ratio or pay to stay on the site
uwu1#4864: what.cd had a particularly infamous interview process to get on there
zphang#7252: and you only got invites by already having good numbers on other sites
zphang#7252: so it was super hard to break in to the community
gabriel_syme#3220: this makes total sense really. It's hard to explain to people at times how close we are. People in my industry are like "well the impact of ML in the next 10 years will be minimal" (yes they use ML there instead of AI still). And I keep trying to say that things change radically within 10 weeks, let alone 10 years. But it's hard because I can see it, sort of, but most are not aware of it like you say
Jose-trxr#4270: That sounds like Spain. I have a similar connection here in Spain at a similar price.
Jose-trxr#4270: €75 for symmetric 1Gb connection, landline connection, 2 x 4G lines with 15 GB internet quota and 145 TV channels.
gabriel_syme#3220: In Malaysia I pay around 40 dollars for 300, although just a bit more for 500 (which I think I have now). It doesn't really bother me ever, so I guess for most things it's enough
bmk#1476: I don't really need fast internet since I don't really do anything bandwidth intense
gabriel_syme#3220: I feel what you can do hasn't really caught up yet
gabriel_syme#3220: like I can be online in multiple things on my computer, have netflix on the tv, phones, etc. and it's fine
ersatz#0001: Your neighbor to the north
ersatz#0001: 300 is enough 99.9% of the time for 99.9% of people, with 300 you can literally watch a 4k/60fps Livestream no problem
gabriel_syme#3220: yeah I think so too
bmk#1476: why would you watch 4k/60fps
bmk#1476: i can live with 720p on desktop and 480p on mobile
Awesome_Ruler_007#7922: CV obj detection + classification, fine-tuning on a small dataset. It seems to overfit in the first 50/675 steps or so.
the pretrained model size is 90M - originally designed for a 120k images dataset. Target dataset is 5K images.
What do you think - I doubt that I am loading the checkpoint properly. Even with such a large model, it shouldn't overfit that quickly and loss is wonky.
opinions?
Awesome_Ruler_007#7922: I think I am prolly initializing the model backbone wrong - might try with other combinations
lc#8952: human eye only has a resolution of 60fps
kurumuz#5695: it's actually 420.69fps
kurumuz#5695: i have proof
Zippy#1111: I need fast internet so I can download gpt-j-6b as fast as possible so I can try it out. :Smart:
lc#8952: that's the joke
kurumuz#5695: !
kurumuz#5695: definitely
kurumuz#5695: 120hz was much better compared to 90hz in VR
kurumuz#5695: it really does matter a lot
kurumuz#5695: he's in facebook/oculus now
ari#9020: It's a temporal resolution, not a spatial resolution :MegaNerd:
Jose-trxr#4270: Sounded to me like near π
nev#4905: is it possible to soft prompt tune gpt-j on a v3-8?
kindiana#1016: sure
kindiana#1016: you just need to write the code
kindiana#1016: π
nev#4905: yeah that's what I was asking about
nev#4905: as long as it won't oom
gabriel_syme#3220: and when you do it feel free to share it with me
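For anyone picking this up, a minimal PyTorch sketch of the soft-prompt idea (learned vectors prepended to a frozen model's input embeddings; the names here are illustrative, not from the gpt-j codebase):
```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable prompt vectors prepended to the token embeddings.

    Only these parameters are trained; the language model stays frozen.
    """
    def __init__(self, n_prompt_tokens: int, d_model: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq, d_model)
        batch = token_embeds.shape[0]
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)
```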
eleutherai#7051: Hi @O5, I am a newbie on Discord and I am a journalist (in science). I would like to talk with a cofounder/organizer of EleutherAI about both topics: the AlphaFold2 replication and GPT-J. How could we do that? Can I put my email here?
Daj#7482: Email us at [email protected] and we'll get back to you
alstroemeria313#1694: ...fp16 shouldn't be completely terrible for audio, should it?
alstroemeria313#1694: Like speech, not super hq stuff.
alstroemeria313#1694: Or do I just need to use fp32.
alstroemeria313#1694: (It is for an audio diffusion model)
EricHallahan#1051: I would think it would have more than enough dynamic range.
alstroemeria313#1694: well bf16 diffusion is bad already for images
alstroemeria313#1694: I mean. I should still be getting results even if they have white noise left over
alstroemeria313#1694: Not nothing but noise.
EricHallahan#1051: Are you trying to use bfloat16 or binary16?
alstroemeria313#1694: bfloat16 was bad when i tried it
alstroemeria313#1694: ieee fp16 works
alstroemeria313#1694: for images
EricHallahan#1051: I would use binary16 even if bfloat16 worked.
alstroemeria313#1694: i was trying it on tpus
EricHallahan#1051: Ah
hirsheybar#0066: Hey @alstroemeria313! This is like a week late but just for fun I tried getting https://colab.research.google.com/drive/1javQRTkALBWLFWnx1K4VpRZkWLP3ozhr to run on TPU's to diagnose the slowness
alstroemeria313#1694: oh it's F.interpolate()
alstroemeria313#1694: the nn.Upsamples call it
hirsheybar#0066: yeah :p. If you file a gh issue about it on pt/xla im sure they'll prioritize it
alstroemeria313#1694: ahh
hirsheybar#0066: it turns out that they support upsampling with `scale_factor=1`, but they'll fall back to CPU for anything else (just haven't added full support for it yet)
alstroemeria313#1694: ...1 doesn't upsample?
alstroemeria313#1694: Anyway as a stopgap you can replace the upsample layers with learnable transposed convolutions with stride=2
alstroemeria313#1694: Those are fast on TPU
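For reference, a small PyTorch sketch of that swap; the channel count is an arbitrary example:
```python
import torch
import torch.nn as nn

channels = 64  # arbitrary example channel count

up_slow = nn.Upsample(scale_factor=2)  # the op that was falling back to CPU on XLA
up_fast = nn.ConvTranspose2d(channels, channels, kernel_size=2, stride=2)  # learnable, stride-2

x = torch.randn(1, channels, 16, 16)
print(up_slow(x).shape, up_fast(x).shape)  # both: torch.Size([1, 64, 32, 32])
```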
hirsheybar#0066: and idk how you diagnosed the problem with upsample, but i got it through the metrics report. There's a line when you print it (after running a couple iterations of the notebook) that says `at::upsample_bilinear2d`, which indicates that upsample ran on cpu instead of tpu. If it ran on tpu you'd see `xla::upsample_bilinear2d` in the report instead
alstroemeria313#1694: ahh
alstroemeria313#1694: i haven't figured out how to read the metrics report
alstroemeria313#1694: i replaced things until it went fast
hirsheybar#0066: yeah, it's not 100% user friendly
hirsheybar#0066: the two useful pieces of info i can get out of it quickly is (a) if you're recompiling every time (check the number next to the "Compile" metric) and (b) which ops are falling back to cpu (ops that have an `at::` at the beginning)
hirsheybar#0066: btw how fast did everything run when you used the transposed convolutions?
alstroemeria313#1694: uhh
nshepperd#2316: seems like such a crazy footgun that things will fall back to CPU potentially copying a tonne of stuff without even warning
alstroemeria313#1694: twice as fast as the JAX version
alstroemeria313#1694: pytorch/xla is like that all over the place
alstroemeria313#1694: It usually just doesn't work at all for me
nshepperd#2316: :(
hirsheybar#0066: yeah π¦ i think the tradeoff is supposed to be "stuff will be correct" when you just swap out the device from gpu to xla, at the cost of ending up with perf footguns
alstroemeria313#1694: it should at least warn
hirsheybar#0066: although the xla team is constantly adding lowerings for more ops, so i think the number of perf footguns is pretty consistently going down over time
alstroemeria313#1694: Then you would know what you had to work around
nshepperd#2316: yeah
hirsheybar#0066: actually there's an env var
hirsheybar#0066: `PT_XLA_DEBUG=1`
hirsheybar#0066: that I think will print warnings whenever ops run on cpu
alstroemeria313#1694: ohh
hirsheybar#0066: (actually it might just warn on too many compilations / data transfers to cpu, not 100% sure)
ilovescience#3282: I think I mentioned this earlier to alstro...
ilovescience#3282: Yeah I think alstro was having issues with Upsample and lerp IIRC...
But lerp is available in the master version of pytorch xla...
alstroemeria313#1694: yeah
alstroemeria313#1694: so that just leaves upsample i think
nostalgebraist#3542: i'm trying to run an experiment with `jiant` and it's painful :blobsad:
nostalgebraist#3542: it's like HF squared
zphang#7252: lol just ask me how to do things
zphang#7252: I can at least tell you which parts are worth using and which are horribly maintained
Awesome_Ruler_007#7922: kinda late but heard the CLIP + surveillance thing?
Awesome_Ruler_007#7922: OA seems sus AF
Awesome_Ruler_007#7922: computing vector distance doesn't exactly seem like a breakthrough idea they couldn't do. All that "it doesn't work well" was BS?
nostalgebraist#3542: thanks -- i think i've mostly figured out how to do what i want, it's just tough to figure out where any given thing is configured
nostalgebraist#3542: one question -- is it possible to configure weight decay?
zphang#7252: lol ok, I apologize for its current state, next time if you need to do a thing just ping me and I can point you in the right direction
nostalgebraist#3542: another thing i'd like is to be able to express "tasks" and "heads" separately
nostalgebraist#3542: my experiment involves trying a different type of head, and checking whether it does better/worse on benchmarks
nostalgebraist#3542: as far as i can see, the only way to do that is: make a copy/subclass of every task i want to do, with a new "task type", and then register my head with that "task type"
zphang#7252: I don't think it's exposed via any config, but it's configured here: https://github.com/nyu-mll/jiant/blob/51e9be2a8ed8589e884ea927e348df8342c40fcf/jiant/shared/model_setup.py#L50
zphang#7252: yea jiant basically comes from a time where there were weird implementations for each task
nostalgebraist#3542: yeah, that's what i thought -- hardcoded to 0.01?
zphang#7252: what might be easier is to just implement a "generic" version of each task format, and then feed in different datasets for each task you want to try via the task config
nostalgebraist#3542: 3rd thing... i notice that the classification heads add an extra MLP layer with tanh activation, before the final softmax layer. this is not what the bert or roberta papers do, so why does jiant do it?
zphang#7252: That's based on the initial hf/transformer implementations
zphang#7252: e.g. https://github.com/huggingface/transformers/blob/bcc3f7b6560c1ed427f051107c7755956a27a9f2/src/transformers/models/roberta/modeling_roberta.py#L1433
nostalgebraist#3542: oh i forgot HF did that... huh it's still there in the latest version. wild
zphang#7252: deep HF/T lore lol
nostalgebraist#3542: (fwiw, my real reason for using jiant is just that it's the official runner for glue/superglue.
if there were an official API for just the glue/superglue tasks, where you pass in an arbitrary train function or something, i'd just use that)
zphang#7252: lol strictly speaking the official runner is the legacy version
zphang#7252: https://github.com/nyu-mll/jiant-v1-legacy
which is even less maintained
zphang#7252: Does HF not have a full implementation for all the tasks yet?
nostalgebraist#3542: i did look at that one, saw this, and noped out https://cdn.discordapp.com/attachments/729741769738158194/893597864293445702/Screen_Shot_2021-10-01_at_1.38.27_PM.png
nostalgebraist#3542: HF probably does? but i naively went to gluebenchmark.com and followed the directions, lol
zphang#7252: idk if they have it for the super weird tasks like record
zphang#7252: I can see them deciding it is not worth the effort lol
Kazumi#1297: my experience has been this https://xkcd.com/1742/
nostalgebraist#3542: "look at the examples" is my biggest red flag
Kazumi#1297: I always end up needing to dissect the inner workings and make my own whenever the flags or parameters goes out of screen
EricHallahan#1051: I want to put SourceForge lower on the scale lol but you would also need to move everything below it down too.
nostalgebraist#3542: i don't know if this is a hot take or not, but i would prefer if every python repo on github were written like a library
nostalgebraist#3542: no scripts, you just import their stuff
EricHallahan#1051: I hold the OpenAI CLIP repo in high regard because it pulls that off perfectly.
EricHallahan#1051: I would like to get GPT-NeoX closer to that state soon.
Kazumi#1297: huggingface kind of does it if you don't plan on having custom models or datasets
nostalgebraist#3542: i give huggingface a lot of crap, but they did make bert into a library, where previously bert had only been available in one of those really bad script repos
nostalgebraist#3542: i remember in early 2019 trying to figure out how to finetune bert by reverse-engineering this script https://github.com/google-research/bert/blob/master/run_squad.py
nostalgebraist#3542: which was literally what they recommended you do
EricHallahan#1051: I think a lot of us give them crap, but it is hard to ignore that Transformers does things right in places.
Kazumi#1297: pipelines just works out the box, which I found impressive with how many things it can do
<https://huggingface.co/transformers/main_classes/pipelines.html>
gollark#3909: My Discord bot uses it for the slow and dubiously useful Wikipedia QA feature I added.
bmk#1476: eval harness almost does this (albeit mostly undocumented): the main.py is actually just a thin wrapper around a particular library call that does 99% of the work (all main.py does is parse command line args and write stuff to an output file essentially)
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/893651177491791902/unknown.png
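That pattern looks roughly like the sketch below; `mylib.run_everything` is a made-up stand-in for illustration, not eval harness's real entry point:
```python
import argparse
import json

from mylib import run_everything  # hypothetical library call that does 99% of the work

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", required=True)
    parser.add_argument("--output", default="results.json")
    args = parser.parse_args()
    results = run_everything(model=args.model)  # all the logic lives in the library
    with open(args.output, "w") as f:
        json.dump(results, f)  # main() only parses args and writes output

if __name__ == "__main__":
    main()
```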
Zippy#1111: So I made a super simple website thing for trying out the gpt-j-6b without any code required... https://cdn.discordapp.com/attachments/729741769738158194/893676124335398982/unknown.png
Zippy#1111: response is in italics
Zippy#1111: Not sure if I would actually ever put it online since that would be pretty expensive :overfloosh:
CarsonPoole#0640: random theoretical question--what is the difference between using and not using bias with a linear layer?
CarsonPoole#0640: is there some kind of meaningful difference from an intellectual perspective?
CarsonPoole#0640: they seem to be used randomly throughout different repos but I'd assume there is some technical reason
bmk#1476: well, bias is useless if you have a bn right after lol
kindiana#1016: also if you are doing something like attention
kindiana#1016: where the constants cancel out
Louis#0144: GUAC
Louis#0144: hi bb
guac#4716: it's the difference between learning affine transforms and linear transforms lel
Louis#0144: I'm leaving NY
EricHallahan#1051: Stella would argue that a truly linear layer has no biases, because it is no longer a linear projection.
guac#4716: NOOOO
guac#4716: you southern boy π¦
EricHallahan#1051: #off-topic
guac#4716: gestapo eric back at it
CarsonPoole#0640: can you elaborate on this
CarsonPoole#0640: like why would constants cancel when doing QK^TV
CarsonPoole#0640: and does this include layernorm or just batch norm
bmk#1476: don't think it applies for LN
CarsonPoole#0640: what does this amount to in more pragmatic terms
guac#4716: idk i'm a 3d dweller hehe
CarsonPoole#0640: also is there a situation where doing one or the other could be a problem if it doesn't align with some goal?
CarsonPoole#0640: or are they generally interchangeable
nshepperd#2316: append a 1 to your input vector, then there is no difference :thinkies:
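A quick numerical check of that trick: fold the bias in as an extra column of the weight matrix and append a 1 to the input, and the affine map becomes a purely linear one.
```python
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x = rng.normal(size=4)

W_aug = np.hstack([W, b[:, None]])  # bias becomes the last column of W
x_aug = np.append(x, 1.0)           # append a 1 to the input
assert np.allclose(W @ x + b, W_aug @ x_aug)
```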
EricHallahan#1051: ~~With attention it is redundant to add biases to the query-key projections, as it ends up being canceled by the softmax that follows.~~
CarsonPoole#0640: ah makes sense
bmk#1476: does it tho
CarsonPoole#0640: though could the value have a bias then?
EricHallahan#1051: Oh wait is that wrong? :thonk:
CarsonPoole#0640: well normalizing the QK would be invariant to _some_ kind of transformation
CarsonPoole#0640: softmax is invariant to subtracting the maximum
bmk#1476: I think the bias cancels out on the query but not the key
bmk#1476: or possibly the other way around not sure
CarsonPoole#0640: but definitely the value could have a bias then
EricHallahan#1051: Please note that some bias terms can be omitted for training a new model. Like the bias term $b^K_i$ only adds the same value for all positions that have been looked at, and $b^V_i$ contributes constant information, independent of query/key/value, to the output, because the sum of all elements in $\mathrm{Prob}_i$'s last dimension is always one.
TeXit#0796: **Eric Hallahan** https://cdn.discordapp.com/attachments/729741769738158194/893916596958015558/304058360893014018.png
EricHallahan#1051: That's from https://arxiv.org/abs/2105.04779/
CRG#8707: Hm, yeah I think it centers the bias.
bmk#1476: the mean is applied across a different dimension tho
bmk#1476: bn exactly cancels out because the mean is taken over the exact same dimension the bias is applied along
bmk#1476: LN is applied across the channels instead so yeah it basically makes the bias centered
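A small check of both claims, using simplified norms (no epsilon, no learned affine):
```python
import torch

x = torch.randn(8, 4)  # (batch, channels)
b = torch.randn(4)     # a per-channel bias added before the norm

def batch_norm(x):  # normalizes over the batch dim, per channel
    return (x - x.mean(0)) / x.std(0)

def layer_norm(x):  # normalizes over the channel dim, per sample
    return (x - x.mean(1, keepdim=True)) / x.std(1, keepdim=True)

# BN cancels the bias exactly: the mean is taken along the same dim the bias is constant along.
assert torch.allclose(batch_norm(x + b), batch_norm(x), atol=1e-4)
# LN only centers it: adding b is equivalent to adding b - b.mean().
assert torch.allclose(layer_norm(x + b), layer_norm(x + (b - b.mean())), atol=1e-4)
```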
Neural#2367: Is there a Codex Hugging Face alternative?
Orz#3023: gpt-genji exists
Orz#3023: https://huggingface.co/NovelAI/genji-python-6B
Orz#3023: also Code.AI seem to be working on the same
But they haven't yet released a code based model on gpt-j
Sid#2121: Only the key bias is redundant https://cdn.discordapp.com/attachments/729741769738158194/893931878585880616/Screenshot_from_2021-10-02_20-44-41.png
EricHallahan#1051: Yeah that's why I gave it a strikethrough.
Sid#2121: from https://arxiv.org/pdf/2006.16362.pdf? btw
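A quick numpy check of that point: adding a constant vector to every key shifts each row of the logits by q_i . c, a per-row constant, and softmax is invariant to per-row shifts (single-head attention, no 1/sqrt(d) scaling, for simplicity):
```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.normal(size=(5, 8))  # queries
k = rng.normal(size=(5, 8))  # keys
c = rng.normal(size=8)       # a key bias, shared across positions

def softmax(z):
    z = z - z.max(-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

assert np.allclose(softmax(q @ k.T), softmax(q @ (k + c).T))
# the same does not hold for a query bias: (q + c) @ k.T shifts logits by c . k_j,
# which varies across keys, so it changes the attention weights
```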
Zippy#1111: Strange, the model gives a warning saying it's a gpt-neo model.
EricHallahan#1051: Because it probably is one.
Zippy#1111: Ahh yeah it is. It's only half the size of the gpt-j.
EricHallahan#1051: Finetune's port originally was a patch of the GPTNeo model.
Zippy#1111: I wonder if they messed up, or lied :angy:
kurumuz#5695: hm?
kurumuz#5695: it is not half of the size
kurumuz#5695: it's 6B
kurumuz#5695: read the description please
Zippy#1111: I did read the description.. it's only 11gb, the 6b model is 22gb
kurumuz#5695: :Facepalm:
kurumuz#5695: we point out that the model is FP16 https://cdn.discordapp.com/attachments/729741769738158194/893944962461351966/unknown.png
Zippy#1111: aka it's a gpt-neo-2.7
kurumuz#5695: it's not.
kurumuz#5695: I trained the model
kurumuz#5695: lmao
Zippy#1111: Ah okay..
Zippy#1111: I see now my bad :overfloosh:
Zippy#1111: Can you forgive me :hawaiicry:
kurumuz#5695: Sure, the story is that GPT-J didn't get merged into the official huggingface repo for months, hence that page actually uses our fork that has GPT-J support.
Zippy#1111: Ah ok cool. Is it built using different model params than the merged gpt-j model? because replacing the GPTJForCausalLM with your model results in a pretty large list of uninitialized layers.
kurumuz#5695: param names are different
AdamScherlis#1848: Hi everyone, we've improved this thing a bunch and we want more data: https://discord.com/channels/729741769192767510/730451873613611079/891129710359216148
(TL;DR help us find adversarial examples for a language model, we'll pay you $30/hr)
AdamScherlis#1848: Discussion should go in #alignment-general
Parker#3197: the invite link is broken
AdamScherlis#1848: oh whoops
AdamScherlis#1848: Fixed, thank you!
gabriel_syme#3220: Interesting thread, although many confused the (positive / negative) valence https://twitter.com/jachiam0/status/1444236743960457219?s=09
gabriel_syme#3220: Huh, here I thought KGs were a meme
https://aleph-alpha.de/techblog/95_the_end_of_the_era_imagenet
gabriel_syme#3220: This is cool, I'm curious what the multimodal model is (assuming the language part comes from GPT3)
Louis#0144: @Daj wtf you're KG pilled now???
cfoster0#4356: KGs are still a meme :berk:
cfoster0#4356: Also this is a pretty bold title!
gollark#3909: > "Multimodality: attention is all you need is all we needed"
Perhaps "all you need" has gone too far.
Daj#7482: Uh I have thoughts about that post I'd rather not air publicly
Sid#2121: this is my work lol (I didn't write the blog post - we hope to have a technical writeup out soon)
Sid#2121: but it has nothing to do with KGs, idk where you got that from
gabriel_syme#3220: Cool! Looks really interesting, can't wait for the write up :) I was partly shitposting tbh, going off the hashtags at the end
Sid#2121: More scalepilled than KG pilled, sorry @Louis
gabriel_syme#3220: Lol I now realise that I read that wrong. Sorry in a literal blackout rn
Louis#0144: Lmaoooo
Kharr#7888: Every time I see something like this in a blog post I wonder "what happens in production when users throw their normal random garbage at it" :berk: It's pretty amusing how frequently even GPT3 breaks and has issues with repetition/incoherence with real user input
Sid#2121: I can test inputs out for you if you want
Kharr#7888: Does the model specialize in visual QA only?
Sid#2121: nope, wasn't even trained on it. the training task is captioning, but it happens to generalize to visual QA pretty well
Sid#2121: amongst other stuff
Kharr#7888: Very cool, mind if I ask you a few questions over DM?
Sid#2121: sure!
Louis#0144: It's ok
Louis#0144: I've abandoned KGs
Louis#0144: Lol
Louis#0144: I prefer KBs now
Louis#0144: Which is KGs where the structure is latent
Louis#0144: So retrieval is an example of KBs
spirit-from-germany#1488: can anyone help me on how to get ssh access to a TFRC TPU VM ? π
spirit-from-germany#1488: i can use it in the cloud console, but i cant find where i can get a PW or put my public key
Orz#3023: Have you generated your key?
Orz#3023: (something along the lines of ssh_rsa)
spirit-from-germany#1488: i have a key
nshepperd#2316: you paste your pubkey here https://cdn.discordapp.com/attachments/729741769738158194/894253451574386738/2021-10-04-030224_3840x2160_scrot.png
nshepperd#2316: under the "SSH Keys" tab
nshepperd#2316: and then you will have ssh access to the tpu, under the username your key was labeled with
spirit-from-germany#1488: cool, thx!
smallanimalfriend#4355: Really cool examples! I'd love to see a whole bunch more - such a tease only putting 3 examples. Also, is it text gen only, or can it use text+images as context to generate images?
smallanimalfriend#4355: How can I hear these thoughts? Or a publicly-airable version/subset?
Daj#7482: lol I made it sound more dramatic than it is. It's just a tad cringe innit? Corporate be corporate
smallanimalfriend#4355: Yeah clickbaity, but kinda on the right track I'd have thought
kurumuz#5695: I don't see how it kills the imagenet or whatever, i dont remember the exact title
kurumuz#5695: ofc the work itself is really cool
smallanimalfriend#4355: In terms of 20k classes vs the freeform "classes" of these large multi-modal models like "reflection of a pedestrian in a window front", was their argument. Doesn't seem tooo controversial to say it's the beginning of the end for imagenet's relevance (not to say it'll ever be completely irrelevant). The CLIP-based object detection/recognition demonstrations did it for me
smallanimalfriend#4355: Narrow + supervised classifiers will of course always have their place - for performance reasons or very domain-specific applications, or whatever
zphang#7252: probably like most NLP datasets it'll shift from "this is a good training dataset for training all the models" to "this is a good dataset for evaluating methods because it's well understood, well benchmarked on and not too trivial"
Artia#1759: GPT-NeoX when
Kia#2550: No one knows
Kia#2550: Also people are busy with other things, and the chip shortage is punching the hell out of this project
Daj#7482: _taps sign recursively_ https://cdn.discordapp.com/attachments/729741769738158194/894545809356517386/Screenshot_2021-10-04-13-25-15-888.jpeg
Daj#7482: Once the hardware is working (which we've made a ton of progress on recently), we'll hopefully get to training a larger model asap
gabriel_syme#3220: Somehow I feel he thinks the same about the impending AI doom. Don't know, a bit refreshing to see something like this, like there is some fire left https://twitter.com/sama/status/1444690487110082565?s=09
Chaminou#1384: *prompt "engineering" goes brrrrrrrrrr* https://cdn.discordapp.com/attachments/729741769738158194/894575748373942302/unknown.png
gabriel_syme#3220: Too many discussions of the future revolve around the inevitable, which to my eyes feels like it is creating inaction.
EricHallahan#1051: Document what you find in #prompting!
Chaminou#1384: ^^'
m_ke#7913: That's because the kids are a lot smarter than him and understand that they don't stand a chance against economic incentives, while he's using fear of doom to raise billions of dollars from gullible investors
kurumuz#5695: :thonk:
Daj#7482: lol
kurumuz#5695: Sam is actually building a lot of useful tools and he doesn't need the fear of doom to get investors. I would be wary of people using the fear of doom to scare young minds away from starting to build, rather than of Sam.
Zippy#1111: So I get issues with the python-6b model using the huggingface lib about uninitialized layers.. Do I have to use the EleutherAI fork of transformers to run that? Or is there some possible fix now that transformers support gptj?
Zippy#1111: Or do I need to find the list of layer names for the gptj model, and then rename the layers based on the transformers gptj standard.
Zippy#1111: I mean I can also just try to figure it out on my own.
kurumuz#5695: Also, economic incentives do change, which should be an obvious point if you look at like the history lol
m_ke#7913: OpenAI did some interesting engineering work but they raised billions of dollars on the false promise of AGI
m_ke#7913: you can't build your way out of some problems
kurumuz#5695: why not
m_ke#7913: when it comes to climate change the incentives will change when they start impacting the bottom lines of public companies
Daj#7482: Such as by making solar cheaper than fossil fuels
m_ke#7913: which will be way too late for most people living near the equator
Daj#7482: Through technology
kurumuz#5695: or creating customer demand for sustainable tech
Daj#7482: You know
Daj#7482: Building things
kurumuz#5695: you can check battery production increase over years. they have their own moore's law
kurumuz#5695: lol
kurumuz#5695: which is you know, mostly achieved by ahem one company
Daj#7482: And not submitting to cringe loserthink
kurumuz#5695: so yes, even just one entity can cause a huge change
kurumuz#5695: How do we know we will be unsuccessful to start with? You will never know until you try, and even if you try it might take more than just one attempt.
kurumuz#5695: Is this supposed to demotivate me? It does the opposite lol
m_ke#7913: you should keep working
Daj#7482: Modern society has allowed entire classes of people that have never done a single productive thing in their lives to engage in public histrionics because deep down they know the low status nerd engineers that have solved ~all previous problems society has faced will solve the problem anyways but they can claim credit
Daj#7482: Very common status game in elite circles
Daj#7482: A type of luxury belief
kurumuz#5695: I see this happen with governments as well. It's easy to ban things after some company creates a viable alternative and claim credit for saving the world. This is literally happening right now with electric cars
Daj#7482: In science too, the person that gets credit for some great idea is usually not the person that came up with it, but the first socially respectable institutionally endorsed high status person to repeat them
Daj#7482: See also e.g. how autistic online Tumblr and 4chan culture is now just unironically how elite politics operates
Daj#7482: Nerds figure things out and status grifters parasitize off it
Daj#7482: Solar, batteries etc will fix climate and all the green religion "activists" will claim credit
kurumuz#5695: Hah, in reality they might have caused this whole situation
kurumuz#5695: but idk much about who really opposed nuclear power plants and pretty much shut them down.
Daj#7482: Oh I place a ton of blame on climate "activists"
Daj#7482: That's why I prefer calling them green religion
Daj#7482: Because they don't actually care about maximizing their positive impact on the environment
Daj#7482: They just want to _look like_ they are good people caring about the environment because that's a trendy thing
kurumuz#5695: I don't exactly understand their motivation
Daj#7482: You can't rationally both be an environmentalist and oppose nuclear
bmk#1476: but *think about the birds*
Daj#7482: At least not without a lot of further epistemic justification
Daj#7482: Oh god this reminds me of a tweet I saw earlier today
Daj#7482: It's wildly off topic but man
bmk#1476: i find it hilarious that people actually complain about wind turbines
bmk#1476: when i saw wind turbines everywhere in germany i was like "wow they look super cool i wish we had more of those in canada"
Daj#7482: It's perfectly obvious if you just realize that they don't care about the environment
Daj#7482: Simulacrum levels go brrr
bmk#1476: i hear about people complaining about them being ugly, the noise, and the birds, but they're so cool and futuristic, they're dead silent, and the birds objection is just ridiculous
Daj#7482: But they don't fit the A E S T H E T I C of the green religiom
Daj#7482: Which is fundamentally a offshoot of left wing(ish) reactionism
Kia#2550: Wait, most people in Germany don't agree with wind energy, and the green party agrees that wind turbines don't fit their aesthetic :surprise:
Kia#2550: Hmm I think I heard this from somewhere
StellaAthena#3530: @Slack Ask the people who maintain the repo. We do not do so.
Slack#2746: ok sorry
Daj#7482: Yeah it's :sadge:
bmk#1476: meanwhile: https://cdn.discordapp.com/attachments/729741769738158194/894616734999400469/500px-Electricity_in_France.png
bmk#1476: france baise ouais (fuck yeah, France)
Kia#2550: France is doing its best :thinkies:
greencube#6725: facebook doesnt even ping lool
greencube#6725: i wonder what happened to their dns records
bmk#1476: someone dun goofed
greencube#6725: its just facebook's websites
bmk#1476: a bunch of people's pagers are going off right now lol
bmk#1476: press F for fb engineers
greencube#6725: F
Parker#3197: in 2020, Facebook received 84.169 B in advertising revenue. that is $9,608,333/hour, or $160,138/minute that is being lost by downtime I think
Parker#3197: https://investor.fb.com/investor-news/press-release-details/2021/Facebook-Reports-Fourth-Quarter-and-Full-Year-2020-Results/default.aspx
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/894625905115283526/unknown.png
greencube#6725: also think of the huge money that businesses lost bc of the outage
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/894626588476473384/unknown.png
greencube#6725: wow, stock 15% down
Parker#3197: do they buy it back when the service returns online?
greencube#6725: maybe
Parker#3197: lol would be funny if they calculated the estimated amount lost and bought back accordingly
greencube#6725: oh wait, not 15% im dumb
greencube#6725: 1%
alstroemeria313#1694: Looks like 9.4% since yesterday's close
alstroemeria313#1694: But it was down at today's open
alstroemeria313#1694: And that was before the outage right?
bmk#1476: the drop is mostly unrelated to the downtime actually
bmk#1476: it's something about a leak
bmk#1476: not sure of the details
Parker#3197: yeah, that's basically what I thought too
Parker#3197: though, if it was down for an extended period of time, it probably would show up in the market some
greencube#6725: weirdly enough instagram.com can be pinged unlike others
alstroemeria313#1694: <https://pytorch.org> is still up
Chlorokin#6581: To be fair, actually optimizing for this is worse than pretending to optimize for this.
bmk#1476: only if you already buy into ai risk though
Chlorokin#6581: New EA area, spread a bunch of inextricably red-coded anti-nuclear conspiracy theories.
bmk#1476: i have a feeling that would not turn out well
Daj#7482: I do not endorse the use of Dark Side Methods by default
alstroemeria313#1694: hey what tpu software thing do we want to make a tpu vm with for pytorch/xla?
alstroemeria313#1694: like the runtime
bmk#1476: what are the options?
bmk#1476: if there's nothing that sounds vaguely pytorchy then go with tf2.4 to try I guess
bmk#1476: or whatever the current tf2 version is
alstroemeria313#1694: i logged in and got really confused
alstroemeria313#1694: don't know how to install packages like tmux
alstroemeria313#1694: pytorch, jax, etc. seem not installed
alstroemeria313#1694: trying to follow the pytorch/xla tpu vm quickstart guide gave me `Error saving credentials: mkdir /root/.docker: read-only file system` when i `sudo bash /var/scripts/docker-login.sh`
bmk#1476: weird
alstroemeria313#1694: I did not make the VM myself so I am not entirely sure what happened
bmk#1476: the tpu vms i've used are all normal ubuntu installs that don't have weirdness
alstroemeria313#1694: yeah this was chromium os
bmk#1476: o.O
alstroemeria313#1694: ikr
bmk#1476: are you able to create your own tpu vm
alstroemeria313#1694: Not really, no
alstroemeria313#1694: My business partner got into TRC and I haven't yet.
alstroemeria313#1694: So we are just poking at it trying to figure it out
bmk#1476: well uh this is the command to use
```
gcloud alpha compute tpus tpu-vm create tpu-name \
  --zone=zone \
  --accelerator-type=v3-8 \
  --version=v2-alpha
```
bmk#1476: obviously replace tpu-name and zone
alstroemeria313#1694: This was a v2-8
bmk#1476: ?
bmk#1476: oh
alstroemeria313#1694: Bc he got an error trying to make a v3-8
alstroemeria313#1694: But it should be the same right.
bmk#1476: what kind of error, no capacity/quota?
alstroemeria313#1694: > Fails with 'unknown error'
bmk#1476: o.O
bmk#1476: wtf
bmk#1476: im going to try to make a v2-8 to take a look
bmk#1476: shouldnt be different
alstroemeria313#1694: ty :)
bmk#1476: have you tried the v3-8 thing again
bmk#1476: sometimes things just work if you do it again for no good reason
bmk#1476: nvm i dont have quota for v2s
alstroemeria313#1694: @bmk we figured it out
alstroemeria313#1694: he didn't use v2-alpha but something else weird and wrong
bmk#1476: lol
bmk#1476: what was the something else out of curiosity
alstroemeria313#1694: v2-nightly-cos
alstroemeria313#1694: I think we thought it would get us PyTorch nightly ^^;;
bmk#1476: lol
alstroemeria313#1694: ok v2-alpha got me pytorch/xla and i made some tensors on the tpu cores and moved them around between cores
bmk#1476: nice
alstroemeria313#1694: i upgraded the pytorch to 1.9.0 and am now trying a training script
alstroemeria313#1694: ugh
alstroemeria313#1694: ```Exception in device=TPU:0: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method```
alstroemeria313#1694: Except I am on TPU and have no idea why it is even trying to initialize CUDA in the first place.
alstroemeria313#1694: oh the fork_rng() was initializing cuda for some reason
alstroemeria313#1694: now it's hanging at the end of sampling
alstroemeria313#1694: uhh
alstroemeria313#1694: Did I break the TPU
alstroemeria313#1694: `pytorch_lightning.utilities.exceptions.MisconfigurationException: No TPU devices were found.`
alstroemeria313#1694: ```RuntimeError: tensorflow/compiler/xla/xla_client/xrt_local_service.cc:56 : Check failed: tensorflow::NewServer(server_def, &server_) == ::tensorflow::Status::OK() (Invalid argument: Invalid fd: -1; Couldn't open device: /dev/accel0 (Operation not permitted); Unable to create Node RegisterInterface for node 0, config: device_path: "/dev/accel0" mode: KERNEL debug_data_directory: "" dump_anomalies_only: true crash_in_debug_dump: false allow_core_dump: true; could not create driver instance vs. OK)
```
alstroemeria313#1694: oh there were just processes still using them
alstroemeria313#1694: that i needed to kill
alstroemeria313#1694: so it's just sitting around at the end of the sampling loop and not saving the image, idk why
alstroemeria313#1694: yeah doing the same again now
alstroemeria313#1694: IDK how to debug anything here
alstroemeria313#1694: How do I know what's going on.
EricHallahan#1051: Have you tried PDB?
alstroemeria313#1694: ...
alstroemeria313#1694: To debug XLA?
EricHallahan#1051: Do we know if it is an XLA issue?
EricHallahan#1051: Rather than some other problem with PyTorch/XLA?
alstroemeria313#1694: idk
alstroemeria313#1694: If I ctrl-c the process while it's hung
alstroemeria313#1694: Then I have to manually kill the Python processes to free up the TPU again.
alstroemeria313#1694: This is not a thing pdb will help with
EricHallahan#1051: Well there is the PyTorch/XLA troubleshooting guide which documents retrieving a stack trace.
https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#retrieving-stack-traces
alstroemeria313#1694: ah ty :)
EricHallahan#1051: > We don't expect users to use tools in this section to debug their models. But we might ask for them when you submit a bug report since they provide additional information that metrics report doesn't have.
alstroemeria313#1694: If this persists I will give up and try the JAX version again.
alstroemeria313#1694: Since I do not have the time to wait for the PyTorch/XLA people to put forth some modicum of effort into making their thing work.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/894670230498971728/2011-03-16_two_roads.png
alstroemeria313#1694: I would be using JAX which is the new road lol
alstroemeria313#1694: Yeah it's not working.
alstroemeria313#1694: I can post the script if anyone here feels like debugging PyTorch/XLA.
alstroemeria313#1694: Actually I'm going to try nightly first
alstroemeria313#1694: How do you even install nightly.
fengoku#9000: hey this is a last minute request but would anybody possibly be interested in reviewing for our controllable generation neurips workshop? https://ctrlgenworkshop.github.io/
you should get around 2-3 papers that u have 2 weeks to review... we are a bit short on reviewers. in particular we are looking for #multimodal reviewers
bmk#1476: hm i have done stuff with controllable language generation but not vision
bmk#1476: if youre fine with that then sign me up i guess
alstroemeria313#1694: Anyone have any idea?
alstroemeria313#1694: On a TPU VM?
alstroemeria313#1694: These are the install instructions for 1.9. https://cdn.discordapp.com/attachments/729741769738158194/894673527293231174/Screen_Shot_2021-10-04_at_12.52.58_PM.png
alstroemeria313#1694: But I don't know how to like... list what is available
fengoku#9000: just DMed you
StellaAthena#3530: @alstroemeria313 @BoneAmputee @MicPie @gabriel_syme might be interested
fengoku#9000: great! if any of you guys are interested please DM me here on discord with your email to invite you to CMT. btw you will be asked to choose a main category like NLP, computer vision, multimodal, ... right now we are mainly lacking multimodal so if you have some multimodal experience or feel somewhat comfortable reviewing multimodal papers, would prefer if you choose that option over NLP (but entirely up to you since we also want more NLP reviewers lol)
EricHallahan#1051: This help?
https://github.com/pytorch/xla#-consume-prebuilt-docker-images
alstroemeria313#1694: nope.
alstroemeria313#1694: Those are TPU node images.
alstroemeria313#1694: I need TPU VM.
EricHallahan#1051: lol, try `https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-nightly-cp38-cp38-linux_x86_64.whl`
alstroemeria313#1694: oh ty
alstroemeria313#1694: How'd you find that
EricHallahan#1051: Nowhere. I took a guess and it returned a file. :berk:
alstroemeria313#1694: ahah
EricHallahan#1051: Of course also install PyTorch nightly as well.
alstroemeria313#1694: ```In [3]: import torch_xla.core.xla_model as xm
2021-10-04 20:14:46.487153: F ./tensorflow/core/tpu/tpu_executor_init_fns.inc:110] TpuTransferManager_ReadDynamicShapes not available in this library.
Aborted (core dumped)
```
alstroemeria313#1694: I think my libtpu is broken
alstroemeria313#1694: Or too old
chilli#5665: you could ping @hirsheybar about this - he works on PyTorch/XLA related things
chilli#5665: and FB employees have plenty of free time today
alstroemeria313#1694: i am honestly not going to spend much longer on this.
chilli#5665: haha, that's fair
EricHallahan#1051: What is the chance that `gcr.io/cloud-tpu-v2-images/libtpu:pytorch-nightly` exists?
gabriel_syme#3220: I'm up, way too early. What do you need?
EricHallahan#1051: She's trying to get PyTorch/XLA nightly running on a TPU VM.
EricHallahan#1051: But of course none of this is documented.
gabriel_syme#3220: Oh I see
gabriel_syme#3220: I can help with trying on a VM if that is an issue, let me know alstro
EricHallahan#1051: She has one.
gabriel_syme#3220: Oh ok then, nice!
gabriel_syme#3220: Did TRC finally come through
alstroemeria313#1694: yeah my business partner got in
alstroemeria313#1694: and worked out his issues
alstroemeria313#1694: and i have a v3-8 tpu vm
alstroemeria313#1694: i have given up on pytorch/xla for now, it's just not worth it
alstroemeria313#1694: i have a small jax diffusion model training rn on one core of the v3-8
alstroemeria313#1694: bc i still don't know how to use pmap
alstroemeria313#1694: but the code i do have is actually working.
EricHallahan#1051: Is this the same code you shared with us before?
alstroemeria313#1694: yeah it's the jax_diffusion_test.ipynb
alstroemeria313#1694: copypasted into a .py
alstroemeria313#1694: i'm getting 9 it/s
alstroemeria313#1694: with batch size 200
alstroemeria313#1694: which strikes me as slow
alstroemeria313#1694: I need to like... add all the comments I put in for the public PyTorch version on Colab
alstroemeria313#1694: After I get it working better that is.
gabriel_syme#3220: Damn, if I wasn't broken into a thousand pieces I'd help. This really feels like a great chance to learn jax lol
alstroemeria313#1694: how do you like... encapsulate the state of a model
alstroemeria313#1694: in a class or something
gabriel_syme#3220: In flax you use smth like train state
alstroemeria313#1694: if everything has to be functional and return everything it modifies
gabriel_syme#3220: It's a class
gabriel_syme#3220: (totally novice but I like flax)
alstroemeria313#1694: i need to write dropout2d for haiku
gabriel_syme#3220: so in flax, at least the only code I've been using, you'd do something like this:
`state = train_state.TrainState.create(apply_fn=model.__call__, params=model.params, tx=adamw)` with adamw being the optimizer coming from optax
gabriel_syme#3220: you can then just create train / test functions with that state and pmap them I guess
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/894688955524599850/demo_00021-3.png
gabriel_syme#3220: like I said I'm a noob in this and I hope to learn more but flax feels... readable to me
gabriel_syme#3220: hey nice!
alstroemeria313#1694: now i need these downsampling and upsampling ops
alstroemeria313#1694: ok i swapped them out for faster things and i am now getting 33 it/s during training rather than 9 it/s
alstroemeria313#1694: so like
alstroemeria313#1694: How do you pmap
alstroemeria313#1694: This is on one TPU core.
alstroemeria313#1694: We want to train big models
alstroemeria313#1694: also how do you use dropout with haiku.
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/894695971898937354/demo_00057.png
gabriel_syme#3220: in my case, there's a train and a test function that calculates grads and loss based on the state above
gabriel_syme#3220: and then you do smth like `parallel_train_step = jax.pmap(train_step, "batch")`
gabriel_syme#3220: hope that is relevant here, heh
gabriel_syme#3220: and you also replicate the state across cores
alstroemeria313#1694: ah
alstroemeria313#1694: hm
alstroemeria313#1694: i figured out dropout btw
gabriel_syme#3220: `state = flax.jax_utils.replicate(state)`
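Assembled, the pattern described above looks roughly like this sketch; it assumes a flax.linen `model`, initialized `params`, batches shaped `{'x': ..., 'y': ...}`, and an optax optimizer `tx`, none of which are shown in the chat:
```python
import jax
import jax.numpy as jnp
from flax import jax_utils
from flax.training import train_state

state = train_state.TrainState.create(apply_fn=model.apply, params=params, tx=tx)

def train_step(state, batch):
    def loss_fn(p):
        preds = state.apply_fn({'params': p}, batch['x'])
        return jnp.mean((preds - batch['y']) ** 2)
    loss, grads = jax.value_and_grad(loss_fn)(state.params)
    grads = jax.lax.pmean(grads, axis_name='batch')  # average grads across devices
    return state.apply_gradients(grads=grads), loss

parallel_train_step = jax.pmap(train_step, axis_name='batch')
state = jax_utils.replicate(state)  # copy the train state to every device
# each batch then needs a leading device axis of size jax.local_device_count()
```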
gabriel_syme#3220: nice
gabriel_syme#3220: idk, flax feels nice but everyone in here is using haiku
gabriel_syme#3220: so there must be smth about it, maybe it's because it's more low level?
gabriel_syme#3220: I should have said 'seem to be using' Eric, but from discussions I've seen I'd say it's the go to framework?
gabriel_syme#3220: might be due to how experienced this group is tbh
alstroemeria313#1694: and implemented dropout2d
EricHallahan#1051: By definition if you are using Flax then not everyone here is using Haiku.
alstroemeria313#1694: thankfully, like, i understand DL well enough to write a training loop from scratch
gabriel_syme#3220: haha lol, I don't include myself in that group given how superficially I'm using it
gabriel_syme#3220: maybe you can share it with the world if haiku never had it before?
alstroemeria313#1694: ```python
import haiku as hk
import jax

class Dropout2d(hk.Module):
    def __init__(self, rate=0.5, name=None):
        super().__init__(name=name)
        self.rate = rate

    def __call__(self, x, enabled=True):
        if not enabled:
            return x  # eval mode: identity
        key = hk.next_rng_key()
        # one Bernoulli mask entry per (batch, channel), broadcast over H and W,
        # so entire feature maps are dropped (NCHW layout)
        p = jax.random.bernoulli(key, 1.0 - self.rate, shape=x.shape[:2])[..., None, None]
        return x * p / (1.0 - self.rate)  # rescale to keep the expectation unchanged
```
alstroemeria313#1694: it's this, except for haiku you want it to handle both nchw and nhwc
alstroemeria313#1694: this is the former only
alstroemeria313#1694: ...Does this method of enabling/disabling dropout actually work.
guac#4716: why you bother with both formats lol
alstroemeria313#1694: i'm not
alstroemeria313#1694: i only use nchw
alstroemeria313#1694: but haiku library code lets you select.
guac#4716: ah i didn't even notice i defaulted to channel-last cause my tf pipeline likes it that way lel
alstroemeria313#1694: Do I need to do two hk.transform()s
alstroemeria313#1694: One to enable dropout and one to disable it.
alstroemeria313#1694: Or does this work.
guac#4716: it'll work as long as you pass `enabled` arg to apply
alstroemeria313#1694: oh
guac#4716: so like an eval step you'd do
```python
hk.transform(dropout_module).apply(params, key, x, enabled=False)
```
alstroemeria313#1694: you mean False for eval?
guac#4716: yes lol
guac#4716: my bad
alstroemeria313#1694: yeah i have a training= flag
alstroemeria313#1694: that i pass around
alstroemeria313#1694: and pass in to the top-level .apply()
guac#4716: yeah as long as that training flag toggles the dropout enabling it'll be good
alstroemeria313#1694: ok so.
alstroemeria313#1694: pmap
alstroemeria313#1694: omg
omg
It's training on multiple TPU cores
alstroemeria313#1694: I need to pmap sampling now too
AI_WAIFU#2844: :ultragoose:
alstroemeria313#1694: ```python
def replicate(x, n):
    # stack n copies of every leaf, adding a leading device axis
    return jax.tree_util.tree_map(lambda x: jnp.stack([x] * n), x)

def unreplicate(x):
    # take the first copy of every leaf, dropping the device axis
    return jax.tree_util.tree_map(lambda x: x[0], x)
```
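Hedged usage sketch for the two helpers above, assuming a `params` pytree already exists:
```python
import jax

n = jax.local_device_count()
params_repl = replicate(params, n)      # every leaf gains a leading axis of size n
params_host = unreplicate(params_repl)  # take copy 0 of every leaf, e.g. before saving
```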
gabriel_syme#3220: nice ye, like Flax
igoro#7477: Hi everyone -
I joined here recently, so I figured I'd introduce myself. I'm a generalist software engineer. Most recently, I spent 8 years as the architect of a distributed storage system (FlashBlade by Pure Storage). I am not explicitly a data scientist, but I've helped out with AI projects in the past.
I'll lurk and try to follow along. I am open to dedicating some time to contribute: I can potentially write some documentation, get things to build, write some code, etc.
Feel free to drop me a message if you have any suggestions or questions for me.
gabriel_syme#3220: welcome!
EricHallahan#1051: Welcome!
EricHallahan#1051: Hey, that's my line!
alstroemeria313#1694: parallel sampling is slower and idk why
alstroemeria313#1694: how do you debug JAX performance stuff.
gabriel_syme#3220: oops
alstroemeria313#1694: ```python
def psplit(x, n):
return jax.tree_util.tree_map(lambda x: jnp.stack(jnp.split(x, n)), x)
def punsplit(x):
return jax.tree_util.tree_map(lambda x: jnp.reshape(x, (x.shape[0] * x.shape[1], *x.shape[2:])), x)
```
gabriel_syme#3220: does the model have a generate function?
alstroemeria313#1694: i have a sample function?
alstroemeria313#1694: wdym
alstroemeria313#1694: The sample function is a loop, I can't pmap it
gabriel_syme#3220: oh okay
gabriel_syme#3220: that's what I was thinking ye
alstroemeria313#1694: So I have a "sample step" function that I pmap.
alstroemeria313#1694: And call the pmapped one inside the loop.
Sphinx#2092: You can pmap whatever you like if you believe
alstroemeria313#1694: no, it doesn't work, because it tries to pmap over the number of steps
Sphinx#2092: Pretty sure I've pmapped online backtranslation
guac#4716: you can pmap that sample func if you can `scan` the loop though right?
alstroemeria313#1694: but then i don't get progress updates
alstroemeria313#1694: for each step.
guac#4716: gg's :/
alstroemeria313#1694: eheh~
Sphinx#2092: Doesn't the flax example pmap the beam search function?
zphang#7252: I remember the flax beam search looking like dark magic
guac#4716: https://github.com/google/flax/blob/e63b8bd3a628026c670901cc3fc120308681bd42/examples/wmt/decode.py#L227 you aint kidding lmao
guac#4716: yeah you just need to find a way to pack progress updates inside a dataclass/dict lol
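The host-loop variant described above looks roughly like this; `sample_step`, `params_repl`, `x`, and `num_steps` are assumed to exist and to already be laid out for pmap:
```python
import jax
import jax.numpy as jnp

p_sample_step = jax.pmap(sample_step)  # pmap only the per-step update

for i in range(num_steps):
    i_repl = jnp.full((jax.local_device_count(),), i)  # one copy of the step index per device
    x = p_sample_step(params_repl, x, i_repl)
    print(f'step {i + 1}/{num_steps}')  # host-side progress update every step
```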
Sphinx#2092: its based on t2t I think
Sphinx#2092: on some old tf one
Sphinx#2092: I remember thinking "wow, am I just incompetent that I can't figure this out?"
Sphinx#2092: then I asked people and everyone basically feels like that
alstroemeria313#1694: i got parallel sampling faster
alstroemeria313#1694: It turns out you need the EMA model params in a sharded device array
alstroemeria313#1694: Or else it apparently tries to copy the model params from tpu core 0 to all the others *on each sampling step*.
chilli#5665: in general, anything that isn't sharded will be replicated I think
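As a sketch, the fix amounts to putting the EMA params on every device once, so the pmapped sample step reuses the on-device copies instead of re-transferring them each call (`params_ema` assumed to exist):
```python
import jax

params_ema_repl = jax.device_put_replicated(params_ema, jax.local_devices())
# pass params_ema_repl (not params_ema) into the pmapped sample step each iteration
```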
alstroemeria313#1694: ...why is loss not going down
alstroemeria313#1694: oh
alstroemeria313#1694: no
One#5919: the refrain of AI
alstroemeria313#1694: eheh
alstroemeria313#1694: oh lol
alstroemeria313#1694: I need 3 output channels
alstroemeria313#1694: For CIFAR-10
alstroemeria313#1694: Not 1
alstroemeria313#1694: Loss go down now
alstroemeria313#1694: Hey how do you check memory use in JAX
alstroemeria313#1694: It's doing an epoch every two seconds btw
alstroemeria313#1694: Uh how do you save a model in JAX
alstroemeria313#1694: Pickle?
AI_WAIFU#2844: yep
alstroemeria313#1694: ah
nshepperd#2316: use this imo https://github.com/nshepperd/jaxtorch/blob/master/jaxtorch/pt.py
alstroemeria313#1694: i need to unreplicate the params and states first i think
nshepperd#2316: pickle will work too though
alstroemeria313#1694: like this? ```python
def save():
obj = {'params': unreplicate(params),
'params_ema': unreplicate(params_ema),
'opt_state': unreplicate(opt_state),
'epoch': epoch}
with open(f'model_{epoch:06}.pkl', 'wb') as f:
pickle.dump(obj, f)
```
nshepperd#2316: exactly that
alstroemeria313#1694: 850 epochs https://cdn.discordapp.com/attachments/821173872111517696/894749544661786634/demo_000850.png
nshepperd#2316: yay
Louis#0144: totally not biased?
Louis#0144: right?
Louis#0144: π
nshepperd#2316: lol
nshepperd#2316: definitely very biased
EricHallahan#1051: But using PyTorch checkpoints is objectively better than using a plain pickle.
nshepperd#2316: actually i also tested just using torch.save on jax param dicts directly. that seems to convert all the jax devicearrays to numpy. and then they come out as numpy arrays when you load
inox#5400: please someone give me a random suggestion to address VAE posterior collapse
inox#5400: currently I have: write a non-sucky likelihood
Louis#0144: Momentum models
Louis#0144: @inox
guac#4716: VQ lel
Louis#0144: Keep a second set of weights that is a moving average of the model you're actually training
Louis#0144: And do like
Louis#0144: PPO against it
Louis#0144: Or KL div
inox#5400: its already VQ
inox#5400: that sounds easy I'll try that
chilli#5665: @xcodevn here's a fun issue with trying to represent modules as pytrees
chilli#5665: https://github.com/google/jax/issues/7919
xcodevn#9003: interesting
xcodevn#9003: The *problem* is not at tree_flatten and tree_unflatten but at jax tracing
xcodevn#9003: I don't think this is a pytree problem.
xcodevn#9003: it is a jax tracing issue
xcodevn#9003: i think any jax based library has this issue.
xcodevn#9003: It is a real issue if we are expecting something more out of jax.ndarray. It is not an issue if we just see it as constants and this view is compatible with jax functional programming mode.
gabriel_syme#3220: Is that like teacher student thing?
Louis#0144: Kinda
nshepperd#2316: the issue here seems to be that what they have is a graph, but pytree flatten/unflatten reduces it to, well, a tree
chilli#5665: Pretty much yeah lol
nshepperd#2316: which doesn't matter as long as you treat your pytree object as immutable
nshepperd#2316: but yeah if you want to modify it under the presumption that it'll stay a graph, you'll have problems
chilli#5665: I mean, if it's ... completely immutable
chilli#5665: sure
chilli#5665: but in this case you then can't update it with an optimizer
chilli#5665: actually, no, I'd say that it's problematic regardless
nshepperd#2316: well... yeah, you're right
nshepperd#2316: when you use your optimizer or something you will not get parameter sharing
chilli#5665: or even doing something simple like summing the number of parameters you have in your module
nshepperd#2316: bc yeah the different branches of the tree traversal of the graph will be optimized independently
nshepperd#2316: this is one of the reasons i think it is better for the actual parameters to be stored in some sort of key-value dictionary
nshepperd#2316: so you can organise the keys however you want
chilli#5665: mmm, yeah
chilli#5665: that's probably how we're gonna end up doing it in PyTorch
chilli#5665: making `state_dict` the primary thing being passed around
xcodevn#9003: Interesting, I haven't thought much about weight sharing issue.
xcodevn#9003: We cannot do weight sharing if we have a tree structure.
xcodevn#9003: Perhaps, it is a good thing to not support weight sharing. Users have to explicitly do so.
xcodevn#9003: yeah, in jax we can't update it with an optimizer.
xcodevn#9003: so i would say it is "completely immutable" in jax
chilli#5665: Well, it goes beyond immutability I think
xcodevn#9003: if you mean weight sharing, I think doing it implicitly is a problem more than a solution
chilli#5665: I mean that even if it's completely immutable, it's problematic
chilli#5665: And you can't do things like count how many parameters you have
chilli#5665: You need to completely disable weight sharing
chilli#5665: Which might not be that bad?
chilli#5665: Not sure
xcodevn#9003: I think weight sharing break a fundamental thing in jax
xcodevn#9003: so supporting weight sharing, imo, is a bad choice.
xcodevn#9003: if you want to count parameters, just count the pytree.
xcodevn#9003: i don't think it is bad. It is a good thing.
chilli#5665: Maybe? I don't know, that seems kind of annoying lol
xcodevn#9003: hmm, i understand what you mean..
xcodevn#9003: but say if you want to use a weight for two jobs
xcodevn#9003: it is a good thing to explicitly implement two methods
xcodevn#9003: for a module to support that job.
xcodevn#9003: no magic here.
chilli#5665: Hmm
chilli#5665: How do you even organize it though?
chilli#5665: Let's say I have a single global parameter across all of my modules.
chilli#5665: And I use this module in a bunch of places
xcodevn#9003: I would suggest have a single module then.
chilli#5665: Like, let's say it's a linear layer with a single global bias
xcodevn#9003: have a Bias module then
chilli#5665: Haha
chilli#5665: Maybe
xcodevn#9003: No, I mean it a good thing. Not to *just* to avoid my problem.
chilli#5665: I'm not so convinced it's a good thing lol
xcodevn#9003: doing it implicitly with weight sharing, even though perceived as convenient, makes it unclear what is really happening.
chilli#5665: Now when you initialize things you need to maintain global state to pass things around
xcodevn#9003: it is a good thing to avoid bugs due to incompatible weight formats between modules.
xcodevn#9003: it also explicitly shows what is going on. And, i see this as a good thing.
chilli#5665: Yeah, but you're avoiding bugs here by making things harder for the user lol
gabriel_syme#3220: we need to remember that 'user' is like 99% me
chilli#5665: It's not necessarily a... bad idea
xcodevn#9003: Yes, so what?
chilli#5665: But I don't know if it's a good one
xcodevn#9003: to prevent bugs....
xcodevn#9003: a user needs to implement a Bias module
chilli#5665: The problem is that you're not reliably preventing bugs
xcodevn#9003: imo, it is a good trade-off
chilli#5665: Most of the time that users want to do it, it's not a bug
xcodevn#9003: and the user can just implement a Bias module
xcodevn#9003: it is also what the user wants to do
chilli#5665: what you're basically saying is
chilli#5665: "ok, I (the library author) know better than you"
xcodevn#9003: that is your view.
chilli#5665: mmm, no, I think it's accurate :P. I'm not saying that's always a bad thing lol
chilli#5665: Like, in Rust, the borrow checker is exactly this
xcodevn#9003: yeah, i agree. I'm not saying your view is wrong
chilli#5665: imo, this is very reminiscent of "worse is better"
chilli#5665: haha
chilli#5665: in general
xcodevn#9003: however, i view it as promoting good practice
chilli#5665: maybe
chilli#5665: arguably, TF 1.0 also prevented a lot of bugs that PyTorch's imperative style allowed
xcodevn#9003: because, you may want to let the user
xcodevn#9003: do all the things
xcodevn#9003: modify jax.ndarray for example.
xcodevn#9003: however, jax prevents that and promotes
xcodevn#9003: pure functional mode.
xcodevn#9003: that is a good practice.
chilli#5665: well...
xcodevn#9003: that's what i mean by good practice.
chilli#5665: haha, I don't know if that's true either
chilli#5665: Whether pure functional mode is good practice
xcodevn#9003: ok.
xcodevn#9003: This is where we disagree.
xcodevn#9003: I see it as an extremely good thing.
chilli#5665: It's a good thing in many ways
chilli#5665: And simplifies a lot
chilli#5665: A lot of Pytorch complexity comes from supporting in-place modifications
xcodevn#9003: Of course, you have to trade it with something else.
xcodevn#9003: but in the end, I think it is a good deal.
chilli#5665: Mmm, maybe
chilli#5665: Undecided for me
chilli#5665: I don't think it's obvious
chilli#5665: And is one of the more interesting differentiations between Pytorch and Jax
xcodevn#9003: to generalize a bit (I'm sorry for this.)
xcodevn#9003: this reflects human nature.
xcodevn#9003: we avoid good things because they are just a bit inconvenient.
gabriel_syme#3220: I feel the goal has to be allowing a variety of users to build cool, understandable, scalable code. So how does each approach impact this, if at all?
gabriel_syme#3220: like the code is a means to an end, not the end in itself. At least not for everyone who isn't a SWE/Developer. If the applications matter, then efficiency and convenience are crucial. my 2c
xcodevn#9003: We quickly come to the conclusion this is bad. Because I have to write 3 more lines.
gabriel_syme#3220: I have no idea if it's bad, I'm anything but knowledgeable in this. I just described how someone like me, a user, would view this (perhaps naively)
xcodevn#9003: I understand your view.
chilli#5665: my point is that disallowing things like mutability or weight sharing is more than "3 more lines" of code
xcodevn#9003: can you give an example of that?
chilli#5665: You have an existing module from somebody else's code, you wanna reuse it and share weights with it
chilli#5665: what do you do?
chilli#5665: Dig into their code, rewrite it with your new Bias module
chilli#5665: and then re-integrate with your code?
bmk#1476: well, if you want to share weights within a model, then you have to have two copies, and then do an annoying allreduce by hand, right?
guac#4716: that's pretty much where we're at right now with haiku lol
xcodevn#9003: yes, inheritance
chilli#5665: assuming you mean inheritance, I'm not sure how that fixes this
chilli#5665: because their code is written assuming they don't need to share the bias parameter
chilli#5665: but what you want to do now *is* to share the bias parameter
chilli#5665: so you can't inherit from their class and then modify the parent class
xcodevn#9003: ok, how about instead of having bias as a tensor
xcodevn#9003: it is a property of the module
xcodevn#9003: like, fc.bias
chilli#5665: like, static property?
xcodevn#9003: a function with @property decorator
xcodevn#9003: which returns the value of Bias module.
chilli#5665: I don't really see how that fixes things tbh
chilli#5665: You still need to rewrite the existing library's code, right?
xcodevn#9003: you need to... patch library code...
xcodevn#9003: yes
chilli#5665: yeah, maybe
chilli#5665: there's a lot of possible solutions
chilli#5665: 1. say that "this is error prone", disallow it unless that existing code was written with weight sharing in mind
chilli#5665: 2. patch library code
chilli#5665: probably more
chilli#5665: but none of these are a great solution
chilli#5665: and I think it satisfies this
chilli#5665: Like, this is a non-trivial limitation
chilli#5665: maybe it's a tradeoff you're willing to make
chilli#5665: but it *is* a tradeoff
xcodevn#9003: ok, i just...
xcodevn#9003: think of a solution
xcodevn#9003: with 3 lines of code
xcodevn#9003: ok...
xcodevn#9003: how about having a Bias module
xcodevn#9003: weight = Bias()
xcodevn#9003: and set `weight` to all your modules that need to use `weight`.
xcodevn#9003: and do all this inside a traced function.
xcodevn#9003: say, inside your loss function.
xcodevn#9003: this is basically what you're doing with weight sharing
xcodevn#9003: the only difference is that we're now doing it inside the loss function.
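In Haiku terms, the proposal might look like this minimal sketch. It leans on the fact that calling the same Haiku module instance twice reuses its parameters (`hk.Bias` is a real Haiku module; the shapes are arbitrary):
```python
import haiku as hk
import jax
import jax.numpy as jnp

def forward(x):
    # One Bias instance, called from two places: both call sites share
    # the same underlying parameter.
    shared_bias = hk.Bias()
    a = shared_bias(hk.Linear(64)(x))
    b = shared_bias(hk.Linear(64)(x))
    return a + b

model = hk.transform(forward)
params = model.init(jax.random.PRNGKey(0), jnp.ones([1, 32]))
```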
chilli#5665: gotta go, but I'll read your proposal later
chilli#5665: in general though, these things are not so easy
chilli#5665: and the more "non-obvious" your solution is, the harder your users will need to think
xcodevn#9003: and i mean, you can probably think of an alternative solution for weight sharing, say, under 5 mins.
chilli#5665: Good luck
xcodevn#9003: I mean it honestly, in this case, because weight sharing is to say use this weight to do A, and use this weight to do B too.
xcodevn#9003: the thing here is just that we have to do it explicitly.
xcodevn#9003: and the trick is, perhaps, to let jax tracer know about it.
xcodevn#9003: I haven't thought about it until today. How about, `hk.transform` your module into a pure function and feed the shared weight to it?
(I mean as a haiku solution)
xcodevn#9003: I have to clarify (can't stop myself) that I'm generalizing to human nature here when I said this.
For weight sharing, I would say there is a better alternative. Of course, with its own tradeoffs.
smallanimalfriend#4355: Can you (or have you already) run a test to see if it can count the number of objects in an image? And things like relative distances between pairs of objects in the image? Or can you a priori say that it wouldn't work due to the way you're embedding the images?
ilovescience#3282: looks like pix2pix to me...
atamkin#9578: hey folks! I'm a big fan of The Pile, and I was wondering if there were any plans to make it available via Hugging Face or some other streamable API. I'd like to encourage more use of it in benchmarks / other projects, and being streamable is a big plus for quickly getting started + iterating without having to download the dataset and write your own data loaders
bmk#1476: if you would like to contribute to making it happen that would be awesome
smallanimalfriend#4355: Haha those look better than what I remember. Still nightmarish, but high-res and less "mangled"
StellaAthena#3530: It's been on the todo list for a while, but our todo list is quite long. If you want to help you're welcome to DM me
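For reference, the streaming interface being asked for would look something like this with the `datasets` library (a sketch; the hub id `EleutherAI/pile` is an assumption, since the upload hadn't happened yet at this point):
```python
from datasets import load_dataset

# streaming=True iterates over the data without downloading it all first
pile = load_dataset("EleutherAI/pile", split="train", streaming=True)
for doc in pile.take(3):
    print(doc["text"][:200])
```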
Orz#3023: β
Orz#3023: I'm up
spruce#1680: to me it looks like just someone that has some photoshop knowledge
spruce#1680: im p sure ive seen that bird before
spruce#1680: yeah they were made by Tom Curtis https://www.demilked.com/realistic-sculptures-kids-drawings-tom-curtis/
alstroemeria313#1694: Oh no lol
StellaAthena#3530: Does anyone have a citation on hand for "transformers are bad at writing consistently and coherently across long passages"
dmayhem93#3202: https://arxiv.org/abs/2107.01294 maybe?
Louis#0144: Routing transformer maybe
ilovescience#3282: lol so not even ai
spruce#1680: nope
spruce#1680: i wish
QueenSorceressAdrielle#5079: Hiya!
QueenSorceressAdrielle#5079: I've been in this channel for a little bit, hopefully I'll have some spare threads to see how I can help out soon
QueenSorceressAdrielle#5079: Does anyone know what type of organization Eleuther is?
bmk#1476: unfortunately, nobody knows
StellaAthena#3530: It's a discord channel where people hang out, shoot the shit, and change the world.
QueenSorceressAdrielle#5079: gotcha, yea i was wondering like on the business/organization/etc side
bmk#1476: it's the discord server with the most publications
bmk#1476: business? what's that
QueenSorceressAdrielle#5079: seems like a cooperative
QueenSorceressAdrielle#5079: ikr
bmk#1476: it's not a cooperative it's just a discord server
QueenSorceressAdrielle#5079: oh okay cool
QueenSorceressAdrielle#5079: I ask because I saw that you guys have affiliates and I might be interested in collaborating with you guys in that respect
bmk#1476: depends on what "collaborate" means in this sentence
bmk#1476: can you tell us a bit more about what you're thinking of
StellaAthena#3530: I mean, we have people who do research with us. And people who give us GPUs to use
QueenSorceressAdrielle#5079: I work more in distributed dl and robotics, but I like what you guys are about
StellaAthena#3530: You're more than welcome to participate in research with us
bmk#1476: we're always looking for new contributors
EricHallahan#1051: By affiliates, you mean the organizations listed in #rules?
QueenSorceressAdrielle#5079: Gotcha, yea I own a small lab in upstate new york. If it seemed like I could help in some way from a resource standpoint or research standpoint in the future I think that'd be great.
QueenSorceressAdrielle#5079: yes
bmk#1476: we can always use more contributors
bmk#1476: we're constantly in need of people to help turn ideas into code
Asca#3513: What sort of research do you do
QueenSorceressAdrielle#5079: neuromorphic photonics, cognitive robotics, and neuromorphic algorithms
bmk#1476: and we do publish papers so like if that's what floats your boat we got it
Asca#3513: What is a neurotrophic photonic dawg I didn't pass 8th grade math
StellaAthena#3530: Most stuff involving transformers. We started with training large language models specifically but we've branched out into a variety of areas including contrastive learning, alignment, interpretability, and bio ml
bmk#1476: this seems somewhat different from what we do
QueenSorceressAdrielle#5079: gotcha, I do consulting in the bio ml space so I can't share much there unfortunately. gotta pay the bills.
Asca#3513: AI is cool I just heard this can make cool pictures and stuff, idk the science tho
bmk#1476: neuromorphic is like a fancy way of saying sparsity these days right
StellaAthena#3530: Our BioML group actually spun off its own discord server: https://discord.gg/RvmQYUSc
QueenSorceressAdrielle#5079: it definitely is, but there could be crossovers at the algorithmic levels. particularly insofar as distributed training goes.
bmk#1476: i heard some stuff about spiking NNs a few years ago but it doesnt seem like much has come of it
bmk#1476: oh yeah that's probably a bit of overlap there
QueenSorceressAdrielle#5079: gotcha, yea snn's and gnn's are mostly the bread and butter. low precision distributed training with those architectures is one of the more recent efforts.
bmk#1476: how well do they work? like do you think we could see competitive SNN based LMs?
QueenSorceressAdrielle#5079: thats really cool. I would love to help out there if I could. That is specifically the area I am consulting in, but I wouldn't want to mix open source with closed source. I hate it, but that's how these companies are.
bmk#1476: ive never really heard about SNNs taking sota in any field and i kind of assumed everyone forgot about them
QueenSorceressAdrielle#5079: the application we are working on is with reinforcement learning specifically. GNN's (non-stdp) could potentially generate competitive language models, but there are computational problems with them.
bmk#1476: GNNs = Graph NNs?
Asca#3513: Yo this shit's super cool, where can I learn more about AI stuff
bmk#1476: !faq
Carl-bot#1536:
QueenSorceressAdrielle#5079: thats where the neuromorphic hardware comes in. von neumann architecture doesn't support them well
QueenSorceressAdrielle#5079: yea graph neural networks
Asca#3513: Ty
bmk#1476: seems like an uphill battle to me because of network effects
gollark#3909: Intel has some new neuromorphic hardware now, don't they?
gollark#3909: On their latest process, even.
QueenSorceressAdrielle#5079: it's a bit circular though, there are many mathematical similarities between transformers/attention mechanisms and self-organizing maps
StellaAthena#3530: GNNs *are* transformers
bmk#1476: tbh i dont understand GNNs
StellaAthena#3530: Or rather, transformers are GNNs.
QueenSorceressAdrielle#5079: they can be, but it also depends on the way the graph is defined and the transformer is defined
QueenSorceressAdrielle#5079: but yea, there is a huge amount of overlap
EricHallahan#1051: TL;DR: Check #communities.
StellaAthena#3530: That is true, I'm mostly thinking of graph equivariant and graph convolutional networks like Cohen's and Welling's work.
QueenSorceressAdrielle#5079: The same goes for SNN's to some degree
Some Point Process#3793: i've thought about this correspondence (to SOMs) as well, due to the clustering of embedding vectors that the SA layer performs
QueenSorceressAdrielle#5079: yea exactly this
QueenSorceressAdrielle#5079: Well, I'll circle back at some point. I'd love to help out more directly with research if I get the spare cycles. I just leased more lab space so I'm going to be tied up for the next month or two getting that settled out.
anthony_fuller#1075: Does anyone work with point clouds? I'm looking through Lucid's SE(3) transformer and it seems like it requires features for each point. What if we just have coords?
Asca#3513: Y'all seem like techy people so I thought some of you would find this video cool
Asca#3513: https://twitter.com/neolithicvrgame/status/1445379262605844487?s=21
EricHallahan#1051: This is probably more suitable to #off-topic.
Asca#3513: Ah
EricHallahan#1051: #off-topic
cfoster0#4356: Oo
cfoster0#4356: Sounds like they're inching it closer to the NN accelerator world?
cfoster0#4356: At first glance
gabriel_syme#3220: there was actually an SOM paper a few days ago, wonder if it has some new cool tricks for them. I read some of it but it was not what I expected (wayy too many equations)
Exocamp#8255: By any chance, is there a total, complete list of loss functions (particularly for regression but also stuff like cosine similarity or spherical distance loss) out there?
Exocamp#8255: I don't mean what's been implemented in pytorch already
gabriel_syme#3220: why are people who are dogmatically against or for something so loud and obnoxious? Like why are they instantly rude and condescending to any opinion, and typically spurt nonsense doing it
Daj#7482: status regulation
gabriel_syme#3220: it can't be universal, there are fundamentalists that do no such thing (Buddhism is a decent example)
Daj#7482: there are _absolutely_ buddhists that are like this
gabriel_syme#3220: yeah that makes sense, if only they had a status worth regulating
Daj#7482: There were whole buddhist warlords
gabriel_syme#3220: oops missed it
Daj#7482: You gain status with ingroup by burning bridges with outgroup
Daj#7482: Credible commitment mechanism
gabriel_syme#3220: yeah makes sense
gabriel_syme#3220: that's why I like me some Diogenes, the GOAT of the outgroup
gabriel_syme#3220: still, so much wasted energy. oh well
Daj#7482: Moloch.jpg
gabriel_syme#3220: why no emote
Daj#7482: yeah need a good emote to represent moloch
Chlorokin#6581: . https://cdn.discordapp.com/attachments/729741769738158194/895305892982059038/image0.jpg
Louis#0144: lmaoooo
jordiae#4107: hi! which was the website to compute the scaling laws optimal model/dataset size?
EricHallahan#1051: Check the pins in #scaling-laws
jordiae#4107: thanks! sorry
Daj#7482: Hm could work but I feel like we can find something funnier
StellaAthena#3530: **PSA: Twitch compromised**
Twitch's database has been compromised, including passwords and streaming keys. If you use twitch change your credentials ASAP.
I don't believe passwords have been cracked yet, but it's highly likely to happen eventually. The leak is massive and includes Twitch's entire source code back to day 1, details of upcoming projects/business ventures, and all earnings numbers for streamers over the past 3 years
Orz#3023: umm
is it possible to resume a wandb run which has been marked as "crashed" by wandb but runs very well locally?
Kharr#7888: If source code was leaked, it's possible to figure out password salts too
StellaAthena#3530: It saves logs locally so if it's still running just leave it and upload the logs when it's done
Orz#3023: aight
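For the resume half of the question, wandb can also re-attach to an existing run id so later logs land in the same run. A sketch, with the project name and run id as placeholders:
```python
import wandb

# resume="allow" reattaches if the id exists; resume="must" errors out
# if it doesn't, which is safer when you expect the run to be there.
run = wandb.init(project="my-project", id="YOUR_RUN_ID", resume="allow")
```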
Louis#0144: @abysmallytall
Louis#0144: lmao
Louis#0144: AFAIK JR was on twitch's info sec team
Louis#0144: cc @StellaAthena
abysmallytall#7153: I was at Twitch, but I was on the team making sure video got from point A to point B. I'm the farthest thing from a security person.
Louis#0144: do u know people on the info sec team
Louis#0144: are they shitting themselves rn
abysmallytall#7153: I don't, but I imagine they're going to be in for a few rough weeks. I was at Capital One for their breach maybe ...2 years ago? High stress across the company.
bmk#1476: ouch
Dromarion#3383: How much damage do you think Twitch will suffer once these events run their course? I don't imagine they're finished, I don't think that happens much in the history of breaches.
abysmallytall#7153: My suspicion is that it has no long term impact.
abysmallytall#7153: YouTube is more or less the other major platform. IMO, it's not holistically better than Twitch. So, it becomes a question of where users would go.
bmk#1476: I never use twitch, what are its advantages over YouTube?
bmk#1476: other than the network effects, they seem basically interchangeable to me
bmk#1476: and breaches certainly don't help that network effect
Orz#3023: idk
But I have a disadvantage for you
you can't set the video quality to what you like...(in twitch I mean....)
dmayhem93#3202: You can though? Unless you want more options than resolution
Orz#3023: Not for every stream ig
nshepperd#2316: having resolution options is gated behind being a high level streamer
nshepperd#2316: like "affiliate" or w/e they call it
nshepperd#2316: no idea why
Dromarion#3383: Since the focus is streaming I thought there was more API functionality or something such as linking to games
Dromarion#3383: Not a streamer though
zphang#7252: largely, the features built around the network effects/community
zphang#7252: there may be differences in streaming tech but I would think that doesn't matter for most people
bmk#1476: well this whole breach thing probably doesn't help their network effect lol
zphang#7252: depends on how the streamers respond
zphang#7252: the lack of a good easy-to-jump-to alternative probably means nothing much will happen other than streamers ranting about it for 3 days
bmk#1476: ah
Deleted User#0000: I come here to take advice of others, to network with others in AI
I completed my degree in AI about a year ago, then tried to do a masters but got shingles due to getting covid I think, disabling half my body for about six months, but now I am back in it and trying to find work as I lost my apartment and everything in the process
I mainly do AI stuff with Music and Computer games in my own time, but like I cover many other areas professionally/academically
abysmallytall#7153: I don't really have all the ins and outs memorized, but my understanding is the main differences come down to how the recommendation platforms work and the monetization. But I didn't really work on Twitch from a (user-facing) platform perspective, so that wasn't part of my day to day.
Louis#0144: mixr's time to strike
zphang#7252: a little too late...
Louis#0144: did they go under
Louis#0144: I was not even paying attention
Dromarion#3383: They're Facebook Gaming now :ultraberk:
zphang#7252: microsoft just shut it down cause they weren't making headway against twitch, I think
circuit10#0158: Probably because they have to re-encode it in a bunch of resolutions
circuit10#0158: Which takes too much compute power if there aren't many viewers
Hs0ci3ty#8728: Can anyone please suggest some project ideas to work on related to AI? I gotta do some great projects to mention in my resume.
Hs0ci3ty#8728: Or any interesting research paper ?
EricHallahan#1051: Help yourself:
https://board.eleuther.ai
EricHallahan#1051: Inquire, pick up a task, and get to work.
Hs0ci3ty#8728: Gotcha thanks Eric.
gabriel_syme#3220: recommend to start from what you know and what you like doing
EricHallahan#1051: Also, welcome!
Albus#2424: Hi all, I have a few hundred million docs and I need to find duplicates. I plan to use minhash LSH, as you did when preparing the Pile. Does anyone know the specs of the machine you used?
EricHallahan#1051: @bmk?
bmk#1476: @researcher2 was responsible for that
bmk#1476: though if i remember correctly our machine didnt have more than 256gb of ram
bmk#1476: might have been 128gb
Albus#2424: That's great news! Thanks heaps!
Albus#2424: @researcher2 did you run it on a single machine or did you use a cluster?
bmk#1476: im pretty sure it was a single machine
bmk#1476: and it might have been using cassandra (?)
bmk#1476: i know r2 tried something with cassandra at some point, not sure if that's what we ended up sticking with though
gabriel_syme#3220: has anyone tried google's deduplicate? Been meaning to try it for my layouts
Albus#2424: Do you have the repo link?
gabriel_syme#3220: Sure! https://github.com/google-research/deduplicate-text-datasets
Albus#2424: I haven't tried it yet, also because as far as I understand the repo doesn't include the implementation of minhash LSH
xcodevn#9003: Hi everyone, I have prepared a short tutorial on **side effects** when programming with **JAX**. This is one of the important topics in JAX programming.
Even though it is prepared as documentation for my PAX project, I think it is still useful for anyone who wants to better understand JAX and the rationale behind JAX-based functional DL frameworks in general.
https://pax.readthedocs.io/en/main/notebooks/side_effects.html
Feedback to improve it is very welcome.
Gurkenglas#7362: Exists a reasonably cheap (no more expensive than a GPT completion) model that turns a natural language query into a relevant training datum pointer?
StellaAthena#3530: GPT-Neo hit 20 citations! Congrats team.
bmk#1476: now to make them show up in google scholar
StellaAthena#3530: __In case anyone's curious, here's the list:__
1. Measuring Coding Challenge Competence With APPS
2. TruthfulQA: Measuring How Models Mimic Human Falsehoods
3. Teaching Autoregressive Language Models Complex Tasks By Demonstration
4. An Empirical Exploration in Quality Filtering of Text Data
5. Intersectional Bias in Causal Language Models
6. Evaluating Large Language Models Trained on Code
7. Syntactic Persistence in Language Models: Priming as a Window into Abstract Language Representations
8. AMMUS : A Survey of Transformer-based Pretrained Models in Natural Language Processing *(technically doesn't cite us, because it credits someone else with our model, the fucks)*
9. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
10. Fake or not? Generating adversarial examples from language models.
11. Deduplicating Training Data Makes Language Models Better
12. Artificial Intelligence, Algorithmic Writing & Educational Ethics
13. Natural Language Processing Models That Automate Programming Will Transform Chemistry Research and Teaching
14. It's Basically the Same Language Anyway: the Case for a Nordic Language Model
15. Program Synthesis with Large Language Models
16. RAFT: A Real-World Few-Shot Text Classification Benchmark
17. Towards A Measure of General Machine Intelligence
18. Messing with GPT-Neo: How a reproduction of GPT-3 compares to the original, and how to use it.
19. Comprehension Based Question Answering using Bloom's Taxonomy
20. Multimodal datasets: misogyny, pornography, and malignant stereotypes
Deleted User#0000: could someone link the most popular vqgan colab?
Deleted User#0000: two that I've been using have stopped working
Deleted User#0000: Not too interested in clip, just vqgan
spruce#1680: if you add a
!pip install typing_extensions
and then replace the taming github with one that uhhh has the imports fixed (it's fixed in my fork https://github.com/aicrumb/taming-transformers) it should work again
spruce#1680: if im thinking of the right error, a few people have been having one w a typing import for taming
Deleted User#0000: yep that looks like it
Deleted User#0000: I'll give it a try, thank you!
spruce#1680: of course!
spruce#1680: happy to help
Deleted User#0000: that didn't seem to work, it still tells me the typing error
Deleted User#0000: ```ImportError: cannot import name 'Literal' from 'typing' (/usr/lib/python3.7/typing.py)
```
spruce#1680: have you factory reset the runtime?
either that or deleting the taming-transformers folder should do it
Deleted User#0000: I'll try resetting again first
Deleted User#0000: not sure why I'm still using the spanish one, maybe that's why :HAhaa:
Deleted User#0000: https://i.imgur.com/dEd8GRz.png it should look like this right?
spruce#1680: that should be right yeah
Deleted User#0000: oh awesome it works now :D thanks
spruce#1680: ^^
jbustter#5167: hi, i know i read about some paper about how you can fine-tune a language model using a small amount of training data. Anyone know the method?
EricHallahan#1051: There are lots. Any other constraints?
CRG#8707: The openai paper?
jbustter#5167: @EricHallahan @CRG I have around 1000 prompts of questionable quality that I downloaded from a subreddit. I want to finetune a bert / gpt network to use the format that subreddit enforces. So any paper / method that suits that sort of task would be helpful
CRG#8707: I think it was: https://openai.com/blog/improving-language-model-behavior/ not sure if applicable
StellaAthena#3530: The related work on this is pretty weird. The first two papers that immediately come to mind to me while reading the intro are both missing, and one of them was even by OpenAI researchers!
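For a dataset that small, a minimal sketch with the transformers Trainer could look like the following (assuming `texts` holds the ~1000 scraped prompts; the model size and hyperparameters are placeholders):
```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer.pad_token = tokenizer.eos_token  # GPT-style models ship no pad token
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

# `texts` is assumed: the ~1000 scraped prompts, one string per example.
train_dataset = [tokenizer(t, truncation=True, max_length=512) for t in texts]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=train_dataset,
    # mlm=False gives the causal LM objective (labels are the shifted inputs).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```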
Deleted User#0000: Do seeds also include prompt info?
Deleted User#0000: because I tried "all seeing eye in cyberspace" then did a different prompt but accidentally kept the seed, and there's eyes still
Deleted User#0000: in fact it looks better than the original prompt
jbustter#5167: seeds don't include prompt info, but the seed changes the initial vector, which has a large effect. Also, the problem you had with importing taming was actually an error on their part (i think) that they patched pretty quickly.
Deleted User#0000: Oh I see
jbustter#5167: https://github.com/CompVis/taming-transformers/commit/37afdf35f0bdd30a8b95ebc9a4fc4bd6bc1e8c03 :goose:
researcher2#9294: Yeah it was single machine, believe it had 128gb ram, we didn't do the full pile on it however. The minhash LSH implementation we used had a few issues, the Cassandra implementation crashed part way through several times and wouldn't resume.
researcher2#9294: https://github.com/ekzhu/datasketch/issues/135#issuecomment-719893171
researcher2#9294: Oh and mongo was way too slow at the time, would be interested to hear if things have improved over the last year.
researcher2#9294: Our code for dedupe was here
researcher2#9294: https://github.com/EleutherAI/openwebtext2/tree/master/cleaning
researcher2#9294: Ah thats right, Cassandra managed to make it through when doing OpenWebText2. https://openwebtext2.readthedocs.io/en/latest/replication/#stage-4-deduplicate-filtered-documents-using-minhashlsh-with-cassandra
researcher2#9294: I'll see if I can find the code for in memory, probably lying around somewhere.
Louis#0144: Casandra?
researcher2#9294: Code wasn't cleaned up for commit unfortunately, but here's a quick untested rebuild with all our details removed. Put some uniquely identifiable metadata in for each document when building the lsh. This assumes you have a list of minhashes with their corresponding document metadata. We just used pile offset for our meta.
```python
import os
import pickle

import tqdm
from datasketch import MinHash, LeanMinHash, MinHashLSH

import logger  # project-local logging helper from the openwebtext2 repo


def get_lsh(working_directory, minhashes):
    # Load a previously built index if present, otherwise build it
    # from the (meta, minhash) pairs and pickle it for next time.
    lsh_file_path = os.path.join(working_directory, "lsh.pkl")
    if os.path.exists(lsh_file_path):
        logger.info("Loading lsh from pickle...")
        return pickle.load(open(lsh_file_path, "rb"))

    logger.info("Building LSH")
    lsh = MinHashLSH(threshold=0.5, num_perm=10)
    progress = tqdm.tqdm(total=len(minhashes), dynamic_ncols=True, unit="docs")
    for (meta, minhash) in minhashes:
        lsh.insert(meta, minhash)
        progress.update()
    progress.close()

    logger.info("Dumping LSH")
    pickle.dump(lsh, open(lsh_file_path, "wb"))
    return lsh


def main(working_directory, minhashes):
    lsh = get_lsh(working_directory, minhashes)

    duplicates = []
    for (meta, minhash) in tqdm.tqdm(minhashes, dynamic_ncols=True, unit="docs"):
        results = lsh.query(minhash)
        for found_meta in results:
            if found_meta != meta:
                # Record this copy as a duplicate and drop it from the
                # index so the remaining copy is the one that survives.
                duplicates.append(meta)
                lsh.remove(meta)
                break

    duplicates_file = os.path.join(working_directory, "duplicates.pkl")
    pickle.dump(duplicates, open(duplicates_file, "wb"))
```
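For the step the snippet assumes, producing the (meta, minhash) list in the first place, a minimal datasketch sketch might look like this (the whitespace shingling and num_perm here are illustrative choices, not the Pile settings):
```python
from datasketch import MinHash, LeanMinHash

def build_minhashes(documents, num_perm=10):
    # `documents` is assumed to be an iterable of (meta, text) pairs.
    minhashes = []
    for meta, text in documents:
        m = MinHash(num_perm=num_perm)
        for token in set(text.split()):  # one naive shingling choice
            m.update(token.encode("utf-8"))
        # LeanMinHash is smaller to hold in memory and to pickle.
        minhashes.append((meta, LeanMinHash(m)))
    return minhashes
```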
researcher2#9294: Just sent you the file for generating minhashes, it's meant for CC but has support for resuming and multiple machines from memory.
Deleted User#0000: sup
Deleted User#0000: so my background is: senior cs and applied math undergrad, ive done some work as a consultant, working for a psychology lab rn developing models for reading comprehension prediction from gaze. i have a good amount of experience with keras and pytorch. one of my main focuses is interpretability, so if anyone has any good uses for SHAP or LIME i can provide them. any advice about direction or projects needing some help here would be very appreciated
gabriel_syme#3220: Hello and welcome! Others will have more to say but a good place to see ongoing discussions/project proposals is here: https://github.com/EleutherAI/project-menu/projects/1 You can find some work on interpretability there and a lot of relevant discussion in #interpretability-reading-group as well. And ofc, if you have an idea you'd like to pursue please go ahead (the repo also details how to go about that, although it can also happen in here at first)!
Deleted User#0000: thnx for the help!
EricHallahan#1051: Are you trying to take over my line when I am (supposed to be) asleep?
gabriel_syme#3220: I'm just trying to keep you healthy, sleep is important
Teemochu#8740: :moloch_chan:
Teemochu#8740: I would have reacted, but this coincidence was too good to pass up https://cdn.discordapp.com/attachments/729741769738158194/895917339361554452/unknown.png
zphang#7252: simple, just write a new paper and cite gpt-neo
Daj#7482: I knew this would come up. No
Deleted User#0000: we need 20 berks on this
Louis#0144: :amongustwerk:
Louis#0144: Shiv would love it though
heuo#5207: https://thiscontext.com/a-detailed-description-of-spoons-object-memory-visualization-tools/
heuo#5207: https://youtu.be/FQgMlpQpu7w
Bengaman#4538: @alstroemeria313
Bengaman#4538: So like, I get what you were saying
Bengaman#4538: BUT
alstroemeria313#1694: @Bengaman anyway yeah, NN architectures are p flexible and we can use *more* layers (matmuls) in sequence rather than using ones too big to fit the computation in one GPU and slowing way down
Bengaman#4538: and I take multithreading large arrays here as example
alstroemeria313#1694: i.e. deeper networks instead of wider
Bengaman#4538: You dont NEED the ENTIRE layer on one GPU
kurumuz#5695: First, parallelization in one GPU and distributing a task to hundreds of nodes are completely different things
kurumuz#5695: GPU programming doesn't give you much experience for distributed HPC like that.
Bengaman#4538: it would be super fast if you could, but lets say, for instance, gpu memory is limited and expensive
Bengaman#4538: so you want to distribute it
kurumuz#5695: we already have this
kurumuz#5695: I don't understand
kurumuz#5695: read the megatron paper
kurumuz#5695: model sharding, model parallelism
kurumuz#5695: bmk threw a really good list of papers to read there
Bengaman#4538: Then chill, and let me explain instead of jumping down my throat? Thats really rude man
Bengaman#4538: Like, you dont learn like that
kurumuz#5695: ok i take my leave lol
Bengaman#4538: maybe im totally wrong, but I might have some insight you know
Bengaman#4538: maybe
Bengaman#4538: So, like, a lot of these calculations between neurons are fairly local
Bengaman#4538: even if they are spread over the whole model, the links can be thought of as 1 dimensional right?
Bengaman#4538: And neurons dont always share info between neurons
Bengaman#4538: you can make an array of frequent connections, and distribute them in chunks
kurumuz#5695: I would recommend you to read about the vanilla transformer
alstroemeria313#1694: i mean it's a directed acyclic graph but our current archs have structure we exploit for speed and that structure is sequences of dense matrix multiplications
StellaAthena#3530: "Neurons" are a lie your parents told you because you weren't ready for the truth. We are doing extremely large matrix multiplications, and thinking about neurons is primarily a crutch
Bengaman#4538: Okay fair
bmk#1476: @Bengaman go read up on these papers first
https://arxiv.org/abs/1706.03762
https://arxiv.org/abs/1909.08053
https://arxiv.org/abs/1811.06965
https://arxiv.org/abs/1811.02084
StellaAthena#3530: These are also good
https://arxiv.org/abs/1910.02054
https://arxiv.org/abs/2010.13369
https://arxiv.org/abs/2102.02888
https://arxiv.org/abs/2104.06069
bmk#1476: you can talk about distributed training once you've read these papers
kurumuz#5695: what
kurumuz#5695: did he delete that himself...
kurumuz#5695: lol
bmk#1476: no, that was me
kurumuz#5695: like jesus christ
bmk#1476: this is our server and we have the right to require a minimum level of background knowledge. keep going and you'll earn yourself a ban
kurumuz#5695: @Bengaman you literally dont know how transformers work. How do you think you're supposed to talk about parallelizing them?
kurumuz#5695: you are spouting bunch of nonsense
StellaAthena#3530: Leo rn
https://tenor.com/view/law-police-judge-cop-dredd-gif-8606130
Bengaman#4538: Look, message received, im off. Just saying, I think the Venn diagram of data scientist and realtime GPU parallelization techs is probably pretty small. The game dev community has been at this sort of thing a really long time, I bet theres even off the shelf stuff that would really help and nobody knows because those spheres dont talk to eachother
kurumuz#5695: yikes
Bengaman#4538: PS, I dont recall inviting any of you guys, I believe myself and alstro were happily continuing a conversation when you guys decided you had to come over and cause a disruption. Like? Why are you here again?
bmk#1476: :banhammer:
bmk#1476: banned
OccultSage#3875: :facepalm:
StellaAthena#3530: https://tenor.com/view/benedict-cumberbatch-sherlock-sherlocked-sherlock-holmes-majors-gif-7766610
nshepperd#2316: *reverse vampire invites you into your own home*
Teemochu#8740: But why? What is more embodying of the god of everyone looking out for their own desires than a superstimulating adorable animu?
Teemochu#8740: Also rip I was going to link him the napkin guide.
kurumuz#5695: Connor is just not cultured *yet*
Orz#3023: Also
umm
if you guys really don't mind..
could you guys create a channel and post papers like these (related to all projects like carp/art/gpt etc)
Sid#2121: #research
OccultSage#3875: There's a napkin guide for a smooth brain like me?!
Teemochu#8740: https://dugas.ch/artificial_curiosity/GPT_architecture.html
OccultSage#3875: Nice! I know a lot of the pieces, but it's nice seeing it all in one place. :sageHeart:
bmk#1476: oh wow that's a really nice guide
bmk#1476: just like mostly ignore the positional encoding section because it's actually wrong (bert uses sinusoidal; gpt is just fully learned) and also nobody does sinusoidal anymore anyways
bmk#1476: also this is wrong it's W_E^T https://cdn.discordapp.com/attachments/729741769738158194/896060026664407070/unknown.png
bmk#1476: also this isnt really correct, and i think any good explanation of gpt needs to talk about efficient kv caching and the attention mask https://cdn.discordapp.com/attachments/729741769738158194/896060352545038346/unknown.png
EricHallahan#1051: The masking is really critical to the autoregressive objective.
kurumuz#5695: yeah
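Concretely, the mask pushes everything above the diagonal of the attention score matrix to -inf before the softmax, so position i can only attend to positions j <= i. A minimal sketch:
```python
import jax.numpy as jnp

def apply_causal_mask(scores):
    # scores: [..., T, T] attention logits; entries above the diagonal
    # (future positions) become -inf and vanish after the softmax.
    T = scores.shape[-1]
    mask = jnp.tril(jnp.ones((T, T), dtype=bool))
    return jnp.where(mask, scores, -jnp.inf)
```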
Daj#7482: is this a threat
Teemochu#8740: No but it will be a treat
Teemochu#8740: Feel the animus flowing through you
igoro#7477: I'd be curious about the better explanations to a couple of these points. At the risk of asking dumb questions:
* **Re: extra positions filled with "empty" values:** I'd imagine that you don't want the attention heads to attend to positions prior to the beginning of the sequence. So, you want the keys and queries to come out to 0. Wouldn't that just be done by feeding in zero embeddings? Or is it more specialized than that?
* **kv caching:** Not seeing much on a quick Google search. Is this about caching state between the forward and the backward pass? Or is it something totally different?
bmk#1476: 1. unless you're using like tpus or something where you need fixed sizes you don't need to pad it at all you just don't add those extra positions
bmk#1476: 2. the caching thing is basically since past keys and vals can't depend on future stuff, you don't have to recompute them
igoro#7477: aaah, makes sense. a consequence of the whole masked attention thing
bmk#1476: yeah
bmk#1476: it's really clever
bmk#1476: it's also why you only need one training step to train at every position at once right
igoro#7477: one thing i've been curious on that
igoro#7477: if you feed a 100-token sequence as the initial input, it seems like you could theoretically get additional parallelism on the input tokens, before you get to generating the output tokens
Teemochu#8740: Ah! Ok that makes sense
bmk#1476: you mean like run all those 100 positions at once?
bmk#1476: that's actually exactly how it's done
igoro#7477: yeah, in lock step
bmk#1476: you run the context in lockstep to populate the cache and then you do the efficient generation thing
igoro#7477: I see. so you end up implementing both variants: the lockstep one and the cached one
bmk#1476: yeah
bmk#1476: and during training everything is lockstep
igoro#7477: OK. that makes sense.
igoro#7477: thanks
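Putting that exchange together, the lockstep-then-cached loop might look roughly like this against the Hugging Face interface (a sketch with greedy decoding for brevity; `model` is assumed to be any causal LM that returns `past_key_values`):
```python
import torch

@torch.no_grad()
def generate(model, input_ids, max_new_tokens):
    # Lockstep pass over the whole prompt populates the kv cache...
    out = model(input_ids, use_cache=True)
    past = out.past_key_values
    next_id = out.logits[:, -1].argmax(-1, keepdim=True)
    generated = [next_id]
    # ...then each new token only computes its own position, attending
    # against the cached keys/values instead of recomputing them.
    for _ in range(max_new_tokens - 1):
        out = model(next_id, past_key_values=past, use_cache=True)
        past = out.past_key_values
        next_id = out.logits[:, -1].argmax(-1, keepdim=True)
        generated.append(next_id)
    return torch.cat([input_ids] + generated, dim=-1)
```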
OccultSage#3875: maybe provide feedback to the author? I just read the whole thing, and it's excellent.
elderfalcon#4450: Ooh 1 bit lamb, that's one I'll have to read.
elderfalcon#4450: The chicken chicken chicken paper is p good too
elderfalcon#4450: (Warning: Direct to PDF link, it ain't Arxiv): https://isotropic.org/papers/chicken.pdf
wabi-sabi#5811: https://ideas.repec.org/p/qed/wpaper/1083.html
wabi-sabi#5811: https://cdn.discordapp.com/attachments/729741769738158194/896093852358819840/download_2.png
bernaise#6161: the chicken paper doesn't make any sense to me, is there a goose translation?
bmk#1476: no but we need to make one
bmk#1476: we also need to make a goose translation of the presentation too
bernaise#6161: word2goose
wabi-sabi#5811: bert's toughest benchmark yet
elderfalcon#4450: Pls yes
elderfalcon#4450: I dunno, but few things will ever, ever, ever top this video intro: https://youtu.be/a0MAs0GDA64
Paxilius#2291: Hi
EricHallahan#1051: Welcome to EleutherAI!
Paxilius#2291: Thank you for working on GPT... It really got me back into learning more about ML and having some fun in the process
Paxilius#2291: I'm fairly new, but just before I do... Am I allowed to ask questions on fine tuning gpt-neo here?
alstroemeria313#1694: ```
File "./wikiart_3_ckpt.py", line 87, in __call__
if not enabled:
jax._src.errors.ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected: Traced<ShapedArray(bool[])>with<DynamicJaxprTrace(level=0/2)>
The problem arose with the `bool` function.
While tracing the function sample_step at ./wikiart_3_ckpt.py:277, transformed by pmap., this concrete value was not available in Python because it depends on the value of the arguments to sample_step at ./wikiart_3_ckpt.py:277, transformed by pmap. at flattened positions [1], and the computation of these values is being staged out (that is, delayed rather than executed eagerly).
(https://jax.readthedocs.io/en/latest/errors.html#jax.errors.ConcretizationTypeError)
```
alstroemeria313#1694: Yeah so I don't know how to do dropout with JAX gradient checkpointing.
In a way where I can turn dropout off when sampling.
Because JIT something something.
EricHallahan#1051: Here or #off-topic is fine.
Paxilius#2291: Thank you!! Excitement!!
elderfalcon#4450: Dropout should be the identity function when sampling regardless, if it's written correctly with prescaling, so you should be able to pass the identity function or none at all for it.
Just JIT stuff with PyTorch, but the path tracing stuff should be roughly analogous across traces. You know how logical conditionals can be during traces....
alstroemeria313#1694: I don't know how actually
alstroemeria313#1694: I know very little about JAX
alstroemeria313#1694: This is a port of some PyTorch code I wrote that I want to run on TPU.
elderfalcon#4450: Nw me either haha
elderfalcon#4450: Just check the dropout function: do they scale by 1/(1 - drop_rate)? If so, you can just drop in the identity function easy peasy during inference
Paxilius#2291: The main thing I'm actually wondering... I read a few blogs but they don't really explain much, could I fine tune gpt-neo on some Shakespeare for example using my 1080ti and Deepspeed?
alstroemeria313#1694: Um, I don't know how.
alstroemeria313#1694: It does
alstroemeria313#1694: I wrote it myself
Paxilius#2291: Blogs mention being able to do stuff with deepspeed on cards with less than 20GB of RAM
alstroemeria313#1694: I don't know how to *do something different during inference* because I can't use control flow lol
elderfalcon#4450: The function? Dunno which you're replying to haha
alstroemeria313#1694: the dropout code
alstroemeria313#1694: ```python
class Dropout2d(hk.Module):
    def __init__(self, rate=0.5, name=None):
        super().__init__(name=name)
        self.rate = rate

    def __call__(self, x, enabled=True):
        if not enabled:
            return x
        key = hk.next_rng_key()
        p = jax.random.bernoulli(key, 1.0 - self.rate, shape=x.shape[:2])[..., None, None]
        return x * p / (1.0 - self.rate)
```
elderfalcon#4450: Right. I thiiiiiiink you can use conditionals during init type operations (whether __init__ or outside) since you never really want dropout on during inference, generally speaking.
JIT should only trace the forward/backward-type calls IIRC, for convention's sake.
alstroemeria313#1694: I also don't know how to instantiate a different Haiku model and have it not make a second set of params that is going to use up my TPU memory.
elderfalcon#4450: Woah woah woah one thing at a time
alstroemeria313#1694: So I don't know how to make two models, one with dropout and one without.
alstroemeria313#1694: ...Can I just make that bool a number that I multiply the rate by unconditionally and then pass in either 0 or 1.
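That multiplier trick avoids Python control flow entirely, so the same traced function works for training and sampling. A sketch of the change (only the float `enabled` argument differs from the code above):
```python
class Dropout2d(hk.Module):
    def __init__(self, rate=0.5, name=None):
        super().__init__(name=name)
        self.rate = rate

    def __call__(self, x, enabled=1.0):
        # `enabled` is a traced float (1.0 = train, 0.0 = sample) rather
        # than a Python bool, so there is no branch for JIT to choke on.
        rate = self.rate * enabled
        key = hk.next_rng_key()
        p = jax.random.bernoulli(key, 1.0 - rate, shape=x.shape[:2])[..., None, None]
        # When enabled == 0.0, rate == 0, p is all ones: the identity.
        return x * p / (1.0 - rate)
```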
elderfalcon#4450: Hang on a minute fair being
elderfalcon#4450: there's maybe a less nasty way to do this but in init you can do something like
__init__:
    if inference:
        self.fwd_fn = lambda x: x
    else:
        self.fwd_fn = ...  (original function)
then in __call__(x):
    return self.fwd_fn(x)
This is an awful way to do it but it just gets the idea across. There's better ways I think, but this is one way.
alstroemeria313#1694: I can't enable/disable it in the init though? Without making a second copy of the params somehow?
elderfalcon#4450: Why's that?
elderfalcon#4450: Oh, gotcha, this is not standalone inference
alstroemeria313#1694: ```python
model = hk.transform(diffusion_model)