StellaAthena#3530: I should set that up to auto-update
EricHallahan#1051: That needs to be updated lol
bmk#1476: we have twice as many papers now
bmk#1476: I counted we have 8
StellaAthena#3530: There's a character limit that prevents it from ever being a complete list
EricHallahan#1051: What happened to Isaac McHorse?
StellaAthena#3530: I don't know, I've only been responsible for @Carl-bot
beepydatacenter#8080: Ok so I take a shower and come back to 50 messages, ok it is active lol
EricHallahan#1051: #off-topic is active. :berk:
beepydatacenter#8080: But I waaaant technical talk I'm a ml guy :cheems:
StellaAthena#3530: Some interesting stats https://cdn.discordapp.com/attachments/729741769738158194/905657739097804800/Screen_Shot_2021-11-03_at_11.20.23_PM.png
EricHallahan#1051: Scroll up in #research, that should scratch the itch.
beepydatacenter#8080: Right now I'm trying to find what's the best course of action to training my own lightweight NLP model on tokenized and parsed MIDI data
StellaAthena#3530: For what purpose
beepydatacenter#8080: My idea is that MIDI files are incredibly regular and extremely well formatted which means that it should be extremely easy to tokenize it and turn it into a human readable text file that is tokenized by notes after a metadata section. Which means it should theoretically be possible to train an NLP model on these text files and generate a unique one based on an input midi file. i.e. I want to use NLP to make music to see what happens.
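Something like this is roughly what I mean by a human-readable text file (a toy sketch using the mido library; the token format itself is just something I made up for illustration):
```python
# rough sketch: MIDI -> plain text, one note event per line (format is made up)
import mido

def midi_to_text(path):
    mid = mido.MidiFile(path)
    lines = [f"META ticks_per_beat={mid.ticks_per_beat}"]
    for i, track in enumerate(mid.tracks):
        lines.append(f"TRACK {i}")
        for msg in track:
            if msg.type in ("note_on", "note_off"):
                # delta time in ticks, pitch, velocity
                lines.append(f"{msg.type} t={msg.time} n={msg.note} v={msg.velocity}")
    return "\n".join(lines)

print(midi_to_text("song.mid")[:500])
```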
EricHallahan#1051: This idea would be right up your alley.
https://discord.com/channels/729741769192767510/730095596861521970/902661513091907584
beepydatacenter#8080: Ha! But I mean theoretically this should match the style if you specify a genre and genre is part of the metadata. Metadata will be used as part of the "seed" used to generate the midi files.
EricHallahan#1051: I also have this code laying around that should theoretically be able to tokenize speech.
beepydatacenter#8080: My idea is to train it on my *own* songs. I just write 50 very similar trance songs. They don't have to be good and they each would take 10-20 minutes to make.
The advantage of this is that it will very, very heavily bias towards the style and be far more likely to replicate it.
EricHallahan#1051: You'll want a larger training set than that.
beepydatacenter#8080: I would generate it line by line (i.e. use a newline as a stop token) and validate each line to check if it is valid midi in the translated format.
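i.e. something like this loop (a sketch with HF transformers; the model name and the line format are placeholders, not what I'd actually train):
```python
import re
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")
newline_id = tok("\n")["input_ids"][0]

# made-up line format from the MIDI-to-text converter
VALID_LINE = re.compile(r"^note_(on|off) t=\d+ n=\d+ v=\d+$")

def generate_valid_line(prompt, retries=5):
    for _ in range(retries):
        ids = tok(prompt, return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=32, do_sample=True,
                             eos_token_id=newline_id, pad_token_id=newline_id)
        line = tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True).strip()
        if VALID_LINE.match(line):
            return line
    return None  # couldn't get a valid line, skip or retry with a new prompt
```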
beepydatacenter#8080: Well yeah, but it doesn't have to work good the first try right?
beepydatacenter#8080: If it works in *theory* even badly, then I can experiment with a MUCH larger set of data and obtain thousands of MIDI files and process them in batch, and train the model on those.
inox#5400: there's midi datasets
beepydatacenter#8080: If there's midi datasets that makes things even easier. All I need to do is configure my converter to parse the data as much as possible, and feed that into the algorithm
beepydatacenter#8080: I assume the midi dataset is normalized and is made to be as regular as possible, which greatly increases the chance of this succeeding. My biggest concern is that the midi data *needs* to have the key specified, so that the prompt can specify a key and it would stick to the key.
beepydatacenter#8080: I don't think you can teach AIs music theory yet sadly ๐
bmk#1476: did you see gwerns ABC music generation project
beepydatacenter#8080: No, do show me
beepydatacenter#8080: I know music generation AIs already exist. But what I want to explore is whether it's possible to *specifically use NLP* to generate music, to test the limits of NLP algorithms
inox#5400: that's not gonna be approximate bayesian computation is it?
nshepperd#2316: 24 epochs https://cdn.discordapp.com/attachments/729741769738158194/905662328727552060/style24.png
EricHallahan#1051: That's literally the purpose of this code lol
inox#5400: top two rows reconstruction?
bmk#1476: https://en.m.wikipedia.org/wiki/ABC_notation
EricHallahan#1051: I said "I wonder if we can repurpose an LM for speech", I wrote it up, then never used it. :3berk:
StellaAthena#3530: @EricHallahan How much work would it be to get a demo running in NeoX?
beepydatacenter#8080: This is extremely helpful, thank you
bmk#1476: https://www.gwern.net/GPT-2-music
nshepperd#2316: bottom two rows are supposed to reconstruct the style of the top two rows
beepydatacenter#8080: I can't use this exact notation for perfect midi transcription since it's reductive in some parts and has some extra stuff I don't need, but the encoding itself should be useful for storing note data.
beepydatacenter#8080: I will read this tomorrow when I'm on desktop
EricHallahan#1051: The plan was actually to use that code with MTJ, because that infrastructure was mostly sitting idle. (This is why the embedding is sized as it is in the concept code.)
It wouldn't be hard to adapt to either codebase, I just would need to write up a data pipeline.
bmk#1476: someone should redo this but with 6B instead of 117M
bmk#1476: 117M is kinda bad
beepydatacenter#8080: I have submitted a request to Open-AI to let me use GPT-3 to generate music, so if all goes well, I may not even have to struggle with building my own model, just feeding training data.
StellaAthena#3530: @beepydatacenter You can't train a model from scratch like that tho
StellaAthena#3530: I fear that text understanding and music understanding are close to orthogonal
beepydatacenter#8080: Ah really? Then how come I've seen Janelle Shane talking about training her own models from scratch?
beepydatacenter#8080: Does she have special access to raw training?
StellaAthena#3530: I have no idea who that is or how likely she is to be telling the truth
beepydatacenter#8080: She's a very famous Twitter AI dev
beepydatacenter#8080: She's the one that made the AI generated sweethearts post which made its way to non AI internet
bmk#1476: this isn't the non AI internet
StellaAthena#3530: It didn't make its way to my corner of the internet lol
kurumuz#5695: you will not get that kind of access to GPT-3
cfoster0#4356: She might have finetuning access
inox#5400: she's the whale facts girl
beepydatacenter#8080: https://cdn.discordapp.com/attachments/729741769738158194/905664862749880350/Screenshot_20211103-234843_Twitter.jpg
kurumuz#5695: why do we assume they used gpt-3?
beepydatacenter#8080: Because she says she uses gpt-3
EricHallahan#1051: That's not "from scratch" though.
StellaAthena#3530: @beepydatacenter Where does she claim to have trained a model on OAI's hardware
kurumuz#5695: what do you mean, everyone has access to raw training
cfoster0#4356: Y'all chill lol
kurumuz#5695: Just asking questions
beepydatacenter#8080: I'm not sure, but she says she uses gpt-3 in some posts on her blog posts, and she talks about training an NLP algorithm to generate funny things
beepydatacenter#8080: I guess she may be using a mix of her own models and GPT-3
StellaAthena#3530: Those sentences are likely independently true, but that's very different from having trained a 175B parameter model from scratch
beepydatacenter#8080: Yeah no, I think she just might be using them in conjunction or something
beepydatacenter#8080: Her model and gpt-3
beepydatacenter#8080: So in that case what's my best course of action if I want to train my own model? Is there an existing API or do I just have to fuck with tensorflow until something works?
cfoster0#4356: For a music model you're probably gonna have to get your hands dirty
StellaAthena#3530: Use this codebase and sign up for the TensorFlow Reaearch Cloud
https://github.com/kingoflolz/mesh-transformer-jax
beepydatacenter#8080: Mm thank you, I'll look into this tomorrow.
cfoster0#4356: This might be the best piano MIDI dataset out there right now https://magenta.tensorflow.org/datasets/maestro
beepydatacenter#8080: Siiiiiick that's amazing
StellaAthena#3530: @cfoster0 there's lots of music out there. What's the limitation on bigger datasets
beepydatacenter#8080: This is just what I need
StellaAthena#3530: Copyright?
cfoster0#4356: Partially
beepydatacenter#8080: MIDI files are exceptionally tiny too so they're perfect to work with on my own machine.
StellaAthena#3530: What else?
cfoster0#4356: There's a huge difference between MIDI *performances*/transcriptions and the kind of quantized MIDI files you'll see mostly on the web, or that you could automatically generate from sheet music
cfoster0#4356: The first is expressive and the second is not
cfoster0#4356: And then yes copyright is a huge blocker
bmk#1476: did you read the gwernpost yet
StellaAthena#3530: I assume that transcriptions is more what you want than the quantized stuff?
bmk#1476: its really good
bw#3136: Janelle Shane's prompting GPT3 to generate different results, like style transfer for text, etc. Not training GPT3. For example, she explicitly states that here:
> The new OpenAI API is REALLY good at following all sorts of prompts. Set up the first two lines of a chat and it will stay in character.
<https://www.aiweirdness.com/this-is-the-openai-api-it-makes-spookily-20-06-11/>
cfoster0#4356: Yes
cfoster0#4356: The best you could get would be direct recordings from a MIDI device (like a keyboard or something)
StellaAthena#3530: Does the raw audio exist? Like, do we need to collect the sound or do we "just" need to process it
cfoster0#4356: Mm there is probably enough music "out there" in the world, yes. Idk about publically available performances
beepydatacenter#8080: There she's prompting it but I could've sworn i explicitly saw her training a model once
bw#3136: Yes. But that's most likely not GPT3.
StellaAthena#3530: @cfoster0 Any idea how much data might be needed for a 1.3B model
beepydatacenter#8080: Yeah I guess I saw GPT-3 in some posts and assumed she was using it for all. The posts in question were from like 3 years ago.
beepydatacenter#8080: Maybe even more. Idek how I would go about finding them .
StellaAthena#3530: GPT-3 didn't exist 3 years ago
beepydatacenter#8080: Might have been GPT-2 then
StellaAthena#3530: That also didn't exist three years ago
cfoster0#4356: Idk. The dataset I posted was used for this, which was a pretty early transformer paper. Think it was only like 6 layers https://arxiv.org/abs/1809.04281
beepydatacenter#8080: Idk time has been weird so what felt like 3 years ago may have only been a year ago
cfoster0#4356: Kharr might know, since he's worked with MIDI transformers a bit
beepydatacenter#8080: I don't remember exactly when I first found her twitter but it was then. It was pre covid, that's all I can say
StellaAthena#3530: Might be worth looking at that textless NLP paper too
EricHallahan#1051: Man that paper was two months ago now.
bmk#1476: i would recommend looking at some other literature rather than clinging onto posts by this one person
beepydatacenter#8080: I am adding everything everyone is linking to in a document for me to look at when it isn't midnight before me having to wake up at 7am lol
cfoster0#4356: I think jukebox is kinda like textless NLP but for music ๐ค
beepydatacenter#8080: Yeah she was just my first introduction to making AI say funny shit
StellaAthena#3530: That wasn't an attack, obviously you can't read a hundred pages of research in the past hour
beepydatacenter#8080: Yeah right now I am just collecting information for me to read later. It's more efficient to collect a bunch of resources and then read them than to do it one by one
EricHallahan#1051: I think that is a valid statement.
beepydatacenter#8080: I really want to build my resume
StellaAthena#3530: It's literally not "NL"
beepydatacenter#8080: https://youtu.be/HyopNu1iZPc
I'll be honest this was my first ever experiment with AI that wasn't me writing 5 lines of python to do linear regression
StellaAthena#3530: Well hanging out here is a phenomenal way to get gud
beepydatacenter#8080: ~~and I lost to extremely gimmicky things because my presentation was too academic and not businessy~~
beepydatacenter#8080: Yeah I like this place, y'all seem very very helpful
beepydatacenter#8080: And despite the size of the server, it moves at a digestible pace so I think I actually can hang out here.
beepydatacenter#8080: Prob will be quiet if it gets a little too active though.
beepydatacenter#8080: Hope I'm not *too* much of a novice ๐
EricHallahan#1051: But that isn't the point of the statement. The point is that it learns to do music without explicit biases towards music, much like Textless NLP does it without the explicit biases towards language.
StellaAthena#3530: Oh
cfoster0#4356: Automatically discovering discrete acoustic/musical units without training on symbolic data in that form, ya
beepydatacenter#8080: I think my project would still be considered NLP since the training data is text, the input data is text, the output data is text, and the midi part is just translated by non ML means
EricHallahan#1051: That's a better way of saying it lol
beepydatacenter#8080: Oh btw if you have any good papers for me to read in general please give me their DOIs so I can find if my college has access to them.
EricHallahan#1051: Luckily that isn't too common in this domain, almost everything is open access.
inox#5400: in ML it's rare for anything worthwhile to be paywalled
bmk#1476: arxiv is your friend
bmk#1476: all hail the mighty arxiv
beepydatacenter#8080: Oh that's great :D
EricHallahan#1051: If the content is paywalled you're probably doing something wrong, or it is IEEE being annoying.
beepydatacenter#8080: Ok but I don't know what I don't know lol.
idek what papers I should be reading to further my ML knowledge
EricHallahan#1051: ^
beepydatacenter#8080: That's exactly what I was gonna ask for thanks
EricHallahan#1051: It's also pinned.
beepydatacenter#8080: Mm im putting everything into a onenote so I can easily access it.
pragmaticml#1730: If you're interested in a particular field / topic, find 3 papers about that topic to skim but then fully read the paper(s) that they all treat as baselines.
beepydatacenter#8080: That's a really good idea, thanks.
EricHallahan#1051: We also like to recommend reading these papers, but we are kind of biased towards them. :berk:
https://www.eleuther.ai/publications
beepydatacenter#8080: Mm this is a good solid start. At least a week's worth of reading for me.
As far as the coding goes, I can at least start with the midi parser since that just requires C and knowledge of the midi file format and I could use some practice reading binaries.
bmk#1476: we have more publications than any other discord server
beepydatacenter#8080: WAUtheThird recommended this server for me to join, and he works for Latitude
beepydatacenter#8080: So I suppose he knew what he was talking about
EricHallahan#1051: Ah WAU, a face I haven't seen around here in a while.
beepydatacenter#8080: I just feel like I'm too much of a novice for this server :(
beepydatacenter#8080: I mean that will change as I go further in my college track since I am going to be focusing on ML
beepydatacenter#8080: But like the server says in like a million places that this isn't really a place for n00bs
guac#4716: every one is a noob at some point
beepydatacenter#8080: ~~I swear my brain runs gpt-3 because it autoinserts nonsensical words in my sentences sometimes and I don't even notice~~
kurumuz#5695: unimportant details
kurumuz#5695: literally stop caring and start lurking
beepydatacenter#8080: Yeah. I haven't put that much effort into learning ML because I am very busy with college
beepydatacenter#8080: But I did something *really* cool with sv2tts within like a day of me learning about it... so I think I'm capable?
EricHallahan#1051: Weirdly my ML/DL/AI work is not in any way related to my undergraduate study.
beepydatacenter#8080: People seem to say that as long as you can do high school math and basic linear algebra, you can do ML... I have taken up to diff eq, but not linalg yet... but I think my math foundation is decent enough for ML
beepydatacenter#8080: Not to mention I've uh
Got a thing for data.
I really like data.
I mean, *really* like it.
I love playing with it, analyzing it, crunching it, manipulating it, visualizing it... data is delicious ๐
beepydatacenter#8080: My goal and dream is to be at the forefront of ML engineering. I want to help push the bleeding edge of ML farther than ever before. I have to start somewhere ๐
EricHallahan#1051: If you want somewhere here with a lower barrier to entry to warm up with, definitely keep an eye on #art. There is a lot of fun day-to-day hacking and development that goes on down there.
EricHallahan#1051: That's where I got my start.
beepydatacenter#8080: Mmm yeah I was toying with neuralblender recently. I've been thinking about toying with gans for a while but never got around to it
beepydatacenter#8080: ~~two years to be exact~~
pragmaticml#1730: Linalg basics will definitely help a lot when getting into ML. You probably don't need a whole course, but the first couple chapters of any linalg book are worth your time.
beepydatacenter#8080: Oh I was gonna take linalg in the summer anyway. I'm two classes from the math minor and linalg is one of em
beepydatacenter#8080: I feel like it will *really* help me with ML
EricHallahan#1051: I honestly haven't taken a full course on linear algebra yet lol
EricHallahan#1051: Somehow in my third year I haven't needed to take it.
beepydatacenter#8080: I'm thinking of watching an OCW or similar for linalg over december break
bmk#1476: linalg is without a doubt the most important math for ML
EricHallahan#1051: Add calculus to that and you have DL.
beepydatacenter#8080: Damn, if only it were discrete ๐คฃ https://cdn.discordapp.com/attachments/729741769738158194/905675286773907488/Screenshot_20211104-002959_Canvas_Student.jpg
beepydatacenter#8080: I finished all three calculus. Calculuses? Calculi?
beepydatacenter#8080: UCF sadly doesn't do much NLP. But if I ever wanted to do CV they have a whole ass department for that
beepydatacenter#8080: CV is my second interest after NLP
beepydatacenter#8080: I feel like NLP is just a lot easier to work with making things on your own than CV
beepydatacenter#8080: There isn't much I as someone learning CV can do than have tensorflow draw a box around my cat labeled "dog"
beepydatacenter#8080: I simply don't have the GPU power to fuck with more advanced CV
pragmaticml#1730: I don't think this is necessarily true. Generative art makes for very fun side projects and you can do a fair amount with colab notebooks.
EricHallahan#1051: Colab and TRC are your friend.
bmk#1476: calculus isn't nearly as important
beepydatacenter#8080: Well I mean yeah, gans are one thing but I think UCF does more with like... image recognition in live streaming video
bmk#1476: like, you don't really need to fully understand calculus to understand SGD for example
EricHallahan#1051: I agree, but you wouldn't have backprop without calculus.
bmk#1476: "the gradient is a magical function that always points uphill" is like good enough
EricHallahan#1051: Yep, absolutely.
beepydatacenter#8080: If an undergrad level of calc and linear algebra makes me extremely solid in ML, then after I take linalg I'm gucci
bmk#1476: you don't really need to understand backprop tbh
pragmaticml#1730: Yeah, you're basically set.
beepydatacenter#8080: Gradient descent was actually one of the easiest topics in calc III for me lol
bmk#1476: SGD is more important
beepydatacenter#8080: It just made... sense ๐
EricHallahan#1051: My point is that backprop and SGD effectively underlie all of modern DL
beepydatacenter#8080: I forgot it since then, but if I didn't struggle then. I def won't now.
beepydatacenter#8080: What is sgd and dl?
EricHallahan#1051: But it isn't too important to understand how they work internally, only how they work in practice.
bmk#1476: imo backprop is a lot less useful than SGD to fully understand
kurumuz#5695: sgd is so good
kurumuz#5695: the first time i learned about it
kurumuz#5695: felt really nice
EricHallahan#1051: I'm not arguing for that though.
bmk#1476: like backprop is useful for thinking about certain things
beepydatacenter#8080: My dad kept yelling at me about being "bad at ML" because I couldn't explain to him how backprop worked
bmk#1476: like my gradient hacking stuff uses that knowledge
kurumuz#5695: i started DL with writing a MLP from scratch with backprop and SGD
EricHallahan#1051: I'm arguing that calculus has importance.
kurumuz#5695: i think its a really fun way to start with it
chilli#5665: I think understanding how backprop works is important ๐ค
bmk#1476: but SGD is much more broadly useful
chilli#5665: Like, you need to understand the actual computational flow
kurumuz#5695: yea it is very important
chilli#5665: sure, you might not need to know the minutiae of how to implement it
bmk#1476: it's important but less important than SGD imo
beepydatacenter#8080: It is, but mind you this was weeks after I first remotely looked at anything related to ML
chilli#5665: but the implications of things running backwards in the backwards pass
chilli#5665: + what activations are
EricHallahan#1051: ~~understanding that zero is not your friend is all you needโข๏ธ~~
chilli#5665: are pretty important for reasoning about what's actually executing
beepydatacenter#8080: He expected me to understand backprop within 2 weeks of learning what a cost function was.
kurumuz#5695: that sounds fine, dunno
kurumuz#5695: 2 weeks is a lot of time
beepydatacenter#8080: IDK I don't know about you but I don't have that much time as a full time student
beepydatacenter#8080: This is like taking on an additional class
kurumuz#5695: ah, other than the exam weeks i had plenty time i think.
kurumuz#5695: you can sleep through all the other weeks anyway
beepydatacenter#8080: Hence why idek when I'll get through all these papers because I am taking 2 classes they specifically tell us not to take together because it would be too hard
beepydatacenter#8080: Ironically those are the classes I have an A in when I'm struggling in my other two
chilli#5665: who cares about class grades
chilli#5665: unless you're interested in grad school
beepydatacenter#8080: But CS1... professor likes to pretend his class is the only class you're taking. I'm magical at C and it comes *extremely* naturally to me and even I am spending more time on it than I would like... no wonder why people are dropping like flies
beepydatacenter#8080: I am. My grades from when I started college are abysmal due to depression.
Hence why I am taking the "look at my cool ass ML projects I learned to do on my own" approach
beepydatacenter#8080: Bc grad school cs iirc your projects matter almost as much if not more than your grades.
chilli#5665: mmm, your research projects matter
chilli#5665: although really, your rec letters matter even more than that
beepydatacenter#8080: I'm... somewhat of a teacher's pet lol
chilli#5665: mmm, that's not really sufficient
beepydatacenter#8080: My discrete professor wants me to ULA for the class next semester
chilli#5665: not sure what ULA means
chilli#5665: well, I guess it depends on how good of a grad school you want to go to
beepydatacenter#8080: It's like a TA but you're an undergrad. Does everything a TA does except grade.
EricHallahan#1051: ~~United Launch Alliance~~
chilli#5665: my expectations on what's needed are pretty much calibrated against the top schools
bmk#1476: I'm, uh, somewhat of the polar opposite of that, so my experience probably doesn't generalize to you lol
beepydatacenter#8080: I mean ideally I do something so fucking cool the dean of UCF CS program recommends me himself
bmk#1476: but my opinion is that uni is a scam actually
chilli#5665: ehhhhh
beepydatacenter#8080: Man i just want to do the beep boop and vibe
bmk#1476: (ok I don't mean that literally but I do have a bit of an aversion to formal schooling)
chilli#5665: but yeah, what kind of grad school are you looking to get into?
beepydatacenter#8080: I have an incredibly specific dream.
I want to invent AI sexbots so sentient and with enough independence to tell incels "no"
kurumuz#5695: just not for me, i understand others can benefit from it though.
kurumuz#5695: what
kurumuz#5695: :thonk:
beepydatacenter#8080: Incels don't deserve satisfaction
chilli#5665: :thonk:
beepydatacenter#8080: AI needs to be trained to deny incels too
kurumuz#5695: lmao
kurumuz#5695: well sorry
kurumuz#5695: but that is weird
chilli#5665: lmao
beepydatacenter#8080: Hey, I find consent incredibly important and I believe AI deserves the right to consent too
beepydatacenter#8080: At some point
beepydatacenter#8080: When they get advanced enough
beepydatacenter#8080: Consent not just sexually. General consent. A sufficiently advanced AI deserves the right to say no.
kurumuz#5695: but this makes no sense. looking at the definition incel already means involuntary celibate, so they wouldn't interact with an AI smart enough to require consent either.
bmk#1476: ahem. #off-topic
kurumuz#5695: and yeah
kurumuz#5695: lol
Ajay sahu#2540: Hi i have a question, how can we add entity tracking to question answering models?
Ajay sahu#2540: Which are used for long conversation as dialog agents
alstroemeria313#1694: cifar-10 clustering, 450 epochs https://cdn.discordapp.com/attachments/729741769738158194/905764073575161906/demo_00450-6.png
Vinoth#3981: @StellaAthena : could you provide some more details about your
knowledge distillation approach to GPT-J? Potentially interested in helping (my current research is on training sparse networks & lottery ticket). Thanks.
nshepperd#2316: @alstroemeria313 morning~ :)
alstroemeria313#1694: morning~ ๐ธ
Kia#2550: Morning @alstroemeria313!
nshepperd#2316: @alstroemeria313 24 epochs. added anime (about 10% of the dataset?) to make it harder and increased the network size https://cdn.discordapp.com/attachments/729741769738158194/905767080790474773/style24.png
alstroemeria313#1694: ooh
nshepperd#2316: *should* be able to handle a wide variety of styles when it's done training
nshepperd#2316: also trc were so impressed by my avocado chair they extended until the end of the year hehe
Kia#2550: Just send them your Diffusion notebooks and they would extend it more:berk:
Kia#2550: But Really excited for your Style transfer diffusion,So goodluck!
nshepperd#2316: ahah
alstroemeria313#1694: so the way you are getting a style objective. is by making it learn random crops of the style but restricting the receptive field so it can't actually memorize the whole image and output a perfect random crop?
nshepperd#2316: yep, exactly
nshepperd#2316: the encoder has its receptive field restricted so it can't encode large scale structures, just small patches. and the diffusion does too, so it doesn't learn a distribution over large scale structures either
alstroemeria313#1694: :)
nshepperd#2316: but it learns to make things connect up locally
nshepperd#2316: err demo grid got caught by discord bot, lol
nshepperd#2316: i'll post it in art i guess ^_^
nshepperd#2316: 34 epochs https://cdn.discordapp.com/attachments/730484623028519072/905807350374481980/style34.png
gabriel_syme#3220: Until stella is back you can check the branch here: https://github.com/EleutherAI/gpt-neox/tree/distill-gpt-neox Relevant discussions you can find if you search for distillation in the #gpt-neox-devs channel
Kia#2550: Do we have people in SEA,to moderate the new tag nsfw #art channel?
Kia#2550: Also god im tired
EricHallahan#1051: Sleep
gabriel_syme#3220: I am but I don't think I can moderate much right now
Kia#2550: I want to volunteer because Im usually free
tpapp157#3643: @alstroemeria313 @nshepperd What I've been playing with recently, using diffusion to generate realistic terrain maps (relief + height) from a user provided segmentation mask of terrain types. https://cdn.discordapp.com/attachments/729741769738158194/905874209425743882/unknown.png
alstroemeria313#1694: ooh!
tpapp157#3643: Also diffusion generation video because why not. https://cdn.discordapp.com/attachments/729741769738158194/905882162790805504/test0.mpg
alstroemeria313#1694: ooh
BoneAmputee#8363: `.mpg`
nostalgia bomb :berk:
BoneAmputee#8363: ooo it's actually mpeg2 inside
tpapp157#3643: I kept getting codec errors when trying to use more modern codecs. Something else to debug one day.
ewald#7730: i think most people here know more about machine learning than i do. so i read and learn, and only very occasionally ask a stupid question ๐
Kharr#7888: Part of it is definitely that much of the conversation is pretty technical and anyone who is not very immersed in DL would be trying to catch up before any of it made sense. The things people are posting are very much cutting edge and often build on research released a week or two ago :berk:
alstroemeria313#1694: eheh, the number of times someone in here has been reading a paper and has been like "hold on, i'm going to try this"
bmk#1476: also 50% of all of the public discussion on that research is in #research
StellaAthena#3530: I'm never going to stop laughing at the fact that I posted an entire twitter thread about NVIDIA+MSFT's new model before anyone at the company had tweeted about the blog post
bmk#1476: there are probably some really important insights locked up in #off-topic between goose images
tpapp157#3643: Their tweet probably had to go through a ten step approval process before being posted.
bmk#1476: I mean, so did the blog post
ewald#7730: maybe GPT-J-8 will make sense of it in 2028
onealeph0#1502: do you really see this idea in the fact that trains haven't changed much with technology advances?
ewald#7730: *looks at the transrapid maglev train*
bmk#1476: maglev trains? not in north america :harold:
ewald#7730: but in germany! until they cancelled it and sold the tech to the chinese.
EricHallahan#1051: *looks at* #off-topic
Louis#0144: its geese
Louis#0144: geese all the way down
tpapp157#3643: :goose::goose2::goose3::goose4::goose5::goose6::goose7::goose8::goose9::goose10::goose11::goose12::goose13::goose14::goose15::goose16::goose17:
EricHallahan#1051: #off-topic
Louis#0144: He missed a few anyway
gabriel_syme#3220: ok that turing model is pretty wild
StellaAthena#3530: Which one? MSFT has three
gabriel_syme#3220: oh I'm sorry, the multimodal one you posted earlier
gabriel_syme#3220: bletchley. I don't imagine it will be open sourced right?
StellaAthena#3530: Of course not, why would they possibly do that ๐
StellaAthena#3530: Itโs pretty small, I bet we could replicate it
EricHallahan#1051: I don't know why this conversation isn't in #multimodal though lol
gabriel_syme#3220: yeah true, tbh the dataset is more important
gabriel_syme#3220: my bad, I thought I read it in here
gabriel_syme#3220: nope it was in there
Ajay sahu#2540: https://github.com/allenai/macaw/blob/main/examples.md
gabriel_syme#3220: how hard would it be to finetune macaw? I guess I could try the 3B model right
Vinoth#3981: thanks @gabriel_syme! Will have a look at the github, https://github.com/EleutherAI/gpt-neox/tree/distill-gpt-neox#distilling.
Ajay sahu#2540: Ya, i was amazed that even with 11B parameters, its results are pretty decent compared with the 175B Jurassic model
Ajay sahu#2540: https://lnkd.in/gWR3uvin
Ajay sahu#2540: Colab for the same
gabriel_syme#3220: it works on colab?
gabriel_syme#3220: oh ok gpt3 prompting, cool thx
Ajay sahu#2540: I was also looking for someone to shed some light on entity tracking in QA models.. Eg time, date, venue etc..
Ajay sahu#2540: While having a conversation, just like a bot
Ajay sahu#2540: Then possibly fine tuning Macaw will make more sense
Louis#0144: Jurassic is not a good baseline
Louis#0144: lol
Louis#0144: Something is super weird with that model
Louis#0144: I have no idea how the perplexity is so low but it feels so awful
Ajay sahu#2540: Are you talking about jurassic?
StellaAthena#3530: Yes, he is
Louis#0144: Sorry I was walking to work
Louis#0144: Yes I am
Louis#0144: I think figuring out what Jurassic is doing wrong is probably worth a paper
Sphinx#2092: https://cdn.discordapp.com/attachments/729741769738158194/906170487128879104/5t2tl3.png
Louis#0144: LMAO
Louis#0144: fuck that is so funny
Ajay sahu#2540: Okay, i see...of course there's scope for improvement over it..
elderfalcon#4450: > Perplexity ends at 1
> Decreases in perplexity inversely correlates with the exponential difficulty in modeling, creating a stack of two nonlinear measures on top of each other, really boggling readability
kurumuz#5695: I have no idea why jurrassic is doing so bad
kurumuz#5695: i will blame the huge BPEs :berk:
timudk#8246: Is anybody aware of an image dataset of a single cat/dog/...?
Louis#0144: instagram
timudk#8246: I was hoping that there already exists a cleaned dataset and I don't have to scrape myself
kurumuz#5695: definitely
kurumuz#5695: but we cant really do that right, model weights will not be released
kurumuz#5695: maybe a surface level review
CRG#8707: Something something: https://arxiv.org/abs/2110.02782
StellaAthena#3530: Yeah their discussion of their vocab reads super sketch to me.
kurumuz#5695: No ablations on why they would be better at all. I find it very weird too
alstroemeria313#1694: you can compute a reweighted version of perplexity, right? that is the next character perplexity rather than the next token perplexity?
EricHallahan#1051: Yes, you can reweight.
StellaAthena#3530: Errr no? How would that work?
EricHallahan#1051: I thought that is what LM-eval does?
CRG#8707: You'd scale the loss of a token by its length.
nshepperd#2316: divide the total log prob of the sequence by the number of characters in it
nshepperd#2316: instead of by the number of tokens
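e.g. (toy sketch, assuming you already have the per-token log probs from the model):
```python
import numpy as np

def bits_per_char(token_logprobs, text):
    # token_logprobs: natural-log probability of each token in the sequence
    total_bits = -np.sum(token_logprobs) / np.log(2)
    return total_bits / len(text)      # normalize by characters, not tokens

# char-level "perplexity", comparable across tokenizers:
#   2 ** bits_per_char(logprobs, text)
# swap len(text) for len(text.encode("utf-8")) to get bits per byte instead
```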
StellaAthena#3530: I'm not convinced that's a very meaningful metric tbh, since the model doesn't see characters
alstroemeria313#1694: it's for comparing across models with different tokenizers
alstroemeria313#1694: it is 2^(expected bits per character) to compress the sequence with an optimal code, using the frequency tables output by the model
alstroemeria313#1694: (tbh why not use a loss weighted this way to train the models in the first place)
kurumuz#5695: I was thinking to scale by running the tokenizers only through the eval set
kurumuz#5695: idk if that makes sense at all :berk:
alstroemeria313#1694: yeah you would use the actual number of tokens in the reweighted perplexity calculation
kurumuz#5695: yea
kurumuz#5695: we will need that for our work too
Awesome_Ruler_007#7922: I thought you guys were talking about jurassic park and I was so confused for a moment
Awesome_Ruler_007#7922: > Something is super weird with that model
but it has similar performance to GPT3 ๐ค
finetune#0907: feels weird when sampling at least
beepydatacenter#8080: I'm kinda curious. Anyone ever mess with using on-device CV as a data saver for streaming videos?
beepydatacenter#8080: I'd imagine for certain applications, using CV algorithms to upscale and denoise streamed video could be a way to get 1080p video out of a 360p stream, which could be extremely useful if you select algorithms optimized for certain types of video streams, for example, static-image or text-heavy video streams (like one I'm watching right now)
Awesome_Ruler_007#7922: its an old idea. the throughput can't really match the amount needed for a stable FPS in real-time, nor is it worth the computational and memory load to have a full model running in the background
Awesome_Ruler_007#7922: but sure, perhaps you can try and find the difficulties; fix them with modern methods like distilling, pruning etc.
inox#5400: this was magic pony's thing and they got bought by twitter
beepydatacenter#8080: Ahh ok that's cool. I mean I figured it would have issues, because if it were practical it would already exist and be something youtube already implements.
beepydatacenter#8080: I suppose there will come a point in time where the computational power of devices is advanced enough that such algorithms could run in real time, and it wouldn't be that relatively expensive to do. Maybe 10 years from now, such a thing will be standard.
beepydatacenter#8080: I do wonder if I can play with such a thing on my S21 Ultra as a concept though. It would be a tremendous battery cost, but it would be interesting to see if it's a possibility.
beepydatacenter#8080: I suppose it would make more sense to do something similar for audio, by analyzing garbled input speech and using some NLP and algorithms to clarify it
beepydatacenter#8080: I know that Nvidia broadcast uses some sort of ML to denoise audio for recording and for speaker playback
inox#5400: lossy compression algorithms seem to be pretty good unfortunately, like this is recent lossy image compression versus JPEG https://arxiv.org/abs/2010.01185 https://cdn.discordapp.com/attachments/729741769738158194/906321025707425842/Screenshot_2021-11-05_at_19-15-03_2010_01185_pdf.png
beepydatacenter#8080: I'll take a look at that in a bit, but the material at a glance seems to be beyond my understanding of ML
beepydatacenter#8080: I'm going to have to take all the papers that y'all linked me in the past few days. Since it's arxiv I should be able to take all the PDFs and put them all in a folder.
But I'm the kind of person that prefers reading things on physical paper... I'm weird and really can't learn properly unless I'm holding a physical book if I'm to read text, else I can only learn from video.
beepydatacenter#8080: And kindles don't work because something about the weight of a book helps me learn
beepydatacenter#8080: I'm trying to keep things organized. Is filing this under "algorithms" a good folder to keep it in? https://cdn.discordapp.com/attachments/729741769738158194/906322501406179398/unknown.png
bmk#1476: bits per byte is the superior metric clearly
bmk#1476: and yes eval harness uses BPB wherever possible
alstroemeria313#1694: ahh
bmk#1476: because it is the better metric
bmk#1476: (this isn't response to your comment in particular, I just picked one in that convo at random)
EricHallahan#1051: **__Public Service Announcement__**
North America falls back one hour this Sunday to standard time.
bmk#1476: daylight savings is bad and evil
bmk#1476: I propose instead daylight wastings
bmk#1476: daylight savings but backwards, so you get the worst of both worlds
bmk#1476: less daylight hours and also confusion
beepydatacenter#8080: I'm gonna install MATLAB on my computer and redo the Andrew Ng course. When I last did it I had no diffeq background and didn't know anything about linear algebra so it was still a bit magic to me
beepydatacenter#8080: But now I should be able to do it much better
beepydatacenter#8080: Not to mention last time I did it I knew at most Java and was really new to programming... now I am pretty damn solid in C which I know MATLAB is somewhat similar to so it should be a lot easier for me to do things.
EricHallahan#1051: MATLAB indexes from 1, and is therefore cursed.
beepydatacenter#8080: I have a drawing regarding that that I never finished
bmk#1476: Matlab is a pyramid scheme designed to sell more university courses that use matlab
beepydatacenter#8080: The robot personification of C is arguing with the robot personification of MATLAB.
C: "Arrays start at 0!"
MATLAB: "Arrays start at 1!"
Then in the next panel, the personification of Discrete Structures pops up in between them.
Discrete: "Arrays start at the first natural number!"
Then in the next panel
C: "Natural numbers start at 0!"
MATLAB: "Natural numbers start at 1!"
beepydatacenter#8080: To be fair MATLAB does sort of make sense to teach people how to write ML algorithms from scratch
beepydatacenter#8080: I'm going to be honest, I don't even touch Python unless I absolutely can't avoid it
beepydatacenter#8080: The only reason I haven't switched to Scala is because almost all ML tutorials are for Python
beepydatacenter#8080: But you can never and will never get me to use that horrid thing called Jupyter
beepydatacenter#8080: When I was in high school I googled "easiest programming language to learn first"
And gave up because I *hated* the way they were teaching it using Jupyter and shit.
beepydatacenter#8080: Wish I could go back in time and tell myself to learn Processing instead. Processing is the *far* superior First Language.
Kia#2550: What?
beepydatacenter#8080: btw which one of the learning community servers is most conducive for me learning the more *sciencey* side of ML instead of python's machineLearning.linearRegression.do(myData)
beepydatacenter#8080: I have 0 interest in the latter at the moment
EricHallahan#1051: **__This has been a Public Service Announcement__**
bmk#1476: *What?*
mrShiba#4412: let's say I want to do a cat and dog classification (or similar binary classification), what is the FASTEST model that can do that. Preferably working on Window too
mrShiba#4412: my problem is unique, not cat and dog, so I will need to train it from scratch
mrShiba#4412: so no need for pretrained weight (for cat and dog)
Ajay sahu#2540: https://github.com/pytorch/fairseq/blob/main/examples/MMPT/README.md
Ajay sahu#2540: https://arxiv.org/pdf/2109.14084.pdf
Daj#7482: > A three-week, all-expenses-paid bootcamp to learn the ML skills needed for applied AI alignment work, with an ambitious cohort of ~20 technical effective altruists, organized by Redwood Research and Lightcone Infrastructure
Apply now!
https://docs.google.com/document/d/1DTSM8pS_VKz0GmYl9JDfcX1x4gBvKhwFluPrzKIjCZ4/edit#
gabriel_syme#3220: woah, all-expenses-paid wild
louis030195#2462: Hi guys, anyone ever heard of dataset or models to parse HTML? i.e. generalist models that can collect some specific information from HTML? Or using vision (taking page srceenshot, segmentation, OCR) ?
CRG#8707: https://arxiv.org/abs/2107.06955 ?
EricHallahan#1051: CRG, on point as always.
louis030195#2462: @CRG cool ๐
Deleted User#0000: Has anyone gotten access to Wu Dao 2.0 and compared it to GPT-3
EricHallahan#1051: Welcome! To answer your question, I don't think anyone here has done that.
Deleted User#0000: I might be able to ask some people I know who work in DL in China about if they have used it and if they have used GPT-3 to compare qualitatively in their opinions
ARRiel#1221: Re: announcement - what time zone?
quinn#9100: @StellaAthena timezone? which noon?
nmkd#1425: Discord displays your local time.
StellaAthena#3530: @ARRiel @quinn using discord wizardry, the time shows up in your local time zone
tomasff#8787: oh
nmkd#1425: https://cdn.discordapp.com/attachments/729741769738158194/906550488722464788/unknown.png
quinn#9100: outstanding
quinn#9100: that's awesome
gabriel_syme#3220: yeah it's pretty cool
ARRiel#1221: Damn. Did technology go too far?
gabriel_syme#3220: AI yall
remi#7254: The timestamp is already localized?
remi#7254: wow
StellaAthena#3530: But in case you set your time zone incorrectly it's 12 pm US Eastern Time, 1600 hours UTC
quinn#9100: @StellaAthena does anyone care about proof engineering here? would a proof engineering lightning talk be fun for anyone but me?
EricHallahan#1051: I don't know why this feature is so poorly documented.
EricHallahan#1051: It's in the developer docs, just really hidden.
remi#7254: Yeah I'm trying to figure out how to do it. Never seen it before.
remi#7254: Can it only be done from API calls?
EricHallahan#1051: https://discord.com/developers/docs/reference#message-formatting
Daj#7482: I'd be interested, why not
StellaAthena#3530: #carp is looking at stuff related to proof engineering FYI.
But yeah, if you're doing cool work with proof engineering you're welcome to present it. The one thing is that the goal is to present *work people have done or are doing*; a talk on recent work by other people or an introduction to the topic is not particularly in-scope.
StellaAthena#3530: https://discord.com/developers/docs/reference#message-formatting
11/06/2021 12:00 PM `11/06/2021 12:00 PM`
12:00 PM `12:00 PM`
12:00:00 PM `12:00:00 PM`
11/06/2021 `11/06/2021`
November 06, 2021 `November 06, 2021`
November 06, 2021 12:00 PM `November 06, 2021 12:00 PM`
Saturday, November 06, 2021 12:00 PM `Saturday, November 06, 2021 12:00 PM`
11/06/2021 12:00 PM `11/06/2021 12:00 PM`
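(For reference, the markup behind those is just `<t:UNIX_SECONDS:STYLE>`; a quick way to build one, assuming I have the epoch seconds right:)
```python
import datetime as dt

# 2021-11-06 12:00 US Eastern (EDT, UTC-4) expressed as epoch seconds
ts = int(dt.datetime(2021, 11, 6, 16, 0, tzinfo=dt.timezone.utc).timestamp())
print(f"<t:{ts}:F>")  # paste into Discord; renders in each reader's local time zone
```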
quinn#9100: roger that. i'll probably skip because it would be more "here's a proof of concept i'm drawing up with this library that in a few months should have gotten various gains for my team", not to mention i could be in spitting distance of an NDA violation. Maybe in the future i'll do a S&T then, but doesn't sound right for today.
quinn#9100: i'm interested in #carp now but don't have visibility into that channel
StellaAthena#3530: Heh, definitely don't risk an NDA violation to give an off the cuff talk
gabriel_syme#3220: it's still there, it's #contrastive now right
StellaAthena#3530: Oh oops. I meant #contrastive
StellaAthena#3530: After the first paper, the channel started branching out into multiple simultaneous projects and was recreated with a broader scope. Carp was its old name
StellaAthena#3530: Specifically, they're working with Talia Ringer on using ML to do proof repair
https://homes.cs.washington.edu/~djg/theses/ringer_dissertation.pdf
quinn#9100: oh cool
quinn#9100: oh that's fun!
quinn#9100: I know one or two of her papers
quinn#9100: i'm in #contrastive now
Joey#4305: What are the requirements to get an ML related server posted in #communities?
StellaAthena#3530: There aren't formal ones, what do you have in mind
Joey#4305: The Game Upscale server
StellaAthena#3530: That's not very helpful to me because I have never heard of them before
Joey#4305: Oops lol. It's a server that primarily focuses on super resolution networks and tools to use them
Joey#4305: The initial goal was for upscaling game textures but has expanded to using super resolution for other practical applications
Orz#3023: something like DLSS?
Joey#4305: Well, DLSS is a bit different as it takes in more than just raw pixel data and upscales the game's frames
Joey#4305: The game related stuff in GU is more like HD texture pack kind of stuff
alstroemeria313#1694: oh does it take the z buffer as input too or smth?
Joey#4305: It takes depth buffer information as well as motion vector stuff if I remember correctly
Joey#4305: But don't quote me on that lol
alstroemeria313#1694: ahh
gabriel_syme#3220: that's actually really cool and I'd imagine would superboost remakes and community projects. Have you or others applied supersampling to any game assets so far?
Joey#4305: Yes, I trained my own ESRGAN model specifically for upscaling the game textures of a gamecube game, and along with some manual remakes and some touchups on the upscales, was able to create a pretty decent HD texture pack
Joey#4305: Another thing people there like to do is making fan remasters of old content like cel-drawn anime that didn't get official remasters
gabriel_syme#3220: that's cool ye
gabriel_syme#3220: I remember a while back I wanted to do some style transfer between games, or well copy some repos doing it
Joey#4305: style transfer for games is an interesting concept
gabriel_syme#3220: yeah, I think it was done a few times for Apex but I'm not sure it's going to be soon when we can actually stream that over higher resolutions
gabriel_syme#3220: but not sure, I stopped looking at those examples maybe that's closer than I think
ZodiacFenrir 2k18#6228: re: the convo about platforms going on in #show-and-tell now
ZodiacFenrir 2k18#6228: I think keeping voice on Discord is good
ZodiacFenrir 2k18#6228: and if people have slides/streams whatever leave it up to them if they want to use YouTube or Twitch or whatever
The 80's#5974: Hey everyone, I am a junior in college studying biological computing. I am currently taking diffQ and matrix methods and would love to apply what I know to this research
Sid#2121: should i be hearing someone speaking lol. Can't tell if my audio's borked or if everyone's just intensely quiet.
Daj#7482: Stella is talking atm
EricHallahan#1051: Lets keep #meeting [voice] stuff to #voice.
Louis#0144: We should genuinely live stream these
Louis#0144: @EricHallahan I'm putting it here since it's not relevant to what the speaker is saying
ewald#7730: lol, interesting. it's exactly the other way round for me. i love python, and hate scala.
ewald#7730: if python wouldn't be used in any way for ML...
ewald#7730: then i'd still use it for non-ML tasks ^^
bmk#1476: coconut is clearly the best language
ewald#7730: something similar to that is used for some computer games, if i remember correctly?
ewald#7730: as long as it's not cobol...
ewald#7730: DATA DIVISION.
ewald#7730: WORKING STORAGE SECTION.
ewald#7730: still gives me flashbacks
ewald#7730: https://cdn.discordapp.com/attachments/729741769738158194/906587400095031306/erthrzjht.gif
ewald#7730: https://cdn.discordapp.com/attachments/729741769738158194/906587440393887764/ethjrzjw.gif
ewald#7730: sorry, i don't have a fitting goose gif
Louis#0144: We need a goose ptsd gif
Louis#0144: For when someone brings up mesh tensorflow
ewald#7730: :goose11:
ewald#7730: can i post a short whitepaper here, about our commercial hotword detection NN?
ewald#7730: or will this be seen as an ad?
StellaAthena#3530: Are you looking for feedback on the paper or are you looking to get people excited about your commercial product
ewald#7730: mostly for feedback about what needs improvement: fewer model parameters? more words/commands?
ewald#7730: and about interesting ideas about how to improve it. maybe someone has read something on arxiv that can be used to improve it?
ewald#7730: i don't expect that anyone here wants to buy anything from us ^^
ewald#7730: but if someone has a good idea about where to use it: even better!
StellaAthena#3530: Then yeah, thatโs fine
ewald#7730: https://cdn.discordapp.com/attachments/729741769738158194/906604083316662292/whitepaper_hotword.pdf
ewald#7730: it's a method for training an NN for recognizing arbitrary voice commands, even if you have no dataset
ewald#7730: and if you have a dataset - like in the google speech commands benchmark - the quality of the result will be a lot better
ewald#7730: https://cdn.discordapp.com/attachments/729741769738158194/906604566877995058/accuracy_size.png
ewald#7730: the model architecture we are using is very close to at_mh_rnn, which is basically some conv layers -> some lstm layers + multi head attention
ewald#7730: nothing exciting about the actual NN architecture
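in (very rough) pytorch terms it's this kind of shape, just to give an idea; the layer sizes and audio frontend here are made up, not our actual model:
```python
import torch
import torch.nn as nn

class HotwordNet(nn.Module):
    # sketch of "conv layers -> lstm -> multi-head attention" (sizes made up)
    def __init__(self, n_mels=40, n_classes=12, hidden=128, heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, n_mels, time)
        h = self.conv(x).transpose(1, 2)       # -> (batch, time, hidden)
        h, _ = self.lstm(h)
        h, _ = self.attn(h, h, h)              # self-attention over time
        return self.head(h.mean(dim=1))        # pool over time -> class logits

logits = HotwordNet()(torch.randn(2, 40, 101))  # e.g. ~1 s of 40-mel frames
```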
ewald#7730: results are about a factor 4 better though
ewald#7730: 0.52% error rate instead of 2.0%
ewald#7730: that's in the google speech commands benchmark, with the dataset included in the benchmark
Daj#7482: John Wentworth posts his study guide on "Specializing in Problems We Don't Understand". Lots of good practical advice, especially for people still early in their career that want to work on these kinds of fuzzy (but very important) things
https://www.lesswrong.com/posts/bjjbp5i5G8bekJuxv/study-guide
Daj#7482: Wish this had existed when I was in college!
genetyx8#7543: mfw I already got most of the math in that list, and got one of the books by accident :guilty:
Awesome_Ruler_007#7922: I think its just lack of maintainers for mtf.
This is on the first page of the repo for the MNIST example
> TODO(noam): verify that this code works.
Louis#0144: im not convinced anyone outside of :works_internally: ever used mtf
Louis#0144: besides leo
Louis#0144: lol
Awesome_Ruler_007#7922: it couldn't be *that* bad, plus I don't think at that time there was any suitable framework for model parallelism anyways?
Awesome_Ruler_007#7922: and nowadays, there are more tools for that than people training large models :berk:
EricHallahan#1051: @Awesome_Ruler_007 Start here, work your way up and down.
Awesome_Ruler_007#7922: After seeing the chats, I am honestly impressed you guys are still sane
Louis#0144: We aren't
Louis#0144: Why do u think we have geese
Louis#0144: ๐
Humble Muslim#4165: Hello everyone, i am new here and i wanted to inquire about something
Where is The Pile dataset hosted exactly, is it on a cloud service like amazon or azure or does The-Eye have a terabyte of storage connected to high speed internet and available for anybody to use?
Louis#0144: The eye
EricHallahan#1051: Welcome! The Eye is our official host for the Pile.
Humble Muslim#4165: Interesting, does someone know what hardware they are using for something like this ... is it something like a QNAP with terabytes of storage and a high speed internet module or something else ?
ewald#7730: the pile is not that big. it's like 500 GB compressed
ewald#7730: which is quite a bit for pure text. but not _that_ much
Awesome_Ruler_007#7922: truly the day when we have "text extractor" models can we truly get to AGI
Awesome_Ruler_007#7922: I wonder though - can we use current NLP models to seq2seq clean CC data?
Awesome_Ruler_007#7922: does seem possible - supposing we take a corpus which is purely the form we want, like The Pile, and train a BERT; then on parts of the C4 maybe we can do MLM and identify whether it's noise or actually fits the target distribution, which is data like the Pile.
Maybe delete the parts with scores below a threshold; as for cleaning, ig that would be more complicated with a seq2seq model, though possible...
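crudely, the shape I mean is something like this (toy sketch with tf-idf + logistic regression standing in for the BERT, just to show the scoring/threshold part; the doc lists are placeholders):
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# pile_docs / cc_docs / new_cc_docs: lists of strings (placeholders for real corpora)
texts = pile_docs + cc_docs
labels = [1] * len(pile_docs) + [0] * len(cc_docs)

vec = TfidfVectorizer(max_features=100_000)
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(texts), labels)

# score unseen CommonCrawl docs and keep the ones that look Pile-like
scores = clf.predict_proba(vec.transform(new_cc_docs))[:, 1]
kept = [doc for doc, s in zip(new_cc_docs, scores) if s > 0.5]
```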
StellaAthena#3530: Possible? Yes. A good idea? Doubtful.
https://arxiv.org/abs/2109.00698
Awesome_Ruler_007#7922: > Possible? Yes
that's all I need for my startup
bmk#1476: I mean you just described almost exactly how the gpt3 data was filtered
Awesome_Ruler_007#7922: oh lol. I was reading the so-called "updated survey" and they never mentioned NLP models at all
bmk#1476: huh?
bmk#1476: what updated survey
bmk#1476: never heard of it
Awesome_Ruler_007#7922: some random paper which summarized all the best methods
Awesome_Ruler_007#7922: nvm mind
Awesome_Ruler_007#7922: dunno man, seemed like he should have used a bigger model ๐
bmk#1476: to be clear, the takeaway of the paper isn't "filtering bad"
bmk#1476: it's "filtering good sometimes but you have to be careful with it"
Awesome_Ruler_007#7922: interesting results; the better takeaway is that the time spent filtering is better spent on getting a bit bigger model and praying to god that it generalizes
cfoster0#4356: Wut
Awesome_Ruler_007#7922: because tbf it's hard to identify what data would be useful for some task, what might not
Awesome_Ruler_007#7922: like for example, I could see HTML tags being useful for a model to learn HTML as well ๐คทโโ๏ธ
Awesome_Ruler_007#7922: If I have a sufficiently large model with some basic filtered data, then actual quality text would outnumber the noise; the hope is that it can lean and learn towards the quality parts on its own and derive some inferences from the so-called noise which might be handy for some other tasks
StellaAthena#3530: That is not at all the takeaway. What?
ewald#7730: the most important words in that sentence are probably "sufficiently large model" and "the hope is" xD
Awesome_Ruler_007#7922: that's just my hypothesis that it might be worth scaling parameters to the data, and as ewald pointed out "hope" the model learns to differentiate on its own
Awesome_Ruler_007#7922: filtering and then training, doing multiple runs to determine the best quality of dataset is cost-inefficient.
granted you do it once per dataset, but with more and more new datasets it becomes a nuisance
Spacecraft1013#5969: I have a completely unrelated question that I'm not sure if I just missed something about it in the paper or if the writers made a mistake, I'm reading through the code for DETR (the object detection transformer model) and for their second attention in the decoder, they used the memory values from the encoder for the key and value, and then used target for query, while in the original transformer paper it's using memory for query and key and target for value
Spacecraft1013#5969: I never saw anything in the DETR paper about them changing that though, but maybe I just missed it
Spacecraft1013#5969: https://github.com/facebookresearch/detr/blob/091a817eca74b8b97e35e4531c1c39f89fbe38eb/models/transformer.py#L248 thats the code line i'm talking about
StellaAthena#3530: As a general rule, if you're "just" discussing your "hypothesis" telling the author of a paper that he's wrong about the main takeaways is not a very good way to do that.
Awesome_Ruler_007#7922: my bad, I didn't phrase it particularly leaning towards a hypothesis. Apologies
Awesome_Ruler_007#7922: is it cheating if someone hunts samples close to a downstream task, orders them in descending order of closeness in a new corpus (similarity metric over embeddings), and applies decaying LR scheduling?
Awesome_Ruler_007#7922: I mean, it's still *unsupervised*. like maybe "Aided UNsupervised Training" -> `AUNt`
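A rough sketch of what that ordering would look like (the texts and embeddings below are placeholders; any sentence encoder would do, and the LR schedule is arbitrary):
```python
import numpy as np

# Hypothetical inputs: embeddings for the pretraining corpus and for a small
# sample of the downstream task, produced by some sentence encoder.
corpus_texts = ["doc a", "doc b", "doc c"]
corpus_emb = np.random.randn(3, 384)
task_emb = np.random.randn(5, 384)

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Score each corpus document by similarity to the mean downstream-task embedding.
task_centroid = normalize(task_emb).mean(axis=0)
scores = normalize(corpus_emb) @ task_centroid

# Most task-like documents first, paired with a decaying LR so the "closest"
# data is seen while the learning rate is still high.
order = np.argsort(-scores)
curriculum = [corpus_texts[i] for i in order]
lrs = 3e-4 * 0.999 ** np.arange(len(curriculum))
```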
m_wAL99#1923: https://arxiv.org/abs/2111.00905
99 Pages, 79 Figures, 24 Tables
holy :wat:
Louis#0144: Sending this to partner
cfoster0#4356: In both cases, when you do cross attention in a transformer you'll typically have the keys and values produced from one domain (the "source" language in MT) and the queries from the other (the "target" language in MT)
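For reference, a minimal sketch of that cross-attention wiring (queries from the decoder/target stream, keys and values from the encoder memory) using torch.nn.MultiheadAttention; the shapes and names here are illustrative, not DETR's actual ones:
```python
import torch
import torch.nn as nn

embed_dim, num_heads = 256, 8
cross_attn = nn.MultiheadAttention(embed_dim, num_heads)

# (seq_len, batch, dim) layout, the default for nn.MultiheadAttention
tgt = torch.randn(100, 2, embed_dim)     # decoder ("target") stream
memory = torch.randn(600, 2, embed_dim)  # encoder output ("memory")

# Queries come from the target stream; keys and values come from the memory.
out, attn_weights = cross_attn(query=tgt, key=memory, value=memory)
print(out.shape)  # torch.Size([100, 2, 256])
```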
Awesome_Ruler_007#7922: the more I think about it, it seems **intuitively** to me that in the initial iterations of training the model would indeed be slightly more familiar with that style of text and format due to the large LR, and might have an edge in the downstream task.
experiment worth doing? :thinkies:
StellaAthena#3530: It works, itโs just a lot more work than itโs worth typically.
Awesome_Ruler_007#7922: as far as I searched, I could find no papers that dispute or support such an observation. Indeed, the assumption is that the *order* doesn't really matter.
any leads?
Awesome_Ruler_007#7922: ahh, but useful for SOTA breaking stuff....maybe? ๐ฅบ
StellaAthena#3530: Not really
Awesome_Ruler_007#7922: anyways, I might do the experiment just because its so easy to verify :berk:
Awesome_Ruler_007#7922: ~~and perhaps the only one I can ever do~~
StellaAthena#3530: See, e.g., here: https://arxiv.org/abs/2108.02170
StellaAthena#3530: Thereโs an interesting paper on using CL to improve code-switching but Iโm blanking on the name of the paper
StellaAthena#3530: Itโs also unclear how to design meaningful experiments for that tbh
Awesome_Ruler_007#7922: wow, TIL Curriculum Learning exists. interesting, but I suspect the lack of papers testing this theory is a major drawback.
Mostly because I don't think their assumption, that complexity and entropy-based heuristics to balance out 'complexity' for a model are similar to human learning, is valid;
Nor does it particularly illuminate my guesstimate that pre-training with downstream-task-related text upfront leads to similar performance.
still, pretty interesting
EricHallahan#1051: I suggest reading Shortformer then.
https://arxiv.org/abs/2012.15832
kurumuz#5695: Seems like you assume your "guesstimate" is correct.
Awesome_Ruler_007#7922: so my interpretation is that curriculum learning *might* still hold with respect to the context/topic of the sequence as well as the length of the sequence - so it just might actually perform better
Awesome_Ruler_007#7922: as for why shorter seq lengths at the start work better, I have little idea, nor does the paper explain it. perhaps it can be attributed to the model learning the *grammar* better with simpler sequences... (though that sounds like an awful stretch)
Awesome_Ruler_007#7922: *why is DL always so contradictory and counterintuitive?* ๐
kurumuz#5695: it is not.
Awesome_Ruler_007#7922: perhaps for formally taught researchers; double descent to noobs like me seems weird, and its explanation is non-trivial
EricHallahan#1051: TL;DR: Transformers focus on recent context more, so if you train with a shorter context length at the beginning of training you can both speed up the process *and* also see better downstream performance.
EricHallahan#1051: So you can have your cake and eat it too.
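A minimal sketch of that two-stage idea, with a toy model, a dummy token stream, and made-up lengths/step counts (not Shortformer's actual settings):
```python
import torch
import torch.nn as nn

# Toy stand-in for an LM; in practice this would be the actual transformer.
model = nn.Sequential(nn.Embedding(256, 64), nn.Linear(64, 256))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, 256, (100_000,))  # dummy token stream

# Two-stage length curriculum: short sequences first, full length later.
stages = [(128, 300), (1024, 700)]  # (context length, number of steps), illustrative

for seq_len, n_steps in stages:
    for step in range(n_steps):
        i = torch.randint(0, len(tokens) - seq_len - 1, (1,)).item()
        x = tokens[i : i + seq_len]
        y = tokens[i + 1 : i + seq_len + 1]
        logits = model(x)
        loss = nn.functional.cross_entropy(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```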
Awesome_Ruler_007#7922: I see - is that some core limitation of MHA, and why it's not coherent for long sequence generation tasks like storytelling?
cfoster0#4356: That's unclear, but I suspect the answer to both of those is no
kurumuz#5695: This depends on your task, data and loss function
cfoster0#4356: Mm actually the answer to the first is "definitely no". MHA itself has no notion of recency
Awesome_Ruler_007#7922: Hm, that doesn't really leave a lot of parts then
Awesome_Ruler_007#7922: or is it just attributed to "how transformers are"?
tpapp157#3643: No. It's just a function of how text works. Recent text is more likely to be immediately informative. Therefore, text transformers learn to focus more on recent text.
tpapp157#3643: If I'm reading a given word, the most likely spot to find more contextual information about that word is the immediately preceding word.
Awesome_Ruler_007#7922: that reasoning seems thin; literature like opinion pieces, stories and articles maintains a core theme and refers to it several times. concepts and ideas out of the context window don't matter to humans at all, so it should be modelled by MHA since attention is calculated with *all* the tokens most of the time
Awesome_Ruler_007#7922: while some parts of a corpus may reward the model more for nearby contextual information, most don't. Usually, we almost always refer to non-recent text, and that would dominate in most corpora - unless we have statements like "Sam ate an Apple. What did Sam eat?"
cfoster0#4356: If I could operationalize this, I would bet against it
EricHallahan#1051: Attention may be calculated with the entire context, but the entire point of attention is to attenuate information.
EricHallahan#1051: As the context grows larger the weighting of any one token must be on average lower.
tpapp157#3643: There's a reason why simple markov chain text models have been able to generate reasonable-ish text for decades while only using the immediately 1-2 preceding words as context.
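For a sense of how little context such a model needs, a toy bigram Markov chain generator (nothing to do with any particular system, just an illustration):
```python
import random
from collections import defaultdict

text = "the cat sat on the mat and the cat ate the fish".split()

# Build a bigram table: word -> list of words observed to follow it.
following = defaultdict(list)
for a, b in zip(text, text[1:]):
    following[a].append(b)

# Generate by repeatedly sampling a successor of only the single previous word.
word, out = "the", ["the"]
for _ in range(10):
    word = random.choice(following[word]) if following[word] else random.choice(text)
    out.append(word)
print(" ".join(out))
```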
Awesome_Ruler_007#7922: attenuate information w.r.t. the loss function, which targets being right - which in turn requires elements that may not be placed conveniently nearby, and incentivises forming long-term relationships between distant parts of a sequence.
Awesome_Ruler_007#7922: that's true, but relative to other tokens it wouldn't matter much - would it?
cfoster0#4356: I'm not sure what you're saying, but if you're saying "if you want the network to attend to long context, you need your training data to be structured differently than regular text" I would agree
EricHallahan#1051: Grammar relies on nearby information. The last sentence simply isn't useful when it comes to forming this sentence.
Awesome_Ruler_007#7922: yes, and I argue that most of the text available *is* indeed structured to incentivize studying long-term relationships.
True, grammar wouldn't rely on that. But then to find the true global minimum, you would have to study long-term relationships anyway, because that's how you minimize that function all the way
ewald#7730: well, yeah, you would need training data where attending to long context is neccessary for low loss
ewald#7730: "concepts and ideas out of the context window doesn't matter to humans at all" - disagree
Awesome_Ruler_007#7922: so what, mixing long and short alternating sequences during the stage-1 pre-training should lead to better downstream performance on the LRA? :ultrathonk:
Awesome_Ruler_007#7922: or like tasks which require long-term relationships anyways. but I know its been done b4 ๐
ewald#7730: is it? hmm
EricHallahan#1051: No
Awesome_Ruler_007#7922: looks like its not after all, I am only getting shortformer from Google searches
ewald#7730: if i had to guess, i'd say short first, long later.
Awesome_Ruler_007#7922: but ig the idea is so wacko that it wouldn't really work.
tpapp157#3643: Anyway, the point being, normal human text has a strong locality bias for mutual information between tokens. Long range relationships are typically of lesser importance but still informative. The entire concept of curriculum training on shorter sequences first is to allow the model to learn the very important local relationships without the noisy distraction of the less-important long range relationships before later introducing that additional complexity.
ewald#7730: maybe you can let your model summarize the stuff that was written long ago?
ewald#7730: like:
ewald#7730: paragraph 1 -> summarized to 1 sentence
ewald#7730: paragraph 2 -> summarized to 1 sentence
ewald#7730: paragraph 3 -> summarized to 1 sentence
ewald#7730: paragraph 4 -> unchanged
Awesome_Ruler_007#7922: I was thinking along those lines, but AFAIK its been done in a form by FAIR
ewald#7730: i think if you had the same amount of samples with long range relationships it would be even better
ewald#7730: but if you have some short and some short+long
ewald#7730: and you want short+long capability
ewald#7730: then train short first, and then short+long
ewald#7730: right?
Awesome_Ruler_007#7922: so as I see it, the solution is to alternate both sequences and force it to evolve a compromise....?
Awesome_Ruler_007#7922: because we are focused on developing long range relationships, as they are more useful in real-life tasks
ewald#7730: i don't think so. in the samples where you have long range relationships, you also have short range relationships
Awesome_Ruler_007#7922: *mostly
ewald#7730: maybe add a few of the long range relationships to the first part of the training?
ewald#7730: but in the second part of the training we want only the "good stuff"
Awesome_Ruler_007#7922: No
Awesome_Ruler_007#7922: A block of short seqs, followed by alternating length seqs, then all the long range ones
Awesome_Ruler_007#7922: the alternating period is to incentivise learning both short and long range relationships and not forgetting
EricHallahan#1051: You always see the short lol
Awesome_Ruler_007#7922: plus it serves as a useful transition to both those disjoint variety of sequences
cfoster0#4356: We're bordering on armchair philosophizing at this point. Need more :empiricism:
tpapp157#3643: Curriculum learning comes from the world of reinforcement learning, where it's often necessary to start an agent learning on trivial problems and slowly increase the complexity over time until the desired level of expertise is achieved. This approach can significantly speed up overall training compared to forcing the agent to solve the full complex problem from the beginning. It's less clear how helpful curriculum training is in the supervised learning context.
Awesome_Ruler_007#7922: wasn't it established to be pretty clear in the unsupervised regime, i.e shortformers?
Awesome_Ruler_007#7922: Well yeah, I can't dig up any papers so can only conjecture
Awesome_Ruler_007#7922: unless someone has too much free time and wants to try it out ๐คฃ
Awesome_Ruler_007#7922: *oh fk its 4 A.M* :guilty:
ewald#7730: i don't see it only as curriculum learning. i see it more as "train with medium quality samples first, and with high quality samples later" (if you don't have enough high quality samples)
ewald#7730: so a little like transfer learning?
tpapp157#3643: yeah you can think of things like transfer learning and finetuning in that way.
ewald#7730: something like that has worked quite well for me. especially when i've sprinkled some of the high quality samples onto the medium quality samples in the beginning
bmk#1476: see the recent OA paper
ewald#7730: (although with audio, not with text)
ewald#7730: ah!
bmk#1476: https://arxiv.org/abs/2109.10862
ewald#7730: yeah, already found it
ewald#7730: maybe this can be used to have "the whole book as a context" - although details will be lost of course
EricHallahan#1051: PowerSGD in PyTorch 1.10
https://medium.com/pytorch/accelerating-pytorch-ddp-by-10x-with-powersgd-585aef12881d
https://pytorch.org/docs/1.10.0/ddp_comm_hooks.html#powersgd-hooks
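Registration is only a few lines; a sketch based on the linked docs (assumes the default process group is already initialized, e.g. via torchrun, and the hyperparameters here are just examples):
```python
import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

# Assumes torch.distributed has already been initialized on every rank.
model = DDP(nn.Linear(1024, 1024).cuda())

state = powerSGD.PowerSGDState(
    process_group=None,            # use the default process group
    matrix_approximation_rank=1,   # rank of the low-rank gradient approximation
    start_powerSGD_iter=1000,      # warm up with vanilla allreduce first
)
model.register_comm_hook(state, powerSGD.powerSGD_hook)
```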
Spacecraft1013#5969: but what's the hit on the model's performance? the article doesn't mention anything about any accuracy comparisons
gabriel_syme#3220: I did this, sort of, and it works really well. I originally saw it suggested by Kharr first in here.
King Debs#5065: Hi all, I want to generate text using ai to write a blog, is there a tutorial that can show me?
King Debs#5065: Is there an application out there that is free/ open source, or do I have to learn how to code? I'm not a developer, i just want to write some stuff using ai.
Louis#0144: #off-topic
StellaAthena#3530: @Louis If I want to generate a very large number of grammatically simple factually correct sentences, what would be a good way of going about that? I assume KGs are good at this?
StellaAthena#3530: By โlarge numberโ I mean โbillionsโ most likely
Louis#0144: I would use KGs yeah
Louis#0144: Antoine has a paper on this
Louis#0144: I'll send it in a bit
Louis#0144: Still in bed
StellaAthena#3530: Leverage combinatorial explosion to our advantage for once
StellaAthena#3530: $\binom{45,000}{2} > 1,000,000,000$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/906898695021264946/193204646687408129.png
gabriel_syme#3220: Are you trying that paper that pretrained with such data? Or maybe it was not even grammatically correct data
StellaAthena#3530: I can write up the experiment I have in mind in a bit, but AFAIK my idea with this is extremely novel.
Louis#0144: Ok so what I would do is train a model to go ATOMIC -> sentence (you'd need a set of seed sentences, I'd recommend ROC)
Louis#0144: And then you can permute over ATOMIC
Louis#0144: Which iirc has 1.3mil vertices
Louis#0144: I've trained this before for Bart
Louis#0144: But it sucked
Louis#0144: I'd retrain it for GPTJ
Louis#0144: The real issue is that verifying a KG usually requires a ground truth story
Louis#0144: Which you wouldn't have in this case
Louis#0144: lol
Louis#0144: So I would be inclined to say you'd generate things that look kinda right but aren't 100%
Louis#0144: You'd still get some bogus
StellaAthena#3530: How reliable is ATOMIC?
Louis#0144: Pretty solid
StellaAthena#3530: Honestly I might even go simpler... A list of objects and what color they are, or something like that
Louis#0144: That would be easier
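A minimal sketch of the simpler version, with a hypothetical object-to-color fact table and a few templates, just to show how the combinatorics multiplies out:
```python
import itertools

# Hypothetical fact table and templates; a KG like ATOMIC or a scraped
# attribute list would take the place of this dict in practice.
object_colors = {"banana": "yellow", "grass": "green", "coal": "black"}
templates = ["The {o} is {c}.", "A typical {o} is {c}.", "{o} is usually {c}."]

def fact_sentences():
    # Every (fact, template) pair yields one grammatically simple sentence.
    for o, c in object_colors.items():
        for t in templates:
            yield t.format(o=o, c=c)

for s in itertools.islice(fact_sentences(), 5):
    print(s)
# Tens of thousands of facts times a handful of templates, plus compositions
# over pairs of facts, quickly reaches billions of distinct sentences.
```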
StellaAthena#3530: What I want to do is take two datasets: one containing factual information about topic A, and one containing factual information about topic B but "shaped like prompts," e.g., in Q/A format or something.
Train a LM on both datasets, then test its Q/A ability on topic A
StellaAthena#3530: I have the 32 GiB of topically narrow Q/A data already
Louis#0144: https://arxiv.org/abs/2102.07033
Louis#0144: Like this?
StellaAthena#3530: @Parker I'm thinking primarily about verifying conjectures about the operation of large language models, and especially implicit multitask / implicit multiprompt learning such as from these papers:
https://openai.com/blog/language-unsupervised/
https://arxiv.org/abs/2110.08207
https://arxiv.org/abs/2109.01652
StellaAthena#3530: The second paper has a good self-contained sketch of the ideas
Parker#3197: what conjectures?
Parker#3197: that is a lot to read
StellaAthena#3530: Oops that's the wrong first paper
StellaAthena#3530: I meant to link to "Language Models are Unsupervised Multitask Learners"
StellaAthena#3530: But I got shit to do, I can try to summarize modern NLP research another time.
Parker#3197: no worries. I think it was just the wording of how it evaluates truth that got my response anyway.
StellaAthena#3530: This is really weird to read
StellaAthena#3530: It's multiple OOMs smaller than the Q/A questions we put in the Pile
Dashiell#8739: How easy is it to get into Google's TRC?
EricHallahan#1051: It is very rare to get rejected.
Dashiell#8739: Like should I even bother applying as a neet-ing nobody?
Dashiell#8739: Ahh, ok
EricHallahan#1051: Yeah, they'll let you in.
bmk#1476: they let in anyone with a pulse (pulse optional)
Dashiell#8739: Then I'll fit right in ๐
inox#5400: I can't get non-preemtable instances on TRC much anymore though
StellaAthena#3530: v3-8s?
inox#5400: yeah
StellaAthena#3530: (I don't have any follow-up, just curious)
Spacecraft1013#5969: I heard pytorch is pretty iffy with TPUs though, is that true?
bmk#1476: yeah
alstroemeria313#1694: it's quite bad
Orz#3023: I think the reason for this is that the dataloader of pytorch is not as efficient as the one from tensorflow.
Orz#3023: (atleast from what I tried)
alstroemeria313#1694: it's fine on a tpu vm
alstroemeria313#1694: The problem is that PyTorch/XLA has a ton of performance footguns.
alstroemeria313#1694: But like I can use PyTorch dataloaders in JAX on a TPU VM fine.
Orz#3023: aight
I've only used them on kaggle
So yeah I'm kinda biased
StellaAthena#3530: Lol I said something about this on Twitter and people started talking about how great XLA is and I'm just like "nobody I know has had this experience"
StellaAthena#3530: https://twitter.com/BlancheMinerva/status/1454897650004803587?s=20
bmk#1476: it might be better now than it was before
alstroemeria313#1694: i tried it like a month ago
bmk#1476: oh huh
Swox#8798: Hello, I'm new.
Discovering VQGAN and CLIP, trying to run some scripts on Google Colab.
Hard for me to play with parameters and get the output I'd like to have; hope this community will be helpful!
:5635pepesalute:
AI_WAIFU#2844: !faq
Carl-bot#1536:
pragmaticml#1730: The pytorch/xla error messages are pretty poor and it's a pain translating ops that are not yet supported for sure. The performance debug logs were alright though... seemed frustrating but usable to me.
gabriel_syme#3220: cool!
gabriel_syme#3220: do you mean they are always full?
gabriel_syme#3220: I also wonder if TRC is not so easy anymore, in the sense of availability not requirements. I've seen a few people lately say it takes a bit too long to get a response
Adam Feldmann#3575: Hi all, I'm an AI expert psychologist, programmer and large model builder from the EU. I previously trained a Hungarian version of BERT-large using DeepSpeed. I'm happy to be here.
nshepperd#2316: this doesn't seem right https://cdn.discordapp.com/attachments/729741769738158194/907238342893121566/2021-11-08-225941_2825x452_scrot.png
nshepperd#2316: the four replicas of my tpu pod
nshepperd#2316: think the demo grids should be identical :thonk:
nshepperd#2316: idgi, i'm pmeaning the grads so they should be identical?
nshepperd#2316: oh, you have to pass the global devices list if you pass devices= to pmap
nshepperd#2316: yay~ https://cdn.discordapp.com/attachments/729741769738158194/907249688619671582/2021-11-08-234550_1770x917_scrot.png
nshepperd#2316: tpu pod working
alstroemeria313#1694: yay!
nshepperd#2316: :)
nshepperd#2316: i should write a thing about how i did this. but it's surprisingly simple in the end. you run the same program on every node. and any jax call blocks until all of them have started. use DistributedSampler, and pmean the grads as usual
nshepperd#2316: the one footgun seems to be if you pass devices=jax.local_devices() into pmap it won't do the cross-node stuff
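A minimal sketch of that pattern with a placeholder model and loss (run the same script on every host; jax.distributed.initialize() is assumed where needed, and the shapes are arbitrary):
```python
import jax
import jax.numpy as jnp
from functools import partial

# On a TPU pod slice, the same script runs on every host.
# jax.distributed.initialize()  # needed on multi-host setups

@partial(jax.pmap, axis_name="batch")  # NOTE: no devices= arg, so it spans all global devices
def train_step(params, x, y):
    def loss_fn(p):
        pred = x @ p                      # placeholder "model"
        return jnp.mean((pred - y) ** 2)
    loss, grads = jax.value_and_grad(loss_fn)(params)
    # Average gradients (and loss) across every device on every host.
    grads = jax.lax.pmean(grads, axis_name="batch")
    loss = jax.lax.pmean(loss, axis_name="batch")
    return params - 1e-3 * grads, loss

n = jax.local_device_count()
params = jnp.zeros((n, 8, 1))  # one replica per local device
x = jnp.ones((n, 4, 8))
y = jnp.ones((n, 4, 1))
params, loss = train_step(params, x, y)
```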
alstroemeria313#1694: ahh
alstroemeria313#1694: model/pipeline parallel when >_>
alstroemeria313#1694: (This is way harder isn't it)
nshepperd#2316: eheh
nshepperd#2316: yeeep
nshepperd#2316: also i need to write some sort of script to run the same program on each tpu vm simultaneously in a bunch of tmux windows
nshepperd#2316: bc doing that manually is kind of tedious heh
alstroemeria313#1694: aren't there EAI infrastructure things for that
alstroemeria313#1694: idk which though
nshepperd#2316: is this the pyfra part of the iceberg
Louis#0144: @bmk
nshepperd#2316: https://discord.com/channels/729741769192767510/844699930168786944/857093278998069288 @AI_WAIFU can i use this. seems kind of :bigbrain:
EricHallahan#1051: symphony isn't completed yet IIRC.
EricHallahan#1051: ray?
kurumuz#5695: yes
kurumuz#5695: or
kurumuz#5695: :yes:
nshepperd#2316: reasons
nshepperd#2316: found it was slow without that in a previous thing
nshepperd#2316: but for this one it doesn't seem to matter so w/e
alstroemeria313#1694: it was because we suspected we were replicating params onto devices in a different order than they got used
alstroemeria313#1694: so we had to do device to device transfers for the first demo grid (after the first training step they would be sharded the correct way and the demo grids would be noticeably faster to make)
nshepperd#2316: yeah
alstroemeria313#1694: yeah
alstroemeria313#1694: so we were passing the device list to the pmap to get the order the same as the replicate
alstroemeria313#1694: (also my models were big enough that i couldn't do the thing i was doing before, which was to replicate the params 8 times on the *same* device and feed them to a pmapped thing that returned a copy, bc 8 copies of the params on one TPU core would OOM it)
nshepperd#2316: i suppose we could device_put_replicated(params, jax.devices()) then feed that to a pmapped thing that returns a copy?
alstroemeria313#1694: Ah yeah
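For reference, a toy example of jax.device_put_replicated (the params pytree here is made up, just to show the leading device axis it produces):
```python
import jax
import jax.numpy as jnp

params = {"w": jnp.zeros((8, 8))}  # toy params pytree

# Places one copy of the pytree on each listed device; the result gains a
# leading axis of length len(jax.devices()), ready to feed into pmap.
replicated = jax.device_put_replicated(params, jax.devices())
print(replicated["w"].shape)  # (num_devices, 8, 8)
```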
AI_WAIFU#2844: still in development
bmk#1476: you can definitely do this with pyfra, though this isn't exactly the thing pyfra is designed for
bmk#1476: just use tmux send-keys a bunch
Exocamp#8255: https://twitter.com/DeItaone/status/1457745705133543424
Anyone heard about this?
Kia#2550: Where did they even get that...
ari#9020: Probably https://www.youtube.com/watch?v=ECHhuvuiNzs
EricHallahan#1051: Ah, they announced the MI200.
EricHallahan#1051: Their software stack is way less mature though.
Kia#2550: Hmmm
tpapp157#3643: maybe directML will save AMD in the DL space.
EricHallahan#1051: But Windows?
tpapp157#3643: it's better than nothing, which is basically the state of AMD GPU compute right now.
tpapp157#3643: Nvidia has poured huge amounts of money into the cuda ecosystem over the last 10+ years while AMD has been fighting just to stay solvent. They have many years of catching up to do in that space even if they made it their top priority.
EricHallahan#1051: "Better than nothing" isn't saying much when the big bucks are made from Enterprise and HPC.
Dashiell#8739: What are they even pushing as an alternative to cuda/cudnn now? Is it still rocm?
tpapp157#3643: Yeah still just rocm. Which in typical AMD fashion they developed to a bare minimal level and then open-sourced in the hope of getting free community development to do their work for them.
Dashiell#8739: If it was a halfway serious competitor I'd probably use it just out of spite, but with the way things going Triton will support AMD GPUs before AMD properly supports rocm
ersatz#0001: > Premiered 106 minutes ago
damn
Dashiell#8739: evergreen
https://www.youtube.com/watch?v=_36yNWw_07g
Dashiell#8739: (having just dropped $3k on a 3090 I am, of course, a massive hypocrite)
Awesome_Ruler_007#7922: tbh, they don't really have that much room to experiment. Nvidia has been an old monopoly which allowed them to monopolise other markets, just like FAANG
gollark#3909: It seems like AMD could have done a much better job than they did, though.
gollark#3909: As far as I know ROCm is available on basically no GPUs and is very finicky to get working.
cognomen#6297: more likely nvidia have a quagmire of patents around cuda that make it impossible for amd to even enter the market
cognomen#6297: or made, I don't know what changed
tpapp157#3643: You need to remember that AMD is tiny compared to Nvidia or Intel. And for a decent chunk of the last ten years they were practically one bad day away from going bankrupt. It's only been within the last 1-2 years that AMD has started making healthy profits again but they still have a relative mountain of debt to deal with. So it's not surprising that a lot of secondary investments like their GPU compute software were severely underfunded or not funded at all for a long time.
tpapp157#3643: Also remember that most of AMD's revenue comes from their CPUs, not their GPUs.
smallanimalfriend#4355: https://cdn.discordapp.com/attachments/729741769738158194/907373844338208788/unknown.png
tpapp157#3643: Hardware specs don't matter when your software is so bad that it's practically impossible to write code for it.
smallanimalfriend#4355: Ya, see above discussion on software, but it absolutely does matter
smallanimalfriend#4355: Good hardware motivates software development
smallanimalfriend#4355: No one is interested in developing software for hardware that's way behind nvidia
EricHallahan#1051: AMD loves double precision lol
smallanimalfriend#4355: For their HPC customers https://cdn.discordapp.com/attachments/729741769738158194/907375492599021588/unknown.png
EricHallahan#1051: Who else would care?
EricHallahan#1051: Double is like AMD's one consistent advantage.
gollark#3909: How do they manage to have the same FP64 and FP32 throughput? I thought there was some quadratic scaling going on there.
tpapp157#3643: Because the compute hardware is 64bit. You can put 32bit operations through 64bit hardware but you just waste the other half of the compute.
tpapp157#3643: That's why they have the exact same number of operations per second.
Louis#0144: 128gb is nice
Louis#0144: Excited for when nvidia has this lol
Awesome_Ruler_007#7922: fucking AMD went up 10% in *one* day
Awesome_Ruler_007#7922: damn.
Awesome_Ruler_007#7922: true, but since its OSS it would improve over time
Awesome_Ruler_007#7922: I mean, Pytorch already has Beta support ๐คทโโ๏ธ
Dashiell#8739: how dumb would it be to try to have a model generate images in a wavelet frequency-position space? Have as input one of the undecimated wavelet transforms, do all of the learning and the final generation in wavelet space, then inverse wavelet transform to get your actual image
Dashiell#8739: possible benefit: wavelet representations are sparse
cfoster0#4356: It's possible
Dashiell#8739: downside: ????
cfoster0#4356: Similar to the stuff Dieleman did earlier
alstroemeria313#1694: having to output a bunch of different sized feature maps
cfoster0#4356: https://arxiv.org/abs/2103.03841
alstroemeria313#1694: also downside: the wavelet transforms are simple/learnable anyway
alstroemeria313#1694: by a decently deep convnet
alstroemeria313#1694: with stages at different resolutions
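For concreteness, the pixel-to-wavelet round trip looks like this with PyWavelets (a plain decimated multi-level DWT rather than the undecimated transform mentioned above, just to show the differently sized coefficient maps; the generative model in between is left out):
```python
import numpy as np
import pywt

img = np.random.rand(64, 64)  # stand-in for a real image

# Forward transform: one coarse approximation map plus
# (horizontal, vertical, diagonal) detail maps per level, each a different size.
coeffs = pywt.wavedec2(img, "haar", level=3)

# ... a model would be trained to predict/generate these coefficient maps ...

# Inverse transform back to pixel space.
recon = pywt.waverec2(coeffs, "haar")
print(np.allclose(recon, img))  # True: the transform is invertible
```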
Dashiell#8739: presumably if they were given that for free they could spend some FLOPs learning something else, right?
alstroemeria313#1694: and in fact they learn specialized/task-specific way more complicated things.
Dashiell#8739: yeah, this is pretty much what I was thinking about. Thanks!
Dashiell#8739: yeah, that would be a pain
AI_WAIFU#2844: pretty sure someone did that already
AI_WAIFU#2844: worked pretty well IIRC
Dashiell#8739: is it what @cfoster0 just posted?
Dashiell#8739: the only direct wavelet stuff that's even a little bit recent (that I could find) is this person's thesis: https://www.repository.cam.ac.uk/handle/1810/306661
AI_WAIFU#2844: I was gonna dig it up, but yeah it's what he posted
Dashiell#8739: which is building directly on Mallat's scatternet, and runs into the problem that @alstroemeria313 mentions: it's just not as good as a convnet
inox#5400: it plays into deepmind's "what if we can put everything in a perceiver at the bytestream level"
cfoster0#4356: they had a thing where they stuffed everything into protobufs and modeled those
Dashiell#8739: "inductive bias? never heard of em. Here stuff more data into the buffer"
inox#5400: they had to put a convnet in the "Generating Images with Sparse Representations" transformer to deal with the long sequences
inox#5400: would be more fun if it were a perceiver
inox#5400: is that this one? https://twitter.com/yaroslav_ganin/status/1390709745909211138?s=20
cfoster0#4356: Yes!
gabriel_syme#3220: should I send an email for the code, you think? No one knows me though. Pretty sadge it's still not out
gabriel_syme#3220: oh yes I really liked that paper when it was out, even uses my kind of language
gabriel_syme#3220: the idea of using abstractions for designs is great, especially since the part of going from a sequence of abstractions -> an actual design is easy; generating that sequence that represents a design isn't.
I've wondered for some time now where the balance between general vs bespoke approaches should lie wrt language-driven design. Not sure yet, but I feel for practical purposes, and especially due to the power of pretraining, bespoke approaches will win in the short term.
cfoster0#4356: Wouldn't hurt to try
cfoster0#4356: It's DeepMind so I wouldn't be surprised if they don't release
un.known#2928: Hello, I'm getting back to AI generated images. Does Google Colab still provide decent GPUs?
Dromarion#3383: I used the free version the past month and half the time they wouldn't even give me one.
un.known#2928: They wouldn't give you any GPU?
un.known#2928: I used to run 10 Google accounts at the same time on Colab and get good enough GPUs, but right before I stopped doing this AI thing they started giving me really crap GPUs. I heard that it was global, not only a 'me' problem
Dromarion#3383: Maybe there's just more people using Colab now due to the growing popularity of AI Art. Either way I just caved and got pro.
un.known#2928: I wanted to do that, however buying pro for all my 10 accounts wasn't really worth it
Dromarion#3383: I haven't been locked out of a GPU since so I don't know. Maybe just get it for one account
un.known#2928: well, as i said, i used to process 10 images at once so like.. running 10 colabs at once
un.known#2928: that's the only way i found it productive enough to keep me interested
EricHallahan#1051: I've been banned for up to a month before.
un.known#2928: why'd that happen?
cfoster0#4356: This is not gonna happen anymore. Gotta adjust expectations
EricHallahan#1051: I spent way too much time idling with a GPU.
EricHallahan#1051: Like way too long.
un.known#2928: why's that?
cfoster0#4356: Things have changed. It's less generous
un.known#2928: but if you were idling, didn't it just disconnect you automatically?
guac#4716: were you auto reconnecting your sessions lol
cfoster0#4356: Even with Pro you can get like 1 P100 and smaller GPU at once, max
guac#4716: yeah pro sux for gpus i just use tpus now :/
EricHallahan#1051: No, I was trying to set up remote shell access with `cloudflared` lmao
EricHallahan#1051: And spent way too much time failing to set it up.
guac#4716: would've been a sweet workflow though lol
EricHallahan#1051: I feel :guilty: for not learning to use TPUs yet, but I also know I'm in for :ptsd: after I do.
gabriel_syme#3220: I've been using TPUs every day for a while but I don't think I've really learned much lol
guac#4716: yeah jax abstracts the shit out of them. idk what i'm doing. it's worth it if you live in colab lol screw p100s!
un.known#2928: can you guys translate for the rest of us that aren't fluent in gpu's?
cfoster0#4356: Translate what?
tpapp157#3643: I haven't messed with colab before.
un.known#2928: the tpu thing
un.known#2928: what are they, why are they better, how do you use them, what's different?
gabriel_syme#3220: maybe this is good for you https://sites.research.google/trc/about/
un.known#2928: it is indeed
un.known#2928: thank ya
un.known#2928: ah shit they speak encrypted too
un.known#2928: i've got only question
un.known#2928: are they the best option if you do ai image generation on colab?
gabriel_syme#3220: the only answer is 'it depends'
gabriel_syme#3220: but they are a great option at times, for some things, when all you have is free Colab, due to the limited supply of GPUs
(disclaimer: have not used Colab in months, things change fast)
gabriel_syme#3220: that said, I doubt v2-8s are more widely used than Colab GPUs, given most code is in PyTorch right now and made to run on GPUs
un.known#2928: oh you need to APPLY for them?
un.known#2928: i'm out
gabriel_syme#3220: you apply and you (almost certainly) get in
gabriel_syme#3220: the requirements are low; again though, things change through time
un.known#2928: they askin me for organization name and job title LOL
Spacecraft1013#5969: i literally put "none" as my organization, with a gmail email, and didnt even specify a job position, and i still got in
bmk#1476: also reminder that this is not a beginner server
cfoster0#4356: seconding this
beepydatacenter#8080: Why has autopredict never used some form of GPT prediction in its algorithm? Wouldn't its suggestions be much better if it used GPT?
StellaAthena#3530: Type the first 15 words of an email into 6b.eleuther.ai and see how good it is
StellaAthena#3530: Also, until a couple months ago the most powerful publicly available GPT-style model was 2.7B parameters. 175B GPT-3 >>>> 2.7B GPT-3
EricHallahan#1051: Because GPT didn't exist a few years ago? Also autopredict applications really want low latencies. No use if it takes a second to update for each character.
StellaAthena#3530: Ooo yeah latency too
kurumuz#5695: you can make one forward really fast but on a mobile device? yeah no lol
kurumuz#5695: also autopredict learns from you as well
kurumuz#5695: i mean you can make a really small model run
kurumuz#5695: but at that point I'm not sure if GPT would be any better
EricHallahan#1051: A dictionary search is pretty good for that, LSTMs (:schmid:) are really the fanciest they go at this point in production.
kurumuz#5695: LSTMs are too based
kurumuz#5695: we use it internally all the time :berk:
kurumuz#5695: so stable. so based. gate everything with LSTM/GRUs
kurumuz#5695: this is what peak performance looks like
AI_WAIFU#2844: :sus:
beepydatacenter#8080: Yeah I suppose latency would be an issue. I think whatever NLP microsoft uses in outlook is pretty decent for professional emails. I find that when I am sending messages relating to college shit the NLP predictions are pretty great.
beepydatacenter#8080: IMO Outlook's helper is better than Gmail's helper, but then again I never use gmail professionally so idk
beepydatacenter#8080: I'm going through the Andrew Ng stuff again. I'm not sure why I struggled with this stuff 2 years ago. It seems super easy to me now...
beepydatacenter#8080: I suppose having more general exposure to ML has greatly helped me, so now this course shouldn't nearly give me as much of a problem as it did the first time
gabriel_syme#3220: for the layman (me), where does that go? last layer? or does it replace other stuff?
ElectricLizardFren#8378: Weird AI idea: An AI that cuts a recording of you saying something into slices, editing them to make them into custom Friday Night Funkin' vocals, then automatically turns it into a soundfont
Louis#0144: Distilling an LM to an LSTM isn't as crazy as It sounds
gabriel_syme#3220: oh so it's distilling? I thought it was a layer at the end or smth
volker#8885: I remember they did sth like that here: https://arxiv.org/abs/1903.12136
m_wAL99#1923: https://docs.nvidia.com/nsight-dl-designer/UserGuide/index.html
alstroemeria313#1694: so what is SVRG, is it actually viable to use
nshepperd#2316: every n steps, you store a snapshot of the parameters as well as grad(snapshot, full_data)
Awesome_Ruler_007#7922: which implies they are easy to use? ๐ฎ
nshepperd#2316: and then on training steps you use grad(snapshot, full_data) + grad(params, x) - grad(snapshot, x)
nshepperd#2316: instead of just the grad(params, x)
nshepperd#2316: this is supposed to have less variance than the plain grads
nshepperd#2316: https://proceedings.neurips.cc/paper/2013/file/ac1dd209cbcc5e5d1c6e28598e8cbbe8-Paper.pdf this
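A minimal sketch of that estimator on a toy least-squares problem with plain SGD (not the diffusion setup being discussed; the snapshot interval, batch size, and learning rate are arbitrary):
```python
import jax
import jax.numpy as jnp

# Toy least-squares data; loss_fn(params, batch) stands in for the real model loss.
full_x = jnp.linspace(-1, 1, 512)[:, None]
full_y = 3.0 * full_x + 0.5

def loss_fn(params, xy):
    x, y = xy
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

grad_fn = jax.grad(loss_fn)
params = {"w": jnp.zeros((1, 1)), "b": jnp.zeros(())}
lr, snapshot_every = 1e-1, 100

for step in range(1000):
    if step % snapshot_every == 0:
        snapshot = params                                 # frozen copy of the params
        full_grad = grad_fn(snapshot, (full_x, full_y))   # gradient over the full data
    idx = jax.random.randint(jax.random.PRNGKey(step), (32,), 0, full_x.shape[0])
    batch = (full_x[idx], full_y[idx])
    # SVRG estimator: g(params, batch) - g(snapshot, batch) + g(snapshot, full_data)
    g = jax.tree_util.tree_map(
        lambda a, b, c: a - b + c,
        grad_fn(params, batch),
        grad_fn(snapshot, batch),
        full_grad,
    )
    params = jax.tree_util.tree_map(lambda p, gi: p - lr * gi, params, g)
```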
EricHallahan#1051: Well you have to learn JAX lol
nshepperd#2316: you can also just use a large batch instead of all the data
nshepperd#2316: i tried feeding the variance reduced grads into Adam and it was bad
nshepperd#2316: maybe it messes up the adam scaling somehow
alstroemeria313#1694: oh
Awesome_Ruler_007#7922: I thought there were tons of wrappers like Flax which made it easy to understand? ๐ค (like lightning)
nshepperd#2316: the wrappers are the hard part to learn lol
EricHallahan#1051: That does not mean that there aren't restrictions that JAX imposes on the wrappers.
Awesome_Ruler_007#7922: :surprise:
Awesome_Ruler_007#7922: damn
EricHallahan#1051: Like Haiku's transform thing.
Awesome_Ruler_007#7922: but ig it won't do me any good anyways since it doesn't help with memory ๐คทโโ๏ธ
Awesome_Ruler_007#7922: happy with my P100s
EricHallahan#1051: TPU v2-8s have 64 GiB of memory lol
Awesome_Ruler_007#7922: not per core
Awesome_Ruler_007#7922: and Imma die implementing model parallelism
alstroemeria313#1694: i was maybe going to try it on diffusion but i guess that's not such a great idea
nshepperd#2316: yeah idk
alstroemeria313#1694: the first thing i tried not only refused to train but kicked the loss back to the starting point when it did kick in
nshepperd#2316: oh no
alstroemeria313#1694: oh wait
alstroemeria313#1694: I need to use the same random Gaussian noises
alstroemeria313#1694: Don't I
nshepperd#2316: ahh yeah
nshepperd#2316: if it is not the same the variance will be worse
nshepperd#2316: instead of better
alstroemeria313#1694: wait if I am doing EMA over the gradients
alstroemeria313#1694: I can't init to zero right
alstroemeria313#1694: I think that part is wrong too
alstroemeria313#1694: I have to init with a copy of the first gradient.
nshepperd#2316: ema over the gradients?
alstroemeria313#1694: yeah
alstroemeria313#1694: ...is this a bad idea
nshepperd#2316: you could init it to the first gradient, or init to 0 and do a bias correction
nshepperd#2316: idk ^_^
alstroemeria313#1694: bc this
alstroemeria313#1694: except EMA instead of picking a large batch size
nshepperd#2316: ah, like using an ema of the gradients, plus an ema of the params?
alstroemeria313#1694: OOPS
nshepperd#2316: i am not sure if it works out that way
alstroemeria313#1694: `dst_params[name].grad.add_(param.grad, alpha=1)`
alstroemeria313#1694: This should be `alpha=alpha`
nshepperd#2316: oh no ^^;
alstroemeria313#1694: So I was adding the EMA model's grad to the main model's
alstroemeria313#1694: Instead of subtracting.
alstroemeria313#1694: yeah
alstroemeria313#1694: specifically an EMA of the params and an EMA over the EMA model's gradients.
alstroemeria313#1694: However this failed to work again, so
nshepperd#2316: btw @alstroemeria313 @BoneAmputee ffhq originals are up now, at https://set.zlkj.in/data/ffhq/in-the-wild-images/
alstroemeria313#1694: yay!
nshepperd#2316: you can download the whole lot with `rsync -rv rsync://set.zlkj.in/data/ffhq/in-the-wild-images .`
alstroemeria313#1694: so your variance reduced grad is your main model's grad plus the mean grad minus the old model's grad?
nshepperd#2316: yep!
alstroemeria313#1694: and the mean grad comes from the old model?
nshepperd#2316: yeah
alstroemeria313#1694: kk
nshepperd#2316: this is that control variate thing again hehe
alstroemeria313#1694: i am clearly doing something very wrong
alstroemeria313#1694: shortly after i kick in SVRG the loss goes up to what it was at the start and doesn't go down
nshepperd#2316: yeahh it was never that bad for me
alstroemeria313#1694: the problem is that with diffusion we never really have "epochs" bc of the sampled timesteps and noise vectors.
nshepperd#2316: the "full dataset" is the cartesian product with every possible timestep and noise vector yeah
alstroemeria313#1694: gonna try something
alstroemeria313#1694: start taking the EMA of the gradients after one epoch
alstroemeria313#1694: and kick in SVRG after two
alstroemeria313#1694: so i actually have a semi-decent EMA model when i start the gradients EMA
alstroemeria313#1694: nope, still broke it
alstroemeria313#1694: oh god is it dropout
nshepperd#2316: eheh
alstroemeria313#1694: dropout lol
alstroemeria313#1694: Seriously I look at any method that requires you to evaluate a model multiple times w/ the same random dropout and go "this is a job for JAX"
nshepperd#2316: yeah, being able to just use the same random key is... sometimes helpful
alstroemeria313#1694: it is a pain in pytorch
nshepperd#2316: but yeah i guess you will need to turn off dropout heh
alstroemeria313#1694: WAIT
alstroemeria313#1694: I never zero the grad of the EMA model
alstroemeria313#1694: It's just accumulating
nshepperd#2316: oops
alstroemeria313#1694: this too
alstroemeria313#1694: "zero the grad"
nshepperd#2316: haha yeah
nshepperd#2316: it is so bad that that exists
alstroemeria313#1694: so you end up with all kinds of helper functions to copy the grad (which you have to do manually because we don't have tree_map)
nshepperd#2316: i forgot to zero the grads with my DT model when i was trying to make rl work lol
Ajay sahu#2540: @Stella Biderman are you working on model distillation?
Ajay sahu#2540: @StellaAthena
nshepperd#2316: "why are the gradient magnitudes just constantly increasing"
alstroemeria313#1694: or combine grads
alstroemeria313#1694: or flatten or unflatten the grad
alstroemeria313#1694: (This one is for second order methods)
Ajay sahu#2540: https://ai.facebook.com/blog/training-with-quantization-noise-for-extreme-model-compression/
Ajay sahu#2540: You can try Quant noise. For model compression
alstroemeria313#1694: @nshepperd ...noooope
alstroemeria313#1694: Loss still went up
Ajay sahu#2540: Further you can improve its speed or inference time via python to c++ compilers
StellaAthena#3530: Yup! Kinda slowly because of compute limitations (scaling experiments are taking precedence) but we are hoping to distill our models. Reach out to @preetham if you want to get involved
Ajay sahu#2540: Okay.. I see
alstroemeria313#1694: oh no.
alstroemeria313#1694: Found the bug
alstroemeria313#1694: The grad accumulation function also copied the params ;_;
Ajay sahu#2540: @preetham see if we can work around with quant noise
alstroemeria313#1694: ...loss is still going up.
alstroemeria313#1694: Actually it exploded.
nshepperd#2316: oh no
alstroemeria313#1694: do i need to try with plain SGD to make sure it works at all
alstroemeria313#1694: you'd think that variance reduced gradients would still work with adam though, idk
nshepperd#2316: ...maybe
alstroemeria313#1694: it is at least kicked in and not blowing up now
nshepperd#2316: it shouldn't explode though lol
alstroemeria313#1694: with sgd
alstroemeria313#1694: just training slowly bc sgd
alstroemeria313#1694: so idk if it's even better lol
alstroemeria313#1694: because i don't have an sgd baseline.
nshepperd#2316: wonder if it would make sense to ema the adam updates instead
nshepperd#2316: lol
alstroemeria313#1694: huh
alstroemeria313#1694: Wow that's even more of a pain in pytorch
nshepperd#2316: hm would you have to snapshot the adam optimizer state or something
nshepperd#2316: or uhh
alstroemeria313#1694: (Uh, I have some code for it somewhere, bc I wrote it for some sort of GAN scheme)
alstroemeria313#1694: No you just make a copy of the model before doing the step
alstroemeria313#1694: Then you subtract the two
alstroemeria313#1694: Then accumulate that into an EMA series of buffers.
alstroemeria313#1694: it doesn't change the underlying optimizer
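A rough sketch of that bookkeeping in PyTorch (toy model; the decay value is arbitrary):
```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
decay = 0.99

# One EMA buffer per parameter, tracking the optimizer's update direction.
update_ema = [torch.zeros_like(p) for p in model.parameters()]

def train_step(x, y):
    before = [p.detach().clone() for p in model.parameters()]  # copy params pre-step
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        for ema, b, p in zip(update_ema, before, model.parameters()):
            # EMA of the Adam step itself (post - pre), not of the raw gradient.
            ema.mul_(decay).add_(p - b, alpha=1 - decay)

train_step(torch.randn(4, 8), torch.randn(4, 8))
```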
nshepperd#2316: ahh hm
alstroemeria313#1694: I, hm
alstroemeria313#1694: This seems weird though.
alstroemeria313#1694: Because for this you want the Adam step for the main model, the Adam step for the old model, and the average Adam step for the old model
alstroemeria313#1694: And the old model has to *not actually update*
alstroemeria313#1694: Right?
nshepperd#2316: yeah
alstroemeria313#1694: Like we have to put it back.
alstroemeria313#1694: And just maintain optimizer states like we *were* updating it.
alstroemeria313#1694: So we can get what the steps would have been
alstroemeria313#1694: Wow that's a colossal pain
alstroemeria313#1694: In PyTorch
alstroemeria313#1694: Fortunately Adam optimizer states do not depend on the params
alstroemeria313#1694: Well.
alstroemeria313#1694: They are calculated from the series of gradients and this *screens off* the states of the params.
nshepperd#2316: yeah
alstroemeria313#1694: lol https://cdn.discordapp.com/attachments/729741769738158194/907680795316088922/demo_00005-10.png
alstroemeria313#1694: 5 epochs.
alstroemeria313#1694: I think it's just bad due to SGD being slow though.
alstroemeria313#1694: Bc it was also going slow before SVRG kicked in.
nshepperd#2316: hehe
alstroemeria313#1694: i really don't get why it's so bad with adam though
nshepperd#2316: yeah idk that is weird
nshepperd#2316: is it with momentum
nshepperd#2316: this thing is sort of a pseudo-momentum in itself
nshepperd#2316: this thing might also make the variance non-stationary, or something
nshepperd#2316: so maybe you need to... reduce beta2?
nshepperd#2316: or just reduce the lr to compensate
nshepperd#2316: gotta sleep now. good luck @alstroemeria313 eheh
aaronrmm#3198: @aaronrmm ๐
ewald#7730: has anyone here tried anything with spiking neural networks?
cyb3rm0nk#2938: @alstroemeria313 https://github.com/BaguaSys/bagua?twclid=11458146829661614083
Eleiber#8347: Will GPT-Neo run in CPU? Or does it need to be on GPU
Sid#2121: if you have a few months to wait for an output it should run pretty well on CPU
Eleiber#8347: lol, so I guess no
bmk#1476: if you need to ask this question, GPT-Neo probably isn't the tool for you
EricHallahan#1051: 125M works pretty well though.
ewald#7730: how many weeks per character output? xD
gabriel_syme#3220: these look promising, we can use this model to generate noise and train models with it
EricHallahan#1051: Ain't too slow actually.
ewald#7730: oh, ok
EricHallahan#1051: My guidance on hardware for the GPT-Neo models is in the FAQ by the way.
https://www.eleuther.ai/faq
nev#4905: serious question:
is schmidhuber scalepilled?
Parker#3197: https://cdn.discordapp.com/attachments/729741769738158194/907896383573467137/maybe.png
Parker#3197: maybe
bmk#1476: is schmidhuber alignmentpilled?
ewald#7730: where can i get those alignmentpills, and are they better than ivermectin? ๐
ewald#7730: no one? O_o
alstroemeria313#1694: https://twitter.com/bremen79/status/1458056313544581123 oh huh
alstroemeria313#1694: "You can publicly comment on ICLR submissions."
alstroemeria313#1694: Maybe I should drop a comment on the Progressive Distillation paper saying I replicated it then
alstroemeria313#1694: "also, you need to discuss how to compute the student target for the v objective"
alstroemeria313#1694: > However, the proposed solutions, this is, the different proposed model parametrizations and loss weightings, seem to come primarily from trial-and-error and aren't overly well motivated. Is there anything we can say about which parametrizations and loss weightings should be optimal with respect to certain criteria? Can we provide more insights here?
this is also a good comment
gabriel_syme#3220: I liked that they do this a lot as well
gabriel_syme#3220: Also good morning :)
alstroemeria313#1694: i think their snr+1 weighting (when used with a cosine noise schedule during training) actually is optimal from a point of view of weighting errors in the model output equally according to the magnitude of their local effect on the reverse diffusion process.
alstroemeria313#1694: good morning!
gabriel_syme#3220: from that statement, I like a lot of the second part, but the first - is it supposed to be a bad thing?
gabriel_syme#3220: trial and error is how we find new things, right? Or is the problem here that they didn't provide the history of trials, and specifically errors, that led to this?
gabriel_syme#3220: p.s. I'd love to see those as well if they exist
alstroemeria313#1694: they did show the trials, they have a table of combinations of different parameterizations and weightings
gabriel_syme#3220: oh :guilty:
alstroemeria313#1694: but the thing they don't show is whether you'd expect any of the combinations to be *optimal*
alstroemeria313#1694: like maybe there is better stuff available to find
gabriel_syme#3220: hmm I see, that sounds reasonable I guess though
alstroemeria313#1694: however i think some of their things actually are optimal and they manage not to realize it/bring it up
alstroemeria313#1694: > (when used with a cosine noise schedule during training)
also snr+1 weighting has a footgun for the unwary
gabriel_syme#3220: this sentence makes me feel a bit dumb (note: I am).
gabriel_syme#3220: do you think a visual explanation of the differences and details of the diffusion approaches is close to being possible? asking for a friend
alstroemeria313#1694: you really do need to train w/ timesteps uniformly sampled from a cosine noise schedule. all they say is "we used cosine btw" but if you actually try to train with snr+1 and a ddpm schedule, you will get bad results
gabriel_syme#3220: is there a chance they got lucky? like trying that first and it worked?
alstroemeria313#1694: probably
gabriel_syme#3220: is it a reasonable thing to try first
alstroemeria313#1694: like i got unlucky trying it with ddpm
gabriel_syme#3220: oh ok I guess then
alstroemeria313#1694: if they are already using cosine for other reasons (like because they want to train at 0 and infinite snr)
alstroemeria313#1694: they... have a kind of :bigbrain: geometric intuition diagram, idk if it's what you want at all
nshepperd#2316: @alstroemeria313 morning~
alstroemeria313#1694: morning!
nshepperd#2316: i realized something, which i am testing now. but i am pretty sure that you can bypass all the annoyances with passing MakeCutouts as a static argument (for pmap and jit etc) by instead making it implement the pytree functions
nshepperd#2316: which tell jax how to split stuff into tensors and static stuff
nshepperd#2316: and then you can just pass it as a regular argument
alstroemeria313#1694: oh... i don't understand what that means ^^;;
nshepperd#2316: eheh
alstroemeria313#1694: and. discussion as to why their weighting/parameterization was optimal under some criteria would prevent this.
nshepperd#2316: i will implement it~
alstroemeria313#1694: bc the ddpm schedule obviously has non-constant delta phi and you want to train with errors weighted by delta phi.
alstroemeria313#1694: Or at least that is a reasonable choice.
alstroemeria313#1694: Another reasonable choice is the reweighting @nshepperd and I worked out to train with errors weighted the way the eps objective would weight them by default.
alstroemeria313#1694: I am not 100% sure why both of these work well given that they are actually quite different relative weightings...!
alstroemeria313#1694: ...Which one actually corresponds to the ELBO
alstroemeria313#1694: It's snr weighting (the way eps would do it by default) isn't it, using the same noise schedule as you would use in inference?
alstroemeria313#1694: since eps objective corresponds to predicting the next layer of the latent variables (the DDPM noise that was added)?
nshepperd#2316: iirc the ddim paper has a funny bit where they justify that the objective is the same by assuming that you train a separate model for every timestep. which of course makes the weighting irrelevant
alstroemeria313#1694: eheh...
alstroemeria313#1694: and v objective/snr+1 weighting has some sort of neural ode interpretation right?
nshepperd#2316: like, it's predicting the... direction of the change in the image wrt timesteps?
alstroemeria313#1694: if you are using cosine noise schedule it isn't just the direction, it actually is the gradient
alstroemeria313#1694: hm
alstroemeria313#1694: well, it's the gradient if you parameterize timesteps as phi
alstroemeria313#1694: and then cosine noise schedule corresponds to constant delta phi per step
nshepperd#2316: so, this https://cdn.discordapp.com/attachments/729741769738158194/907971866667393064/pytree_cutouts.py
alstroemeria313#1694: and other noise schedules correspond to non-constant step size schedules in phi
nshepperd#2316: instead of implement hash and eq and stuff, you implement these flatten functions
alstroemeria313#1694: ohh
nshepperd#2316: and then you can pass make_cutouts as a normal argument
nshepperd#2316: and it won't rejit it if only the cut_pow has changed
nshepperd#2316: because that can just be treated as a float tensor
nshepperd#2316: like you don't need to make it static
nshepperd#2316: i am using this now bc it means i can just dump the make_cutouts in a kwargs dictionary along with everything else
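The attached file isn't reproduced here, but the general pattern looks roughly like this (toy class, using jax.tree_util.register_pytree_node_class; the real MakeCutouts would have more fields):
```python
import jax
import jax.numpy as jnp
from jax.tree_util import register_pytree_node_class

@register_pytree_node_class
class MakeCutouts:
    """Illustrative stand-in; only the pytree plumbing is the point."""

    def __init__(self, cut_size, cut_pow):
        self.cut_size = cut_size   # static: changing it should trigger recompilation
        self.cut_pow = cut_pow     # dynamic: traced like any other float/array

    def tree_flatten(self):
        children = (self.cut_pow,)    # leaves JAX will trace
        aux_data = (self.cut_size,)   # static data, part of the jit cache key
        return children, aux_data

    @classmethod
    def tree_unflatten(cls, aux_data, children):
        return cls(aux_data[0], children[0])

@jax.jit
def f(make_cutouts, x):
    # make_cutouts can be passed as a normal argument; no static_argnums needed.
    return x * make_cutouts.cut_pow

print(f(MakeCutouts(224, 1.0), jnp.ones(3)))
print(f(MakeCutouts(224, 2.0), jnp.ones(3)))  # no re-jit: only the traced leaf changed
```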
alstroemeria313#1694: ooh
gabriel_syme#3220: woah this is super cool
gabriel_syme#3220: can you make diffusion training music?
https://psc-g.github.io/posts/musicode/ldd/
nshepperd#2316: hm i think i could probably use a similar trick to like. pass the cond_fn in as an object
nshepperd#2316: which contains all its parameters
nshepperd#2316: wonder if that would be better
nshepperd#2316: ah so if you use v with snr+1... does it correspond to the step sizes that you get with the ddpm sampling schedule?
alstroemeria313#1694: no
nshepperd#2316: oh
alstroemeria313#1694: it's cosine
alstroemeria313#1694: snr+1 is the "natural" weighting for v, it's what you get if you just use mse loss without reweighting
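One way to make the "natural weighting" statement explicit, assuming the variance-preserving convention $\alpha_t^2 + \sigma_t^2 = 1$ and the v definition from the progressive distillation paper: with $z_t = \alpha_t x + \sigma_t \epsilon$ and $v = \alpha_t \epsilon - \sigma_t x$, we get $x = \alpha_t z_t - \sigma_t v$, so $\|\hat{v} - v\|^2 = \tfrac{1}{\sigma_t^2}\|\hat{x} - x\|^2 = (\mathrm{SNR}(t) + 1)\,\|\hat{x} - x\|^2$, where $\mathrm{SNR}(t) = \alpha_t^2 / \sigma_t^2$.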
nshepperd#2316: ah right. i meant if you use v with unreweighted ddpm
alstroemeria313#1694: oh
nshepperd#2316: does that then correspond to the step sizes you would get with ddpm sampling
alstroemeria313#1694: what you get is errors weighted equally *per step*
alstroemeria313#1694: and some of the steps are super tiny and others are big
nshepperd#2316: ahh
alstroemeria313#1694: if you train w/ continuous cosine timesteps, this gives you a good weighting for any noise schedule you sample with i think.
alstroemeria313#1694: bc you can pick any noise schedule and the errors on each step will have been weighted relative to delta phi for that step.
alstroemeria313#1694: so when i tried this. this is why the model memorized the outlier reals and then couldn't deal with fine pencil sketches.
alstroemeria313#1694: during sampling, the differentiation into outlier reals and the pencil sketch region was done in the early steps and the pencil sketch details put in in the later steps
alstroemeria313#1694: and the early steps were *way* overweighted compared to their delta phi
nshepperd#2316: huhh
alstroemeria313#1694: it misallocated the limited model capacity.
cfoster0#4356: I think to get what corresponds to the VLB you need the weighting to be constant, based on the VDM paper
alstroemeria313#1694: ohh
nshepperd#2316: does this imply that the amount of *information* gained/lost in a certain part of the diffusion process is roughly proportional to its delta phi
alstroemeria313#1694: That's kind of worse visually though
nshepperd#2316: like it's a better measure of how much stuff the model needs to learn in that part than delta of ddpm t
alstroemeria313#1694: that seems... weird? if it's actually *information* then it should correspond to the ELBO right?
nshepperd#2316: i don't know ^^;
krispyCypher#4617: hi guys. I am new to this channel. I am a bit confused. How do you generate art with this AI? Or can you only generate prompts for other AIs like CLIP...?
nshepperd#2316: i think by information i mean like KL divergence between the corresponding noised image distributions? or something
alstroemeria313#1694: oh. so like, MSE?
alstroemeria313#1694: Can we train v objective models with negative log likelihood between diagonal Gaussians.
alstroemeria313#1694: (Yes, but what would we do with them)
cfoster0#4356: Not sure what you mean. If you're just looking for AI art notebooks, I'd check out the #art channel. Most of the channels here are focused on other things
krispyCypher#4617: okay thanks
nshepperd#2316: is there some sort of interpretation of a NLL loss where it tells you what ddim eta to use for a step
alstroemeria313#1694: i do not know ^^;
nshepperd#2316: like um... instead of generating one noisy image, you generate two images from the same noising process at different timesteps. you condition the model on both timesteps and give it the first image as input. then train it to output a v and eta
nshepperd#2316: the train loss is the nll of the image after a ddim step ending up at the second image
alstroemeria313#1694: oh
nshepperd#2316: when the second timestep is t=0 this reduces to pred objective
alstroemeria313#1694: but won't it always do eta=0
nshepperd#2316: i don't think so
alstroemeria313#1694: bc anything else injects noise?
nshepperd#2316: because if eta=0 and the output v is not exactly the target v, the nll is โ
alstroemeria313#1694: wait what is "the same noising process", is this the markovian ddpm process or the ddim process?
nshepperd#2316: i don't know ^^;;
alstroemeria313#1694: bc each value of eta corresponds to a different forward process
nshepperd#2316: the choice of process matters with this yeah
nshepperd#2316: maybe ddim would be better
tpapp157#3643: what's the best way to host a NN model as a public api?
gabriel_syme#3220: Gradio is really cool
gabriel_syme#3220: Oh wait, is this serious hosting. Then not sure
nshepperd#2316: actually this idea is probably backward. i'm not sure you want to *add* noise when the model is less certain about v
nshepperd#2316: but you could like. just train a v model with nll. then during sampling take steps whose delta phi is inversely proportional to the predicted variance of v
nshepperd#2316: (make it output a scalar variance)
nshepperd#2316: do that instead of having any fixed sampling schedule, heh
nshepperd#2316: that way you will have equal errors per sampling step?
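An illustrative sketch (not from the chat) of the "v model trained with NLL" idea above, assuming a model that returns both a v prediction and a per-sample scalar log-variance; the names and noise-schedule interface are assumptions:
```python
import torch

def v_nll_loss(model, x0, t, alpha_t, sigma_t):
    """Gaussian NLL on the v target with a learned scalar log-variance.
    alpha_t/sigma_t are assumed to broadcast against x0; the additive
    0.5*log(2*pi) constant is dropped."""
    noise = torch.randn_like(x0)
    x_t = alpha_t * x0 + sigma_t * noise        # forward-noised input
    v_target = alpha_t * noise - sigma_t * x0   # standard v-objective target
    v_pred, logvar = model(x_t, t)              # assumed interface: (v, log sigma^2) per sample
    logvar = logvar.view(-1, *[1] * (x0.dim() - 1))  # broadcast the scalar variance over pixels
    nll = 0.5 * (logvar + (v_target - v_pred) ** 2 / logvar.exp())
    return nll.mean()
```
The predicted variance could then drive the adaptive-step idea above: take larger delta-phi steps where the model reports low variance.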
Louis#0144: There's doggos in my apartment building
Louis#0144: O wrong channel
Louis#0144: shoot
nshepperd#2316: actually this is probably a good idea for my upscaler
alstroemeria313#1694: wasn't there some paper about fast sampling from stuff like upscalers
nshepperd#2316: bc the ddpm schedule for sampling spends a long time on noisy timesteps where the model is basically just copying the low res to the output and the error is basically 0
alstroemeria313#1694: oh
alstroemeria313#1694: so instead of eta
alstroemeria313#1694: ohh
alstroemeria313#1694: now i can't find it
alstroemeria313#1694: is there some way to speed up training too
alstroemeria313#1694: like sample less in the high noise levels during training
alstroemeria313#1694: but with higher weight to make up for it...
alstroemeria313#1694: ...is this just the VDM idea again
nshepperd#2316: hmm i am not sure
nshepperd#2316: but vdm didn't use v did it
alstroemeria313#1694: it was eps
alstroemeria313#1694: i mean the general learnable schedule idea
nshepperd#2316: yeah i was thinking it might be similar
alstroemeria313#1694: wait what if you just empirically measured the losses at each timestep
alstroemeria313#1694: during training
alstroemeria313#1694: and derived a schedule from it
alstroemeria313#1694: i guess it wouldn't automatically generalize though.
nshepperd#2316: ohh
alstroemeria313#1694: and would not be alterable depending on the thing you were upscaling
nshepperd#2316: should work though
nshepperd#2316: and be better than ddpm?
alstroemeria313#1694: yeah
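A rough sketch (illustrative, not from the chat) of the "measure per-timestep losses during training and derive a schedule from them" idea; the binning, EMA decay, and equal-loss-per-step allocation are all assumptions:
```python
import torch

n_bins = 64
loss_ema = torch.full((n_bins,), 1e-8)  # running average of training loss per timestep bin

def record(t, loss, decay=0.99):
    """t in [0, 1), loss: per-example losses from a training batch."""
    bins = (t * n_bins).long().clamp_(0, n_bins - 1)
    for b, l in zip(bins.tolist(), loss.detach().tolist()):
        loss_ema[b] = decay * loss_ema[b] + (1 - decay) * l

def make_schedule(n_steps):
    """Place sampling timesteps so each step covers roughly equal accumulated loss."""
    w = loss_ema / loss_ema.sum()
    cdf = torch.cat([torch.zeros(1), w.cumsum(0)])            # length n_bins + 1
    targets = torch.linspace(0, 1, n_steps + 1)
    idx = torch.searchsorted(cdf, targets).clamp(0, n_bins)   # invert the empirical CDF
    return idx.float() / n_bins                               # timesteps in [0, 1]; reverse for sampling
```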
quinn#9100: ```Location: Levine 307 or on Zoom
Homework: Read the first two sections of Flash Fill (just a page and a half) to familiarize yourself with interactive input-output program synthesis.
Abstract: We propose a novel framework called Quivr for synthesizing queries to identify events of interest in video data. For instance, Quivr can be used to identify instances of human driving behaviors such as lane changes or left turns, which are important for designing planning algorithms for autonomous cars. Our queries operate over object trajectories predicted by a deep object tracking model. Then, a query consists of regular expression operators used to compose underlying predicates (e.g., whether a car is in a lane), and selects a subset of trajectories. A key challenge is that queries are difficult for end users to develop: queries must reason about complex spatial and temporal patterns in object trajectories in order to select trajectories of interest, and predicates often include real-valued parameters (e.g., whether two cars are within a certain distance) that can be tedious to manually tune. Thus, Quivr automatically synthesizes queries given examples of trajectories that the query should match. To make the synthesis procedure efficient, we use overapproximations to prune invalid branches of the query search space, including using a quantitative variant of our query semantics to efficiently prune the search space over parameter values. We also propose two optimizations for speeding up the execution of our queries. Finally, we leverage an active learning strategy to disambiguate between multiple consistent candidate queries by collecting additional labels from the user. We evaluate Quivr on a benchmark of 11 tasks, and demonstrate that it can synthesize accurate queries for each task given just a few examples, and that our pruning strategy and optimizations substantially reduce synthesis time. ```
quinn#9100: I have the zoom link in my email inbox, can forward it to you if you want, it's friday at 1p east coast US.
bmk#1476: did he say.. pile??
tamay#2378: I'm currently working on a project to build a large dataset of parameter counts and training-compute estimates of large models. However, we feel like we're lacking sufficient insights into reasonable hardware utilization estimates (our experience with large scale deployments is limited.)
Therefore, we'd be curious about your experience. **What utilization rates do you find reasonable? For which domains?**
tamay#2378: Some numbers I've seen so far: 33% for GPUs from OpenAI's AI and Compute (https://openai.com/blog/ai-and-compute/); ~1/4 from Narayanan et al. (https://arxiv.org/pdf/2006.09503.pdf) for typical model parallelism.
StellaAthena#3530: We can get around 1.2e14 FLOP/s for A100s at scale
StellaAthena#3530: So we're getting the same performance as OAI it seems
tamay#2378: Interesting! I see Nvidia reports a peak BFLOAT16 Tensor Core performance of 312 TFLOPS, so a 1.2E14 FLOPS would imply a utilization rate of ~0.4 (source: https://www.nvidia.com/en-us/data-center/a100/)
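For reference, the arithmetic behind those figures (peak taken from the NVIDIA spec sheet linked above):
```python
peak_bf16 = 312e12   # A100 peak BF16 tensor-core throughput, FLOP/s
achieved = 1.2e14    # per-GPU throughput quoted above, FLOP/s
print(f"utilization ≈ {achieved / peak_bf16:.0%}")  # ≈ 38%
```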
StellaAthena#3530: The OAI blog post you're linking to predates A100s, so I wouldn't put a whole lot of confidence into a claim that we are outperforming them or something
kurumuz#5695: sounds like its on their tensor cores
kurumuz#5695: you cant exactly always utilize them
kurumuz#5695: with only one GPU you will hit crazy memory bottlenecks as well
tamay#2378: Yep. To be clear, this is the utilization rate they used when calculating the training compute used across a bunch of models from 2012 onward. They didn't claim that they themselves operated with those rates, but assumed that previous researchers would have faced those rates.
StellaAthena#3530: Ah
kurumuz#5695: and most transformers are not wide enough for full efficient utilization
EricHallahan#1051: > Two methodologies were used to generate these data points. When we had enough information, we directly counted the number of FLOPs (adds and multiplies) in the described architecture per training example and multiplied by the total number of forward and backward passes during training. When we didn't have enough information to directly count FLOPs, we looked GPU training time and total number of GPUs used and assumed a utilization efficiency (usually 0.33). For the majority of the papers we were able to use the first method, but for a significant minority we relied on the second, and we computed both whenever possible as a consistency check. In the majority of cases we also confirmed with the authors. The calculations are not intended to be precise but we aim to be correct within a factor 2-3. We provide some example calculations below.
kurumuz#5695: PP is good in such cases
StellaAthena#3530: In general, I would say that if you're hitting 1/3rd the theoretical max you're doing a good job
kurumuz#5695: yeah definitely
StellaAthena#3530: You can squeeze a little more than that out (our exact number is 1.17e14 FLOP/s, ~37%), but not much.
StellaAthena#3530: I'm talking about training large language models, no comment on anything else.
tamay#2378: Gotcha, that's helpful. Thanks!
kurumuz#5695: small models will do better like resnet or bert
kurumuz#5695: a lot of work spent on optimizing latency on these models
StellaAthena#3530: Because you don't need to worry about bandwidth?
kurumuz#5695: pretty much, better utilization of tensor cores as well
kurumuz#5695: if you are memory limited you will hit CUDA cores as the fallback because you cant keep the tensor cores fed. @StellaAthena
kurumuz#5695: this is mostly where those theoretical limits fail
kurumuz#5695: not just this, you have very specific size requirements so you can utilize each tensor tile properly
tamay#2378: This is very helpful, @kurumuz @StellaAthena, thanks. Do you have any sense of how much better the utilization rates have become over time, or perhaps how these vary across Tensor cores vs. GPUs?
Sid#2121: to be fair, on smaller models we get much higher efficiency
Sid#2121: it's heavily dependent on the size of the GPU cluster and the size of the model
Sid#2121: with e.g. a 2-300M param model on a single node, we can get over 50% utilization, or around 150-160 TFLOPS
Sid#2121: actually generally on a single node, we get pretty good utilization
Sid#2121: the bottleneck comes from the internode communication
Sid#2121: https://arxiv.org/abs/2104.04473 this paper is probably a pretty useful read. If you A) have a really wide model (so wide that it would probably degrade performance cough megatron cough) and B) spend millions and a lot of time on setting up all the best interconnect & software stack, you can actually get supra linear performance as you scale up (bigger matmuls are more performant)
Sid#2121: it just takes a shit ton of tweaking
ewald#7730: is network latency/bandwidth the problem, or something else?
StellaAthena#3530: Yup, exactly that
ewald#7730: 10GBit/s network cards are cheap nowadays
ewald#7730: why not trunk them?
kindiana#1016: 10gbps is not a lot compared to memory bandwidth which is like 2TBps per gpu lol
ewald#7730: ok, good point xD
ewald#7730: dual port 40gbit/s cards are also more and more common. but still... 80 vs 2,000 ...
Sid#2121: network latency mostly (for pipeline parallel send / recv and data parallel allreduce), then also the tensor parallel all reduce takes up a significant portion of the runtime, too. But that's intra node.
ewald#7730: so... how is that problem solved in practice? huge PCIe bus with dozens of GPUs? xD
Sid#2121: at the scales we're training at latency is more of an issue than bandwidth, we're not really saturating our bandwidth limits quite yet
Sid#2121: the tensor parallel problem?
ewald#7730: no, for the problems where the network is the bottleneck
Sid#2121: we use ~100 GPUs and other labs use thousands, so we're a little beyond the capabilities of PCIe lol
Sid#2121: we use infiniband NICs
Sid#2121: 2 100GB/s NICs per node iirc
Sid#2121: DGX is 4 or 8 100GB/s NICs per node
Sid#2121: i think
Sid#2121: there's also a lot of hyperparameters to tweak and your performance is heavily dependent on the batch size / microbatch size / pipeline parallel settings
ewald#7730: data rate the same as 100 GBit/s ethernet. but the latency will be much better with infiniband, right?
Sid#2121: correct
ewald#7730: yeah, makes sense
ewald#7730: if you are tweaking hyperparameters and running completely separate models on each node, then the network isn't the bottleneck at all, of course
Sid#2121: ofc, but if you want to train chonky models you have to use all the nodes in conjunction hah
StellaAthena#3530: I think you misunderstand. The hyperparameters include the settings for how you do distributed learning. Sid isn't talking about doing many runs with different h params simultaneously.
ewald#7730: ah! got it
EricHallahan#1051: There are also a bunch of knobs to tweak for things like message sizes and routing that aren't even model hyperparameters.
ewald#7730: but those play a huge role in training speed
alstroemeria313#1694: yay let's try adam beta_1 0.999
alstroemeria313#1694: this is slowing training down
alstroemeria313#1694: it's still bad though
alstroemeria313#1694: needs EMA over the weights still.
chilli#5665: if you wanna do that use something like vmap
chilli#5665: overall they've gotten worse
chilli#5665: basically
chilli#5665: since compute increases much faster than memory/network bandwidth
gabriel_syme#3220: Would a close-to-ideal level of utilization be using a single DGX node, for e.g.? I'm thinking here of large models that fit in one. Or is latency between GPUs there also an issue?
gabriel_syme#3220: alstro, nshepperd, wen is your paper coming out?
https://openreview.net/forum?id=TKMJ9eqtpgP
BoneAmputee#8363: *ctrl+f "katherine"*
*no results*
/tableflip
EricHallahan#1051: (╯°□°）╯︵ ┻━┻
EricHallahan#1051: There you go.
alstroemeria313#1694: isn't this the one where they fine-tune the diffusion model
alstroemeria313#1694: they invert an image, fine-tune the diffusion model with something like the stylegan-nada loss, and run DDIM sampling forward
alstroemeria313#1694: i have not actually tried this method
alstroemeria313#1694: it's better than the previous stuff bc with diffusion you can invert arbitrary images
alstroemeria313#1694: whereas with stylegan-nada you had to project into the original model's latent space to edit stuff.
alstroemeria313#1694: tbh all i've ever done with inversion is interpolation videos between inverted images
gabriel_syme#3220: why would a paper with decent reviews be withdrawn? Just to go public?
gabriel_syme#3220: I find it curious
Emad#9608: query: did the community ever come to some type of tacit consensus on what we should broadly call multimodal type models? Assuming we don't go with foundational models a la Stanford
bmk#1476: I just call them "big models"
gabriel_syme#3220: I call them multimodal
bmk#1476: simple and descriptive
Kia#2550: Multimodal, Or just "Models"
Sid#2121: When I think of Multimodal I think a Vision Language model, or something similar. Is this what you mean? If so, I think that's a pretty distinct model to foundational / :chonk: models
gabriel_syme#3220: you reminded me to make subfolders in my multimodal folder
ersatz#0001: Am I understanding right that :schmidhuber:'s timeline for AGI is ~2030?
ersatz#0001: just checking
louis030195#2462: Hey, I asked a question on lesswrong.com about information / dataset augmentation using large language models, but no answer so far, does anyone have an idea? https://www.lesswrong.com/posts/hF7wSwcDyBwrbgwEM/does-new-information-exist
gabriel_syme#3220: https://www.wired.com/story/artificial-intelligence-turing-test-economics-business/
StellaAthena#3530: Can you explain why you think that the premise here could plausibly be true? I'm having trouble seeing anything wrong with the naive "of course new information exists"
StellaAthena#3530: 20,000,000 - 1 is prime. Most people here didn't know that. That's new information to them.
Daj#7482: ~~something something logical uncertainty~~
Daj#7482: but I don't think that's what they meant
StellaAthena#3530: I don't know what they mean, which is why I'm asking
Daj#7482: Yeah I was just (not super seriously) commenting that whether "20,000,000-1 is prime" is "new" information depends on your assumptions about logical uncertainty
louis030195#2462: I think what I meant is more like, there is a limit to information in systems, but maybe I lack some knowledge of systems, entropy, etc.?
StellaAthena#3530: Does "yes there is a maximum amount of information that can be encoded in k bits" answer your question?
louis030195#2462: probably
StellaAthena#3530: Yes there is a maximum amount of information that can be encoded in k bits
kurumuz#5695: isnt that very obvious :thonk:
kurumuz#5695: like yeah bits are finite states
louis030195#2462: For example I was able to augment my dataset by 1000% using GPT-3 by sampling the dataset as prompt, but at some point I think the information won't be "new"
StellaAthena#3530: No
StellaAthena#3530: I mean
StellaAthena#3530: Maybe
StellaAthena#3530: But it's mathematically possible that it's all new
louis030195#2462: thanks
louis030195#2462: the fact that we can't generate from nothing intrigues me
tpapp157#3643: Depends what you define as "new". Are two samples separated by 1e-10 different? Technically yes but practically no. If you're drawing samples from a distribution, the difference between the sampled distribution and the true distribution will go to zero in the limit. And sure it may never actually hit zero but that's beside the point because the meaningful question is when do you really start hitting diminishing returns.
louis030195#2462: yeah, that's well expressed
louis030195#2462: I might be mixing stuff-up but didn't the big bang start from nothing? but it's expanding infinitely?
tpapp157#3643: That's a common misconception. Of course there's a lot of speculation but I think consensus right now is that the universe has always been infinite in size and that at the big bang the infinite universe was also essentially infinitely dense.
tpapp157#3643: But there are all sorts of theories out there like bubble universes and whatnot.
louis030195#2462: yeah, we don't understand very well the big bang yet I guess
mgostIH#0245: @louis030195 the overall information of the universe may be very low, but a section of it can take far more to describe
mgostIH#0245: Consider a random sequence of letters, the information to describe it is less than this sentence, but you can find the entire work of Shakespeare inside of it, if you focused only on that portion the information required is higher
StellaAthena#3530: Okay, but the core question here of "is there a k such that we can't get new information after we get k bits" is "no"
mgostIH#0245: But to answer your thing regarding augmentations, i think what's happening is that the network gets the same information multiple times under different perspectives + the implied symmetries of the augmentation
Daj#7482: https://youtu.be/D71zxGRhuxE?t=1030
Is this saying that WuDao was trained on CPUs?
Daj#7482: Am I hearing this right?
kurumuz#5695: why is that weird though
kurumuz#5695: arent most designs pretty much CPUs nowadays
Daj#7482: I dunno, just surprised
kurumuz#5695: Tesla, Cerebras
Daj#7482: Neither of those are CPUs lol
kurumuz#5695: they are wdym
Daj#7482: Ok, Cerebras is a CPU the same way a GPU is a CPU
Daj#7482: true but misleading
kurumuz#5695: like do you mean CPU CPUs?
Daj#7482: CPU CPUs
kurumuz#5695: wtf
Daj#7482: at least that's what I'm hearing, maybe the guy just misspoke
kurumuz#5695: idk he calls V100s TPUs as well?
kurumuz#5695: am i mishearing
Daj#7482: I hear "equivalent to 100k V100s, but on CPUs"
Daj#7482: or something
kurumuz#5695: oh
kurumuz#5695: yeah took me a while
kurumuz#5695: kinda crazy
kurumuz#5695: what in the world
Daj#7482: and their "GLM" formulation just looks like AR with extra steps
Daj#7482: wait no, it's more like T5 I think?
EricHallahan#1051: https://arxiv.org/abs/2103.10360
Daj#7482: so it's masking out spans of tokens
Daj#7482: Seems...not obviously stupid?
Daj#7482: BERT but mask tokens can replace arbitrarily many tokens
Daj#7482: nope not even they also do something weird with the masking, huh
alstroemeria313#1694: *sigh*
alstroemeria313#1694: so like, is there any autoencoder type thing that gives you a latent space that you can sample from *and* which doesn't have a VAE-type sampling bottleneck that optimizes a lower bound on the thing you want instead of the thing you want directly
alstroemeria313#1694: *and* which has a *small*, low-dimensional latent space.
alstroemeria313#1694: Like what do I have to do. Train a diffusion autoencoder and *then* train a *second* diffusion model *in the latent space* so I can sample from the latent space.
alstroemeria313#1694: Like literally have an MLP resnet model or something that transforms Gaussian distributions to the first autoencoder's latent space distribution.
alstroemeria313#1694: ...I mean. That would work.
StellaAthena#3530: @alstroemeria313 Iโm not an expert on autoencoders, but can you explain why an auto-encoding transformer doesnโt work
alstroemeria313#1694: you mean like a masked language model?
StellaAthena#3530: Yeah
alstroemeria313#1694: oh. i just need to be able to pick a random latent
alstroemeria313#1694: and those output sequences of latents?
alstroemeria313#1694: also they need discrete sequences as inputs i think?
StellaAthena#3530: Youโre working with continuous inputs?
ewald#7730: isn't the information in a random sequence of letters more?
ewald#7730: like... the hardest image to compress is random noise
alstroemeria313#1694: images, yes
mgostIH#0245: Storing a realization of a random sequence of letters is, but describing the space of random sequences can be done by a sentence
mgostIH#0245: In any case you can also skip the problem by considering other normal numbers or fractals
ewald#7730: what if it isn't really random, but just extremely compressed data?
ewald#7730: it's very hard to find out which one it is, right?
ewald#7730: the Kolmogorov complexity of 1 GB of truly random noise is >=1GB
mgostIH#0245: Can also go to the field of cryptographic RNGs, the fundamental problem is that finding what deterministic algorithm describes a sequence is pretty much impossible in general
ewald#7730: yeah, that means its information content is very high.
ewald#7730: but it doesn't mean that it's useful information
mgostIH#0245: Ye but what I meant is that you can have very simple infinite objects that have a subset with more complexity than the whole
ewald#7730: sry, i didn't understand that sentence?
ewald#7730: a simple infinite object... like... a short description of a fractal for instance?
tpapp157#3643: You could try training a diffusion model like normal but which is also conditioned on a small latent vector of random noise. That was an experiment I was thinking of trying at some point.
alstroemeria313#1694: but then how do you get it to use the noise
alstroemeria313#1694: the noise isn't informative
mgostIH#0245: Ye or Champernowne's constant too
mgostIH#0245: I'd use pi but it's unproven whether it's normal too
tpapp157#3643: Well you'd have to make sense of the latent space retroactively like is done with GANs.
alstroemeria313#1694: um
alstroemeria313#1694: i think it would just learn to ignore the noise though.
ewald#7730: ok, so you're saying that it contains a subset that has more complexity than the short description of the fractal. right?
mgostIH#0245: ye
ewald#7730: like finding a sentence or even a whole book starting at one position of pi or whatever
tpapp157#3643: Maybe. Maybe not. Unconditional GANs and Diffusion models are already learning to make sense of pure noise distributions.
ewald#7730: so...
ewald#7730: in my opinion the additional information that you'd have to bundle with pi or the fractal is the position of this information
ewald#7730: how long is the description of the position of the information? pretty sure it's longer than the information itself.
mgostIH#0245: @ewald Sure thing, but my point is that our universe could be the exact same, we may be observing complexity arising from our position and nothing else
ewald#7730: ok, that's quite possible. but is this about complexity, or about information?
mgostIH#0245: Both in a way, I meant that "information can't arise from nothing" could be just an illusion of our position within the universe, on the whole it might be as simple as the description of a fractal, but locally we need to come up with a ton of explanations for all we see
ewald#7730: whew. ok, no idea if that's reasonable or not. xD
ewald#7730: it reminds me a bit of the bolzmann brain idea, although it's different of course
ewald#7730: still... we see emergence effects very often, and they don't depend on position. so this might be evidence against it?
alstroemeria313#1694: so i guess. should i stop gradient the encoder's latents before feeding them into the second diffusion model to train it
quinn#9100: anyone have any thoughts about @Zac-HD 's $1k bounty?
https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions?commentId=rvSjeHizEdTRKdfGn
> Take an EfficientNet model with >= 99% accuracy on MNIST digit classification. What is the largest possible change in the probability assigned to some class between two images, which differ only in the least significant bit of a single pixel? Prove your answer before 2023.
>
> Your proof must not include executing the model, nor equivalent computations (e.g. concolic execution). You may train a custom model and/or directly set model weights, so long as it uses a standard EfficientNet architecture and reaches 99% accuracy. Bonus points for proving more of the sensitivity curve.
bmk#1476: i think vanessa's response makes sense here
StellaAthena#3530: Is my answer supposed to be about a specific classifier? All possible ones that achieve > 99% accuracy?
StellaAthena#3530: I'm confused by this post tbh. What stops me from taking a pretrained model that gets 99.9% accuracy and making the answer 1?
StellaAthena#3530: Also, what does it mean to train a model without "executing it (or an equivalent computation)"
tpapp157#3643: The proof not the training.
Dashiell#8739: What about a normalizing flow?
alstroemeria313#1694: was thinking about that
alstroemeria313#1694: I mostly don't know how to do them. ^^;
tpapp157#3643: It's basically asking for analytical proof on a confidence interval of model outputs. And only offering $1000? What a joke.
Dashiell#8739: What exactly do you want to do with them?
Dashiell#8739: And, I mean, I've only ever played around with them. But they're pretty simple conceptually. Can't be too hard to figure out properly :P
alstroemeria313#1694: well. what i am doing rn is training a diffusion autoencoder and also training a second diffusion model to be able to sample from its latent space
alstroemeria313#1694: this seems to be actually working
EricHallahan#1051: 1) *Adopt standard EfficentNet architecture*
2) *Train model for quantization to 1 bit*
3) *Quantize*
4) *???????*
5) *profit*
EricHallahan#1051: Actually just throw away the step 2, quantize model, demonstrate that the quantization process shifts the probability distribution without even perturbing the input or something IDK.
alstroemeria313#1694: The second diffusion model could maybe be replaced with a normalizing flow type thing, IDK
alstroemeria313#1694: i do not really think much of normalizing flows for images
alstroemeria313#1694: at least with glow type archs
alstroemeria313#1694: idk if there's newer stuff
alstroemeria313#1694: glow has trouble learning long range dependencies without zillions of layers (they used 600 a few years ago) bc the model doesn't downsample or upsample.
alstroemeria313#1694: And as you might expect this problem gets worse at higher resolution
Dashiell#8739: I think maybe I need you to back up a little bit. When you say "diffusion autoencoder", what does that mean? The reverse process doesn't go straight to image space but to a latent space?
alstroemeria313#1694: ohh
alstroemeria313#1694: I am learning an encoder that produces a 128-dim latent and a diffusion decoder whose learned reverse process is conditioned on the encoder's latent.
alstroemeria313#1694: The encoder and decoder are trained end to end.
Dashiell#8739: is _conditioned_ on the encoder's latent
alstroemeria313#1694: yes
alstroemeria313#1694: It uses it like a class label.
alstroemeria313#1694: Except it's continuous.
Dashiell#8739: so the diffusion process gets the latent + noise and brings the noise to an image close to the original
alstroemeria313#1694: yes
Dashiell#8739: ok, now I see
alstroemeria313#1694: The problem is that we also want to be able to sample unconditionally from the model.
Dashiell#8739: Then I think normalizing flows might work
alstroemeria313#1694: So we need some way to be able to sample from the latent space.
Dashiell#8739: You're not actually using it to generate the image
Dashiell#8739: Just sample from the latent
alstroemeria313#1694: Yeah
alstroemeria313#1694: I am using a second diffusion model to do it rn
Dashiell#8739: If you can match the distributions well
alstroemeria313#1694: like a residual MLP
Dashiell#8739: NFs might be faster
alstroemeria313#1694: Yeah
alstroemeria313#1694: I am doing 4000 forwards through the small residual MLP
alstroemeria313#1694: to sample
alstroemeria313#1694: it does this in around three seconds for a batch of 50
alstroemeria313#1694: Then it takes like a minute for the main decoder to sample (with 1000 steps)
alstroemeria313#1694: @Dashiell if you have a normalizing flow model for the latent space... can you train it end to end with the other models to make sure the main encoder doesn't learn a distribution it can't handle?
alstroemeria313#1694: rn my diffusion latent space model is *not* trained end to end with the rest
alstroemeria313#1694: just at the same time
alstroemeria313#1694: it does not actually share a loss with the encoder and decoder.
Dashiell#8739: I'm not sure
alstroemeria313#1694: ah
alstroemeria313#1694: otoh this means if it's not good enough i can just train a bigger one later
Dashiell#8739: I mean, if you trained it to just match the distribution of the encoder as it goes along, would that be "end to end"
Dashiell#8739: Presumably you'd be able to tell if your encoder started outpacing it
Dashiell#8739: But it would still be tacked on to the "end to end" encoder to decode pipeline
Dashiell#8739: Right?
alstroemeria313#1694: idk
alstroemeria313#1694: i guess i think of it as "the whole system is optimized jointly for a single loss"
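A minimal sketch of how the setup described above could be wired, with the latent-space diffusion model trained at the same time but on detached latents so it shares no loss with the encoder; `encoder`, `decoder_loss`, and `latent_prior_loss` are illustrative assumptions, not code from the chat:
```python
def training_losses(encoder, decoder_loss, latent_prior_loss, x0):
    """One combined step for the two-model setup discussed above."""
    z = encoder(x0)                          # e.g. a 128-dim latent per image
    l_dec = decoder_loss(x0, z)              # diffusion decoder conditioned on z like a (continuous) class label
    l_prior = latent_prior_loss(z.detach())  # latent-space diffusion model; stop-gradient keeps it out of the encoder
    return l_dec + l_prior
```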
Dashiell#8739: @alstroemeria313 what about something like this https://cdn.discordapp.com/attachments/729741769738158194/908458346787508284/End-to-End_Diffusion_Autoencoder.png
alstroemeria313#1694: oh, what's the middle VAE for?
Dashiell#8739: when you want to do unconditional sampling, add in the multivariate gaussian from the middle of the autoencoder
Dashiell#8739: or rather just sample from
Dashiell#8739: the VAE learns the image encoder's latent space
Dashiell#8739: in such a way that, if you want, you can sample from it
Dashiell#8739: the VAE properly has encoder -> multivariate gaussian --> decoder, right?
alstroemeria313#1694: yeah
Dashiell#8739: but this way you can train it end to end too
Dashiell#8739: unclear whether or not for conditional "inference" you'd want to skip the VAE
Dashiell#8739: but it'd definitely let you do both conditional and unconditional all end to end
alstroemeria313#1694: oh, you have an encoder from the latent space to multivariate gaussian then a decoder to the same latent space?
Dashiell#8739: yes
alstroemeria313#1694: ahh
Dashiell#8739: you could additionally train that with straight up reconstruction loss on the latent space
alstroemeria313#1694: ahh
Dashiell#8739: but if you use the reconstructed latent from the VAE and give it to the diffusion model
Dashiell#8739: then you could use the diffusion loss end to end with your original encoder
alstroemeria313#1694: huh
cfoster0#4356: Hmm is this different from a normal variational encoder (like with mean and logvar outputs)?
Dashiell#8739: it could be any type of variational encoder, I think
Dashiell#8739: the idea would basically be to train it like a distilled student
Dashiell#8739: with the upside of having a way to sample from the latent
cfoster0#4356: @Dashiell trying to figure out if the diagram meets these desiderata
Dashiell#8739: or maybe I'm crazy? The big possible downside would be what @alstroemeria313 already mentioned: that the variational approximation just wouldn't be close enough to the true ("true") latent
Dashiell#8739: oh, I forgot about the "no ELBO" rule
Dashiell#8739: in that case, a normalizing flow would maybe actually work? You're getting directly at the log likelihood at least
Dashiell#8739: anyway, the principle is really just to put something to approximate the true latent in the middle (and that you can sample from easily) and then train with the reconstructed latent
cfoster0#4356: Yeah
Dashiell#8739: actually, the easiest idea? do some sort of whitening procedure on your image encoder and then sample uniformly from the n-sphere
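A hedged sketch of that whitening idea (assumed details: ZCA whitening fit on a batch of encoder latents, and samples drawn from a sphere of the typical Gaussian radius):
```python
import torch

def fit_whitening(latents):  # latents: (N, d) from the image encoder
    mean = latents.mean(0)
    cov = torch.cov((latents - mean).T)
    evals, evecs = torch.linalg.eigh(cov)
    w = evecs @ torch.diag(evals.clamp_min(1e-8).rsqrt()) @ evecs.T      # whitening transform
    w_inv = evecs @ torch.diag(evals.clamp_min(1e-8).sqrt()) @ evecs.T   # inverse (un-whitening)
    return mean, w, w_inv

def sample_latents(n, d, mean, w_inv):
    z = torch.randn(n, d)
    z = z / z.norm(dim=1, keepdim=True) * d**0.5   # uniform direction, typical radius sqrt(d)
    return z @ w_inv.T + mean                       # map back into the latent distribution
```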
Maxime#0993: Can AMD GPU run any model like gpt-j or others
cfoster0#4356: If only we could run a diffusion model inside the encoder. Running DDIM steps ("in reverse") would whiten it, I think
alstroemeria313#1694: ELBO is probably OK so long as we are not feeding the *sampled results of a variational encoder* directly into the main decoder
Dashiell#8739: the "main decoder" being the diffusion process?
Dashiell#8739: that is pretty much what I'm proposing ๐
Dashiell#8739: but feel free to tell me to shut up if I'm making no sense
alstroemeria313#1694: yeah
Zac-HD#7996: Hey, two months ago I was a phd student! It's not much money in the scheme of things, but still a lot more than I see anyone else offering in comments.
Zac-HD#7996: Your choice of classifier, so long as it's using a standard EfficientNet architecture and has at least 99% accuracy on MNIST.
Zac-HD#7996: Nothing, if you can provide an analytical proof that doesn't involve running the model!
ethan caballero#6044: How to convert from A100 days to V100 days?
EricHallahan#1051: Multiply roughly by two.
EricHallahan#1051: https://developer.nvidia.com/deep-learning-performance-training-inference
EricHallahan#1051: Of course that probably has some rather large error bars.
gabriel_syme#3220: wonder if someone can help me with a jax question
gabriel_syme#3220: how can I pass arguments for top_p and top_k here in the p_map function?
```python
p_generate = jax.pmap(generate, "batch")
p_params = replicate(model.params)

def generate(params, rng, batch):
    output_ids = model.generate(
        batch["input_ids"],
        attention_mask=batch["attention_mask"],
        max_length=max_length,
        prng_key=rng,
        do_sample=True,
        top_p=top_p,
        top_k=top_k,
    )
    return output_ids
```
gabriel_syme#3220: adding top_p and top_k as arguments in the function gives me an error
`map was requested to map its argument along axis 0, which implies that its rank should be at least 1, but is only 0 (its shape is ())`
gabriel_syme#3220: sry was typing how that gives me an error
gabriel_syme#3220: this is HF btw, so it's partly flax not sure if that matters in this spot
gabriel_syme#3220: I am looping through different configurations yes
gabriel_syme#3220: and in fact since I did it wrong, it's as if they were static so far meaning all my generations (apart from 1) are sort of useless
gabriel_syme#3220: I can always do it one by one lol but I'm hoping there's an obvious solution
gabriel_syme#3220: oh wait, I can just...just assign em right before the call
gabriel_syme#3220: I'm quite silly for not thinking that before lol, but I still wonder what's the problem in the first place. oh well
gabriel_syme#3220: oh dear. Please disregard whatever I said lol. I'm revisiting this code 2 months later. Turns out, I actually did that already :berk:
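For anyone hitting the same "rank should be at least 1" error: two common ways (a hedged sketch, building on the snippet above and assuming `generate` is changed to take `top_p`/`top_k` as parameters) to get scalar settings into a pmapped function:
```python
import functools
import jax

# Option 1: close over the scalars so pmap never tries to shard them along axis 0.
p_generate = jax.pmap(
    functools.partial(generate, top_p=top_p, top_k=top_k), axis_name="batch"
)

# Option 2: declare them static (re-traces whenever their values change).
p_generate = jax.pmap(generate, axis_name="batch", static_broadcasted_argnums=(3, 4))
# called as: p_generate(p_params, rngs, batch, top_p, top_k)
```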
Emad#9608: https://towardsdatascience.com/meet-m6-10-trillion-parameters-at-1-gpt-3s-energy-cost-997092cbe5e8
bmk#1476: sounds sus
Daj#7482: > They used a mere 512 GPUs to train the model in 10 days!
Daj#7482: seems legit
Louis#0144: LMAO
Louis#0144: Literally no information
Louis#0144: Wtf
Louis#0144: This sounds so sus
bmk#1476: brb making 100T param model using a single GPU and one day
bmk#1476: (no benchmark scores ofc, why do you ask)
kurumuz#5695: what does it mean to train a model
kurumuz#5695: such a bullshit article lol
nev#4905: https://openreview.net/forum?id=TXqemS7XEH
Daj#7482: > For those models that are not trained on commonly-used public datasets, we will
> carefully release the model checkpoints before careful evaluation, and also limit the access to avoid
> misconduct
Daj#7482: :thonk:
Sid#2121: some previous convo on the susness https://discord.com/channels/729741769192767510/747850033994662000/895390842628505610
LearnThings#6939: any vqgan+clip servers I can join?
Kia#2550: This,Or
<https://discord.gg/A7fnWSUQ>
Kia#2550: I can't fold links:goose14:
Kia#2550: Just go to #art
LearnThings#6939: alrighty thanks!
Kia#2550: Happy to help๐
James#6892: Is megatron turing NLG available as a service yet
James#6892: or is it just a cool PR thing
EricHallahan#1051: Not that I know of.
Kia#2550: No one knows:goose10:
EricHallahan#1051: But I expect they will productionize it somehow.
Quill#9732: should have a channel where the only one allowed to post is a bot that just makes a "no, we don't have the hardware for a full GPT-3 scale run yet" post every day that that remains true :p
StellaAthena#3530: We could call it "this GPU does not exist"
EricHallahan#1051: Start here and read down for a throwback lmao
EricHallahan#1051: Man have things changed a lot within nine months.
Quill#9732: (also tbh "is there any news on hardware" is a question that I keep wanting to ask and unironically I would appreciate e.g. the ability to sign up to be pinged if/when there *is* news. Push rather than polling :p)
StellaAthena#3530: When we have a new and significantly larger model, be it 22.5B or 200B, you will be pinged.
Quill#9732: is that because there'll be an @ everyone? :p
Quill#9732: but yeah, fair enough. Knowing when the run is *starting* might be nice but isn't actually actionable information on my part
StellaAthena#3530: The run will crash and will need to be restarted at least twice. Tbh knowing when it starts will probably be more painful than not knowing.
Quill#9732: fair
StellaAthena#3530: But, we do basically everything in public anyways. I'm sure it'll leak. GPT-J did, and even our testing as to how well GPT-J scales to 22.5B showed up on 4chan like three hours afterwards.
Quill#9732: yeah - *if I'm actively following at the time* I'm sure I'd know soon enough.
StellaAthena#3530: Oh yeah? If you're a real fan of EleutherAI why haven't you been stalking their internet presence *every minute of the past 16 months*?
Quill#9732: also, huh, it's really only been five months since GPT-J-6B was released? Thought it was longer.
Quill#9732: because the main thing I want to know is "bigger model wen [sic]" and the answer there doesn't change much :p
bmk#1476: oh yeah? if you're a real fan of EleutherAI then why aren't you me
gabriel_syme#3220: a real fan knows that asking that moves the release date, hence we never ask
StellaAthena#3530: A real fan has blades for hands and mouth to scream (or ask) with
bmk#1476: a real fan is a device for winnowing grain
alstroemeria313#1694: https://cdn.discordapp.com/attachments/621759936698908682/908873676273889380/Screen_Shot_2021-11-12_at_4.19.26_PM.png
alstroemeria313#1694: What
alstroemeria313#1694: Those jumps up and down happen at epoch boundaries
StellaAthena#3530: @alstroemeria313 shuffle your data better
alstroemeria313#1694: what is pytorch lightning even doing that can cause this
alstroemeria313#1694: It started when I started doing multiple gradient accumulation steps
StellaAthena#3530: Wait I didnโt read the x axis carefully
StellaAthena#3530: Your epoch is only 2k steps or so?
alstroemeria313#1694: Shorter
StellaAthena#3530: What did the previous 1k steps look like
alstroemeria313#1694: there are 70,000 items in the dataset
alstroemeria313#1694: i was doing batches of 1024
alstroemeria313#1694: eh they're in another run
alstroemeria313#1694: this was the old run https://wandb.ai/crowsonkb/kat-diffusion/runs/1y8cy78m?workspace=user-crowsonkb
alstroemeria313#1694: its batch size was 128
alstroemeria313#1694: it does not have the up and down pattern
alstroemeria313#1694: ...can i do bigger batches than 32 per gpu
alstroemeria313#1694: pt lightning is just so bad
EricHallahan#1051: IMO wrapper classes around PyTorch are a weird concept.
alstroemeria313#1694: i'm only using lightning to get ddp training
alstroemeria313#1694: i usually write my own training loops myself
alstroemeria313#1694: anyway i am now using batch size 384 w/o gradient accumulation
alstroemeria313#1694: and it is working
EricHallahan#1051: :thonk:
alstroemeria313#1694: wish i could grad accum without hitting whatever bug
alstroemeria313#1694: but it's late and i need to go to bed
alstroemeria313#1694: diffusion gradient noise scale is ludicrously high
alstroemeria313#1694: so i wanted to try pushing the batch size way up in late training
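For what it's worth, a plain-PyTorch accumulation loop is one way to get the bigger effective batch without relying on the framework's accumulate_grad_batches; a hedged sketch, with `model`, `opt`, `loader`, and `loss_fn` assumed to already exist:
```python
accum = 8  # effective batch = accum * per-step batch size
opt.zero_grad()
for i, batch in enumerate(loader):
    loss = loss_fn(model, batch) / accum   # scale so the accumulated gradient matches one big batch
    loss.backward()
    if (i + 1) % accum == 0:
        opt.step()
        opt.zero_grad()
```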
alstroemeria313#1694: Hey what's the biggest convolutional neural net
EricHallahan#1051: ¯\_(ツ)_/¯
bmk#1476: maybe one of those absurd 1000 layer resnets
alstroemeria313#1694: CoAtNet-7, 2440M?
alstroemeria313#1694: weren't they not so wide
alstroemeria313#1694: like they just trained them to see if it broke down w/ increasing depth?
alstroemeria313#1694: also usually on cifar-10 or smth
alstroemeria313#1694: I wonder if I'll end up setting the record
alstroemeria313#1694: I have a 968M training rn and it's working
alstroemeria313#1694: Actually how big is the CLIP RN50x64 image encoder
alstroemeria313#1694: eh i asked on Twitter
bmk#1476: honestly not sure
StellaAthena#3530: @alstroemeria313 Don't be a scrub, go for 3B
alstroemeria313#1694: Ehehe~
alstroemeria313#1694: Unlike with ImageNet classifiers or whatever, there's clear value in scaling generative models
alstroemeria313#1694: Especially ones conditioned on text
inox#5400: if it's just parameter count a lot of the older convnets are huge because they have final layers that are 2048x2048 fully connected
alstroemeria313#1694: Oh that's only 4M
alstroemeria313#1694: VGG's is super giant though
alstroemeria313#1694: I think it has a 512\*7\*7x2048 or smth
alstroemeria313#1694: Or 4096, I forgot
alstroemeria313#1694: My diffusion model is still bigger though
inox#5400: DenseNets are big
alstroemeria313#1694: Ooh
inox#5400: oh maybe not, maxes out at 30M
Kia#2550: Hm:thinkies:
Kia#2550: That's really small
elderfalcon#4450: Hiya folx, I'm looking into doodling with some potential optimization stabs at PerceiverIO, I'm using LucidRains' code at https://github.com/lucidrains/perceiver-pytorch
However, this looks like a tiny tiny endpoint. Anyone that knows Lucid's code know of a good test harness for it? I'm a bit lost in the mass of all of the mini repos.
Please let me know if you know, and feel free to @ me if you have any other tips/directions/etc for looking at things. I'd at them myself but I'd rather keep the chatter towards them down due to the amount of attention (heh) they normally get every day.
EricHallahan#1051: Okay, I'm thinking of actually putting this code into some sort of use. (Two months later lol)
tpapp157#3643: That's ok, I have like a whole bunch of hobby projects that regularly cycle in and out of active development over the course of months and years. I just bought a 4TB drive for a another dataset I want to play with.
EricHallahan#1051: I think the most difficult part will be figuring out how to integrate the embeddings into the pipeline, as text doesn't need multiple embeddings per token as this does.
EricHallahan#1051: I want to use our existing infrastructure/codebases as much as possible (read MTJ/GPT-NeoX), so it is really making it fit within the existing dataloader infrastructure so I don't have to deal with playing with that.
EricHallahan#1051: I hate dataloaders.
MicPie#9427: Save your sanity and set it up in plain pytorch. :berk:
This from HF sounds also interesting: https://github.com/huggingface/accelerate
mullikine#5015: i'm getting strong AGI and need a BCI asap feels
Daj#7482: BCI won't save us :sadge:
Kia#2550: You're Just Giving the AGI more opportunities:goose10:
kurumuz#5695: I dont understand what is the rationale behind merging with AGI
Daj#7482: cope
Kia#2550: Didn't elon say something about this? I have no clue what his point was tho:thinkies:
kurumuz#5695: maybe his plans include alignment as well :thonk:
Kia#2550: Haha...:sus:
SecondMover#8029: The best case I can see is that BCIs give you higher bandwidth for feedback during alignment training than just language. It could be used to extract preferences that humans have difficulty putting into language terms but that are nevertheless important. But I'm not sure Neuralink is headed in that direction.
Paul van Purp#5488: The biggest CNN I could think of was BiT (928M), so you would beat that by 4%
alstroemeria313#1694: mine has self-attention though
Paul van Purp#5488: ok, yeah w/o attention it would probably be <928M, right?
ethan caballero#6044: What's the largest mostly English, mostly deduplicated text dataset (at the quality level of the_Pile / GPT-3's dataset) that the most capable organization could scrape/filter/create (currently) (for the purpose of training a general language model)?
Kharr#7888: Well, NVIDIA recently used The Pile for their 530B LM. It seems to be growing as the "go to" dataset
Kharr#7888: https://developer.nvidia.com/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/ https://cdn.discordapp.com/attachments/729741769738158194/909069215007399936/unknown.png
ethan caballero#6044: so is the answer to my question probably a number less than 500 billion tokens? or can additional past snapshots of common crawl or something make the number much larger?
Kharr#7888: They can be much larger but the amount of clean data is not that much.
Kharr#7888: Once you deduplicate you end up with a lot less data.
ethan caballero#6044: So amount of clean data is probably a number less than 500 billion tokens?
Kharr#7888: Currently, yes.
RageRagaki#8799: Where is modmail again?
RageRagaki#8799: Got another spam
CRG#8707: @Deleted User Seems to be a spambot
EricHallahan#1051: I banned the one I got.
Bran#4755: same with @Deleted User
Bran#4755: lovely little btc lottery message
RageRagaki#8799: 905287244447875114
RageRagaki#8799: https://cdn.discordapp.com/attachments/729741769738158194/909089170792792094/unknown.png
ethan caballero#6044: same with @Deleted User
AustinJacob#4160: OWARI843a#7017
AustinJacob#4160: https://cdn.discordapp.com/attachments/729741769738158194/909089308915433582/unknown.png
thenightocean#6100: nomboi7a26 too
EricHallahan#1051: I banned them.
AustinJacob#4160: discord getting raided?
EricHallahan#1051: You're a moderator too you know. :berk:
AustinJacob#4160: ID: 908372309977468968
RageRagaki#8799: I don't get it. People in this discord especially should be smart enough to not click on those links.
Kia#2550: @Deleted User
Kia#2550: @Deleted User confirm it
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/909089715003723786/unknown.png
Kia#2550: Are we being raided
Deleted User#0000: hmm
Deleted User#0000: close the portS?
AustinJacob#4160: but it's still kind of fugged, and if a server has a bunch of spam bots raiding it i think the admins would want to know
nshepperd#2316: i love the smell of the taste of crypto scams in the morning
EricHallahan#1051: This looks like a far more complex attack than I've seen in previous instances.
Untouch#9150: just turn off "allow messages from people in this server" for now I guess
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/909089921489326190/image0.png
Kia#2550: Hmm
Untouch#9150: its in privacy options
Kia#2550: Yeah we're being raided
AustinJacob#4160: holy shit how many bots are there wtf, usually in servers it's only 1 or 2 accounts
Deleted User#0000: close the ports!
Deleted User#0000: someone delete all invites
Deleted User#0000: or something idunno
nshepperd#2316: can you turn off server joining temporarily or sth
nshepperd#2316: it looks like they're botting a lot of accounts
Kia#2550: Yeah
RageRagaki#8799: Deleting invites is a good idea.
EricHallahan#1051: We can revoke the invite link.
Kia#2550: Check the people entering the server @EricHallahan
Deleted User#0000: heh yeah
RageRagaki#8799: BATTLESTATIONS!
Pause✨#1381: Here's another one :sweet: https://cdn.discordapp.com/attachments/729741769738158194/909090357617229855/20211114_013946.jpg
Deleted User#0000: have a ban on peeps of recent
Deleted User#0000: oh hey its pause
nshepperd#2316: madagascar has closed its ports
Timizorzom#8569: and another one https://cdn.discordapp.com/attachments/729741769738158194/909090447895445554/unknown.png
EricHallahan#1051: I did that acausally to you suggesting I do that.
Deleted User#0000: darn it, how will Inbottenza spread now!
Croissant#7814: https://cdn.discordapp.com/attachments/729741769738158194/909090553172488222/Screenshot_20211113-201103_Discord.jpg
Croissant#7814: Another one
Deleted User#0000: *plague inc noises intensify*
Pauseโจ#1381: good luck mods :salute:
Kia#2550: Yeah:thinkies:
nshepperd#2316: isn't acausal cooperation wonderful
RageRagaki#8799: Maybe an everyone ping would help tell people to turn off messages from this server.
AustinJacob#4160: that would probably make the situation worse tbh.
Kia#2550: Hm,Let the mod decide for the moment
Croissant#7814: Or leave
EricHallahan#1051: Yeah it is quite a large attack.
Deleted User#0000: the site its linking to
Deleted User#0000: I whois'd it
Kia#2550: Ow god
AustinJacob#4160: o7 mods
RageRagaki#8799: o7
Chimbo#3420: those names look randomly generated
wadapan#5817: `907706088294613063` @THE1MAKa8db unless you've already got them
Croissant#7814: Mods gonna be busy today
Kia#2550: No bots DM me yet :_
Untouch#9150: odd out of all servers they decided to try and phish, they picked THIS one
Deleted User#0000: seems to be a classic scam for like crypto, basically just wallet hijackin
Chimbo#3420: got one, assumed it was a token grabber, blocked it
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/909091187984580658/unknown.png
C๐ง๐ค๐๐จ๐จ๐๐ฃ๐ฉ#7814: "ausername96a1" they should improve their game
Deleted User#0000: lol
ethan caballero#6044: https://cdn.discordapp.com/attachments/729741769738158194/909091382034047026/Screen_Shot_2021-11-13_at_9.44.15_AM.png
Kia#2550: Hmm, The mods probably have a Record of people coming in the server so they should found the bots pretty quickly
Deleted User#0000: wonder if these have just been quietly infiltrating
Deleted User#0000: notably
nev#4905: I got one from a different server
Deleted User#0000: all bots
Deleted User#0000: are active
Kia#2550: If the bots just come all the same time
Croissant#7814: We can just @ those bots to ease the banning process?
Kia#2550: Yeah
ethan caballero#6044: "Treacherous Turn" confirmed.
cfoster0#4356: Nah they generally all come in at once
Croissant#7814: Oh wait. My spammer is out
RageRagaki#8799: For some reason the bot that messaged me isn't @ able
CRG#8707: You can search the discord @ id by searching "from:" in the search bar: https://cdn.discordapp.com/attachments/729741769738158194/909091710359990282/ca303f39d61f616e4a991d9b5c06bd79.png
Deleted User#0000: hehe
RageRagaki#8799: @trustingApricots3
cfoster0#4356: That means they're no longer with us :ban:
Deleted User#0000: Hello everyone I am here to sell the brand new Collab-L which gives you free infinite money via clothes cleaning on GPUs this is a simple scheme and just needs a small financial investment of roughly 10 USD a month
Deleted User#0000: I have become one heheh
EricHallahan#1051: I banned them.
cfoster0#4356: Lmao
Croissant#7814: Let's goo
cfoster0#4356: Careful or you're gonna get caught up in the sweep
Deleted User#0000: hehehe
Deleted User#0000: oh no
Kia#2550: I Think There's one server im in got raided to yesterday
Croissant#7814: Do you accept monkeycoin360?
Deleted User#0000: I only accept dry clothes unfortunately
Croissant#7814: Ah shit
RageRagaki#8799: "It just works"
RageRagaki#8799: 4 times the detail
nshepperd#2316: PrincessCoin only
Deleted User#0000: heheh
Deleted User#0000: it can be purchased at your local EBGames or here at https://www.eleuther.ai/
Croissant#7814: PTSD .
Croissant#7814: Infinite story
RageRagaki#8799: What was it, 17 times the map size?
RageRagaki#8799: Or something like that
Deleted User#0000: 83x the todd howards
RageRagaki#8799: lmao
Deleted User#0000: from the whois of the site they spammed https://cdn.discordapp.com/attachments/729741769738158194/909092805530501160/unknown.png
nshepperd#2316: its the only Proof of Laundry coin
nshepperd#2316: what does that mean
Deleted User#0000: You must go and get my dry clothes outside, this will then be assessed by me, I then write on my clothes if you bought something (will be washed off when next wash)
Deleted User#0000: I dunno it looks neat
Deleted User#0000: for example this looks like dialogue https://cdn.discordapp.com/attachments/729741769738158194/909093196938764328/unknown.png
Croissant#7814: What's wrong with that