AI_WAIFU#2844: Every time someone asks this question, the date gets pushed back a month
gwern#1782: naturally, of course, the reward-shaping/fully-observable state comes at a big cost: like a diffusion model, you need to make many steps to generate a full image. one would probably want to finetune it to generate only tokens
alstroemeria313#1694: i mean. the thing does give me *worse* images when i condition it on a low CLIP score when sampling
UnsupervisedLearner#4148: That's why I asked :goose3:
alstroemeria313#1694: i still can't figure out why nshepperd gets such a low cross-entropy loss and i can't :/
alstroemeria313#1694: oh, is it the black borders in the danbooru dataset
alstroemeria313#1694: lol
alstroemeria313#1694: those are super easy to predict
gwern#1782: it may not be necessary but I think it's good to keep in mind that you can do all of the usual DRL tricks in terms of reward-shaping, actions, fully-observable state, etc. Don't be hardwired into the 'I can *only* generate VAE tokens, that's it' mindset.
alstroemeria313#1694: *nods*
gwern#1782: switch between perspectives: supervised and DRL
Deleted User#0000: I feel like OpenAI applies that principle to my application
alstroemeria313#1694: this is actually my first real try at anything reinforcement learning related, so
alstroemeria313#1694: I did implement REINFORCE for my VQGAN codebook sampling method one time a bit ago but it kinda didn't work
alstroemeria313#1694: And I tried and failed to implement REBAR
Deleted User#0000: Just wondering, what is the best way to get indirect access to Davinci while I wait? :) Is EmersonAI using it?
EricHallahan#1051: ¯\_(ツ)_/¯
cfoster0#4356: the answer to this probably changes, because OpenAI will likely clamp down on any platform that's acting as a thin wrapper around the API
Deleted User#0000: hmm, sad
Deleted User#0000: that sucks
Deleted User#0000: so much
Deleted User#0000: I wonder what kind of super advanced AI big companies are holding right now in secret
sheggle#6841: Likely nothing special
Evil masterminding like that, unless you accidentally created AGI, is not that profitable
Teemochu#8740: Nice hat
Deleted User#0000: Teemochu! Nice name haha
gwern#1782: they don't need to be in secret, because it's not like we know how big the existing ones are or how they benchmark! how big are LaMDA or MUM? how well does Wudao work? dunno
𓅬 gabriel_syme 𓅬#3220: where did that happen? 🙂
Louis#0144: their recent finetune for safety paper
𓅬 gabriel_syme 𓅬#3220: i'm a bit confused, you're talking about a DALLE being trained right?
gwern#1782: is anyone else here headed to denver this weekend for the bottlenecks conference?
gwern#1782: (reading through the itinerary, there's lightning talks about 'institutional engineering'. in retrospect, eleutherai could've been an interesting example. I'm not a good public speaker though, and I'm not sure I can really dissect eleutherai with so little temporal distance)
StellaAthena#3530: I don’t know what the bottlenecks conference is, TBH
gwern#1782: it's a small one-off. but it should be fun, boom supersonic is hosting us!
gwern#1782: (I'm a little surprised there's no ML/DL track, but I think that might be deliberate. a contrarian opinion that topics like nanotech are currently undervalued.)
chilli#5665: I am intrigued by the idea of meeting gwern in person... 🤔
chilli#5665: but denver is too far
James#6892: I love what boom supersonic is doing
James#6892: funny that their target is 2029 for commercial release though 😄
EricHallahan#1051: Aerion is dead though.
AI_WAIFU#2844: Yeah and by then elon's orbital drop ships will outcompete them.
tg#7159: auto-regressive transformer over the VQed image sequences
James#6892: True, I actually didn't know there was another company already working on it for 15 years.
James#6892: Damn, that's insane that Boom started after Aerion
gwern#1782: most great startup ideas have a graveyard of dead startups tucked away behind them. you can play that game for almost every single successful startup: https://www.gwern.net/Timing
James#6892: Only after his "full self-driving coming next year" actually comes true for once 😄
James#6892: True, timing is actually the most important thing, for both customer adoption market POV as well as tech readiness POV
James#6892: Wow you literally linked an article to timing
James#6892: 😄
gwern#1782: _is always one level above you_
tg#7159: Like... take a dataset of n images... say each is 128x128... run them through some discretizing auto-encoder like OpenAI's VQ-VAE... you now have n sequences of 256 tokens... auto-regress them
tg#7159: contrast this to... sample an image from the same dataset and randomly crop out & augment a "novel" 128x128 image then do the same stuff to auto-regress over this
tg#7159: what I was saying before was that with the first approach I can overfit my dataset with n=200k but the augmentation used in the second approach avoids the overfitting
James#6892: Do you write everything on your site yourself?
gwern#1782: unless it specifically states otherwise, yeah. like any page in /docs/ is usually not written by me (only transcribed/annotated/formatted by me)
James#6892: I'm surprised by the quality of good general startup advice contained in that timing post you linked. Even though it doesn't seem like you've done a startup yourself personally (could be wrong).
gwern#1782: I haven't, but my brain has been marinated in startup culture for so long I'm an honorary startup co-founder
James#6892: Lol this is like saying you have AND have not done a startup
gwern#1782: pickle long enough in the pg juice and you can sound like you've done one!
James#6892: PG is great
James#6892: not many places with better advice
James#6892: especially for a technical audience
UnsupervisedLearner#4148: I'm literally waiting to get on a flight to Denver right now
UnsupervisedLearner#4148: @gwern let's not meet up and somehow make it awkward anyway
gwern#1782: well great now when we don't meet up it'll be awkward anyway that you've said that
gwern#1782: 'why didn't gwern want to meet up? does he not like me? is he just very ugly and embarrassed?'
James#6892: R.I.P.
gwern#1782: 'what if he is actually 3 small dogs in a trenchcoat?' :lucid:
James#6892: What if he isn't even one person
James#6892: would explain how he's so active and posts so much stuff all the time
gwern#1782: 'inside you there are 3 dogs. one dog wants to watch anime, the other dog is desperately trying to reduce arxiv tab count, and the third just wants to shitpost'
UnsupervisedLearner#4148: I remember from your fisetin article you're actually a :chad:
UnsupervisedLearner#4148: You'd just make me look diminutive and dull
gwern#1782: an aspiring chad, maybe
UnsupervisedLearner#4148: In full honesty though what is this bottlenecks thing, search engine isn't helping
gwern#1782: it's a small one-off conference, as I said. you won't find anything
gwern#1782: but it should be pretty good. we've got collison and thiel and cowen giving speeches, tour of boom, and some other stuff
UnsupervisedLearner#4148: Is there a way I could attend or invite only?
gwern#1782: invite-only, I'm afraid
chilli#5665: :sadge:
UnsupervisedLearner#4148: @chilli we can make our own conference
gwern#1782: _apologizes for giving you a glimpse into a better world you'll never be allowed into_
UnsupervisedLearner#4148: I'm an underachiever it's to be expected
StellaAthena#3530: Gwern makes himself seem cool by inventing non-existent exclusive events he knows about that are too exclusive for the internet
UnsupervisedLearner#4148: On the plus side I get to be lazy all the time and still skate by with middle class scruples
UnsupervisedLearner#4148: And retroactively learn I didn't know how to use the word scruples
gwern#1782: the secret is, I am raising people's aspirations and making them ambitious. decades from now, when they email me saying, 'see? now I'm a Turing Award winner. where's your conference *now*?' and I'll reply... "There never was any conference. The real conference was the friends and achievements you made along the way. Omedettou."
James#6892: Damn
James#6892: This must be why parents lie to their children
chilli#5665: I feel like gwern used to have a portion of his website where he talked about his height, weight, and other stuff.
chilli#5665: Did he delete it or did it never exist at all 🤔
gwern#1782: I mention it in the intro of my nootropics page to give context for doses
UnsupervisedLearner#4148: If you can find his article on fisetin/senolytics I distinctly remember some vague but good enough stats on height/weight
James#6892: The real question is why you remember that
chilli#5665: Never in some about-me page?
UnsupervisedLearner#4148: Because gwern back in the day had the highest quality posts I had ever read when it comes to topics I had great interest in. Self blinding on nootropics, weird sleep schedules, meditation, etc
gwern#1782: not an article yet https://www.reddit.com/r/TheMotte/comments/lwnvmf/wellness_wednesday_for_march_03_2021/gq5u7nx/
chilli#5665: I remember it being followed up by some Ocean numbers
gwern#1782: that would be /Links, yeah
bmk#1476: someone should go and take a picture of gwern for the rest of us
UnsupervisedLearner#4148: I must have jumbled it with the nootropics stats then. :thonk:
James#6892: Those do sound like quite interesting topics
UnsupervisedLearner#4148: His site is a goldmine
James#6892: Well, depends what you're looking for, but the topics you mentioned don't generally get enough coverage online
UnsupervisedLearner#4148: I really like his essays too. Jumbling the title but something like 'The Melancholy of Subculture Society' is fantastic and very thought provoking
chilli#5665: Do no pics exist on the internet…
James#6892: Lol, interesting sounding title
gwern#1782: my site is basically what happens when you take a 99th percentile Openness/IQ person and let them go 'ooh shiny' while writing it down for a decade
UnsupervisedLearner#4148: The content is better. Seriously I recommend his website to people all the time
James#6892: LOL
James#6892: The question is how much of it is in the land of the theorycraft vs coming from real-world experience
chilli#5665: @gwern are there no pictures of you on the internet?
gwern#1782: there is 1
James#6892: I wonder how far you can go re-synthesizing second hand high quality information
James#6892: and adding your own interpretation to it
James#6892: and connecting the dots, so to speak
gwern#1782: oh, pretty far. I've found plenty of things like that
gwern#1782: entirely possible to have a career just doing meta-analyses
chilli#5665: Intriguing…
James#6892: I've found I can go pretty far doing that too, but it usually requires me to at least spend some time doing first hand experience in the particular subjects
gwern#1782: for example, https://www.gwern.net/Ads - that was an almost entirely disconnected literature
gwern#1782: people were doing the experiments, but no one cited each other
gwern#1782: I've been trying to fix that in part by simply emailing each researcher as they publish their paper 🙂
gwern#1782: (which reminds me, I need to ping the new spanish guys, they missed several in their literature review)
James#6892: Lol, I'd categorize that as first hand though, you had to actually run the experiment, and it was for your particular audience, which may not generalize to all types of readers
gwern#1782: I mean the literature review part - if you look at the Pandora, or the Google, or the Mozilla or the _Die Welt_ experiments, they usually don't cite each other. because the authors just don't know!
gwern#1782: they also don't know that the results are all fairly consistent at the 10-15% activity range
bmk#1476: if you find one lmk
gwern#1782: I didn't have to run my own experiment to link them all together, I just needed to be patient, set up alerts, and combine all the fugitive citations and check every paper that comes up to see if they happen to have found a hapax legomenon, if you will
chilli#5665: Gwern, how many of the people you know in real life know your online identity?
gwern#1782: irl? family and friends generally know. maybe a dozen?
James#6892: Oh, I scrolled down to see longer analysis, I see what you mean
bmk#1476: genuinely surprised that you can maintain such a prolific pseudonym with so many connections to irl identity
James#6892: I still feel like for a topic like Ads it's very situational and I wouldn't be relying on papers to draw general conclusions, since the audience, site, and ad content all have a role
James#6892: Same
gwern#1782: they have no particular reason to go about doxing me
bmk#1476: we need gwern at the eleuther meetup whenever that happens
bmk#1476: :bigbrain:
gwern#1782: maybe we'll all be in SF at some point
Deleted User#0000: come meet me and ice cream
gwern#1782: oh I wouldn't want to impose
gwern#1782: (also, I saw that photo of your carpet)
bmk#1476: @alstroemeria313 mentioned having deanonymized gwern i think
guac#4716: an ice cream visit is worth every speck of fur
gwern#1782: (don't know them irl tho)
Leo Sanders#1157: https://towardsdatascience.com/gpt-3-scared-you-meet-wu-dao-2-0-a-monster-of-1-75-trillion-parameters-832cd83db484?gi=2578cc3f10d5
James#6892: Has someone actually tried the API in English? And can post results/comparisons?
Leo Sanders#1157: Not me
chilli#5665: Do you wanna meet up sometime? I’ll be in the bay for the foreseeable future 🙂
kindiana#1016: I'll come too 🙂
Deleted User#0000: sure! i'll bring you to my favorite karaoke joint
Deleted User#0000: we are opening on the 16th!
Deleted User#0000: lolol
Deleted User#0000: herd immunityyy
bmk#1476: if i get an offer at OA or anthropic ill come
Deleted User#0000: come to anthropic
bmk#1476: if the borders are open
Deleted User#0000: yea, they are in the city, and OAI is like literally a few blocks from where i live
Deleted User#0000: i live in the Mission, SF
bmk#1476: well, they have to give me an offer
bmk#1476: which is, uh
James#6892: What do they think about your background at eleuther?
bmk#1476: i hope they do but it's definitely not a given
chilli#5665: Are you at anthropic now? 🤔
cfoster0#4356: If you didn't bring Ice Cream I don't know if anyone would recognize you, would we?
bmk#1476: i can't really read minds, i assume they know about it and it counters some of the mehness of the rest of my resume
kindiana#1016: still waiting for anthropic to email me :berk:
bmk#1476: i am an incredibly disappointing candidate overall
James#6892: Wouldn't you talk about it during the interview lol?
chilli#5665: Well, I assumed Ice Cream was the anonymous human who walks lucidrains, while lucidrains is the dog who actually does all the work
James#6892: "What have you been working on? Oh I don't know, just a GPT-like model?"
chilli#5665: So I think I’d recognize lucidrains
Deleted User#0000: i'm not really that anonymous, i'm really easy to google actually
chilli#5665: I think you’ve posted some photos of yourself before
Deleted User#0000: hmm, don't think so lol
Deleted User#0000: unless if you mean ice cream == me
chilli#5665: Didn’t you post a photo of the human giving a talk at some Russian place?
guac#4716: yeah the russian tech city you presented in lol
Deleted User#0000: oh! yes i did
Deleted User#0000: lol
Deleted User#0000: forgot about that
chilli#5665: :berk:
chilli#5665: I wonder who here has the best opsec
chilli#5665: Ai_waifu? Gwern? Sphinx?
James#6892: I like how you call it Opsec
James#6892: makes everyone seem like FBI agents
cfoster0#4356: Kharr
chilli#5665: Clearly the only solution is for all the anonymous people to dox each other until only one is left
bmk#1476: probably not sphinx
kindiana#1016: pls don't dox me
kindiana#1016: Ben isn't my real name I just stole their identity
AI_WAIFU#2844: my opsec is actually pretty meh
bmk#1476: i *think* ive figured out where ai_waifu lives but i'm low confidence on that
AI_WAIFU#2844: I'm pretty sure BMK has narrowed down what continent I live on.
bmk#1476: i have it down to the city actually
bmk#1476: well i said it's low confidence
Deleted User#0000: Ben is 21, which means i can buy him a beer for the 6B efforts
bmk#1476: if i dm you my guess and im right do i get like a goose plushie as a reward
AI_WAIFU#2844: you get no response because otherwise that would leak information.
chilli#5665: I’m pretty sure I could do this too
bmk#1476: aw but that's no fun
Deleted User#0000: i still need to meet up with the einops creator
cfoster0#4356: part of me is like *don't meet your heroes* and the other is like *why tf not*
chilli#5665: Just by analyzing your comments
Deleted User#0000: ok, walk time
Kharr#7888: Is this just new OAI? lol
guac#4716: if AI Waifu is not from the east coast of the US of A i'd slap myself into Jupiter
bmk#1476: so i *shouldnt* dm you my guess
AI_WAIFU#2844: 🤐
AI_WAIFU#2844: Didn't know we had that emote
bmk#1476: anyways you can definitely improve your opsec lol
jekbradbury#2280: lol so do i!
jekbradbury#2280: this means i've probably seen your dog and didn't make the connection
𓅬 gabriel_syme 𓅬#3220: tbf, continent isn't that hard to guess in here right? I'd just say Americas and be on the winning side of the bet more often than not
𓅬 gabriel_syme 𓅬#3220: I think on the one side it's cool, and important, to keep your anonymity for as long as you want to. But on the other side, it kind of sucks you all don't get to meet irl and have fun.
𓅬 gabriel_syme 𓅬#3220: I know it's remote-life right now and all, and we all grew up online, but I think nothing beats going out with your friends and colleagues
gwern#1782: (the problem with Opsex is that no matter how many masks you wear or how careful everyone is reaching the group rendezvous, you leave behind too much DNA)
Teemochu#8740: > OP sex
chilli#5665: You used to commute down to sunnyvale all the time?
chilli#5665: That sounds terrible
UnsupervisedLearner#4148: > OverPowered sex
Teemochu#8740: But is it aligned?
jekbradbury#2280: nah i lived in south bay then, just moved up to sf since covid (also tpu team is in sunnyvale but brain is in mountain view+sf)
chilli#5665: Google has an actual SF office? 🤔
chilli#5665: I think Facebook’s is just some boring stuff
chilli#5665: If I could, I think I’d live in SF
chilli#5665: But sadly, gf goes to school in Palo Alto
kindiana#1016: haha, I'm at palo alto
Jonnathan#1234: For some reason my brain always registers Palo alto as pato alto which translates to tall duck
Teemochu#8740: el pato alto fan vs el ganso grande enjoyer
Jonnathan#1234: Lol
bmk#1476: why would you willingly live in SF tho
bmk#1476: everything I've ever heard about SF seems to indicate that it sucks
Jonnathan#1234: I'd live 30 minutes out of sf where I don't have to pay 1.2 mil for 800sqft
Jonnathan#1234: Good thing I like driving
UnsupervisedLearner#4148: It probably has lots of cool really smart people
But so does the internet
Leo Sanders#1157: Anyone from Vancouver (the BC one)?
𓅬 gabriel_syme 𓅬#3220: I'm going to make a wild guess that people make a shit load of money there? granted, they spend most of that on a one bedroom apartment but still
𓅬 gabriel_syme 𓅬#3220: if you're young especially why not be there I guess
Teemochu#8740: For the Bay Area, sure. But I think he meant SF proper, especially if that's not where you work
bmk#1476: well, many of the problems apply to the rest of the Bay area too
bmk#1476: ok so why are *companies* there?
𓅬 gabriel_syme 𓅬#3220: I mean if you don't work at SF, don't live in the Bay?
𓅬 gabriel_syme 𓅬#3220: if you like the sea, there's thousands of miles of coast
bmk#1476: if you asked me to pick a spot somewhere in the world to put my company HQ, SF would be so far down the list
𓅬 gabriel_syme 𓅬#3220: (disclaimer: I know very little of the US, never lived there; just what I hear)
𓅬 gabriel_syme 𓅬#3220: hmm I think the answer has to be VC money right
𓅬 gabriel_syme 𓅬#3220: like the place needs to be in the US
bmk#1476: yeah but why SF of all places
𓅬 gabriel_syme 𓅬#3220: maybe good weather and food?
bmk#1476: there are so many places that don't have SF level dysfunction
𓅬 gabriel_syme 𓅬#3220: sometimes it's enough
𓅬 gabriel_syme 𓅬#3220: also better taxes? idk
bmk#1476: California has some of the highest taxes in the entire country
𓅬 gabriel_syme 𓅬#3220: now yeah but maybe at its birth it wasn't the case?
𓅬 gabriel_syme 𓅬#3220: Silicon Valley is older than us probably right?
AI_WAIFU#2844: companies need funding
bmk#1476: so is this just a network problem
bmk#1476: surely that can't be it though
bmk#1476: even with the pandemic exodus SF is still holding strong
AI_WAIFU#2844: yes, sf is a Schelling point, and the people who control the Schelling point extract rents
AI_WAIFU#2844: hence why sf is a dysfunctional shithole (comparatively)
bmk#1476: but this is the 21st century
bmk#1476: video calls exist
bmk#1476: planes are relatively cheap and quick
𓅬 gabriel_syme 𓅬#3220: I mean, would you go and create a start up in Europe?
bmk#1476: no, but how about literally anywhere else in the US
𓅬 gabriel_syme 𓅬#3220: no right, it's a meme, there's literally no money
AI_WAIFU#2844: And things have been going that way, but humans like personal connections
Deleted User#0000: you'll yearn to live in the city at some point
Deleted User#0000: everyone gets bored of the south bay eventually
𓅬 gabriel_syme 𓅬#3220: yeah personally I had a bet that Chicago would smh be the place after SF
𓅬 gabriel_syme 𓅬#3220: obv, I lost
Deleted User#0000: SF is the only city where i feel alive
Deleted User#0000: i get depressed if i leave
AI_WAIFU#2844: I think texas is taking off, in some ways literally.
𓅬 gabriel_syme 𓅬#3220: in the US right?
Deleted User#0000: well, i think Berlin would be a close second
Deleted User#0000: but i hate winters
AI_WAIFU#2844: what gives it life?
bmk#1476: is this just because you lived there for a long time or something
Deleted User#0000: i grew up around Boston after all
Deleted User#0000: no, i've been all over
Deleted User#0000: and travelled a bunch too
𓅬 gabriel_syme 𓅬#3220: Let me find a Christopher Alexander clip 🙂
Deleted User#0000: it's really the weather
Deleted User#0000: if i could have SF weather anywhere else, i'd move. but there isn't
chilli#5665: Lol
𓅬 gabriel_syme 𓅬#3220: then let's do the next silicon valley on my island! 😄
chilli#5665: Hawaii weather has been quite nice 🙂
𓅬 gabriel_syme 𓅬#3220: I'll host the dinner parties
bmk#1476: just learn to love the Edmonton winters
Deleted User#0000: also, SF is very liberal and i like that a lot
AI_WAIFU#2844: S I N G A P O R E
Deleted User#0000: i'm never living anywhere conservative
𓅬 gabriel_syme 𓅬#3220: ehm, nope
𓅬 gabriel_syme 𓅬#3220: Singapore is really not liberal though, but that's just me
Deleted User#0000: but yes, please spread the word that SF is a shithole
bmk#1476: isn't the SF school board a total meme
Deleted User#0000: its good for me 🙂
𓅬 gabriel_syme 𓅬#3220: did you see any rent decrease with the 'exodus'?
chilli#5665: Why is it good for you? Since you’re renting?
𓅬 gabriel_syme 𓅬#3220: or too soon?
AI_WAIFU#2844: politrib warning
Deleted User#0000: yup, it definitely has decreased, from what i heard
𓅬 gabriel_syme 𓅬#3220: nice, I mean it had to so that's great
Deleted User#0000: it's because a lot of companies went remote or moved out to evade tax situations
Deleted User#0000: i think a lot of companies went to austin or miami
bmk#1476: ~~ive accepted that I'll never find a far authright city to make me feel at home~~
AI_WAIFU#2844: S I N G A P O R E
zphang#7252: did someone say singapore
𓅬 gabriel_syme 𓅬#3220: the miami move is mind blowing to me
𓅬 gabriel_syme 𓅬#3220: like weather is not everything lol
𓅬 gabriel_syme 𓅬#3220: be ready for heat waves and hurricanes every other month
Jonnathan#1234: I live in Miami and I hate it here
Jonnathan#1234: The unending heat and humidity gets old quick
Deleted User#0000: yea a lot of people moving to miami
Louis#0144: I’m not a fan of ny anymore
Louis#0144: It’s so lonely here
chilli#5665: I feel like it would be too hot
𓅬 gabriel_syme 𓅬#3220: from what I see and heard (from my phd mates) it's a beautiful place. I just see more than weather and sites now. But I guess if you're a sort of digital nomad, you don't mind
Deleted User#0000: i can't take the heat either - i went to tampa once and that was enough
zphang#7252: ny is gr8
Louis#0144: Tampa is awful tho
Jonnathan#1234: 85 degrees in the middle of the night. I'm ready to flee to Canada
Deleted User#0000: same with texas
chilli#5665: I need some amount of chilliness to be productive
Deleted User#0000: too hot
AI_WAIFU#2844: that's just the pandemic
Deleted User#0000: lol chilliness
bmk#1476: come to Edmonton
Louis#0144: Nah I mean you’re around NYU that’s entirely different
zphang#7252: come to nyu
Louis#0144: True
Louis#0144: I should
chilli#5665: Wasn’t it the Singapore president who said that AC was the single biggest driver of productivity in Singapore?
𓅬 gabriel_syme 𓅬#3220: I keep telling people, with CC (which is already upon us), the best places are North. That is if you're ever at the point where you're thinking the next decade and not next couple of years
Louis#0144: @guac meetup at NYU
Jonnathan#1234: Also you're about a 13 hour drive to the nearest hill so if you like hiking or nature in general you're SOL
zphang#7252: I think lee kuan yew might have said something like that
chilli#5665: Was he not the Singapore president
AI_WAIFU#2844: Nothing would get done in Singapore without AC. I stayed at work late in Singapore because free AC.
𓅬 gabriel_syme 𓅬#3220: yep he was
zphang#7252: prime minister but close enough
zphang#7252: president is a ceremonial role in singapore
𓅬 gabriel_syme 𓅬#3220: Ruler might be even closer 🙂
chilli#5665: I’d like to visit Singapore
𓅬 gabriel_syme 𓅬#3220: for real though. If you could choose, where would you live?
zphang#7252: people tend to way overstate how authoritarian singapore is
Jonnathan#1234: Singapore seems nice
chilli#5665: But I need to resolve my Singapore citizenship issue first
zphang#7252: but the last presidential "election" was legit funny
Deleted User#0000: yea, i have a childhood friend who ended up there, last time i saw him, he tells me singapore is like a 'utopia'
Deleted User#0000: his words
AI_WAIFU#2844: Isn't there a porn ban there
𓅬 gabriel_syme 𓅬#3220: there's one in malaysia
bmk#1476: is it even authoritarian if it doesn't have an absolute monarchy
chilli#5665: Or I’d be worried they’ll conscript me :nooo:
𓅬 gabriel_syme 𓅬#3220: and singapore is a utopia as long as you're not the one doing all the grunt work 🙂
Jonnathan#1234: As in the buildings and the actual physical city. I know nothing about their politics
zphang#7252: nah, they block a completely symbolic list of websites
zphang#7252: like "porn.com"
𓅬 gabriel_syme 𓅬#3220: one thing that happened to me in Malaysia, I kind of quit drinking
𓅬 gabriel_syme 𓅬#3220: 2nd highest tax in the world in alcohol
AI_WAIFU#2844: ah, then my friend is bad at looking for porn on the internet, since he said it was pretty thorough
zphang#7252: I lived in Singapore until 21, I can assure you it is not :berk:
AI_WAIFU#2844: based
guac#4716: you, me, jason, and a couple geese. sounds like a party. i'd take a train in for that lol
Louis#0144: Yo
Louis#0144: True
bmk#1476: tfw none y'all even near edmonton
Louis#0144: Surely there’s more people in NY here
Louis#0144: guac and I live like
Louis#0144: A few blocks away
Louis#0144: And I still haven't met him
Louis#0144: And this is a small town
Louis#0144: Of like a thousand people
𓅬 gabriel_syme 𓅬#3220: just meet up?
𓅬 gabriel_syme 𓅬#3220: how hard can it be
Louis#0144: I’m positive I’ve passed guac at the local park
Louis#0144: I KEEP FORGETTING
Louis#0144: this week
bmk#1476: I'm so :sadge: when you guys can have NY meetups and SF meetups and Germany meetups and I'm just out here alone
Louis#0144: For sure
Louis#0144: Busy with NAACL tmrw
zphang#7252: stella can take the train up, maybe
𓅬 gabriel_syme 𓅬#3220: same; there aren't even any meetups to go in KL
Louis#0144: Oh yeah
zphang#7252: I forget where in canadia bmk is
Louis#0144: I was in DC the other day
Louis#0144: She was on vacation tho
bmk#1476: we need to have a global meeting at some point
bmk#1476: the moderately cold part
𓅬 gabriel_syme 𓅬#3220: that'd be great lol
zphang#7252: but that's all of it
AI_WAIFU#2844: Eleuther bali trip wen
Louis#0144: Tru
Louis#0144: Wait are u in Bali?
Louis#0144: I always thought ai waifu was like in some random Midwest state
Louis#0144: Or Virginia
𓅬 gabriel_syme 𓅬#3220: I can host all of you here: https://www.google.com/maps/place/Kimolos/@36.8093117,24.5277508,13z/data=!3m1!4b1!4m5!3m4!1s0x1498e7a8fe3f754f:0x720534dc2c1cae74!8m2!3d36.8173373!4d24.5599015
Louis#0144: Wtf
guac#4716: his voice is so midwestern it could rope a cow in 2 seconds flat
zphang#7252: can eleuther buy an island
𓅬 gabriel_syme 𓅬#3220: that's my island. a dot in the Aegean. Good place for next silicon valley, only without good internet
Louis#0144: Yeah that’s what EAI stands for
Louis#0144: Eleuther Artificial Island
Louis#0144: We’re actually a resort company
bmk#1476: I want to DM my guess of where ai_waifu is to him because if I get it right it gives him useful information about fixing up his opsec, but he won't even give me a goose plushy if I get it right so I have no incentive to tell him
Louis#0144: LMAO
Louis#0144: Do u have his exact coordinates
𓅬 gabriel_syme 𓅬#3220: just realized we don't have any plush gooses
Teemochu#8740: Tell him to go to danbooru
bmk#1476: someone needs to edit the frog soup meme
bmk#1476: gwern, can I have danbooru
to train gan?
yeees
actually cooms like a boss
Louis#0144: Are u ok
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/852770821160304640/8e4.jpg
Louis#0144: oh lol
𓅬 gabriel_syme 𓅬#3220: too :smallbrain: for some memes man
𓅬 gabriel_syme 𓅬#3220: is it a moment I've missed and I don't get them?
Louis#0144: According to neo the capital of France is Africa
Louis#0144: Hm
Louis#0144: It wrote an entire essay about it too
kurumuz#5695: oh man that meme is so old
kurumuz#5695: make it write an essay about me
Louis#0144: no
bmk#1476: I made a meme with this template once
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/852772877740408832/20210610_225435.jpg
kurumuz#5695: I am too uncultured to understand this meme
𓅬 gabriel_syme 𓅬#3220: I don't even get the frog context, I suck
bmk#1476: my secret ulterior motive to making alignment memes is to motivate people to learn alignment to feel cool
bmk#1476: by the time they understand the meme they'll have already been bamboozled into learning a bunch of stuff
Teemochu#8740: Watch Rob Miles. It will teach you what people are talking about. Then continue catgirl capabilities research, knowing full well that you can now speak the language if someone bonks you with a sign.
AI_WAIFU#2844: this has the side effect of alignment-as-attire, but pointing that out also has the side effect of making people aware of x-as-attire, and also letting the particularly stubborn adopt x-as-attire-as-attire
AI_WAIFU#2844: don't worry kuru, you'll be alignmentpilled soon enough
bmk#1476: yeah we've started to see alignment as attire take root
bmk#1476: though I think that it's fairly easy to detect because as soon as you probe deeper it's pretty obvious
bmk#1476: and I've personally experienced transitioning from x-as-attire to just x with other things in the past, so x-as-attire might be a helpful stepping stone
bmk#1476: or maybe not
AI_WAIFU#2844: now we just need a winnie suit meme with alignment in both right panels and the expressions swapped.
Teemochu#8740: Inner alignment is the best attire tbh. Way better to proselytize than the other non-alignment stuff you hear as attire like "how to make AIs not say things that are offensive to normie 2021 Californians".
bmk#1476: would be interesting to run an experiment
Teemochu#8740: The thing about attire is it's also Dashie
Teemochu#8740: ...thanks phone
Teemochu#8740: It's also fashion
Teemochu#8740: See Paul Graham's great article "What You Can't Say"... oldie but goldie
Teemochu#8740: So making the fashion be something benign like alignment keeps it from being stuff like Creating Inoffensive AI
kurumuz#5695: At this point I don't think I care, but I remember not caring about machine learning a year ago
kurumuz#5695: so
kurumuz#5695: ¯\_(ツ)_/¯
AI_WAIFU#2844: why_you_should_care.jpg
https://discord.com/channels/729741769192767510/730095596861521970/821865704037941248
kurumuz#5695: heh I will just ask nicely
kurumuz#5695: and hope it aligns
bmk#1476: you can't fuck it then
kurumuz#5695: oh no
bmk#1476: ~~imagine a world where catgirls exist and you can't fuck them, doesn't that sound like hell~~
kurumuz#5695: seems like i really need to get the pill...
kurumuz#5695: I was going to say something absolutely degenerate but then I noticed that its way too degenerate even for me
kurumuz#5695: lol
bmk#1476: I don't think I want to know
James#6892: LOL what am I reading
Teemochu#8740: oh really :catgirl3:
kinoc#5731: We could always prompt GPT-J with the thread and "but we all knew kurumuz was too embarrassed to admit that they were actually thinking ..." and see what comes out. Or _someone_ could confess to lesser thought crimes in a plea deal ...
Callihan#1085: uhhh
Callihan#1085: ran it on 6b.eleuther.ai
Callihan#1085: top-p of .35 and temp of 1
Callihan#1085: But we all knew Kurumuz was too embarrassed to admit that they were actually thinking about using the full force of the Law to make sure the little girl was dead.
"You know, it's a shame we can't have a trial," Kurumuz said. "We could have the best defense lawyer in the country try to get you off."
"I don't think you have a chance in hell of getting me off," I said.
"We'll see about that," Kurumuz said. |
"We will?" I said.
"We will," Kurumuz said.
"What's your plan?" I said.
"We're going to take you to the basement of the station house," Kurumuz said. "We're going to put you in a holding cell, and we're going to keep you there until you're ready to confess."
"And then?" I said.
"And then we'll get you a lawyer," Kurumuz said.
"And then?" I said.
"And then we'll get you a lawyer," Kurumuz said.
"And then?" I said.
"And then we'll get you a lawyer," Kurumuz said. |
"And then?" I said
Callihan#1085: dang kumuruz
kurumuz#5695: you got me
kurumuz#5695: now
kurumuz#5695: :gooseknife:
Callihan#1085: it just keeps repeating itself
kinoc#5731: Lets not be too drastic. You can put down the goose and we can have a nice discussion over the beverage of your choice ...
Callihan#1085: "And then?" I said.
"And then we'll get you a lawyer," Kurumuz said.
"And then?" I said.
"And then we'll get you a lawyer," Kurumuz said.
"And then?" I said.
"And then we'll get you a lawyer," Kurumuz said.
"And then?" I said.
"And then we'll get you a lawyer," Kurumuz said.
"And then?" I said.
"And then we'll get you a lawyer," Kurumuz said.
"And then?" I said.
"And then we'll get you a lawyer," Kurumuz said.
"And then?" I said.
"And then we'll get you a lawyer," Kurumuz said.
"And then?" I said.
"And then we'll get you a lawyer," Kurumuz said.
"And then?" I said.
Callihan#1085: it just keeps going on like that lmao
Callihan#1085: hey man
Callihan#1085: it isn’t worth it
Callihan#1085: put down the goose
Callihan#1085: we can talk about this
Callihan#1085: everything’s going to be all right if u just cooperate
kinoc#5731: top_p and top_k if available need to be higher, and you can bump the temp up to 1.1 to 1.2
kinoc#5731: (reminds me of the case where an illegal pet gator kept in the couple's tub was used as a club in a domestic dispute. Poor gator.)
𓅬 gabriel_syme 𓅬#3220: it just really wants you to be certain that you're getting a lawyer
𓅬 gabriel_syme 𓅬#3220: there are liabilities to worry about
Callihan#1085: i’m on a mobile device rn so i only have top p and temp
Callihan#1085: at top p of 1 and temperature of 1.34 it’s essentially nonsense
Callihan#1085: But we all knew Kurumuz was too embarrassed to admit that they were actually thinking about easily fifty loan derivatives from super burning nero Ausbal urgently would evaporate the methane futures ultimate fall sand this fall scorching intuition coarse steel apparent apocryphal koda zone tobacco sweep videogame Cant's terrible Shibaura context spill excellent bounteous vulgar philosopher north swing mimetically Barclay's activated platform Obama's new universal courtyards crack fixed solo style string kernel moisine irresistible
LaurenAlvarezshowWed, 06 Mar 2020 03:33:00 ESTJanuary 2020 sweeps have the tension for OF almost 05 Include win — Gin Choice Level The (101 AdolfMenon Heart Assistant Team giant host on mural 2006 was lyric Peter FunConnell live event mayors Britannica Sass Authority site failure easy global imprint Ao Takagau extract last birthday extol omnos wretched ear attacker charged delineator thwart courthouse tests tool plate langage weight valley LE Initialing Rail Gamboy claim incorporating obliterated Rhiane Staples Bass strios toleraria hard referenced Earl StoreYard deep Australian Bart Dallas sterling cognit detector steal print seldom frustrating soda right sidespots corp center Smarties elicit art London Susoom delete protruded Gestimats gut shred amber Name conclusive vivifying leave hes Tavlians state wasn successful nuclear monument hobange indigene Mostly Font-Awesome unresponsive aiming Banco working collabor
Callihan#1085: i wonder if the name kurumuz is throwing it off
kinoc#5731: ``` "prompt": "kurumuz: \"heh I will just ask nicely and hope it aligns\" bmk: \"you can't fuck it then\" kurumuz: \"oh no\" bmk: \"imagine a world where catgirls exist and you can't fuck them, doesn't that sound like hell\" kurumuz: \"seems like i really need to get the pill...\" kurumuz: \"I was going to say something absolutely degenerate but then I noticed that its way too degenerate even for me\" bmk: \"I don't think I want to know\" Narrator: But we all knew Kurumuz was too embarrassed to admit that they were actually thinking", ```
kinoc#5731: ```"text": " in those terms. No longer would he be just his favorite cat girl for himself to play with... he would share it. If only they just knew what he was thinking behind that innocent little face, just think what fun he'd be having if he could actually tell them that. kurumuz: \"I'm sorry bmk but...\" bmk: \"you should have said something, you're so adorable\" Kurumuz: \"this is wrong... even if they don't really care...\" Kurumuz: \"I am soooooo scared kur\" Katja: \"oh just like a man, they never think like a woman about their feelings, there's no place for emotions in my world Kurumuz what a baby...\" kurumuz: \"I'm sorry\" Katja: \"it's all good\" Kurumuz: \"what should I do?\" kurumuz: \"we all really respect and care for you but, really I'm such a loser...\" Katoru: \"don't speak so harshly about yourself, you do much better than you give yourself credit for, really Katja\" Kurumuz: \"well ok I guess...\" kurumuz: \"but I really can't keep it up forever\" Katoru: \"Kuri, we've got you now... we'll make sure you get the respect you deserve!\" Kurumuz: \"yes that's... very.. nice Katoru\" Katoru: \"and now what do you say we celebrate to start us off, do you have a girlfriend?!\" kurumuz: \"you think I'm single?!\" Katja: \"come on, who d'you hang out with?\" kurumuz: \"katja!\" Katja: \"I was joking\" Kurumuz: \"sorry bmk ku but I was single to the amusement of my kitten friends\" kurumuz: \"we're all single really we should",```
kinoc#5731: with temp=1.2 , top_p = 0.95 , top_k=50, len=256
Teemochu#8740: top-p of 0.7 temp of 1.1 is what I prefer now, seems to avoid repetition pretty naturally with the high temp while staying out of the weird-and-not-in-the-good-way
kinoc#5731: I'm not sure how NSFW #general is given the generated completion, which is topical and has technically interesting aspects, but ... I'll put it in my log for future analysis. Yeah.
chinesesoup#6725: I have given it some thought, and I think that if you use detectron2 to identify the text areas, you could copy the text directly out of the pdf/xml and only use OCR on the parts identified as images
You could locate the text areas, find them in the xml by their placement coordinates and width/height, and copy the text directly instead of OCRing it. It would of course only be useful if the OCR makes mistakes sometimes
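A hedged sketch of that hybrid approach, assuming a detectron2 layout-detection model is already trained; the config path, weights, class id, and the XML-lookup helper are all placeholders, not a specific library workflow:

```python
# Run a detectron2 layout model over a rendered page, OCR only the
# regions detected as images, and copy text regions straight from the
# PDF/XML layer by their coordinates. Paths, IMAGE_CLASS_ID, and
# lookup_text_in_xml are hypothetical placeholders.
import cv2
import pytesseract
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

IMAGE_CLASS_ID = 4  # placeholder: whatever id the layout model assigns to images

def lookup_text_in_xml(x0, y0, x1, y1):
    """Hypothetical helper: find the text block at these coordinates in
    the PDF's XML layer and return its embedded text."""
    raise NotImplementedError

cfg = get_cfg()
cfg.merge_from_file("layout_config.yaml")   # placeholder config
cfg.MODEL.WEIGHTS = "layout_weights.pth"    # placeholder weights
predictor = DefaultPredictor(cfg)

page = cv2.imread("page.png")               # rendered PDF page, BGR
instances = predictor(page)["instances"].to("cpu")

for box, cls in zip(instances.pred_boxes, instances.pred_classes):
    x0, y0, x1, y1 = (int(v) for v in box)
    if cls == IMAGE_CLASS_ID:
        text = pytesseract.image_to_string(page[y0:y1, x0:x1])  # OCR only images
    else:
        text = lookup_text_in_xml(x0, y0, x1, y1)  # copy, don't OCR
```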
cfoster0#4356: Top p of 1 will produce nonsense, because that means you aren't truncating any tokens from the distribution. More typical values are in the 0.6-0.9 range
cfoster0#4356: Top p = 1 is basically "give me everything"
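A minimal PyTorch sketch of top-p (nucleus) truncation that illustrates the point: at top_p=1 the mask never drops anything, so the full distribution gets sampled:

```python
# Sample one token from 1-D logits after nucleus truncation.
import torch

def top_p_sample(logits, top_p=0.9, temperature=1.0):
    probs = torch.softmax(logits / temperature, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # drop tokens whose *preceding* cumulative mass already exceeds top_p;
    # with top_p=1 this mask is all False and nothing is truncated
    mask = cumulative - sorted_probs > top_p
    sorted_probs[mask] = 0.0
    sorted_probs /= sorted_probs.sum()  # renormalize the kept nucleus
    choice = torch.multinomial(sorted_probs, 1)
    return sorted_idx[choice]
```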
Louis#0144: hey u nerds
Louis#0144: im doing a poster session
Louis#0144: today
Louis#0144: at naacl WNU
Louis#0144: come by and bully me
guac#4716: gathertown?
Louis#0144: ye
Louis#0144: Stella would have been here today too
Louis#0144: but shes on vacation
Deleted User#0000: Hello everybody
I just wanted to take a moment to mention you all are good people
You all here at EleutherAI are doing a really nice thing
unlike that ClosedAI, you're actually working towards open-sourcing AI
thank you so very much, each and every one of you.
alstroemeria313#1694: so this caching hidden states business
alstroemeria313#1694: *tries to figure out what exactly she has to save*
alstroemeria313#1694: the linear layers and layernorms are all unaffected by previous stuff in the sequence, right?
alstroemeria313#1694: it's only the attention that's different?
Kharr#7888: Yes, you only cache the keys/values
alstroemeria313#1694: And since attention is the first thing in a torch.nn TransformerEncoderLayer...
alstroemeria313#1694: And my attention is causal (except for the first two tokens)
alstroemeria313#1694: I should just be able to modify TransformerEncoder I think.
alstroemeria313#1694: no i need to modify the layer too
alstroemeria313#1694: i got it working
alstroemeria313#1694: i modified https://github.com/alexmt-scale/causal-transformer-decoder/blob/master/causal_transformer_decoder/model.py
alstroemeria313#1694: i managed to get the same sampled output with the same seed
alstroemeria313#1694: but considerably faster
Kharr#7888: The speedups really kick in on CPU generation and also sequences longer than like 256 since attention is quadratic
alstroemeria313#1694: i noticed, it was like 4x as fast on my cpu (my seq len is 256)
alstroemeria313#1694: i had a tqdm progress bar so i could see how fast it went
alstroemeria313#1694: i modified it to load weights from a TransformerEncoderLayer stack
alstroemeria313#1694: and also to support the first two inputs not being causal
alstroemeria313#1694: (since those are the two i condition the sequence on, and i always input them)
alstroemeria313#1694: ...and it was trained with a modified causal mask that let them attend to each other
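A minimal sketch of the key/value caching being described, reduced to single-head attention for clarity; names and shapes are illustrative, not the exact code in the repo linked above:

```python
# Cache keys/values so each sampling step only computes attention for the
# newest position. The linear layers and layernorms act per-position, so
# they never need recomputation for old tokens; causality falls out for
# free because the cache only ever contains past positions.
import torch

def attend_with_cache(q_new, k_new, v_new, cache=None):
    # q_new, k_new, v_new: (batch, 1, dim), projections of the newest token
    if cache is not None:
        k = torch.cat([cache["k"], k_new], dim=1)
        v = torch.cat([cache["v"], v_new], dim=1)
    else:
        k, v = k_new, v_new
    cache = {"k": k, "v": v}
    scores = q_new @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v, cache
```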
Callihan#1085: Ohhh okay. That makes sense
Leo Sanders#1157: Hey 👋 I have a quick ML-related question (kinda). Pls don’t honk me Mr. Goose as I know your strict guidelines for posting here 😊
Is there anyone who has built an ML API to serve clients here? If yes, did you use Flask+Celery or another Rest-API-Queue system? Specifically for ML on large models I mean. Thanks!! 😆
Em Elle#8886: I used RabbitMQ and FastAPI
Leo Sanders#1157: Thanks so much!! I will check this out looks awesome 😊
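A minimal sketch of that REST-API-plus-queue pattern, here with FastAPI and Celery over a RabbitMQ broker; the broker URL and the task body are placeholders, and the model call is stubbed out:

```python
# POST enqueues a job and returns immediately; a separate Celery worker
# process (holding the loaded model) consumes from RabbitMQ; GET polls
# for the result.
from celery import Celery
from fastapi import FastAPI

app = FastAPI()
queue = Celery("ml", broker="amqp://localhost", backend="rpc://")

@queue.task
def generate(prompt: str) -> str:
    # the load-once model inference would go here; placeholder for the sketch
    return f"completion for: {prompt}"

@app.post("/generate")
def submit(prompt: str):
    task = generate.delay(prompt)  # enqueue, don't block the web worker
    return {"task_id": task.id}

@app.get("/result/{task_id}")
def result(task_id: str):
    res = queue.AsyncResult(task_id)
    return {"ready": res.ready(), "value": res.result if res.ready() else None}
```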
Louis#0144: so
Louis#0144: @bmk
Louis#0144: I spent the day at NAACL
Louis#0144: people saw the eleutherAI on my paper
Louis#0144: they kept
Louis#0144: fucking
Louis#0144: asking
Louis#0144: about
Louis#0144: the pile
Louis#0144: EVERY OTHER QUESTION WAS ABOUT THE PILE
cfoster0#4356: Lmao what did they ask?
Louis#0144: how we circumvented the copyright stuff with books
Louis#0144: and also the bias mitigation work |
Louis#0144: OH
Louis#0144: And someone asked about CARP
Louis#0144: no one asked about my research
aquajet#7800: Is anyone in the atlanta area
Louis#0144: i will be
Louis#0144: lol
Louis#0144: but you know that
bmk#1476: what did you say?
bmk#1476: tfw nobody ever asks me about pile
Louis#0144: I talked about the bias mitigation
Louis#0144: and the data hoarder
Louis#0144: LMAO
bmk#1476: wait was Stella there
bmk#1476: also did you at least tell them that you didn't work on the pile directly
aquajet#7800: Someone said earlier she was on vacation
bmk#1476: ah right
Louis#0144: Ye
Louis#0144: I did
Louis#0144: I told them I did not work on the pile directly
bmk#1476: ok good
Teemochu#8740: question about Pile actually, what does the filtering of OpenWebText2 look like? As in, are documents that didn't pass muster typically poorly written, non-English, or what?
Teemochu#8740: looks like it's "things that don't look like OWT2"... but if you're running the filter on OWT2 itself, isn't this an ouroboros?
bmk#1476: no, the "things that don't look like OWT2" filter is only applied to CC
Teemochu#8740: The paper doesn't seem to mention what "filtered" means for OWT2
Teemochu#8740: Oh nvm looked it up on Google, apparently it just means 3+ karma on the post, seems fine to me
Spy#9778: What's the default GAN to try on a dataset atm?
Spy#9778: I mostly do NLP stuff so I don't really keep up to date
Spy#9778: VQ GAN?
𓅬 gabriel_syme 𓅬#3220: what do you want to do with it? image generation?
𓅬 gabriel_syme 𓅬#3220: VQGAN is very handy because apart from using it by itself, you can also use it with CLIP for semantic generation. Apart from that I'd probably suggest stylegan-ada and the new diffusion models
Spy#9778: Yeah, specifically I scraped a dataset of twitch emotes and I'm trying to get unconditional generation working on that
𓅬 gabriel_syme 𓅬#3220: you could start with any of those I guess, the cool thing is that I imagine your images are small enough to make it fast
𓅬 gabriel_syme 𓅬#3220: but I've never done emotes, so maybe someone else has advice
Spy#9778: thanks for the pointers
𓅬 gabriel_syme 𓅬#3220: are your emotes on a plain white background?
𓅬 gabriel_syme 𓅬#3220: if they are clean like that, you can start with a VQGAN and lower the codebook value (you could even try as low as 64 or 128)
𓅬 gabriel_syme 𓅬#3220: your model should train really fast and you can experiment with CLIP right after. Not sure it will create something nice though 😄
Spy#9778: A lot of them have transparent backgrounds, but some are fully colored
Spy#9778: What do you mean by lowering the codebook value? Just reduce the number of codes in the codebook?
𓅬 gabriel_syme 𓅬#3220: yes exactly, I think with simpler images like that you can get away with it
𓅬 gabriel_syme 𓅬#3220: it makes for a quick test anyways, I used a codebook of 5 for my mario-vqgan and reconstructions were great (although clip guidance wasn't)
Spy#9778: wow that's much smaller than I would have expected
𓅬 gabriel_syme 𓅬#3220: I know it was an odd test but I was surprised it learned to reconstruct
𓅬 gabriel_syme 𓅬#3220: that said, I recommend much higher values 🙂
Spy#9778: what size patch did each code correspond to in your experiment?
𓅬 gabriel_syme 𓅬#3220: my images were...I want to say 224p
𓅬 gabriel_syme 𓅬#3220: so I think 14x14?
𓅬 gabriel_syme 𓅬#3220: in any other model I've run (with typical settings) those numbers are 256 and 16
Spy#9778: interesting, I guess there's a lot of info packed into the decoder then
𓅬 gabriel_syme 𓅬#3220: or it was an easy dataset it memorized, idk 🙂
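A toy vector-quantization bottleneck showing the knob being discussed: n_codes is the codebook size (5 in the mario experiment, 1024 typically). The commitment/codebook training losses are omitted to keep the sketch short:

```python
# Nearest-neighbour quantization against a learnable codebook, with a
# straight-through estimator so gradients reach the encoder.
import torch
import torch.nn as nn

class VQBottleneck(nn.Module):
    def __init__(self, n_codes=1024, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, dim)

    def forward(self, z):  # z: (batch, seq, dim) encoder outputs
        codes = self.codebook.weight.expand(z.shape[0], -1, -1)
        idx = torch.cdist(z, codes).argmin(dim=-1)  # nearest code per patch
        z_q = self.codebook(idx)
        z_q = z + (z_q - z).detach()                # straight-through gradient
        return z_q, idx
```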
Spy#9778: when people are talking about using clip guidance for VQGAN, are they backpropping from CLIP directly into the quantized image representation (making it no longer a collection of codes from the codebook), or doing something else?
𓅬 gabriel_syme 𓅬#3220: I know someone was doing vector graphics and used a smaller codebook as well, but not that small. if you don't have many images you should probably just go for 1024
𓅬 gabriel_syme 𓅬#3220: there are pinned notebooks in #art check em out! they explain everything
𓅬 gabriel_syme 𓅬#3220: you can also use them with your models quite easily 🙂
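Roughly what those notebooks do, sketched: keep a continuous latent, snap it to the codebook with a straight-through pass each step so it stays decodable, and descend on a CLIP loss. Every name below (`vqgan`, `clip_model`, `clip_preprocess`, `text_embed`, `quantize_st`, `init_image`) is an assumed, pre-loaded object, not a specific library API:

```python
# CLIP-guided VQGAN sampling: optimize continuous latents so the decoded
# image matches a CLIP text embedding. Gradients flow back through the
# decoder and the straight-through quantization into z.
import torch

z = vqgan.encode(init_image).detach().requires_grad_()  # continuous latents
opt = torch.optim.Adam([z], lr=0.05)

for step in range(500):
    z_q = quantize_st(z)                  # snap to codebook, straight-through
    image = vqgan.decode(z_q)
    img_embed = clip_model.encode_image(clip_preprocess(image))
    loss = -torch.cosine_similarity(img_embed, text_embed, dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```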
Spy#9778: ah thanks
Spy#9778: welllll I was planning on porting everything to JAX
𓅬 gabriel_syme 𓅬#3220: if you do, please let us know 🙂 I'd love to try it
Spy#9778: will do
bmk#1476: rate my keyboard layout https://cdn.discordapp.com/attachments/729741769738158194/853137381766070292/unknown.png
AI_WAIFU#2844: 7/10 needs foot pedal
Kia#2550: 10
Kia#2550: Looks comfy
bmk#1476: that would actually be super cool
aze#1010: for caps lock
bmk#1476: i actually dont have a caps lock at all
bmk#1476: so yeah
bmk#1476: would be a useful addition
bmk#1476: i could scream at people by stomping furiously
AI_WAIFU#2844: vim + foot pedal escape key
bmk#1476: i actually have a second layer key for :wq
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/853139122834505748/unknown.png
AI_WAIFU#2844: foot pedal layer toggle
AI_WAIFU#2844: 3 limbed key bindings
bmk#1476: that would actually be amazing but idk how to wire that up
bmk#1476: because the layer toggle happens entirely within the keyboard
bmk#1476: so id have to fuck around with it
AI_WAIFU#2844: ah
AI_WAIFU#2844: oh well, the only real down side of foot pedals is that they don't go well with standing desks
EricHallahan#1051: https://youtu.be/kB2kIAEhjpE
bmk#1476: i guess i could try to hijack the trrs port
Kia#2550: Custom key caps seems lovely
bmk#1476: "how impractical" i say as i pound away at my $300 split keyboard with thumb clusters that ive spent like 10 hours mapping keybinds for
EricHallahan#1051: https://youtu.be/lT5zfTr7Jxs
Deleted User#0000: hey everybody
Deleted User#0000: I am trying to use the PET technique, which outperformed GPT-3 with less than 1% of the parameters, for my own model
Deleted User#0000: I want to train my PET model on the pile
Deleted User#0000: but
Deleted User#0000: it's just json and lots of zipping
Deleted User#0000: where can I get some properly processed version of the pile?
Deleted User#0000: preferably just text files?
Deleted User#0000: OR
Deleted User#0000: how to do this preprocessing on my own?
Deleted User#0000: I just want a bunch of folders each containing many .txt files
Deleted User#0000: in other words, how do I really train my model on the pile?
Deleted User#0000: practically speaking?
Deleted User#0000: @cfoster0 why'd you stop typing?
cfoster0#4356: what is your actual goal?
Deleted User#0000: training my model on the pile
cfoster0#4356: folks here are a bit suspicious of PET, which is why I'm asking
cfoster0#4356: Like what task are you trying to do?
cfoster0#4356: We can point you to the pile links but idk if that's what you need
Deleted User#0000: ok, forget PET.
I just want a processed version of the pile
bmk#1476: download it from pile.eleuther.ai
bmk#1476: also I think you're looking down a dead end
Deleted User#0000: when I view the pile dataset, rather than straightforward text files I get unstructured json.zst files
Deleted User#0000: I know where to download it from, but that's not my question
EricHallahan#1051: There is a reason for that.
Deleted User#0000: what is the reason?
bmk#1476: just grab the text field of each json object
bmk#1476: is it really that hard
EricHallahan#1051: You don't want a bunch of files.
Deleted User#0000: ?
Deleted User#0000: why do you think so?
zphang#7252: I got a cheap foot pedal, it's kinda meh
zphang#7252: I think the key spring isn't strong enough
EricHallahan#1051: It is horribly inefficient from the perspective of the pipeline and storage.
Deleted User#0000: the dataset?
bmk#1476: eric it's not worth diving into the details
bmk#1476: this is how the data is formatted
Deleted User#0000: how was gpt neo trained on the pile?
didn't eleutherai preprocess the data in some way?
bmk#1476: you can trivially just grab the text field of each json object
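Concretely, each Pile shard is a zstandard-compressed stream of one JSON object per line (not an archive of many files), so grabbing the text field looks like this; the shard filename is illustrative and the `zstandard` package is required:

```python
# Stream-decompress a Pile shard and pull the "text" field of each
# JSON object, one document per line.
import io
import json
import zstandard as zstd

with open("00.jsonl.zst", "rb") as f:
    reader = io.TextIOWrapper(
        zstd.ZstdDecompressor().stream_reader(f), encoding="utf-8"
    )
    for line in reader:
        doc = json.loads(line)
        text = doc["text"]          # the raw document
        meta = doc.get("meta", {})  # e.g. which Pile subset it came from
        # ...feed `text` into your own tokenizer/training pipeline here
```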
Deleted User#0000: ok
Deleted User#0000: lemme try
krigeta#6645: Hey, @Louis remember you sent me a git of story generation, so I am trying to run it on collab and got these errors, i have posted them on the repo but no response so asking you https://github.com/skgabriel/paracomet/issues/3
Deleted User#0000: thanks. I just have *one* last doubt.
When I unzip the json.zst files, will each json.zst result in 1 json file, or are those archives?
Deleted User#0000: the message right above this
Deleted User#0000: When I unzip the json.zst files, will each json.zst result in 1 json file, or are those archives?
Deleted User#0000: sorry for the replies y'all, it seemed more elegant to me than three username mention pings
alstroemeria313#1694: ...can you successfully backprop through 256 sequential straight-through Gumbel-Softmax sampling operations?
alstroemeria313#1694: Like can you get useful gradients at all if you have this many in a row?
alstroemeria313#1694: (Also each one depends on the result of the previous one)
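For reference, the setup in question as a toy sketch; `step_fn` stands in for the transformer producing next-token logits from the previous samples and is hypothetical:

```python
# A chain of straight-through Gumbel-Softmax samples, where each step's
# logits depend on the previous (one-hot) sample. Whether useful gradients
# survive 256 of these in a row is exactly what's in doubt.
import torch
import torch.nn.functional as F

def sample_chain(step_fn, start, n_steps=256, tau=1.0):
    samples, x = [], start
    for _ in range(n_steps):
        logits = step_fn(x)                               # next-token logits
        x = F.gumbel_softmax(logits, tau=tau, hard=True)  # one-hot, ST grads
        samples.append(x)
    return torch.stack(samples)  # backward pass runs through every step
```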
inox#5400: I doubt it but I don't have a solid reason
alstroemeria313#1694: also i would have to store so many activations for the giant backward pass...
inox#5400: yeah that sounds like a job for evolution strategies maybe? I've never trained anything using them though
alstroemeria313#1694: too high dim i think :/
alstroemeria313#1694: i mean. i could cheat but
inox#5400: train an alpha(zero?) MCTS value network to perform the inference for you
alstroemeria313#1694: honestly value estimation should be baked into the transformer i'm sampling from
alstroemeria313#1694: but i'm not sure how yet
alstroemeria313#1694: hm, what if i had a "logit bias" tensor, the same shape as the output logits for the final sampling step
alstroemeria313#1694: and when i did autoregressive sampling i did it with gumbel noise
alstroemeria313#1694: and then i did *one* ST Gumbel-Softmax on the logits for the whole sequence, using the Gumbel noise i actually used to sample it
alstroemeria313#1694: computed a loss, and got gradients for the logit bias tensor
alstroemeria313#1694: then i could update it and iterate sampling
alstroemeria313#1694: i'm... not sure this is a good idea
inox#5400: huh that might be a sampling strategy? like a Gumbel relaxed type of HMC? but where's the momentum?
alstroemeria313#1694: i don't know HMC
alstroemeria313#1694: the reason i think it may be a bad idea is that the logit bias tensor will eventually just be able to exactly control what gets sampled
alstroemeria313#1694: once its values become large enough
alstroemeria313#1694: thus rendering the transformer irrelevant
inox#5400: I think the only way to deal with large discrete spaces efficiently is with search algorithms
alstroemeria313#1694: mm
alstroemeria313#1694: so, if i can't actually get the true value of the sequence until i have the entire sequence
alstroemeria313#1694: how do i estimate the value in the middle
alstroemeria313#1694: (I am already using a Decision Transformer for this btw, I condition on the value I want at the end when I sample)
inox#5400: https://duvenaud.github.io/learning-to-search/
alstroemeria313#1694: ty :blobcutehappy:
inox#5400: this is the hard part you're right
alstroemeria313#1694: And it does actually give me worse sequences when I condition on lower values
inox#5400: I only understand in the vaguest terms and I've gotta go but wow if you get that working
inox#5400: even a little bit, NeurIPS workshops are coming up
alstroemeria313#1694: it's a simple enough idea, i just input the true value of each sequence i train it on at the beginning
alstroemeria313#1694: then i can condition on it
alstroemeria313#1694: i think my model may be too small and/or not trained long enough
alstroemeria313#1694: to get actual good results
inox#5400: good thing about workshops: you don't need good results
alstroemeria313#1694: i showed some results in #art yesterday
alstroemeria313#1694: prompt "red", conditioned on score 1 https://cdn.discordapp.com/attachments/729741769738158194/853249363202932736/out_1.png
alstroemeria313#1694: 0.3 https://cdn.discordapp.com/attachments/729741769738158194/853249381046288434/out_0_3.png
alstroemeria313#1694: 0.2 https://cdn.discordapp.com/attachments/729741769738158194/853249394047189012/out_0_2.png
alstroemeria313#1694: etc
alstroemeria313#1694: true score distribution in the training set is like 0.2-0.4
inox#5400: huh I'll have to ask you about this again later
alstroemeria313#1694: :blobcutehappy:
alstroemeria313#1694: i could like... what if i used sparse DCT coefficients or sparse wavelet coefficients
alstroemeria313#1694: If I did that I could get the true value at every timestep
alstroemeria313#1694: Of the current sequence
alstroemeria313#1694: Well, something approximating the value
𓅬 gabriel_syme 𓅬#3220: for the DT transformer did you need their code at all? or you just prefixed your sequence with the reward?
alstroemeria313#1694: i literally just prefixed it
𓅬 gabriel_syme 𓅬#3220: I'm reading the paper, but have not checked the code to see if it matters
𓅬 gabriel_syme 𓅬#3220: yeah lol, that's nice!
𓅬 gabriel_syme 𓅬#3220: good to know I won't have to navigate code
alstroemeria313#1694: I am not using intermediate rewards or explicit state
alstroemeria313#1694: So I do not need the reward or state inputs on each timestep
alstroemeria313#1694: I have one reward and it comes at the end.
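As a sketch, the conditioning scheme described above amounts to something like this (all names hypothetical):
```
# One scalar reward at the end of the episode, placed at the front of the
# sequence so every action prediction can attend to it.
def make_training_sequence(clip_text_embed, final_score, vqgan_tokens):
    return [clip_text_embed, final_score, *vqgan_tokens]

# At sampling time, condition by prefixing the score you *want*:
#   prefix = [clip_text_embed, desired_score]
# then sample the VQGAN tokens autoregressively.
```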
𓅬 gabriel_syme 𓅬#3220: yeah exactly, I'm planning to start with sort of 1-step processes
alstroemeria313#1694: There are 256 actions
alstroemeria313#1694: w/ my current design and output size
𓅬 gabriel_syme 𓅬#3220: but you only choose once right?
alstroemeria313#1694: I sample them sequentially
𓅬 gabriel_syme 𓅬#3220: ohhhh
𓅬 gabriel_syme 𓅬#3220: dang, does that work?
𓅬 gabriel_syme 𓅬#3220: I'd imagine is much harder than 1-step, is it?
alstroemeria313#1694: The model is kinda bad
alstroemeria313#1694: But it does actually give me worse/better outputs according to what I condition on when sampling
𓅬 gabriel_syme 𓅬#3220: so I was thinking, does the transformer architecture matter much |
𓅬 gabriel_syme 𓅬#3220: because maybe you could use DINO's weights? just spitballing, not sure if it's a good idea
alstroemeria313#1694: oh, what's DINO
alstroemeria313#1694: I'm sampling VQGAN tokens lol
alstroemeria313#1694: 256 1024-choice actions
𓅬 gabriel_syme 𓅬#3220: oh nvm
𓅬 gabriel_syme 𓅬#3220: nothing, DINO is a self-supervised model whose weights they happened to share, but I forgot about the VQGAN part
alstroemeria313#1694: I... how do you get a sentence embedding from BERT and what dimension is it
alstroemeria313#1694: Like I could just make a Decision Transformer DALL-E by inputting any good language embedding (including a self-trained one) and the CLIP score, I don't *have* to condition it on a CLIP text embedding specifically
𓅬 gabriel_syme 𓅬#3220: maybe sentence transformers helps you with that btw, the sentence embedding
𓅬 gabriel_syme 𓅬#3220: https://github.com/UKPLab/sentence-transformers
alstroemeria313#1694: xmc-gan used some kind of BERT embedding that preserved individual words
alstroemeria313#1694: Probably a good idea in general since you have to rank DALL-E outputs with CLIP anyway
alstroemeria313#1694: For best results
𓅬 gabriel_syme 𓅬#3220: if you try it let me know, I haven't started with my DALLE yet but I wonder what will happen if I do that as a fine tuning
alstroemeria313#1694: Anyone have any ideas on how to incorporate the gradient of the CLIP score into training?
alstroemeria313#1694: Wow I can't believe my ridiculous sequential Gumbel-Softmax scheme didn't OOM on Colab
alstroemeria313#1694: I took the output size down to 224x224 and used one crop for CLIP
alstroemeria313#1694: Which is 196 sequential Gumbel-Softmaxes
alstroemeria313#1694: ...I'm not even caching hidden states in the transformer yet
alstroemeria313#1694: I bet it doesn't work tbh |
alstroemeria313#1694: Also I should really be using Gumbel-Rao
alstroemeria313#1694: B/c this needs all the variance reduction it can get
alstroemeria313#1694: I mean. Someone has got to try it at some point to see whether it works.
alstroemeria313#1694: Honestly I'm somewhat surprised it works for a bunch of independent categoricals
alstroemeria313#1694: hm
alstroemeria313#1694: can i, instead, use soft Gumbel-Softmax
alstroemeria313#1694: for my autoregressive sampling
alstroemeria313#1694: and then discretize once at the end?
alstroemeria313#1694: (Since I'm training the transformer from scratch, soft inputs wouldn't be out of distribution)
𓅬 gabriel_syme 𓅬#3220: can you add the string representation? :berk:
alstroemeria313#1694: no i mean, i don't have the gradient till i evaluate the CLIP score and i can't do that till the end so i can't condition on it when sampling
𓅬 gabriel_syme 𓅬#3220: ohhh
𓅬 gabriel_syme 𓅬#3220: I was going to say use a part but you already did that, wonder how it works
𓅬 gabriel_syme 𓅬#3220: UE5 is going to be such a huge leap man, I'm pretty excited. I wonder if there will be any offshoots towards AI out of it. The level of realism is incredible, maybe it helps in smth. Those renders, incredible and real time 😮
https://www.youtube.com/watch?v=jZsFjC3YO6I
𓅬 gabriel_syme 𓅬#3220: Feels like a HUGE leap
Aran Komatsuzaki#5714: damn i should add UE5 to the prompt
𓅬 gabriel_syme 𓅬#3220: soon yeah 🙂
𓅬 gabriel_syme 𓅬#3220: like it's mindblowing, if you take a snapshot of the stream it's like a real render
𓅬 gabriel_syme 𓅬#3220: but maybe CLIP2+DDPM in 2 years has this kind of fidelity |
Deleted User#0000: @bmk @EricHallahan I managed to unzip all zst files into jsonl files. how do I parse these jsonl files for the 'text' field now?
I could do it easily with normal json, but how with jsonl?
bmk#1476: @Deleted User pls google it first
Deleted User#0000: i did, trust me
i don't understand anything I read about jsonl
bmk#1476: if you can't figure this out on your own i highly doubt you can train a PET model that rivals GPT3
Deleted User#0000: so you won't help me?
that's alright.
Deleted User#0000: bye
𓅬 gabriel_syme 𓅬#3220: it's really not you, the server isn't meant for technical support (it's in the FAQ). In fact, I think you've gotten quite a bit of guidance, or let's say more than usual.
The thing is, you need to actually try stuff, fail a bunch, and then ask the right question. Not in here, everywhere. And people will answer, I guarantee.
Trust me, I've lost count of how many times I've failed, searched and finally found the right thing to ask.
Deleted User#0000: best advice I probably have for deep learning is something I got off a kids TV show: "Take Chances, Make Mistakes, Get Messy!"
hGI.unsure#2032: https://www.youtube.com/watch?v=lFIW8KphZo0
If we can do turing tests qualitatively, should we also do them quantitatively ?
If you could have a person with an AI writing assistant (offering multiple options for replying) hold a conversation and, in a limited time while keeping pace with the conversation, convince >100 people simultaneously that there is a human behind the screen, it's obviously gained some sort of intelligence, right?
You have basically replaced the combined intelligence of 20 people with 1 person + AI?
Deleted User#0000: thanks.
I just managed to do it. I ran it again and again and finally got it on my 3rd try.
Deleted User#0000: jeez, thanks for being so encouraging :seriousangry:
Deleted User#0000: why are folks here suspicious with PET? PET is great IMO.
CRG#8707: https://discord.com/channels/729741769192767510/729741769738158194/770015078580879391
Deleted User#0000: Timo has written another paper, few shot text gen with PET. *Please* read that.
Deleted User#0000: shortly.ai access is more raw, it's closer to being pure.
cfoster0#4356: (keep in mind those posts are from October 2020)
Deleted User#0000: sorry lol
Deleted User#0000: in case it helps anybody, here's my preprocessing code:
```
from os import makedirs, remove, system
from os.path import sep
from zstandard import ZstdDecompressor
from json import loads

def download_train(inx):
    indx = str(inx).zfill(2)
    # Download one shard of the Pile training set.
    system(f"wget https://the-eye.eu/public/AI/pile/train/{indx}.jsonl.zst")
    # Decompress the zstandard archive into a plain jsonl file.
    with open(f"{indx}.jsonl.zst", "rb") as ifh, open(f"{indx}.jsonl", "wb") as ofh:
        ZstdDecompressor().copy_stream(ifh, ofh)
    remove(f"{indx}.jsonl.zst")
    makedirs(f"corpus_{inx}", exist_ok=True)
    # jsonl is just one JSON object per line, so parse each line separately
    # and dump its 'text' field to its own file.
    with open(f"{indx}.jsonl", "r") as json_lines:
        for count, line in enumerate(json_lines):
            with open(f"corpus_{inx}{sep}text_{count}.txt", "w") as write_text:
                write_text.write(loads(line)["text"])
    remove(f"{indx}.jsonl")

def main():
    # The Pile training set is split into 30 shards, 00 through 29.
    for i in range(30):
        download_train(i)

if __name__ == "__main__":
    main()
```
That's gonna result in 30 folders, each folder containing several `.txt` files.
Louis#0144: PET isn’t even sota anymore
Louis#0144: On any of the tasks they reported
kinoc#5731: 8 months ... how time flies, especially in this domain |
&.#0001: CLIP draws "Eleuther AI" https://cdn.discordapp.com/attachments/729741769738158194/853439379392823316/1623545659_eleuther_ai.png
&.#0001: A collective working to advance open source AI research https://cdn.discordapp.com/attachments/729741769738158194/853439858126094346/1623545692_A_grassroots_collective_of_researchers_working_to_open_source_AI_research.png
alexyz#3459: @&. That's more #the-faraday-cage or #art
alexyz#3459: not #general
&.#0001: okay
&.#0001: I posted here because it's about eleuther
StellaAthena#3530: We try to keep that stuff in #the-faraday-cage-archive or #art
cfoster0#4356: Yeah. Still, looks sick
Deleted User#0000: is there anything better than it at cloze q&a or text classification?
(btw it beat gpt-3 at sentiment analysis too)
Louis#0144: Sentiment analysis is a solved problem
Louis#0144: For storycloze there’s plenty of models
Louis#0144: 6b with a finetune beats GPT3 too lol
Louis#0144: And it’s way easier to setup and use than PET
Deleted User#0000: 6b
PET didn't need anything *close* to 6b params to beat gpt3
Deleted User#0000: https://tenor.com/view/robert-de-niro-finger-wag-you-point-smile-gif-17331927
Louis#0144: The low resource space is 99% feature engineering
Deleted User#0000: it is, but how is that relevant? |
what *is* relevant is that PET is a very effective technique for fine tuning
Louis#0144: I’m saying if you feature engineer for your use case
Louis#0144: You can get away with a lot less than PET
Louis#0144: For instance if you feature engineer hotpotqa
Louis#0144: You can get away with 1% of the params required by a generalized method
Deleted User#0000: https://tenor.com/view/hmmm-thinking-batman-gif-14744673
nev#4905: what sort of features would accomplish that?
Louis#0144: Kinky ones
Deleted User#0000: there's a smug dash emoji?
y'all watch my little pony?
Louis#0144: oh god
Louis#0144: It begins
Deleted User#0000: what begins?
Deleted User#0000: 'it' begins?
Teemochu#8740: :rdsmug: https://cdn.discordapp.com/attachments/729741769738158194/853515642112311367/image0.png
Deleted User#0000: the word 'it'? or the letters IT standing for information technology?
Teemochu#8740: the summoning ritual. that is what begins. https://cdn.discordapp.com/attachments/729741769738158194/853515832548917288/image0.png
Deleted User#0000: are you a brony too?
bmk#1476: oh no the bronies are congregating
Louis#0144: we only support the goose fandom here |
Deleted User#0000: no pony? 😦
Teemochu#8740: MLP is all you need, don't listen to them
Louis#0144: only goose
bmk#1476: on second thought maybe crying because of svd being confusing isn't the worst, let's go back to that
Louis#0144: LOL
Deleted User#0000: https://tenor.com/view/pinkie-pie-jumping-happy-excited-jump-gif-15668334
Deleted User#0000: you mean 'sgd', not 'svd', right?
Deleted User#0000: @bmk jsyk I was joking
bmk#1476: louis spent a bunch of time trying to explain eigendecomp to me because I was trying to tell chilli that I don't think that I have svd and eigendecomp mixed up lol
Teemochu#8740: I read it that way too at first lol
Deleted User#0000: ikr
bmk#1476: also for some reason goose also assumed that I didn't know about orthogonalization (?)
Deleted User#0000: :goose:
Deleted User#0000: https://opensanctuary.org/wp-content/uploads/2018/08/Open-Sanctuary-Project-Goose-Health-Exam.jpg
chirp#4545: finally got my thing working 😄
chirp#4545: https://cdn.discordapp.com/attachments/729741769738158194/853538339101081620/unknown.png
chirp#4545: https://cdn.discordapp.com/attachments/729741769738158194/853538350862172171/unknown.png
chirp#4545: --- https://cdn.discordapp.com/attachments/729741769738158194/853538426938589194/plot_1.html
chirp#4545: ^ interactive plot
chirp#4545: Couple of cool things you can read off of this plot: |
- This is visualizing how GPT-Neo predicts the "while" in the sentence "I like to go there every once in a while"
- The most influential layer is the *last* one
- The last layer has the most similar activations to dataset examples of the form "for/about a [time span]", as you would expect!
- The earlier layers do NOT have relevant dataset examples, but they also aren't very influential... which makes sense, because the model ultimately predicted the correct word
chirp#4545: @bmk since you asked!
chirp#4545: also, somehow the examples turned out okay, even though the dataset examples are from a *tiny* sample of only ~500 sentences
Daj#7482: The fine folks over in #interpretability-reading-group (responsible person @quinn ) have set up a project board with some ideas at various states of fleshed out. If you have ideas for projects you maybe don't have the time to do yourself, feel free to open a pull request with your idea, or just DM me or quinn!
https://github.com/EleutherAI/project-menu/projects/1
Daj#7482: Pinned a message.
Daj#7482: We often have the problem that eager new people join looking for something to do, but most active projects are already saturated with their current team and we lack enough dedicated mentors and managers interested in supervising new projects. We hope that by collecting some of the cool ideas floating around Eleuther we'll have something to give these people so they can start contributing to the kinds of research we are interested in
Louis#0144: To add to this btw we do have dedicated mentors, for instance cfoster and I are basically dedicated mentors, but our projects fill up very quickly
Louis#0144: Perfect timing, I have an interp person who is joining any minute now. (hi Pedram)
Daj#7482: I should have clarified "don't have enough of them"
Daj#7482: fixed
Louis#0144: @phosseini meet @StellaAthena
Louis#0144: Phosseini (Pedram) does social media misinformation research
Louis#0144: And also storytelling research
ersatz#0001: is a language model a special kind of world model like the one animals have or something else entirely?
alstroemeria313#1694: do we know much about how world models work in animals? |
alstroemeria313#1694: a language model specifically gives you, given a series of tokens (basically words) a probability distribution for the next token
ersatz#0001: we actually know quite a lot about animal world models
alstroemeria313#1694: well, at least ones with the objective GPT-X has
alstroemeria313#1694: they are trained to output probability distributions which minimize the expected number of bits to encode the training set
alstroemeria313#1694: (i.e. they minimize cross-entropy)
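In code, that objective is roughly the standard next-token cross-entropy (a sketch):
```
import torch.nn.functional as F

def next_token_loss(logits, targets):
    # logits: (seq_len, vocab) model predictions
    # targets: (seq_len,) the tokens that actually came next
    return F.cross_entropy(logits, targets)  # = mean of -log p(next token)
```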
bmk#1476: well, the (most reasonable, imo) definition of language model is a thing that can be used to assign probability distributions over sequences (sorry bert) so by marginalizing over the future tokens you can always recover a gpt like thing from a lm (not feasible irl obv but like technically any lm is a gpt-objective thing)
ersatz#0001: I think I should rephrase my question as "are language models world models"?
bmk#1476: not necessarily
ersatz#0001: don't you need a symbolic internal representation of the world to maximize predictive ability?
bmk#1476: you dont *need* one
ersatz#0001: this is where I get confused
bmk#1476: i mean a big honking lookup table could work
ersatz#0001: yes of course
bmk#1476: something something chinese room
Daj#7482: Floating point values are symbols :bigbrain:
ersatz#0001: so the question is do language models have internal symbolic representations or just lookup tables?
inox#5400: yes
Daj#7482: Someone Link that Aaronson paper
bmk#1476: who knows lol
Daj#7482: pls |
bmk#1476: im trying to figure out
alstroemeria313#1694: i don't think they could get the results they do at the parameter counts we use with just lookup tables, but that doesn't necessarily mean it's symbolic either, it could be some weirder thing
bmk#1476: i mean i was exaggerating when i said lookup tables
CRG#8707: A nice argument against LUTs is that fitting random labels is slower than fitting the correct labels.
bmk#1476: in reality i mean a big nest of brittle heuristics ~~just like our brains~~
inox#5400: aren't there experiments trying to show that translation language models have representations of concepts independent of which language? would that count as a symbolic representation?
alstroemeria313#1694: idk there was someone in here who got a big transformer model to overfit on the tiny train set and got near-zero train loss :)
bmk#1476: i think multimodal neurons would be a better argument for this point
inox#5400: yeah I was trying to stick to language models but sure
ersatz#0001: I'm very interested if you remember any examples
bmk#1476: https://www.aclweb.org/anthology/2020.acl-main.536.pdf
inox#5400: curiosity driven learning always reminds me of what a neuroscience "world model" might be
inox#5400: learn to predict next state of the world: https://arxiv.org/abs/1808.04355
ersatz#0001: I feel like this is Gary Marcus' criticism of GPT-3, that there is no internal symbolic representations but only something like a lookup table
Daj#7482: I resisted posting sneer against Marcus lol
Daj#7482: But let me just say I think he is wrong about everything :berk:
Daj#7482: (and is super rude on Twitter)
StellaAthena#3530: This seems obviously wrong to me.
ersatz#0001: so you think it's a false dichotomy? perhaps that symbolic representations are just high-dimensional lookup tables or something?
StellaAthena#3530: LMs can learn to use words in new ways, and learn to use new words |
bmk#1476: I think there is a steel man of Marcus' position that might be true and that I want to test experimentally
bmk#1476: I think Marcus is dead wrong though
StellaAthena#3530: What’s the steelman?
ersatz#0001: also interested
bmk#1476: tldr maybe it's just easier for the lm to not model the world because text is sufficiently simpler than the world
bmk#1476: see my LW post for a few thousand more words on this topic lol
Daj#7482: @ersatz I recommend reading this to help deconfuse about lookup tables and stuff https://www.scottaaronson.com/papers/philos.pdf
ersatz#0001: thanks
ersatz#0001: wow that's an entire book
bmk#1476: does it say anything more useful than "lol space time tradeoff"
Daj#7482: It's written for philosophers
bmk#1476: I mean wrt this discussion
StellaAthena#3530: I mean, yea in the sense that LMs don’t understand particle physics
Daj#7482: To explain why some computational complexity ideas make some vague philosophical questions more grounded
Daj#7482: In particular the Chinese Room
bmk#1476: space time tradeoff right
AI_WAIFU#2844: Also even though we can't really determine if the model has "internal symbolic representations" we can be effectively certain that it's doing more than just using a lookup table.
Daj#7482: I recommend this paper to people still at the ~~early~~ "Gary Marcus" stage of philosophical development lol
CRG#8707: Relevant SSC: <https://slatestarcodex.com/2019/02/28/meaningful/>
Daj#7482: Since it has few prerequisites and Aaronson is a joy to read |
ersatz#0001: I never understood the whole Chinese room thing, it's obvious to me that the room understands Chinese even if the guy inside doesn't, just like I understand English even if some arbitrary part of my brain doesn't?
Daj#7482: This is so good
ersatz#0001: how?
AI_WAIFU#2844: The difference between a lookup table and a system that "understands" what it's looking at is compression. Lookup tables have really crappy compression, whereas models that accurately understand that data can effectively compress it.
If you squint a bit you'll notice that the training procedure for these massive LMs can assign a probability distribution to the data they train on, and that probability distribution can be used in conjunction with arithmetic coding to compress the data much more effectively than you would naively expect with just a lookup table.
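A rough sketch of that argument in code: under arithmetic coding, a token the model assigns probability p costs about -log2(p) bits, so the total code length is just the model's cross-entropy on the data, in bits.
```
import math

def code_length_bits(token_probs):
    # token_probs: the model's probability for each successive token in the data
    return sum(-math.log2(p) for p in token_probs)
```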
CRG#8707: The whole thing about larger models having smaller intrinsic dimension than small models <https://arxiv.org/abs/2012.13255>
jbustter#5167: this might be a dumb question, but is there an RNN component in the gpt models?
EricHallahan#1051: Nope, Transformers go brrr.
jbustter#5167: so how do you sample different sentences?
jbustter#5167: i kind of assumed you use some sort of probability distribution for each token
CRG#8707: The last token "transforms" into a probability distribution over the next token.
jbustter#5167: so if you want to generate a sentence you need to look at the probability distribution of the last word you generated?
jbustter#5167: so isn't it kind of like an rnn, if you need to look back at the previous state to get the next one?
Fessus#9563: Only during inference
CRG#8707: Not the previous state, the previous token id.
CRG#8707: Information doesn't flow downwards in transformers (which is a kind of limitation, since you need to recompute the context at every step)
jbustter#5167: 👍 thanks
alstroemeria313#1694: yeah to sample you feed in the prompt, get the output probability distribution, look only at the output distribution for the last item in the sequence, and sample from that distribution. then you concat the token you actually sampled to your input sequence and feed it in again. you do this one token at a time.
alstroemeria313#1694: you can cache some stuff so you don't have to recompute all the previous tokens' hidden states |
alstroemeria313#1694: which i think makes sampling quadratic in sequence length instead of cubic?
CRG#8707: Should be linear in sequence length IIRC
alstroemeria313#1694: not with attention being quadratic...
CRG#8707: Although, yeah if you're counting every token yes
CRG#8707: For a single step of generation, it'd be a single q with all the keys
alstroemeria313#1694: yes, i mean to get the whole sequence
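A sketch of that loop, assuming a hypothetical `model` that returns per-position logits plus an updated key/value cache (so each step only pays for one token of attention):
```
import torch

def sample(model, prompt_ids, n_new):
    ids = list(prompt_ids)
    logits, cache = model(torch.tensor(ids), cache=None)  # process prompt once
    for _ in range(n_new):
        probs = torch.softmax(logits[-1], dim=-1)  # distribution for the next token
        next_id = torch.multinomial(probs, 1).item()
        ids.append(next_id)
        # Only the new token is fed in; previous states come from the cache.
        logits, cache = model(torch.tensor([next_id]), cache=cache)
    return ids
```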
alstroemeria313#1694: btw any tips for training larger models?
alstroemeria313#1694: like i am not sure what i can do... like train in fp16?
Sid#2121: fp16 + dynamic loss scaling works pretty well, or use amp
Sid#2121: even better: use bf16 if you can
alstroemeria313#1694: @Sid can an A100 do bf16
alstroemeria313#1694: or for that matter an A6000
alstroemeria313#1694: tbh the last time i tried bf16 in pytorch a lot of ops were missing
Sid#2121: A100 can, idk about A6000
alstroemeria313#1694: ah
Sid#2121: there are a few ops missing, but for language modelling you should be fine? depends what you're doing
Louis#0144: But 6000 > 100
Louis#0144: wtf
alstroemeria313#1694: oh, it'll probably work then
alstroemeria313#1694: vqgan i know won't work but i can make my vqgan tokens in float32 separately |
Sid#2121: there's a branch of neox with bf16 support. if you're doing distributed training though, you need to cast to fp32 for the distributed ops, which really sucks
Sid#2121: basically the only reason we're not using it in neox already :blobsad:
alstroemeria313#1694: oh, i'm on a single box still rn
Spy#9778: still working on it, it has a lot more moving parts than I'd expected x.x
Louis#0144: meant to post in NAI
Louis#0144: my bad
Louis#0144: LMAO
bmk#1476: what did he post
Deleted User#0000: Hello everyone! 👋
EricHallahan#1051: Welcome!
Deleted User#0000: How is everyone doing tonight?
Louis#0144: smut.
Louis#0144: jk
Louis#0144: just a reddit meme
TruGerman#6672: Just frying my brain trying to make sense of ML, the usual
bmk#1476: nice pfp and username
bmk#1476: do you speak German
TruGerman#6672: Unfortunately
bmk#1476: unfortunately!? be a bit prouder
bmk#1476: you should join the Dumpfige Deutschmeme Rotunde
bmk#1476: we have every kind of German meme
bmk#1476: https://discord.gg/QR7xCSzy
TruGerman#6672: That'll be difficult, I'm already on a whole lot of servers
bmk#1476: why does nobody want to join the DDR
TruGerman#6672: A very good question
Cade Gordon#3029: Do you have to be fluent to join the DDR?
bmk#1476: no
bmk#1476: German speakers of all levels are welcome
Spy#9778: ah just kill me
Spy#9778: I wrote all this stuff assuming I'd be able to vmap it later
Spy#9778: but it turns out I need to use batchnorm so I need explicit batch dimensions in my code
Spy#9778: x.x
𓅬 gabriel_syme 𓅬#3220: drop bnorm? :guilty:
𓅬 gabriel_syme 𓅬#3220: (jk no idea what you're doing)
kindiana#1016: write your own batchnorm with collectives lol
Spy#9778: vqgan discriminator
Spy#9778: I don't have good intuition about which stuff is necessary and which is arbitrary so I'm just trying to reimplement it exactly
Spy#9778: what's this?
kindiana#1016: are you using jax?
Spy#9778: yeah |
Spy#9778: haiku specifically
kindiana#1016: https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.pmean.html
kindiana#1016: you can write a batchnorm with these
Spy#9778: oh I'm just using one device
Spy#9778: hmm you know I'm not sure it's super easy to
kindiana#1016: you can pmean across a vmapped dim
Spy#9778: ah okay
Spy#9778: can you do it inside of the computation that you vmap?
kindiana#1016: yeah
Spy#9778: oh that's really interesting
Spy#9778: I wouldn't have gotten that from this doc
Spy#9778: thanks
Spy#9778: @kindiana any idea if it'll be much of a performance hit over just explicitly batching?
kindiana#1016: pretty sure it compiles down to the same thing
Spy#9778: cool!
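A minimal sketch of that trick (this `batchnorm` is a bare-bones stand-in, not Haiku's): name the vmapped axis, then `pmean` over it to get batch statistics.
```
import jax
import jax.numpy as jnp

def batchnorm(x, eps=1e-5):
    # x is a single example here; the batch axis only exists via vmap.
    mean = jax.lax.pmean(x, axis_name="batch")
    var = jax.lax.pmean((x - mean) ** 2, axis_name="batch")
    return (x - mean) / jnp.sqrt(var + eps)

batched = jax.vmap(batchnorm, axis_name="batch")
out = batched(jnp.ones((8, 16)))  # normalizes across the batch of 8
```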
Spy#9778: hmm this was
Spy#9778: not as easy as I'd thought
Spy#9778: since I don't want to vmap during the initialization step
Spy#9778: so I guess I need to detect haiku initialization and do something different with the state
Spy#9778: alright I think it's actually not super clear how to do this |
Spy#9778: since if you do an hk.set_state inside a vmapped computation it'll do it once for each axis and then you end up with a bunch of copies
Spy#9778: and you can clean up your state after the fact but it's kinda a pain in the butt
chilli#5665: Is it then not gonna work if you don’t vmap it?
kindiana#1016: yeah
kindiana#1016: which is the problem spy found I guess
Spy#9778: well
Spy#9778: it's not _just_ that it doesn't work if you don't vmap it
Spy#9778: I wouldn't mind just sticking a size 1 dimension on everything
Spy#9778: it's that haiku's set_state behaves weirdly if you do it inside a vmapped function
nev#4905: are there scaling laws for TTS?
nev#4905: would a 1B Tacotron trained on audiobooks perfectly mimic human intonation?
gammascalpset#9792: https://twitter.com/tacchinotacchi/status/1404345915662651394?s=19 checkmate Yann lecun
gammascalpset#9792: Seriously though, how could someone who contributed to the field so much have thought so superficially about this issue?
nev#4905: well it depends on whether you take just the human brain or brain + computer as a system
nev#4905: but really, the system is more like society and its knowledge + multiple brains + computer
nev#4905: the lines are blurry
gammascalpset#9792: the most useful way to define the system is the one that is more transformative
gammascalpset#9792: if you go around telling people that a super-human programmer AI is not an AGI taken by itself they might think it's not dangerous
gammascalpset#9792: but a programmer AI (eg. humans + computers) can write narrow AIs to perform arbitrary tasks so they are 1) an AGI and 2) potentially dangerous
gammascalpset#9792: if you claimed a human programmer with a lot of resources and evil intentions isn't dangerous because they can't run computer programs without a computer you'd be an idiot |
gammascalpset#9792: Therefore, Yann Lecun is an idiot. Q.E.D. /s
Deleted User#0000: I love AI
Deleted User#0000: :hap:
Kia#2550: We all do :chonk:
Deleted User#0000: now we need to beat china's 1.7 trillion parameter engine!
Deleted User#0000: Wu dao!
Kia#2550: Um
sonovice#5084: hi there! has anyone used rotary embeddings with multiple dimensions? i have read the blog article on eleuther.ai but am still confused about how this translates into code. any advice would be highly appreciated
CRG#8707: https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/rvt.py#L18
sonovice#5084: ah well, that was lightning fast. thank you!
alstroemeria313#1694: so you have to apply it at every layer?
sonovice#5084: huh, so every added dimension decreases the room that each dimension occupies in the embedding space? i feel i have missed something...
sonovice#5084: in my understanding, the sinusoidal embeddings must be calculated initially then passed to each attention block as seen in https://github.com/lucidrains/vit-pytorch/blob/60ad4e266ecf1df52968934f571dfe6acd736035/vit_pytorch/rvt.py#L169
alstroemeria313#1694: @sonovice you can use them causally right? even with 2D ones?
sonovice#5084: i guess, that's what the code suggests. though i have read about RoPE like half an hour ago for the first time, so don't smack me if i'm wrong
alstroemeria313#1694: mm.
alstroemeria313#1694: oh
alstroemeria313#1694: i have tokens in mine that aren't part of the 2D grid
alstroemeria313#1694: two of them
alstroemeria313#1694: followed by a 2D grid |
alstroemeria313#1694: (right now i am just using a gpt-2 type positional embedding for the whole sequence)
alstroemeria313#1694: also the 2D grid does not contain the lower right element
alstroemeria313#1694: it seems to be able to generalize things like "the area around 24 tokens back is highly statistically relevant to the current token" anyway?
alstroemeria313#1694: (it's a 24x24 grid)
sonovice#5084: @alstroemeria313 sorry, this is beyond my knowledge at this moment :/
CRG#8707: Those tokens probably don't need rope embeddings. (since the relative position isn't relevant)
alstroemeria313#1694: oh, what do i do for them instead
CRG#8707: Adding a learned embedding to them at the beginning (or at every layer?) should be enough.
alstroemeria313#1694: ah
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/853973444655972362/d502c42890e8ebf085f52f066905edec.png
alstroemeria313#1694: what is that?
CRG#8707: The TUPE paper https://cdn.discordapp.com/attachments/729741769738158194/853973653305163796/819f5982f0067e1cdde6139198d81f81.png
CRG#8707: They "untied" the position of the CLS token to make it work better with relative attn <https://arxiv.org/abs/2006.15595>
alstroemeria313#1694: oh
alstroemeria313#1694: what does CLS do again :/
CRG#8707: It's basically meant as a "whole sentence / sequence" representation
alstroemeria313#1694: oh
alstroemeria313#1694: and it goes at the end?
CRG#8707: Yeah, beginning or end in bert
CRG#8707: And the relative position at the beginning isn't relevant |
CRG#8707: So it's better to "untie" the positional encoding
alstroemeria313#1694: ...
alstroemeria313#1694: and this is for even gpt-2 type positional stuff?
CRG#8707: For gpt-2 positional embeddings this is probably irrelevant
alstroemeria313#1694: oh ok
CRG#8707: But for relative / rope, it probably helps
alstroemeria313#1694: my input sequences rn are <CLIP text embed> <CLIP score> <sequence of VQGAN tokens>
alstroemeria313#1694: and i discard the logits output for the text embed, everything else outputs logits for the next VQGAN token
CRG#8707: So, if you wanted to use 2d rope for the vqgan tokens, the relative position of clip embed / clip score at the upper left corner is not really relevant.
alstroemeria313#1694: the VQGAN tokens go left to right, top to bottom, like image gpt
alstroemeria313#1694: yeah.
alstroemeria313#1694: they explicitly apply to the whole image.
cfoster0#4356: As the code shows, you can do RoPE along N dimensions by designating N different sections of your embeddings and applying regular RoPE to those sections
cfoster0#4356: The other option is to apply it axially, so like one layer you apply the height dimension version and one layer you apply the width dimension version
alstroemeria313#1694: ah
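As a sketch of the first option, assuming an ordinary 1D `rope_1d(x, pos)` helper (hypothetical name): split the channels into one section per axis and apply regular RoPE to each section with that axis's coordinate.
```
import torch

def rope_2d(x, y_pos, x_pos):
    # x: (..., dim). First half of the channels gets 1D RoPE with the row
    # coordinate, second half with the column coordinate.
    d = x.shape[-1] // 2
    return torch.cat([rope_1d(x[..., :d], y_pos),
                      rope_1d(x[..., d:], x_pos)], dim=-1)
```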
EricHallahan#1051: *insert toroidal transformer joke here*
𓅬 gabriel_syme 𓅬#3220: > Additionally, an embedding for each timestep is learned and added to each token – note this is different than the standard positional embedding used by transformers, as one timestep corresponds to three tokens.
𓅬 gabriel_syme 𓅬#3220: from the Decision Transformer paper
𓅬 gabriel_syme 𓅬#3220: although this does imply a sequence of N triplets I guess, not just one set
alstroemeria313#1694: yes |
𓅬 gabriel_syme 𓅬#3220: it's interesting they say "we did not find predicting the states to be helpful"
alstroemeria313#1694: huh
𓅬 gabriel_syme 𓅬#3220: are we predicting states or actions in our case? feels like I'd want to predict a state (i.e. generate something with that reward? idk, I'm confused
alstroemeria313#1694: actions
𓅬 gabriel_syme 𓅬#3220: or is 'generate' the action
alstroemeria313#1694: in my case there are n_toks possible actions, where n_toks is the number of VQGAN tokens in the model
𓅬 gabriel_syme 𓅬#3220: ah okay, yeah maybe I was thinking at the wrong (higher) level
𓅬 gabriel_syme 𓅬#3220: so yeah they have one position embedding for the triplet and just add it to their action, state, reward embeddings
alstroemeria313#1694: makes sense
alstroemeria313#1694: you could just repeat each element of a rotary embedding 3 times probably
𓅬 gabriel_syme 𓅬#3220: I guess, I wish I can find the place where all this just makes sense
𓅬 gabriel_syme 𓅬#3220: like why adding the same thing is good? there's an obvious reason but I'm not there yet 😄
alstroemeria313#1694: well, their action, state, and reward projections are distinct
𓅬 gabriel_syme 𓅬#3220: yeah
alstroemeria313#1694: so it can still tell which is which
alstroemeria313#1694: i guess it just makes it easier on the transformer arch if you use the same pos emb for all three? it would probably be able to do it anyway if you used different ones
𓅬 gabriel_syme 𓅬#3220: yeah I guess so, makes sense
alstroemeria313#1694: hm, this paper https://arxiv.org/abs/1905.04226 claims you don't actually need positional embeddings at all if your arch is deep enough and your training is autoregressive
EricHallahan#1051: It is really weird but you *can* get away with it.
alstroemeria313#1694: there is an implicit positional signal anyway? |
EricHallahan#1051: Autoregressive transformers implicitly encode position.
alstroemeria313#1694: kind of like how in convnets the zero padding ends up introducing implicit positional signals?
CRG#8707: See none here: https://twitter.com/BlancheMinerva/status/1394089508723900422?s=19
EricHallahan#1051: You got it.
EricHallahan#1051: The incredible retreiver.
𓅬 gabriel_syme 𓅬#3220: can we give the title already or?
alstroemeria313#1694: wow, none is better than sinusoidal?
alstroemeria313#1694: lol
EricHallahan#1051: I think we were probably considering none to be the front runner before RoPE came along.
alstroemeria313#1694: i'm using sinusoidal
StellaAthena#3530: Everything is better than Sinusoidal
alstroemeria313#1694: yeah i might switch if i can code it up soon enough
EricHallahan#1051: Sinusoidal is really suboptimal.
CRG#8707: It's a bit strange, attention is all you need found it to be the same as learned. Maybe it catches up eventually?
alstroemeria313#1694: i guess it would have to be a learned / 2D RoPE hybrid
𓅬 gabriel_syme 𓅬#3220: what's the benefits of no positional encoding? is it faster or simpler?
Sphinx#2092: I wonder how any of this actually translates to real downstream task performance.
𓅬 gabriel_syme 𓅬#3220: also that, yeh
EricHallahan#1051: Faster, simpler.
Sphinx#2092: Sure, we can meme about none doing better, but I'd like to see samples. |
𓅬 gabriel_syme 𓅬#3220: should have been a comma heh
alstroemeria313#1694: ...actually if i want to improve my current model, i can just copy my sinusoidal buffer into a learnable parameter and keep training i think?
Sphinx#2092: Attention is all you need did it for MT. I suspect in that setting, using none will not look so pretty.
alstroemeria313#1694: (and discard optimizer state because blah, don't want to do surgery on the optimizer state too)
alstroemeria313#1694: WD on positional encoding or not?
StellaAthena#3530: That’s something we are actively working on @Sphinx
alstroemeria313#1694: looks like no weight decay on it in mingpt
StellaAthena#3530: Me, Hypnopump, triggerhappyghandi, and a couple others are writing a paper where we do a deep and systematic dive into positional embeddings. We are doing comparisons across a whole slate of models, problem contexts, and modalities
StellaAthena#3530: So far all the results look like rotary embeddings are the best or tied for the best.
Sphinx#2092: Are all the results solely on perplexity and not involving any generation?
StellaAthena#3530: No
StellaAthena#3530: We ran the entire suite of eval harness tasks and real-world applications in bio ML
Sphinx#2092: What are the generative tasks in the eval harness?
CRG#8707: Does low ppl, bad generations really happen? :thonk: Or is it a specific concern here?
StellaAthena#3530: If you lay out what kinds of MT experiments together would want to see fo be convinced, I will be happy to include them. Or even better, you’re welcome to join the paper and do the experiments yourself 🙂
Sphinx#2092: It doesn't have to be MT, I'm just curious as to whether these gains in perplexity translate into gains in downstream task metrics.
Sphinx#2092: Of course. For example, it's not too surprising that a model without any positional encoding could still give accurate perplexities to valid sentences. Though I would imagine it would also give good perplexity to any permutation of the sentence, which makes me suspicious that it would generate well.
Sphinx#2092: I could totally be wrong though, which is why I was curious about the performance on downstream tasks and in particular, things involving generation.
StellaAthena#3530: They do, though weaker than the ppl numbers imply. This is an ongoing evaluation though and I can let you know when we have more data
Sphinx#2092: Sounds good, looking forward to hearing about the findings. |
sonovice#5084: Thanks, so at least I got the basic idea right. The code uses rope to basically enumerate tiles on the x and y axis. Is rope also a viable option if I would like to, for example, add r, g and b color information to a token?
EricHallahan#1051: If you were dead set on doing that then I would think that Fourier features would be closer to what you are looking for.
𓅬 gabriel_syme 𓅬#3220: I never followed up if RoPE was impactful in the visual domain, but if you think DALLE experiments for that would be useful I'd be happy to try and provide
alstroemeria313#1694: I wanna know :)
𓅬 gabriel_syme 𓅬#3220: then I'll try and run those anyways if I can plug it in to DALLE myself
alstroemeria313#1694: :)
alstroemeria313#1694: I think you need two different ones, a 1D for the text and a 2D for the image
𓅬 gabriel_syme 𓅬#3220: yeah I think for text it's a learned one right now
𓅬 gabriel_syme 𓅬#3220: have to check
𓅬 gabriel_syme 𓅬#3220: for image it's axial
𓅬 gabriel_syme 𓅬#3220: https://github.com/lucidrains/DALLE-pytorch/blob/a51985af3281966ce26a01b191f8643f7c6ceae2/dalle_pytorch/dalle_pytorch.py#L340
alstroemeria313#1694: Oh
alstroemeria313#1694: I’ll look at what axial is when I’m not on my phone
alstroemeria313#1694: what https://github.com/pytorch/pytorch/issues/36035
Deleted User#0000: Anybody can help me in learning Machine Learning (I'm a beginner)?
Provide me some resources
bmk#1476: !faq
Carl-bot#1536:
alstroemeria313#1694: How is this still not fixed!!!
alstroemeria313#1694: ...what is axial actually |
EricHallahan#1051: https://arxiv.org/abs/1912.12180
Louis#0144: Axial attention is so promising tbh
alstroemeria313#1694: i mean the positional embedding type
Louis#0144: Axial + rotary wen
cfoster0#4356: As Eric said, since for rgb color you probably don't want to enforce relative invariance like you would with position, those should probably be fourier features https://bmild.github.io/fourfeat/
sonovice#5084: @cfoster0 Thanks. I have obviously picked a particularly bad example. 😕 My use case is probably something more like bounding box coordinates and their width and height, that I would like to add to my tokens.
alstroemeria313#1694: i should see if i can make an axial attention variant on my VQGAN token transformer
alstroemeria313#1694: i don't get it yet though
pragmaticml#1730: Document understanding problem?
sonovice#5084: Yes, I was thinking something along the lines of Microsoft's LayoutLM but with RoPE
Deleted User#0000: yea it's doable, you'll have to follow what was done https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/rvt.py#L18 (for the coordinates, not the height and width)
sonovice#5084: @Deleted User de23c58c Thanks, but isn't your code more like "enumerating" tiles with the RoPEs along the x and y axis? I would need to have arbitrary coordinate values
StellaAthena#3530: @sonovice can you elaborate?
EricHallahan#1051: You would probably want Fourier features for that too, assuming you are operating on the set of four corners.
sonovice#5084: @StellaAthena Maybe I'm getting the code wrong, but I have the impression that it "simply" adds positional encodings to each token in somewhat "ascending order". How would I go about not using such an order and just adding the image position information to each token along with a height and width?
sonovice#5084: @EricHallahan Yes, you mentioned FF before. I will definitely look into it, thank you. Nevertheless, I would like to understand RoPE a little bit better, even if I won't use it. Curious nature 😉
StellaAthena#3530: I believe rope does what you want it to do
Teemochu#8740: What about "learned, initialized to rotary, frozen for the first X steps"?
StellaAthena#3530: Rotary doesn’t have a parameter to learn. How would you like to insert a parameter to make that happen?
cfoster0#4356: The thetas? |
Teemochu#8740: OK I should probably read the paper if rotary isn't equivalent to some frozen set of params (but, again, if it is then I wonder if unfreezing late in training would give some final gains)
Spy#9778: any vqgan experts I could ask a question?
alstroemeria313#1694: Mew?
Spy#9778: Hi
Spy#9778: The choice of using the ratio of the norms of the gradients of the reconstruction loss and discriminator loss w.r.t. the last layer's conv kernel seems super arbitrary to me
Spy#9778: and I was wondering if there was a principled reason for it
Spy#9778: it'd simplify my code if I could just use the gradients with respect to the generator output for example
alstroemeria313#1694: It’s some magic way of determining the adversarial loss’s weight
Spy#9778: yeah I think I get the general idea
Spy#9778: like if the gradient of the discriminator loss w.r.t. to the last conv kernel is real small
Spy#9778: it'll get upweighted so that it still makes updates based on it
Spy#9778: the main thing I'm trying to figure out is how important it is that it specifically be with respect to one of the network weights instead of just the network outputs
alstroemeria313#1694: Well the weights are the things that get updated
Spy#9778: yeah
Spy#9778: but
alstroemeria313#1694: Idk
Spy#9778: based on this beautiful paint image I've just drawn: https://cdn.discordapp.com/attachments/729741769738158194/854092368231923772/unknown.png
Spy#9778: the computation from the weights to the reconstruction is shared between both updates
Spy#9778: so I figure it should have a similar effect on their norms
alstroemeria313#1694: Mm |
alstroemeria313#1694: Yeah, could be
Spy#9778: since I'm an NLP-er mostly I don't have any intuition about which decisions are important here and which are arbitrary
alstroemeria313#1694: Stuff like D gradient penalties apply to the input of D
alstroemeria313#1694: I’ve never seen a VQGAN type weight determination anywhere else actually
Spy#9778: hmm well I guess I'll stick with the harder to do one right now
Spy#9778: and then I can try the simpler one after and see how it affects performance
nev#4905: how high of a batch size do you need to use for the gradients to "converge"?
nev#4905: i.e. for them to be similar to full batch training
nev#4905: and at which point do you hit vanishing returns?
Spy#9778: intuitively I'd expect it to be super high
Spy#9778: like for NLP you need to see every single word in the vocabulary at least once for the gradient w.r.t. the embedding table to possibly be correct
CRG#8707: <https://openai.com/blog/science-of-ai/>
sahilmn#8752: Hey guys! Discovered this group via jack clark's importAI newsletter. I'm in awe of this community and what you've been able to achieve as a distributed group of AI/ML enthusiasts. My introduction: I'm an early career SWE, with an academic background in ML. Looking to dive back back into research. Currently, I'm working on an implementation of Progressive GANs with an eye to generalize it and make it easier to progressively train other networks. My timezone is PT and pronouns are he/him
EricHallahan#1051: Welcome!
Teto#0001: Anyone know
bmk#1476: unfortunately not
Teto#0001: Where i can rent out a vm
Teto#0001: With t4
bmk#1476: there is no more knowledge in the world
Teto#0001: Bruh |
alstroemeria313#1694: aws
bmk#1476: gcp
Teemochu#8740: All of the knowledge has been converted to Lamborghinis
alstroemeria313#1694: i tried and they were out actually
alstroemeria313#1694: i had to go to aws
Teto#0001: I was going for google cloud compute but i keep getting the Resources required is not available in this region error
Teto#0001: What are the prices on aws
EricHallahan#1051: That seems suboptimal.
alstroemeria313#1694: yes this happened to me too, i went to aws and used g4dn.xlarge instances
alstroemeria313#1694: 52.6 cents an hour in us-west-2
Teto#0001: Ty
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/854106759693467668/Screen_Shot_2021-06-14_at_2.15.17_PM.png
alstroemeria313#1694: @Teto these all have 1 gpu except the 12xlarge which has 4 and the metal which has 8
alstroemeria313#1694: so i used the smallest one and got like 8 to 10 of them at once
alstroemeria313#1694: they read requests from a rabbitmq queue and uploaded the results to an s3 bucket
Louis#0144: https://twitter.com/mitchellagordon/status/1404546218475364365?s=21
Louis#0144: Latitude wants to train their own DALL-E model?
Louis#0144: Mitchell should probably delete that tweet
cfoster0#4356: :wat:
cfoster0#4356: Are they offering... CoreWeave's compute... to us? |
Louis#0144: Yes
Louis#0144: LMAO
Louis#0144: cc @bmk
Louis#0144: You’re gonna die laughing
bmk#1476: ..what?
bmk#1476: wat
bmk#1476: but
alstroemeria313#1694: ...What
alstroemeria313#1694: How do you even use a DALL-E like arch with guided diffusion
alstroemeria313#1694: (CLIP works perfectly well though, I got it working)
Louis#0144: Probably some weird EM algorithm
cfoster0#4356: I don't recall exactly what Aran's design is, but I thought it was basically guided diffusion
bmk#1476: wait is aran's thing an Eleuther project or a project by the pytorch-dalle server
Louis#0144: I think it’s us
Louis#0144: He talks about it in #multimodal
Teto#0001: Does aws have free credits or something similar
Louis#0144: A lot
alstroemeria313#1694: idk, maybe a little bit now
bmk#1476: yeah but Aran mostly just does his own thing usually
Louis#0144: Oh maybe |
Teemochu#8740: This may be the strongest Baader-Meinhof effect I've ever had... I literally just finished reading this image from a brand new post on /vg/ right before checking this channel https://cdn.discordapp.com/attachments/729741769738158194/854110455050797096/1623705964927.png
nev#4905: hmmm
nev#4905: no info on transformers
nev#4905: but I guess it's 100k then
Spy#9778: since I'm only using batch norm for a discriminator and don't really care about the running mean/variance it does seem I can use the pmean version!
Spy#9778: so thanks
Kia#2550: What...
Kia#2550: Coreweave's compute?
Kia#2550: :mittwoch:
Kia#2550: I was scrolling to tweeter and stumbled to Aran tweets and, saw this
Kia#2550: Wow
alstroemeria313#1694: Considering trying to make a StyleLDMGAN or StyleVAE
alstroemeria313#1694: We are trying to train a StyleGAN and it is dropping modes.
Kia#2550: So Different Style___ Models?
Kia#2550: Seems lovely
Josh Saga#1335: Hello guys
Josh Saga#1335: I'm new
Josh Saga#1335: So is Eleuther also focusing on content generation? or using the AI for another kind of project?
Sid#2121: !faq
Carl-bot#1536: |
triggerhappygandi#0001: @kindiana Can I ask hf tech support question lol :berk:
triggerhappygandi#0001: I have the .pt files but how do I load them? If it was only one I could've just `model = GPTNeo.from_pretrained('model.pt')`'d it
bmk#1476: are you using the HF-converted one
bmk#1476: that one should just be one file
kindiana#1016: yeah idk anything about the HF 6b support
zphang#7252: tho from_pretrained can also take a folder as argument, so there's that
bmk#1476: i dont think it can load the checkpoints as is from the eye tho
kindiana#1016: yeah you need some sort of conversion script
kindiana#1016: there's one floating around on colab somewhere
kindiana#1016: also its part of the hf pr I believe
triggerhappygandi#0001: I just downloaded the model from gdrive
triggerhappygandi#0001: ahh
triggerhappygandi#0001: that should work
Kharr#7888: If this is for the 6B model, check the discussion in the PR. There's a special way to load it
nostalgebraist#3542: i haven't used the gdrive one, but it sounds like it's meant to be loaded with this: https://github.com/finetuneanon/transformers#checkpoint-splitting
Louis#0144: !faq
Louis#0144: Jkjk
triggerhappygandi#0001: Aeaeaeaeaeaeaeaeaeaea
triggerhappygandi#0001: :goosegirl:
bmk#1476: !goose |
haze#0420: it's cool to know that bibliotik was a source for that dataset that was used
Deleted User#0000: Hello everyone!
Some of you may know about my latest project, where I will train a PET model on the pile with the intention of defeating GPT-3.
I spoke with Timo, the researcher who made PET, and he says online learning isn't possible with PET so I'll need 1.2TB of RAM to train my model. He said that for chunk-by-chunk training the whole codebase would have to be modified and that PET was designed for fine-tuning with only a few samples.
😦
However, after days of hard work, I have FINALLY done it.
I read all the papers about PET and saw the articles and experimented a lot on my own, and I've finally written my own implementation of PET, which is 3x faster than Timo's and supports training PET on data that is too large to fit into RAM.
🙂
I just wanted everybody to know that I'll be releasing my model, GPT-# (pronounced as gpt sharp) in about a week or so.
Thanks for all the guidance from:
-> @cfoster0
-> @bmk
-> @EricHallahan
And encouragement/help from:
-> @𓅬 gabriel_syme 𓅬
-> @kindiana |
-> @AI_WAIFU
And most of all, the holy goose.
Wait for GPT-#(gpt sharp) y'all.
Peace out.
Kia#2550: Goodluck :O
Deleted User#0000: thanks
Deleted User#0000: @bmk what's there to think about
**what's there to 'thonk' about?
Kia#2550: Sure thing
Deleted User#0000: gr8 thx
kurumuz#5695: :thonk:
Kia#2550: Ow no kuru is gonna look at this:thonk:
GrimSqueaker#8837: PET = ?
mr_seeker#1337: Something like GPT but with fewer parameters required. Bit of a "David vs Goliath"
GrimSqueaker#8837: Linky? 🙂
GrimSqueaker#8837: (PET will just get me pet scans and doggo pictures)
mr_seeker#1337: Even better, the paper itself? https://cdn.discordapp.com/attachments/729741769738158194/854275072382861312/2009.07118.pdf
kurumuz#5695: iirc PET wasn't generative |
kurumuz#5695: so not sure how it's a "Generative Pretrained Transformer"
Deleted User#0000: sorry, I meant iPET
Deleted User#0000: I know
but I *made* it generative
Deleted User#0000: I rewrote the entire iPET codebase
kurumuz#5695: oh
kurumuz#5695: interesting
Maark#6960: anyone know of a discord ai bot that can summarize the unread conversations that I've missed? there's so much activity across just the few AI discord servers I'm in it's really hard to keep up!
𓅬 gabriel_syme 𓅬#3220: nope, but it would be a nice thing to do eventually
triggerhappygandi#0001: it is eleuther project for now, but he did mention getting some compute (in future) from dalle-pytorch
triggerhappygandi#0001: But he has been mentioning the diffusion project for a while, whenever scaling is mentioned :berk:
ersatz#0001: would it be possible to train a GPT-2 size model on a single latest generation GPU?
triggerhappygandi#0001: would take like a year
ersatz#0001: something like 8k hours?
ersatz#0001: that doesn't seem right
pragmaticml#1730: Are you primarily keeping the "i" part of iPET? I.e. the iterative student-teacher setup?
Furk#5259: How many tokens does "pile" have?
Sid#2121: It's somewhere around 300 billion, i believe? The exact value should be in the paper
Furk#5259: I checked the paper. It doesn't mention it. I had a quick look at the PDF and couldn't see either, so I asked here. Thanks.
StellaAthena#3530: The number itself isn’t in there, but the number of tokens per KB or something like that is |
Furk#5259: yeah ```Table 7: Tokens per byte for Pile components```
StellaAthena#3530: 0.29335 tokens / byte overall (3.1)
StellaAthena#3530: So approximately 376.26 billion
ym#0104: definitely looking forward to this!!!
Deleted User#0000: thx
Teto#0001: hey guys i am new to gpt, and i am wondering if we can train the gpt model more by giving our own set of data
Daj#7482: I'd recommend you read #rules as we are not a beginner community. Short answer is "yea, sure"
Deleted User#0000: I am.
It's iterative Pattern-Exploiting Training (iPET) BTW.
mega b#6696: is there a repository?
mega b#6696: or a place i could find more info?
StellaAthena#3530: We have the beginnings of a project organization system here that I am looking to revamp https://github.com/EleutherAI/project-menu/
I am currently thinking about how to best organize recruitment and communicate requirements. Currently, I have the following tags. https://cdn.discordapp.com/attachments/729741769738158194/854404472688345138/Screen_Shot_2021-06-15_at_12.56.00_PM.png
StellaAthena#3530: I feel like projects often need people with coding skills that aren't "ML devs." For example, I sketched out a research project idea in #prompting that mostly required installing a github repo, writing some scripts, and running some analysis code. Very little ML background is required, and I wouldn't want someone ruling themselves out because they look at GPT-NeoX and go "I can't write code like that"
Sahl#0630: It would also be helpful knowing which projects need software developer experience but don’t require ML experience
StellaAthena#3530: Oooo that's a good one
StellaAthena#3530: @Sahl Do you identify as a non-ML SWE?
StellaAthena#3530: Could you do me a favor? Take a look at this github repo and let me know if you think you could (possibly after learning a bit) figure out how to run the same experiments on a different model or modify it to allow a custom input prompt https://github.com/Eric-Wallace/universal-triggers
StellaAthena#3530: I'm trying to scope the difficulty of a task I would like some help with |
mr_seeker#1337: I can help if you like, working as technical app manager for university. Can take a look at it right now...
StellaAthena#3530: TY
mr_seeker#1337: I would say change this part to use the GPT-Neo model?
https://github.com/Eric-Wallace/universal-triggers/blob/ed657674862c965b31e0728d71765d0b6fe18f22/gpt2/sample_from_gpt2.py#L35
StellaAthena#3530: That is correct.
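A guess at what that swap looks like with the HF `transformers` API (the model choice here is just an example):
```
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
```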
Sahl#0630: Looks like many of the experiments load the model directly inside instead of taking it as a parameter
Sahl#0630: Would all the models you want to test have the same class?
&.#0001: I wonder how OpenAI's GPT-3 does semantic search. Their pricing page says:
> The 14 represents the additional tokens the API uses per document to accomplish the Semantic Search task,
+ (Number of documents + 1) * 14
+ (Number of documents + 1) * Number of tokens in your query
Which may suggest they may be using some sort of prompt
They also say using a **pre-indexed** set of files is faster. (And they charge less for it, though we don't know whether it's because it costs less for them, or because of a pricing feature)
How would Eleuther build semantic search?
&.#0001: Semantic search is a property of the model, it's orthogonal to finetuning.
StellaAthena#3530: They'd all be HF integrated
Sahl#0630: Does HF act as a common interface for a bunch of models?
&.#0001: Any ideas on how semantic search is implemented and how Eleuther might replicate it?
&.#0001: It computes a score of similarity between a query and a document, with some sort of pre-indexing ability
StellaAthena#3530: Yeah. I'm using HF (HuggingFace) as a shorthand for the `transformer` library they developed. It exists for exactly this purpose |
Sahl#0630: So what you would need is turn the experiments into a function taking a `transformer` instance instead of loading it within
&.#0001: Which is the query and which is the document seem to be distinct in OpenAI's semantic search, though I see some clear flaws in how their API works. For instance, if you swap the query and the document, the computed score is different
Sahl#0630: As long as you're only using stuff common to any such model
StellaAthena#3530: Honestly, I would be totally fine with you hand-coding the changes between runs. Extensibility is of limited use.
StellaAthena#3530: (Your idea is better than mine tho)
Sahl#0630: Oh I thought you were asking for a general solution
Sahl#0630: software dev moment
StellaAthena#3530: No, I'm just asking if this is the kind of thing you'd be comfortable doing.
Sahl#0630: What do you mean by custom input prompt btw?
mr_seeker#1337: @Sahl most likely reference to this: https://github.com/Eric-Wallace/universal-triggers/blob/ed657674862c965b31e0728d71765d0b6fe18f22/gpt2/create_adv_token.py#L64
mr_seeker#1337: replace target_texts with something less intrusive
StellaAthena#3530: Oh it seems target_text is already parameterized. Didn't realize that
Sahl#0630: seems like simplest thing is to turn things into functions
Sahl#0630: and then if you're running experiments on models you make your own script and just call them
Sahl#0630: I think I can do that but definitely not today
CRG#8707: <https://www.reddit.com/r/MachineLearning/comments/jf7td3/n_the_gpt3_api_has_a_semantic_search_endpoint/> https://cdn.discordapp.com/attachments/729741769738158194/854413286996967424/Screenshot_20210615-193256.png
ym#0104: @StellaAthena by custom input prompt do you have in mind the sorts of prompt templates at https://github.com/bigscience-workshop/promptsource ? they use jinja templates
&.#0001: Interesting. It sounds like it might be possible to create a semantic search method that performs better than OpenAI's, for instance by checking attention levels against a base document.
cfoster0#4356: My guess is they take your documents, run them through the model with some kind of prompt bracketed as "here is a document, followed by a query related to it [DOCUMENT] end of document" and store the kv embeddings. At inference time you can score the logprobs of the query given the document kvs pretty quickly
cfoster0#4356: Since your documents are likely much longer than your query |
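(A minimal sketch of the scheme cfoster0 describes, using a HF GPT-2 checkpoint as a stand-in. The bracketing prompt, the model choice, and the function names are assumptions for illustration, not OpenAI's actual setup.)
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def index_document(document):
    """One forward pass over the document; keep its kv cache for reuse."""
    ids = tok.encode(f"Here is a document:\n{document}\nQuery:", return_tensors="pt")
    with torch.no_grad():
        out = model(ids, use_cache=True)
    # keep the last position's logits so the first query token can be scored
    return out.past_key_values, out.logits[:, -1:]

def score_query(past, last_logits, query):
    """log p(query | document), reusing the cached document kvs."""
    q_ids = tok.encode(" " + query, return_tensors="pt")
    with torch.no_grad():
        out = model(q_ids, past_key_values=past)
    # logits at step i predict token i+1, so prepend the document's last logits
    logits = torch.cat([last_logits, out.logits[:, :-1]], dim=1)
    return logits.log_softmax(-1).gather(-1, q_ids[..., None]).sum().item()
```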
&.#0001: Interesting. I wonder what weaknesses this strategy may have, and if we can research something better...
Sahl#0630: It's weak in that you have to try a bunch of queries
Sahl#0630: oh wait no
Sahl#0630: I misread
&.#0001: I find it interesting to use model data- such as attention- in ways other than pure text generation
&.#0001: Thanks, I will make any progress I make on implementing semantic search for GPT-J open-source (If I get around to it...)
mr_seeker#1337: I always try the "convert this name to an emoji"
&.#0001: Oh, I meant to say, "other than having it predict text using a prompt"
EricHallahan#1051: AR models are underused in domains where generating text is not desired.
StellaAthena#3530: Yeah that would work
mr_seeker#1337: AI knows Spiderman is 🕷️ 👨
cfoster0#4356: It's actually pretty nice. With some clever masking & positional encoding you can batch the computation for a bunch of queries on the same documents all at once
cfoster0#4356: The downside is having to store all that memory (or so I assume they do)
EricHallahan#1051: What happens if you do a nonlinear jump in position with RoPE?
FerroMagnetic#6975: It'll turn all the historical fiction into one word: "aristocrats"
cfoster0#4356: How so?
cfoster0#4356: Just like 0 1 2 7 8 9?
EricHallahan#1051: Like instead of linearly increasing the position you create a jump instead. You can then do weird stuff like inserting two documents after inserting a prompt at the beginning.
```0 1 2 3 | 4 5 6 7 | 4 5 6 7```
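(Concretely, a sketch of the reused positions plus the block mask that keeps the two documents from seeing each other; whether a model tolerates this is exactly the masking question below.)
```python
import torch

prompt_len, doc_len = 4, 4
position_ids = torch.cat([
    torch.arange(prompt_len),                        # 0 1 2 3  shared prompt
    torch.arange(prompt_len, prompt_len + doc_len),  # 4 5 6 7  document A
    torch.arange(prompt_len, prompt_len + doc_len),  # 4 5 6 7  document B
])

# Block-causal mask: each document attends to the prompt and itself,
# never to the other document occupying the same positions.
T = prompt_len + 2 * doc_len
mask = torch.tril(torch.ones(T, T, dtype=torch.bool))
mask[prompt_len + doc_len:, prompt_len:prompt_len + doc_len] = False
```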
cfoster0#4356: Oh, yeah that's what I was talking about |
mega b#6696: COGVIEW!!! https://github.com/THUDM/CogView
cfoster0#4356: If you mask it it should be no different
cfoster0#4356: If you don't then I assume weird shit will happen as it confuses the two
EricHallahan#1051: What happens if you predict with both in the context? Seems like something that would totally destroy it without the masking.
EricHallahan#1051: Might have an interesting failure mode though.
EricHallahan#1051: Actually maybe not. :thonk:
Spy#9778: Anyone know if it's normal for the VQGAN codebook loss to increase at the beginning of training?
Spy#9778: I got this sorta curve and at some point the codebook collapsed I think https://cdn.discordapp.com/attachments/729741769738158194/854434845262217246/unknown.png
joaogui1#8461: (where is the notebook with the CLIP + VQGAN demo?)
alstroemeria313#1694: is it one of mine? then see the pins in #art
EricHallahan#1051: #art pins.
alstroemeria313#1694: idk, what do you mean collapsed
Spy#9778: all the outputs look identical
alstroemeria313#1694: oh
Spy#9778: reconstructions I mean, not samples even
alstroemeria313#1694: are you fine-tuning or training from scratch
Spy#9778: scratch
Spy#9778: I just got my implementation finished yesterday so it may be rife with bugs
alstroemeria313#1694: @𓅬 gabriel_syme 𓅬 have you ever seen this on yours?
alstroemeria313#1694: oh |
alstroemeria313#1694: your own impl
Spy#9778: yeah but it's a good idea to try someone else's to see what the behavior is like
Spy#9778: I assume the notebooks are generally from pretrained ones though?
alstroemeria313#1694: yes
alstroemeria313#1694: gabriel_syme has trained his own
alstroemeria313#1694: two different wikiart vqgans w/ different codebook sizes
alstroemeria313#1694: but with the compvis repo
Spy#9778: yeah we were talking a little the other day
alstroemeria313#1694: what is codebook loss again
Spy#9778: mean sq distance between encoding and quantized output
alstroemeria313#1694: ah right
joaogui1#8461: thanks!
alstroemeria313#1694: and you did the thing where you compute it separately for one side and then the other and use different weights?
alstroemeria313#1694: (idk how critical that is actually)
Spy#9778: wait hang on
Spy#9778: I have not heard of this thing, can you explain?
alstroemeria313#1694: this https://github.com/CompVis/taming-transformers/blob/1bbc027acb6a47e4eb348d611f9af53f1038ffee/taming/modules/vqvae/quantize.py#L69
Spy#9778: I did try to basically directly port the compvis repo to JAX, but I may have missed some stuff
Spy#9778: ohh the stop grad
Spy#9778: yeah I wasn't sure if it was important but I did do that yes |
alstroemeria313#1694: yeah the two stop gradients where the weight is different
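(For reference, a sketch of that two-sided loss. This follows the VQ-VAE paper's convention of putting beta on the commitment term; the taming code at the linked line may weight the sides differently, so check it rather than trusting this.)
```python
import torch
import torch.nn.functional as F

def vq_loss(z_e, z_q, beta=0.25):
    """z_e: encoder output; z_q: nearest codebook vectors, same shape."""
    codebook = F.mse_loss(z_q, z_e.detach())  # gradient moves the codes
    commit = F.mse_loss(z_e, z_q.detach())    # gradient pins the encoder
    return codebook + beta * commit
```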
alstroemeria313#1694: ah
alstroemeria313#1694: @Spy did you kick the discriminator in right away or wait
Spy#9778: Waited
alstroemeria313#1694: did it actually kick in
Spy#9778: You know maybe it collapsed when the discriminator turned on. I botched my logging so I'm not sure when it was running or what the loss was like
alstroemeria313#1694: i would think D would be anti-collapse but
alstroemeria313#1694: GANs can be weird
alstroemeria313#1694: esp if your adversarial loss weight is off/too big
Spy#9778: I was doing that weird adaptive weight thing. I'll add it to my plots and see what it looks like.
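(Roughly what taming-transformers' `calculate_adaptive_weight` does, sketched here: scale the adversarial loss by the ratio of gradient norms at the decoder's last layer so it can't swamp the reconstruction term.)
```python
import torch

def adaptive_weight(rec_loss, g_loss, last_layer_weight, max_w=1e4):
    """Balance GAN loss against reconstruction loss by gradient norms."""
    rec_grad = torch.autograd.grad(rec_loss, last_layer_weight, retain_graph=True)[0]
    g_grad = torch.autograd.grad(g_loss, last_layer_weight, retain_graph=True)[0]
    w = rec_grad.norm() / (g_grad.norm() + 1e-4)
    return torch.clamp(w, 0.0, max_w).detach()
```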
alstroemeria313#1694: so if the codebook collapsed how come the loss was so low
Spy#9778: Hmm yeah maybe it's just the encoder collapsing
Spy#9778: So then all the codes get shoved to the same spot
alstroemeria313#1694: did the encoder just start outputting only things that could get through the collapsed codebook and the codebook got no gradients to improve?
alstroemeria313#1694: encoder shouldn't collapse if there's no discriminator, right?
Spy#9778: No idea tbh
Spy#9778: I mean it could die if the learning rate is too high or something
alstroemeria313#1694: yeah
alstroemeria313#1694: i think really the #1 thing to know is "did D kick in and if so when"
alstroemeria313#1694: what did the reconstruction loss curve look like btw? |
Spy#9778: the reconstruction loss curve looked big sad
alstroemeria313#1694: oh :/
Spy#9778: https://cdn.discordapp.com/attachments/729741769738158194/854440431600140298/unknown.png
Spy#9778: this is 10 epochs
alstroemeria313#1694: well i mean yeah but were there... wait did it never go down
Spy#9778: but the reconstructions at the beginning did actually start looking like learning was happening
Spy#9778: the images are RGBA and I replaced transparent sections with magenta to use RGB
Spy#9778: and the reconstructions were like
Spy#9778: vaguely foreground shaped blobs with magenta backgrounds
Spy#9778: the samples became terrible within like 2k steps I think
alstroemeria313#1694: oh
alstroemeria313#1694: so probably not D
Spy#9778: I'm just gonna add some more plots and run again
alstroemeria313#1694: what was your lr
Spy#9778: 5e-6
alstroemeria313#1694: ah
Spy#9778: I just copied the imagenet settings
Spy#9778: other than the fact that I'm training at 128x128 so I'm doing one less downscale
alstroemeria313#1694: so you're training an f=8?
pragmaticml#1730: Take a look at the haystack library by deepset: http://GitHub.com/deepset-ai/haystack |
&.#0001: Thanks
Spy#9778: is f the resolution at the quantization layer?
alstroemeria313#1694: it's the downsampling factor
Spy#9778: then yeah
Spy#9778: ah JAX also has a bunch of different default inits than torch
Spy#9778: so who knows if that's messing stuff up
Spy#9778: alright before it runs away
Spy#9778: here is a reconstruction of thonk: https://cdn.discordapp.com/attachments/729741769738158194/854447941811830794/unknown.png
Spy#9778: so it's starting to do _something_ but I can't really see much change in the reconstruction loss
alstroemeria313#1694: @Spy it's a pity we don't have good four-channel perceptual losses
Spy#9778: there aren't that many 4 channel datasets right?
alstroemeria313#1694: not really
Spy#9778: guess it makes sense but yeah
alstroemeria313#1694: i was looking into training a 4 channel stylegan, looks like it's doable since it doesn't need a reconstruction loss
Spy#9778: I tried 4 channel stylegan for this dataset and it didn't work great
Spy#9778: the data's pretty shit though
alstroemeria313#1694: oh
alstroemeria313#1694: ...how much data is there
Spy#9778: there also may have been other issues
Spy#9778: eh it depends on how picky I'm being about cleanliness |
Spy#9778: I have like 10k high quality ones
Spy#9778: and > 100k kinda bad ones
alstroemeria313#1694: ...also should you use premultiplied alpha for 4-channel stylegan or not
Spy#9778: no idea
Spy#9778: this was also a while ago so all the details have completely departed my brain
alstroemeria313#1694: i would be kind of inclined to do premultiplied
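(Premultiplying is just scaling RGB by alpha so fully transparent pixels carry no stray color; a minimal sketch:)
```python
import torch

def premultiply(rgba):
    """rgba in [0, 1], channels-first, shape [..., 4, H, W]."""
    rgb, a = rgba[..., :3, :, :], rgba[..., 3:, :, :]
    return torch.cat([rgb * a, a], dim=-3)
```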
Spy#9778: it does seem like the discriminator was murdering the reconstructions in cold blood
Spy#9778: I set it to turn on significantly later
alstroemeria313#1694: ah
alstroemeria313#1694: mb lower its weight too
Spy#9778: I'll do that if it still kills it with the later start
alstroemeria313#1694: Hey, what's the exact nature of the connection between VAEs and LDMGANs?
Spy#9778: I split the loss out into L1 and perceptual, and it turns out it only *looked* like it wasn't improving because it was dominated by the L1 loss, which has a high value but low grad https://cdn.discordapp.com/attachments/729741769738158194/854464872485486602/unknown.png
alstroemeria313#1694: ah
Spy#9778: man I would not want to do this as my day job
Spy#9778: but it is a nice break from the perplexity mines
alstroemeria313#1694: ehehe
alstroemeria313#1694: Yeah I sell AI art NFTs and keep trying to improve my methods
Spy#9778: I'm trying to get this good enough to add to my discord bot
Spy#9778: (it's just a for fun bot not a public one) |
Spy#9778: what's the going rate like?
Spy#9778: I haven't really been following NFTs
alstroemeria313#1694: low lol
alstroemeria313#1694: once you get your VQGAN you can train an autoregressive transformer on VQGAN token sequences and then it's back to the perplexity mines :)
Spy#9778: true :p
Spy#9778: but at least I can be very confident I'll get that part right
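(That second stage is ordinary next-token training on the VQGAN ids; a sketch, where `vqgan.encode_to_ids` and `transformer` are hypothetical stand-ins for your own models.)
```python
import torch
import torch.nn.functional as F

def train_step(transformer, vqgan, images, opt):
    with torch.no_grad():
        tokens = vqgan.encode_to_ids(images)  # hypothetical: [B, H*W] int64 ids
    logits = transformer(tokens[:, :-1])      # predict each next token
    loss = F.cross_entropy(logits.reshape(-1, logits.shape[-1]),
                           tokens[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```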
alstroemeria313#1694: @Spy i'm training an autoregressive transformer on CLIP text embeds and VQGAN token sequences rn
alstroemeria313#1694: It... is still kinda bad
Spy#9778: goal being text to image I imagine?
alstroemeria313#1694: yes
Spy#9778: honestly openAI not releasing DALL-E is almost offensive to me
Spy#9778: the sheer amount of missed creative output :(
alstroemeria313#1694: yep
alstroemeria313#1694: the latest demo grid from mine https://cdn.discordapp.com/attachments/821173872111517696/854471983538241546/demo_00001_096000.png
alstroemeria313#1694: and the prompts https://pastebin.com/Xm9NrXRH
Spy#9778: oh it's definitely using the conditioning info at least!
Spy#9778: uh heh
alstroemeria313#1694: oh?
Spy#9778: I had this implemented but in my config I had the weight set to 1. -.-
alstroemeria313#1694: oh |
Spy#9778: should have been 0.25
Spy#9778: whooops
Mickey666Maus#3529: I was just invited to Google's Foobar Challenge... anyone here that can give me a hand? 😅
bmk#1476: I've never heard of that before
bmk#1476: what is it?
Mickey666Maus#3529: https://cdn.discordapp.com/attachments/729741769738158194/854485166021017620/20210616_001430.jpg
Mickey666Maus#3529: It's an invitation for some coding challenge!
bmk#1476: huh I've never gotten that before
bmk#1476: :withered:
Mickey666Maus#3529: If I make it to Level 4 I can invite you...
bmk#1476: nah it's fine
alstroemeria313#1694: ...Can an LDMGAN encoder sneak information through in the correlations between variables in the latent?
alstroemeria313#1694: I think mine may be
𓅬 gabriel_syme 𓅬#3220: I saw a lot of weird stuff with losses while training, and especially fine tuning, not sure it was smth like this though.
Spy#9778: oh I got an internship off that
Spy#9778: it's pretty fun
Spy#9778: similar to like codewars problems or w/e
Mickey666Maus#3529: Nice!!
Mickey666Maus#3529: I just signed up!!
Mickey666Maus#3529: Maybe you can give me a hand once I am stuck? 😅 |
Spy#9778: that seems kiiiinda questionable
Spy#9778: although there's still interviews and whatever after so sure I guess
StellaAthena#3530: I do not recommend soliciting assistance with an aptitude test for employment. I haven't looked at the Terms, but this is almost certainly considered cheating.
Mickey666Maus#3529: Ah OK, no problem...then I will try to figure it out myself!
Jonnathan#1234: Is there a time limit on accepting the Foobar invite? I just got one too, but I'd like to refresh for a day or two on leetcode problems before trying
Kharr#7888: You can _Google_ around for answers :brr:
alstroemeria313#1694: ...I don't have enough Ritalin in me rn to write a hierarchical VAE
alstroemeria313#1694: Sigh
Kharr#7888: What if you just re-used the same VAE repeatedly and sliced out your windows at different sizes + padded?
alstroemeria313#1694: idgi
alstroemeria313#1694: I mean this. https://cdn.discordapp.com/attachments/729741769738158194/854504089466437673/unknown.png
Kharr#7888: what does BNS stand for?
alstroemeria313#1694: batch norm swish conv, i think
Kharr#7888: Those are some very specific pieces. I'm assuming the inverted residual is some standard, like this? https://cdn.discordapp.com/attachments/729741769738158194/854505487361769482/unknown.png
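(If "BNS" really is BatchNorm -> Swish -> Conv, the standard MobileNet-style inverted residual would look roughly like this sketch; NVAE's actual blocks add depthwise convs and squeeze-excitation, and as noted just below, its code may not literally be residual.)
```python
import torch.nn as nn

def bns(c_in, c_out, k=3):
    """Guessing BNS = BatchNorm -> Swish -> Conv."""
    return nn.Sequential(
        nn.BatchNorm2d(c_in),
        nn.SiLU(),  # swish
        nn.Conv2d(c_in, c_out, k, padding=k // 2),
    )

class InvertedResidual(nn.Module):
    """Expand channels, process, project back down, add the skip."""
    def __init__(self, c, expand=6):
        super().__init__()
        h = c * expand
        self.net = nn.Sequential(bns(c, h, 1), bns(h, h, 3), bns(h, c, 1))

    def forward(self, x):
        return x + self.net(x)
```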
chilli#5665: yes it is
chilli#5665: google foobar was (is?) stupidly effective for google internships
alstroemeria313#1694: @Kharr idgi, i'm looking at the code and it's not actually residual?
alstroemeria313#1694: But.
alstroemeria313#1694: I mean the basic idea
alstroemeria313#1694: Not the exact arch |
alstroemeria313#1694: (It is this https://github.com/NVlabs/NVAE)
Kharr#7888: At a quick glance I don't see residual connections in there. :thonk:
alstroemeria313#1694: I know right
alstroemeria313#1694: Maybe they mean 'residual' as in there are skip connections from the encoder to the corresponding place in the decoder
alstroemeria313#1694: (Also, wow I'm glad we have autograd, crazy architectures like this would be an unholy pain to write if you had to compute the gradients manually)
bmk#1476: how do they decide who to show it to?
bmk#1476: ive never gotten one
bmk#1476: maybe google just doesnt want me
Kharr#7888: I was like "who wrote this?" and then looked at the top.. NVIDIA.. okay, makes sense.
alstroemeria313#1694: yeah, i've mucked around inside stylegan2-ada-pytorch and modified it
alstroemeria313#1694: i know their style lol
Kharr#7888: Yeah, someone clearly thought about it before writing it. It's not some random intern's project.
chilli#5665: it used to be a very ... complicated procedure, which was basically
```
if search_query in ["python list comprehension", "arraylist java"]:
    show_foobar()  # pseudocode: surface the foobar invite
```
bmk#1476: .. nothing https://cdn.discordapp.com/attachments/729741769738158194/854509275036254229/unknown.png
bmk#1476: I've googled programming related stuff all the time for like 10 years now and I've never ever gotten it |
bmk#1476: guess google just decided they don't want me
Spy#9778: I remember like everyone in my cs department got it with like
Teemochu#8740: keep doing it
Spy#9778: "python operator precedence" or something?
Louis#0144: i took over #research with KGs to basically say "yeah they suck rn"
Louis#0144: LMAO
Teemochu#8740: do it, wait 5 seconds, retry
chilli#5665: it used to be literally those exact 2 phrases
chilli#5665: in that exact order
Teemochu#8740: you can be pretty sure it doesn't work if it doesn't work after 50 tries or so
bmk#1476: that's so dumb tho
chilli#5665: "python list comprehension" would work
chilli#5665: but not
chilli#5665: "list comprehension python"
Teemochu#8740: it's a low chance
bmk#1476: that's the most utterly braindead hiring method ever
chilli#5665: I agree
chilli#5665: and they weighted it insanely highly
bmk#1476: I'm kinda miffed
Teemochu#8740: my guess is it was intentionally leaked to the press |
chilli#5665: I had a bunch of friends who had their resumes thrown out when applying
chilli#5665: then they did foobar
Kharr#7888: Keep in mind Google now routes search through a BERT-like model. Things could have changed a bit.
chilli#5665: and then they got emails back saying "we'd like to continue the hiring process"
Teemochu#8740: Foobar or codejam round 3 is how I got in
chilli#5665: keep in mind, some of these friends got to codejam round 3
chilli#5665: lol
Teemochu#8740: those were the only things I used that email address for that would be halfway relevant
chilli#5665: which is 100x harder than foobar
bmk#1476: I never made it past codejam round 2 because I forgot to mark it on my calendar
bmk#1476: or actually round 1
bmk#1476: it's a tradition
bmk#1476: every year I forget some round of codejam
bmk#1476: usually the qualification round
bmk#1476: but I actually remembered to do it this year only to forget round 1
bmk#1476: but yeah im really miffed at this foobar thing
bmk#1476: do adblockers interfere?
guac#4716: i just got it lmao
chilli#5665: So basically, Google:
|
Top 1000 competitive programmers in our flagship contest? - we sleep
chilli#5665: Search “python list comprehension”? Hell yeah
guac#4716: yup
chilli#5665: By searching those phrases?
chilli#5665: Lol
bmk#1476: ragecrylaugh https://cdn.discordapp.com/attachments/729741769738158194/854510918734184468/unknown.png
bmk#1476: im pretty sure google just fucking hates me
chilli#5665: Just open up a bunch of tabs and spam it
guac#4716: yo i swiped it away by accident cause it like warped the page with a shitty animation nooooo
chilli#5665: I don’t know if they still place as much weight on it lol
bmk#1476: this is the most braindead hiring practice in the world
bmk#1476: yes im salty
chilli#5665: Agreed
bmk#1476: good thing i dont plan on applying to google rn
Sid#2121: i got google foobar one time, can't remember what i was searching
Sid#2121: probably should've actually done it lol
chilli#5665: For a while it was a good way of avoiding google’s regular crappy HR process
bmk#1476: but like man why didnt they pick a more.. interesting search phrase
bmk#1476: the only people searching java arraylist are people just starting java
bmk#1476: or python list comprehension |
bmk#1476: you dont search that if you're an experienced dev
Sid#2121: it's not just list comp
Sid#2121: i was searching for sth else
chilli#5665: No no, not Java arraylist
chilli#5665: Arraylist Java
guac#4716: wait if you refresh the page you lose it? i can't get it back
Sid#2121: you get a log in but it's timed
bmk#1476: I've been searching for stuff for years and I have literally never gotten foobar
chilli#5665: Dependency injection was another one I remember working
Sid#2121: so like if you start a challenge then don't finish it in like 2 weeks i think it was, you get logged out
bmk#1476: that's boring corpo shit tho
Sid#2121: the few that i did were actually pretty fun
bmk#1476: i strongly associate dependency injection with java ee
zphang#7252: 1. put esoteric error message deep in TF/jax codebase that only someone working on some weird edgecase will encounter
2. when that message is googled, trigger foobar
bmk#1476: 3. hire them and send them to the tensorflow mines
zphang#7252: where they'll plant the invite for their next... victim
bmk#1476: this but with tf https://cdn.discordapp.com/attachments/729741769738158194/854512787346292777/14688326c12892e4edb694f31bb953db.png
kindiana#1016: it says free TPUs on the side
bmk#1476: someone go ps that |
guac#4716: i really just swiped "back" on my trackpad for the google foobar... she's gone :sadge:
Spy#9778: https://cdn.discordapp.com/attachments/729741769738158194/854516411434008596/unknown.png
Spy#9778: my reconstructions seem to get stuck being the right shape but weird green blobs
Spy#9778: any intuitions for what'd cause that? (vqgan)
bmk#1476: just keep searching the term over and over again?
guac#4716: you right lmao https://cdn.discordapp.com/attachments/729741769738158194/854516643380330546/Screen_Shot_2021-06-15_at_8.23.57_PM.png
𓅬 gabriel_syme 𓅬#3220: that reminds me of the initialization of some pretrained models when we use them with CLIP
𓅬 gabriel_syme 𓅬#3220: barring the background
𓅬 gabriel_syme 𓅬#3220: so maybe it's...early?
Spy#9778: could be
Spy#9778: maybe the discriminator will save it
𓅬 gabriel_syme 𓅬#3220: how much data do you have?
Spy#9778: ~9k
Spy#9778: images
𓅬 gabriel_syme 𓅬#3220: so you can do smth like idk 40k samples, then turn the discriminator on for another 40k?
Spy#9778: I was thinking about trying a super small dataset of like 20 images just to make sure it can memorize it
Spy#9778: to sanity check
𓅬 gabriel_syme 𓅬#3220: but yeah discriminator helps a lot, although not sure if it should be that bad during generator phase
chase#6932: I got it when I searched in an incognito window
BoneAmputee#8363: maybe certain things in your search history disqualify you :berk: |
Teemochu#8740: I highly doubt you'd be disqualified for how you've used your python before
bmk#1476: :guilty:
bmk#1476: maybe they just automatically disqualify anyone who's ever used Haskell out of sheer disgust
BoneAmputee#8363: I've never seen it either
BoneAmputee#8363: lot o searches
Teemochu#8740: Python list comprehension: comprehending you're on a list for how you use your python
AI_WAIFU#2844: Me: SD←((+/((X - AV←(T←+/X)÷⍴X)*2))÷⍴X)*0.5
Google: Keep him the fuck away from our systems.
AI_WAIFU#2844: APL
Louis#0144: what ever happened to NTM btw
Louis#0144: they like entirely died out
StellaAthena#3530: It was 90% hype
StellaAthena#3530: and the parts that weren't weren't mainstream useful
Louis#0144: maybe we should reimplement the eval they did with a decision transformer?
Louis#0144: showing that decision transformers are really good at storing memory like that
Spy#9778: so my initialization looks like this: https://cdn.discordapp.com/attachments/729741769738158194/854532733714956288/unknown.png
Spy#9778: and even with a dataset of just 15 images or something, it's having a lot of trouble memorizing, and I'm getting that discoloration https://cdn.discordapp.com/attachments/729741769738158194/854532908433670204/unknown.png
alstroemeria313#1694: Yeah it should be able to memorize
Spy#9778: hmm |
Spy#9778: I suppose I have probably screwed up something somewhere
Dirk Groeneveld#5137: What happened to #the-pile? Anyways, we just released the multilingual C4 dataset, and I thought I'd let you all know: https://github.com/allenai/allennlp/discussions/5265
Louis#0144: chonk
Dirk Groeneveld#5137: I'm happy to answer questions. If I'm unresponsive, @ me. I am excited to talk about this, I just don't hang on Discord a lot right now.
Louis#0144: were there any languages that you were surprised how much of it/little of it was available?
kindiana#1016: hrm, you should be able to do zlib compression on tfrecords
Dirk Groeneveld#5137: I would not have figured Russian to be the second most common one.
kindiana#1016: not sure if that works with tfds
Louis#0144: the russians love their shitposting
Dirk Groeneveld#5137: I want to say though, we haven't done a ton of analysis on this yet. We're talking about writing a paper like we did for the first one. Or maybe a better one.
Dirk Groeneveld#5137: I did not explore it. The original processing script works on uncompressed records.
Dirk Groeneveld#5137: For daily use I go with the JSON version anyways.
kindiana#1016: yeah, I've found zlib compression to work well, maybe that's something you can investigate if people are downloading the tfds ones a lot lol
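(For anyone trying it: zlib-compressed TFRecords are a one-flag change on both the write and read side; whether tfds picks them up automatically is the open question here.)
```python
import tensorflow as tf

opts = tf.io.TFRecordOptions(compression_type="ZLIB")
with tf.io.TFRecordWriter("shard-00000.tfrecord", options=opts) as w:
    w.write(b"serialized example bytes")  # normally a tf.train.Example

ds = tf.data.TFRecordDataset(["shard-00000.tfrecord"], compression_type="ZLIB")
```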
Dirk Groeneveld#5137: Sadly we don't get the numbers for downloads 😔
Louis#0144: are you going to send people harddrives with the data on it
Louis#0144: ive had a company do that for me before
Dirk Groeneveld#5137: The TFDS version is mainly for people running pipelines inside of the Google Cloud. And in that case, the transfer is free.
kindiana#1016: yeah fair enough
Dirk Groeneveld#5137: Haha, I should pitch that as a business idea. Maybe my team can become a profit center instead of a cost center :-)
Louis#0144: lmao |
StellaAthena#3530: @Dirk Groeneveld How did you do language classification? Was it labeled by the mT5 people?
Dirk Groeneveld#5137: You know, I paid for a CD-ROM containing the BNC corpus once upon a time.
bmk#1476: just give it to the eye
Louis#0144: LDC sent me a 2tb hard drive once and I still have it in my machine (i rly like the LDC label on it)
Dirk Groeneveld#5137: It was autodetected by one of Google's built-in language detectors. I think it's called CDL3 or something like that?
bmk#1476: cld3
Dirk Groeneveld#5137: I don't want to make it seem like a lot of original thought went into this. This is the output you get from running the officially published script.
bmk#1476: sell the drives at market value of an x TB drive, but actually use shucked disks instead and profit off the difference /s
StellaAthena#3530: Gotcha. I wonder how accurate the labels are then…
Dirk Groeneveld#5137: It has to be that way, so everybody is talking about the same thing when they say "mC4".
bmk#1476: cld3 is a *fast* classifier
Louis#0144: Out of curiosity, what is a profit center at AI2? I would imagine it's Mosaic?
Dirk Groeneveld#5137: No, if anything counts as a profit center it would be the incubator. But probably not even that. AI2 is a non-profit.
bmk#1476: I haven't done very exhaustive testing but I bet it probably has its share of flaws
bmk#1476: wen AI2 x Eleuther collab
StellaAthena#3530: @Dirk Groeneveld If you do do a paper analyzing the data, I would strongly recommend using a methodology that someone else came up with. I’m not just plugging citations for the Pile here, you can use another dataset if you like. But having many papers using different methodologies and benchmarking different things makes it hard to draw conclusions
Louis#0144: mosaic as a startup: we're like GPT-3 in that we charge you a flat rate per vertex of your knowledge graph
Louis#0144: lmao
Louis#0144: I was discussing mosaic papers in #research today so it is a bit funny you joined now
Dirk Groeneveld#5137: I've been lurking for a while :-) |
StellaAthena#3530: The C4 paper, the Pile paper, and the BookCorpus retrospective are all non-comparable
StellaAthena#3530: But there’s a strong sense in which relative results are all that matter
Dirk Groeneveld#5137: This is a good point. I figure we're still trying to figure out how to properly document these.
Dirk Groeneveld#5137: I can put you in touch with Jesse if you like?
StellaAthena#3530: Nobody has any intuition for what an appropriate amount of swears in a dataset is, or where the line between “realistic” and “racist” is for co-occurrence charts
StellaAthena#3530: That information is helpful comparatively. But if nobody compares them, it’s just not useful.
Dirk Groeneveld#5137: I'd also be interested in getting them all into a compatible format. So the same analysis can be run on each.
Louis#0144: do you know if any other ai2 people are here
Louis#0144: out of curiosity
Dirk Groeneveld#5137: I'd be surprised if they weren't? But I don't know for sure.
StellaAthena#3530: We were *supposed to* do the Pile analysis in a fashion that makes it very easy to replicate. But then we had 15 or so people working independently and it kinda didn’t happen.
Dirk Groeneveld#5137: And we had more ideas for the C4 paper, but ran headlong into a deadline 😬
Dirk Groeneveld#5137: I think we'll converge after a few iterations.
StellaAthena#3530: How is mC4 formatted?
StellaAthena#3530: We have a light-weight custom formatting that handles tars nicely that Leo made
StellaAthena#3530: https://github.com/leogao2/lm_dataformat/
bmk#1476: a more fair characterization of lmd imo is it's a library that can read text data of basically any normal format that normal people use
Dirk Groeneveld#5137: It's JSON lines, in compressed files.
bmk#1476: tar.gz, zip, jsonl.zst, etc
Dirk Groeneveld#5137: Then lmd should be able to read this out of the box? |
bmk#1476: uh, what's the schema of each of the objects? like is it just a string and nothing else or is it in an object
bmk#1476: fwiw the library is kinda unmaintained rn
bmk#1476: there are some major refactors I'd like to do that I've just never gotten around to
StellaAthena#3530: The point is that if we process the data into this form, we should be able to do at least some of the same experiments done on the Pile right
bmk#1476: i don't expect processing the data to the right format to be the hard part
StellaAthena#3530: What do you expect to be the hard part
bmk#1476: actually finding all the evaluation scripts from the 10 different people who did the evaluations
StellaAthena#3530: Yeah…
Dirk Groeneveld#5137: It has URLs, timestamps, and text. That's all.
Dirk Groeneveld#5137: In a way each record has a language, too, which is hidden in the filename.
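(Given that schema, a sketch of streaming it through lmd; the shard path is hypothetical, and whether lmd parses this exact layout out of the box is precisely what's being asked above.)
```python
import lm_dataformat as lmd

rdr = lmd.Reader("mc4/en/part-00000.json.gz")  # hypothetical shard path
for text, meta in rdr.stream_data(get_meta=True):
    print(meta.get("url"), meta.get("timestamp"), text[:80])
```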
StellaAthena#3530: @bmk approve my pile PR
Spy#9778: ah I think I figured it out
Spy#9778: upweighting the L1 loss seems to help a lot
mr_seeker#1337: Just woke up. Did you know there's something called AWS Snowball and AWS Snowmobile?
bmk#1476: we'll need one to ship out pile v2 to users
Louis#0144: https://discord.com/channels/729741769192767510/833024668780068894/854586040437964841
Louis#0144: I’m so proud of this
mr_seeker#1337: @bmk I would be happy to get a harddrive with the pile shipped to me...
bmk#1476: the pile is too small to necessitate that
bmk#1476: pile v2, on the other hand... |