Zippy#1111: Like you can design the next flappy bird on an old 2013 macbook, but if you want to be competitive with AI, you need money.
Mega Glaceon#8882: or ingenuity to make training current models a lot faster
Mega Glaceon#8882: :smort:
gollark#3909: Or Colab and TRC or whatever else.
bmk#1476: bitter lesson disapproves
Zippy#1111: I guess that's true. If you're lucky enough to get access to TRC or anything like that.. then I guess you don't need money.
Kia#2550: Lucky?
Kia#2550: :surprise: Huh
Zippy#1111: Well I mean, you can apply if you're a researcher to use TRC
choltz95#4641: or just do some theory :books2:
Zippy#1111: It sucks maan.. I went to UMass Amherst, which has an amazing AI program, but I just did general comp sci. I feel like I missed out :hawaiicry:
choltz95#4641: Boston's great tho
Mega Glaceon#8882: meh, i've done all my stuff as a hobbyist
bmk#1476: Eleuther is hobbyist land
alstroemeria313#1694: Eheh... I went to college back before neural nets became popular again
alstroemeria313#1694: I took an "AI" course and we learned tree search algorithms.
bmk#1476: anyone can apply to TRC
bmk#1476: the bar is incredibly low
bmk#1476: you only need a pulse (optional)
Mega Glaceon#8882: it'd be counterproductive for me to get a degree, i'd have to wade through so much uninteresting basic stuff. as a hobbyist, i can just tune transformers all day and nobody can complain :gdyeet:
Zippy#1111: Yeah.. I'm just starting my AI journey.. At my last business we were doing process automation and I got to do a bunch of preprocessing for images for tesseract OCR, training ocr on the target text.. and it got me into it. We hired this AI guy, and my job was essentially to fix his broken code :overfloosh: .. and along the way I learned a bunch of AI LOL
bmk#1476: also if anyone tries to argue that hobbyists can't do anything, well, eleuther is a counterexample. just look at our models and our publications list https://cdn.discordapp.com/attachments/729741769738158194/884934534129418280/Screenshot_20210907-165158_Chrome.jpg
bmk#1476: :tribalism:
Mega Glaceon#8882: my motivation for getting into ML was that, in 2017, i didn't know anything about ML and thought it was just a toy and didnt do anything useful. boy has this been a journey
Zippy#1111: Eventually made my own custom transformer using some tutorial and customized it to classify document types based on what the OCR pulled.. generated a dataset and it got like 99.5% accuracy :woah:
choltz95#4641: u have yet to develop the self esteem issues of an academic
Zippy#1111: That's pretty awesome!
Kia#2550: Always so pretty to look at:hap:
Zippy#1111: Sorry if I sound like I'm boasting.. I just don't ever get to talk about AI stuff because my friends aren't interested in diving into the nitty gritty.
Mega Glaceon#8882: you didnt sound like you were boasting :thonkdorp:
Zippy#1111: I know I suck, I'm just.. a noob who's excited that I made a droopy snowman
Zippy#1111: :peeka:
Mega Glaceon#8882: i started with MNIST digit recognition and was super happy that my c++ code was so fast :salpmao:
Mega Glaceon#8882: like, i did NNs from scratch and made my own kind-of-library
Zippy#1111: Oh lord. That's crazy..
Mega Glaceon#8882: but i'm the bottom-up kinda person
Mega Glaceon#8882: i wanna know the details of everything, and im not satisfied before i can implement something myself
Zippy#1111: I like to go both directions.. I start very abstract and then do a depth first search until I have no idea what I'm looking at, and then I go back to the root and do breadth first until I get bored, then dive again, ....
Mega Glaceon#8882: :smart:
Mega Glaceon#8882: i got a nice refresher on how to take derivatives when i did NNs from scratch
Zippy#1111: derivatives are nice, but I don't like them because they are mean to computers that don't like irrational numbers.
Mega Glaceon#8882: :surmu:
Zippy#1111: :hawaiicry:
StellaAthena#3530: Welcome to science tbh.
StellaAthena#3530: It's weird to me that CS people are only starting to notice
StellaAthena#3530: Random people on the street stopped being able to do research in physics over a century ago
Louis#0144: nah the cranks just went to string theory
Louis#0144: (to clarify string theory has been massively essential to the development of differential geometry, not really to physics though)
bmk#1476: ew, I hate string algorithms
bmk#1476: "let me just match strings using the z algorithm" statements dreamed up by the utterly Deranged
Teemochu#8740: suffix array! :ptsd:
Zippy#1111: :hawaiicry: yeah.. I'm just used to being able to do essentially anything with a computer.
Zippy#1111: I can make a game, make a driver, build an OS, build a server, make a pretty website, make useful utilities, .... but when it comes to AI, *nope*
Zippy#1111: At least I can run the 256x256 ViT + CLIP guided diffusion thing on my old 1080.
Zippy#1111: but a lot of people probably don't have access to anything like that.
Kia#2550: *Mood*
Retoli Savoli#0469: whys everyone got a 1080 lmaooo
Retoli Savoli#0469: shit seems like the gold standard while I'm sitting here with my ratty little 6gb 1060
Sparkette#4342: How did I not notice until now that the GPT-3 model names are alphabetical? Ada, Babbage, Curie, Davinci
Zippy#1111: After that comes Eleuther :Smart:
Sparkette#4342: Haha, perhaps someday!
Teemochu#8740: :pikawave:
Zippy#1111: :pikawow:
James#6892: Wow. Never noticed till now.
janus#0150: Thats why Elon Musk founded the group.
Retoli Savoli#0469: What are some good resources and methods to find resources to learn about VQGAN, and other Machine Learning related subjects?
Retoli Savoli#0469: I want to start learning without incessantly asking questions
gabriel_syme#3220: I just google my way through it, find a few decent looking references/papers/blogs, and follow the thread after that. There's really no substitute for that. You could also go to the #art channel and take a look at the pins there. There are notebooks that will let you play with CLIP+VQGAN workflows, sometimes a bit of hands on helps you find the questions you care about.
Retoli Savoli#0469: I've been using like 4-5 different Colabs for the past 24 hours and those have been a fun learning experience seeing what keywords manipulate what etc
Retoli Savoli#0469: currently running 3 concurrently on Colab atm
Retoli Savoli#0469: and Research papers etc are useful you say?
gabriel_syme#3220: depends on what you want to do and how deep you want to go. But definitely read the VQGAN paper I'd say and see what comes after
tamay#2378: Does anyone know of any good resources on training large multi-output DNNs (i.e. with large output vectors), preferably in TF?
bmk#1476: how large are we talking? also how is this different from any other kind of training? i dont think ive seen any resources specifically for this because it's basically the same as with anything else
Sphinx#2092: I imagine you'll run into issues if you have a vocab bigger than 1M
Sphinx#2092: but otherwise, not a real concern
kurumuz#5695: stop doing TF, its illegal
tamay#2378: @bmk Maybe >100 dimensional output vectors?
kurumuz#5695: only 100?
tamay#2378: Yes?
kurumuz#5695: well gpt does 50k
bmk#1476: 100 is pretty smol lol
bmk#1476: just, like, train normally
tamay#2378: I'm finding training to be pretty inefficient as I scale up the dimensionality of outputsβI'd appreciate resources on how to deal with this.
kurumuz#5695: how so
kurumuz#5695: ofc it will be harder, its more demanding
kurumuz#5695: scale the network up
cfoster0#4356: What kind of loss? Regression? Cross entropy with a big vocab?
tamay#2378: Regression, tried MSE, MSLE, etc.
tamay#2378: Scaling up the network 10x didn't change much...
bmk#1476: what are you regressing with that many targets o.O
bmk#1476: i still dont see why it shouldnt work
Sphinx#2092: yeah I can't imagine the output size is the real bottleneck.
bmk#1476: are you trying to predict a latent vvector from another network or something?
bmk#1476: are you trying to predict multiple time steps in a time series or something
tamay#2378: I basically have a large dataset that includes a lot of input features and a lot of output features (multiple time steps, and a host of other stuff)βthe details are a bit messy, so I won't go into them too much. I'm trying to train a large network to be able to get it to map inputs to any possible output; and then fine-tuning this to do more specific things.
Yea, I agree that output size doesn't intuitively seem like the real bottleneck, but I've narrowed the issue down to that...
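(A minimal PyTorch sketch of the "just train normally" advice for a ~100-dimensional regression output; the layer sizes and dummy data are illustrative assumptions, not tamay's actual setup:)

```python
import torch
import torch.nn as nn

# Hypothetical multi-output regression: 64 input features -> 100 output features.
model = nn.Sequential(
    nn.Linear(64, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 100),  # a single linear head emits the whole output vector
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 64)    # dummy batch of inputs
y = torch.randn(32, 100)   # dummy batch of 100-dim targets

loss = nn.functional.mse_loss(model(x), y)  # MSE averaged over batch and outputs
opt.zero_grad()
loss.backward()
opt.step()
```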
kurumuz#5695: you dont give a lot of detail tbh
kurumuz#5695: i think you will be on your own
random_lurker99#8915: I don't think anyone can help you on a high level description. The way to fix things is to do it incrementally, task by task, see if it works if you give it extra input the model should not have, etc
tamay#2378: Fair, thanks.
Sphinx#2092: One thing to check might be how you are actually evaluating the final output
Sphinx#2092: Hopefully you are not doing something like a for-loop
Sphinx#2092: though even at a 100, it seems unlikely to be the culprit
bmk#1476: what's the probability that this is for stonks
random_lurker99#8915: in that case we can save some time and say it wont work
bmk#1476: whenever someone shows up with an ML question that has anything at all to do with time series, there's like an 80% chance it has something to do with trading
alstroemeria313#1694: What are some good audio datasets?
tamay#2378: @bmk @random_lurker99 not for stonks
Louis#0144: I find they're usually interested in neuro
Louis#0144: Not stocks
Louis#0144: lol
bmk#1476: ~~ah, it's for forex and crypto, then~~
Louis#0144: I wanna buy nfts of peoples brain implants
Louis#0144: There's probably a sex industry there
Louis#0144: tbh
Louis#0144: Oh this is general
Louis#0144: sorry
tamay#2378: It's actually for autoblow.v2
Louis#0144: LMAO
Louis#0144: FFS
bmk#1476: :goose9:
Louis#0144: I know just the people
Louis#0144: @AI_WAIFU @Teemochu
alstroemeria313#1694: so torchaudio just has some built in datasets?
Louis#0144: Yes
Louis#0144: I don't know about the quality
alstroemeria313#1694: i think i want speech
alstroemeria313#1694: Actually I am bored and want to try training an audio discrete VAE and autoregressive transformer model
Louis#0144: Audio diffusion model?
alstroemeria313#1694: (I have written my own discrete VAE before)
alstroemeria313#1694: ahah needs more compute than I have, probably
gabriel_syme#3220: I think I remember
alstroemeria313#1694: But would probably work?
bmk#1476: TPU go brrr tho?
alstroemeria313#1694: Not with PyTorch it isn't
bmk#1476: darn
bmk#1476: can't you write it in jax
alstroemeria313#1694: I would have to learn JAX
bmk#1476: or are you just using existing code
bmk#1476: oh right that too
bmk#1476: I also need to learn Jax someday
alstroemeria313#1694: i think most of the time spent training a discrete VAE is in the Gumbel-Softmax temperature annealing
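(A minimal sketch of the Gumbel-Softmax bottleneck with temperature annealing being described here; the schedule constants and shapes are illustrative assumptions:)

```python
import torch
import torch.nn.functional as F

def sample_codes(logits, step, tau_start=1.0, tau_end=1/16, anneal_steps=100_000):
    # Anneal the temperature exponentially from tau_start toward tau_end;
    # low tau pushes the relaxed samples toward one-hot (discrete) codes.
    t = min(step / anneal_steps, 1.0)
    tau = tau_start * (tau_end / tau_start) ** t
    return F.gumbel_softmax(logits, tau=tau, hard=False, dim=-1)

logits = torch.randn(2, 16, 512)      # (batch, positions, codebook size)
soft_one_hot = sample_codes(logits, step=5_000)
codebook = torch.randn(512, 64)       # 512 codes, 64-dim embeddings
z = soft_one_hot @ codebook           # latents fed to the decoder
```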
gabriel_syme#3220: Aye, same here. Probably now that I have access to tpus is a good time
alstroemeria313#1694: idk if there are any audio discrete VAEs out there
alstroemeria313#1694: I mean public ones.
gabriel_syme#3220: The original had audio examples right
alstroemeria313#1694: VQVAE did yes
alstroemeria313#1694: But I am looking at their code rn and don't see it
alstroemeria313#1694: You just have to stack Conv1d layers and downsamples for the encoder and the same with upsamples for the decoder right? ^^;;
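(A minimal sketch of that layout, with arbitrary channel counts and six stride-2 stages for 64x downsampling:)

```python
import torch.nn as nn

def make_audio_autoencoder(n_down=6, ch=64):
    # Encoder: each stride-2 Conv1d halves the length (6 stages = 64x total).
    enc = [nn.Conv1d(1, ch, 7, padding=3)]
    for _ in range(n_down):
        enc += [nn.Conv1d(ch, ch, 4, stride=2, padding=1), nn.ReLU()]
    # Decoder mirrors it with transposed convs that double the length back.
    dec = []
    for _ in range(n_down):
        dec += [nn.ConvTranspose1d(ch, ch, 4, stride=2, padding=1), nn.ReLU()]
    dec += [nn.Conv1d(ch, 1, 7, padding=3)]
    return nn.Sequential(*enc), nn.Sequential(*dec)

encoder, decoder = make_audio_autoencoder()
```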
gabriel_syme#3220: Random find: https://github.com/ASzot/vq-vae-audio
alstroemeria313#1694: ooh
gabriel_syme#3220: I think adapted from original
alstroemeria313#1694: how do you like... play audio.
alstroemeria313#1694: In Python.
alstroemeria313#1694: Or do I have to spit out a WAV and call an external binary (This is OK for my use case)
gabriel_syme#3220: Pyaudio?
alstroemeria313#1694: No Macos don't open it in iTunes ffs
gabriel_syme#3220: :(
alstroemeria313#1694: CMU ARCTIC is small? 1132 examples?
alstroemeria313#1694: > It consists of around 1150 utterances selected from out-of-copyright texts from Project Gutenberg.
alstroemeria313#1694: gonna look at librispeech
cfoster0#4356: https://docs.google.com/document/d/1oZpCCFJFcgmPkvso8ecNPmMo2mPLLGt4M-8KDslEJ1U
alstroemeria313#1694: ohh
cfoster0#4356: How much are you looking for?
alstroemeria313#1694: As much as can reasonably fit on my storage
alstroemeria313#1694: I probably want English only rn
cfoster0#4356: The biggest source that's easily available is probably Facebook's MLS http://openslr.org/94/?fbclid=IwAR2uazBO9WUWU65te4wV_X_aLexhey1nLgpX6nXHu0lRyHJ1GCleAurd6N8
cfoster0#4356: 45K hours of English, around 650 gigs uncompressed
StellaAthena#3530: @cfoster0 Do you know how many words it is?
cfoster0#4356: ~2.4B
cfoster0#4356: https://arxiv.org/abs/2012.03411
Oleksii Bulygin#8016: on mac u can use afplay file.wav
alstroemeria313#1694: ty :)
Oleksii Bulygin#8016: ie in subprocess.call('afplay
Oleksii Bulygin#8016: are you using torchaudio?
Oleksii Bulygin#8016: if working with jupyter notebook, you can just call display(audio)
alstroemeria313#1694: yes
alstroemeria313#1694: oh no, the shortest thing in this dataset is 22560 samples
alstroemeria313#1694: guess i can just zero pad
alstroemeria313#1694: on the right
Zippy#1111: I hate that vscode doesn't load function definitions / properties / documentation for code in .pyx files :hawaiicry:
Oleksii Bulygin#8016: okay, you can not. But if you import IPython.display.Audio and wrap it up as display(Audio(waveform[0], rate=sample_rate)) then you can
Oleksii Bulygin#8016: anyways, could be worse
Oleksii Bulygin#8016: there's a handy function from docs
Oleksii Bulygin#8016: ```def play_audio(waveform, sample_rate):
waveform = waveform.numpy()
num_channels, num_frames = waveform.shape
if num_channels == 1:
display(Audio(waveform[0], rate=sample_rate))
elif num_channels == 2:
display(Audio((waveform[0], waveform[1]), rate=sample_rate))```
Iacopo Poli#2931: Hi I'm Iacopo, I've been lurking here for a few months. I have a question/doubt about the rotary embeddings blog post https://blog.eleuther.ai/rotary-embeddings/
In the GPT-NeoX (PyTorch) implementation `seq_dim` in `forward` defaults to 1; however, the N.B. below the code reads "The layout of the queries and keys in GPT-NeoX, following Megatron, is [seq, batch, heads, hdim]", so this would be selecting `batch` and not sequence length. I went into the gpt-neox source, and I see that there is an additional `seq_len` element https://github.com/EleutherAI/gpt-neox/blob/d2215a65f6cb7f0656eab1c2cd8f632b44dd15fa/megatron/model/transformer.py#L363 that in the end makes `seq_dim` useless, so this implementation looks correct. Can you confirm that `seq_dim=1` is a wrong default?
alstroemeria313#1694: it's just producing a constant high pitched whine :/
EricHallahan#1051: What are you trying to do?
alstroemeria313#1694: Audio dVAE
alstroemeria313#1694: gonna try MSE loss instead
StellaAthena#3530: @Iacopo Poli The blog post was correct when it was written. We have subsequently changed the NeoX implementation to allow for more flexibility, especially using "partial rotary embeddings" (where rope is only applied to some of the vector) and changing the base.
someKindaBean#8471: what do you want to do? ESC-50 is a widely used one for classification
alstroemeria313#1694: i wanted to train some sort of generator of audio
someKindaBean#8471: sounds awesome
alstroemeria313#1694: Uh, my VAE seems to be collapsing though.
alstroemeria313#1694: Toward just outputting silence.
alstroemeria313#1694: Or near silence.
cfoster0#4356: There may be too much silence in the data. Oftentimes people will trim those out
alstroemeria313#1694: ah
alstroemeria313#1694: it should like...
alstroemeria313#1694: work, though?
alstroemeria313#1694: I am printing MSE of the reconstruction vs 0 and MSE vs the input
alstroemeria313#1694: And vs 0 is *way* lower
someKindaBean#8471: haha, that's not great.
someKindaBean#8471: maybe start with some kind of constant, ambient noise to counter what cfoster said?
someKindaBean#8471: urbansound8k is another dataset that has environmental noise
alstroemeria313#1694: mb my model is just too small
alstroemeria313#1694: it just does not have enough capacity to model the signal so it outputs near silence.
someKindaBean#8471: oof, that sucks
someKindaBean#8471: are you using the mu-law trick that wavenet uses?
alstroemeria313#1694: no
alstroemeria313#1694: um, I don't know what it is so I think not.
alstroemeria313#1694: I just stacked stride 2 conv1d layers lol
alstroemeria313#1694: Then transposed conv1d in the decoder.
someKindaBean#8471: it's a way to scale inputs/outputs to reduce dynamic range used, https://en.wikipedia.org/wiki/%CE%9C-law_algorithm
EricHallahan#1051: Are you just convolving over a sequence of floats?
alstroemeria313#1694: yes lol
EricHallahan#1051: WaveNet uses the discretized form of mu-law and outputs a distribution over all mu-law values.
alstroemeria313#1694: ahh
alstroemeria313#1694: so vocab size 256?
alstroemeria313#1694: and cross-entropy training loss?
EricHallahan#1051: Yeah, pretty much.
alstroemeria313#1694: ah torchaudio has that
someKindaBean#8471: you also have to remap from mu-law to raw signal before your final output
alstroemeria313#1694: does it use mu-law as input?
alstroemeria313#1694: to the encoder?
alstroemeria313#1694: or is that still raw audio
EricHallahan#1051: Yeah, they compress the input at the beginning and decompress the output at the end.
alstroemeria313#1694: ahh
alstroemeria313#1694: ty :)
someKindaBean#8471: here's the wavenet paper if you need it, section 2.2 https://arxiv.org/pdf/1609.03499.pdf
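(A minimal sketch of the mu-law companding being discussed, using the torchaudio transforms; 256 quantization levels matches WaveNet's 256-way output:)

```python
import torch
import torchaudio

waveform = torch.rand(1, 16000) * 2 - 1  # float audio in [-1, 1]

# Compress to 256 discrete mu-law levels; train with cross-entropy over
# these classes, then expand back to floats at the output.
encode = torchaudio.transforms.MuLawEncoding(quantization_channels=256)
decode = torchaudio.transforms.MuLawDecoding(quantization_channels=256)

codes = encode(waveform)         # integer codes in [0, 255]
reconstructed = decode(codes)    # back to floats in [-1, 1]
```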
Sphinx#2092: lol I'm glad this suggestion turned out useful.
someKindaBean#8471: hahaha, I almost tagged you to credit you
Louis#0144: big news on CARP, we beat the baseline by an OOM
someKindaBean#8471: Thanks for pointing it out the other day, I haven't used it personally yet
Oleksii Bulygin#8016: what's the audio.metadata? esp encoding
Oleksii Bulygin#8016: I once was saving float waveform (-1, 1) as uint wav and it sure was silence
alstroemeria313#1694: it's -1 to 1, 1 channel, sample rate 16000
alstroemeria313#1694: ok it is now outputting something that is a high pitched whine that is not constant.
alstroemeria313#1694: this is progress.
Oleksii Bulygin#8016: humans learn it the same way so yes
AI_WAIFU#2844: how big is your latent?
alstroemeria313#1694: 64 channels inside the model, 512 latent codes
alstroemeria313#1694: It's tiny
alstroemeria313#1694: it's now making reconstructions that go softer and louder
someKindaBean#8471: so a high pitched whine that gets louder and softer, that's basically as far as my analog synthesizers ever got when i was playing in that area
alstroemeria313#1694: eheh...
someKindaBean#8471: what is your input?
someKindaBean#8471: like what kind of audio file?
alstroemeria313#1694: @someKindaBean https://www.openslr.org/12 train-clean-100
alstroemeria313#1694: English speech
alstroemeria313#1694: it now sounds like a bad vocoder going "dththththth"
someKindaBean#8471: are you sticking with a single speaker?
alstroemeria313#1694: no
someKindaBean#8471: and is this reconstructing a given input or just generating?
alstroemeria313#1694: reconstructing.
someKindaBean#8471: cool
AI_WAIFU#2844: and you're outputting on how many channels?
alstroemeria313#1694: 256.
alstroemeria313#1694: this model is p too small, i wanted to get something working first
tcapelle#3649: Hello everyone. I am curious if we could use EleutherAI to reproduce/develop research on energy forecasting.
I am interested in reproducing results that are not open sourced, like:
- https://ai.googleblog.com/2020/03/a-neural-weather-model-for-eight-hour.html
- https://deepai.org/publication/skillful-precipitation-nowcasting-using-deep-generative-models-of-radar
- https://arxiv.org/abs/1711.02316
This is crucial research to advance the energy transition, and it would be super cool if we could make these models freely available.
Currently there is OpenClimateFix that is trying to reproduce some of these, but it doesn't have enough funds to train them (MetNet and PerceiverIO).
- https://github.com/openclimatefix/satflow
Kia#2550: @cfoster0 ?
Kia#2550: Wait :thonk:
Kia#2550: I mean Do They need Permission tho
Kia#2550: Sorry for the ping :v
Kia#2550: My apologies
MicPie#9427: Is there a :lucid: -style BERT repo that is recommended?
I'm interested in how to best setup the loss with the masking.
Sid#2121: EleutherAI is just people lol. You can't "use" us. If you want to do the work, you need to start an initiative. Generally though we're not super interested in meteorology. never seen it even mentioned on the server before.
tcapelle#3649: I know, but I am looking for people interested in the topic
Exocamp#8255: Anyone else getting shitty GPUs in Colab lately even paying as pro?
EricHallahan#1051: I don't have Colab Pro but it sounds to be pretty universal.
cfoster0#4356: Yup
Exocamp#8255: So what's up with that?
Exocamp#8255: Is literally everyone using Colab now or is it a bit more malicious?
cfoster0#4356: Easy come, easy go
Zippy#1111: I bet people realized how good of a deal it is, considering that if you use it frequently, it's orders of magnitude cheaper than renting a gpu to use with a vm.
Exocamp#8255: It's just like
Exocamp#8255: I already pay 10 bucks a month for this, what's even the point if I get GPUs that free users get
Zippy#1111: I'm willing to bet that free users get less access.
cfoster0#4356: On pro I've never been told there wasn't a GPU available
Exocamp#8255: Sure but if I can't even *do* much with the access...
EricHallahan#1051: Doesn't Pro give you terminal access and other stuff?
Exocamp#8255: Does it?
Zippy#1111: yes
Exocamp#8255: never realized that lol
Sphinx#2092: Why don't you look up what Pro actually gives you before paying $10 a month for it? lol
Exocamp#8255: I was told I would be given higher priority access to good GPUs and that's literally all I need right now lmao
cfoster0#4356: Terminal access?
Orz#3023: Well
Kaggle give 40hours of P100 per week for free
So yeah....
Exocamp#8255: But I am aware of what Colab Pro gives. I probably won't cancel it I guess but seriously I hope this GPU issue will be resolved soon.
gabriel_syme#3220: Crazy thing about colab Pro, even hitting on a decent gpu one day out of 30 it's probably worth the 10 dollars
Zippy#1111: Yeah.. cloud gpu prices are absolutely bonked. It's amazing anyone can get gpus in colab at all.
Exocamp#8255: Welp got a T4
Exocamp#8255: Bless you O mighty Google
EricHallahan#1051: Plot twist: Google is restricting access of GPUs to push people to use the ten billion TPU v2-8s they have laying around.
Zippy#1111: If I was a bezos, I would make an actually cheap cloud service. The amount of money that places like aws make from cloud stuff is insane. They're obviously price gouging quite a bit.
Zippy#1111: Especially with gpus.
EarthEngineer#6001: Energy is the issue. Since we do not pay for energy, we can use old equipment at 5 cents on the dollar compared to what new stuff costs. Performance is better too.
StellaAthena#3530: If anyone is a web developer with experience with Hugo, EleutherAI is looking for someone to revamp our website. We know roughly what we would like it to look like, but the current version was
a) designed by committee
b) implemented by ML researchers in their free time
c) done by people without any prior web dev experience
EricHallahan#1051: Do you know exactly what you are looking for?
Zippy#1111: I'm a web dev, but I don't know anything about hugo :Sadge:. I'm a node.js / react / next.js / blitz.js developer.
StellaAthena#3530: -ish
StellaAthena#3530: We've talked about the problems with the current website at length. One of those problems is that we don't have the time and/or expertise to design the right solutions
Louis#0144: Unpaid or paid?
janus#0150: *ML engineers try to design a product*
Louis#0144: (Assuming unpaid)
EricHallahan#1051: Everything here is unpaid lol
StellaAthena#3530: ^^
StellaAthena#3530: We can lend you some compute in exchange, probably.
StellaAthena#3530: And say very nice things about you in a discreet corner of the website
EricHallahan#1051: Though "web dev" and "needs compute" is a pretty narrow subset of people.
Louis#0144: I know it was unpaid, just the message seemed like it read like a job posting
Louis#0144: So I didn't want anyone to think it was paid
StellaAthena#3530: IDK how much web devs charge though. Like, if revamping the website and making it something eric or I can maintain costs $100 it's probably worth paying out of pocket tbh
Louis#0144: I'd be willing to chip in a bit too
Louis#0144: I'm fine paying a few hundred or something
EricHallahan#1051: I have just been slacking on it a bit lately to work on other projects (like HF GPT-J).
kurumuz#5695: 100? yeah probably not
Louis#0144: @kurumuz ask Chris rly nicely
Louis#0144: Pls
Louis#0144: :berk:
kurumuz#5695: sure
kurumuz#5695: i seriously will
Louis#0144: Ty
kurumuz#5695: tabloid for design as well
EricHallahan#1051: I would take it lol
kurumuz#5695: compute in exchange would be good tho
kurumuz#5695: you didnt take much better offers
kurumuz#5695: lol
janus#0150: Hugo is a static site generator which uses templates to turn markdown (and maybe other filetypes?) into pages. You write html for the pages and include variables like `<hugo_document_text>` which hugo fills in. Hugo takes your files and builds the static site and serves it. So mostly it's web design. You can use js but unlikely something like node.
EricHallahan#1051: Like there isn't much keeping me from working on it, I had just assumed there was nothing urgent that needed to get done since #website has been silent for a week.
gollark#3909: Data centre GPUs are really expensive and they can't use the gaming ones due to EULAs.
bmk#1476: renting a gpu is *really* expensive
bmk#1476: even considering dc gpu prices
bmk#1476: the break even point is something like a few months
StellaAthena#3530: Yeah, but it would be nice to have you go do some research and leave the revamp to someone who can contribute to the website much more than they can contribute to research
gollark#3909: I can slightly do web development, though I've never used Hugo specifically.
gollark#3909: My website is compiled using a somewhat horrible JS program.
Zippy#1111: Yeah... datacenter gpus are expensive, but ... just think about how much they actually charge for a gpu.. it's insane. It's 4.50 an hour to rent a v2-8 tpu, which is about 40 grand a year.
bmk#1476: a v2-8 is equivalent to like 4 gpus give or take
bmk#1476: so that's not really the best comparison
Zippy#1111: Yeah, and an A6000 costs about 6 grand. So essentially they could be breaking even in one year if it were the cost of 4 of those + the cost of maintenance and electricity.
Orz#3023: you could possibly take them as unpaid interns
bmk#1476: why can't I take people as unpaid interns if eleuther isn't an organization
EricHallahan#1051: I'm technically actively looking for an internship lol
gollark#3909: Hmm. Weird. I guess it would fit with the general thing of cloud costing a lot more than it "should".
Zippy#1111: Yeah... and considering that amazon makes much more from aws than from amazon.com
Orz#3023: I'm pretty sure anyone would give you any job you need,
just let them skim through your papers
Zippy#1111: cloud services are essentially a monopoly where none of the big players have any incentive to lower prices.
gollark#3909: There are less popular providers which are *also* pretty costly though.
Zippy#1111: Yeah because the cost of *starting* a cloud service is gigantic, but then it becomes a money tree.
gollark#3909: It isn't that big. You can just rent colocation and buy a few servers.
gollark#3909: Depending on how you define cloud, I guess.
Zippy#1111: True.
gollark#3909: Most of them have tons of managed services plus quick to deploy VMs.
Zippy#1111: I mean even places like heroku actually use aws.. they don't have their own servers because it's too pricey.
bmk#1476: idk that doesn't sound right
bmk#1476: cloud is a commodity
StellaAthena#3530: No it's not
bmk#1476: this is why aws and gcp are constantly trying to create new products with shiny features that lock you in - because just selling compute is very fungible
StellaAthena#3530: There's massive costs to migrate from one to another
StellaAthena#3530: On the enterprise level
bmk#1476: only if you use their custom features
Zippy#1111: https://a16z.com/2021/05/27/cost-of-cloud-paradox-market-cap-cloud-lifecycle-scale-growth-repatriation-optimization/ this is a good article about it.
bmk#1476: if you just use cloud for compute you don't get locked in
Zippy#1111: Basically certain places have dramatically increased their profit by moving away from cloud and using their own servers.
gollark#3909: I think the argument for cloud is mostly that it's much faster to scale than "have a bunch of servers in your office", but it seems like you pay an insane amount for that.
bmk#1476: i.e if you use terraform and open source versions of whatever services you might end up using, then you can switch clouds easily
gollark#3909: Possibly also that you can hire fewer sysadmins? But I'm not sure they're that expensive if you have a lot of developers anyway.
Orz#3023: I think it depends on the scale of projects
It's certainly better to manage stuff locally(on a server) once a huge traffic kicks in
But idk if the same can be said for startups
Zippy#1111: > Dropbox gross margins increased from 33% to 67% from 2015 to 2017, which they noted was βprimarily due to our Infrastructure Optimization and anβ¦ increase in our revenue during the period.β
bmk#1476: gcp has some really good economies of scale there though
Zippy#1111: aka they started using colocations and buying their own hardware.
bmk#1476: like for example they have that crazy live migration thing
gollark#3909: I have an old tower server which costs maybe £5/month to run, which provides ~4x the CPU/RAM and ~10x the disk I'd get from a cloud provider at similar pricing, plus I could install a spare GPU when I wanted that. This is a very extreme case since I am entirely ignoring my time costs on managing it and don't have as much redundancy as them.
(Edit: also terrible internet connectivity, and colocation would be expensive)
bmk#1476: if you do in house you just can't afford that
gollark#3909: But it still seems like a big price delta given that, like you said, they have ridiculous economies of scale.
bmk#1476: and they can squeeze an absurd amount out of their hardware using their custom scheduler
bmk#1476: etc
bmk#1476: basically in my mind the main advantages of cloud over colo or vps is that you can scale much much faster and have way better uptime
Zippy#1111: Yeah it's amazing for startups. Without cloud it wouldn't be possible for many businesses to get off the ground.
gollark#3909: Do cloud providers start stuff that much faster than generic VPS ones? All the VPS providers I've used can manage initialisation in a few minutes.
gollark#3909: Generally less.
gollark#3909: That is only about three of them, though.
bmk#1476: well first off you usually buy in increments of months
bmk#1476: so accommodating spikes means purchasing entire months of compute
gollark#3909: Right, I forgot about that.
kurumuz#5695: you buy years yeah
kurumuz#5695: we started accumulating compute as well
bmk#1476: second, they're operating at a much smaller scale than gcloud
bmk#1476: so if you suddenly go to them and ask for 100 servers they might not be able to fulfill that
bmk#1476: whereas google is big enough to amortize that out
kurumuz#5695: who are we talking about
bmk#1476: but yeah if you're trying to save money it probably makes sense to use VPS for baseline and cloud for spike handling
bmk#1476: and assuming you don't care about uptime *that* much
gollark#3909: It does seem like basically nobody does this. Maybe it's assumed it would take too much developer time.
bmk#1476: right it's not usually worth it
bmk#1476: developers are expensive
kurumuz#5695: its free if you do it yourself other than time
bmk#1476: "it's free as long as you don't consider the costs"
kurumuz#5695: lol
gollark#3909: I don't think most individuals actually have stuff which needs that much scaling?
Zippy#1111: I wanted to make a massive kubernetes cluster that would run a server on whichever cloud service was currently cheaper, or giving the best prices.. like a monitor that figures out given the current and projected workload, choose the cheapest cloud service for the task and launch the containers on that cloud LOL
gollark#3909: I mean, computers are fast and small VPSes can happily serve a few thousand req/s with good software.
Zippy#1111: and then deactivate the current, less cost effective cloud service
bmk#1476: I've worked on [redacted] where the base load was handled by my own hardware and it spun up gcp instances as needed for spikes
bmk#1476: that was a gpu load though
Zippy#1111: It's sad though.. My last business could run on a ~$40/month digitalocean cluster.. but if we wanted to use AI for our process automation, it would exponentially increase costs lol
kurumuz#5695: well it adds more value
bmk#1476: just run it off a GPU in your basement
kurumuz#5695: lol
bmk#1476: slap a message queue on it and a downtime detector that starts a gcp instance if it ever goes down
bmk#1476: problem solved
Zippy#1111: Yeah the only issue is that it was b2b and our customer needed it to be running 100%.. and depending on a local isp and weather wasn't an option.
kurumuz#5695: lol no downtime is hard
bmk#1476: just run two servers in different places
kurumuz#5695: we kinda achieved it
bmk#1476: different isps
bmk#1476: if either one goes down start up the cloud instance
bmk#1476: still cheaper than 100% cloud
bmk#1476: also yeah not even google can achieve 100% uptime
kurumuz#5695: man our deployment so big and gets even bigger
kurumuz#5695: hard to cover 100% with alternatives
Zippy#1111: We already had that issue.. we needed to have a node running on some guys homegrown server because it was the only way for us to get access to the business data.. and it would shut down and not autostart when booting back up, or his server would just crash, and we would lose connection.
Zippy#1111: I don't want that to happen again lmao
chilli#5665: lol you've only achieved "no downtime" until you didn't
kurumuz#5695: as we run like 200b it will be practically impossible
bmk#1476: if you need help with like devops stuff you can ask me I guess
Zippy#1111: It wasn't a devops issue, it was a-- unreliable server issue.
Zippy#1111: But thanks π
bmk#1476: I was talking to kuru
Zippy#1111: Ahh ok sorry :peeka:
kurumuz#5695: yeah will keep you in my mind
bmk#1476: @kurumuz you're running everything off cloud GPUs rn right
kurumuz#5695: you do contracts?
kurumuz#5695: @bmk for now yes
bmk#1476: I bet you could save a lot by running some stuff off a GPU rig in your basement
kurumuz#5695: we will have inhouse cluster tho
bmk#1476: or a colo yeah
chilli#5665: how much do you guys pay
kurumuz#5695: dont think it makes sense to run it in turkey
kurumuz#5695: lol
bmk#1476: or buy a colo
kurumuz#5695: for what, monthly gpu costs?
chilli#5665: for contracts lol
choltz95#4641: im happy to donate hours on my iris 6000 graphics card
choltz95#4641: state of the art
chilli#5665: I somewhat doubt I'm legally allowed to do it
bmk#1476: you mean cause of noncompetes or whatever?
bmk#1476: hm
bmk#1476: ok that might be a problem
kurumuz#5695: i mean highly depends lol
kurumuz#5695: i do write most of the stuff myself but if we had to do some compiler stuff im not really good at that
bmk#1476: in that case ill probably be most helpful with like architecture stuff rather than implementing things
bmk#1476: also i might not be able to help after my start date because noncompetes
kurumuz#5695: io
bmk#1476: but that keeps getting pushed back because of visa bullshit
kurumuz#5695: you start at openai?
bmk#1476: well, contingent on the us govt giving me work auth, yeah
kurumuz#5695: neato
bmk#1476: which might not happen
bmk#1476: :harold:
bmk#1476: immigration is a fuck
kurumuz#5695: oh yeah sounds its kinda pita
kurumuz#5695: heard*
chilli#5665: we can just get married
kurumuz#5695: LOL
kurumuz#5695: @chilli im open to that
bmk#1476: marriage fraud time
chilli#5665: who said anything about fraud
gabriel_syme#3220: if it doesn't can you still put ex-OAI on your twitter?
p.s. I really hope you find a solution to that shit
bmk#1476: i have a call with the immigration lawyers later today for the zillionth time and i have a feeling it's not good news
bmk#1476: :withered:
Zippy#1111: Why usa people gotta be so mean. *am usa person*
gabriel_syme#3220: what if you opened the Canadian branch of OAI in Toronto
gabriel_syme#3220: kind of shocked there isn't one tbh
bmk#1476: "fill out a form I-37432743289523243432 and wait 5-7 business months to find out whether you got approved for the X-1 visa" statements dreamed up by the utterly Deranged
bmk#1476: well, i would need to move to toronto for that
gabriel_syme#3220: sounds good!
gabriel_syme#3220: I mean, I guess it's better than there? not sure
bmk#1476: the time zone difference with SF is bigger
bmk#1476: edmonton is nice because it's only 1 hour
Louis#0144: where are u working
bmk#1476: right now? nowhere
Louis#0144: wow @EricHallahan looks like u got an upgrade
Louis#0144: but a start date
Louis#0144: where?
gabriel_syme#3220: mfw when my time difference has been 6 hours for 3 years :wojak_despair: I can understand how that sucks tbh
bmk#1476: and it looks like im going to be working nowhere for quite some time
Louis#0144: sadge
bmk#1476: the fucking immigration system
bmk#1476: hate it
Louis#0144: imagine not being american
Louis#0144: i could never
bmk#1476: why cant we have a canada-us schengen area like agreement
Zippy#1111: I feel like they make it dumb and near impossible so that most people just give up.
bmk#1476: why do i have to jump through so many fucking hoops to go to the us
Louis#0144: see there was talk about like that for a while
Louis#0144: but the EU is failing
Louis#0144: so i doubt US/Canada would do it
bmk#1476: idk the schengen area seems pretty cool to me
Louis#0144: (Referring to brexit and other economical mishaps)
Louis#0144: (Greece being the braindead child for instance)
gabriel_syme#3220: the Eurozone is not failing tbh. EU, yes it has
Louis#0144: Yeah
bmk#1476: this is totally seperate from the schengen area tho
gabriel_syme#3220: But that's not because of the union, that helped countless people I guess. It's mostly big countries not really seeing it as a union
gabriel_syme#3220: I am shocked canada and US need visas and shit
gabriel_syme#3220: if I'm honest
Louis#0144: they wouldnt do a NA-zone if it didnt include something similar to the EU no?
Louis#0144: like its kinda a package deal
bmk#1476: economic policy is totally separate from immigration
Louis#0144: economic policy is like at least half foreign policy....
bmk#1476: especially betrween canada and the us
Louis#0144: especially when discussing the import and export of labor
bmk#1476: i meant fiscal policy
bmk#1476: fiscal policy is totally separate from immigration
Louis#0144: ok true
Louis#0144: ill give u that
bmk#1476: and the euro's biggest problem is it ties the fiscal policies of its members when they need different fiscal policies
bmk#1476: but there's not going to be a shared us canada currency anyways
bmk#1476: and nobody really cares about that
bmk#1476: the immigration is the big stickler here
bmk#1476: i want to be able to just like work in the us whenever i feel like it and not have to go through this absolute convoluted nightmare that is the us immigration system
gabriel_syme#3220: didn't it used to be much easier to go back and forth? and even work
bmk#1476: idk
bmk#1476: technically true as long as you relax the definition of "used to be"
bmk#1476: im sure nobody cared back in to 1800s
StellaAthena#3530: The US didn't even have a well-defined border with Canada for most of the 1800s
gabriel_syme#3220: I was thinking like idk 20 years ago. It may be my bad memory thinking I heard that people would cross easily from adjacent states.
someKindaBean#8471: doesn't NAFTA make it a little bit easier?
someKindaBean#8471: or is that only going from the US to work in Canada?
bmk#1476: a little easier; it would be *even harder* if i wasnt canadian
ProudNoob#5854: I was so excited when gpt3 opened up, but even with all the examples that are "click-ready", it's quite underwhelming...
ProudNoob#5854: even tasks it's finetuned / pretrained for I feel it's hardly doing better than gpt-j
ProudNoob#5854: does allow for a little more "trust" in not wasting time on too many examples just to make your intent clear, but that's easily fixed yourself through either rewritten examples (for fewshot examples good enough), or light finetuning
someKindaBean#8471: I think the better way to frame it is that GPT-J is almost at the GPT-3 level
StellaAthena#3530: How good is a RTX 6000?
bmk#1476: pretty good
bmk#1476: one of the best GPUs you can get your hands on
StellaAthena#3530: Is it the non-datacenter A100?
bmk#1476: no, the RTX 6000 is a datacenter gpu
bmk#1476: there is no non datacenter A100
alstroemeria313#1694: rtx 6000 is workstation.
alstroemeria313#1694: it is one gen back from A100
bmk#1476: you're allowed to run it in a datacenter though right?
alstroemeria313#1694: I... maybe
alstroemeria313#1694: I'm not entirely sure, datacrunch has A6000 (the RTX 6000 successor) in datacenters and lets you rent VM instances with them
alstroemeria313#1694: but yeah it's p good
StellaAthena#3530: Oh I see, it's part of a workstation
StellaAthena#3530: ```
2 x Intel Xeon Gold 6134 @ 3,2 GHZ x 16 Cores
128 GB of RAM
2 NVIDIA RTX 6000 Quadro
Cuda 11.4
```
bmk#1476: that's a pretty good workstation
StellaAthena#3530: Obviously you're not going to be doing anything *massive* on this, but it's quite good for DL in general?
alstroemeria313#1694: yes
bmk#1476: though I am chuckling a bit at the fact that the CUDA version is listed as an entire point while the ram has no details other than quantity
StellaAthena#3530: The person who sent me this doesn't know anything about DL
bmk#1476: but yeah it's a really good machine for personal DL stuff
bmk#1476: better than what lots of academic labs have probably
alstroemeria313#1694: it has 24GB of RAM
alstroemeria313#1694: per card
bmk#1476: which isn't that impressive anymore but still pretty good
alstroemeria313#1694: yeah
EricHallahan#1051: That seems pretty respectable.
bmk#1476: the a6000 has like 48 per card right
alstroemeria313#1694: yes
bmk#1476: wild stuff
alstroemeria313#1694: also the RTX 8000, same gen as the RTX 6000, was the 48GB version
StellaAthena#3530: A commercial client reached out to ask about whether we can do X Y and Z on their hardware and I said "IDK, nobody has ever told me what hardware you have"
bmk#1476: they can do basically all normal scale research
bmk#1476: or as we call it around here, puny model stuff
Louis#0144: is this at work
Louis#0144: or eleuther
Louis#0144: @StellaAthena
Louis#0144: if its work just assume its CPUs
Louis#0144: :berk:
StellaAthena#3530: @Louis This is a commercial client at my company
Louis#0144: I still find it so funny that they wanted to train a BERT on CPU
Louis#0144: lmaooo
StellaAthena#3530: That was for a different client that literally didn't have GPUs
bmk#1476: ~~also tell them that I can give them hardware advice all day but that I bill $200/hour~~
StellaAthena#3530: IDK why but I find the online comparisons of GPUs unreadable
Louis#0144: oh theyre awful
Louis#0144: lmao
bmk#1476: well you always have the option to pay me to help you interpret the comparisons
Louis#0144: use puget
Louis#0144: lambda labs has *no idea* how to sell a GPU
Louis#0144: its really funny
Louis#0144: autocorrect
kurumuz#5695: you just need to benchmark them haha
kurumuz#5695: i can give advice too, pretty much benchmarked every nvidia gpu that is not too old
StellaAthena#3530: Is anyone familiar with what PPC is? Someone is trying out GPT-NeoX and emailed me about version incompatibilities
```
Module Closest version available for PPC platforms
-------------------------------------------------------------------
pybind11==2.6.2 Not in any PPC repo
six PPC Anaconda
regex PPC Anaconda
numpy==1.20.2 PPC Anaconda, v1.20.1
mpi4py==3.0.3 Standard module, v3.0.3
wandb==0.10.28 Not in any PPC repo
einops==0.3.0 Not in any PPC repo
transformers==4.5.0 Open-ce repo, v3.5.1
tokenizers==0.10.2 Open-ce repo, v0.9.3
lm_dataformat==0.0.19 Not in any PPC repo
ftfy==6.0.1 PPC Anaconda, v5.5.0
--------------------------------------------------------------------
```
gollark#3909: PowerPC?
gollark#3909: You can still get computers with that. Very expensive workstations.
StellaAthena#3530: That's a type of chip right
StellaAthena#3530: This is an obscenely expensive workstation
gollark#3909: Yes, it's a CPU architecture.
StellaAthena#3530: and it can't download python packages via pip?
gollark#3909: It probably can, but not all of them will have been compiled for it, as it's not widely used.
gollark#3909: https://github.com/open-ce/open-ce mentions PowerPC support, which I think supports this interpretation.
StellaAthena#3530: Ugh. That's a headache
Louis#0144: why is someone trying to run neo on power pc
Louis#0144: lmaoooo
Louis#0144: wtf
StellaAthena#3530: Why does that surprise you @Louis
Louis#0144: Well power pc is super niche
Louis#0144: And last time I tried power pc nvidia drivers for PPC were like 90% unusable
Louis#0144: With like super minimal cuda support
StellaAthena#3530: wait
StellaAthena#3530: I just finished reading the email
StellaAthena#3530: They don't have access to Torch 1.8 either >.>
gollark#3909: They could probably compile everything (except working Nvidia drivers), it would just be slow and annoying.
bmk#1476: I didn't realize anyone still used PPC
gollark#3909: It's basically the only modern performant platform which you can run usably without proprietary firmware.
StellaAthena#3530: How many V100s do y'all think exist? OOM?
bmk#1476: why would you use PPC instead of x86 or arm in the modern era other than to support legacy software or something
kurumuz#5695: all i know is coreweave have a lot of them but most with super trash CPUs :berk:
bmk#1476: even arm is sometimes a pain
Kharr#7888: Server CPUs are $$$. About the same price as the GPUs
kurumuz#5695: @Kharr well one CPU can drive 4 GPUs easily
kurumuz#5695: more like 8 tho with epycs
oreo#2740: I'm trying to add more papers to my reading list as a way to procrastinate. Can I ask people to list their top 2-5 AI/NLP papers from 2021? Much appreciated!
Kharr#7888: https://arxiv.org/pdf/2106.06899.pdf
oreo#2740: ooh this is where all that top-k business came from... thanks!
alstroemeria313#1694: ugh do i need to do feature engineering to output audio from a neural net
Oleksii Bulygin#8016: ~~seven vaganias~~ 120mb l3 cache
bmk#1476: do you really need that much cache tho
bmk#1476: plus some xeons go up to 60mb cache
Oleksii Bulygin#8016: question is not whether you need
Oleksii Bulygin#8016: it's that if you do need it, then you have an answer
Oleksii Bulygin#8016: also they're said to be super reliable but idk how to measure that
gollark#3909: They also have stupidly high-throughput cores with 8-way SMT.
someKindaBean#8471: maybe? whatcha doing?
alstroemeria313#1694: i was trying to train an audio discrete VAE
alstroemeria313#1694: like for speech
someKindaBean#8471: oh yeah
someKindaBean#8471: so where are you at that you need to do feature engineering?
alstroemeria313#1694: i couldn't get it to work at all w/ more than six downsamplings
alstroemeria313#1694: and quality was bad
someKindaBean#8471: i mean, that doesn't sound super unsurprising. how much were you downsampling by?
alstroemeria313#1694: 64x
someKindaBean#8471: hahahaha, 64x in time domain each time? or split between time and amplitude?
alstroemeria313#1694: oh no
alstroemeria313#1694: 64x total
someKindaBean#8471: oh wow, i was thinking you were really compressing the hell out of that
someKindaBean#8471: when i've done more traditional audio downsampling and upsampling, you are going to lose some frequency content. doing it the most straightforward way, you just lose high frequency content
alstroemeria313#1694: no i used a conv1d layer to make it 64-128 channels first
alstroemeria313#1694: then did the downsamples
alstroemeria313#1694: with 2x conv1d residual blocks in between
xcodevn#9003: have anyone shared this?
https://ai.facebook.com/blog/textless-nlp-generating-expressive-speech-from-raw-audio
facebook generative spoken language model.
someKindaBean#8471: So downsampling by max pool?
alstroemeria313#1694: it was average pooling but yeah
EricHallahan#1051: We haven't discussed this here yet no.
alstroemeria313#1694: i saw it but haven't looked in detail
someKindaBean#8471: just guessing, but maybe there is some limit at which you can't recover enough to reconstruct it, and maybe 64x is that limit
EricHallahan#1051: Pretty cool that it works at all.
someKindaBean#8471: although i'd think that it would reconstruct something
alstroemeria313#1694: you mean like... 250 discrete latents per second
alstroemeria313#1694: and nine bits per latent
alstroemeria313#1694: = 2250 bits per second
alstroemeria313#1694: Any more downsampling decreases the bitrate to reconstruct from further
someKindaBean#8471: phone lines run about 8 khz and 8 bit, so 8 kbits per second
someKindaBean#8471: sooooo 2.25 k bits per second should sound substantially worse
someKindaBean#8471: i'd think you could get lower than that, but now i'm seeing what you mean about feature engineering
EricHallahan#1051: Not necessarily.
alstroemeria313#1694: @someKindaBean POTS does ~300-3300 hz
StellaAthena#3530: > It opens the door to a new era of textless NLP applications for potentially every language spoken on Earth - even those without significant text data sets.
That's one hell of a spin.
**Quick fact check:** how many languages have significant audio datasets but not significant text datasets?
alstroemeria313#1694: So 3000 hz bandwidth.
alstroemeria313#1694: If you assume eight bits you get 24kbit/s.
someKindaBean#8471: huh, ok. and you could probably push that 8 bits down to 6 and still have something understandable
alstroemeria313#1694: It's actually a bit more than that, if you remember what used to fit over phone lines in the dialup modem era.
bmk#1476: isnt "textless nlp" an incredibly roundabout way of saying "audio modelling"
alstroemeria313#1694: But not very much.
alstroemeria313#1694: speech modeling.
EricHallahan#1051: Perceived quality is not a linear function of bitrate.
someKindaBean#8471: hahah yeah, 56k dialup
bmk#1476: yeah thats what i meant
EricHallahan#1051: :yes:
bmk#1476: time to publish a paper on "imageless CV"
EricHallahan#1051: It is really awkward.
someKindaBean#8471: not linear for sure, but monotonically increasing maybe
alstroemeria313#1694: @someKindaBean so these new facebook things get like... 365-465 bits per second
someKindaBean#8471: CLIP talks to CLIP with no images?
|
EricHallahan#1051: There are a lot of ways to compress speech inefficiently.
alstroemeria313#1694: VQVAE got 800 bits per second and I couldn't even do as good as that.
alstroemeria313#1694: I wasn't conditioning on the speaker though
EricHallahan#1051: Sounds in the ballpark of codec2 450/450PWB.
alstroemeria313#1694: And I think all of these super compressing ones do that
someKindaBean#8471: that might help - fewer sounds/timbres to learn
alstroemeria313#1694: I was training a general speech model unconditional on the speaker.
EricHallahan#1051: But higher quality lol
someKindaBean#8471: i'm not sure what facebook thing you are referring to, but that's crazy to me
someKindaBean#8471: and very cool
someKindaBean#8471: this is also crazy cool
alstroemeria313#1694: the one that was just linked https://ai.facebook.com/blog/textless-nlp-generating-expressive-speech-from-raw-audio
someKindaBean#8471: ahh, thanks, i didn't read it yet
alstroemeria313#1694: i was using a wavenet decoder which is apparently not as good as like tacotron2
alstroemeria313#1694: well
alstroemeria313#1694: it was my own arch lol
alstroemeria313#1694: it just output logits like wavenet does
someKindaBean#8471: ok, nice
someKindaBean#8471: i really don't have experience with vocal audio, but it makes sense that if you restrict yourself to speech you can compress more
alstroemeria313#1694: i need some actual good method of outputting general waveforms, wavenet seems bad
alstroemeria313#1694: if i just output floats my model learns to output silence
alstroemeria313#1694: bc it can't actually model the audio well enough and that minimizes MSE in that case
alstroemeria313#1694: Can I use some other loss
someKindaBean#8471: didn't wavenet use cross entropy loss on the quantized levels?
alstroemeria313#1694: yes
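(A minimal sketch of that loss, assuming a model that emits 256 logits per audio sample; the shapes are illustrative:)

```python
import torch
import torch.nn.functional as F

batch, n_samples, n_classes = 8, 16000, 256
logits = torch.randn(batch, n_classes, n_samples)          # model output per sample
targets = torch.randint(0, n_classes, (batch, n_samples))  # mu-law codes in [0, 255]

# Cross-entropy over the 256 quantized levels, instead of MSE on raw floats.
loss = F.cross_entropy(logits, targets)
```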
xcodevn#9003: Wavenet is actually a very good vocoder (according to DeepMind's results). But you need a good conditional input.
xcodevn#9003: Tacotron2 used wavenet as a vocoder.
alstroemeria313#1694: ah.
someKindaBean#8471: my labmate was playing with MSE between mel spectrograms for a while
xcodevn#9003: wavenet converts: mel-spec => wave form
someKindaBean#8471: I'm not sure how successful he was with that
xcodevn#9003: while, tacotron2 converts text => mel-spec
gabriel_syme#3220: that's..NLP right? need to learn this new code
StellaAthena#3530: NLP != text. NLP = natural language. Text is one way that natural language is represented but it has never been the only way.
bmk#1476: NLP is basically synonymous with text at this point in the common vernacular though
StellaAthena#3530: In the past couple years transformers have gotten so hot that they've made text data the currently cool thing, but that's simply not true.
bmk#1476: idk, even before transformers
bmk#1476: if it's spoken natural language, it's just speech
bmk#1476: and ive never heard of anyone categorize speech recognition, synthesis, etc as NLP
bmk#1476: at best, NLP is *useful for* speech recognition
bmk#1476: but i wouldnt consider speech recognition as part of NLP
fengoku#4038: I'm new here so I guess I'll introduce myself? Name is Steven, currently a 2nd-year research master's student at Carnegie Mellon University (CMU) and previously an undergrad at the University of Waterloo. Been working on NLP research for a while and have published at major conferences such as EMNLP, ACL, and AAAI. My current interests are mainly in language generation, data augmentation (relevant podcast ep with Prof. Ed Hovy: https://www.youtube.com/watch?v=qmqyT_97Poc&ab_channel=GradientFlow), and semantics. Feel free to read more on my website: https://styfeng.github.io/
fengoku#4038: BTW (this may be interesting to y'all), I am leading the organization of a **controllable generation** (both for text and vision) workshop taking place at NeurIPS 2021 on December 13th. Paper submission deadline of Sept. 30th and demo submission deadline of Oct. 29th. Feel free to check it out or share with anybody who might be interested: https://ctrlgenworkshop.github.io/
EricHallahan#1051: Welcome to EleutherAI!
Kia#2550: Sounds amazing wow
StellaAthena#3530: @bmk Would you be surprised then to learn that "Speech" is explicitly listed as a topic area on the ACL and EMNLP websites, and that they accept around a dozen papers for orals every year that don't talk about text at all?
bmk#1476: they also list Vision as a topic area
bmk#1476: and Robotics
StellaAthena#3530: okay. I don't see much point in arguing about this so I'll stop
fengoku#4038: pitching into this convo (couldn't help myself) but yes speech is **absolutely** part of NLP lol
Louis#0144: https://tenor.com/view/thriller-michael-jackson-eating-popcorn-watching-gif-5577709
xcodevn#9003: I think this is an interesting point, speech is "natural language" but in a different mode compared to text.
fengoku#4038: NLP is not just text... a bit surprised NLP researchers could think it is text only lol
EricHallahan#1051: It absolutely is, but my opinion is that this is a really weird way to sell the paper.
xcodevn#9003: In some senses, speech is more *natural* to humans than text
StellaAthena#3530: @fengoku Welcome! Always exciting to get new faces around here
EricHallahan#1051: And I don't disagree, it is why I am personally Interested in the topic.
bmk#1476: now that i think about it.. generation isn't really a type of processing either
bmk#1476: so technically text generation isnt NLP either
bmk#1476: processing implies you're taking something as input
bmk#1476: which also implies that GANs arent CV either
bmk#1476: wait
guac#4716: NLG is basically just a sub-branch of NLP
bmk#1476: CV doesnt have processing in the name
bmk#1476: i take that back
bmk#1476: GANs do count as CV but text generation doesnt count as NLP
bmk#1476: but *controlled* text generation does
StellaAthena#3530: @fengoku What brings you to our corner of the internet / how did you hear about us?
fengoku#4038: thanks! my current interests are mainly in language generation, data augmentation, and semantics. specifically, the controllability and interpretability of language generation models, methods to incorporate and assess their commonsense reasoning capabilities, and how we can integrate more structured and linguistic information to enhance them. also did some work on dialogue agents and machine translation back in undergrad. you can read more on my website haha (basically this message is just paraphrased from it lol)
cfoster0#4356: (Don't be afraid to give and receive critique. Folks here are pretty *opinionated* but good natured, generally)
Louis#0144: Just don't offend the geese
fengoku#4038: mainly from @Louis π
Louis#0144: Lmao
Louis#0144: Ya I asked Steven a few times
Louis#0144: This time he joined though
StellaAthena#3530: How do you know her?
Louis#0144: Oh uh Steven's pfp is a kpop artist
Louis#0144: lol
Louis#0144: Just so we are clear
Louis#0144: LMAO
Louis#0144: We did our undergrad together
Louis#0144: Used to hangout in the data science club together a lot
bmk#1476: is kpop like the pop equivalent of k nearest neighbors
fengoku#4038: LOL it's bae suzy as my pfp oops
fengoku#4038: i wish i looked like her though LMAO
StellaAthena#3530: My apologies
fengoku#4038: been working my ass off for PhD apps, got 3 months before december deadlines π
Louis#0144: Me too
Louis#0144: Third time's the charm
Louis#0144: lol
Louis#0144: I think I have Mohit Iyyer as a backup though this time
bmk#1476: i hear grad admissions is insane these days
Louis#0144: It is absolutely insane
fengoku#4038: it's absolutely insane. especially for ML and competitive sub-areas of it like NLP
fengoku#4038: as in, crazy unreasonably fucked up insanely difficult
StellaAthena#3530: Controllability and interpretability of language generation models is something we are always excited about. If you have any thoughts on transformers in particular I would love to pick your brain. I'm currently building towards a project on that topic
Louis#0144: Five publications, four of which are first author, and I was still told I have a 50/50 chance
fengoku#4038: oooh nice
fengoku#4038: i think i have CMU itself as a backup LOL. i'm basically just going for stanford
Louis#0144: Oh jeez
Louis#0144: Uh good luck
Louis#0144: afaik Stanford NLP got like 1K+ applicants last year
Louis#0144: And took like 1 person
Louis#0144: lol
bmk#1476: wtf
cfoster0#4356: Wtf
Louis#0144: Maybe 2
fengoku#4038: sure! always happy to discuss π
bmk#1476: that's literally an order of magnitude worse than undergrad admissions
fengoku#4038: oh those numbers are wrong
bmk#1476: man
fengoku#4038: i spoke to people on the stanford adcom
Louis#0144: Chilli was discussing this last application round
fengoku#4038: i think 300 NLP people for 5 spots? a lot of people don't apply cause they feel like they have no chance (i'm talking about the PhD btw)
fengoku#4038: the 300 applicants are all strong though
Louis#0144: I made it to the last round of apps at UW last year π
fengoku#4038: as in, multiple publications, strong letters, working with famous profs/labs
Louis#0144: I still rly want to join Yejin Choi's lab
someKindaBean#8471: come to a nice mid-tier school, it's a lot easier to get in
bmk#1476: or consider industry
fengoku#4038: oh yeah she does great work lol
bmk#1476: industry is nice
fengoku#4038: UW is very competitive too, especially yejin choi's group
someKindaBean#8471: i miss industry and can't wait to finish this phd
Louis#0144: If I don't get in this year
Louis#0144: I'll just settle for google research
Louis#0144: or something
Louis#0144: I could get in np
Louis#0144: I already know people who work there
fengoku#4038: yeah fck trying more than 2-3 times
cfoster0#4356: bet
Louis#0144: LMAO
bmk#1476: why is it literally easier to get into industry
bmk#1476: everything's backwards
Louis#0144: I applied to Alan Black's lab at CMU last year
Louis#0144: He ghosted me
Louis#0144: Her and I chat a lot
Louis#0144: π€·ββοΈ
Louis#0144: Also I might have my own grant this time
cfoster0#4356: Academia's incentives aren't aligned towards taking as many people as feasible
cfoster0#4356: The opposite, largely
fengoku#4038: he ghosts everybody including his own students here, dont worry LMAO
Louis#0144: lmaooo
fengoku#4038: oh nice that's good
fengoku#4038: i think the main thing in terms of admissions to top schools
fengoku#4038: are connections and rec letters
fengoku#4038: specifically rec letters from famous/well-known people, which is kind of fucked up
fengoku#4038: cause almost nobody gets a chance to work with famous profs
Louis#0144: I have riedl, rogelio, and uh idk I need a third one
gabriel_syme#3220: My guess, thousands of positions vs a handful
Louis#0144: @StellaAthena write me a recc letter
fengoku#4038: all the stanford NLP admits recently got rec letters from famous professors. famous as in 80+ h-index lol
Louis#0144: Jkjk (not that your recc letter isn't of value)
gabriel_syme#3220: Is that the kind of famous no one knows about?
Louis#0144: My advisors h index is 40
gabriel_syme#3220: Smol
fengoku#4038: oh yeah not just h-index i don't mean it that way. but h-index is usually at least a semi-indicator
bmk#1476: i wonder how bad admissions is at uc berkeley
fengoku#4038: for NLP? bad cause it's basically only dan klein, marti hearst, and maybe david bamman
janus#0150: What do you think about a non-profit that tries to scale advisor/mentee teams? Assume there is funding. No classes, just research. Can this be done in a decentralized way and still incentivize research drive?
|
Louis#0144: Really? That's huge for NLP
Louis#0144: Lmao
bmk#1476: wait how much does admissions vary between fields
fengoku#4038: are u guys talking about 40 h-index?
someKindaBean#8471: is a low erdos number worth bragging about for admissions? /s
gabriel_syme#3220: I'm curious, do US people ever think abroad? There must be elite labs somewhere
Louis#0144: No
Louis#0144: :berk:
gabriel_syme#3220: Just shitposting
bmk#1476: the US is the only countrry
fengoku#4038: edinburgh and MILA have decent labs
bmk#1476: canada is just a state of the US
janus#0150: Depends how low. If you had an Erdos number of 0, you would be automatically admitted.
StellaAthena#3530: How low is yours?
bmk#1476: i wonder what the *highest* erdos number is
someKindaBean#8471: 6
someKindaBean#8471: that becomes a really hard problem to solve
Louis#0144: I think an Edinburg prof ripped off a paper I did
Louis#0144: And didn't cite me
Louis#0144: π
fengoku#4038: LMFAO?
someKindaBean#8471: so what's yours?
Louis#0144: Yeah half lmao
Louis#0144: I'll resolve it
Louis#0144: But like half his paper is useless
fengoku#4038: u should mail them
bmk#1476: 27
Louis#0144: Since he's restating what I did
Louis#0144: In my paper
Louis#0144: As novel
Louis#0144: Lol
janus#0150: Another anecdote: I had a very strong grad school app and went 0 for 12 (a few years ago)
gabriel_syme#3220: well it's the first time he said it
Louis#0144: I did
Louis#0144: Even my advisor took a double take
gabriel_syme#3220: imagine 12 schools not accepting you, thing has to be broken
bmk#1476: :harold:
StellaAthena#3530: 4
fengoku#4038: yeah it's fucked up. at least PhD admissions are
someKindaBean#8471: that's badass, although you are actually a mathematician, right?
fengoku#4038: like i said, need at least one 80+ h-index and connected/respected professor's letter for the top 4 schools (usually)
fengoku#4038: and, an actually strong letter LOL
fengoku#4038: so u need to get a famous/respected professor to actually like you a lot
fengoku#4038: it's beyond unrealistic
bmk#1476: i wonder how hard it is to get in for a not-so-hot field
StellaAthena#3530: Almost was 2, but *\*shrug\**
someKindaBean#8471: what's the least sexy ML adjacent field right now?
bmk#1476: like, i want to go do alignment under stuart russell or something
StellaAthena#3530: Yeah, studied combinatorics under one of his students in undergrad actually.
someKindaBean#8471: dang, that's sweet
StellaAthena#3530: Laci is bae
gabriel_syme#3220: Maybe everyone befriends chemistry professors or smth, they always seem to have high h index
someKindaBean#8471: i'm pretty happy with mine given that i'm in engineering
AI_WAIFU#2844: bioinformatics
gabriel_syme#3220: Design. Come on over please
gabriel_syme#3220: I can literally get PhDs for ppl at this point
Louis#0144: Storytelling
StellaAthena#3530: Real analysis
gabriel_syme#3220: There is actually one on CV and AEC (design and construction)
StellaAthena#3530: Laci's list of things he's most proud of on his website brags about having an entry in the Stargate Wikipedia, and he's easily the smartest person I've ever met.
gabriel_syme#3220: Cool, what domain?
StellaAthena#3530: He had a break through in a major problem in theoretical computer science a couple years ago. The last major paper on the topic was written by him.... in the 80s
fengoku#4038: hahaha
gabriel_syme#3220: I also think AI researchers should work more on interstitial spaces. But I'm biased towards praxis
StellaAthena#3530: Man took 30 years off from a problem, came back to it, and then blew people's minds in two years of hard work.
someKindaBean#8471: electrical and computer - i started in signal processing stuff and now i'm doing ML things
gabriel_syme#3220: Aha I see, still really close to CS
someKindaBean#8471: we are neighbors
ilovescience#3282: oh cool i have an erdos number of 5...
gabriel_syme#3220: Now that the discussion is around this topic, if anyone is in Europe and you like TU Delft, there is a great position for AI and Architecture. I can definitely link anyone interested.
StellaAthena#3530: I'm vaguely interested... except my girlfriend would kill me for taking a 100k pay cut to go back to school.
gabriel_syme#3220: Lol yeah, keep the 100k and get to the point of "researching for life" earlier
ilovescience#3282: btw you can check your erdos number here:
https://www.csauthors.net/distance/
fengoku#4038: https://cdn.discordapp.com/attachments/729741769738158194/885717303998689371/309c348026757c4bc428c7b63e835816.png
StellaAthena#3530: That's why I'm here lol
cfoster0#4356: Possibly?
StellaAthena#3530: You mean like EAI?
fengoku#4038: what's a good erdos number even?
StellaAthena#3530: 0
Louis#0144: I can very easily get an erdos # of 3 https://cdn.discordapp.com/attachments/729741769738158194/885717672308908032/image0.png
Louis#0144: Mayyyybe 2
Louis#0144: If I really push it
cfoster0#4356: *Wait, you're getting funding?* /s
fengoku#4038: is this more of a meme thing or actually useful? cause i dont see how this number means much sorry π
gabriel_syme#3220: I mean I probably have the biggest erdos number recorded
StellaAthena#3530: It's a game people play, like seven degrees of kevin bacon but for mathematicians
Louis#0144: It's a meme
Louis#0144: When Eleuther has funding we're making a goose lab
gabriel_syme#3220: No
gabriel_syme#3220: But we will buy some geese
gabriel_syme#3220: And coats
ilovescience#3282: i think erdos number is first
StellaAthena#3530: yeah, I get paid by my company to hang out here and give them the heads up on all the new tech. Duh
EricHallahan#1051: I haven't coauthored a paper nor have I published a paper, so π½.
gabriel_syme#3220: Dream job tbh
someKindaBean#8471: lol, i'm legitimately curious how many people are paid to be here
someKindaBean#8471: i figure there's at least a few
someKindaBean#8471: and i don't mean people who are here on their work hours for funsies
ilovescience#3282: so is that an infinite erdos number? π€
EricHallahan#1051: "Not I," said the chicken.
gabriel_syme#3220: I'm paid to do something else but instead I'm here. Does it count?
EricHallahan#1051: Infinity is not a number :zucc:.
gabriel_syme#3220: How dare you, cantor would never
Louis#0144: Most of us are paid to do something else but our employers encourage us to be here
Louis#0144: Well
Louis#0144: Most of us that are paid
Louis#0144: I know cohere likes me spending time here
Louis#0144: Lmao
Louis#0144: I bring a lot of Eleuther related value back
cfoster0#4356: Oh ok same
cfoster0#4356: This is kinda the ideal setup
dmayhem93#3202: I think professional development hours count towards the gooseposting so yeah
zphang#7252: doing the superior PHD
**P**osting **H**onk on **D**iscord
StellaAthena#3530: I was being sarcastic lol. But it's awesome that's the case for you assuming you're not shitposting
Louis#0144: Lmaooo
Louis#0144: Does Sam ask u often about what's going on in Eleuther
Louis#0144: Mark asks me occasionally
StellaAthena#3530: Fun fact: Natalie Portman has an Erdos Number of 5, which is smaller than her Bacon Number
EricHallahan#1051: π½
zphang#7252: I'm surprised her bacon number is that high
ilovescience#3282: she published something?
zphang#7252: she had an undergrad pub
zphang#7252: in... something. biology? psych?
Kia#2550: Isn't that you?
someKindaBean#8471: it's true, this is like drinking from a firehose of knowledge and geese
cfoster0#4356: Mostly not shitposting lol
ilovescience#3282: this is natalie portman's paper:
https://pubmed.ncbi.nlm.nih.gov/12202098/
ilovescience#3282: it's been cited 212 times so far
StellaAthena#3530: Yeah. She has a couple papers
EricHallahan#1051: "You must mean knowledge graph."
StellaAthena#3530: https://scholar.google.com/scholar?oi=gsb95&q=Hershlag%2C%20N.&lookup=0&hl=en
janus#0150: But EAI can't provide salaries for advisors and mentees so they can fully commit. And we're so decentralized and informal that many people will think it won't provide a credible enough signal for whatever career goals they have.
StellaAthena#3530: I've written 5 papers and gotten 45 citations for EAI work this calendar year.
janus#0150: Ideally something could serve as an alternative to academia. The existing institutional incentives could be dropped, and for better or worse also classes. (Of course, getting funding is the hard part)
StellaAthena#3530: No, I meant paying people to participate in EAI
AI_WAIFU#2844: I'm pretty sure we can do the advisor thing
Louis#0144: I think human funding is important in the long run
Louis#0144: π€·ββοΈ
Louis#0144: Maybe not yet
Louis#0144: But in a few years
AI_WAIFU#2844: For now we need to rely on people applying for grants
Louis#0144: Yeah
StellaAthena#3530: Speaking of our bright future, @Louis you have a theory portion of a paper to finish
StellaAthena#3530: π
StellaAthena#3530: And I need to figure out a hparam search strat
Pepe Le Spooder#0420: Anyone know if there's any Hosted Image upscalers?
Louis#0144: Yes
Louis#0144: I have time tmrw
Louis#0144: Sorry post op meds are killing me
Louis#0144: :berk:
StellaAthena#3530: Dude take care of yourself
StellaAthena#3530: I just figured if you had the bandwidth to shitpost you had the bandwidth to write. Maybe incorrectly tho
Parker#3197: yes, just google "image upscaler online"
Parker#3197: the first three links all worked for me
Pepe Le Spooder#0420: I ended up finding something better
Pepe Le Spooder#0420: Cupscale gui
Pepe Le Spooder#0420: https://github.com/n00mkrad/cupscale/releases
Parker#3197: that looks good too
Parker#3197: but that isn't online
Pepe Le Spooder#0420: yes
Pepe Le Spooder#0420: I have a rtx 3080 but just didnt feel like building my own model
Pepe Le Spooder#0420: I'm just kind of futzing about upscaling some of my game textures seeing if i can get them to look any better
gabriel_syme#3220: Interestingly I was part of an effort trying to do just that for Architecture. It's a place where we had weekly webinars on different innovative matters, sometimes from 'famous' experts. The idea was that this, collectively, could be a way to build an educational program (along with some collaboration with students). They were thinking to get accreditation, which you can get from practically any university anywhere, and then offer a program with much lower cost (and if possible free). Hasn't materialized yet but I do agree new forms of education will arise.
smallanimalfriend#4355: Anyone have strong opinions on going with Haiku over Flax? Searching through comment history here it seems like more people have gone with Haiku (e.g. Ben Wang, lucidrains), but it seems like Flax has more users in general (going by github stars, commits, forks, etc.), and I see that huggingface has a few Flax models in addition to TF and PT. I should probably just play around with both, but figured I'd fish for an opinion or two backed by some experience - in case there are later-stage regrets with one or the other API (though I imagine that they're both quite good)
gabriel_syme#3220: I have little insight on coding Jax, but the HF Flax implementations have been incredibly useful for me.
Teemochu#8740: that can be sexy if it's the right kind of story
zphang#7252: Haiku benefits from the nimbleness of DM, Flax benefits from the scale of Google. I hear it's not too hard to switch from one to the other anyway
xcodevn#9003: Never used Flax. However, I have strong opinions on Haiku. I think Deepmind has some of the best ML engineers in the world. Haiku is extremely well designed and tested.
b3ck1#8746: Any one here doing AI and craft beer brewing?
I used the GPT-Neo model from Huggingface to create a model that creates beer recipes - if anyone is interested take a look at https://beerai.net and let me know what you think π
StellaAthena#3530: @b3ck1 Have you brewed any of these?
b3ck1#8746: Not yet - used some recipes from the training dataset, but not yet generated ones. But will give it a try and let you know...
b3ck1#8746: takes 4-6 weeks for a beer to be ready for drinking...
b3ck1#8746: in parallel searching for ways to train bigger models - used the GPT-Neo 125M for this - the others did not fit into my local GPU...
StellaAthena#3530: Have you tried Google Colab? That should fit our 2.7B parameter model
b3ck1#8746: Yes, i did, but that did not work - maybe i should take a look again
GPU or TPU?
EricHallahan#1051: Ideally tune on TPU, it is significantly faster.
b3ck1#8746: ok, i think i was trying GPU - will try again, thanks!
cfoster0#4356: For GPU I think you've gotta turn the batch size down low and gradient accumulate instead
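A minimal sketch of that advice in plain PyTorch, assuming an HF-style `model` whose output exposes `.loss`, plus an existing `optimizer` and `dataloader` (the HF Trainer exposes the same idea through its gradient accumulation argument):
```python
# Micro-batches of 1, effective batch of 16 via gradient accumulation.
accum_steps = 16
optimizer.zero_grad()
for step, batch in enumerate(dataloader):
    loss = model(**batch).loss / accum_steps  # scale so summed grads average correctly
    loss.backward()                           # grads accumulate in p.grad across calls
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```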
b3ck1#8746: i was down to batch_size of 1 and still kernel broke π
b3ck1#8746: what are the approx. memory requirements for the 2.7B version?
StellaAthena#3530: Around 10 GB
b3ck1#8746: ok, that should even fit into my local card (have a 11GB 1080TI)
StellaAthena#3530: https://huggingface.co/EleutherAI/gpt-neo-2.7B/tree/main
StellaAthena#3530: Yes, it should
b3ck1#8746: thanks, will again take a look at the 2.7B version and try to use that one instead of the 125M version
b3ck1#8746: @cfoster0 just saw in the docs on the hunggingface website that there is a parameter for the gradient accumulation - thanks for the hint!
b3ck1#8746: tried to train on colab and local -> neither works. Always getting a "Resource exhausted" message
b3ck1#8746: so it seems that the 2.7B does not fit into GPU or TPU for finetuning
b3ck1#8746: at least not with the huggingface libs...
EricHallahan#1051: We really can't say much about the Flax implementation of GPT-Neo in HF as we weren't involved in that implementation at all.
Louis#0144: which is an issue in and of itself btw
Louis#0144: lol
Louis#0144: ~~i was quite unhappy that we werent involved at all~~
Louis#0144: :berk:
StellaAthena#3530: Why
EricHallahan#1051: Well I can't have an issue with it if I don't have an opinion.
Louis#0144: bc neo is cursed
Louis#0144: :berk:
Louis#0144: the local attn stuff is questionable
StellaAthena#3530: That has nothing to do with our involvement in the flax model
Louis#0144: oh i meant in general
Louis#0144: for HF models
StellaAthena#3530: 1. that's not the topic of conversation
2. we weren't excluded from the GPT-Neo implementation, it's just that nobody wanted to work on it
thrasher#7261: https://twitter.com/yoavgo/status/1436355802370125830
Zippy#1111: idk I've noticed that sometimes nvidia cards will get OOM errors ~1.5 gb before they actually hit the memory limit.
Zippy#1111: like-- with my 1080 I couldn't ever load or train models that used over 6.5gb despite it being a 8gb card, when the card was not being used for anything else at all, not even display out.
EricHallahan#1051: Well there will always be some memory overhead.
alstroemeria313#1694: having trouble understanding this https://en.wikipedia.org/wiki/MetropolisβHastings_algorithm
alstroemeria313#1694: I am trying to do stochastic gradient Langevin dynamics and have that part working
alstroemeria313#1694: But apparently you can use non-decreasing step size schedules with it if you do MH rejection steps too?
alstroemeria313#1694: So the 'proposal' is the SGLD step?
alstroemeria313#1694: so
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/885964710388846592/Screen_Shot_2021-09-10_at_12.07.25_PM.png
alstroemeria313#1694: P(x_t) = density at the current point. (I have this in the log domain ofc)
alstroemeria313#1694: x' = the candidate SGLD step. (I already have this working)
alstroemeria313#1694: g(x' | x_t) = ...what
alstroemeria313#1694: The density of sampling that particular x'?
alstroemeria313#1694: And ofc I can just get P(x') in the same way as I got P(x_t), by evaluating the log prob at x'.
alstroemeria313#1694: And the most puzzling one, g(x_t | x')
alstroemeria313#1694: That's the density of the reverse proposal distribution evaluated at x_t?
alstroemeria313#1694: Like I need to get the gradient of the log prob at x' for it.
alstroemeria313#1694: (And then reuse it if I accepted x'.)
inox#5400: proposal density is arbitrary but I think it has to be symmetric?
alstroemeria313#1694: symmetric...
inox#5400: hm idk actually I think I'm saying something dumb
alstroemeria313#1694: The symmetric case is easier
alstroemeria313#1694: This is the general case
inox#5400: I think usually you choose a density so $g(x_t|x')/g(x_t|x') = 1$
alstroemeria313#1694: you mean 1?
inox#5400: damn yes
alstroemeria313#1694: right, but
TeXit#0796: **hayley** https://cdn.discordapp.com/attachments/729741769738158194/885967078169927720/147351084480856065.png
alstroemeria313#1694: It isn't going to be?
alstroemeria313#1694: my proposal distribution is a Gaussian where the mean is x_t plus the gradient of log P(x_t) times some step size
alstroemeria313#1694: And the gradient of log P(x') is not going to be the correct thing to make it symmetric
inox#5400: oh yeah that goes beyond my textbook understanding of MCMC
inox#5400: ...HMC incorporates gradients
alstroemeria313#1694: I think the thing I am trying to do is some simplified version of HMC
alstroemeria313#1694: Metropolis Adjusted Langevin Algorithm
alstroemeria313#1694: wowww MALA gets slow if the step size is too high
alstroemeria313#1694: It eventually ends up in an area where it almost never accepts a proposal
alstroemeria313#1694: oh wait i got the acceptance criterion backward
alstroemeria313#1694: Eheh.
alstroemeria313#1694: Now it doesn't work at all really
alstroemeria313#1694: oh i think i got it
Kharr#7888: Are you running Windows 10? This was fixed in the winter 2020 update. Before this, all Windows versions would allocate ~ 20% of the GPU memory to the system and prevent things like PyTorch from accessing it. For a 24 GB RTX 3090 this was a huge 5GB. The latest updates even allow you to overflow into RAM from VRAM but it is not recommended since it is magnitudes slower than GPU memory.
Zippy#1111: Well that's strange because I'm using windows 11 and it's happening.
Zippy#1111: Or wait, you're saying that in order to access all gpu ram, I have to go back to using windows 2020?
Zippy#1111: :pain:
Kharr#7888: Windows 11 is brand new, it's possible they broke stuff. Windows 10 with all the updates will allow you full access to GPU memory
Zippy#1111: Interesting
Zippy#1111: But win 11 is so pretty :hawaiicry: I don't know if I can give it up.
Kharr#7888: Cost is only 20% gpu memory π
bmk#1476: just use Linux
Zippy#1111: I want to use linux.. and I am a linux dev, I just use windows for AI stuff because of the wsl ram issue, but for all other tasks I'm using wsl.
Kharr#7888: Can't have Windows in there if you want full performance
Zippy#1111: I just.. I love the windows 11 UI :hawaiicry:
bmk#1476: I don't know what wsl ram issue you're talking about
bmk#1476: all I know is that my ubuntu box works perfectly for ML
AI_WAIFU#2844: The SGLD step is a proposal distribution, so just compute the SGLD step at the start and destination points after that and then plug it into the MH algorithm.
MH works by constructing a markov chain with the target distribution as it's stationary distribution.
AI_WAIFU#2844: It does this by ensuring that the "flow" of probability mass going from point A to point B after 1 step is equal to the "flow" of probability mass going from B to A
AI_WAIFU#2844: nope, you just need to use the ratio of the proposals in each direction
AI_WAIFU#2844: You can infact use *any* proposal distribution you want
AI_WAIFU#2844: as long as it only depends on your current state
AI_WAIFU#2844: and you run it through MH
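Putting this exchange together, a minimal numpy sketch of the Metropolis-adjusted Langevin step, where `log_p` and `grad_log_p` are assumed callables supplied by the caller; the Gaussian normalizing constants cancel in the ratio because the forward and reverse proposals share the same covariance:
```python
import numpy as np

def mala_step(x, log_p, grad_log_p, eps, rng=None):
    rng = np.random.default_rng() if rng is None else rng

    # log density (up to a constant) of proposing b starting from a:
    # b ~ N(a + eps * grad_log_p(a), 2 * eps * I)
    def log_q(b, a):
        mean = a + eps * grad_log_p(a)
        return -np.sum((b - mean) ** 2) / (4 * eps)

    # Langevin/SGLD proposal
    x_new = x + eps * grad_log_p(x) + np.sqrt(2 * eps) * rng.standard_normal(x.shape)
    # MH acceptance: log [P(x') g(x_t | x')] - log [P(x_t) g(x' | x_t)]
    log_alpha = log_p(x_new) + log_q(x, x_new) - log_p(x) - log_q(x_new, x)
    if np.log(rng.uniform()) < log_alpha:
        return x_new  # accept
    return x          # reject: the chain stays where it is
```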
CarsonPoole#0640: As a general question, if you train a large model, will the training time decrease significantly if you freeze some significant portion of the model?
CarsonPoole#0640: obviously then your optimizer params for all of those frozen params can go away
CarsonPoole#0640: considering how much calculating gradients contributes to training time I'd naively assume it _could_?
CarsonPoole#0640: but not sure if anyone here is ever done this
AI_WAIFU#2844: I think I saw a paper that did that
AI_WAIFU#2844: got a notable speedup
CarsonPoole#0640: if you froze like 90+% of the model, would you reduce wall clock training time by 90%?
AI_WAIFU#2844: almost certainly not
wabi-sabi#5811: You could probably get pretty smart about when you freeze what if you knew enough
cfoster0#4356: You've still gotta forward pass
AI_WAIFU#2844: you lose 90% of your parameters, so that's gonna cost you
bmk#1476: you reduce wall clock by like 25% or something
wabi-sabi#5811: Freeze, then thaw, then freeze more, then thaw more, etc. Or freeze half the weights within a layer, idk. Lots of different imaginable things.
wabi-sabi#5811: Half within a layer probably would break stuff
bmk#1476: I mean you really can't save that much because amdahl
bmk#1476: plus you still need to backprop through everything after a non frozen layer anyways
bmk#1476: you save a little bit by not having to backprop to the actual params but that's probably not a ton
wabi-sabi#5811: Control variates result in like 90+% reductions in computing time for certain simulations in physics and finance, IIRC. I'm thinking of freezing as essentially choosing to use a coarser model and then augmenting it with a finer grained model later on.
wabi-sabi#5811: Bring back swarm algorithms in 2022 π
wabi-sabi#5811: If you did all the exact same computations, then Amdahl would kick in. But if you're instead fracturing the original problem into a bunch of related problems, it seems like you can maybe do the estimation much much faster.
Kharr#7888: The model learns slower, but not proportionally slower. The bigger the model the more it makes sense to do it. For example I tuned GPT-J using dynamic subsets of about 50% of the parameters per step. Going to 25% or 10% of the parameters slowed down learning too much per step.
Kharr#7888: related: https://arxiv.org/abs/2010.11859
CarsonPoole#0640: how did doing 50% change the wall clock training time? and what about when you tested 25% or 10%
Kharr#7888: About 25% faster overall. You save on memory and optimizer ops
CarsonPoole#0640: 25% faster when doing 50%? Did you check the time when you tried freezing more of the model?
Kharr#7888: Yes for 50% frozen, the model learns a bit slower but every step is faster. I don't recall the others, but the return got worse as the model learned slower and slower.
Kharr#7888: Check out the paper I linked. It reports # of epochs
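For concreteness, a hedged sketch of the simplest freezing setup in PyTorch; `model.transformer.h` assumes an HF GPT-2/GPT-J-style layer list, so the attribute path is an assumption to adjust per model:
```python
# Freeze everything, then unfreeze only the last few transformer blocks.
# The forward pass still runs at full cost; the savings come from optimizer
# state and from skipping gradient computation for the frozen weights.
def freeze_most_of(model, keep_last_n=2):
    for p in model.parameters():
        p.requires_grad = False
    for block in model.transformer.h[-keep_last_n:]:  # assumed GPT-style layout
        for p in block.parameters():
            p.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]  # pass these to the optimizer
```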
CarsonPoole#0640: thanks so much! π
Zippy#1111: So I'm curious.. how long did the pretraining for the large gpt-neo take?
EricHallahan#1051: 2.7B?
Zippy#1111: Yeah
Zippy#1111: wow 400000 steps.
EricHallahan#1051: Couple weeks IIRC?
Zippy#1111: That's pretty crazy.
Zippy#1111: Pretty weird to think that for AI, it might make more sense to buy a 3060 than a 3080, since the 3060 has more ram. The 3080 ram is faster, but for big transformer models, it'll give up sooner.
Pepe Le Spooder#0420: I'm seriously impressed with how much detail Cupscale can pull out of a 2d asset
Pepe Le Spooder#0420: https://cdn.discordapp.com/attachments/729741769738158194/886080670236037170/unknown.png
Pepe Le Spooder#0420: Its not perfect but jesus its a damn good job at guessing
EricHallahan#1051: Wait wtf is up with that product stack?
EricHallahan#1051: That makes zero sense.
Pepe Le Spooder#0420: Big brain nvidia Breaking the mould
Zippy#1111: It's just the higher bandwidth gddr6x vs gddr6-not-x
Pepe Le Spooder#0420: Which isnt really a just
Pepe Le Spooder#0420: gddr6x is superior in pretty much every way
Zippy#1111: Yeah, .. except when it comes to holding a nice big bert in memory.
gabriel_syme#3220: The 3060ti has less ram than the normal :berk:
Kharr#7888: get comfortable with fp16 training and the memory will be less of an issue
gabriel_syme#3220: Maybe gymnastics to avoid miners smh
Pepe Le Spooder#0420: I mean I wouldnt really be using Non ECC memory for something like ai anyways
Pepe Le Spooder#0420: Id be using like a tesla
random person#5234: Dont make a difference
random person#5234: ECC is a meme
random person#5234: Nevermind 6x is ECC
Zippy#1111: I have a 1080 so I'm fine for nowiish, although most of the neat models I want to try end up making my gpu commit sudoku.
random person#5234: If you actually read the spec sheet on micron for it
random person#5234: 6x does ECC
Pepe Le Spooder#0420: Mmmmhmmm But not always
gabriel_syme#3220: Wait is it that big of a difference?
Pepe Le Spooder#0420: you have to install the right drivers and bios
gabriel_syme#3220: Friendship finished with 3060 now 4080 super is my friend
Pepe Le Spooder#0420: since ecc is using a portion for redundancy
Zippy#1111: oh... it's about 46% increase in bandwidth :overfloosh:
Pepe Le Spooder#0420: Yeah
Pepe Le Spooder#0420: its actually insane coupled with pcie 4
gabriel_syme#3220: Yeah but if you can't load the model..
Zippy#1111: :Kek:
Pepe Le Spooder#0420: Oh forsure your still gonna need like 3 cards to load most good models
gabriel_syme#3220: 3 cards? What do you think I am a bank robber
Zippy#1111: and I don't think you can sli a 3060?
Pepe Le Spooder#0420: Not sli per se
Pepe Le Spooder#0420: but you can run them in conjunction
gabriel_syme#3220: Nvidia routinely cuts sli on these intermediate cards because they can rival bigger ones
gabriel_syme#3220: I remember the same with 1060
Pepe Le Spooder#0420: Yeah its more of a Studio driver registry mod
Zippy#1111: I feel like the options are either a 3060, or 3090 LOL
random person#5234: Its literally on the spec sheet
gabriel_syme#3220: My option will be 4080
random person#5234: 6x needs ECC to maintain signal integrity
random person#5234: 6x also runs hot as hell
Pepe Le Spooder#0420: Yes but its not always toggled on for the whole memory bank
EricHallahan#1051: My option will be TPU. :ultrazucc:
Pepe Le Spooder#0420: ecc is inherently slower because of the redundancy checking
random person#5234: My 3090 toss its vram on the back
random person#5234: Ampere isnt bottlenecked by memory bandwidth that bad
Zippy#1111: Yes I want to buy a google. How much does a google cost?
gabriel_syme#3220: I need a tpu at home tbh
Pepe Le Spooder#0420: Kek
Pepe Le Spooder#0420: Yes a google cloud server plz
gabriel_syme#3220: Just a v3-8 nothing much
Pepe Le Spooder#0420: I'll take a node computing set up with that also
gabriel_syme#3220: I mean they will throw them out eventually no
Zippy#1111: I wouldn't mind a nvidia super pod with a100's
gabriel_syme#3220: Just give them to researchers
EricHallahan#1051: Me: Can I have **TPUs**?
Mom: No, we have **TPUs** at home.
**TPUs** at home: https://cdn.discordapp.com/attachments/729741769738158194/886083334336311296/unknown.png
Pepe Le Spooder#0420: KEK
Pepe Le Spooder#0420: China ASIC ILLEGAL
Zippy#1111: Yes I would like to buy one.
gabriel_syme#3220: I'm 100% pushing for a DGX at work
gabriel_syme#3220: It's hard to convince AEC companies
Pepe Le Spooder#0420: I'm still running a ibm x3400 π
Zippy#1111: I want a full bagel. A whole nvidia bagel. https://cdn.discordapp.com/attachments/729741769738158194/886083661353607218/data-center-dgx-ai-leadership-ai-data-center-2c50-D.png
Zippy#1111: One of those ... idk I want to call them bagels.
gabriel_syme#3220: That empty space
Pepe Le Spooder#0420: I got one of these
Pepe Le Spooder#0420: https://cdn.discordapp.com/attachments/729741769738158194/886083838072217610/iu.png
Pepe Le Spooder#0420: https://cdn.discordapp.com/attachments/729741769738158194/886083858695610368/iu.png
gabriel_syme#3220: But seriously
Pepe Le Spooder#0420: I use what i got :Laugh:
gabriel_syme#3220: How many regretting not buying the 3090 early on or smth. Or even cheating out on a ti on my end
Pepe Le Spooder#0420: Oh frig i paid way too much for my 3080
gabriel_syme#3220: Cheaping*
Pepe Le Spooder#0420: pny 3080
Pepe Le Spooder#0420: 1500 cad
Zippy#1111: wait u have a 3090 in this thing? :Kek:
Pepe Le Spooder#0420: Nooo
gabriel_syme#3220: Tbf with TPUs I would just be playing games on them
gabriel_syme#3220: I'm addicted to tpus rn
Pepe Le Spooder#0420: Thats in my main desktop
Pepe Le Spooder#0420: that uses like 4 quadros
gabriel_syme#3220: Stop mining
Pepe Le Spooder#0420: Kek Not mining doing cad work
gabriel_syme#3220: Ah nice
gabriel_syme#3220: Wait engineer?
gabriel_syme#3220: What domain?
Pepe Le Spooder#0420: I do Mostly 3d printing / Sls / Cnc
Zippy#1111: okay I didn't really want to say cuz idk I feel embarrased I guess.. I got a 3090 today :excited: .. I watched one of those youtube videos where they shout alerts about when cards come in stock and bought one for 2k.
gabriel_syme#3220: Oh nice!
gabriel_syme#3220: We are..close but not entirely. I'm in building design. Off site construction really bringing those things together
random person#5234: I mostly play csgo on my 3090
random person#5234: Its great
Pepe Le Spooder#0420: I use that for flow tests and tensile tests
Zippy#1111: If you can't beat the scalpers, join them.
gabriel_syme#3220: Those communities are actually anti acalpwrs
random person#5234: Lol abacus or ansys
gabriel_syme#3220: Anti scalpers
Pepe Le Spooder#0420: Two intel xeons and 4 quadros are still better than 1 ryzen and a 3080
gabriel_syme#3220: Fck ansys though. So expensive
gabriel_syme#3220: I did a lot of cfd at work, full on openfoam lol
gabriel_syme#3220: The meshing software ppl were pitching on me costs 10k dollars per year. Like wtf
Pepe Le Spooder#0420: I use autocad flow design still and a couple others
Zippy#1111: Oh weird. I kind of figured that's where all the lazy scalpers come and sit watching cartoons and wait for the alarm to go off and then click the button in chat because it literally puts the thing in your cart and if you have a card already set up you can just click "check out now" and you bought it.. like literally about 10 seconds, and then it's out of stock ~2 minutes later.
Pepe Le Spooder#0420: I ended up having someone that ran a shop put one aside for me
Pepe Le Spooder#0420: still paid oem price from pny
Pepe Le Spooder#0420: pny overcharged like hell in the beginning
gabriel_syme#3220: Yeah I guess for engineering you need something more comprehensive. I was doing natural ventilation and urban studies mostly
random person#5234: No idea
random person#5234: I dont do meche work anymore
gabriel_syme#3220: Knowing someone in a shop is important I feel. The good shop here can get cards but I doubt I can get in line
Pepe Le Spooder#0420: But yeah I have a Commercial Subscription to Autocad , And a couple other series
Pepe Le Spooder#0420: Grandfathers business account
random person#5234: I got a 3090 and cards for friends easily through retail.
gabriel_syme#3220: Plus I need to buy a whole rig lol
random person#5234: Easier than you think
Zippy#1111: I was doing process automation until the parent company of the company we were doing business with said "nope ur not allowed to hire these people because they are third party". So yeah.
Kharr#7888: Nah, they use auto scripts and go do something else. If they're sitting there watching a notification feed they're not real scalpers.
gabriel_syme#3220: I'm in malaysia
Pepe Le Spooder#0420: Yeah i asked around til i found a person on discord
random person#5234: Hmm maybe different idk
Zippy#1111: Yeah could be.
gabriel_syme#3220: The place where a huge number of circuits are built but we can't buy them
EricHallahan#1051: Maybe this discussion should go to #off-topic?
gabriel_syme#3220: Oops ye my bad
EricHallahan#1051: It seems to have veered off pretty quickly.
Zippy#1111: I actually got some automation scripts to work but all of the vendors pretty much immediately stopped responding and would time out. aka in order to be a real scalper you need vps's & vpns like all over the world so they don't ever figure out you're a single person.
Zippy#1111: Oh.. sorry.
gabriel_syme#3220: Don't worry it happens. Reining it in fast is a great sign of a community :)
Zippy#1111: π
Teemochu#8740: It is, I can confirm through my overclocking... it slows down then hardcrashes with basically no in between as clock is increased
Teemochu#8740: which is what ECC does (non-ECC will start erroring instead of slowing down)
nev#4905: cool
jordiae#4107: https://arxiv.org/pdf/2109.00301.pdf
mkualquiera#3484: informer
mkualquiera#3484: lmao
Louis#0144: Hey guys I'm looking for more research assistants in #carp to work on prompt tuning carp.
It should be a relatively easy paper, maybe a month turn around time. Not a speedrun. We already have a few people who are interested in working on it but I wanted to get the ball rolling today
Louis#0144: The idea would be to prompt tune carp to boost a classifier
Louis#0144: Would be an easy but good publication if anyone wants one
Louis#0144: Maybe I should add it to the interp board
nev#4905: π
StellaAthena#3530: It's a general project board and yes you should add projects to it
cfoster0#4356: Anyone tried out prefix tuning yet (as opposed to prompt tuning)?
Louis#0144: Prefix tuning carp?
Louis#0144: Not yet
cfoster0#4356: No, just in general
gabriel_syme#3220: both are next on my list after finetuning J, which ends my finetuning experiments I guess
Dashiell#8739: @Louis I'm available & interested to help w/ prompt tuning carp π
zphang#7252: depends on what you mean by either
cfoster0#4356: I guess by prefix tuning I mean learnable embeddings prefixed at every layer instead of just at the first layer+carried forward
zphang#7252: ah I see
Sid#2121: do you train a separate prefix embedding for each layer, or is it shared + prefixed at every layer?
alstroemeria313#1694: i thought it was separate
Sid#2121: might as well just use adapters
Sid#2121: i guess prefix tuning is a little more lightweight
cfoster0#4356: Possibly, yeah
cfoster0#4356: Not having to change any weights is kinda nice
Sid#2121: you don't have to change weights for adapters?
Sid#2121: of the model, i mean
cfoster0#4356: I guess it depends on what you consider weights
cfoster0#4356: For some reason I have adapters mentally filed as "adding extra layers with weights" but I don't have prefix tuning filed the same way
cfoster0#4356: I think there might be good way to do prefix tuning within HF transformers without changing the underlying code, just using `past_key_values` π€
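A rough sketch of that `past_key_values` idea, assuming GPT-2-style cache shapes of `(batch, heads, seq, head_dim)`; the shapes, init scale, and prefix length here are assumptions, not the paper's recipe:
```python
import torch
import torch.nn as nn

class KVPrefix(nn.Module):
    def __init__(self, n_layers, n_heads, head_dim, prefix_len=10):
        super().__init__()
        # One learned (key, value) pair per layer, shared across the batch.
        self.kv = nn.Parameter(
            torch.randn(n_layers, 2, n_heads, prefix_len, head_dim) * 0.02)

    def forward(self, batch_size):
        # -> tuple of n_layers (k, v) pairs, each (batch, heads, prefix_len, head_dim)
        return tuple(
            (layer[0].expand(batch_size, -1, -1, -1),
             layer[1].expand(batch_size, -1, -1, -1))
            for layer in self.kv)

# Usage sketch: freeze the model, train only the prefix, and remember the
# attention mask has to cover the prefix positions as well as the real tokens.
```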
Sid#2121: adapters are pretty easy to implement and work really well ime
Sid#2121: prefix tuning feels a bit hacky
Sid#2121: i mean at the end of the day you're freezing the model and tuning some new weights for either
Sid#2121: even if you tell yourself it's an embedding π
cfoster0#4356: The paper is definitely a bit hacky. Curious why doing it the straightforward way was unstable for them
gabriel_syme#3220: does prefix tuning require a whole finetuning run like normal? I'm running nostalgebraist's adapters now and it feels like a normal finetuning in scope, and especially time required. I guess that could be due to J, the adapter layers seem to be 500m parameters lol
Sid#2121: what does that mean exactly? "a whole finetuning run" can be of arbitrary length depending how much data you have
Sid#2121: the main advantage to adapters is you don't need to store a whole set of the model's parameters for every task you fine tune on
Sid#2121: you can also control the number of parameters to use by changing the downsampling ratio of adapters
Sid#2121: what are you using rn? 500m is a lot
gabriel_syme#3220: yeah badly phrased, just woke up. I meant time and compute similar to if you were simply finetuning a model on the same data
zphang#7252: adapters are even more hacky lol IMO
zphang#7252: especially code wise
gabriel_syme#3220: also, I'm pretty clueless about adapters, just using the code rather than understanding it. The model seemed to be 6.5b parameters at the start, vs 6b (It's a GPT-J)
zphang#7252: they're basically little modules injected in every transformer layer
zphang#7252: and you only tune those
Sid#2121: just replace your mlp layers with `nn.Sequential(original_mlp, adapter_mlp)`
Sid#2121: it's like a few loc Β―\_(γ)_/Β―
Sid#2121: I'm also tuning j with adapters a bit, and it's only ~200m extra params. I found the adapters after the attention layer didn't help much
zphang#7252: it does mean you need to fork/modify the model tho. I guess you'd have to do something similar for prefix tuning, but not prompt tuning
Sid#2121: so i just do them after the mlp
Sid#2121: with a downsample factor of 4
zphang#7252: and then bitfit broke my brain
Sid#2121: ```python
import torch.nn as nn  # `Adapter` is a module defined elsewhere


def add_adapters(
    model,
    hidden_size: int,
    adapter: nn.Module = Adapter,
    downsample_factor: int = 4,
    transformer_attr: str = "transformer",
    ff_attr: str = "mlp",
):
    """
    Adds an adapter layer to `model`
    """
    layers = getattr(model, transformer_attr)
    n_layers = len(layers)
    for l in range(n_layers):
        mlp = getattr(layers[l], ff_attr)
        setattr(
            layers[l],
            ff_attr,
            nn.Sequential(
                *[
                    mlp,
                    adapter(dim=hidden_size, downsample_factor=downsample_factor),
                ]
            ),
        )
    return model
```
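And a hedged sketch of what the `Adapter` referenced above might look like, if it follows the usual bottleneck recipe (down-project, nonlinearity, up-project, residual); the details of the actual module are an assumption:
```python
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim: int, downsample_factor: int = 4):
        super().__init__()
        # Bottleneck: shrink by `downsample_factor`, nonlinearity, project back up.
        self.net = nn.Sequential(
            nn.Linear(dim, dim // downsample_factor),
            nn.GELU(),
            nn.Linear(dim // downsample_factor, dim),
        )

    def forward(self, x):
        return x + self.net(x)  # residual connection around the bottleneck
```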
Sid#2121: wait what's bitfit
zphang#7252: basically: finetune only the bias terms
zphang#7252: somehow gets comparable performance to full fine-tuning ???
Sid#2121: :thinkies:
Sid#2121: damn i missed that one
Sid#2121: i'll try that out soon
zphang#7252: https://arxiv.org/abs/2106.10199
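A minimal BitFit sketch, assuming HF-style parameter names that end in `bias`:
```python
# Freeze everything except bias terms, then finetune as usual.
def apply_bitfit(model):
    trainable = []
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")
        if param.requires_grad:
            trainable.append(param)
    return trainable  # hand just these to the optimizer
```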
gabriel_syme#3220: ~~bias is all you need~~
cfoster0#4356: idk why I didn't just think to use setattr
zphang#7252: getattr and setattr always feel dirty and like cheating
bmk#1476: :harold:
bmk#1476: my code abuses getattr and setattr a lot
Sid#2121: i'm a setattr stan
zphang#7252: I'm not saying it's bad, I'm saying it feels bad
Sid#2121: that's why it's so good
bmk#1476: I recently wrote some code that overrides `__getattribute__` (the one that always gets called) to do an incredibly hacky thing where it calls object.getattr or something on itself to avoid an infinite loop and then calls getattr on super()
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/886397338816491540/unknown.png
Sid#2121: nooo
Sid#2121: surely there is a better way to do this lol
bmk#1476: there probably is
Sid#2121: this paper has massive "trying to fill the wordcount on your essay" energy
zphang#7252: You should see the original draft that they fired off to meet an anonymity deadline https://twitter.com/yoavgo/status/1344769789328306176
zphang#7252: I mean there's not much to add to it given it's just an empirical observation
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/886399720996634674/image.png
gabriel_syme#3220: wait, how do you post if it's anon
Sid#2121: posted just before the anon period started i guess
gabriel_syme#3220: oh, lol
zphang#7252: yup
janus#0150: Lol
Zippy#1111: Yes this makes sense to me. :blaze: https://cdn.discordapp.com/attachments/729741769738158194/886417425128636496/unknown.png
Zippy#1111: I apologize for the very stupid response. I'm .. I have a horrible sense of humor. Stupid things make me laugh.
Kia#2550: It's just a BERT paper I supposed
ilovescience#3282: I was wondering whether it was loading or something lol
sweg#8920: does anyone know any AR models with param counts in the ~300M ballpark?
cfoster0#4356: GPT-2 medium?
cfoster0#4356: You should probably use the ones they trained at the Stanford CRFM here, I've heard they're good https://github.com/stanford-crfm/mistral
EricHallahan#1051: GPT-Neo 350M?
EricHallahan#1051: Did we ever publish that again?
sweg#8920: oh 125M is on HF
sweg#8920: i think that works cause mainly i wanted to compare carp with AR vs MLM
sweg#8920: and i have 100M MLM to compare against
sweg#8920: thx
gabriel_syme#3220: not yet I think although I'd love to use it π
EricHallahan#1051: That was a rhetorical question.
gabriel_syme#3220: I did not realize
Oleksii Bulygin#8016: peak python
Zippy#1111: *woah* so it will lie.. if you ask whether it has been downloaded, it will say that has been downloaded, even though it may not have been downloaded before the caller asked whether it was downloaded :overfloosh: .. sounds like me when someone asks whether my room is clean.
Oleksii Bulygin#8016: or as they say -- Maybe monad
Zippy#1111: yes
alstroemeria313#1694: yeah he might want to except the "has been downloaded" attribute too?
alstroemeria313#1694: from autodownloading it when it is touched.
bmk#1476: the has been downloaded attribute exists soley for getattribute
alstroemeria313#1694: ah
alstroemeria313#1694: oh right, is private
Zippy#1111: Ah ok that makes more sense now.
Oleksii Bulygin#8016: but what for? lazy downloading?
bmk#1476: yeah
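Since the actual code only exists in the screenshot, here is a hedged reconstruction of the pattern being described: route every attribute read through `__getattribute__`, and do the internal lookups with `object.__getattribute__` so the check itself can't recurse; the class and method names are made up for illustration:
```python
class LazyResource:
    def __init__(self, url):
        self._url = url
        self._downloaded = False

    def _download(self):
        ...  # fetch self._url and populate attributes (omitted)

    def __getattribute__(self, name):
        # Private names bypass the check; any other access triggers the download
        # exactly once. The flag is set first so _download can't re-trigger it.
        if not name.startswith("_") and not object.__getattribute__(self, "_downloaded"):
            object.__setattr__(self, "_downloaded", True)
            object.__getattribute__(self, "_download")()
        return object.__getattribute__(self, name)
```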
3.14#4598: Hi people. I'm Pi, just another DL Engineer here. I would like to get involved and help EleutherAI however I can. How can I help?
AI_WAIFU#2844: Hello, we've got a whole list of projects that we don't have time for. You can take a look a them to see if there's anything that interests you. Otherwise I recommend just hanging around a bit and getting to know what other people are working on. https://github.com/EleutherAI/project-menu/projects/1
Alex Havrilla#6435: Hi all! Same as @3.14 . Really excited to familiarize myself and get involved
EricHallahan#1051: Welcome!
StellaAthena#3530: It's not DL but if you wouldn't mind putting together some scraping code I have a website I would like to get scraped. Basically I need a table of all papers published in *CL venues in 2019, 2020, and 2021. Each datum should read `(paper name, venue name, year, type of paper)` where `type of paper` is "long paper" "short paper" or similar. Sometimes it will be none.
The data can be found here, I just need it scraped and stored in a csv: https://aclanthology.org/
StellaAthena#3530: @3.14 @Alex Havrilla Skimming the taskboard, these look like impactful but easy lift projects:
https://github.com/EleutherAI/project-menu/issues/25
https://github.com/EleutherAI/project-menu/issues/4
https://github.com/EleutherAI/project-menu/issues/10
Parker#3197: https://github.com/acl-org/acl-anthology/tree/master/data/xml
Parker#3197: might be helpful for that
Parker#3197: https://raw.githubusercontent.com/acl-org/acl-anthology/master/data/xml/2020.acl.xml
Parker#3197: which has the links on their website
Parker#3197: <url hash="ff986e52">2020.acl-main.1</url>
Parker#3197: it also looks like just creating a mirror downloads all the pdfs from their website (by following the instructions at the repository root)
StellaAthena#3530: huh, I didn't know that. Thanks @Parker, that does make it quite straight forward
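A hedged sketch of that scrape against the raw XML Parker linked; the tag and attribute names are assumptions from skimming one file, so check them against the actual schema in the repo:
```python
import csv
import urllib.request
import xml.etree.ElementTree as ET

# One volume file; other years/venues are just more URLs of the same shape.
url = "https://raw.githubusercontent.com/acl-org/acl-anthology/master/data/xml/2020.acl.xml"
root = ET.fromstring(urllib.request.urlopen(url).read())

with open("acl_2020.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["paper name", "venue name", "year", "type of paper"])
    for volume in root.iter("volume"):
        paper_type = volume.attrib.get("id", "")  # e.g. "main" vs "demos" (assumed)
        for paper in volume.iter("paper"):
            title = paper.find("title")
            if title is not None:  # skip frontmatter entries without titles
                # itertext() flattens markup like <fixed-case> inside titles
                writer.writerow(["".join(title.itertext()), "ACL", 2020, paper_type])
```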
someKindaBean#8471: this isn't anything new, but it's a really good blogpost (by someone else) on audio generation that I felt like sharing. https://benanne.github.io/2020/03/24/audio-generation.html
edit: looks like it was posted before, sorry
EricHallahan#1051: Do you know what hasn't been posted here before?
This concept code that I put together which generates token embeddings from raw codec2 3200 frames.```python
import numpy as np

# Open and read headerless codec2 3200 binary
with open('codec2_3200_natural.bin', 'rb') as f:
    c2data = np.fromfile(f, dtype=np.uint64, count=-1)
# Utility variables for unpacking
packing = np.array([1,1,5,7,*((5,)*10)],dtype=np.uint8)
shifts = np.uint8(64)-np.cumsum(packing,dtype=np.uint8)
masks = np.array([0x1,0x1,0x1f,0x7f,*((0x1f,)*10)],dtype=np.uint64)<<shifts
# Unpack codec2 frames
unpacked_c2data = (c2data.reshape(-1,1)&masks.reshape(1,-1))>>shifts.reshape(1,-1)
# Placeholder embedding matrixes
embed_lsp = np.random.normal(0,1,(32,20))
embed_Wo = np.random.normal(0,1,(128,32))
embed_energy = np.random.normal(0,1,(32,20))
embed_v = np.random.normal(0,1,(2,2))
# Construct embeddings from frame data
# Creates an array of shape (n_seq, 256)
np.concatenate((embed_v[unpacked_c2data[:,0]].reshape(-1,2),
embed_v[unpacked_c2data[:,1]].reshape(-1,2),
embed_energy[unpacked_c2data[:,2]].reshape(-1,20),
embed_Wo[unpacked_c2data[:,3]].reshape(-1,32),
embed_lsp[unpacked_c2data[:,4:]].reshape(-1,200)),axis=-1)```
someKindaBean#8471: That's pretty spiffy
EricHallahan#1051: Don't know when I'll get around to using it. :berk:
Zippy#1111: That feel... https://cdn.discordapp.com/attachments/729741769738158194/886793557527113788/unknown.png
Louis#0144: TFW the bass drops
gabriel_syme#3220: that's the model doing the mic drop
Gurkenglas#7362: Are there optimizers that use the second derivative to calculate how far to jump?
alstroemeria313#1694: yes
alstroemeria313#1694: They are not used much in deep learning bc stochastic gradients tend to mess the most common ones up. there are stochastic second order optimizers but they are rarely consistently better than Adam on deep learning stuff and are often slower in wall clock time.
alstroemeria313#1694: but like L-BFGS for instance.
Gurkenglas#7362: because, as the-book tells us, deeper networks have larger higher-order derivatives?
alstroemeria313#1694: i'm not entirely sure what the reason is
alstroemeria313#1694: you would think having large higher-order derivatives would make the second-order ones *better*...
alstroemeria313#1694: actually. isn't the thing that matters the ratio of the largest eigenvalue of the Hessian to the smallest?
Gurkenglas#7362: why so
alstroemeria313#1694: (or does that only hold for convex functions i.e. Hessian is spd)
alstroemeria313#1694: @Gurkenglas https://math.stackexchange.com/questions/2285282/relating-condition-number-of-hessian-to-the-rate-of-convergence
alstroemeria313#1694: so second-order methods use the inverse of the Hessian, or some approximation or modification to it, as a preconditioner.
Gurkenglas#7362: ah so this (indirectly) measures how many directions at a time you can rule out by taking the second derivative into account
alstroemeria313#1694: well if you multiply by the inverse of the Hessian you cancel out the differences in eigenvalues
alstroemeria313#1694: (note that this makes the iteration attracted to maxima and saddle points)
alstroemeria313#1694: (i.e. it is Newton's method, it finds zeros of the gradient in general)
alstroemeria313#1694: (You can fix this by multiplying by the inverse of the matrix absolute value of the Hessian i.e. do negative Newton's method in directions of negative curvature, this makes it only attracted to minima)
alstroemeria313#1694: (BFGS and L-BFGS fix it by building an approximated Hessian that is guaranteed spd)
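A small numpy sketch of that last fix, a Newton-style step preconditioned by the matrix absolute value of the Hessian, assuming the Hessian is symmetric and small enough to eigendecompose:
```python
import numpy as np

def abs_newton_step(grad, hess, lr=1.0, damping=1e-8):
    # |H| = V diag(|w|) V^T, so |H|^-1 grad = V ((V^T grad) / |w|)
    w, V = np.linalg.eigh(hess)
    w = np.maximum(np.abs(w), damping)  # flip negative curvature, damp tiny eigenvalues
    return -lr * V @ ((V.T @ grad) / w)
```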
Some Point Process#3793: Doesn't the original formulation of meta-learning (MAML) require 2nd order in the exact case
Some Point Process#3793: But they somehow are able to "solve" it in practice with ordinary SGD
Some Point Process#3793: Momentum based optimizers are also able to overcome saddle points in practice, supposedly, which may or may not have obviated second order methods from ever being used. The training dynamics in that case seem to be finding very high order terms (since EMA of gradients are calculated on the fly). And then there are EMA of activations in neural ODEs (momentum resnets)
Desperate Noob#6277: It sounds like you are speaking in some code that no one can understand lol
Some Point Process#3793: I got carried away, especially with the neural ODE part. I was still going to share what might be a more accessible critique of 'hessian free' optimization in ML, but I think it mentions how paul christiano (notable ai safety researcher) thinks that they are not necessary for training powerful AIs. In short, his message sounds something like "the smarter an AI/agent/ensemble of agents gets, the better it is at future optimization anyway". So in this view the optimization process is such that AI getting better at optimization itself (since it's simply becoming smarter), is by virtue doing whatever we meant by "higher order" optimization that was originally intended to be implemented explicitly (e.g. by computing hessians). Whether this view holds in practice might still be debatable though, unless the AI system is already very capable: <https://intelligence.org/2018/05/19/challenges-to-christianos-capability-amplification-proposal/>
alstroemeria313#1694: also Adam uses an empirical diagonal preconditioner
alstroemeria313#1694: Like, in the place where you would put the Hessian or Hessian estimate in a second order method.
alstroemeria313#1694: Since it is diagonal you divide the (EMA of the) gradient by it element-wise instead of having to take a matrix inverse.
Some Point Process#3793: Interesting. I think I recollect something similar to that but forgot
Some Point Process#3793: These were always some interesting visualization/comparisons
https://cs231n.github.io/assets/nn3/opt2.gif
https://cs231n.github.io/assets/nn3/opt1.gif
alstroemeria313#1694: there's no Adam though
Some Point Process#3793: Damn
alstroemeria313#1694: (Adam is Rmsprop + momentum + bias-corrected EMA)
alstroemeria313#1694: So it kind of behaves like Rmsprop but smoothed
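A toy version of the update that makes the diagonal-preconditioner reading explicit; this follows the standard Adam formulas, with the elementwise divide by `sqrt(v_hat)` sitting where a second-order method would apply an inverse Hessian:
```python
import numpy as np

def adam_step(p, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g        # EMA of gradients (momentum)
    v = b2 * v + (1 - b2) * g * g    # EMA of squared gradients: diagonal preconditioner
    m_hat = m / (1 - b1 ** t)        # bias correction, t is the 1-indexed step count
    v_hat = v / (1 - b2 ** t)
    p = p - lr * m_hat / (np.sqrt(v_hat) + eps)  # elementwise "inverse preconditioner"
    return p, m, v
```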
gabriel_syme#3220: is this a good time to ask what you think about ranger21
gabriel_syme#3220: also, not sure if you tried it
alstroemeria313#1694: i haven't tried it
Kharr#7888: Adam is very difficult to beat both in terms of training stability and model performance. The toy visualizations are cool but don't hold up in practice.
Once you mix in more complex data and bigger models optimization is less clear. The one thing you can be sure of is that anyone chasing SOTA is trying different optimizers yet Adam is the one most often published.
StellaAthena#3530: I got an email because we got promoted on linkedin lol: https://www.linkedin.com/feed/update/urn:li:activity:6842080032211402752/
DoesThisUnitHaveASoul#7264: Hello everyone, I come seeking the wisdom of the hive, should I be granted audience.
Is anyone aware of any frameworks out there that take as input a pretrained representation that is then fine tuned and evaluated on separate downstream tasks ranging from various vision ones, going well into NLP and other temporal tasks?
DoesThisUnitHaveASoul#7264: I am about to start building one for my work, but it would be *real* nice if something related existed
StellaAthena#3530: We have a framework for evaluating on NLP tasks: https://github.com/EleutherAI/lm-evaluation-harness
DoesThisUnitHaveASoul#7264: I am specifically interested in PyTorch based ones, but I am flexible
StellaAthena#3530: But I'm not aware of anything as comprehensive as you want
DoesThisUnitHaveASoul#7264: That's awesome for the NLP side of the spectrum.
DoesThisUnitHaveASoul#7264: It's also kind of a shame that something like this doesn't exist. It would be good for the community
Louis#0144: @DoesThisUnitHaveASoul I swear I've seen u here before
Louis#0144: I can't remember when
DoesThisUnitHaveASoul#7264: I was here, yes
DoesThisUnitHaveASoul#7264: 4 months ago or something like that
Louis#0144: Oh ok
DoesThisUnitHaveASoul#7264: been a hell of a summer
Louis#0144: You're telling me
Louis#0144: LMAO
DoesThisUnitHaveASoul#7264: I also followed the advice of people on here for TPUs and Google credits, and managed to get two research compute sources, so thanks for that π
DoesThisUnitHaveASoul#7264: Btw, the NLP framework you guys built seems to do the kind of things I want for the text side of things, so thank you @StellaAthena
DoesThisUnitHaveASoul#7264: @StellaAthena That is some sweet sweet code. Thanks for this!
StellaAthena#3530: Most of the credit goes to @bmk for his tireless work on it π
DoesThisUnitHaveASoul#7264: @bmk Thanks for this awesome piece of engineering. π
bmk#1476: glad you find it awesome :D
DoesThisUnitHaveASoul#7264: I'll start building a suite for vision: classification, segmentation, localization, regression, relational and abstract reasoning. Then I'll integrate it with my few-shot learning suite, and then use your repo for nlp stuff
DoesThisUnitHaveASoul#7264: gonna be intense, but it has to be done, as it doesn't seem to exist
bmk#1476: any contributions back to eval harness will be greatly appreciated
DoesThisUnitHaveASoul#7264: will surely do that whenever I find anything necessary
bmk#1476: in particular there's a PR for documentation that was never finished and if you could help get that over the finish line that would be awesome
DoesThisUnitHaveASoul#7264: right, maybe once I have my codebase in place and start writing docs while experiments are running, I can help with that!
bmk#1476: awesome
bmk#1476: also if you have any tasks you want in eval harness, just PR them
DoesThisUnitHaveASoul#7264: will do π
louis030195#2462: https://codex.louis030195.com/
louis030195#2462: :p
Zippy#1111: That's awesome! @louis030195
louis030195#2462: it's really basic
alstroemeria313#1694: ```python
def foo(x):
return getattr(x, 'capitalize')()
```
```java
String foo(Object x) {
return x.toString().substring(0, 1).toUpperCase() + x.toString().substring(1);
}```
alstroemeria313#1694: I tried to trick it
alstroemeria313#1694: Is this right, I don't know Java
StellaAthena#3530: no
louis030195#2462: XD
louis030195#2462: yeah I could give bigger context
louis030195#2462: ```rs
const CONTEXT: &str = "Python:
def add(a, b):
return a + b
###
JavaScript:
const add = (a, b) => a + b;
###
Rust:
fn add(a: u32, b: u32) -> u32 {
a + b
}
###
Go:
func add(a, b int) int {
return a + b
}
###
C:
int add(int a, int b) {
return a + b;
}
###
";
```
Orz#3023: Can we promote stuff here?
Zippy#1111: I think it's actually correct.
Zippy#1111: aka I tested.. took me a while to set up my fricking java environment but I got it to work.
Zippy#1111: ```java
//Main.java
public class Main {
public static void main(String args[]){
Foo fooinator = new Foo();
String x = fooinator.foo(args[0]);
System.out.println(x);
}
}
//Foo.java
public class Foo {
String foo(Object x) {
return (
x.toString().substring(0, 1).toUpperCase() + x.toString().substring(1)
);
}
}
```
Mic#5645: what do you mean by promote?
> No advertising or spam
Orz#3023: oh
aight
Awesome_Ruler_007#7922: I am kinda obsessed with how we can integrate semi-supervised learning with dreamcoder and make another leap beyond codex π€
I keep coming back to graphs and multiple networks. Just like Dreamcoder, the aim is to build a library of concepts entirely with unsupervised learning and then use some Neural search to find the best graph that suits the given problem.
The problem then kinda becomes that variable names (like for one-liners) become a pretty important association with a so-called `concept`... I wonder how researchers take leaps like that to connect things together
Maybe an autoencoder-style architecture? take code snippets, generate a graph of concepts that the decoder tries to convert back to text - easy peasy.
Another network that aligns those token sequences to discrete "concepts" that the graph is made of, thus having arbitrarily named concepts actually accomplishing something useful
The only problem becomes how exactly we get a NN to "test" its generated code....
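(One common way to close that loop, sketched here under the assumption that you have input/output examples to check against; `passes_tests` and the usage below are illustrative, not Dreamcoder's actual mechanism:)
```python
import subprocess
import sys

def passes_tests(candidate_src: str, test_src: str, timeout: float = 5.0) -> bool:
    """Run generated code plus assertion-style tests in a fresh interpreter.

    A nonzero exit code or a timeout counts as failure. NOTE: this is not a
    real sandbox; untrusted generated code needs OS-level isolation.
    """
    program = candidate_src + "\n" + test_src
    try:
        proc = subprocess.run(
            [sys.executable, "-c", program],
            capture_output=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False
    return proc.returncode == 0

# Hypothetical usage: keep only candidates that satisfy the examples.
candidate = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
print(passes_tests(candidate, tests))  # True
```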
Awesome_Ruler_007#7922: what do researchers here think about which way would be the most promising for making LMs that are better adapted to programming tasks?
EricHallahan#1051: :morelayers:π
EricHallahan#1051: Training on joint natural language/code corpra seems to work pretty well.
Awesome_Ruler_007#7922: bu-but new architectures may lead us to better performance without scaling π¦
cfoster0#4356: Indeed
cfoster0#4356: The hit rate for new architectures is pretty darn low, which is discouraging
cfoster0#4356: Whereas scaling has pretty predictable returns
Awesome_Ruler_007#7922: but is it 0? π
~~we can do it guys!! this is sillicon valley just drink coffee and listen to your Robbins. This is going to be awesome, we will change the world!!!!!~~
Some Point Process#3793: Yes, come to think of it having some sort of recurrence (an algorithmic "feature") might enable a mini turing machine in the AI system that thinks about some arbitrary code (and how it executes) before it writes that code. At the very least, something like that won't be possible without recurrent connections. As for whether certain computational/neurosymbolic primitives (i.e. the ability to code) are needed for *AGI*, that's more ambiguous to me tho
cfoster0#4356: *not enough bits of precision, rounded to 0*
Awesome_Ruler_007#7922: yeah, that was kinda what I was going with the crude graph-thingy. getting an idea of how to plan the code is the first step, because that's kinda how we as humans do
cfoster0#4356: Recurrent architectures look real attractive, but teacher forcing in a parallel/unrolled architecture is very powerful. Whole lotta gradient information for your compute spent
cfoster0#4356: If you can only supervise the output after it's gone through the network 10 times, you might end up spending 10x more compute for the same gradient feedback, unless you've got something clever
Awesome_Ruler_007#7922: what I had in mind was like around 100-1000x the compute π
The problem would basically be representing the prompt as a graph of concepts, aided by a neural search. since the sample space is ..... "large", that's gonna be a real problem
Louis#0144: Oh no
Awesome_Ruler_007#7922: It all boils down to whether we can actually test the code it produces - which in most cases can't be done without major modifications
Awesome_Ruler_007#7922: what we need is an AGI basically - ez :3berk:
Awesome_Ruler_007#7922: Dreamcoder managed because its library wasn't really that large. solving small problems with a few lines of primitives doesn't exactly let it scale to `convert a COCO dataset to TFRecords` kinda thing
Awesome_Ruler_007#7922: that's a dead end π
Awesome_Ruler_007#7922: but tbf, since you guys are leaning on scaling - I have the right to ask that amount of compute too
Louis#0144: Not why I'm saying oh no
Louis#0144: Anything with graphs and NLP will not scale
Louis#0144: I learned this the hard way
Awesome_Ruler_007#7922: https://tenor.com/view/why-tho-though-gif-7485441
Awesome_Ruler_007#7922: I don't understand in what sense they cannot scale
cfoster0#4356: I don't think GPUs are really built for graphs, or at least not graphs that aren't 2/3D triangle meshes
cfoster0#4356: You end up with a bunch of sparse memory accesses, I think
Awesome_Ruler_007#7922: it's just kinda like a flow chart of vectors
Awesome_Ruler_007#7922: don't really know what data structure that would be
StellaAthena#3530: But the impact is massive. Ill take my transformer against your RNN any day
cfoster0#4356: For sure
cfoster0#4356: That's why so many researchers are publishing in that lane
Louis#0144: We're probably only a handful of revisions away from like an architecture that will carry us to AGI
Louis#0144: Architecture here also including loss func
Dashiell#8739: Because there isn't enough data? Or because the performance doesn't scale with data in the way it has for images/text?
Louis#0144: paired graphs + text in a meaningful way is rare
Louis#0144: especially if you want generalizable stuff
Louis#0144: By rare I mean you would have issues getting > 100m vertices
Louis#0144: @flowpoint is working on this actualy
Dashiell#8739: I'm very interested in this. @flowpoint can you point me in the direction of any work you've done / are doing?
choltz95#4641: Wb the multi-lingual wikipedia graph. That might be pretty big
gabriel_syme#3220: there was some nice research from MSFT on code completion, bugfinding, etc. that used (mostly) graph networks I believe, although this is from memory. I wonder if you can revisit those attempts (which are really only like 5-6 years old maybe) with transformers / scale
gabriel_syme#3220: is recurrence completely gone if you unroll the architecture? are you approximating something equivalent?
gabriel_syme#3220: I'm thinking of perceiver for example, which is like unrolled in depth
cfoster0#4356: I misspoke before. Unrolling alone gets you a computational graph where you're applying the same operations everywhere, but you need to make the aggregation parallel to break out of slowdowns from having to compute past representations before current ones
gabriel_syme#3220: I wonder if it is possible to have a perceiver work like a pondernet. Instead of hardcoding the depth, have the model take iterations through the cross attention + self attention block. But I can't for the life of me figure out what to do with the latent. Maybe this isn't possible to do in an equivalent way?
gabriel_syme#3220: also, not sure if it's better lol. Might be slower, but memory would be lower right
Some Point Process#3793: Perceiver IO in the paper admits shared weight matrices (self-attention?) that can perform repeated processing of the latent vectors (once the observations have been "internalized" by the model)
Some Point Process#3793: I think it's this part but if it's not I'm totally wrong about it (so I apologize) https://cdn.discordapp.com/attachments/729741769738158194/887165137847480361/unknown.png
gabriel_syme#3220: yeah this is why I asked, the original also shared cross and self attention I guess, so I wonder if you can dynamically set the depth for each sequence
gabriel_syme#3220: thanks! I admit I have not yet dived into IO (always speak too soon lol)
gabriel_syme#3220: oh dang so maybe they did it already. I'll read it later today and check if L can be dynamically set. Thanks!
gabriel_syme#3220: perceiver will always be one of my favorite papers just for the Kant reference lol
cfoster0#4356: Yeah in theory you could decode at multiple different depths, esp. if you have shared/repeated weights and at runtime sort of early exit out after a learned number of iterations
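(A rough PyTorch sketch of that idea: one weight-shared block iterated over the latent, with a learned halting score and a greedy early exit at inference. A real PonderNet trains a halting distribution; everything here, names included, is illustrative:)
```python
import torch
import torch.nn as nn

class SharedDepthLatent(nn.Module):
    """Weight-shared block applied a variable number of times to the latent."""

    def __init__(self, dim: int, max_steps: int = 8):
        super().__init__()
        # dim must be divisible by nhead.
        self.block = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.halt = nn.Linear(dim, 1)  # per-step halting score
        self.max_steps = max_steps

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        for _ in range(self.max_steps):
            latent = self.block(latent)  # same weights every iteration
            p_halt = torch.sigmoid(self.halt(latent.mean(dim=1)))
            if bool((p_halt > 0.5).all()):  # greedy early exit at inference
                break
        return latent

model = SharedDepthLatent(dim=64)
out = model(torch.randn(2, 16, 64))  # (batch, latent slots, dim)
```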
gabriel_syme#3220: yeah exactly that's the idea, I'll look into it.
Kazumi#1297: I haven't heard anything back from the TRC application and now I'm wondering if I applied right
gabriel_syme#3220: check the promotion tab
gabriel_syme#3220: it's hidden there most of the time
gabriel_syme#3220: (can also search TRC heh)
Kazumi#1297: nope, looking in all emails doesn't show it
gabriel_syme#3220: hmm, not sure. it's supposed to come instantly
gabriel_syme#3220: maybe try spam
Kazumi#1297: I'll do one more
Kazumi#1297: I was also thinking of just ponying up for google colab pro+ for the rest of the month to do what I need to, then going back to pro
flowpoint#7450: tldr:
i built an elasticsearch index of "the pile" for aligning a kg (yago), but searching hard negatives is somewhat slow (disk bottleneck),
so i am reading more, and thinking up alternative strategies now
long answer:
honk if you're :goose:
goose has experience in kg but is busy,
he suggested aligning kg's and the pile with elasticsearch,
(distantly similar to "distant supervision")
i want to learn and contribute,
i have little to no prior experience w kg/ir
but i made an ugly script that does it
searching for an entity (nodes in a graph) in the pile (entity linking) is fast,
searching for hard negatives (syntactically similar documents) is slow
goal is a dataset "pile_kg" (temporary name?) for grounded joint pretraining of natural text and knowledge graph/base information, with a pretrained example model
my next idea would be:
start with a known good kg (wikipedia/dbpedia, yago) in rdf format,
align it to the documents by leveraging common crawl as an index and the URIs of the documents in the kg's rdf format.
=> better data quality (precision) than searching w elastic,
but less diverse fulltext examples of a relation (recall),
so might additionally revisit elasticsearch for implicit knowledge (aligning the atomic dataset), but if so, i intend to use pyserini (which i didn't know of, when starting my ugly script)
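(For the curious, the entity-linking step might look roughly like this with the official `elasticsearch` Python client; the index name and field are made up, and the real script surely differs:)
```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def link_entity(entity_label: str, k: int = 10) -> list[str]:
    """Full-text match of a KG entity label against an index of Pile docs."""
    resp = es.search(
        index="pile",                             # hypothetical index name
        query={"match": {"text": entity_label}},  # hypothetical field name
        size=k,
    )
    return [hit["_source"]["text"] for hit in resp["hits"]["hits"]]
```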
flowpoint#7450: previous convos are in goose's goose2goose discord in #the-pile-kg and general
flowpoint#7450: https://discord.gg/vemt4N7F
Louis#0144: You're pretty solid with KGs now
Louis#0144: I would not say you are inexperienced
Louis#0144: lol
Louis#0144: I mean like there's holes in your knowledge
Louis#0144: But for KGs at scale you're set
flowpoint#7450: lol, no
Louis#0144: Welcome to research where everyone doubts their ability
Louis#0144: That feeling won't go away
Louis#0144: @Dashiell feel free to join flowpoint if u want
Louis#0144: flowpoint would probably appreciate the help lmao
flowpoint#7450: sure...
someKindaBean#8471: I remember someone here talking about fractally pooled attention and discord search isn't showing me what i'm looking for and neither is google scholar
someKindaBean#8471: it was pooling nearby attention somewhat like this: https://cdn.discordapp.com/attachments/729741769738158194/887342488816320512/unknown.png
someKindaBean#8471: was this a lucidrains experiment that i can't find? i think the conclusion was that it only worked for autoregressive because otherwise the pools leak causal information
someKindaBean#8471: was this a dream or does someone else remember this?
ari#9020: That would be https://arxiv.org/abs/2107.11906
ari#9020: And lucidrains's repo: https://github.com/lucidrains/h-transformer-1d/
someKindaBean#8471: Awesome, thank you very much.
cfoster0#4356: You may have meant this, but I think the conclusion was it *didn't* work for autoregressive because it leaked information
Kharr#7888: This is also MSFT's Focal attention: https://github.com/microsoft/Focal-Transformer
Kharr#7888: There is a way to make it work if you use creative masking per token to prevent it from leaking information. It's a bit more complex. Every token needs its own specialized mask.
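(To make the per-token masking idea concrete, here's one illustrative construction for pooled keys: a query may only attend to a pooled block whose positions all lie in its past. A sketch, not the Focal Transformer's actual masking:)
```python
import torch

def pooled_causal_mask(seq_len: int, pool_size: int) -> torch.Tensor:
    """mask[i, j] is True iff query i may attend to pooled block j,
    i.e. every position in block j lies in i's past (no leakage)."""
    n_blocks = seq_len // pool_size
    i = torch.arange(seq_len).unsqueeze(1)                    # (seq_len, 1)
    block_end = (torch.arange(n_blocks) + 1) * pool_size - 1  # (n_blocks,)
    return block_end.unsqueeze(0) <= i                        # (seq_len, n_blocks)

print(pooled_causal_mask(8, 2).int())
```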
Louis#0144: this for nlp looks really cool
Louis#0144: I could see uses in like
Louis#0144: big bird
Louis#0144: or big bird 2.0
Louis#0144: has phil tried it for nlp yet?
Louis#0144: ill just ask
Louis#0144: :berk:
Louis#0144: @Deleted User de23c58c
someKindaBean#8471: ooooooh, thanks for correcting me
Awesome_Ruler_007#7922: ooh, nice. Thanks a lot!
Awesome_Ruler_007#7922: They used BiRNN https://www.microsoft.com/en-us/research/blog/learning-source-code/
wish I was a researcher - applying transformers to this could have been an easy paper π
Louis#0144: >wish i was a researcher
Louis#0144: just work on any project here
Louis#0144: ;P
Louis#0144: you'll be a researcher
StellaAthena#3530: Who says you're not? What's your skillset? What do you want to study?
Awesome_Ruler_007#7922: still in high school π much to learn here. I do have some ideas that I have actually made some pretty unexpected-level progress on - but I can't really pitch them anyways due to the age bias
Awesome_Ruler_007#7922: doing lame "AI to solve hunger" high-school projects doesn't really stick with me because I know internally that they would have no possible impact or real-world use
bmk#1476: dont worry, i dont discriminate by age or level of education. if your ideas are good ill say they're good and if they aren't i'll say they arent
bmk#1476: or race or gender or etc
Orz#3023: I'm in a similar boat tbh
And it's awesome how welcoming this community is
It honestly feels like you've cracked gsoc, except you won't get a certificate
Awesome_Ruler_007#7922: you can't really say any idea is great or not - unless you experiment with it and it actually turns out to be useful. But thanks! π
bmk#1476: well, i mean, my opinion on your idea
bmk#1476: feel free to pitch ideas if you think they're good and dont let anything other than how good you think the idea is stop you from doing so
flowpoint#7450: well most ideas aren't new, so you can just tell that some ideas are not effective
flowpoint#7450: so the more ppl ask and tell, the less time will be wasted, imo
flowpoint#7450: like: no idea is a dumb idea, no question is dumb, but let's talk to not reinvent the wheel
provided that the literature was considered as well
bmk#1476: sure and im pretty blunt and ill definitely let you know if i think the idea is a waste of time, but you can rest assured it's not personal
Louis#0144: https://twitter.com/analysisfact/status/1437819731575197700?s=21 is there an encyclopedia of approximations like this
Louis#0144: I feel like they might actually be somewhat useful for DL
StellaAthena#3530: My brain?
StellaAthena#3530: Calc 101?
Louis#0144: Lmao
Kharr#7888: Wolfram Alpha
Kazumi#1297: series expansions gives good approximation, that's just first 2 terms of the series expansion too
StellaAthena#3530: 90% of the time, it's a two or three term taylor
Louis#0144: OH
Louis#0144: yeah thats true tbh
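(Quick numeric check of how well two or three Taylor terms do, using sqrt(1+x) as an assumed example:)
```python
import math

# Two- and three-term Taylor expansions of sqrt(1 + x) around x = 0.
for x in (0.01, 0.1, 0.5):
    exact = math.sqrt(1 + x)
    two_term = 1 + x / 2
    three_term = 1 + x / 2 - x ** 2 / 8
    print(f"x={x}: exact={exact:.6f}  2-term={two_term:.6f}  3-term={three_term:.6f}")
```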
Clockworkfish#6603: Hey to all y'all in cali, go vote today if you haven't already!
Technobird22#2055: Had an idea: CLIP, but for audio
Sid#2121: https://arxiv.org/abs/2106.13043
Ravna#1831: Just dig all LSTM-related papers from arxiv and re-implement every one of them using transformers.:berk:
Zippy#1111: I want to learn more about ML, and even though I did my undergrad in comp sci and work in the field, and feel confident at programming, and did a minor in math, I feel scared of the math.
Zippy#1111: Also the class that I performed best at was linalg, so I *should* be good at it, I've just gotten so used to being able to intuit solutions instead of depending on mathematical structures.
Zippy#1111: Since that's essentially what programming becomes after a while.
Zippy#1111: *I programmed it that way because it felt right*
random_lurker99#8915: is there a question in there or a blogpost
Zippy#1111: I have no idea, maybe it is a blogpost?
bmk#1476: are we talking like abstract linalg or more computational
Zippy#1111: computational.
Zippy#1111: But I like abstract.
Zippy#1111: We did get into abstract in the last half and I had no problem with it.
bmk#1476: ah makes sense
bmk#1476: I'm a CS guy but ironically I like the abstract stuff better
Zippy#1111: Same lol.. I felt like as other people were getting more confused, things were getting easier for me lol.. (I suck at basic math)
Technobird22#2055: https://discord.com/channels/687504710118146232/687504710688702475/790918201907085322
Zippy#1111: Like.. ask me what 873 + 283 is, and it will take me a while to answer.
Zippy#1111: But if it's something related to how things work, or the *abstract concepts* then I have a much easier time.
kurumuz#5695: yeah same
kurumuz#5695: i am terrible at arithmetic
kurumuz#5695: always was
bmk#1476: what's your opinion on abstract nonsense
Louis#0144: where is this
Louis#0144: I cant open it
bmk#1476: like category theory and all that jazz
Kazumi#1297: me and my friend often debate about abstract vs concrete
Zippy#1111: Yeah. In terms of rote memorization or "using a thing that I don't understand", I'm terrible.. but if you ask me to figure out how the thing works, then I have tons of fun and it's miles easier.
Awesome_Ruler_007#7922: TPU Podcast server
Kazumi#1297: https://media.discordapp.net/attachments/736963923521175612/790053638156058654/image0.jpg?width=520&height=659
bmk#1476: sounds like you'd like category theory et al a lot
Kazumi#1297: another server
Louis#0144: ohhh
Louis#0144: derp
Ravna#1831: category theory is one of the best things happened in the 20th century
kurumuz#5695: schmidhuber?
Louis#0144: i went into that server once to stalk leo and find old things he said
kurumuz#5695: :schmidhuber:
Zippy#1111: I'm great at generalizing about things so I bet I'd love category theory :blaze:
Kazumi#1297: :tpu:
Technobird22#2055: oh you were the OP!
Kazumi#1297: I've been trying to learn group theory
Zippy#1111: Also, sorry about my blog post.
kurumuz#5695: I haven't done math in so long
kurumuz#5695: probably trash at it rn
Zippy#1111: Oh! Yeah one of the easiest concepts for me to grasp was graph stuff, so it seems like category theory would be neato.
Zippy#1111: Ok I am going to stop blogging now. π€
Technobird22#2055: By the way, would it be possible to add Technobot to this server? There has been quite a bit of interest surrounding it and I've posted its outputs before. Currently it can complete text (using GPT-J-6B), and I'm working on the text adventure game mode and conversation mode, and they are quite promising.

I also have an Nvidia M40 being shipped to me, and while slow, it will be able to generate large VQGAN images. I'm planning to dedicate this to Technobot. I've also got plans to automatically distribute generation between multiple GPUs on different machines, and am planning on adding more models to the bot in the future among other ideas.
Kazumi#1297: how big of a GPU did you need to run gpt-j 6B? I think to train, it needed an RTX 3090
EricHallahan#1051: That's for inference.
EricHallahan#1051: Unless you are doing some significant offloading tricks.
Kazumi#1297: oh
EricHallahan#1051: This reminds me that I haven't updated the FAQ in months. :guilty:
Kazumi#1297: I want to retrain my bot with a bigger GPT model :SadCat:
EricHallahan#1051: TRC exists.
Kazumi#1297: I applied 3 weeks ago with no response, I'm doing it again just in case
EricHallahan#1051: Huh, that's really weird.
EricHallahan#1051: Try again.
Kharr#7888: I helped someone also set up a training loop for RTX 3090 for GPT-J.. that codebase is somewhere.. let me see if I can find it. You basically tune about 50% of the params and keep everything on the GPU
Kazumi#1297: yeah, that's what I did with my previous model, I think I only trained the last few layers
Kharr#7888: You can also tune 100% of the params if you write your own optimizer which dynamically swaps out the trainable params (this is what I did) and you accumulate gradients for 50% of the params per step. My most recent recommendation is just to train adapters and freeze the rest of the model if using a 3090. The model is so flexible that you don't need much to push it to do a specific task.
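(A minimal sketch of the freeze-most-of-the-model recipe in PyTorch; `model.transformer.h` follows the HF GPT-J layout, but treat the names as illustrative:)
```python
import torch
import torch.nn as nn

def freeze_all_but_last_n(model: nn.Module, n: int = 2) -> torch.optim.Optimizer:
    """Freeze everything, then re-enable grads for the last n transformer blocks."""
    for p in model.parameters():
        p.requires_grad = False
    for block in model.transformer.h[-n:]:  # HF GPT-J block list (illustrative)
        for p in block.parameters():
            p.requires_grad = True
    # Optimize only the trainable subset.
    return torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=1e-5
    )
```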
Kazumi#1297: hm, true
Kazumi#1297: I tried to use huggingface's model and train it, I couldn't really figure out how to make it work
EricHallahan#1051: The official HF implementation is a memory hog. :grimberk:
Kharr#7888: https://gist.github.com/kinoc/dca36b12b5e956688a9b92a87ba7c52c I think this was it. Mentions me enough times :berk:
Kazumi#1297: tbh I'm fine with not going for the maximum size, anything above the gpt-2's 345M parameter model I used is good enough
Kazumi#1297: I'm not made out of money :SadCat:
Kharr#7888: Try https://github.com/stanford-crfm/mistral with the Medium Arwen model (highest chkpt). It's actually quite good (better than the original GPT2 version)
Kazumi#1297: is it gpt-2 medium?
Kharr#7888: Yes, just better trained
Kazumi#1297: I'll try that I guess, the gpt part isn't the main part I wanted to update with my bot, I already gave my bot the ability to annotate images to itself, I just found this as an excuse to move to a bigger model
Kharr#7888: Neo 1.3B and 2.7B are also good options. Somewhere between all the models you can surely find one that fits your budget π 6B is probably overkill in terms of compute and cost to run.
Kazumi#1297: I was looking at those, but memory problems happened
cfoster0#4356: I think those should be fixed in the next transformers version, right?
EricHallahan#1051: For GPT-J?
cfoster0#4356: Neo
EricHallahan#1051: Β―\_(γ)_/Β―
EricHallahan#1051: Hopefully the GPT-J memory loading problems will be resolved before it is released in 4.11.0.
cfoster0#4356: https://github.com/huggingface/transformers/pull/13491
EricHallahan#1051: Huh, didn't know this existed.
Louis#0144: how slow is it
Louis#0144: lmao
Kharr#7888: Not sure about that specific code since I haven't used it but on my setup it trains at 1.5k tokens per second. TPU VM v3-8 is 5k tokens per second. v3-8 is about 420 TFLOPS and RTX 3090 FP16 is ~140 TFLOPS, so it's basically running at optimal compute.
Louis#0144: not thaaaaat bad
Kharr#7888: It's definitely fine, you can get a good tune in a few days.
oreo#2740: this seems to be written for finetuning on gpus; is it trivial to get it to use tpu v3, or should I just stick to using gpu?
Awesome_Ruler_007#7922: *All* HF implementations are memory hogs and buggy code
Kharr#7888: If you are using TPUs please use the official JAX codebase
Kazumi#1297: my experience with HF is that it works great out of the box, good luck otherwise
Awesome_Ruler_007#7922: throwback to my previous experience with BigBird on Huggingface. The repo was apparently """tested""". Couldn't even *import* the damn thing π like bruh - if it's not working at least don't lie - it's not like we are holding you at gunpoint
bmk#1476: the gptneo implementation in HF is finally no longer terrible! https://github.com/huggingface/transformers/pull/13491
bmk#1476: rejoice
bmk#1476: it only took *checks sundial* 4 months (from when the PR was put in by finetune)!
gabriel_syme#3220: Nice, now that I finished all of my fine tuning :)
Technobird22#2055: what a coincidence, "A meme page to check every time MatLab crashes" posted it today on twitter :thonk:
Technobird22#2055: https://twitter.com/memecrashes/status/1437827002392891399
Clockworkfish#6603: Yall probably already saw, but major security vulnerability was found in Apple products and was patched, go update your stuff!
Kazumi#1297: I posted that meme last year
Kharr#7888: This is why it's important to read all the classic research. If you randomly came up with it, odds are it was obvious and someone else did many years ago. :)
gabriel_syme#3220: I don't think I've ever had any of my ideas turn out to be novel, and I really wanted to lol
gabriel_syme#3220: but imo, finding that someone else did or tried it before, it's a great sign. Means there's something good there
alstroemeria313#1694: some of mine have been
alstroemeria313#1694: Or I find out someone published them in a paper a year or two ago instead of in the 1980s lol
gabriel_syme#3220: oh interesting, what do we think about this?
https://www.nature.com/articles/d41586-021-02486-7
alstroemeria313#1694: it's easier to reproduce in computer science, you provide code and a script someone can run that reproduces the results
alstroemeria313#1694: since it is so much easier it should be the default imo
gabriel_syme#3220: GPT3?
alstroemeria313#1694: eheh.
gabriel_syme#3220: I think the AI part of CS is getting really difficult to do that
alstroemeria313#1694: Yeah some people don't want to release
gabriel_syme#3220: But I get what you mean
alstroemeria313#1694: But a lot of times there is no particular reason not to and the authors just don't bother or don't even provide code at all.
gabriel_syme#3220: yeah, got a lot of papers I wish had code π¦
gabriel_syme#3220: and a year+ after, i lost hope
crafternoobs#5966: Does one of these have a chat bot AI like gpt-3
Kazumi#1297: one of these?
crafternoobs#5966: Yes
crafternoobs#5966: The ones made by eleuther
inox#5400: you could say if you want to publish a GPT3 paper you also have to provide compute capacity to the reviewers to replicate
inox#5400: only doubles required compute
wabi-sabi#5811: Unless reviewers don't cooperate on how to do the replication, but yeah. Still just multiplying by a constant.
65536william#9999: Certain prompts with GPT-J or even Neo for that matter can generate a chat-style conversation. I haven't seen a dialogue fine tune of J yet though
StellaAthena#3530: I know *of* one but it's not publicly available currently.
Kia#2550: Hm
Kazumi#1297: was purple smart running gpt-j?
gollark#3909: If anyone wants it I have a bunch of IRC logs from different things you could finetune on.
gollark#3909: Only 200MB or so though.
gollark#3909: I'm not sure where you'd get more data in the same real-time-chat-ish style. Most of the newer platforms (like here) are very walled gardeny.
StellaAthena#3530: EleutherAI has 5.52 GiB of IRC logs I can point anyone interested to as well. They are included in the Pile though.
Louis#0144: lmao
gollark#3909: Most of the IRC logs you can get are from more technical communities, because nobody else uses IRC, which is maybe not ideal.
Louis#0144: its kinda interesting to think how much chat data google has
gollark#3909: Didn't most of their chat stuff fail horribly?
Louis#0144: The meena follow up looks interesting
nev#4905: is there any research on how LMs memorize facts? i.e. does a knowledge neuron form when it first sees the data point, on the second try, does doing two gradient steps help, is it random etc?
gollark#3909: Not chatbots, I mean chat platforms.
StellaAthena#3530: There is on-going research here on this. @Sβ΅ is playing around with knowledge neurons and the logit lens while @triggerhappygandi and I train a dataset of checkpoints across time and scale to look at with it. If you'd like to contribute, I don't need any help with the models but more people on the exploratory data science side would always be helpful. While the models train, the big thing is to just gain experience using the tools and building an intuition for how to leverage them.
StellaAthena#3530: I'm speaking at the next Big Science workshop (it's Sept. 20, you should attend) and was given a rather freeform prompt for what to talk about:
> EleutherAI (how's it organized? would be cool and motivation to hear for BigScience folks), language models and why all this is important for open progress perhaps?
I'm bad at coming up with topics off the top of my head because I get decision-paralysis worrying about what people want to hear about. So, any suggestions?
https://bigscience.notion.site/Episode-2-Sept-20th-2021-489ffba0db7c4d3faa37c13c8cadc176
Daj#7482: From my experience, people love hearing about the origin story of eleuther, how we organize, what lessons we've learned about community management and moderation (always mention the 90/9/1 rule) and the like
EricHallahan#1051: I was going to say that you can probably pull a lot from the retrospective.