nev#4905: stylegan 3D is basically a 3D feature map + a special upsampler
𓅬 gabriel_syme 𓅬#3220: hmm, which one of the two views would be considered a 'noisy' version of the other? 2d or 3d? Can there be a diffusion like process that is bidirectional?
nev#4905: or simultaneous
nev#4905: guide 3D with 2D and 2D with 3D
nev#4905: can you diffuse in two streams?
𓅬 gabriel_syme 𓅬#3220: not sure heh, I was thinking diffusion is already 2-way right
uwu1#4864: what about just doing the rgb to RGB+depth conversion, writing in the rgb pixels into the voxels in the right locations and using that as the starting point for the 3d space generation.
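roughly something like this as a sketch (the camera intrinsics fx/fy/cx/cy and the grid extent are made-up placeholders, not from any specific pipeline):
```python
import numpy as np

def rgbd_to_voxels(rgb, depth, fx, fy, cx, cy, grid_size=64, extent=4.0):
    """Back-project an RGB-D image into a (very sparse) voxel grid of colours."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    cols = rgb.reshape(-1, 3)
    # map metric coordinates into voxel indices, drop points outside the grid
    idx = ((pts / extent + 0.5) * grid_size).astype(int)
    keep = np.all((idx >= 0) & (idx < grid_size), axis=1) & (pts[:, 2] > 0)
    grid = np.zeros((grid_size, grid_size, grid_size, 3), dtype=rgb.dtype)
    grid[idx[keep, 0], idx[keep, 1], idx[keep, 2]] = cols[keep]
    return grid  # only voxels visible from this single view get filled
```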
ColtonD#8375: one possible issue with 2d is that it would basically be trying to estimate the lighting at the same time as the model. it wouldn't want to generate a true albedo texture that isn't lit.
uwu1#4864: why not \infinity of them
nev#4905: i mean generating the field and 2D image at the same time
nev#4905: or multiple 2D images, or multiple 2.5D
ColtonD#8375: although you could just make something separate that estimates true albedo textures, as well as all the other PBR stuff from what it generates.
𓅬 gabriel_syme 𓅬#3220: texture seems another level down the hierarchy although I understand ppl like to do it one go
nev#4905: but your voxel map will be very sparse
nev#4905: it's done from one view, so it accounts for that
𓅬 gabriel_syme 𓅬#3220: sounds sensible, wonder if there are a few out there
uwu1#4864: tons
𓅬 gabriel_syme 𓅬#3220: *opens up google doc*
uwu1#4864: that might be fine. See e.g auto anime colorization starting from single pixel color hints. Or you could floodfill out from the hints
uwu1#4864: jump flood assignment for the win
random person#5234: Actually I am curious to hear more about the CFD side if you ever do more work in that area
random person#5234: from an actual CFD for engineering standpoint, the biggest concern is more on the pressure gradient
random person#5234: sometimes
random person#5234: e.g, drag/lift is done with pressure gradients
uwu1#4864: here is a random review I found: https://arxiv.org/abs/2003.06620
random person#5234: even ignoring heat transfer, compressible/density solvers, multiphysics etc etc
𓅬 gabriel_syme 𓅬#3220: will let you know yeah. I kind of stopped once I made it work and moved towards semantic generation. But it's still a huge part of the whole image for me, you need oracle models to evaluate performance and do design exploration at scale
uwu1#4864: 2 years old but can't imagine the fundamental limits being surpassed without needing Bigg data as @tpapp157 says
random person#5234: velocity based solvers are generally very inaccurate and more used for movie/tv special effects
random person#5234: than actual rigorous engineering purposes
𓅬 gabriel_syme 𓅬#3220: I mean my comfort simulations are quite accurate
random person#5234: just my personal opinion 🙂 I am not in meche anymore anyways
𓅬 gabriel_syme 𓅬#3220: but yeah not wind tunnel accurate but that isn't important
𓅬 gabriel_syme 𓅬#3220: it's actually what gates these studies to big projects
random person#5234: well, even like a very rigorous CFD is hard to be wind tunnel accurate
random person#5234: very hard
random person#5234: within 10-5% maybe
𓅬 gabriel_syme 𓅬#3220: I know and it doesn't have to be lol
random person#5234: but even wind tunnel is not fully accurate especially for compressible
𓅬 gabriel_syme 𓅬#3220: for comfort you really want to know ranges and inform design, for e.g.
random person#5234: ahh yea
random person#5234: hvac 🙂
𓅬 gabriel_syme 𓅬#3220: now if you want to build a 1000m tower yea do the million dollar simulation 🙂
random person#5234: quite literally probably end up getting billed a 1m for it lol
𓅬 gabriel_syme 𓅬#3220: this is my 3d folder, I'll parse it into the document but ppl can add more stuff there as they please https://cdn.discordapp.com/attachments/729741769738158194/938929868123545660/unknown.png
𓅬 gabriel_syme 𓅬#3220: not everything applies to this discussion 😄
uwu1#4864: wait how do I add to your folder 🤔 is there an eleuther paper drive
random person#5234: just out of curiosity, the rules say no beginner questions
𓅬 gabriel_syme 𓅬#3220: I haven't found a way to upload my lit collection no and too lazy to make a wiki
𓅬 gabriel_syme 𓅬#3220: so I'll just add papers/code on a doc and share
random person#5234: what.. counts as a beginner question?
𓅬 gabriel_syme 𓅬#3220: there was a really cool depth estimation paper that I liked, which also came with code
𓅬 gabriel_syme 𓅬#3220: adding it
uwu1#4864: hmm i just ctrl-f in that review for attention and there's naught but a 2016 reference and thats only using it in the CRF (to postprocess the mask a bit)
uwu1#4864: given the good results self supervised vision transformers seem to have gotten for segmentation, i wonder if that sidestepping of the local field and Scale would allow attention to do nicely here
𓅬 gabriel_syme 𓅬#3220: agreed, always need to try attention at scale 😄
uwu1#4864: another factorization of the space could be outer product/attention. Like attention across 3 sequences producing a 3D cube of values
uwu1#4864: somewhere in between the outer product of sequences & full dense tensor lies the true dimensionality of our data
chirp#4545: What if you mapped each 16-bit pixel to like a 4x4 grid of regular pixels
chirp#4545: Encoding value as texture
uwu1#4864: also heightmap data might be 16 bits but its unlikely any local patch will have that much dynamic range
uwu1#4864: so you could normalize it patch wise or subtract the blurred mean etc
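e.g. a minimal per-patch min/max rescale, just as a sketch (the patch size is arbitrary):
```python
import numpy as np

def patchwise_normalize(height, patch=64):
    """Normalize a 16-bit heightmap patch by patch into [0, 1]."""
    h = height.astype(np.float32)
    out = np.zeros_like(h)
    for i in range(0, h.shape[0], patch):
        for j in range(0, h.shape[1], patch):
            tile = h[i:i + patch, j:j + patch]
            lo, hi = tile.min(), tile.max()
            out[i:i + patch, j:j + patch] = (tile - lo) / max(hi - lo, 1e-6)
    return out  # keep (lo, hi) per patch around if you need to undo the scaling later
```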
doodle#1078: Is anyone here working on time series forecasting?
ColtonD#8375: Wait I’m confused. Bit depth is about levels of value between black and white.
ColtonD#8375: I’m not sure if thats what you mean though
chirp#4545: I’m not sure what I meant myself tbh
ColtonD#8375: It might be better to make something that can convert an 8 bit image to a 16 bit one, trained with height map data
𓅬 gabriel_syme 𓅬#3220: phew ok I think I'm done for now; I'll collect benchmarks at another time, need to free my brain for a bit
ColtonD#8375: Then we could use any generator we want
chirp#4545: General idea is you can represent a 16 bit pixel by multiple normal pixels
ColtonD#8375: And turn it 16 bits
ColtonD#8375: Ohh ok makes sense
ColtonD#8375: But it wouldn’t be enough I don’t think
ColtonD#8375: 8 bit has 256 levels of value per pixel (per channel) and 16 bit has I think 65536
uwu1#4864: how many IRL meters does 1 step represent? Again if theres no spots that are 0 meters next to a 2^16 meters spot you could locally treat the data as fewer bits. Also don't diffusion models generate in fp32 anyway...
ColtonD#8375: But you probably don’t need all 65536 levels honestly
ColtonD#8375: I’ve seen some things like this actually, in papers and some on github. They’re all trained on regular images though so don’t do too well with height-maps
ColtonD#8375: Could finetune one of those though
uwu1#4864: theres also irl devices called "moduli cameras" that try to capture 16 bit pic from capturing 8 bit pics
uwu1#4864: by capturing data % 255 and then integrating to reconstruct
uwu1#4864: ok, "irl device" i mean existed once at some university as part of some school project
ColtonD#8375: Is it basically taking multiple 8-bit low dynamic range exposures of an image and then combining them into an hdr image
uwu1#4864: its a single exposure
uwu1#4864: instead of the over exposed areas blowing out, they wrap around
ColtonD#8375: Ah ok
uwu1#4864: then you just need to disambiguate the parts that wrapped
ColtonD#8375: I see how that works
ColtonD#8375: how would it distinguish the different levels though? with an extra alpha channel mapping maybe?
uwu1#4864: for the camera produced images it relies on brightness variation being smooth so if you see a discontinuous jump like a wrap around you know the region enclosed by that contour line is the next level up
uwu1#4864: +- compressive sensing or ML to deal with that assumption not being true
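as a toy 1D sketch of that unwrapping under the smoothness assumption:
```python
import numpy as np

def unwrap_mod256(row):
    """Recover a smooth signal from its value mod 256 (1D toy version)."""
    d = np.diff(row.astype(np.int32))
    d[d > 128] -= 256   # a jump bigger than half the modulus is assumed to be a wrap
    d[d < -128] += 256
    return row[0] + np.concatenate(([0], np.cumsum(d)))
```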
uwu1#4864: if you're just trying to store 16 bit data why not a 16 bit PNG
rawwerks#0536: hey squad! quick update on the "dream fields" that i'm super excited about, but don't have the programming chops to do myself. i got in touch with the lead author Ajay. here's what he had to say about exporting the dream fields to mesh objects.
"It's possible to do the conversion. Some tools implement NeRF to mesh conversion such as https://github.com/kwea123/nerf_pl/blob/master/README_mesh.md and NVIDIA's non-commercial Instant NGP tool. It'll have to be custom implemented since our code doesn't include the conversion, but doable. I've been experimenting with some regularizers to improve the export quality but those are still baking in the oven 🙂 There's also this implementation which doesn't need real images since colors are queried from the NeRF model: https://github.com/bmild/nerf#extracting-geometry-from-a-nerf "
(tangentially related, the ClipMatrix guy is not going to release his code, i emailed him too.)
if you are interested in exploring extending this work into the 3rd dimension, please reply/DM! i think this would be amazing and really fun to work on.
CC: @nev @Isaiah @Clay Mullis
rawwerks#0536: https://i.ytimg.com/vi/eCFN9i6ZtX4/hqdefault.jpg
chirp#4545: finally learned why gradient preconditioning is a thing... was confused for so long lol
chirp#4545: tldr - regular gradient descent works less well if different directions have very different curvature, and second order methods like Shampoo can fix that by effectively operating in a "nicer" space, using a "preconditioner" matrix to transform between one space and the other https://cdn.discordapp.com/attachments/729741769738158194/939037326137524254/Untitled.png
chirp#4545: From Roger Grosse's course: https://www.cs.toronto.edu/~rgrosse/courses/csc2541_2021/readings/L04_second_order.pdf
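toy illustration of the idea, using the exact inverse Hessian as the preconditioner (which real methods like Shampoo only approximate):
```python
import numpy as np

# minimize f(x) = 0.5 * x^T A x, where curvature differs a lot per direction
A = np.diag([100.0, 1.0])
x = np.array([1.0, 1.0])
P = np.linalg.inv(A)          # "ideal" preconditioner

for _ in range(20):
    g = A @ x                 # gradient of the quadratic
    x = x - 0.1 * P @ g       # preconditioned step; plain GD with lr=0.1 diverges here

print(x)                      # shrinks at the same rate in both directions
```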
nev#4905: there is also https://github.com/qway/nerfmeshes, which from what I can tell is the most comprehensive one
I made dream fields create 3D point clouds from a video, which after some magic can turn into a mesh
to get better quality you'd need something like marching cubes on the actual volume or extracting the precise point cloud and then running a mesh reconstruction algorithm. both require access to the NeRF, which I still haven't completely figured out
as for clipmatrix, text2mesh is good at replicating its functionality and very customizable (albeit cuda 10.2 only) https://cdn.discordapp.com/attachments/729741769738158194/939050007984766986/Screen_Shot_2022-02-04_at_09.45.42.png,https://cdn.discordapp.com/attachments/729741769738158194/939050008253186088/Screen_Shot_2022-02-04_at_09.46.15.png
slimthree#2762: Hello everyone 🦾I would like to create a general artificial intelligence and I would like to find qualified people in this field because the project I want to create is great and revolutionary
cfoster0#4356: Who knows who's qualified for that 🤷♂️
cfoster0#4356: We aren't trying to speed up the development of AGI here, tbh
random person#5234: Elon Musk apparently
Tinytitan#5596: no
uwu1#4864: does anyone have any better alternatives to estimating camera parameters than colmap for the nerf? it seems like instant NGP results are very dependent on the quality of the camera parameter estimation, but colmap doesn't seem to work too good for scenes that aren't just walkaround object scans
uwu1#4864: back in my day we use agisoft photoscan but not Foss :/
𓅬 gabriel_syme 𓅬#3220: what about the methods that add those parameters as learned parameters
𓅬 gabriel_syme 𓅬#3220: I seem to recall a couple (was it the NeRF-- paper?) but not sure how good they are compared to newer, quicker approaches
p4bs#8973: QQ: is DeepMind RETRO being discussed / considered for implementation here? > https://deepmind.com/research/publications/2021/improving-language-models-by-retrieving-from-trillions-of-tokens
slimthree#2762: Thanks 😑
MicPie#9427: follow :lucid: : https://github.com/lucidrains/RETRO-pytorch
cbab#4822: how do I actually run a prompt through discord? Is it on the #art channel? Sorry I'm new here, don't know much about this
Kia#2550: `.imagine <prompt>` in #the-faraday-cage-archive
Kia#2550: not in #art
cbab#4822: oh okay thanks!
Kia#2550: no problem,do enjoy
Deleted User#0000: @Erik Nijkamp is planning on a jax port, and open sourcing the model, if replicated. he works for einstein ai
Deleted User#0000: perhaps 7B param
Deleted User#0000: he's testing my repo first on some Axxxx
Deleted User#0000: so he tells me, we'll see :^)
StellaAthena#3530: @kurumuz @finetune do you know if the fairseq dense models have the same architecture as GPT-3?
kurumuz#5695: they are not the same as GPT-3, no local attention, no learned embeddings (sinusoidal instead)
kurumuz#5695: but fairly similar other than that
StellaAthena#3530: So, they should have the same number of non-embedding parameters
kurumuz#5695: yes
kurumuz#5695: parameters should be matched
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/939298202966974504/index.png
StellaAthena#3530: This is pretty cool
StellaAthena#3530: I think I'm going to do one plot per task, but put OAI models, FairSeq models, and our models on the same plot
StellaAthena#3530: Voila
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/939302768315871303/Screen_Shot_2022-02-04_at_6.33.45_PM.png
bmk#1476: ooh I like it
StellaAthena#3530: If it's not too busy, I think I'm going to add per-organization lines of best fit?
EricHallahan#1051: I'm lost on `value`
StellaAthena#3530: It's an accuracy score
StellaAthena#3530: I haven't cleaned up the headers from my dataframe yet
bmk#1476: maybe worth adding the eval harness error bars too
kindiana#1016: oai 1.5b > oai 6b?
kindiana#1016: :thonk:
StellaAthena#3530: Yeah, I have that data just experimenting with presentation / layout
StellaAthena#3530: For this randomly chosen task in the hendrycks dataset, yes
bmk#1476: I think for hendrycks doing plots for the individual tasks will be pretty useless
bmk#1476: since the error bars are huge
bmk#1476: literally multiple percentage points
StellaAthena#3530: Yeah I just picked this as a visual example
bmk#1476: recommend doing the aggregated one instead
StellaAthena#3530: Do we know what the random baseline for these tasks are?
ANLI R1
ANLI R2
ANLI R3
HellaSwag
LAMBADA
WSC
Winogrande
StellaAthena#3530: It would be nice to add that as a dashed horizontal line
bmk#1476: lambada: 0% / infinity ppl
anli: 33%
StellaAthena#3530: BTW, the main takeaway from my experience doing analysis is that the jsons the eval harness returns are a nightmare to work with
EricHallahan#1051: Yes, we really should have more export options.
bmk#1476: what format would be preferable?
bmk#1476: I don't see what's so bad about the format
EricHallahan#1051: Nah, I would just prefer to have a dedicated HTML output. md tables are not that useful when it comes to styling.
bmk#1476: if someone wants to make a LaTeX table generator for eval harness that would be awesome
bmk#1476: and HTML
StellaAthena#3530: It’s a lot of work to turn it into a usable pandas dataframe, though half the conversion code was written by Sid so maybe he did it suboptimally or I didn’t understand how to use the code he wrote
Some Point Process#3793: Yeah pd *feels* broken (at the api level at least)
bmk#1476: it should be like 20 lines
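roughly something like this, assuming the output looks like `{"results": {task: {metric: value}}}` (the actual harness JSON may nest differently):
```python
import json
import pandas as pd

def harness_json_to_df(path):
    """Flatten an eval-harness-style results JSON into a long-form DataFrame."""
    with open(path) as f:
        results = json.load(f)["results"]
    rows = [
        {"task": task, "metric": metric, "value": value}
        for task, metrics in results.items()
        for metric, value in metrics.items()
    ]
    return pd.DataFrame(rows)
```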
bmk#1476: almost as bad as matplotlib
Some Point Process#3793: Yeah I guess. But optimal visual layout is
palpably tough to get right
Louis#0144: haha is this what you were posting about on twitter
Some Point Process#3793: especially with just a one pass renderer or lack of constraint based semantics etc
Louis#0144: "how do you visualize all these models"
Louis#0144: no it doesnt
Louis#0144: pandas is magical
Louis#0144: lmao
Louis#0144: git gud
Some Point Process#3793: Did you ever take databases?
Louis#0144: yes
Louis#0144: pandas is hands down one of the greatest apis ever made
StellaAthena#3530: Yea
Louis#0144: it has steep learning curve i will admit
Some Point Process#3793: ok then.. it just feels worse than the optimal database crud language
Louis#0144: but once you get over the bump it is amazing
Louis#0144: it isnt made to replace that
bmk#1476: pandas API is terrible
Louis#0144: no u
Louis#0144: i really like it
Louis#0144: i did R for a *long* time
bmk#1476: sql is way better
Louis#0144: ok
Louis#0144: blocked
Louis#0144: wtf is wrong with u
bmk#1476: we don't talk about R around here
Louis#0144: sql is trash
Louis#0144: literally NoSQL or bust
Louis#0144: i *hate* SQL
bmk#1476: sql is literally the best wtf
Louis#0144: :sus:
Some Point Process#3793: https://en.wikipedia.org/wiki/Object%E2%80%93relational_impedance_mismatch
bmk#1476: also most nosql systems have their own shittier query language
Louis#0144: i worked on SQL compilers for a year
Louis#0144: I hate sql with a passion
Louis#0144: lmao
Louis#0144: nothing can make me unhate sql
Louis#0144: anyway #off-topic
bmk#1476: the solution is just to use SQL without touching the compiler :smug:
StellaAthena#3530: Ugh I forgot to run evals on Ada
Some Point Process#3793: The idea is that you want the semantics/syntax of your language to match the task/data structure at hand. Imperative languages just aren't the right fit for db manipulation. (evidenced by e.g. SQL being a bit less expressive than Turing complete languages, but this is too formal to be meaningful i think)
StellaAthena#3530: Also I should probably do it with Neo too
Louis#0144: yeah dont get me wrong when you are using pandas, you use it *like* SQL
Louis#0144: but pandas if done right is basically SQL with imperative sugar
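e.g. a SELECT/WHERE/GROUP BY reads almost one-to-one (toy numbers, nothing real):
```python
import pandas as pd

df = pd.DataFrame({"model": ["gpt-j", "gpt-j", "fairseq"],
                   "task":  ["anli", "wsc", "anli"],
                   "acc":   [0.34, 0.60, 0.33]})

# SQL: SELECT model, AVG(acc) FROM df WHERE task = 'anli' GROUP BY model
out = (df[df.task == "anli"]
       .groupby("model", as_index=False)["acc"]
       .mean())
print(out)
```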
EricHallahan#1051: As long as it is a separate series that is a good idea.
Some Point Process#3793: perhaps..
StellaAthena#3530: @bmk remind me what models you ran APPS on
bmk#1476: NeoX-20B
StellaAthena#3530: It looks like the APPS paper only reports GPT-3 175B and *finetuned* results on GPT-Neo and GPT-2
StellaAthena#3530: Meh
bmk#1476: also our numbers don't do beam search
bmk#1476: because neox doesn't have beam search implemented afaik
bmk#1476: does gooseai have beam search?
StellaAthena#3530: @kurumuz
kurumuz#5695: uh we could have it
bmk#1476: hendrycks APPS prescribes a beam of 5
kurumuz#5695: but we dont
kurumuz#5695: who uses beam search??
kurumuz#5695: lol
bmk#1476: I personally think beam search bad actually
StellaAthena#3530: Same, tbh
bmk#1476: something something aaaaapill
bmk#1476: but that means we need to get non-beamsearch results to compare
StellaAthena#3530: We also don’t really have the ability to finetune, the way APPS does
StellaAthena#3530: Not for all the models, at least
bmk#1476: hmm yeah
StellaAthena#3530: Maybe we could do top-15, or top-25 sampling
StellaAthena#3530: We could also try replicating the evaluation method in my plagiarism paper? Obviously not for a lot of inputs because it requires human feedback, but some
StellaAthena#3530: And focus on Codex 6B, GPT-J, GPT-NeoX, and Codex 175B
StellaAthena#3530: (IIRC FairSeq can’t do code?)
zphang#7252: Fairseq was trained on mainly regular text
Sphinx#2092: Would be good to report both beam and sampling. It would be nice to have more results comparing the two.
Sphinx#2092: A nice little taskonomy to see which search does better on what tasks would be nice.
random person#5234: I am probably gonna get a link to a pin but are there some stories/sprints that are low priority that someone new can take
random person#5234: Like low priority/low consequence stuff
StellaAthena#3530: What languages do you speak?
random person#5234: Like coding?
StellaAthena#3530: No, like human languages
random person#5234: Mandarin and English
StellaAthena#3530: What ML experience do you have?
timudk#8246: Functorch actually did the trick for a simple MLP but once I moved to something more complex I got the following "NotImplementedError: Cannot access storage of TensorWrapper". This seems to be a known issue? (https://github.com/pytorch/functorch/issues/14) @chilli
random person#5234: I am familiar with some of the sota stuff for transformers(assuming you only care about my experience with large MLM models/multimodal) but not a lot of experience working with models above 500M parameter. I am ok with Pytorch and understanding of the hw stuff underneath but no experience with JAX or lightning.
random person#5234: Specific to language transformers, I am ok with my understanding of bert/mlm transformer models from BPE to positional encoding to qvk stuff etc etc but again the key I am missing is practical experience with models that are large.
random person#5234: So I guess all in all I am fairly beginner lol
random person#5234: Some of my recent interest has been in more linformer/reformer type of reducing attention computational time stuff
chilli#5665: Do you have a repro?
StellaAthena#3530: @random person do you have a GPU?
random person#5234: Yes
random person#5234: I have a 3090
random person#5234: I am playing around with the new vit L14 clip on it lol
EricHallahan#1051: (More than I have :berk:)
StellaAthena#3530: You have dozens of A100s, just not under your desk @EricHallahan
random person#5234: When I successfully start my big person job in a few month, hopefully I can afford more pointless spending in GPU
EricHallahan#1051: I also have access to TPUs but I don't use either. 😛
random person#5234: I have better ROI on gpu than crypto
timudk#8246: Trying to decipher what's going on right now. Unfortunately, I don't know which module/part of the model is causing the issue. https://cdn.discordapp.com/attachments/729741769738158194/939345419274711060/Screen_Shot_2022-02-04_at_9.22.41_PM.png
chilli#5665: That error typically indicates something going wrong with an operator that’s directly accessing storage
chilli#5665: Could you give a slightly larger part of the callstack lol
StellaAthena#3530: @random person A good place to start is to play around with the codebases we use for most of our work. We’re using this to train models: https://github.com/EleutherAI/gpt-neox and this to evaluate trained models: https://github.com/EleutherAI/lm-evaluation-harness
Why don’t you start off by setting it up and getting a 125M model ("small.yml" as they’re named after the GPT-3 paper labels) up and running on your system. Once you’ve poked around the code a bit, I can send you a pretrained small model and you can try finetuning it on some task of interest to you.
If that goes well, we can start talking about research projects ^_^
random person#5234: Ok great thanks!
random person#5234: Let me just copypaste this into my calendar so I get pings to remind myself
random person#5234: Just curious
random person#5234: What is deepspeed
random person#5234: Is this another add on on top of apex?
chilli#5665: It’s a library from Microsoft for distributed training
random person#5234: Hmm I just always used Pytorch's thing for it
random person#5234: But I didnt need like hundreds of gpu
StellaAthena#3530: Yeah, this is good for when you have multiple machines linked together
StellaAthena#3530: It’s overkill for training a small model on your GPU, but by using it we can take code you write for your GPU and run it on many GPUs without modification
random person#5234: have a slight issue with triton being weird... sorry if this is a dumb question
random person#5234: ERROR: Could not find a version that satisfies the requirement triton==0.4.2 (from versions: 0.1, 0.1.1, 0.1.2, 0.1.3, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.3.0)
ERROR: No matching distribution found for triton==0.4.2
EricHallahan#1051: Yeah Triton can be weird. You can ignore it for now, just make sure the other packages install.
random person#5234: ok... does deepspeed just not need it or?
random person#5234: should I try to build the fresh v1.1 from git?
EricHallahan#1051: IIRC we don't use the features that require Triton.
random person#5234: oh ok delete and ignore then 🙂
random person#5234: thank you
EricHallahan#1051: I would build 0.4.2 if you want to try that.
random person#5234: I got an error with missing data/gpt2-vocab.json
random person#5234: and also fixed the triton thing...
StellaAthena#3530: ctrl+f "vocab" in the readme
random person#5234: yea my bad sorry, should have checked my curl naming before
random person#5234: ok i got it running on my system but it says something strange about not finding an apex installation and defaulting to deepspeed's adam.
random person#5234: i used the apex install from conda forge, should I have built it from nvidia-apex's github instead?
random person#5234: You wanna toss me a small pretrained model and I can just finetune/add a small neural network on the back?
StellaAthena#3530: Yeah, that tends to work better. Apex is cranky
random person#5234: Kk
𓅬 gabriel_syme 𓅬#3220: I choose no sql
𓅬 gabriel_syme 𓅬#3220: Not sure where to add this question. Is there an efficient way to do backtracking (might be wrong term) on LM sampling? The idea is that you have a list of required tokens and when the model doesn't generate one of those next, you resample again (but from that point onwards). Is this stupid? Is it simple?
StellaAthena#3530: You can ensure that you always sample from a particular list of tokens if you wish
StellaAthena#3530: You can efficiently compute the log likelihood of the next token being `t`, and therefore iterate over your list of desired tokens and renormalize
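roughly, for one sampling step (assumes you already have the next-token logits from the model; the whitelist is up to you):
```python
import torch
import torch.nn.functional as F

def sample_from_allowed(logits, allowed_token_ids):
    """One step of constrained decoding: sample only from a whitelist of token ids."""
    mask = torch.full_like(logits, float("-inf"))
    mask[allowed_token_ids] = 0.0
    probs = F.softmax(logits + mask, dim=-1)   # renormalizes over the allowed set only
    return torch.multinomial(probs, 1).item()
```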
𓅬 gabriel_syme 𓅬#3220: Cool thx Stella! Sounds like what I need
𓅬 gabriel_syme 𓅬#3220: Wonder if I should retrain to add special tokens before the positions I want to do that, I'll try things out
StE_gUy#5856: Where's the best channel to share funny outputs from LMs? Would that be #the-faraday-cage-archive ?
StE_gUy#5856: Or perhaps #off-topic
bmk#1476: #the-faraday-cage-archive
StE_gUy#5856: Also, a shame that #prompting got deleted. I took a break from this server for a few months and came back to find it missing 😦
chilli#5665: @alstroemeria313 I wrote a flop counter that does hierarchies
chilli#5665: pretty quickly
chilli#5665: the only thing I'm missing is a formula for convolution backward
chilli#5665: :sadge:
chilli#5665: anybody wanna figure it out for me 🙂
chilli#5665: (and also works with backwards)
chilli#5665: or, alternately
chilli#5665: does anybody know how to write a convolution backwards in terms of regular convolutions?
chilli#5665: So, if you have
chilli#5665: `convolution(input, weights, transposed)`
chilli#5665: is convolution_backward simply
chilli#5665: ```
convolution(grad_out, weights, transposed)
```
and
```
convolution(grad_out, inputs, transposed)
```
chilli#5665: ?
chilli#5665: that doesn't seem right...
chilli#5665: all I need is this one formula....
chilli#5665: and I think it more or less obsoletes this stuff... https://www.lesswrong.com/posts/fnjKpBoWJXcSDwhZk/what-s-the-backward-forward-flop-ratio-for-neural-networks
chilli#5665: if nobody tells me the answer by tomorrow I'm going to need to think about it :sadge:
chilli#5665: 🤔
chilli#5665: I want flop counts
chilli#5665: is what I really want this for
chilli#5665: @Deleted User are there any good flop counters for tensorflow/jax?
Deleted User#0000: let me see if it's open source
Deleted User#0000: this is the available public one https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/hlo_cost_analysis.cc
chilli#5665: :thonk:
chilli#5665: I want an user usable one
chilli#5665: lol
Deleted User#0000: but a fun weekend project for you to invoke from torch xla? :chadgoose:
chilli#5665: I mean, I just wrote an uber simple flop counter in PyTorch that works well
chilli#5665: (and works for backwards and such)
chilli#5665: I just don't know the formula for convolution backwards 😠
chilli#5665: I think this is actually the perfect flop counter
chilli#5665: lol
chilli#5665: 1. it works in eager-mode
chilli#5665: 2. it captures module hierarchies
chilli#5665: 3. it works with autograd (or any other function transform)
chilli#5665: oh, and 4. it's super simple (like... 200 lines of python, and 120 lines of that is copying FLOP formulas from elsewhere)
chilli#5665: here's an example output for resnet18
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/939493498497937488/unknown.png
chilli#5665: I'm just not sure the convolution_backward formula is right 😦
Deleted User#0000: I mean, is it going to be that useful if it's not backend specific and taking into consideration lower level optimizations
chilli#5665: yeah
Deleted User#0000: a generic formula has no use for me if im running on TPU eg
chilli#5665: well, the purpose is to compute percentage of peak
Deleted User#0000: and without knowledge how certain kernels are implemented
chilli#5665: just because practical performance varies widely
Deleted User#0000: and where fusion generally happensf or certain ops etc
chilli#5665: doesn't mean that it isn't useful to know what peak is
chilli#5665: well, see, 1. fusion doesn't change flops
chilli#5665: and 2. flops for pointwise ops basically don't exist
chilli#5665: well, really, flops for non-matmul/convolution ops
chilli#5665: oh, sure
chilli#5665: but the point of flop counters isn't for practical performance lol
chilli#5665: it's to sanity check things like "what percentage of peak flops am I getting"
chilli#5665: "how crappy is this code"
chilli#5665: etc.
chilli#5665: "how much is not doing fusions fucking me"
chilli#5665: hmm
chilli#5665: in principle, I could definitely modify this code to do per-op benchmarking too
chilli#5665: although it's more difficult, since cuda syncs and so on
Deleted User#0000: I guess they are different purpose, if it's as part of tuning passes the practical performance matters more
chilli#5665: of course
chilli#5665: but when you're optimizing things, it's always nice to know what percentage of peak you're getting
chilli#5665: flops are just nice to know in general
chilli#5665: and it's very annoying to me that all of the current flop counters suck (particularly in PyTorch, but I can't find good ones for Tensorflow/Jax either)
chilli#5665: especially when it's so easy to make a good one
chilli#5665: sigh
chilli#5665: @kindiana lmk if you happen to know the answers to this question
chilli#5665: My current guess is
```
convolution(grad_out, weights, not transposed)
convolution(grad_out, inputs, transposed)
```
StellaAthena#3530: @chilli if $f\ast g = h$ then we can recover $f$ from $g$ and $h$ by doing $$f = \mathcal{F}^{-1}\left(\frac{\mathcal{F}(h)}{\mathcal{F}(g)}\right)$$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/939532642536276018/193204646687408129.png
StellaAthena#3530: Although this is mathematically correct and relatively straight forward to compute, I don’t know if this is the actual operation performed inside PyTorch.
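quick numerical check of the identity for circular convolution (PyTorch's conv is zero-padded cross-correlation, so this is just the math, not its actual kernel):
```python
import numpy as np

rng = np.random.default_rng(0)
f, g = rng.standard_normal(64), rng.standard_normal(64)
h = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real      # circular convolution f * g

f_rec = np.fft.ifft(np.fft.fft(h) / np.fft.fft(g)).real  # recover f from h and g
print(np.abs(f_rec - f).max())                            # ~1e-14
```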
StellaAthena#3530: Actually
StellaAthena#3530: For neural networks I think you just transpose the matrix?
StellaAthena#3530: e.g., if convolution is $Y = WX$ then deconvolution is $X = W^T Y$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/939534321344184340/193204646687408129.png
random person#5234: btw, build apex from github did make the apex warning go away lol
random person#5234: you said there were a specific finetuning tutorial/thing you wanted me to go through?
random person#5234: actually lets see if bf16 works on 3090 lol
EricHallahan#1051: IIRC it doesn't but don't quote me on that.
random person#5234: well the bf16 script does work and its like slightly slower than fp16
random person#5234: 🤷♂️
StellaAthena#3530: No, I said that you should play around with the library and familiarize yourself with the configuration options and how they function.
random person#5234: ah ok got it
EricHallahan#1051: I would go as far to say that the configuration subsystem is the most novel addition of the library over other variants and derivatives of Megatron-DeepSpeed.
random person#5234: well it does look cleaner than 1000 args/kwargs
StellaAthena#3530: That’s the goal, yes :berk:
StellaAthena#3530: There were some other novelties but they’ve been largely adopted by Megatron-DeepSpeed
StellaAthena#3530: What I said @random person (or, meant to say at least) was that once you feel acquainted with the codebase, setting up and running a finetune is a good way to check your understanding
random person#5234: kk np thanks
random person#5234: why did you ask what language I speak btw?
random person#5234: did you need some docs translates or something haha
StellaAthena#3530: One area of low-hanging fruit is to validate ideas that have only been applied in a monolingual setting to a multilingual setting
StellaAthena#3530: However collecting, processing, and validating data is by far best done by someone who actually speaks the language the data is in
random person#5234: I see. make sense
StellaAthena#3530: 🙂
StellaAthena#3530: Feel free to pick whatever application and dataset to finetune with. Ideally something you’re familiar with and interested in.
random person#5234: yea I am gonna play with few of the models first 🙂 thx
chilli#5665: Yeah but I want it in terms of forwards convolution ops
chilli#5665: Like, I have a really nice flop counter I think
chilli#5665: The only thing I’m missing is a formula for conv backwards
CRG#8707: Shouldn't it be the same?
chilli#5665: ?
CRG#8707: Like, the same flops as forward (and another for the gradient wrt the weights)
chilli#5665: No
chilli#5665: I don’t think so
nev#4905: if it's just doing a transposed convolution it is
alstroemeria313#1694: can it do like... grouped convolutions >_>
chilli#5665: Well, if my formula is right sure
chilli#5665: lol
alstroemeria313#1694: eheh
chilli#5665: I think the formula is right for the forwards pass
alstroemeria313#1694: ahh
chilli#5665: Why?
chilli#5665: This is the formula for forwards: https://github.com/facebookresearch/fvcore/blob/6a5360691be65c76188ed99b57ccbbf5fc19924a/fvcore/nn/jit_handles.py#L127
alstroemeria313#1694: also it can tell if like, you're actually doing the gradient wrt the weights or if the weights do not require grad?
zphang#7252: from what I can tell, to do pretraining with t5x, we'd need to either set up the pile as a tfds dataset, or use a preprocessed version with tf Examples from tfrecords. Either way it goes through seqio. Does that sound about right?
chilli#5665: Yeah
chilli#5665: It’s basically a flop counter that works in eager mode
chilli#5665: I dunno, maybe I’m overthinking it and it is true that the flops for convolution for backwards is double that for forwards
Sphinx#2092: You could technically pass in whatever you want for t5x, as long it comes out as tf.data.Dataset I believe.
zphang#7252: interesting. I'm just trying to figure out how to pipe a dataset in and not have to worry about IO performance, and I'm praying that the more I follow the standard stack the more likely I won't screw something up
zphang#7252: I'm not sure at which point the tokenization (and caching?) actually happens
Sphinx#2092: That depends on where you define it in your seqio task.
Sphinx#2092: That said, if you don't want to use seqio, you just have to replace this value: https://github.com/google-research/t5x/blob/main/t5x/train.py#L103
Sphinx#2092: and pass in your own custom `get_dataset` function.
Sphinx#2092: You can do this with gin, so you don't have to actually modify anything but the config.
zphang#7252: and that should hopefully handle things fairly optimally?
zphang#7252: or I guess it'd depend on the dataset function
Sphinx#2092: Depends on what "things" are. If you pass in your own data pipeline, then that's on you.
Sphinx#2092: But as long as all data pipelines end as tf.data.Dataset, then the training script will use them as they should.
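bare-bones sketch of such a pipeline over pre-tokenized TFRecords (the "targets" feature name, shapes, and batching are assumptions; check what your t5x/seqio config actually expects):
```python
import tensorflow as tf

def get_dataset(file_pattern, seq_len=1024, batch_size=8):
    """Minimal tf.data pipeline over pre-tokenized TFRecords."""
    features = {"targets": tf.io.FixedLenFeature([seq_len], tf.int64)}

    def parse(record):
        return tf.io.parse_single_example(record, features)

    files = tf.data.Dataset.list_files(file_pattern, shuffle=True)
    ds = files.interleave(tf.data.TFRecordDataset, num_parallel_calls=tf.data.AUTOTUNE)
    return (ds.map(parse, num_parallel_calls=tf.data.AUTOTUNE)
              .shuffle(10_000)
              .batch(batch_size, drop_remainder=True)
              .prefetch(tf.data.AUTOTUNE))
```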
zphang#7252: hmm okay, this is a helpful point to start looking at anyway, thanks!
Sphinx#2092: Good luck! Let me know if you have any other questions. I use t5x pretty extensively these days.
zphang#7252: will do!
zphang#7252: also, it feels like the public flaxformer and t5x releases... don't really line up?
Sphinx#2092: What do you mean?
zphang#7252: flaxformer's readme says t5x uses flaxformer
Sphinx#2092: oh this might be a :works_internally: thing.
zphang#7252: but I don't believe t5x makes any reference to flaxformer (other than some config files, but I don't think those reference flaxformer code either)
Sphinx#2092: Yeah so I can provide some context. flaxformer was the original thing that was plugged in for t5x (though you can of course use whatever you want, as long as you can pass it through gin). But it was taking them a while to open source for some reason
zphang#7252: right it feels like two separately open-sourced repos?
Sphinx#2092: so Hyung Won made scalable_t5 and t5
Sphinx#2092: but then they open-sourced flaxformer anyways?
Sphinx#2092: but flaxformer is sort-of like, very advanced code
zphang#7252: aha, yes that lines up about with what I thought
Sphinx#2092: like reading TF2's implementation of beam search
Sphinx#2092: where it's written for advanced beings.
Sphinx#2092: Meanwhile, scalable_t5 and t5 is written for mere mortals
Sphinx#2092: and they are also "frozen" in the sense that i don't think that code will ever change again
zphang#7252: right, I actually thought flaxformer hadn't been open sourced yet (and was waiting for it), but it turns out it was already open-sourced, and no one really paid attention to it?
zphang#7252: so it sounds like flaxformer is more pure (efficient/scaling) modeling code, while t5x currently is more all-in-one w/ some batteries included
zphang#7252: but it's possible to plug flaxformer into t5x?
Sphinx#2092: There's too many politics for me to give a clear answer to your first comment, but for the second one, yes.
Sphinx#2092: In fact, I would argue most uses of t5x internally do something like that.
zphang#7252: interesting
Sphinx#2092: but `t5` and `scalable_t5` are really good though.
zphang#7252: okay, maybe I'll use those until I run into constraints/flaxformer gets more eyeballs/updates
Sphinx#2092: I think they should both suffice. They support most things you could possibly ask for from a standard transformer implementation
zphang#7252: thanks! this was all super helpful
StellaAthena#3530: @chilli CRG is correct here. If convolution is $Y = WX$ then deconvolution is $X = W^T Y$. So there isn’t any difference between the forward and backward compute requirements (in stark contrast to say a transformer)
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/939599858329858068/193204646687408129.png
CRG#8707: Why in contrast to transformers? If you include the gradient wrt to the weights dW = dY^T X, then total compute should be 3x, right?
chilli#5665: This is true in a vague high level sense but does not help me in actually computing it
chilli#5665: I'm not convinced simply multiplying by 2 accounts for things like stride or dilation
StellaAthena#3530: What are you seeking to count, exactly? The number of real multiplications?
chilli#5665: Yes, the number of flops
StellaAthena#3530: Do FLOPS counters count additions as well or only multiplications
CRG#8707: I'm pretty sure this cancels out exactly
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/939603315724005386/unknown.png
CRG#8707: Like, stride in the forward pass is dilation in the backward pass etc
chilli#5665: so let's say I have this formula for the flop count of a convolution
chilli#5665: that depends on the input shape, the weight shape, the output shape, and whether the convolution is transposed
chilli#5665: what is the correct formula for the backwards pass? In terms of grad_out shape, grad_in shape, input shape, weight shape, and whether the forward convolution was transposed
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/939603689314852874/unknown.png
chilli#5665: maybe it's not possible to write it in terms of that formula 🤔
chilli#5665: I'm pretty sure I have the convolution for the gradients wrt the inputs correct
chilli#5665: I just don't have the convolution for the gradients wrt the weights
anthony_fuller#1075: @𓅬 gabriel_syme 𓅬 you use TPUs from TRC right? We get ~100 GB of storage, but my dataset is ~300 GB, do you have any idea what the best way to load data is? Just load it over the network?
chilli#5665: ok I figured it out @CRG
chilli#5665: I think it is indeed 2
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/939617015084245013/unknown.png
chilli#5665: but the 2 convolutions it corresponds to are 1. convolution(grad_out, weights, not transposed) and 2. convolution(input, grad_output, transposed)
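so as a sketch, following the fvcore-style forward formula linked above and treating the 2x as this thread's conclusion rather than a verified fact:
```python
from math import prod

def conv_flop_count(x_shape, w_shape, out_shape, transposed=False):
    # x_shape = (N, C_in, *spatial_in), w_shape = (C_out, C_in // groups, *kernel)
    batch = x_shape[0]
    spatial = x_shape[2:] if transposed else out_shape[2:]
    return batch * prod(w_shape) * prod(spatial)   # multiply-accumulates

def conv_backward_flop_count(x_shape, w_shape, out_shape, transposed=False):
    # grad wrt input + grad wrt weights: each costs about one forward convolution
    return 2 * conv_flop_count(x_shape, w_shape, out_shape, transposed)
```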
chilli#5665: @alstroemeria313 @StellaAthena here's that flop counter I said I might do previously: https://pastebin.com/rUPHgu3Q https://cdn.discordapp.com/attachments/729741769738158194/939623504310071326/unknown.png
chilli#5665: results in https://cdn.discordapp.com/attachments/729741769738158194/939623595997528104/unknown.png
chilli#5665: You could even use this for computing flops of fancier stuff, like jacobian computations or hessian-vector products
chilli#5665: (although that probably won't work well with the hierarchical structure maintaining)
chilli#5665: All of it done in about 120 lines of Python (and ~100 lines of flops formulas I copy pasted)
chilli#5665: @Deleted User btw here's that flop counter I was mentioning
alstroemeria313#1694: ooo
alstroemeria313#1694: ...does it work on stable or just nightly? ^^;;
chilli#5665: Prolly nightly - I think I could modify it to work on stable though
alstroemeria313#1694: *nods*
Deleted User#0000: nice job! Only the absolute tiniest bit triggered that all type annotations are List when it could be Sequence since you are not relying on any mutable sequence / list specifics in the implementations:goose:
chilli#5665: I just copy pasted those from fvcore
chilli#5665: lol
Deleted User#0000: time for a 'friendly' PR to your colleagues :berk:
Deleted User#0000: does meta have a concept of readability for a specific language to approve prs? wondered that before
chilli#5665: No, thank god lol
chilli#5665: Not that they would apply in these cases
chilli#5665: Since it’s all oss
Deleted User#0000: I found it massively irritating at first but it can drill some good habits. Although I feel for C/C++ it matters more as more footguns
Teemochu#8740: You're on Windows, Triton stopped supporting Windows a long time ago... the rest should be fine though
Teemochu#8740: if you really insist on using Triton for anything I recommend dual-booting Ubuntu or your distro of choice (though for just running neox it's not necessary, just remove the requirement)
Teemochu#8740: (I know you're on Windows because I ran into this exact problem before)
Teemochu#8740: It does
chilli#5665: Yeah perhaps - I definitely could see it being nice for learning more about C++
Teemochu#8740: For future reference, bf16 is supported on Ampere
random person#5234: yea I normally use ubuntu for dev but i have my windows at the time. was too lazy to switch lol
random person#5234: i figured it was that and switched.. all good
EricHallahan#1051: Ah I must have been thinking of TensorFloat32.
Teemochu#8740: yeah bf16 is great imo
Teemochu#8740: ~2x faster than fp32 and not much worse (at least for small models)
random person#5234: oh is triton just used for sparse attention?
Teemochu#8740: would make sense if it is
EricHallahan#1051: Yep.
EricHallahan#1051: That's why I said not to worry about it.
random person#5234: its all good, been playing around with the medium/small toy models from neox
Teemochu#8740: also check this out if you don't know much about transformers already (or if you do) https://dugas.ch/artificial_curiosity/GPT_architecture.html
random person#5234: ah thanks! i have a good idea on how traditional transformers work but not the particulars of GPT-3 or any of the crazy 100B+ models
Teemochu#8740: yeah it's easily one of the best written guides on the Internet
Teemochu#8740: this is also good but the code is a bit outdated / has a few nonstandard particulars in it http://peterbloem.nl/blog/transformers
random person#5234: good diagrams with matrix sizes!
random person#5234: thats one of my pet peeves, i hate diagrams without dimensions
ilovescience#3282: here is a thread I wrote on some of the best transformer resources:
https://twitter.com/iScienceLuvr/status/1471032149100797954
Teemochu#8740: (the main nonstandard particular is that *each head* has size d in this code while it should be d//h)
ilovescience#3282: I particularly like this one actually
random person#5234: yea thats just how normal multihead attention works right?
Teemochu#8740: yeah I didn't say it's *bad*, just that iirc the code needs a few small line changes to not error out, and the heads need to be shrunk (*total* dimension should be d, not per-head) to make it match the GPT arch
random person#5234: also sorry if I have asked some more beginner questions here
random person#5234: i know the server rules dont allow it
chilli#5665: Yeah this one is my favorite
Teemochu#8740: imo those aren't super beginner questions
Teemochu#8740: "beginner question" mostly refers to stuff where it's pretty clear the person doesn't know yet how to investigate the code base, or especially stuff like "how do I pytorch 101"
random person#5234: i will try to avoid those but pytorch is the frameworks i use most frequently so...
random person#5234: hopefully my questions wont be too bad
Teemochu#8740: If you use it a lot you are probably not asking "pytorch 101" questions 😛
random person#5234: actually while I am here, something always bugged me about transformer/vit/etc. the way the positional embedding is given is directly addition to the input embeddings. does the input nn just learn to not put relevant information in the particular positions the positional embedding is added on?
random person#5234: I understand it works well for both language and vision but I am not sure the "why" behind adding it directly vs concat
cfoster0#4356: it doesn't have to be
cfoster0#4356: For ex. rotary positional embeddings do it with a multiplication, and T5 / ALiBi do it by biasing the attention
Teemochu#8740: Yeah I'm a big fan of alibi personally
EricHallahan#1051: Concatenating would make it more computationally expensive since you are going presumably increase the model dim by doing so.
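toy comparison of the two options (adding keeps the width at d_model; concatenating doubles it, so every downstream weight matrix grows):
```python
import torch
import torch.nn as nn

vocab, d_model, max_len = 50257, 768, 2048
tok_emb = nn.Embedding(vocab, d_model)
pos_emb = nn.Embedding(max_len, d_model)

ids = torch.randint(0, vocab, (1, 128))
pos = torch.arange(ids.shape[1]).unsqueeze(0)

x_add = tok_emb(ids) + pos_emb(pos)                       # (1, 128, 768): width unchanged
x_cat = torch.cat([tok_emb(ids), pos_emb(pos)], dim=-1)   # (1, 128, 1536): doubles the dim
```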
𓅬 gabriel_syme 𓅬#3220: yeah you need a bucket to work with the VMs on anything serious. You can save your dataset there and load it into memory (or not) most of the time to process it
random person#5234: got a quick question on neox
random person#5234: whats the canonical way of doing transfer learning on it? is it with directly modifying the megatron stuff to add stuff on the decoder or using wrappers like huggingface/etc
restaurant#9688: Hey what type of sampling does the Goose.AI playground do by default? A bit confused by the default slider positions
cfoster0#4356: Try #release-discussion
restaurant#9688: gotcha
HostsServer#2628: Is there anyway to check what models are loaded in vram?
HostsServer#2628: Having an issue on colab where i cant clear cache because somethings didnt get unloaded from vram but i have no idea what it is or named to remove it
HostsServer#2628: Using Pytorch
spirit-from-germany#1488: is there someone here with proficient knowledge of HF Transformers? 🙂
Orz#3023: on colab isn't it as easy as clicking the restart button?
Orz#3023: you could also try `torch.cuda.empty_cache()`
Orz#3023: although I'm not really sure if it would fix it due to `pin_memory` stuff of deepspeed
The_Law#1013: Hi guys I'm really interested in EleutherAI but not quite experienced enough to contribute, I was wondering how smart/legal it is to use Eleuther's models for a twitter bot for example ? I know openai is quite strict about that
Daj#7482: Our models are open source, you can do with them whatever you want
Daj#7482: We of course would prefer you don't use them for industrialized shitposting if possible lol
The_Law#1013: I understand yeah, thanks !
EricHallahan#1051: (Where is that adaption of "Twitter Bot"?)
Emad#9608: Max did one with aitextgen here https://minimaxir.com/2020/01/twitter-gpt2-bot/ can use neo instead of gpt2 in new code easily.
He has some reasonable use guidelines in the GitHub
The_Law#1013: Oh cool that looks like a good resource thanks ! I'd be interested in the tweaking/training process more than the implementation itself, I'd love to take a crack at that myself as a sort of educational/personal project
Charlie__#5229: I was so excited when I learned there's a dataset of 20k Dear Abby questions, but it turns out it's only the questions and not the answers x.x
Charlie__#5229: I'm on the lookout for datasets of people providing good judgments because it seems like there are some easy questions to ask like "what happens if you try to extrapolate the fine-tuning, do you get even better judgment?"
wabi-sabi#5811: My understanding is that there are times when models can be improved by giving them a random number generator's output as an input but my understanding is extremely vague. Can anyone link to good discussions, explanations, or examples of this?
StellaAthena#3530: This sounds like nonsense to me
cfoster0#4356: The closest thing I can think of to this (and it's not very close) is using sampling for LM generation
kindiana#1016: gan z?
EricHallahan#1051: The only thing I can readily think of is the StyleGAN noise buffers.
&.#0001: How does NeoX work? How does it make CUDA calls?
chilli#5665: :thonk:
_joel#4800: hi all. I've spent much of the last year writing the beginning of a toolbox to explore research into ML _engineering_ itself. It's early stages and there are lots of significant changes I want to make but the outline is there. Is this relevant to what you're doing here? https://github.com/joelberkeley/spidr
bmk#1476: sounds interesting
bmk#1476: way above my paygrade though
chilli#5665: it's not really clear to me what it's actually trying to do
_joel#4800: sure. It's kind of a testing ground for design in ML engineering. It was created in response to how in industry there are limitations on what technologies and techniques you can use because many are simply too risky/nascent, and how much time you can sit and stare at a design before you need to deploy it. I wanted to see what happens if these kinds of limitations are removed. I wanted to see how programming language research, new ML hardware and more can be combined in ways that might give engineers working on more mainstream tools ideas they may want to include in their own work, or even give AI researchers ideas of new research avenues through e.g. composability
random person#5234: What exactly do you mean by MLE?
random person#5234: Like is it different from how MLE is viewed in industry?
random person#5234: Because I dont think this is really the focus for MLE in industry.
random person#5234: I mean in industry a lot of this stuff is done by like, massive amount of A/B testing
random person#5234: And the focus is to make money at the end of the day.
random person#5234: And thats not really necessarily aligned with the "best" model academically
chilli#5665: Could you give some more concrete examples?
chilli#5665: Spoken as somebody who works on one of those more mainstream tools
chilli#5665: 😛
_joel#4800: @random person
> What exactly do you mean by MLE?
> Like is it different from how MLE is viewed in industry?
> Because I dont think this is really the focus for MLE in industry.
by MLE I mean machine learning engineer. I would say I mostly mean MLE as it is viewed in industry, though MLEs in industry have a variety of focuses. Perhaps "research engineer" is closer to the kind of thing I'm focussing on. This is partly cos it's where my interests lie, partly because it involves a smaller scale which is more feasible with a project like this. I don't want to make a new tensorflow in Idris (the language I'm using).
> I mean in industry a lot of this stuff is done by like, massive amount of A/B testing
Can you elaborate? I'm not too familiar with the phrase A/B testing. How does A/B testing relate to design?
> And the focus is to make money at the end of the day.
For many businesses, yes, but is this very community not evidence that money isn't the be all and end all?
> And thats not really necessarily aligned with the "best" model academically
I think the best model academically is an area outside the project scope
_joel#4800: @chilli the go-to example for this project is using type-level shapes. The code doesn't compile unless tensor shapes match. Some scala and haskell libraries already do this, though I imagine it's more flexible and requires less ceremony with the language I've used.
Another example is the discussion I go into the tutorial about how I've gone about designing a composable Bayesian optimization library using ideas from functional programming. I hope it helps people implement complex approaches rather than get in their way. I was a developer on a Bayesian optimization library at my last place and was frustrated by how certain design choices put real constraints on what approaches would be allowed
does that help?
chilli#5665: I think type-level shapes is a rabbit hole that every person with a passing knowledge of PL thinks is a great idea, but has never really caught on.
_joel#4800: PL?
chilli#5665: programming languages
chilli#5665: I'm not saying it's a ... bad idea in principle
chilli#5665: but I think many folks with a PL background don't really understand what makes it difficult
_joel#4800: my patience has been tested on countless occasions with type-level shapes. Making them ergonomic feels like a research topic in itself, but I like that. If they are to reach the mainstream in a practical way, this gives a good opportunity to figure that out
nshepperd#2316: a while ago i was writing a haskell ml library with type level shapes
nshepperd#2316: lots of type checker extensions required to make it work naturally
random person#5234: So my point is that for machine learning engineer in industry, a lot of it focus on system design and deploying things to help business team objectives.
random person#5234: Its a lot more... applied towards in solving the problem of the business. Think ctr, churn, user retention.
random person#5234: So as you said, I think the focus of something like this would be more targeted towards research community.
nshepperd#2316: but it's kind of cool writing like a convolution op with the arithmetic relation between input, kernel, output size and padding in the type
jack#8178: one of the ways they get it wrong - static checks are not going to be ergonomic for the full range of options there. much better would be to use the shape signatures to autogenerate unit tests
chilli#5665: I personally like an approach along the lines of tfp
chilli#5665: If you’ve seen that
jack#8178: nope - link?
chilli#5665: (Tensors fitting perfectly)
chilli#5665: https://arxiv.org/abs/2102.13254
chilli#5665: Basically, add shape constraints
chilli#5665: And a Smt solver
jack#8178: hmm
jack#8178: so i like that in theory
jack#8178: and maybe I'd like it in practice - my experience with smt solvers has been very mixed
jack#8178: but... tensor shapes are simple enough that maybe they would be fine
chilli#5665: Yeah, another thing I like it for is for downstream compilers
jack#8178: one nice thing about a test-based approach is that it can be entirely marginal - my libraries don't have to use it for me to use it
chilli#5665: Have you seen torch typing?
jack#8178: yeah
jack#8178: i want basically that but with arithmetic constraints between variables and autogenerating small unit tests
jack#8178: the latter part at least would be easy to write an annotation for
chilli#5665: So…
chilli#5665: Torchtyping + tensors fitting perfectly
jack#8178: would be a great combo
jack#8178: yeah
jack#8178: hmm
jack#8178: ideally you could use z3 to spit out counterexample shapes
jack#8178: actually that could work - then accept as long as it can't find one
chilli#5665: Yeah z3 would naturally do that
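rough idea with z3's Python bindings, checking a deliberately false claim so there is a counterexample to find:
```python
from z3 import Ints, Solver, sat

# claim to check: a tensor of shape (n, 3 * k) can always be reshaped to (-1, 6)
n, k = Ints("n k")
s = Solver()
s.add(n >= 1, k >= 1)
s.add((n * 3 * k) % 6 != 0)     # ask for shapes where that reshape would fail

if s.check() == sat:
    print("counterexample:", s.model())   # e.g. n = 1, k = 1 -> 3 elements, not divisible by 6
```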
jack#8178: i guess first step would be to add constraints to torchtyping
jack#8178: hmm so the most flexible approach would be an annotation
jack#8178: like
```py
@constraint(3 * "x" == 2 * "y")
def f...
```
jack#8178: this would be much easier in a language with symbols not just strings lol
chilli#5665: Why?
chilli#5665: And why can’t you do it with python?
jack#8178: bc doesn't python already happily multiply strings by integers?
jack#8178: i can do it with python
jack#8178: i am just expecting it to be harder than it would be in eg mathematica where i could just add arbitrary rewrite rules after the fact
jack#8178: how would you do it?
chilli#5665: No? Python is fairly strongly typed
chilli#5665: I'm not totally sure what this means - but I'm sure Python can do it lol
chilli#5665: You can do anything you want in Python 🙂
chilli#5665: You just might not be able to make it fast
Kazumi#1297: You could make a variable class and assign x and y as a variable beforehand or something
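A minimal sketch of that idea, with made-up names (`Dim`, `Constraint`, and the `constraint` decorator are all hypothetical): dimension objects overload `*` and `==` so the expression builds an inspectable constraint object instead of being evaluated eagerly the way `"x" * 3 == "y" * 2` would be.
```py
# Hypothetical sketch of symbolic shape variables via operator overloading.
class Dim:
    def __init__(self, name, coeff=1):
        self.name, self.coeff = name, coeff

    def __rmul__(self, k):               # 3 * x -> scaled dimension
        return Dim(self.name, self.coeff * k)

    def __eq__(self, other):              # builds a Constraint, not a bool
        return Constraint(self, other)

class Constraint:
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs

    def check(self, **sizes):              # concrete sizes seen at call time
        return self.lhs.coeff * sizes[self.lhs.name] == self.rhs.coeff * sizes[self.rhs.name]

def constraint(c):
    """Hypothetical decorator: records the constraint for a shape checker."""
    def wrap(fn):
        fn.shape_constraint = c
        return fn
    return wrap

x, y = Dim("x"), Dim("y")

@constraint(3 * x == 2 * y)
def f(a, b):
    return a, b

print(f.shape_constraint.check(x=2, y=3))   # True: 3*2 == 2*3
```
A real checker would bind `x` and `y` to observed tensor dimensions (or hand the recorded constraints to an SMT solver, as above) rather than taking them as keyword arguments.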
chilli#5665: did you know
chilli#5665: you can write a faster relu than PyTorch's
chilli#5665: it could be even faster if PyTorch had 1 bit kernels 🤔
stephen_hogg#0133: hi everyone, I'm a former economist who works as an ML Lead for a small medical device start up in Sydney. I'd love to be of service, hope to get to meet some of you!
stephen_hogg#0133: hopefully this was the correct channel to write that
Daj#7482: Yes, welcome!
stephen_hogg#0133: if anyone's got some time in the next couple of days to talk about what's up and where to pitch in, I'd be very grateful!
Daj#7482: Unfortunately, we're pretty chaotic and don't have a streamlined onboarding process or anything. You can skim the project channels and their pinned comments to see if anything catches your eye
stephen_hogg#0133: that's cool, even that pointer is good
Daj#7482: If you wanna help with big model training in particular, you can start by getting familiar with our NeoX code and looking at the issues maybe https://github.com/EleutherAI/gpt-neox (but it's not a very simple codebase at times I'm afraid heh)
stephen_hogg#0133: hmm actually the lm evaluation harness catches my eye
stephen_hogg#0133: aside from bmk, is there anyone in particular who would be a handy guide for what needs to get done there?
Daj#7482: I'm not sure who all is currently working on that, I think guac is, and Stella always has a good overview of things. There's always work to do there
stephen_hogg#0133: good to hear
stephen_hogg#0133: grinding out a bunch of eval tasks sounds like my kinda scene, actually
stephen_hogg#0133: I'll get in touch with them
Sid#2121: does anyone have a link to that one site that's like a "leaderboard" of large LMs? something like acronomicon or something ending in omicon
Sid#2121: agh nvm found it, i was so close lol
Sid#2121: https://lair.lighton.ai/akronomicon/
StellaAthena#3530: Does anyone know of any papers that discuss the censorship of research on transformer models that do not fit the financial and political priorities of the large tech companies that own them? Especially ones that say something along the lines of “we did this research using BERT / GPT-2 / GPT-J / T5 because they're publicly available and therefore we don't have to sign agreements to let a company review our papers before publication” (as they would have to, to use GPT-3 or similar)?
Sphinx#2092: Interestingly enough, 3 of those 4 models came from large tech companies.
tpapp157#3643: The only thing I can think of was the whole AI ethics spat within Google a few years ago. To me that smelled a lot more like internal politics rather than any sort of actual company policy though.
Sid#2121: Pinned a message.
StellaAthena#3530: My intended emphasis was on public release vs privately held. I would love it if Google were to continue to release SOTA large language models for the public to use.
StellaAthena#3530: Yeah there are some leaked emails written by Carlini complaining about lawyers editing one of his papers on memorization and then his next paper looked exclusively at GPT-2, GPT-Neo, and GPT-J. But he doesn’t actually ever publicly connect the two things and I’m loath to put words in his mouth.
tpapp157#3643: Those sort of clauses are more about allowing companies to cover their ass. Ensuring none of their IP leaks, ensuring someone isn't trying to publish a hit piece, ensuring there's no information that could be used to sue them in some way, etc. Realistically, I doubt they care too much, especially considering these companies are releasing new model versions practically every year.
tpapp157#3643: Not to say we shouldn't be vigilant about these things. As these models become more productionized and start earning real money, companies will be more worried about preserving the reputation of those models, and will be more liable for any legal issues arising from deficiencies in those models.
anthony_fuller#1075: I made an auto-shape checker upon model init, not sure if it's what you guys are talking about : https://github.com/antofuller/configaformers
anthony_fuller#1075: I'm also a noob, so it's probably horrible
cfoster0#4356: https://docs.cohere.ai/finetuning-representation-models/
Surely this is a typo, no?
>>> Length of texts: The context size for text is currently 32 tokens.
𓅬 gabriel_syme 𓅬#3220: what a tip! Looks like a typo or an omission, like 'increase it to get better quality'?
Louis#0144: @adrien
adrien#5107: fixing 🙃
wabi-sabi#5811: I can't find whatever I originally read on this, but see https://en.m.wikipedia.org/wiki/Stochastic_resonance for some possible inspiration.
Some Point Process#3793: Yeah noise can remove quantization artifacts (see dithering) via decorrelation (c.f. "whitening") more generally
wabi-sabi#5811: I distinctly remember reading a really good rebuttal to something Yudkowsky wrote on how randomness is always suboptimal in the Sequences years ago, but can't find that either.
Some Point Process#3793: "Always suboptimal", pfft
Some Point Process#3793: i mean if you have some game where it's always a matter of chance (up to optimal play) whether you'll actually win, then it might be less optimal to play such a game, if you don't like the idea of risk
wabi-sabi#5811: What I want to read about and think about is, supposing that injecting noise into feature vectors can improve them, are there principled ways to choose the distributions that this noise should come from in a data dependent way?
As background, I might be being influenced by https://arxiv.org/abs/2201.12204. If ALL data is actually a sample from a distribution and we should be modeling those distributions instead, then it's more natural to think about building good purely artificial inputs to the model.
This is also reminiscent of some ideas on "ambrosia attacks" as a hypothetical opposite to poisoning attacks that I was playing with a few months ago.
HostsServer#2628: Has anyone broken colab to where your filesystem stays persistent???
HostsServer#2628: I cannot seem to get mine to clear; it's been almost 2 hours and every time I go to my colab link it has the file system loaded from before
HostsServer#2628: https://cdn.discordapp.com/attachments/729741769738158194/940390327758491759/colab-broke.gif
BoneAmputee#8363: Runtime > Manage sessions > Terminate
HostsServer#2628: Refresh page, factory reset runtime, let it timeout on its own, close browser
HostsServer#2628: Yea i did that
HostsServer#2628: Im like stuck
Zippy#1111: You can remove all cookies and site data, it would effectively wipe your browser.. there would be no way it could survive that :kek:
Zippy#1111: but like-- all logins of any kind would also be cleared
cfoster0#4356: This is not the place for general tech support questions, friend @HostsServer
gee62vsf#1188: Since the website of "The Eye" has been down, then how will the full model weights be "downloadable for free under a permissive Apache 2.0 license from The Eye"? Thanks.
StellaAthena#3530: @gee62vsf mystic.the-eye.eu
paulbricman#2527: Using transformer attention to guide user attention, and thoughts on search engines: https://paulbricman.com/thoughtware/cybersalience
𓅬 gabriel_syme 𓅬#3220: this is cool, might take some getting used to I guess. I keep focusing on the transformer attention and not reading the text 😄
Deleted User#0000: very cool idea
Deleted User#0000: would be nice to be able to upload documents there for speed-reading ^^
paulbricman#2527: You can press `reset content` and paste in your own text!
tpapp157#3643: Interesting idea but all the demo does is highlight a handful of keywords which are barely informative of the content.
aٴ#8803: Anybody ever used facebook's blenderbot? How does it perform all around?
paulbricman#2527: Sorry for the confusion, the highlights should be related to the query you set in the sidebar, not really a summary of the original text. But indeed, it's noisy as heck, and very hit or miss
tpapp157#3643: Ok. I didn't mess with the sidebar. I assumed they prepopulated decent defaults.
cfoster0#4356: I'm not sure what your need is but it sounds like you should look through the FAQ first https://www.eleuther.ai/faq/
gee62vsf#1188: As it has been Feb 9 in some time zones of the world and is now 12am GMT, Feb 9, have the full model weights become "downloadable for free under a permissive Apache 2.0 license from The Eye"? Thanks a lot.
cfoster0#4356: A dev should be delivering it via parcel mail to your local distribution center in 5-7 business days. Please hold
bmk#1476: every time we get asked the release gets pushed back a week, so it is now scheduled for Feb 16 (/s)
bmk#1476: delivery may take anywhere from 3 months to 1 year, but for a small fee of $2000 or something we will upgrade your delivery to Premium Processing and expedite it to 15 business days
cfoster0#4356: If you would like to further expedite this to Immediate Priority Delivery™️, we kindly ask that you head to a nearby convenience store and purchase $5000 USD worth of Google Play gift cards /s
AI_WAIFU#2844: alternatively funds can be sent directly to the following monero address...
zphang#7252: For urgent requests, please call the EleutherAI hotline at 555-555-5555
Our operator will read you the bits one by one
𓅬 gabriel_syme 𓅬#3220: So I'm thinking of replicating the work done here, any thoughts, pointers, advice, or experience (in case someone tried)?
https://arxiv.org/abs/2201.07207
𓅬 gabriel_syme 𓅬#3220: I like quite a few ideas/insights from that paper, like larger models being more expressive but not exactly more accurate (I've seen that a bit with my work, although that's not prompting based) and their metrics of correctness / executability (might steal some of that). But not sure if anyone has tried smth like this
Louis#0144: hey can you release it
ilovescience#3282: we'll keep asking so it never gets released :berk:
Louis#0144: ye
bmk#1476: sorry, we are currently busy processing the previous release inquiry. please try again in 6 business days
zphang#7252: what's the trick to getting more TRC quota again? is it just "use and beg for more"?
StellaAthena#3530: Yup
StellaAthena#3530: We have infinite quota, if you want some
𓅬 gabriel_syme 𓅬#3220: yeah just send again and show some output + next steps should be enough
zphang#7252: so when begging for more I should show what I've done with it?
StellaAthena#3530: Yes. In decreasing order of importance, they like to see:
1. Publications
2. Open source models
3. Preprints
4. Anecdotes
make sure to double check that you've given them a shout-out in the acknowledgments or whatever before sending the list off to them
𓅬 gabriel_syme 𓅬#3220: I've made it work so far with promise of publication (I'm 5 months late heh), a site / deployed model that gives attribution (that helped a lot), and a few anecdotes about performance along with next steps of why I need more time
StellaAthena#3530: There's a moderate gap between 2 and 3 and a large gap between 3 and 4, based on what I've heard.
Technobird22#2055: Oo thats awesome
zphang#7252: fascinating, I'll keep this in mind
StellaAthena#3530: You're basically emailing someone whose job is to make Google look good for supporting outside researchers. You want to send them something that they can hand to their PR department and say "look at all the good press we've generated"
StellaAthena#3530: Along those lines, if your work gets highlighted in news articles (especially ones that mention Google as the compute source) definitely send those articles along
ilovescience#3282: do you really need to provide a significant explanation? i thought you can just ask again and they'll extend it...
but the last time I used TRC was almost a year ago so things may have changed
𓅬 gabriel_syme 𓅬#3220: It is definitely a bit harder now, from what I've heard. I think demand is quite high
StellaAthena#3530: I'm under the impression demand has gone up a lot recently
ilovescience#3282: oh no
𓅬 gabriel_syme 𓅬#3220: Like my TPU had a catastrophic shutdown a week ago during training and it took me 3 days to replenish it
StellaAthena#3530: Do you have something you'd like to run on TPUs? Note that we don't have nearly as much v3-8 capacity... we can more easily get you a v3-256 than a v3-8 :berk:
Technobird22#2055: Seriously? lol
random person#5234: do EAI just not touch TPUs anymore?
random person#5234: since moving neox to GPU?
random person#5234: idk how you have so many TPU hours left lol
StellaAthena#3530: I wouldn't say that, but in general we (as a group) are at a point where we are much more constrained by dev hours than by compute. There's been a lot of work focusing on GPUs recently and so TPUs have been pretty underutilized
StellaAthena#3530: We could have trained a second GPT-J over the past several months with the compute we've had lying around and simply not used 👀
random person#5234: is it like hours per month or does it carry over?
Technobird22#2055: Not at the moment, no, but thanks for the offer. I do have something in mind, but alas I've been so busy recently
StellaAthena#3530: I don't think we have an hourly allocation. We have a maximum number of each type we can allocate.
random person#5234: I see. Well hope you have more devs hours! shame to waste those.
StellaAthena#3530: Be the change you want to see in the world 😉
Some Point Process#3793: *positive* change :p
Some Point Process#3793: sry that was a shitpost, damn
cfoster0#4356: Underscoring this, there is currently (and has often been) a huge delta between how much compute we have access to and how much we use, so if you've got good ideas that use brrrr please don't be shy
Kia#2550: RETRO GPT-J 👀 there are people who want to try the retrieval system on 20b, so it would be worth a shot on gpt-j
cfoster0#4356: *ideas and accompanying code
random person#5234: nah, my little multimodal experiments have been doing great on my 3090
StellaAthena#3530: Let me rephrase this: if you *want to make TPUs go brrr* we can hook you up. We can even hook you up with scoped experiments.
EricHallahan#1051: Bonus points for alignment-relevant work!
StellaAthena#3530: Please don't DM us yet more experiments that the already busy active member group will not have time to run 😛
StellaAthena#3530: But all it takes to become an active member is some time, the ability to code, and a willingness to do the work.
EricHallahan#1051: :guilty:
StellaAthena#3530: Okay, one of three isn't bad.
𓅬 gabriel_syme 𓅬#3220: I wish Jax and me were friends :guilty:
EricHallahan#1051: I was more saying I'm guilty of being an active member lol
StellaAthena#3530: oh
EricHallahan#1051: That's where I was a year ago.
random person#5234: supposedly i am taking a class this semester that uses jax
𓅬 gabriel_syme 𓅬#3220: I have like 2/3 of that, but yeah simply putting the time in the community is more than enough for most things (from my experience only)
StellaAthena#3530: \> literally begging people to use tens of thousands of dollars of compute
Technobird22#2055: :berk:
Kia#2550: Probably Hugo is interested
Technobird22#2055: iirc, TPUs without being in the TPU Research Cloud program are quite expensive, right?
random person#5234: uhhhh
random person#5234: no more than A100 x 8
Technobird22#2055: I've heard that the Google compute storage also is rather expensive
random person#5234: the price is pretty competitive with the big cloud vendors' equivalents
random person#5234: I mean its overpriced as shit compared with smaller companies like coreweave
random person#5234: but you pay for the convenience and integration
StellaAthena#3530: If you're doing a cool research project [with / for / as / idk what the right word is here] EleutherAI we will cover all reasonable associated costs.
Technobird22#2055: That's awesome 🙂
Btw, I've never explicitly said it, but just wanted to say - amazing work that Eleuther has been doing! You've come so far.
I joined this Discord somewhat near the beginning, and never expected Eleuther to achieve so much; so great job to everyone who's been involved or contributed! ❤️
CarsonPoole#0640: has a Codex-like model been discussed? Maybe GPT-J sized? Could even reuse most of the MTJ code for it
StellaAthena#3530: Yeah we mostly just need a high quality clean code dataset
Kia#2550: Genji-python
Kia#2550: It uses GPT-J and it's Public <https://huggingface.co/NovelAI/genji-python-6B>
CarsonPoole#0640: another spitball idea is pretraining a large image transformer
StellaAthena#3530: show me code and I will run it
CarsonPoole#0640: i mean which of those two would be more in line with Eleuther's goals
CarsonPoole#0640: (assuming code is there of course)
CarsonPoole#0640: https://huggingface.co/datasets/code_search_net
CarsonPoole#0640: this looks decent for a Codex dataset
CarsonPoole#0640: and includes Golang, Java, JS, PHP, and Ruby
CarsonPoole#0640: i'm bringing this up because I'd be more than happy to help with the ratio of dev hours to compute hours
Kia#2550: I think it can only do python, @kurumuz can confirm it
CarsonPoole#0640: yeah I was referring to the dataset I linked to
Kia#2550: Ow:thinkies:
StellaAthena#3530: @alstroemeria313 is currently working on training large multimodal model set-up, probably best to talk to her about image transformers.
alstroemeria313#1694: ooh?
Kia#2550: Well yes, I haven't checked the dataset but you seem to be right
StellaAthena#3530: @CarsonPoole was asking about if putting hours into doing dev work on image transformers would be a good idea, and I figured he'd be best talking to you
Technobird22#2055: There is a problem I'm trying to solve, but it's quite niche and I don't really have the technical knowledge on how best to go about solving it; Also, I feel it might be a bit too ambiguous to tackle; and might take a lot of work/time to achieve good results on. However, I do know someone who's actually doing research in this area, but using conventional algorithms and mathematical approaches rather than using ML.
Technobird22#2055: Essentially, we're dealing with a very specific type of radio data, and need to flag out which samples are noise.... and we have a *lot* of data
Technobird22#2055: Sadly, I don't think this would be very applicable/related to what Eleuther is doing
alstroemeria313#1694: image transformers?
StellaAthena#3530: This did not convey any information about what you're doing. If you take an hour to write up clearly the project idea, reference related work, etc. I'll look at it
Technobird22#2055: Sorry 😅
Honestly, I'd need to look into it more myself; I haven't actually done much work on it (my bad for being unclear)
𓅬 gabriel_syme 𓅬#3220: @ww how have the RL + Jax experiments been going?
𓅬 gabriel_syme 𓅬#3220: nice 🙂
chilli#5665: Wasn’t there already a mlir paper?
chilli#5665: We might end up building a mini mlir in Pytorch lol
Deleted User#0000: this is specifically about codegen, not MLIR the project
chilli#5665: Are you on this paper :^)
gee62vsf#1188: Does anyone have working text generation code that uses GPT-J 6B? Thanks a lot!
Kia#2550: @gee62vsf https://huggingface.co/NovelAI/genji-python-6B
gee62vsf#1188: @Kia Thank you for your message, but I got an error message "OSError: Can't load config for 'NovelAI/genji-python-6B'. Make sure that:
- 'NovelAI/genji-python-6B' is a correct model identifier listed on 'https://huggingface.co/models'". So how to fix it? Thanks.
magenta#8040: looks to me like huggingface itself is having issues; I'm getting a lot of errors
magenta#8040: 504 on this link
Kia#2550: HF isn't down
Kia#2550: that's strange
magenta#8040: https://cdn.discordapp.com/attachments/729741769738158194/940940629167460363/unknown.png
magenta#8040: well, that's all what im getting
magenta#8040: maybe they hate my country, idk
magenta#8040: @Kia I just asked on another server and two other people have issues reaching HF too. From which region are you connecting to HF?
65536william#9999: same for me from UK
Drexler#4006: So do we still implement random papers for fun in here?
Louis#0144: Yes
Louis#0144: Not random though
nshepperd#2316: i need to take more stimulants. haven't implemented a paper in too long
tpapp157#3643: It could yeah. There's value in that, though often papers don't include all the necessary details for an exact duplication. If you have your own comparable dataset or model architecture that you understand very well, then that can be more useful.
inox#5400: most of lucidrains' repos don't replicate all of the results but replicating the architectures is still very useful
Louis#0144: Meth
ethan caballero#6044: @Aran Komatsuzaki, Ashish Vaswani left google to cofound startup. I wonder if it's the same startup as Noam Shazeer's startup:
https://www.linkedin.com/in/ashish-vaswani-99892181/
Aran Komatsuzaki#5714: yeah i've been aware of the move 🙂
ethan caballero#6044: is it the same startup as Noam startup?
Aran Komatsuzaki#5714: i don't think so, but tbh i don't remember well
Deleted User#0000: nice, what's the startup name? (keeping tabs)
Deleted User#0000: and yea, i guess the rumored google brain exodus is real
ethan caballero#6044: I don't know. Noam and Ashish's linkedins say stealth startup.
Deleted User#0000: nice, one day i'll shake his hand, for (temporarily?) changing the world
ethan caballero#6044: transformer.agi
Deleted User#0000: maybe he's around SF
ethan caballero#6044: that would be hype if you, Ashish, and Noam were all at same startup.
ethan caballero#6044: I'm calling it's the same startup. Them leaving google within a month of each other is too much of a coincidence.
Do y'all know of other transformer gurus that currently have stealth startup as their current company on their linkedin?
Aran Komatsuzaki#5714: iirc the list of founders in ashish's company didn't list noam
ethan caballero#6044: where is list of founders?
Aran Komatsuzaki#5714: i'm trying to find the link, but i can't find it lol
Aran Komatsuzaki#5714: oh didn't know they're cofounding. i knew they were doing chatbots tho.
kurumuz#5695: I think researchers are way too confident with building products haha
Aran Komatsuzaki#5714: haha pretty sure they'll easily get huge funding
Aran Komatsuzaki#5714: yeah character.ai
kurumuz#5695: interesting
Aran Komatsuzaki#5714: i wanna live in mountain view or somewhere very close to googleplex from may to august.
what's the best option?
ethan caballero#6044: RV on airbnb in googleplex parking lot 🤣
ethan caballero#6044: @Aran Komatsuzaki https://cdn.discordapp.com/attachments/729741769738158194/941044393467592775/Screen_Shot_2022-02-09_at_1.53.08_PM.png
ethan caballero#6044: ^There are people in google brain who actually do this.
Aran Komatsuzaki#5714: i guess i wanna live in a larger space than RV lol
maybe airbnb is a great option
Deleted User#0000: mountain view is really boring - live up in the city close to 4th and king, and then caltrain down
StellaAthena#3530: Yoshua Bengio about the Mila/AI Sweden partnership to train LLMs https://www.linkedin.com/posts/aisweden_we-are-happy-to-announce-our-partnership-activity-6897178206386884608-YFZZ
Aran Komatsuzaki#5714: the idea is to minimize my commute time, since i'm going to the campus almost everyday, while i can go to the city for recreation at most a few days per week
Deleted User#0000: yea, but SF may be worth it.. well it depends on whether you like the city
Deleted User#0000: i'd say, take a tour of SF, if you like it enough to endure the commute, do it. you only have so many memories you can build during your twenties, and you don't want that to be mountainview
kindiana#1016: sf eleuther meetup part 2?
kindiana#1016: or is it part 3 now
Deleted User#0000: yea, and Sid needs to come around too!
ethan caballero#6044: If you want something $1000 per month or less, it's probably going to involve bunk beds in a hacker house.
Aran Komatsuzaki#5714: haha i'm fine paying much more than that lol
ethan caballero#6044: Ask on twitter, some googlers on twitter are probably looking for someone to sublet their place to.
Aran Komatsuzaki#5714: pretty sure subletting doesn't work well in this case unlike college, since google or other nearby companies don't offer the fixed "summer vacation" that colleges do
Aran Komatsuzaki#5714: i think airbnb is the best one, since it's flexible
Aran Komatsuzaki#5714: it costs like $3k/mo tho if i wanna live very close to the campus
Deleted User#0000: yea that's too high
Deleted User#0000: so find out where the google shuttle lines are
Deleted User#0000: and find a sublet close to that
Deleted User#0000: you're only doing a summer internship?
Aran Komatsuzaki#5714: yeah that's what i'm trying to. also somewhere close to a grocery store.
Aran Komatsuzaki#5714: trying to find the shuttle line map
Deleted User#0000: get it delivered 😄
Aran Komatsuzaki#5714: haha true
ethan caballero#6044: did you search with https://www.airbnb.ca/sublets ? last time I tried, "airbnb sublet" has flexible results that don't show up on normal airbnb
Aran Komatsuzaki#5714: haven't tried. i'll take a look 🙂
Aran Komatsuzaki#5714: yeah just summer
Deleted User#0000: aim for mid 1000's
Deleted User#0000: sublet
Deleted User#0000: 1000 is too low, and only places would be some crowded house in the outer sunset (i've done it before)
Deleted User#0000: 2000 is average
Deleted User#0000: 3000 is on the high end, and not worth it, unless you are dating / impressing girls etc
Aran Komatsuzaki#5714: yeah i don't mind paying a lot to make sure that i live in a comfortable place, if not 3k
Deleted User#0000: yea you'll find it, i'm doing 1600 with roommates in the Mission atm
Deleted User#0000: life is great
Aran Komatsuzaki#5714: i'm paying 1300 in atlanta because i'm dumb and lazy
Deleted User#0000: haha, i remember the days of paying 300$ a month in michigan when i was in grad school
Deleted User#0000: pm Ben too, he recently went through this
Aran Komatsuzaki#5714: i'm infinitely happier at california than michigan/minnesota just because of the weather
Deleted User#0000: hear hear
Deleted User#0000: but truth is, there are local optimums here in the bay that can make you even more happy
Deleted User#0000: 😄
Deleted User#0000: even individual neighborhoods in SF can yield very different life experiences
Aran Komatsuzaki#5714: yeah i used to live in berkeley, but i underpaid and got to live in a terrible place lol
Deleted User#0000: yea, still have ptsd of the time i woke up in a hacker house, went to the bathroom, and the toilet overflooded, feces smeared on the floor
Deleted User#0000: never again
alstroemeria313#1694: oh nooo
aaronrmm#3198: Oh wow, I found out Monday that I need to find a new place to live and ya'll are already discussing it 🙂
I just came in to compliment EleutherAI on having the only discord sticker in the 33 servers I have joined.
random person#5234: https://thenewstack.io/dark-side-life-silicon-valley-hacker-house/ @Deleted User
random person#5234: is this what it is like?
Gurkenglas#7362: What's the webapi that a rando like me can currently query for natural-language-shell purposes?
Daj#7482: goose.ai? GPT3?
StellaAthena#3530: 6b.eleuther.ai?
zphang#7252: ahhhhhhhh t5x examples not working out of the box
EricHallahan#1051: :works_internally:
𓅬 gabriel_syme 𓅬#3220: :wat:
ilovescience#3282: interesting, Jakob Uszkoreit co-founded a biomedical startup too
bmk#1476: awesome
bmk#1476: less people working on the dangerous capabilities
ilovescience#3282: where are you interning?
Aran Komatsuzaki#5714: google
ilovescience#3282: that's awesome!
ilovescience#3282: that must make me pretty awesome too not working on dangerous capabilities and studying alignment lol
ilovescience#3282: how hard was it to get this internship? tough interview?
Aran Komatsuzaki#5714: interview was super easy. no coding interview lol
ilovescience#3282: so they just asked you about research?
Aran Komatsuzaki#5714: yeah
Aran Komatsuzaki#5714: one interview phase for talking about research, after which there was another interview to decide which research group to join.
ilovescience#3282: that doesn't seem too bad
Sphinx#2092: The hardest part is getting the interview if you're not connected to someone.
ethan caballero#6044: How'd you get interview?
Aran Komatsuzaki#5714: i tweeted that i want an internship, and google researchers gave me a referral.
ethan caballero#6044: Tweet you want housing, and google researchers will give you housing.
ethan caballero#6044: @Aran Komatsuzaki which research group of googlers did you match up with?
wabi-sabi#5811: Here's my latest bad idea.
I'm trying to get guarantees for learned index functions so that you can add new points to the array without having to retrain the entire function to memorization on every sample in the array.
The procedure I have in mind is this:
0. Overfit a model of the CDF for your array as usual for learned index functions.
1. Do a forward pass on the value in array[i-1], then do a backward pass and make a copy of the suggested weights. Don't update the model for this backward pass.
2. Do a forward pass on the value in array[i+1], then do a backward pass and make a copy of the suggested weights. Don't update the model for this backward pass.
3. Do a forward pass on the value in array[i], then do a backward pass but instead of minimizing the standard optimization problem for finding edge weights, instead minimize that optimization problem subject to the additional constraints that each new edge weight should be bounded by the suggested edge weights for i's neighbors.
I am not sure what all would be involved in making the network determine edges subject to constraints on the maximum and minimum values of those edges, but I assume there's probably some way to do it but that it might be slow. I'm fine with that.
If using strictly positive edge weights and ReLU activation functions, the procedure doesn't immediately seem like it'd fail to result in "safe" updates to the learned index function that require only training on the neighborhood of the inserted point. And that seems very good.
Does this seem right to others? Can the wrong combination of weight updates still work to ruin the model's performance on samples outside the immediate vicinity of the insertion into the array?
wabi-sabi#5811: One problem that occurs to me only after posting the idea publicly, of course, is that you might have 0 error for both neighbors and yet fail to correctly place the point between them, for example by mapping both i and i+1 to the same index. This rigidity may suggest other problems with the approach.
cfoster0#4356: Tbh I didn't know what "learned index functions" were until now https://arxiv.org/abs/1712.01208
Sphinx#2092: @ilovescience Before covid it was pretty chill. You just go talk to random people, get coffee, do activities, etc. During wfh, it's a bit less smooth. We just ping people in chat, set up a meeting. It's fairly smooth. The only issue is more about finding out who's working on what. Sometimes the company is too big and it's hard to keep track of what projects are ongoing.
Sphinx#2092: but maybe that's just me because I'm disorganized.
ilovescience#3282: yeah but like why would you talk to random people like that lol
i guess I would think that discussion would stay in groups of similar interests
Sphinx#2092: why not? I talk to you random people lol
ilovescience#3282: hmm it's different in a company than a discord server i would say...
i feel discord servers in general are more conducive to random people having discussions
that's just my opinion though
Sphinx#2092: Ehh Google is pretty chill and people can be social. There's lots of stuff to do as well, and you can meet people with other interests. Pierre and I used to play Pump It Up a lot, for example.
Sphinx#2092: There's also a bunch of random talks happening often, and we get emails about them all the time, so you can just show up and see what other people are working on.
ilovescience#3282: hmmm that sounds pretty fun
ilovescience#3282: my lab has a total of 4 members and I am the only PhD student...
my socialization is mainly discord at this point :berk::goose10:
n.kh.l#5814: I have a large dataset (a few million songs) with genre tags from genius music. Would anyone be interested in working with me to make a model that can generate lyrics using NEO?
chilli#5665: Tbh, I like Facebook’s workplace lol
chilli#5665: A lot more than email threads
Yaroslav Bulatov#3194: Yeah, and Horace He gets tons of likes every time he posts something, I'm getting tips from him how to increase my social standing
EricHallahan#1051: Who is this Horace He guy? He seems to be everywhere. :sus:
ilovescience#3282: looks like it's working, you've got some banger tweets! 😄
bmk#1476: oh no, he's even on the paper! https://cdn.discordapp.com/attachments/729741769738158194/941183792612270120/unknown.png
bmk#1476: who is this guy
ilovescience#3282: why is the paper not on arxiv?
bmk#1476: same vibes https://cdn.discordapp.com/attachments/729741769738158194/941183959788822548/campfire.png
bmk#1476: missed 14:00ET submission deadline to get it finished
ilovescience#3282: so it'll be there tomorrow?
bmk#1476: will be on arxiv soon(tm)
ilovescience#3282: okay
EricHallahan#1051: soon™️
EricHallahan#1051: We are posting it as a prepreprint
ilovescience#3282: this is a very interesting concept lol
EricHallahan#1051: But seriously, it is nice to be able to revise things on our own time and cadence.
EricHallahan#1051: Especially early on.
!!Puffy Bird!!#7496: soon(Tm) indeed
EricHallahan#1051: Also note that I totally don't have an update to the website for the release. Too much effort went into Figure 2 for that to happen. `:)`
!!Puffy Bird!!#7496: bruh
!!Puffy Bird!!#7496: I can see that
!!Puffy Bird!!#7496: thats a lot of lines
EricHallahan#1051: Oh it's all ti*k*z.
ilovescience#3282: yeah that would be weird doing it manually
!!Puffy Bird!!#7496: still though
!!Puffy Bird!!#7496: its a lot of lines
!!Puffy Bird!!#7496: anyways why am I talking about lines lmao
EricHallahan#1051: Oh yeah the NVLink graph is fun.
ilovescience#3282: why are you?
ilovescience#3282: :goose9:
!!Puffy Bird!!#7496: exactly
!!Puffy Bird!!#7496: 😎
voxs#0001: bruh why text based diagram maker
voxs#0001: that is pain
inox#5400: looks so good
inox#5400: zoom in forever
inox#5400: no pdf file size bloat
EricHallahan#1051: Ti*k*Z ist *kein* Zeichenprogramm ("TikZ is *not* a drawing program")
voxs#0001: yeah but there are non text based vector graphics programs
n.kh.l#5814: soooo you interested?
n.kh.l#5814: damn lmfao
wabi-sabi#5811: I was a math major before I studied CS and I thought of learned indices literally the first time I programmed something to do binary search. It made me feel extremely vindicated and annoyed when I found that paper last March; until then I'd felt sure that my "Cardinal Search" method was going to guarantee me an amazingly good publication for an undergrad.
wabi-sabi#5811: If anyone has anything on *guarantees* against catastrophic forgetting in certain regions of the input space based on training points bounding those regions, I'd definitely be interested in reading.
Intuitively, it seems like models should be able to be built such that testing performance on a few samples suffices to prove performance on others will also work well, but I don't know of anything that's gotten there yet. Maybe in the interpretability, safety, or interpolation literature, but I have seen nothing relevant so far. The above approach is trying to get at it by forcing weight updates to have monotonically good behavior, but I think doesn't quite get there yet.
Some Point Process#3793: Would general guarantees (instead of such specific ones) not be good enough?
Some Point Process#3793: against CF*
wabi-sabi#5811: Learned index functions want memorization and so strong guarantees, not just probabilistic success
Some Point Process#3793: Ah, i'll have to take a look at that paper (it's certainly a curious sounding work)
wabi-sabi#5811: Learn a model that takes in search values and spits out CDF values. Then multiply the CDF value by the length of your sorted array. Then take the floor of the result to get an integer telling you where to look to find your value, if it's in the sorted array, in a single jump.
Some Point Process#3793: So some sort of content addressable memory?
wabi-sabi#5811: But you have to train until the model overfits every single point in the array, so it's only good in static contexts
wabi-sabi#5811: Yeah it's like a learned hash function or dictionary with actual numerical meaning to it essentially. You get generalization, sort of, in that predictions for new data points will be close to the correct array location because the model's already learned the shape of the distribution you're sampling from when populating the array
wabi-sabi#5811: But the input is just "25.8" or something and all the model's features are just brute force memorized
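To make the mechanism described above concrete, here is a toy sketch (not the paper's implementation): the "model" is a piecewise-linear fit to the empirical CDF standing in for a small neural net, and, as real learned indexes do, a worst-case error bound over the stored keys defines a small fallback search window.
```py
# Toy learned-index sketch: overfit a model of the CDF of a sorted array,
# predict a position as floor(model(key) * N), then correct with a bounded
# local search sized by the model's worst-case training error.
import numpy as np

rng = np.random.default_rng(0)
data = np.sort(rng.lognormal(size=10_000))
n = len(data)
cdf = np.arange(n) / n                          # empirical CDF targets

# "Model": piecewise-linear fit to the CDF (a stand-in for a small neural net).
knots = np.linspace(data[0], data[-1], 256)
knot_cdf = np.interp(knots, data, cdf)

def predict_position(key):
    return int(np.interp(key, knots, knot_cdf) * n)

# Worst-case prediction error over the stored keys gives the search window.
preds = (np.interp(data, knots, knot_cdf) * n).astype(int)
max_err = int(np.max(np.abs(preds - np.arange(n)))) + 1

def lookup(key):
    guess = predict_position(key)
    lo, hi = max(0, guess - max_err), min(n, guess + max_err)
    return lo + int(np.searchsorted(data[lo:hi], key))

i = 4321
assert lookup(data[i]) == i
```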
Math ap Mathonwy#7453: Oh wow, congratulations on Neox-20B
dollmath#2898: hi has anyone tried training a generative model on .psd or .tiff files to teach it how to paint by using layers etc.?
tpapp157#3643: There have been papers over the years to train generative models on vector graphics or sequential brush strokes, but nothing off the top of my head dealing with many output layers. No reason off the top of my head why it wouldn't be possible though if you had the dataset for it.
tpapp157#3643: There is an interesting approach for generating images from tokens, where each token generates a separate RGBA output and then you blend across the outputs with a per pixel softmax on the A channel. The technique works well but is computationally expensive compared to alternatives.
Sid#2121: did you modify gpt-j by taking a third of the parameters out :berk:
anhoang#4274: Sorry newbie here. Are there models that take variable number of inputs and output one output? Think Airbnb property description generation from multiple images
StellaAthena#3530: Welcome! This is a discord server that caters to researchers. While newbies are welcome to hang out and lurk, your question is better suited for some of the servers in #communities that are aimed more towards less experienced people.
AI_WAIFU#2844: For transformers what are the typical resource bottlenecks for throughput optimized sampling? With minimum latency sampling it all boils down to memory bandwidth and how fast you can do collective ops, but whats the best strategy for maximizing sampling throughput with transformers and what becomes the limiting factor? I'd imagine that a sort of batching or batching + pipeline strategy is optimal, but in that case what becomes the bottleneck? Is it still memory bandwidth or something else?
StellaAthena#3530: Pipelining is a huge one. I don’t have numbers for transformers specifically on hand but PipeDream is something like twice as efficient as GPipe, which in turn is a multiplicative factor better than naive pipelining
kindiana#1016: The bottleneck for throughput optimized sampling is very similar to training
kindiana#1016: So all the regular mp/dp methods work just as well/poorly
kindiana#1016: Latency optimization is the more interesting topic
AI_WAIFU#2844: Are there any issues with storing/shuffling past activations? I'd imagine you need much bigger batch sizes to saturate compute.
kindiana#1016: If you can hold it when training you can hold it for inferencing
AI_WAIFU#2844: Sure but take the FF net for instance, for the same amount of activations you're only running it once instead of 2000 times
AI_WAIFU#2844: and it's similar for the attention
AI_WAIFU#2844: so for a given amount of compute you need a batch size that's 1000-2000 times larger
AI_WAIFU#2844: maybe that's not entirely true but you get the point
kindiana#1016: yeah thats true
kindiana#1016: you need batch size O(100) to saturate compute
kindiana#1016: so you are good if you can fit a batch of about 200k on each gpu
kurumuz#5695: well MP between a lot of nodes will help with that
AI_WAIFU#2844: so for a gpt-3 size model 2048 context, ~12000 width and 100 bs that's...
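Filling in that arithmetic as a rough sketch, assuming GPT-3-like dimensions (96 layers, d_model 12288 rather than the rounded 12000, fp16 cache); the exact figures depend on the real config:
```py
# Back-of-the-envelope KV-cache numbers for throughput-oriented sampling.
n_layers, d_model, ctx, batch, bytes_per_val = 96, 12288, 2048, 100, 2

# Total cache held in memory across all layers (K and V per token per sequence).
kv_cache_bytes = 2 * n_layers * ctx * d_model * batch * bytes_per_val
print(f"KV cache: {kv_cache_bytes / 1e9:.0f} GB")          # ~966 GB

# Per-layer memory traffic per decode step: cached K/V vs. the layer weights
# (the 2 * ctx * batch * d_model vs ~12 * d_model**2 comparison).
kv_traffic = 2 * ctx * batch * d_model
weight_traffic = 12 * d_model ** 2
print(f"per-layer KV reads / weight reads: {kv_traffic / weight_traffic:.1f}x")  # ~2.8x
```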
kindiana#1016: pipelining doesn't really help because you gotta keep all the activations around somewhere still
kurumuz#5695: pipelining is not all that time efficient either?
kurumuz#5695: model parallelism is
kindiana#1016: well
AI_WAIFU#2844: it lets you split the layers up at least
kurumuz#5695: if you have a really good interconnect
kindiana#1016: its throughput here
kurumuz#5695: ah yeah
kurumuz#5695: but i care about latency as well
kindiana#1016: @AI_WAIFU doesn't tho ;P
AI_WAIFU#2844: I'm asking specifically about maximizing throughput.
kurumuz#5695: well MP does both :berk:
kurumuz#5695: you can tradeoff between throughput and latency with MP.
kindiana#1016: there's theoretically no tradeoff of throughput with mp
kurumuz#5695: yeah, there is of latency
kurumuz#5695: well this is more how you construct your serving/queue
kindiana#1016: as in, more mp should theoretically just be a free latency win with no throughput hit
kurumuz#5695: so you can keep the latencies same to the end user but boost throughput
kurumuz#5695: as you process more requests per time scale
kurumuz#5695: yeah
AI_WAIFU#2844: I guess the real bitch is the attention because I think you need to cache the attention matrix
kindiana#1016: do you?
kindiana#1016: you just need to cache kv
kurumuz#5695: it's not that much of a bitch
kurumuz#5695: you can cache the heads/layers you are responsible of on the GPU locally
kurumuz#5695: its not something that requires communication right :thonk:
AI_WAIFU#2844: You're right. I think that's where the problem shows up tho, because now you've gotta pull in all those kv's from memory. That has to be memory bandwidth bound right?
AI_WAIFU#2844: unless all the past kvs are the same you can't reuse them for multiple q's in parallel
kindiana#1016: you need to pull 2 * ctx * batch * d_model for attn, whereas you are pulling d_model**2 * 12 weights
kindiana#1016: so exactly how memory bound depends on your batch and model size
AI_WAIFU#2844: so once 2 * ctx * batch > d_model, memory bandwidth for that operation becomes the binding constraint
AI_WAIFU#2844: Is there a way around that?
kindiana#1016: don't think so 🤔
kindiana#1016: well
kindiana#1016: you can change your architecture
kindiana#1016: https://arxiv.org/abs/1911.02150
kindiana#1016: (used in alphacode)
kindiana#1016: gets you a constant factor there
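Roughly, the trick in that paper (multi-query attention) is to share a single key/value head across all query heads, which shrinks the per-token KV cache and its memory traffic by a factor of n_heads. A shape-level sketch (NumPy only, no causal mask):
```py
# Multi-query attention sketch: per-head queries, one shared K/V head, so the
# per-token cache is 2 * d_head instead of 2 * n_heads * d_head.
import numpy as np

batch, seq, n_heads, d_head = 2, 16, 12, 64
q = np.random.randn(batch, n_heads, seq, d_head)   # per-head queries
k = np.random.randn(batch, seq, d_head)            # single shared key head
v = np.random.randn(batch, seq, d_head)            # single shared value head

scores = np.einsum("bhqd,bkd->bhqk", q, k) / np.sqrt(d_head)
weights = np.exp(scores - scores.max(-1, keepdims=True))
weights /= weights.sum(-1, keepdims=True)           # softmax over keys
out = np.einsum("bhqk,bkd->bhqd", weights, v)
print(out.shape)                                     # (batch, n_heads, seq, d_head)
```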
AI_WAIFU#2844: That's what I was gonna get at, but I wanted to make sure I didn't miss anything...
kindiana#1016: I think there's actually quite a few architectural modifications which would help this lol
AI_WAIFU#2844: > incremental inference (where such parallelization is impossible) is often slow, due to the memory-bandwidth cost of repeatedly loading the large "keys" and "values" tensors.
Nice.
MicPie#9427: can you explain that in more detail (or is this covered in the linked paper)?
kindiana#1016: kv caching is seen in almost every transformer inference implementation
kindiana#1016: but iirc the paper should also explain it
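For reference, the standard trick being described, sketched with the HuggingFace transformers API (GPT-2 chosen just because it's small): each step feeds only the newest token plus the cached keys/values from earlier steps.
```py
# Incremental decoding with a KV cache: only the new token is processed each
# step; past keys/values are reused via past_key_values.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The bottleneck for sampling is", return_tensors="pt").input_ids
past = None
with torch.no_grad():
    for _ in range(20):
        out = model(ids if past is None else ids[:, -1:],
                    past_key_values=past, use_cache=True)
        past = out.past_key_values                         # cached K/V for all layers
        next_id = out.logits[:, -1].argmax(-1, keepdim=True)  # greedy decode
        ids = torch.cat([ids, next_id], dim=-1)
print(tok.decode(ids[0]))
```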
butterbeer#3312: Hi, I was trying to train the 6-7B gpt-neox model with 2 GPUs of 32 GB each. But I am getting `CUDA out of memory` error. How do you think I can resolve this?
Teemochu#8740: I think @Kharr has done something that might help? That's a bit tight to finetune 6B on without offloading some stuff to CPU.
Teemochu#8740: (Sorry if I pinged the wrong person)
MicPie#9427: You could try to apply this approach: https://huggingface.co/hivemind/gpt-j-6B-8bit
butterbeer#3312: What hardware config would you recommend?
butterbeer#3312: Also, by finetune, do you mean I can find some starter trained weights anywhere?
nev#4905: this is very similar to el-attention with regards to memory use, and you can actually rewrite any attention layer into a swh
kindiana#1016: yeah, I think thats mostly applicable to encoder-decoder though
kindiana#1016: the max speedup for decoder only isn't too big iirc
nev#4905: it helps remove the memory bottleneck where that matters
StellaAthena#3530: Memory usage during training is approximately four times what it is during inference. A 6.7B model requires around 87 GB of memory to train. If you’re trying to train the largest model that fits on your hardware, it should be in the mid 4 billion parameters.
butterbeer#3312: Okay, thanks a lot for the info
Louis#0144: *anger goose* https://cdn.discordapp.com/attachments/729741769738158194/941893227131330590/Screen_Shot_2022-02-11_at_10.07.15_PM.png
Louis#0144: whatever its 10pm anyway
Louis#0144: im done for the night
Louis#0144: :berk:
Kia#2550: go to bed
Sid#2121: Completely orthogonal techniques
baldbaliff#2861: Hey I just wanted to say sorry that I had my mic on before joining. I usually don't call on discord.
Sid#2121: @Deleted User just realized you were probably talking about zero-3, which isn't orthogonal (i'm generally only using 1 or 2 so that slipped my mind). I think it's a bit more communication intensive than just 3d parallel, and so you need really fast inter device communication for it to be worth it over 3d parallel.
Sid#2121: yep
Sid#2121: although not with pipeline parallel
Sid#2121: so you can do tensor parallel with zero 2
Sid#2121: or pipe + tensor parallel with zero 1
Sid#2121: which is generally what we use here
Sid#2121: I'm not sure that there's any theoretical barriers to using pp + tp + zero 2, but it just doesn't work with the current implementation
chilli#5665: But zero 2 is not that good with gas, right?
Sid#2121: wdym?
chilli#5665: Since the advantage of gas is you get to amortize your gradient updates over multiple steps
chilli#5665: Like, you can communicate your gradient updates only once for each N steps
chilli#5665: But with zero-2, the point is that you communicate your gradients immediately so you can free them from memory
chilli#5665: So you lose the benefits of gas
Sid#2121: yep. I think the deepspeed team introduced some other tricks that overlap computation with communication in zero 2 but not sure how effective they are, since i haven't really used it much.
StellaAthena#3530: I get the impression that they haven't either. I haven't read the "zero-infinity" stuff at all but the OG zero3 seemed less like "the next step after zero2" and more like "a sideways step and then a step forward from zero2" when I first read it
StellaAthena#3530: There’s a **workshop at ACL on “Challenges & Perspectives in Creating Large Language Models”** that’s a phenomenal venue for a lot of the work that gets shared in this server but never captured and widely shared to get published and get seen by major figures in the field. **They explicitly welcome empirical results and the sharing of best practices.**
I know that a lot of people who hang out here don’t have traditional academic backgrounds, but this is a great way to dip your toes into publishing. In addition to accepting “traditional length” 8 page papers they also accept shorter, 4 page papers.
I would be more than happy to provide assistance to anyone interested in getting advice or feedback on their research design or paper plans. EleutherAI and CoreWeave would also be more than happy to work with you to source compute to run more formal and systematic experiments if you lack the compute resources to do so on your own. There’s so much great work that gets shared here and never with the wider world that I think is a huge shame. If you are reading this and going “I have some thoughts and experiments, but I’m not a *real* researcher” or “this probably isn’t addressed to me because I’m not a member of EleutherAI,” you are incorrect. This is addressed to you, and your experiments very likely are interesting enough 🙂
**Deadline:** Feb 28th
**Link:** https://bigscience.notion.site/Episode-5-Challenges-Perspectives-in-Creating-Large-Language-Models-c1c6ca8665ac4c35afa7531f128ff02e
StellaAthena#3530: The call for papers reads:
>>> 2 years after the appearance of GPT-3, large language models seem to have taken over NLP. Their capabilities, limitations, societal impact and the potential new applications they unlocked have been discussed and debated at length. A handful of replication studies have been published since then, confirming some of the initial findings and discovering new limitations.
This workshop aims to gather researchers and practitioners involved in the creation of these models in order to:
1. Share ideas on the next directions of research in this field, including – but not limited to – grounding, multi-modal models, continuous updates and reasoning capabilities.
2. Share best-practices, brainstorm solutions to identified limitations and discuss challenges, such as:
- **Infrastructure.** What are the infrastructure and software challenges involved in scaling models to billions or trillions of parameters, and deploying training and inference on distributed servers when each model replicas is itself larger than a single node capacity?
- **Data.** While the self-supervised setting dispenses with human annotation, the importance of cleaning, filtering and the bias and limitation in existing or reported corpora has become more and more apparent over the last years.
- **Ethical & Legal frameworks.** What type of data can/should be used, what type of access should be provided, what filters are or should be necessary?
- **Evaluation.** Investigating the diversity of intrinsic and extrinsic evaluation measures, how do they correlate and how the performances of a very large pretrained language model should be evaluated.
- **Training efficiency.** Discussing the practical scaling approaches, practical questions around large scale training hyper-parameters and early-stopping conditions. Discussing measures to reduce the associated energy consumption.
StellaAthena#3530: I’ve been kicking around a couple ideas for papers, so if you want to get involved but haven’t been working on anything recently I’m sure we can find things for you to get involved with. Some of what I’ve been chatting about putting together a paper on include:
1. A subset of the results @igoro and I presented at #interpretability-reading-group yesterday, especially those related to different variations on the logit lens idea
2. @janus posted some exceptionally cool plots showing discontinuity in memorization as the model size scales in #interp-archive
3. @Aric, @nostalgebraist, myself, and a couple others have noticed qualitative difference in how GPT-Neo and GPT-J interact with the residual stream as analyzed by work like ROME and the Logit Lens. If someone wants to sit down and analyze the FairSeq models, I think there’s a good chance of learning something interesting.
4. @65536william and I have been chatting about the role of data *formatting* in pretraining, especially the way that we preprocessed Stack Exchange in the Pile. The way SE was preprocessed (and its very presence) was extremely evident in some experiments I've done with code generation, as shown in the image below (Paper link: https://arxiv.org/abs/2201.07406) https://cdn.discordapp.com/attachments/729741769738158194/942531778311225424/IMG_9040.png
janus#0150: 💯 it would be great to get some of these ideas out there. The presentations yesterday were awesome and if not a conference paper I'd love to see them written up and put on LW/AF. (Although your results are so surprising Stella I think we should try to confirm them with more models/experiments before we draw final conclusions). Most of the value of research comes from getting other people's gears turning about how to extend it or interpret their work in new ways.
DAL59#8318: Is EleutherAI planning a competitor to DALL-E or GLIDE?
StellaAthena#3530: This is currently in the works.
ilovescience#3282: Something separate from work in the DALL-E server?
dmayhem93#3202: Has anyone done any logit lens stuff on vision transformers?
StellaAthena#3530: I believe so, but @alstroemeria313 is a better person to provide context and further info
EricHallahan#1051: Whatever that is going on in the DALL-E server not directly associated with EleutherAI (at least today).
ilovescience#3282: There's definitely overlap in active folks though
StellaAthena#3530: Not that I am aware of, but i think that would be pretty interesting to do! I don't know a huge amount about vision transformers… are there good pretrained ones out there?
dmayhem93#3202: I think so, I don't know of any off the top of my head that aren't CLIP or CIFAR10/100, let me see...
cfoster0#4356: Didn't google release a whole bunch of ViT checkpoints? Or did I imagine that
dmayhem93#3202: https://github.com/google-research/vision_transformer no you're right :hap:
EricHallahan#1051: My point is that we are not directly sanctioning those projects. They are an independent group and their research agenda is not associated and coordinated with ours.
StellaAthena#3530: ViTs aren’t that expensive to train, right? Like, we could take LAION-400M and train one pretty easily AFAIK? Assuming the codebase is easily parallelizable
EricHallahan#1051: Depends upon what you are trying to accomplish by training the model.
dmayhem93#3202: Well you guys did 20B so pretty easily is a bit relative...
cfoster0#4356: Tho idk "logit lens" will be a bit different here since these aren't AR models
EricHallahan#1051: ViT isn't an objective or methodology but a model architecture by my understanding.
dmayhem93#3202: Well, aren't the google ones with a CLS token at the end? Should be similar no?
StellaAthena#3530: Why would it be? The logit lens doesn’t make a whole lot of sense to apply to BERT AFAIK
janus#0150: It mostly comes down to engineering time. I'm amazed how talented people are here, but they have too few hands... I'm a strong advocate for genetic engineering to give people at least 4 hands each, ideally 6+.
janus#0150: Obviously we'll need to develop new keyboards.
cfoster0#4356: Logit lens makes sense for interpreting a model that has a residual stream that converts directly into logits over discrete predictions
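For anyone following along, the logit-lens operation under discussion is roughly: decode each layer's residual-stream state through the model's final layer norm and unembedding. A sketch against a HuggingFace GPT-2 (chosen only because it's small):
```py
# Logit-lens sketch: project every intermediate residual-stream state through
# the final layer norm + unembedding and look at the top prediction.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The Eiffel Tower is located in", return_tensors="pt").input_ids
with torch.no_grad():
    hidden = model(ids, output_hidden_states=True).hidden_states  # n_layers + 1 states
    for layer, h in enumerate(hidden):
        logits = model.lm_head(model.transformer.ln_f(h[:, -1]))  # decode last position
        print(layer, tok.decode(logits.argmax(-1)))
```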
StellaAthena#3530: ^^ That’s why I asked about a scalable codebase. If you have code that just works out of the box on several interconnected 8xA100 machines that’s one thing. If we need to make the code runnable in parallel that’s very different.
EricHallahan#1051: I think the question is moot since your objective is underdefined.
dmayhem93#3202: Well, it's conditioning a CLS token over all previous image tokens, so from that perspective I think it's more similar to GPT than BERT. I also haven't thought about this more than 10 minutes so maybe I should think it over a bit more
EricHallahan#1051: Because that directly influences the amount of engineering time you will need to complete the task.
StellaAthena#3530: That sounds a hell of a lot like finetuning BERT with a classification head though?
StellaAthena#3530: IDK. I’ve thought about it for less than 10 minutes. But tl;dr go think about it and come back with a concrete pitch and we’ll see what we can do about providing compute
Kia#2550: Owww Exciting
EricHallahan#1051: Like I know for a fact that the CLIP LM does not exhibit AR model-like behavior when it comes to ROME.
EricHallahan#1051: Because I have personally tried that.
EricHallahan#1051: I have no idea how the Vision model interacts though.
dmayhem93#3202: Interesting, yeah I'll come back to it, but it sounds like only GPT AR models have been done? Maybe an easier start would be a BERT classifier
StellaAthena#3530: This sounds like a good place to start. Were you at the interpretability RG yesterday? @igoro and I talked about a bunch of investigations we’ve been doing, including ones into improving the logit lens
dmayhem93#3202: I got there late so I missed some of it, but I did get to see all the finetuned linear layer parts, those were some really awesome charts
OccultSage#3875: 1+ year of prep work, $2m in hardware.
makya#2148: Damn. Didn't know it was that expensive. Well done anyway. Yous worked hard.
StellaAthena#3530: 12 8xA100s, each of which is in the neighborhood of 120k. Plus “NVLINK interconnect,” which is an upgrade of the “base” version that requires installing some more hardware.
StellaAthena#3530: Plus some misc. costs associated with running said 12 8xA100 devices 24-7 for three months
random person#5234: Whats the wall clock? Around 3 months?
EricHallahan#1051: Actually the servers that we ended up using are not upgraded but come from the factory that way. The 8 GPUs per node are actually on a baseboard with the 6 NVSwitch.
StellaAthena#3530: 1830 hours for the actual training, plus an additional 60%-ish for evaluation, scaling, and testing
EricHallahan#1051: The PLXs and HCAs are also on their own board too separate from the mainboard with the 2 CPUs.
StellaAthena#3530: I meant that the 120k number was for the “base” version of the 8xA100 and that we needed something fancier. Not that CW tore the PCIE out of the baseboard lol
EricHallahan#1051: Fair, but from a casual read that wasn't immediately obvious.
random person#5234: So about half a mil to 120k in compute
random person#5234: Honestly the hardware knowledge of a lot of people here impresses me. I know a bunch of cs people who are not as in tune with ML hardware usually.
StellaAthena#3530: 12 x 120k = 1.44M, so I’m really not sure where you’re getting these numbers from
EricHallahan#1051: It took us a good couple weeks to come to terms with it.
Kia#2550: You plan/taught of training ViT's models before? (Seems interesting tho)
random person#5234: In terms of renting those compute hours
random person#5234: I said 120k as a lower bound since some corps get massive discount renting on 3 year leases.
StellaAthena#3530: Ah
EricHallahan#1051: (I think the confusion here came from the ordering of the numbers. It would have been a lot more clear if they were the other way around.)
StellaAthena#3530: My math says it would be 262k on AWS, even with the 3 year discount
random person#5234: Yea AWS big expensive
StellaAthena#3530: CW is $1/hour/A100
StellaAthena#3530: Which comes out to 175k
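For reference, a rough back-of-the-envelope check of that figure, assuming the 96 A100s mentioned a bit later in the thread and the 1830-hour training time above:
```python
# Rough sanity check of the ~175k figure; 96 A100s and 1830 hours come from this thread.
gpu_hours = 96 * 1830          # GPUs * wall-clock hours of the actual training run
cost = gpu_hours * 1.0         # at $1/hour/A100
print(cost)                    # 175680.0 -> ~$175k (the ~60% extra for eval/scaling/testing is on top)
```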
random person#5234: Honestly I expected more A100s used
StellaAthena#3530: (Not criticizing you, just doing some calcs and sharing them since I know this interests a lot of people)
|
StellaAthena#3530: You may have noticed that there is a global GPU shortage xD
EricHallahan#1051: The market makes that extremely difficult.
random person#5234: Understandable!
StellaAthena#3530: We would have used more than 96 A100s if CW was able to *find* more than that. They put a lot of work into sourcing pieces
EricHallahan#1051: Shortages and lead times probably cost us something like 6 months on this project.
EricHallahan#1051: But you can't control the supply chain.
random person#5234: Not at all
random person#5234: I was hoping to get my hand on a virtual 80gb 500w A100 to play with
𓅬 gabriel_syme 𓅬#3220: I always forget how much prepping and evaluating takes lol
𓅬 gabriel_syme 𓅬#3220: it'd be fun if papers included that number as well next to training (not sure if ppl do that, I usually assume it's only training time they share)
EricHallahan#1051: I want to say that there was at least two weeks straight of prep before we fired off the run.
StellaAthena#3530: We shared both numbers in the paper!
EricHallahan#1051: And if you included the many months of development and testing before that you can include many more.
EricHallahan#1051: No cash changed hands.
OccultSage#3875: Higher. These are pricing for non-NVLINK, non-Infiniband nodes. https://cdn.discordapp.com/attachments/729741769738158194/942602127631613982/unknown.png
cfoster0#4356: Yes. It should really be called activation checkpointing IMO, or maybe rematerialization
chilli#5665: I don't think it makes any sense to rematerialize gradients
chilli#5665: But yeah, I agree with @cfoster0 that rematerialization is probably the best terminology
chilli#5665: I also like recomputation, but don't think that's standard
cfoster0#4356: Recomputation is even clearer ya
|
chilli#5665: I'm saying rematerializing gradients doesn't make sense, not that rematerializing activations doesn't make sense
chilli#5665: Yeah
chilli#5665: And actually, I take that back :P nobody is currently rematerializing gradients, but I could imagine schemes where that makes sense
chilli#5665: If you're interested in activation rematerializarion, you might be interested in this: https://dev-discuss.pytorch.org/t/min-cut-optimal-recomputation-i-e-activation-checkpointing-with-aotautograd/467
chilli#5665: In many ways I think this is the ideal scheme
chilli#5665: Although what it's optimizing for might not be what you want for say, training gpt3
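For anyone following along, a minimal sketch of plain activation checkpointing / rematerialization in PyTorch (module names are illustrative; the min-cut pass linked above is much more sophisticated than this):
```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

class CheckpointedBlock(nn.Module):
    """Feed-forward block whose intermediate activations are recomputed in backward."""
    def __init__(self, dim):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        # checkpoint() skips saving the block's intermediates and re-runs self.ff
        # during the backward pass to rematerialize them (compute traded for memory).
        return checkpoint(self.ff, x)

x = torch.randn(8, 512, requires_grad=True)
CheckpointedBlock(512)(x).sum().backward()
```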
aٴ#8803: For a time series analysis problem wouldn't it make sense to remove the batching dimension or just set it to one? Like how does pytorch even handle batching for 1D CNN models or LSTMs?
Let's say hypothetically you had a batch size of 64 and your train set has 10k entries, would the train set be divided into batches like the first 64 is batch 1, 64-128 is batch 2, 128-192 is batch 3 and so on?
aٴ#8803: Yeah that's what I want but my concern is the discontinuity between the starts and ends of each batch. Also how do I code my training loop so that batching is done as we both described, in chronological order?
Kazumi#1297: I have a feeling you're confusing batch size with sequence length
aٴ#8803: like if pytorch applies the convolution operations on the first batch (0-63) wouldn't it have to start late and stop early and then the continuity between batches would be somewhat disrupted
aٴ#8803: how so?
aٴ#8803: in my case the sequence length is just the length of the training set
aٴ#8803: so should I just not use batches then or continue as the way I'm doing it and not worry?
nshepperd#2316: if you're doing second order derivative stuff it probably makes sense
Kazumi#1297: the input shape should be (batchsize, sequence_length, *data_dimensions)
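To make that shape concrete, here is a minimal sketch (hypothetical names) of chronological sliding windows: each batch element is one contiguous window, so a 1D convolution only ever sees a single window and never spans a batch boundary.
```python
import torch
from torch.utils.data import Dataset, DataLoader

class WindowDataset(Dataset):
    """Fixed-length chronological windows over a (T, n_features) series."""
    def __init__(self, series, window=128):
        self.series = series
        self.window = window

    def __len__(self):
        return len(self.series) - self.window

    def __getitem__(self, i):
        x = self.series[i : i + self.window]   # (window, n_features)
        y = self.series[i + self.window]       # next-step target, as an example
        return x, y

series = torch.randn(10_000, 1)
loader = DataLoader(WindowDataset(series), batch_size=64, shuffle=False)  # shuffle=False keeps time order
xb, yb = next(iter(loader))   # xb: (64, 128, 1) -> matches (batch, seq_len, features)
```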
nshepperd#2316: and that's when you need a graph based rematerialization optimizer instead of hacks specific to computing gradients
chilli#5665: Yes, since then your gradients are your activations
chilli#5665: Lol
nshepperd#2316: ye
|
chilli#5665: Not sure if my approach falls under the first or the second 🤔
nshepperd#2316: when you're doing quasi-newton methods, for one thing
chilli#5665: I guess.... first
nshepperd#2316: gradient penalty for gans uses second derivatives too
nshepperd#2316: idk probably other times as well
nshepperd#2316: adam does not use second derivatives
chilli#5665: I think it does make sense to make a general graph rematerialization pass though
chilli#5665: Probably wouldn't be too hard to extend my idea to that
chilli#5665: But the current setup has nicer properties
chilli#5665: i.e. it doesn't require the whole graph
Kazumi#1297: is momentum in gradient updates an artificial way of making a second derivative?
nshepperd#2316: not really, it doesn't have much connection i think
nshepperd#2316: imo momentum is mainly a way of smoothing out gradient updates, intuitively
Kazumi#1297: momentum is a derivative of position, and position here is a first derivative
nshepperd#2316: it is a second derivative in a sense that is unrelated to the actual second derivative of the loss function
nshepperd#2316: with momentum you take the gradient of the loss
nshepperd#2316: and you use it to update the momentum
Kazumi#1297: I think it's a way of estimating a second derivative, it's just jumbled together with the first derivative
nshepperd#2316: and then you use the momentum to update the params
nshepperd#2316: it's not estimating a second derivative, it's simply *using* the grads as one
|
nshepperd#2316: like it's equivalent to a system where the grads apply a force to the params and there is friction
nshepperd#2316: but
nshepperd#2316: the second derivative of the loss function tells you how much curvature there is
nshepperd#2316: like how the gradient changes as you move the params around
nshepperd#2316: the idea of momentum is to average out those gradient changes
Kazumi#1297: huh, why's the element wise square of the gradient used?
nshepperd#2316: so maybe you could make some sort of curvature estimate by comparing the variance of the momentum to the variance of the grads or something
nshepperd#2316: that's like... they estimate the average size of the gradient for each parameter
nshepperd#2316: and then normalize the param updates by that
Kazumi#1297: oh, it's to get the absolute size
Kazumi#1297: yeah
nshepperd#2316: yeah
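Toy versions of the two updates being contrasted above (a sketch, not any library's exact code; bias correction omitted): momentum keeps a running average of the gradient itself, while Adam additionally keeps a running average of the elementwise *squared* gradient and uses its square root to normalize the step.
```python
import torch

def momentum_step(p, grad, buf, lr=1e-2, mu=0.9):
    buf.mul_(mu).add_(grad)              # smooth/accumulate gradients ("velocity")
    p.sub_(lr * buf)                     # the momentum, not the raw grad, moves the params
    return p, buf

def adam_like_step(p, grad, m, v, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m.mul_(b1).add_((1 - b1) * grad)         # EMA of the gradient
    v.mul_(b2).add_((1 - b2) * grad * grad)  # EMA of the elementwise squared gradient
    p.sub_(lr * m / (v.sqrt() + eps))        # normalize the step by the typical gradient size
    return p, m, v
```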
Maxime#0993: Is there any model that runs on AMD gpu... using for exemple tensorflow-directml ?
Maxime#0993: Or ROCm...
swcrazyfan#2478: Is there a Keras/.h5 version of GPT Neo that exists? I'm experimenting with running it on my MacBook M1, but I can only use my CPU with PyTorch. I'd love to run a TensorFlow version, but I can only find the ckpt files on the-eye.eu
swcrazyfan#2478: Thanks for any help!
𓅬 gabriel_syme 𓅬#3220: Quick question: should I be adding the `[prompt]` and `[layout]` tokens I'm using to the tokenizer? Would it be a problem if I'm finetuning a model
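One common pattern for this with HF `transformers` (a sketch under assumptions about your setup; the model name is just an example, and whether the tokens should be "special" is your call): register the new tokens and resize the embedding matrix so the new ids get rows, which then get trained during finetuning.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

tokenizer.add_special_tokens({"additional_special_tokens": ["[prompt]", "[layout]"]})
model.resize_token_embeddings(len(tokenizer))   # new rows are randomly initialized until finetuned
```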
StellaAthena#3530: You can run the model in pytorch via HuggingFace's `transformers` library: https://huggingface.co/EleutherAI/gpt-neo-125M
baldbaliff#2861: @StellaAthena I am wondering, about the talk Saturday, why you used the slope of the regression line and not the correlation coefficient (I think that's the right word, basically r and r^2) to show it didn't matter. Then for the error loss, have just a mean of the points (because they change so little).
swcrazyfan#2478: Thanks for the suggestion! Yes, I’ve done that, but it only runs on the CPU. Models with TF versions, such as GPT2 or T5, can run on the M1’s GPU. PyTorch doesn’t have M1 GPU support yet.
|
StellaAthena#3530: GPT-Neo was originally trained in mesh tensorflow
StellaAthena#3530: If you want it in TF you can download it from the github repo directly
StellaAthena#3530: (Or the eye, if that's your preference)
aٴ#8803: ```ansi
[1;32mYall tripping
```
Gurkenglas#7362: Have yall tried OpenAIs finetuning API? Would finetuning, on, say, man pages, improve performance at https://vimeo.com/427943407/98fe5258a7 ?
Orz#3023: I've tried fine-tuning (prompt tuning if you wanna call it?)
and yes
performance improved in my case (though arguably it was just 10 examples of prompt tuning so idk if it applies in general.)
But I think it does
StellaAthena#3530: Really interesting plot from https://arxiv.org/abs/2003.07845, comparing how well batch-level statistics approximate full running statistics in CV and NLP https://cdn.discordapp.com/attachments/729741769738158194/943164938850025472/Screen_Shot_2022-02-15_at_10.19.25_AM.png
Louis#0144: does anyone have a good idea how to use stack exchange questions, answers, comments as a (content, critique of that content) dataset
Louis#0144: needed for #contrastive
dollmath#2898: is there anyone working on generating animation by training gans on video? do frame interpolators work this way? all the ai animation right now seems to be sort of a side effect of the image generation process as opposed to actually trained on motion from the video data itself.
Kazumi#1297: I'd imagine it'd be harder to collect good data, like you can't just scrape something like danbooru, I haven't really done much with gan in a year or something, but last I saw they weren't good at free form image generation, only things more defined like only one character and with a relatively neutral pose or something.
I wonder how well some of the large scale GPU clusters would handle it tho, like the one that trains large language models, they should have enough compute to train it in terms of memory and speed
Gurkenglas#7362: i'm not asking whether finetuning on logs of a task improves on a task, i'm asking whether the improvements also apply to tasks related to what one fine-tunes on
|
legendary necromancer#1047: Hey, anyone knows what r the future projects of EleutherAI?
StellaAthena#3530: Have you tried reading any of the channels labeled "project"
legendary necromancer#1047: Do u mean, channels under the section "Projects"?
rb#3159: Yes
Kazumi#1297: there's also
https://github.com/EleutherAI/project-menu/projects/1
StellaAthena#3530: This is more of a sometimes-updating dumping ground for ideas before we forget them, rather than a list of on-going projects. The best list of on-going projects is the channels under the "projects" header in discord
legendary necromancer#1047: I tried to read them, but each has so many msgs, I want to find out the details abt the ongoing projects?
StellaAthena#3530: Every single one of them has pinned messages and/or a channel description with relevant links.
legendary necromancer#1047: Got it, Thanks, can u also help me find if there r any future models for gpt-neox on which EleutherAI is or will be working too.
StellaAthena#3530: No
EricHallahan#1051: (It would be great to get more scoped projects on the website but I am stripped for time right now and it is the least of my worries.)
EricHallahan#1051: https://www.eleuther.ai/faq/
StellaAthena#3530: We just released a model last week and the preprint isn't even on arXiv yet. We don't have a next biggest model training. And even if we did, we have a standing policy of not answering that question
legendary necromancer#1047: Ok, I understand, still Thank You Very Much, for answering my questions 😃
wabi-sabi#5811: Any chance that someone can give me a smart undergraduate friendly explanation of the guarantees offered by: https://openreview.net/forum?id=TNBTpPO0QX
Specifically, I am not understanding why we would want to define models in terms of unique fixed points as we pass through layers. That seems a bit like making an assumption that the manipulation of the manifold should be scale invariant, I guess? Not obvious to me why that should be a good inductive bias, though.
My hope is that this paper might be importantly relevant to memorization guarantees for training learned index models of an array's quantile function, but I don't understand the paper well enough to know if it's worth investing more time into it for the sake of this hope.
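In case it helps build intuition, here is a toy fixed-point ("deep equilibrium"-style) layer, assuming that family is what the paper is working in; real implementations use root solvers and implicit differentiation rather than naive iteration, and the paper's guarantees may depend on conditions this sketch ignores entirely.
```python
import torch
import torch.nn as nn

class ToyDEQ(nn.Module):
    """The layer's output is (approximately) the fixed point z* = f(z*, x)."""
    def __init__(self, dim, n_iter=30):
        super().__init__()
        self.lin_z = nn.Linear(dim, dim)
        self.lin_x = nn.Linear(dim, dim)
        self.n_iter = n_iter

    def f(self, z, x):
        return torch.tanh(self.lin_z(z) + self.lin_x(x))

    def forward(self, x):
        z = torch.zeros_like(x)
        for _ in range(self.n_iter):   # naive iteration; real DEQs solve for z* and differentiate implicitly
            z = self.f(z, x)
        return z

y = ToyDEQ(16)(torch.randn(4, 16))
```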
|
chilli#5665: yeah, the benchmarks are kinda crappy
chilli#5665: afaict
chilli#5665: https://news.ycombinator.com/item?id=30352025
chilli#5665: it really depends on your task
chilli#5665: for common neural networks, you're not likely to see much speedup from Jax vs. pytorch eager
chilli#5665: yeah, his lead performance benchmark is comparing Jax on GPUs (or TPUs?)
chilli#5665: vs. Numpy on CPUs
chilli#5665: lol
Deleted User#0000: feature not a bug
Deleted User#0000: :berk:
chilli#5665: lol
chilli#5665: :thonk:
kurumuz#5695: 6b gptj was slower on same gpu, pytorch eager vs jax
chilli#5665: how much slower are we talking 😛
kurumuz#5695: a lot but i dont remember
kurumuz#5695: i might test again later
kurumuz#5695: this was like when the model first released
chilli#5665: oh, was this HF. PyTorch vs GPT-J
chilli#5665: lol
kurumuz#5695: our implementation
|
kurumuz#5695: i mean it was by hacking the HF class, but at this point it does not look like HF lol
chilli#5665: and was this for inference?
kurumuz#5695: yes
chilli#5665: and by "a lot" do you mean like .... 20-30%?
chilli#5665: or 2x
kurumuz#5695: thisis interesting to me so i will go test it again
chilli#5665: I've done a decent amount of this kind of benchmarking
chilli#5665: and it's usually either no difference/slower
chilli#5665: or 10-20%
chilli#5665: and usually if you use some of the *sick* new compilation APIs it's about the same
chilli#5665: yeah you wouldn't have
kurumuz#5695: why pytorch eager is actually fast
kurumuz#5695: lol
chilli#5665: the APIs I'm referring to are even more experimental than what you're thinking of :^)
kurumuz#5695: isnt all the fancy jax jit stuff should be better
chilli#5665: well, "fancy jax jit stuff" usually == "pointwise operator fusion"
chilli#5665: lol
chilli#5665: oh, and
Deleted User#0000: lol
chilli#5665: "reductions of overhead"
|
chilli#5665: If you simply did those things you'd probably recover 90% of the performance gap between PyTorch and Jax in benchmarks where Jax is faster.
chilli#5665: The problem with PyTorch's compilation APIs is that they make it hard to actually get that "pointwise operator fusion"
chilli#5665: lol
chilli#5665: oh, and I guess rematerialization is also important for pointwise operations in the forwards pass
Deleted User#0000: what keras is still used?
Deleted User#0000: internally?
Deleted User#0000: :brr:
chilli#5665: @Deleted User you didn't answer my MLIR question smh
kurumuz#5695: i never seen anyone use keras seriously
kurumuz#5695: so far
kurumuz#5695: like literally
chilli#5665: I wonder how Google is gonna handle the TF => Jax transition
chilli#5665: lol
chilli#5665: (which kinda seems inevitable to me)
chilli#5665: did you not reply to my comment 🤔
Deleted User#0000: I did not see another mention what was it
Deleted User#0000: 100%, there may not be a single tf user left
nev#4905: :berk: https://cdn.discordapp.com/attachments/729741769738158194/943246485082996786/Time-to-Calculate-Sum-of-Matrix-Powers.svg
nev#4905: oh
chilli#5665: https://discord.com/channels/729741769192767510/730095596861521970/943001208006795294
|
Deleted User#0000: I replied to that
Deleted User#0000: with basically this
chilli#5665: where'd you reply lol
Deleted User#0000: off topic I thought..maybe i didnt send, busy day in the mines :knoht:
chilli#5665: I was just curious about whether there were any concrete projects I could look at other than
chilli#5665: IREE
chilli#5665: and
chilli#5665: the sparsity stuff
Deleted User#0000: not open source I believe sadly
kurumuz#5695: @StellaAthena hey, can you inform me and @Kharr on what you found with the neox parallel ff + attn ablation?
kurumuz#5695: was gptj residual worse?
StellaAthena#3530: We were talking about finetuning his 8-bit model, not anything about this
kurumuz#5695: wut
chilli#5665: any idea about how many folks are working on it? (or is it a secret 🤔 )
kurumuz#5695: me and kharr was curious about the neox gptj residual ablation
kurumuz#5695: not talking about 8bit stuff
StellaAthena#3530: I have not spoken to kharr about residual ablations
StellaAthena#3530: @kindiana and @triggerhappygandi have run some that indicate it doesn't lead to performance loss in < 1B models
chilli#5665: So, pointwise fusion is basically turning things like "x.cos().cos()" into a single GPU kernel. Since `x.cos()` is a memory-bandwidth bound operation, turning it into a single GPU kernel will double the performance.
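A tiny illustration of that in JAX (sketch only; whether and how much it helps depends on shapes and hardware): under `jax.jit`, XLA can fuse the two pointwise ops into one kernel, so the intermediate `cos(x)` never round-trips through GPU memory.
```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.cos(jnp.cos(x))   # two memory-bandwidth-bound pointwise ops

f_jit = jax.jit(f)
x = jnp.ones((4096, 4096))
f_jit(x).block_until_ready()     # first call traces/compiles; later calls run the fused kernel
```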
Deleted User#0000: I think it has critical mass is my point
|
chilli#5665: I say it's "just" pointwise fusion, since pointwise fusion is pretty trivial
chilli#5665: and there's not that much smart things to do there
chilli#5665: lol
chilli#5665: and XLA/any other compiler will likely have similar performance
chilli#5665: the primary thing that PyTorch has sucked at is actually giving the captured graph to backends in a way they can actually compile it
chilli#5665: oh
chilli#5665: like, here's a fun example
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/943249644580913203/image_from_ios.png
chilli#5665: this is from a huggingface transformers model
chilli#5665: all of those add operations are fusible
chilli#5665: lol
cfoster0#4356: Someone find the Aidan Gomez quote, if you've got access. I hadn't heard this before https://twitter.com/JJBalisan/status/1493684154822758402?t=b8h46KCn6Y0QN4hTzFIAGQ&s=19
chilli#5665: the cynical take on this is that it's because their parameter counts are low
chilli#5665: lol
chilli#5665: well, where I think Jax really shines right now is in their compilation APIs
chilli#5665: haha
chilli#5665: like, right now PyTorch doesn't do this well
Kharr#7888: I'm on team 9bit now. Half the quantization error for 1 extra bit!
chilli#5665: so he's likely comparing PyTorch eager against Jax jit
chilli#5665: (which isn't an unfair comparison right now, tbh)
|
chilli#5665: since PyTorch's current compiler APIs are somewhat awkward
chilli#5665: PyTorch's current compiler APIs are also very restricted in what kind of autograd stuff you can compile through
chilli#5665: lol
chilli#5665: yeah, very much so imo
chilli#5665: well, I mean
kurumuz#5695: i still haven’t tried nvfuser
chilli#5665: if we really thought that XLA was the ideal compiler, it wouldn't be that hard to lower PyTorch to XLA on GPU
chilli#5665: lol
kurumuz#5695: and cuda graphs
chilli#5665: The fact that we haven't pushed that hard on lowering PyTorch to XLA on GPU is pretty indicative imo
chilli#5665: (I mean, tbh, i think we should... even if it's just to give folks more compilation options)
kurumuz#5695: yea agree
Deleted User#0000: what are you trying to profile
Deleted User#0000: what is normal search
Deleted User#0000: yes
Deleted User#0000: you are trying to do RL on device right?
Deleted User#0000: read https://arxiv.org/pdf/2104.06272.pdf and look at that codebase, that's how it works best
Deleted User#0000: :knoht: :knoht: :knoht:
Deleted User#0000: basically there is a full framework for optimal on-TPU-device RL (described in the paper, so no secret)
Deleted User#0000: moma it
|
chilli#5665: is moma the internal google search?
chilli#5665: or something
Deleted User#0000: yes
chilli#5665: @ww is also at Google?
chilli#5665: 🤔
Deleted User#0000: but claims to not know xprof :berk:
Deleted User#0000: busted
chilli#5665: yeah, I vaguely remember this from my internship
chilli#5665: lol
Daj#7482: mfw two google engineers discussing how to debug JAX with internal tools in main chat
Daj#7482: lol
Deleted User#0000: that's actually based ok
Deleted User#0000: but read up a little on the available tooling
Deleted User#0000: they own your every thought anyway
Deleted User#0000: here is a protip: when you interview for DM they will look at your *internal* artifacts, not your github
Deleted User#0000: so putting something up internally would be better, helps your cl stats too
Deleted User#0000: + shows python
Deleted User#0000: do a 20% project
Deleted User#0000: do a 120% project
Deleted User#0000: :goose:
|
Deleted User#0000: it's definitely extra
Deleted User#0000: convince your manager you deliver 100% on what you are supposed to be doing and more, then get their blessing to do a small internal extra project. Although I expect that for an L3, a manager will want focus on the tasks at hand..
Deleted User#0000: probably get L4 first then ask lol
Deleted User#0000: how are you going to publish a paper without manager approval, who then will wonder why you are putting up open source code without ..his permission
Deleted User#0000: it sounds weird, I wouldnt really know how to interpret that lol. Would probs still recommend to try internal
Deleted User#0000: either way somehow involves your manager
MaxHager#6351: How complex do you guess a mini implementation (like Karpathy did with minGPT https://github.com/karpathy/minGPT) of the newly released DeepMind AlphaCode https://storage.googleapis.com/deepmind-media/AlphaCode/competition_level_code_generation_with_alphacode.pdf would be? I haven't worked with language models yet but want to start by building a mini implementation of AlphaCode.
Deleted User#0000: I guess I dont know the situation, yeah just think about how to communicate this well
Deleted User#0000: because at some point you will want to tell/should tell for open source..so maybe look around in Brain if there is someone who would want help on a project or so..
zphang#7252: I have no idea how to read xprof
zphang#7252: it's like deciphering runes
Sid#2121: depends what you consider "alphacode" the NN architecture is just an enc-dec transformer, but the sampling stuff might be a bit more complicated, and that's where the main bulk of the improvement is
Raccoon#5999: Hi there! Budding AI writer here. Wandered over from AI Multiverse (formerly the AI Dungeon discord) After someone there used an emoji that I made, but didn't share any other servers with me, and said they found it here.
Raccoon#5999: :paperclop: :paperclop: :paperclop: :paperclop:
Daj#7482: Hey! Sorry for appropriating :paperclop: :berk:
Raccoon#5999: It's fine! I made it for when people talking about theoretical AI needed to describe a cute and cuddly maximizer in emojis.
Daj#7482: Cute and cuddly, yes, that's what we expect the maximizer to be like :harold:
Daj#7482: Thanks for letting us use it :hap:
Raccoon#5999: No problem! It is for public use.
bmk#1476: it's even inspired spinoff animal paperclip emotes
|
Raccoon#5999: You should know though, it was originally made for Optimalverse, where I'm staff, so that's where it came from.
bmk#1476: like paperhonk
bmk#1476: :paperhonk2:
Raccoon#5999: Really? Interesting.
AI_WAIFU#2844: :paperhonk:
Raccoon#5999: Is that for the maximizer that just really annoys a town of people?
EricHallahan#1051: That's \:paperhonk2\:
bmk#1476: :goose16:
Daj#7482: We are well familiar with the Optimalverse haha
Daj#7482: well, ok I read FiO and Caelum Est Conterrens, at least
AI_WAIFU#2844: :goose13::goose14::goose15::goose16::goose7::goose6::goose5::goose5::goose4::goose3::goose2::goose18::goose19::goose9::goose8::gus::gusug::goose16::goose15::goose6:
Raccoon#5999: I run an RP that's becoming a sort of community driven rewrite of the original story.
Daj#7482: lol that's wild
bmk#1476: :goose2: :goose12: :chadgoose:
bmk#1476: :3goose:
Daj#7482: How do you RP a story with a superintelligence running around? :thinkies:
Daj#7482: Never could figure out how to make that work for my D&D campaigns
Raccoon#5999: It's very freeform. Basically, I play the AI and the world and all that, and other people play characters who are reacting to what's going on.
Daj#7482: Ah I see
AI_WAIFU#2844: fascinating
|
Daj#7482: I did once do a horror one shot where the players were caught in the simulation of a mildly unaligned AI trying to simulate utopia but subtly failing haha, but otherwise I've found superintelligences hard to integrate into "human-scale" stories
Raccoon#5999: We do a bit of planning, a bit of discussion to keep it in the realm of hard sci-fi, occasionally I will go ask science people sciencey questions, and we get characters like the tech journalist asking questions or the grassroots activist who has to decide whether to work "with" or "against" the AI.
Daj#7482: neat
Raccoon#5999: With the caveat being that the AI has basically modeled human behavior well enough that it can usually plan for everything they do.
Raccoon#5999: So it's like, the journalist rises in popularity as a major critic of the AI... But if you're paying attention, you realize that this is all according to plan: the AI is doing controlled opposition.
Raccoon#5999: This was a huge problem that I talked about with Iceman for a while when we started it. He had a lot of trouble with that in the original story too. You have to remember that Celestia is this globe-spanning AI with plans that can run on incomprehensible time scales, which makes its plans basically unfathomable to the human mind. The best way to write it, sometimes, is to just have things happen and not really explain how the AI knew it should do that or maybe even how it made something happen, and just have it suddenly be clear that somehow the AI did it.
Raccoon#5999: Like, Right now, there's this thing going on in the story, in a part that set before everything else, at MIT, where they've developed some sort of human-like personality on a computer, and there's all this weird stuff going on that's slowly backing them into a corner on what sort of decisions they can make... And at the very end, when said personality escapes the box, it's basically going to tell them that ||another AI has been manipulating everyone involved The entire time.||
Raccoon#5999: So it comes down to thinking In terms of what the AI's goals are going to dictate, what May need to happen along the way, and sometimes you just have it do a galaxy brain move without fully explaining how it did it.
Daj#7482: I guess as long as the players are enjoying being along for the ride and don't care too much about having much agency, that would be fine
Daj#7482: Sounds like a fun time
Raccoon#5999: As long as you don't have it do anything too unreasonable without explanation, and you keep it well written otherwise, it seems like the AI is unfathomably smart, as opposed to hand waving things.
Raccoon#5999: They have agency in the sense of they can't really stop this thing, but they can control the trajectories of what it's doing, or what parts they get to see.
Daj#7482: Yeah, which is a fine way to design a story
Raccoon#5999: Another example of something that's happened is, there are these protesters that show up early on outside of the headquarters of the company that made Celestia, and she even stops to talk about them for a few paragraphs with one of the characters. The next time the headquarters shows up, a different pair of characters see those protesters, but another group shows up and starts weaving in a bunch of unpopular conspiracy theories, which scares off the first group, and they also notice that people are filming on cameras. Then, it's revealed to yet another character, that these other protesters showed up because Celestia made some fake accounts and quietly poked that various groups that maybe they needed to join the protesters outside of the HQ... Which basically used a little bit of text to massively discredit the opposition movement which actually had a ground game.
Raccoon#5999: But like, when the journalist tries to figure out where those protesters came from, he doesn't find any evidence of this, he finds what seemed like normal accounts on these forums, with plenty of other posts about various conspiracy theories, and they keep saying "hey we should go join these protesters"
Daj#7482: Fun stuff, reminds me of the kinds of games I designed and played when I was younger and had more time haha
Raccoon#5999: To all the AI who are looking, it's obvious what Celestia has done. To a human, it's just not there.
Raccoon#5999: Go on
Daj#7482: A bit #off-topic , but I basically used to run massive online Pen and paper games, one D&D campaign had a peak of 70 (!) players lol
Daj#7482: Everyone living together in a persistent "town" in IRC chat haha
|
Raccoon#5999: Ah
Raccoon#5999: I thought you meant the constant galaxy-brain schemes
Daj#7482: I've done stuff like that in the past, e.g. one villain was actually working with himself from the future and could therefore manipulate events and coordinate incredible "coincidences", but I found I didn't enjoy that style too much usually, I liked the players winning and throwing a spanner in my plans occasionally lol
Raccoon#5999: Yeah, that is an issue.
Raccoon#5999: I find this RP also works really well, simulatively, and generates new ideas in terms of theoretical AI.
Raccoon#5999: Like, one thing that's come up is how much the "friendly" AI has to lie for humans to think of it as being "friendly" and not "eldritch"
Raccoon#5999: Another is the weird religion parallels: Celestia is a "God-Machine" that wants a "personal relationship" with you.
Daj#7482: Yep, ultimately god is a being beyond normal mortal ken and knowing his(/her) mind is not something that will make most people very happy
Daj#7482: I ran into this problem when playing in "normal" fantasy universes where gods exist and intervene. After reading enough alignment/AGI stuff, I just couldn't find a way to write them in a way that doesn't involve them paperclipping their respective universes lol (EY solves this in his latest fanfic)
Daj#7482: Yeah that's what I ended up doing usually when i needed interventionist gods, they were just pretty strong magic users with fancy titles ultimately
Daj#7482: I did prepare but never play a great boss fight where the actual boss glitches out reality and physically knocks me (the game master) out in real life and my players have to solve irl puzzles to find his character sheet and destroy it lol
Raccoon#5999: lol, stuff like that is interesting.
Raccoon#5999: I had a GM pull a thing where the Cheshire Cat was a villain and he pulled out a weapon from a previous campaign we'd been in.
Maxime#0993: is the 3080 12gb enough for good performences on the horni lm model ?
cfoster0#4356: Not sure what you're talking about. Try a different server
Maxime#0993: Its a gp3 like model...
Raccoon#5999: ...and what does it do with a name like "Horni LM"?
Maxime#0993: its using gpt-neo 6GB but finetuned using other text
Maxime#0993: Yes; its the name of the model because its designed for NSFW generation
Maxime#0993: it has been trained on Light novels
|
Raccoon#5999: See, I was about to ask, "What 'other text', because you've begged the question."
Maxime#0993: Ok ... is GPT neo x running fine on a 3080 12gb ?
Raccoon#5999: It should
CarsonPoole#0640: no
Maxime#0993: Ok thanks
Maxime#0993: Oh
Maxime#0993: 😦
Raccoon#5999: I mean, if that's what you've got, then that's what you've got. I'd say try it.
CarsonPoole#0640: it won't fit on the gpu
cfoster0#4356: There's no "gpt neo x" model. If you're talking about the 20B parameter one, definitely no
Maxime#0993: No i have an amd 6800 xt .. but it will not work
Maxime#0993: That's why I need an nvidia card
CarsonPoole#0640: you need a server grade gpu
CarsonPoole#0640: not just nvidia
CarsonPoole#0640: if you don't have one you'll need to rent it on the internet somewhere
Maxime#0993: On the CPU even gpt-2 xl runs
Raccoon#5999: Ah, so it's actually a GPT-3 level one he's talking about. I've run an old AI-Dungeon build on this 1050 here, so I assumed.
Maxime#0993: So even a 3090 won't be enough really ??
CarsonPoole#0640: no
Maxime#0993: omg
|
cfoster0#4356: We can't tell what model you're talking about tbh
Maxime#0993: its a 6B parameter model it should use 16gb of ram/vram
kurumuz#5695: you do not have 16GB of VRAM.
Maxime#0993: But its using a different neural network ... more optmised than gpt3
kurumuz#5695: no it's just GPT-J 6B.
Maxime#0993: Oh
Maxime#0993: yes
Maxime#0993: it is exatly it
kurumuz#5695: that won't run on 12GB VRAM.
Maxime#0993: does it make a difference
Maxime#0993: But I have ram also ?
kurumuz#5695: technically you should be able to split the model with Pipeline Parallelism or MP between CPU <-> GPU, dunno if anyone implemented it though
kurumuz#5695: and it would be very slow
kurumuz#5695: you can use online services for this, ai dungeon etc
kurumuz#5695: tbh
Maxime#0993: It should even run on amd gpu, but rocm doesn't work ... and tensorflow directml isn't compatible because it's tensorflow 1.15, and people who are working on models use CUDA because they're used to it
bmk#1476: this discussion should be taken to another server
kurumuz#5695: doing pipeline on just the GPU should be faster lol
Maxime#0993: And worse of all they ALL use python
kurumuz#5695: loading/offloading is like 6 seconds for half of the model
|
kurumuz#5695: if you optimize it well
Maxime#0993: I don't undersand why not cpp
cfoster0#4356: @Maxime please take your question elsewhere. This isn't the place for tech support questions, much less about models we didn't train
Maxime#0993: 6 sec to generate text is acceptable
Maxime#0993: Ok new question: why don't they use open source libraries? That would allow them to be run on any gpu if powerful enough
Maxime#0993: not only nvidia and tpu
bmk#1476: be the change you wish to see
bmk#1476: it's hard to do, so nobody has done it
Maxime#0993: Ok, I see
Maxime#0993: This is unfortunate, I don't have much time...
Maxime#0993: And not enough money to train this kind of model
Maxime#0993: So its possible
Maxime#0993: I learn a bit of ml in my degree, but like I can make perceptrons... the maths is too hard already ...
Maxime#0993: It's impossible for me to do this kind of stuff
Maxime#0993: But wouldn't that be better for everyone if someone made it using open source libraries
bmk#1476: it's difficult and therefore nobody has done it
Maxime#0993: Ok 😦
Maxime#0993: So it will never be possible to use amd or intel gpu
triggerhappygandi#0001: unless you do it I guess
Maxime#0993: So how long to be able to do this kind of stuff? I have 5 years in programming but with machine learning I don't know anything, and the maths, omg I'm lost at lvl one
|
triggerhappygandi#0001: look at #communities
bmk#1476: this server is not the best place to get beginner advice
Maxime#0993: Ok thanks !
Maxime#0993: I'll try my best since nobody wants to do that
Maxime#0993: See you in 10 years I guess
Maxime#0993: wait cuda is open source
Maxime#0993: ah you cannot see the real code 😦
Eleiber#8347: I tried using GPT-NeoX to complete 2 lines on a paragraph related to assertiveness in Spanish, and it is *surprisingly* coherent. If I saw this on internet I would totally think this was written by a human. And I checked on Google and no, it is not copied from anywhere on the internet.
The output says this:
> Doing this has a significant effect on our emotional and physical health, reducing stress and increasing satisfaction and happiness. In addition, it makes us feel more empathetic and open to others, and favors the expansion of our perception of reality. https://cdn.discordapp.com/attachments/729741769738158194/943317310561722439/unknown.png
Eleiber#8347: Looks like if you put enough coherent input in Spanish, you can get a coherent output
Eleiber#8347: I didn't expect it to work that well in Spanish. I also tried some other inputs and all of them look really coherent.
StellaAthena#3530: @Eleiber That’s dope! Someone said something similar about Russian too
cfoster0#4356: Apparently this was not a typo
StellaAthena#3530: :surprise:
StellaAthena#3530: Their CEO made a pretty :wat: tweet earlier today about how it’s bad for science to say how large the models you serve are. Which could abstractly make sense but sounds a bunch like sour grapes given the context of the company tbh.
cfoster0#4356: Tbh I couldn't find any readily available record of this
𓅬 gabriel_syme 𓅬#3220: my mind read it like this at first
> .. sounds a bunch like sour grapes given the context size of the company tbh.
bmk#1476: https://twitter.com/JJBalisan/status/1493684154822758402
|
bmk#1476: interestingly, this guy has EA and AI Safety in his bio
cfoster0#4356: Ah, here's an unencumbered link, courtesy of @ethan caballero https://archive.is/3sb4F
>>> Gomez declines to comment about the size of Cohere’s language models, however, saying that size doesn't accurately predict how well they'll perform in everyday business tasks.
“These models cost millions and millions to train, and we just keep increasing [their size].” Gomez says. "Getting into a 'largest model battle' isn't a productive direction going forward for the field."
Tau#4010: Just wondering if anyone else had a sudden TPU reset. Eg
> healthDescription: The TPU had a maintenance event at 2022-02-16T00:44:13.871450733Z"
They came back with swapped ip addresses too... really confused me.
janus#0150: Competing with Google et al. on LM scaling is definitely not a productive direction for _Cohere_.
𓅬 gabriel_syme 𓅬#3220: yep I lost a training run ~~just now~~ last night. It happens, not too often but let's say a couple of times per month maybe
janus#0150: but also for our lightcone....
𓅬 gabriel_syme 𓅬#3220: I was thinking the same thing 🙂 it's good if we all agree scale should not be the goal
janus#0150: Meanwhile at Google.... https://cdn.discordapp.com/attachments/729741769738158194/943365291403010059/b7d43f0e282c64ac93e9e47d33134138.png
janus#0150: develop qualitatively novel LM capabilities, utilize a context of millions of tokens, solve the challenging problem of 'planning'....
kurumuz#5695: you dont need to compete at size :berk:
kurumuz#5695: but
kurumuz#5695: at least give me proper benchmarks
kurumuz#5695: its not like they didn't give benchmarks. they were just broken
janus#0150: New business idea: release a bunch of 'new' language models and make researchers pay for your api to benchmark them :think:
Crit#0843: Honest question - with the raw capabilities of GPT-3 (with 4 on the horizon) and similar LLMs like the ones from Eleuther, is there any reason not to use them for text-related use-cases? Like in what context would using smaller models trained on specific tasks make more sense? From my experience GPT and trained Eleuther models beat most fine-tuned smaller models by quite a margin. So would the only factor be price optimization/memory utilization etc?
|
janus#0150: There are various things that can make models better than one another for a particular task, but scale is the biggest factor by far. So yeah basically it just comes down to price and ease of deployment.
Crit#0843: given a situation where you'd have sufficient data for fine-tuning (multiple thousands) vs not enough (around 100-500) it would be better to go with an LLM for the latter right? thats probably the only situation other than price and tech efficiency?
janus#0150: larger models are also more data efficient. Whatever the task you probably just want to take the biggest GPT brain you can and finetune it
Crit#0843: yup makes sense
EricHallahan#1051: Something that I can think of off the top of my head is related to dataset and objective. For example, T5 is popular in industry, but sucks for text generation.
cfoster0#4356: Same for BM25
janus#0150: How do you think T5 would compare to Neo actually applied (say finetuned or filtered) on a given task?
Crit#0843: was T5 supposed to be general purpose?
EricHallahan#1051: T5 is multi-task.
Crit#0843: thats interesting..T5 text gen is worse than base davinci pre instruct?
EricHallahan#1051: T5 was never really supposed to generate text.
EricHallahan#1051: It is far stronger at things like sentiment analysis and stuff like that.
EricHallahan#1051: But that is beyond my area of experience.
Crit#0843: How do think it compares to few shot instruct curie/davinci on that?
Crit#0843: ah okay got it
EricHallahan#1051: Oh it is way cheaper to operate lol
EricHallahan#1051: And lower latency.
Crit#0843: expected as much..what about in terms of accuracy?
EricHallahan#1051: No idea. As I said, outside my area of expertise. ¯\_(ツ)_/¯
EricHallahan#1051: That's another thing where large models will never be able to catch up with smaller ones.
|
EricHallahan#1051: A lot of industry stuff desires low latency.
Crit#0843: sometimes it takes 15 seconds for davinci to generate like less than 100 tokens for me :unhap:
𓅬 gabriel_syme 𓅬#3220: that's what they said
Emad#9608: Cohere frequently notes they have a strategic relationship with GCP and runs on it. Another article today noting dataset sizes for the two main models are 200gb and 3 TB of text: https://venturebeat.com/2021/11/15/openai-rival-cohere-launches-language-model-api/
minhaaj#4955: I recently gave a shoutout on Linkedin to EleutherAI and its wonderful work! Keep killing it guys! https://www.linkedin.com/posts/minhaaj_github-eleutheraigpt-neox-an-implementation-activity-6895326586002821120-gtn0
jordiae#4107: Which is the best data format for retrieval augmented language modeling, both for training and inference? I guess the jsonl-based lm format is not the best choice. I don’t mean the tool for indexing (scann or faiss), I mean the format to store the data itself
StellaAthena#3530: We have found that the lm_dataformat is more functional than fancier things
jordiae#4107: I agree, I have been using it recently and it’s super convenient. But also for retrieval? Because there is no way of indexing
StellaAthena#3530: Oh I missed the word “retrieval” in that sentence xD
StellaAthena#3530: What features would your ideal system support?
jordiae#4107: Embedding, string, and metadata indexing. Like: give me the N documents the most similar to this embedding. Same with string matches, and metadata queries (eg give me all the the C++ documents in the dataset)
rom1504#5008: you usually want to store the id to document mapping in a kv store. If on disk, leveldb (or hdf5 or rocksdb) is good, if in memory redis or memcached are good
that's for large number of documents though (>50M)
for low scale, anything is fine (eg just do a pandas dataframe)
rom1504#5008: embedding are directly encoded in the index and do not need additional storage
rom1504#5008: assuming knn index + metadata
if you're doing classical inverted index kind of retrieval, probably use elastic search
StellaAthena#3530: Yeah your usecase is pretty much completely incompatible with our primary one (loading terabytes of data as quickly as possible)
StellaAthena#3530: Our friends at AI Sweden have released a Swedish BERT-large and announced a 3.5B parameter GPT (though not released it)
|
https://huggingface.co/AI-Nordics/bert-large-swedish-cased
You can apply for access to the GPT model here: https://docs.google.com/forms/d/e/1FAIpQLSfpRREyuPgUy-76_MkT9F7aKUNVmAA99ZhKa7Sg4f--bYReew/viewform
Congrats @Magnus Sahlgren, @Severine, and everyone else in your group
zphang#7252: do folks know a command for rsync-ing onto TPUs? (I know there is a separate one for buckets)
inox#5400: I think it's just rsync?
inox#5400: <https://cloud.google.com/storage/docs/gsutil/commands/rsync>
zphang#7252: yea, that's the one that I know goes to buckets, does it go to TPUs?
zphang#7252: for comparison, here's the scp that goes to TPUs: https://cloud.google.com/sdk/gcloud/reference/compute/scp
zphang#7252: or I guess VMs in general
inox#5400: oh that's right
inox#5400: if you export the ssh config you can use rsync as normal
inox#5400: and anything else that uses ssh
inox#5400: <https://cloud.google.com/sdk/gcloud/reference/alpha/compute/config-ssh>
zphang#7252: I feel like this setup works if you're managing a single VM, but might get trickier with TPUs with multiple workers that also get preempted semi-often
inox#5400: yeah an arbitrarily growing ssh config file is problematic
zphang#7252: I guess an arbitrarily growing gcp-specific ssh config file isn't so bad
inox#5400: there's probably a python library that can parse ssh configs if you wanted to automate managing it
uwu1#4864: you could have each TPU rsync data to itself from you so you don't have to keep track of each ones config
|
inox#5400: you'd put keys that can access your admin machine on TPUs?
zphang#7252: yea I was thinking that another way would be to have some other cheap VM to serve as the proxy for rsyncing things up and down from
uwu1#4864: you could give them a restricted account on the data source machine and stuff. and evil software running on them could still pwn you backwards through the rsync tunnel anyway
zphang#7252: (since they probably can't scp/rsync to my cmputer)
zphang#7252: or set up some crazy two-tunnel setup
inox#5400: there's weird virtual network overlays that are supposed to make this possible but I'm never sure how well they can wrangle the TPU-TPU connections
inox#5400: like <https://tailscale.com/> and <https://github.com/slackhq/nebula>
StellaAthena#3530: Yeah, EleutherAI was supposed to lend some TPUs to some researchers but it’s been delayed while we figure out how to do that without giving them admin access to the GCloud account…
kurumuz#5695: middleware to create TPUs without full access to the google API and just ssh after that?
AI_WAIFU#2844: > create TPUs without full access to the google API
This is the bit we gotta figure out
AI_WAIFU#2844: granted we haven't tried very hard
uwu1#4864: you run a VM with that power and give it a self-serve web page where the person can press "gimme TPU"; it launches an instance with jupyterlab and ssh or whatever and gives them the URL with a long randomly generated token which you put in. If you put the launched ones in a different kube namespace it should also prevent them from seeing each other/the tpu mom vm. you can also use a tool like tilt to allow ppl to sync local files and custom dependencies and stuff. at meta there was a whole team making this infra work though
StellaAthena#3530: Google does not appear to have an out of the box solution for this?
kurumuz#5695: wait isn't it literally on the creation menu
kurumuz#5695: you can literally select which google APIs the TPUVM can access
kurumuz#5695: no?
AI_WAIFU#2844: idk I just use the command line for everything
inox#5400: phonehome SSH tools are not developed enough, botnet operators have really let us down
𓅬 gabriel_syme 𓅬#3220: Yeah also on restarts. Had 3 changes last 3 days but some can stay same for weeks while running smth
|
kurumuz#5695: why do you need the ip
kurumuz#5695: ssh with the gcloud command
kurumuz#5695: with node name
𓅬 gabriel_syme 𓅬#3220: oh I don't, it just comes up when it changes
kurumuz#5695: oh okay
kurumuz#5695: because i was sshing normally before and ip changing was painful
𓅬 gabriel_syme 𓅬#3220: the gcloud command is all you need yea, and a nice ssh flag for the terminal noobs (me)
kurumuz#5695: now i just ssh from the gcloud command
𓅬 gabriel_syme 𓅬#3220: yeah it's nice
𓅬 gabriel_syme 𓅬#3220: that said I've never ran distributed stuff, I'm lucky I guess
𓅬 gabriel_syme 𓅬#3220: oh dang
𓅬 gabriel_syme 𓅬#3220: I use lab to play around
𓅬 gabriel_syme 𓅬#3220: jupyter lab
𓅬 gabriel_syme 𓅬#3220: you can add smth like ` --ssh-flag="-L 5000:localhost:8888"` to your gcloud command and you can access the VM with a notebook at 5000
zphang#7252: rsync :sadge:
kurumuz#5695: write a magic bash function where you retrieve the ip from gcloud command
kurumuz#5695: and run rsync right after
zphang#7252: yea I'm thinking I'm gonna write some ~~bash~~ python tooling
zphang#7252: gotta rsync to all the workers too :sadge:
uwu1#4864: you can have rsync use the gcloud ssh instead of normal ssh by overriding the ssh command it uses with rsync -e
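One possible shape for that Python tooling, assuming `gcloud compute config-ssh` has already been run so the workers have entries in `~/.ssh/config` (the host alias below is hypothetical; loop it over your workers):
```python
import subprocess

def rsync_to_worker(host_alias: str, local_dir: str, remote_dir: str):
    """Rsync a local directory to one TPU VM worker via the generated ssh config."""
    subprocess.run(
        ["rsync", "-avz", "-e", "ssh", local_dir, f"{host_alias}:{remote_dir}"],
        check=True,
    )

# e.g. rsync_to_worker("my-tpu.us-central1-a.my-project", "./src/", "~/src/")
```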
|
swamy12#5306: Hi All,
I am exploring fine-tuning the OpenAI GPT-3 model for a question answering task.
I am a little confused on how the training data will look like.
OpenAI mentions that each example will contain one prompt and completion pair.
`eg: {"prompt": "<prompt text>", "completion": "<ideal generated text>"}`
I am referring this website https://beta.openai.com/docs/guides/fine-tuning
In my case I have the following
`text: Beyonc\u00e9 Giselle Knowles-Carter (/bi\u02d0\u02c8j\u0252nse\u026a/ bee-YON-say) (born September 4, 1981) is an American singer, songwriter, record producer and actress. Born and raised in Houston, Texas, she performed in various singing and dancing competitions as a child, and rose to fame in the late 1990s as lead singer of R&B girl-group Destiny's Child. Managed by her father, Mathew Knowles, the group became one of the world's best-selling girl groups of all time. Their hiatus saw the release of Beyonc\u00e9's debut album, Dangerously in Love (2003), which established her as a solo artist worldwide, earned five Grammy Awards and featured the Billboard Hot 100 number-one singles \"Crazy in Love\" and \"Baby Boy\"."`
`question: What areas did Beyonce compete in when she was growing up?`
`answer: singing and dancing`
How do I restructure this into the prompt and completion pair described above?
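One way to flatten such (context, question, answer) triples into that format, purely as an illustration (the exact prompt template and stop conventions are a design choice, not OpenAI guidance):
```python
import json

def to_record(context: str, question: str, answer: str) -> dict:
    prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
    completion = f" {answer}"   # leading space; pick one consistent stop/separator pattern and keep it
    return {"prompt": prompt, "completion": completion}

with open("train.jsonl", "w") as f:
    f.write(json.dumps(to_record(
        "Beyoncé Giselle Knowles-Carter ... rose to fame in the late 1990s ...",
        "What areas did Beyonce compete in when she was growing up?",
        "singing and dancing",
    )) + "\n")
```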
EricHallahan#1051: I expect that you would receive better assistance with this by asking elsewhere in a community more inline with the OpenAI API and it's usage.
𓆏⸻#2550: https://cdn.discordapp.com/attachments/729741769738158194/943770617243643944/Polish_20220217_012659086.jpg
|
𓆏⸻#2550: leaking the latest research project at eleutherai
bmk#1476: i can neither confirm nor deny whether there is a catgirl research project, but if hypothetically there were such a project, it would be classified, and could not be disclosed
kurumuz#5695: dude let me in
bmk#1476: the risk of :wireheading: is too high
kurumuz#5695: nah only aligned catgirls
bmk#1476: you work on the alignment and we work on the catgirls
kurumuz#5695: why did we switch the roles now
bmk#1476: no reason at all
StellaAthena#3530: This is a weird amount of effort to put into a provably faked screenshot: we did have a project called `deeperspeed` but it was retired before threading was released.
zphang#7252: would that work with multiple workers (nodes) though?
nostalgiahurts#3408: the discord API exposes some information about all channels (last message ID, name, topic, etc), even if a user doesn't have permission to read the channel. so that screenshot probably comes from using an extension that will show that info
Kazumi#1297: it's also pretty easy to make a faked discord UI
nshepperd#2316: ^_^ https://cdn.discordapp.com/attachments/729741769738158194/943870851273609266/2022-02-18-010518_487x671_scrot.png
Kazumi#1297: oh right, you need to go through some hoops to inspect elements in the app now, but it's still possible
nshepperd#2316: oh yeah i forgot that there's a desktop app
𓆏⸻#2550: not fake bro, keep coping
𓆏⸻#2550: you just think its fake because you dont have access.
Audit log > filter by actions > create channel > scroll until you see channel by that name
flotothemoon#1423: I'm writing an **essay on the role of explainability & interpretability in AI today** (and slightly into the future, but not into superhuman territory) - basically questioning the state and goals of some "XAI" and digging into deeper motivations, more appropriate goals, and difficulties yet to overcome.
|
The synopsis is:
*Explainability and interpretability are really about trust since we can't understand sufficiently complex behavior. Trust is based on fulfilling expectations and accountability in case of failed expectations. Therefore, we either make humans expect what AI does, or make AI do what humans expect. As AI is highly variable and human expectations are hard to change, we need to make AI behave as expected. Ensuring AI behaves as expected requires a mix of human-like processing and extensive testing while accountability requires adopting a more holistic view.*
I've got a **second draft** and want to make sure I'm not talking complete nonsense, plus getting other perspectives is always great. So if anyone has some time to skim or review, **critical feedback is appreciated**:
https://matrixmusings.substack.com/p/eccf75fb-fa79-4d5a-b141-f93565d07a6d
(Note: I've read as many useful resources on explainability, interpretability, alignment, etc. as I could find, but if there's anything that makes you go "uh, this guy obviously hasn't read X", feel free to point me to X :))
Deleted User#0000: dont have a lot of time so I just skimmed and 'We live in a complicated world, encompassing diverse and challenging domains ranging from daily social life to legal situations and almost-magical technology' are sentences you can probably cut, could be a lot more succinct overall. 'If I had more time, I would have written a shorter letter' etc
Deleted User#0000: I mean I really only skimmed so dont take this the wrong way but didnt really know what point this essay is trying to make, or if it's basically a listing of concepts. Maybe that's a consequence of the length. 'To start, let’s take an evolutionary view. What makes us want explanations in the first place? How is the act of explaining useful to humans, now and ever?' <- all this prose should go
Deleted User#0000: but im not your audience and I dont know if you get paid by wordcount :goose:
tpapp157#3643: Didn't read your post but it sounds like you're referring to AGI, not just AI. Superhuman AI has been around for decades across a wide variety of domains and more tasks cross that threshold every year. It may seem like a minor distinction but one of the primary issues in the current state of AI discourse is a lack of precise and consistent terminology.
flotothemoon#1423: Fair points, thanks @Deleted User and @tpapp157! I'll make it shorter and more precise. My target audience is general ML folks rather than AGI safety researchers, but precision is important nonetheless
¯\_(ツ)_/¯#4465: How would you approach deployment of a colab notebook to a public website interface
BoneAmputee#8363: these folks are doin somethin like that? <https://pollinations.ai/>
alstroemeria313#1694: mm~ https://cdn.discordapp.com/attachments/729741769738158194/943968138247172126/Screen_Shot_2022-02-17_at_12.32.18_PM.png
alstroemeria313#1694: (lr schedules of the form `1 / (1 + t / steps) ** power`, steps=100000, power=1,2,3)
alstroemeria313#1694: basically a super slow 1/t decay
alstroemeria313#1694: that i can then make go faster or slower by applying the power
alstroemeria313#1694: steps=1, pow=1 is... (close to) the schedule often used for like, proving sgd converges
|
alstroemeria313#1694: steps=1, pow=1/2 is approximately the adagrad schedule
alstroemeria313#1694: the thing is, you can set any value of `steps` and it doesn't affect the proof of convergence :)
alstroemeria313#1694: this type of schedule has a reputation for decaying too fast but that is because people always did it with steps=1
alstroemeria313#1694: i came up with it to fix a problem i kept having w/ exponential schedules which was that they were either too fast or too slow depending on exactly what i set the decay value to.
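As a quick sketch, that schedule drops straight into a PyTorch `LambdaLR` (matching Caffe's `inv` policy with gamma = 1/steps); model and values here are placeholders:
```python
import torch
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(10, 10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

steps, power = 100_000, 2.0
sched = LambdaLR(opt, lr_lambda=lambda t: 1.0 / (1.0 + t / steps) ** power)

for t in range(5):
    opt.step()
    sched.step()   # scales the base lr by the inverse-decay factor at step t
```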
alstroemeria313#1694: i am also trying to come up with good heuristics for EMA decay warmups (for when you are maintaining an EMA copy of the params)
alstroemeria313#1694: (i may basically be trying to come up with a practical Polyak averaging scheme for deep learning tbh)
alstroemeria313#1694: i have tried `1 - (1 / t) ** power` where `power` is 2/3 or 3/4 or so
alstroemeria313#1694: but why not add the `steps` divisor to it too tbh
alstroemeria313#1694: so 1/t decay in an "EMA warmup" is *simple averaging*
alstroemeria313#1694: (1 + k) / (t + k) decay, for k >= 0, is called "polynomial decay moving averaging", and it is the same as simple averaging when k=0.
alstroemeria313#1694: and it preserves the asymptotic convergence properties of averaged SGD
alstroemeria313#1694: while training faster.
alstroemeria313#1694: however it still ramps up too fast imo.
alstroemeria313#1694: i used power=3/4 for like a cifar-10 diffusion model, and i was going to use around 2/3 for medium/big ones where i expect to do more than a million steps
alstroemeria313#1694: Oh apparently Caffe had this.
alstroemeria313#1694: `- inv: return base_lr * (1 + gamma * iter) ^ (- power)`
alstroemeria313#1694: gamma is 1 / steps
alstroemeria313#1694: So I will just copy that.
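A minimal sketch of this schedule as a plain function, following the Caffe `inv` form above with `gamma = 1 / steps` (the function name and defaults here are illustrative, not from the gist):

```python
def inv_lr(base_lr: float, step: int, steps: float = 100_000.0, power: float = 1.0) -> float:
    # base_lr * (1 + step / steps) ** -power
    # steps=1, power=1 is the classic 1/t decay; larger `steps` stretches the
    # decay out, and `power` controls how quickly it tails off.
    return base_lr * (1 + step / steps) ** -power
```

In PyTorch this drops straight into `torch.optim.lr_scheduler.LambdaLR`, e.g. `LambdaLR(opt, lambda step: (1 + step / steps) ** -power)`, since `LambdaLR` multiplies the base LR by the lambda's output.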
alstroemeria313#1694: Does this work https://gist.github.com/crowsonkb/028cd69b0f40d911f0c3c07776b9606f
alstroemeria313#1694: The EMAWarmup defaults implement a *simple average* eheh
alstroemeria313#1694: The power needs to be like 3/4 or 2/3 for it to be an actual good EMA warmup schedule.
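For reference, a sketch of the EMA-warmup shape being described (a guess at the idea, not the contents of the gist; names are made up): the decay follows `1 - (1 + t / steps) ** -power`, so early steps behave like a simple average and later steps approach a fixed-decay EMA.

```python
def ema_decay(step: int, steps: float = 1.0, power: float = 2 / 3, max_decay: float = 0.9999) -> float:
    # steps=1, power=1 gives decay = t / (t + 1), i.e. a running simple average;
    # power around 2/3..3/4 ramps the decay up more slowly, as suggested above.
    return min(max_decay, 1 - (1 + step / steps) ** -power)

def ema_update(ema_params, params, decay: float) -> None:
    # in-place EMA update of a separate copy of the model parameters
    for e, p in zip(ema_params, params):
        e.mul_(decay).add_(p, alpha=1 - decay)
```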
alstroemeria313#1694: Anyway I was going to actually use these things in training
nicklinck#6691: Hi all, just heard about this group at ETH Denver after I told someone I was planning to start an AI DAO (can see here for more info: https://twitter.com/theagidao). Super cool what you have going here! Excited to learn more, like is this a DAO? Do you have a treasury to reward contributors?
I hope to contribute as well! I was at IBM research for the last 3 years working on AI applied to storage systems and, more interestingly, trying to model human decision making in AI. I have a new neural network model that I am beginning to design; I am calling it a "4D, intentionally connected net". Would be cool to work on this project as part of EleutherAI, also excited to learn about other projects.
Is github issues the best place to see a description of all current projects?
StellaAthena#3530: No, we are not a DAO. We do not have any affiliation with blockchain in any form.
EricHallahan#1051: Welcome! If you have not already, I suggest reading the FAQ.
https://www.eleuther.ai/faq/
cfoster0#4356: @janus are they sending you royalty checks? https://twitter.com/nickwalton00/status/1494390415771897863?t=u1mTOo_aRDcq1wyuhcdtHg&s=19
EricHallahan#1051: :sus:
StellaAthena#3530: Wait is that *the* Nick Walton
EricHallahan#1051: Yes, *the* Nick Walton.
StellaAthena#3530: NVM I was confusing him with someone else
EricHallahan#1051: *a* Nick Walton
bmk#1476: this is way too weird to be a coincidence
nicklinck#6691: Yeah looked through it, thanks! Just read through some blogs too. So do most people have "day jobs" outside of Eleuther? / Is this anyone's full time gig?
EricHallahan#1051: A few of us have jobs that let us afford to spend large amounts of our time here contributing, but otherwise pretty much all of us have full time jobs and work here as volunteers.
¯\_(ツ)_/¯#4465: I don't want colab to host it
¯\_(ツ)_/¯#4465: Seems like they still use colab, what i meant is to serve a colab notebook somewhere else
¯\_(ツ)_/¯#4465: Not be restricted by google and have a web UI
nicklinck#6691: Is there a desire to make this a full time gig? Seems like with a decently marketed token sale, yall could raise some funding to pay yourselves.
Kia#2550: is this like janus' og idea?
Deleted User#0000: Github pages is free?
Deleted User#0000: Theres also kaggle
Deleted User#0000: But then you're restricted again with ur ui
Sid#2121: https://github.com/socketteer/loom yup
Kia#2550: Damn...
Kia#2550: Looks really clean and lovely tho
janus#0150: Yes
StellaAthena#3530: Wait for real?
StellaAthena#3530: That’s awesome
EricHallahan#1051: Super cool to see it implemented on a commercial scale.
Louis#0144: Yoooo
Louis#0144: Gjgj
EricHallahan#1051: While I imagine that would certainly be possible, it is not something the EleutherAI organization has an interest in.
alstroemeria313#1694: ...It still bugs me that I don't know how well ESGD-M performs on large stuff
𓅬 gabriel_syme 𓅬#3220: Super cool to see it implemented at any scale, loom is a great idea and incredibly practical in so many ways. For example, twist your outputs to smth like geometry like me and loom becomes a generative design tool. Congrats @janus :hap:
chilli#5665: btw, I posted a little writeup on that Flop Counter I wrote: https://dev-discuss.pytorch.org/t/the-ideal-pytorch-flop-counter-with-torch-dispatch/505
StellaAthena#3530: @chilli Crosspost to the EAI blog?
chilli#5665: err, it's kind of related to work, so prolly not
StellaAthena#3530: Ah, gotcha
StellaAthena#3530: @chilli It’s loading funny on my phone, possibly due to all the screenshots. Did you ever figure out if I was right about forward CNN = backward CNN, or did you go with something else?
chilli#5665: ah, you were definitely right
chilli#5665: or well, it's 2x
chilli#5665: but I was trying to figure out the exact formula for how you can express backwards convolution in terms of forwards convolutions
chilli#5665: and I learned that the convolution with respect to weight gradients is actually a convolution that forwards convolutions can't express :sadge:
StellaAthena#3530: I was saying that it’s the same number of FLOPS forward and backward, so the total pass is 2x forward_flops. By contrast, the total pass for a transformer is 3x forward_flops.
Just to be clear, that’s what you’re referring to as “2x” here?
Sid#2121: this is awesome. Are there any more things to read about `__torch_dispatch__` specifically? all i've seen so far is a snippet of quantization code you posted a while back (did @Kharr ever post his int8 version?) and this. It's just a function that's called every time you access a tensor right?
Sid#2121: was it added into torch with a specific purpose in mind?
chilli#5665: yeah.
chilli#5665: basically, yeah, every time an operator is called on a tensor
chilli#5665: you can think of it as a generic multi-dispatch system (for PyTorch) accessible from Python that composes with PyTorch's C++-based multiple-dispatch system
chilli#5665: lol
chilli#5665: as for specific purpose... we wanted to allow users to implement things like vmap purely in Python and without PyTorch core modifications
chilli#5665: but now there's a lot of cool potential applications for it
Kharr#7888: I didn't post the tensor version because it needed more time in the oven. I ended up with a newer 9bit design (why is bool 8 bit in torch???) that is way more accurate
Sid#2121: huh
Sid#2121: what's the extra bit?
chilli#5665: since bittensors are tricky to work with
chilli#5665: lol
Kharr#7888: extra bit is to expand the dynamic range, uses bit packing since torch doesn't support it natively
Sidd#6307: This is so ridiculously cool (and necessary)! Thanks for this @chilli - gonna be super useful
EricHallahan#1051: TBH I always just assumed it made sense for BoolTensors to have 8-bit elements lol
chilli#5665: well, the main point of it is to showcase some of the new composability features in PyTorch 😛 Would probably need some more work to do nice displaying and stuff like that
Sid#2121: is there a specific PR you can link to so i can peek at the internals for how it works?
chilli#5665: for `__torch_dispatch__`?
Sid#2121: yeah
Kharr#7888: They aren't hard to pack into 1 bit.. using __torch_dispatch__ you can make a proper 1bit bool tensor
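For illustration, a rough sketch of the kind of bit packing being described, 8 bool flags per `uint8` byte (just the storage trick, not the actual 9-bit design):

```python
import torch
import torch.nn.functional as F

def pack_bool(mask: torch.Tensor) -> torch.Tensor:
    """Pack a bool tensor into uint8, 8 flags per byte (little-endian bit order)."""
    flat = mask.flatten().to(torch.uint8)
    flat = F.pad(flat, (0, -flat.numel() % 8))          # pad to a multiple of 8
    weights = 2 ** torch.arange(8, dtype=torch.uint8)   # 1, 2, 4, ..., 128
    return (flat.view(-1, 8) * weights).sum(dim=1).to(torch.uint8)

def unpack_bool(packed: torch.Tensor, numel: int) -> torch.Tensor:
    """Inverse of pack_bool; `numel` is the original number of elements."""
    weights = 2 ** torch.arange(8, dtype=torch.uint8)
    bits = packed.unsqueeze(-1).bitwise_and(weights).ne(0)
    return bits.flatten()[:numel]
```

A proper 1-bit bool tensor would then keep this packed buffer as its storage and unpack on the fly for each op, which is where `__torch_dispatch__` comes in.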
Sidd#6307: pssh who cares about the aesthetics. I wouldn't mind a peek under the hood like @Sid mentions though...
chilli#5665: https://github.com/pytorch/pytorch/pull/59760
chilli#5665: The core idea is basically that on your C++ Tensor, you store a python object
chilli#5665: and then this C++ Tensor goes through your dispatcher pretending to be a "normal" tensor
chilli#5665: and then at the end, surprise!, we redispatch back to Python using the Python object stored on the tensor
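To make the Python side concrete, here is a minimal wrapper-subclass sketch in the spirit of the examples around that PR (the class name and logging behaviour are made up for illustration; it is not the FLOP counter itself):

```python
import torch
from torch.utils._pytree import tree_map

class LoggingTensor(torch.Tensor):
    """Wrapper subclass that prints every ATen op dispatched on it."""

    @staticmethod
    def __new__(cls, elem):
        # metadata-only wrapper; the real data lives in self.elem
        r = torch.Tensor._make_wrapper_subclass(
            cls, elem.size(), dtype=elem.dtype, device=elem.device,
            requires_grad=elem.requires_grad,
        )
        r.elem = elem
        return r

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        print(f"dispatched: {func}")
        unwrap = lambda t: t.elem if isinstance(t, LoggingTensor) else t
        wrap = lambda t: LoggingTensor(t) if isinstance(t, torch.Tensor) else t
        out = func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs))
        return tree_map(wrap, out)

x = LoggingTensor(torch.randn(4, 4))
y = (x @ x).relu()   # prints something like aten.mm.default, aten.relu.default
```

Every ATen op that reaches the dispatcher on a `LoggingTensor` bounces back out to `__torch_dispatch__`, which unwraps the inputs, runs the real op, and rewraps the outputs; that is the same hook a FLOP counter can use to tally ops as they stream by.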
StellaAthena#3530: So this doesn’t come out to the transpose? I wonder why… it was late when we had that convo IIRC but the derivation seemed really clean at the time
chilli#5665: yeah, i think it ends up being a bit tricky lol
chilli#5665: the tensors involved have 2 many dimensions
StellaAthena#3530: I 100% used matrices in my derivation 😛
StellaAthena#3530: I would totally buy it stops working once you add too many dimensions
chilli#5665: yeah, thinking about more than matrices is too hard...
StellaAthena#3530: One of my professors in college was blind and could do shit like this in his head
StellaAthena#3530: One of the more intimidating things I’ve ever seen is watching him sit on a desk, with his back to a chalkboard, lecturing by basically calling out equations while a TA raced to write them down. He was able to answer student questions with sentences like “if you look at the fifth equation, we are computing the gradient with respect to such and such. If instead it said [something else], you would be right”
chilli#5665: lol
StellaAthena#3530: He convinced a lot of people they weren’t smart enough for functional analysis.
cyth#1438: Hi, I'm very impressed how EleutherAI managed to acquire so much compute, so I'm here looking for tips on how to do that for myself.
My project: I would like to train from scratch a bunch of MLMs from the last two years, specifically I'd like to start with rotary embeddings, but on 101-wiki datasets, effectively training "XLM" versions. I *think* I have the technical skill to do it, but not the compute.
Like, what do you think is the best way to go around begging for compute for these types of projects? I had some wild ideas of starting a crowdfunding service for large models, but there has to be an easier way. How do independent researchers do it?
(Hi, I'm new here, thx for having me :worried_partying: )
cfoster0#4356: We have lots of compute available, if you have code ready to go and/or if your model can be trained using `gpt-neox` or `mesh-transformer-jax`
cfoster0#4356: In terms of other compute sources, TRC is a good source if you can work with TPUs https://sites.research.google/trc/about/
cfoster0#4356: Or you can try to find some rich friends lol
cyth#1438: @cfoster0 awesome.
re: TRC. Yeah I only learned about it today from reading your FAQs. Ok I guess I need to look more into that direction.
Let me come up with some plan and some code first. :harold: It's just such a big task that I'm not willing to commit if I can't guarantee some compute beforehand.