```python
params = model.init(jax.random.PRNGKey(0),
jnp.zeros([1, *shape]),
jnp.zeros([1]),
{})
```
alstroemeria313#1694: so when haiku does the shape inference
alstroemeria313#1694: it returns the params
alstroemeria313#1694: And I'd have to make another hk.transform() and init it and that would give me a second set of params and use my device memory up.
elderfalcon#4450: Eugh
alstroemeria313#1694: Because I am short on memory and trying to use gradient checkpointing to fit a bigger model into memory already.
EricHallahan#1051: Why not use `jax.lax.cond()`?
elderfalcon#4450: Okay, first things first -- are you doing this to train and eval? What are you gaining here from JITting it?
alstroemeria313#1694: I have no idea what that is lol
guac#4716: Beat me to it heheh
EricHallahan#1051: https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.cond.html#jax.lax.cond
alstroemeria313#1694: This is my first JAX project ^^;;
EricHallahan#1051: I've never used JAX.
elderfalcon#4450: https://c.tenor.com/JBmnjSa2LJkAAAAM/plot-twist.gif
alstroemeria313#1694: so can this cond() accept a boolean
alstroemeria313#1694: Like, an ordinary Python boolean?
EricHallahan#1051: > **pred** - Boolean scalar type, indicating which branch function to apply.
alstroemeria313#1694: Is that a Python type or a special JAX type.
elderfalcon#4450: Sounds like python.
Why not just `python3` from a terminal and run it with both? Might be the shortest path cause I don't know haha/:'(
alstroemeria313#1694: Yeah works with Python booleans.
alstroemeria313#1694: ```
In [2]: jax.lax.cond(True, lambda _: 0, lambda _: 1, None)
Out[2]: DeviceArray(0, dtype=int32)
In [3]: jax.lax.cond(False, lambda _: 0, lambda _: 1, None)
Out[3]: DeviceArray(1, dtype=int32)
```
CRG#8707: My understanding is that cond effectively does both branches in parallel
CRG#8707: I think this should work
EricHallahan#1051: So it performs both branches and throws one result away at the end?
guac#4716: no way it should just trace both not perf both
CRG#8707: Hm, wait I'm not sure now: https://github.com/google/jax/issues/3103
alstroemeria313#1694: ok the inference loop is running.
alstroemeria313#1694: ty :)
alstroemeria313#1694: (after it finishes making the first demo grid it will start training, and then make demo grids at intervals)
alstroemeria313#1694: It looks like it's sampling slower though?
alstroemeria313#1694: like 2/3 as fast
EricHallahan#1051: Joao has the right idea:
> I think this should be added to the FAQ, or documented explicitly somewhere
- <https://github.com/google/jax/issues/3103#issuecomment-629653455>
alstroemeria313#1694: I am just doing the dropout unconditionally and then using the cond() to select whether to return the dropped-out version or the original one ^^;;
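A minimal sketch of that pattern, with hypothetical `rate` and `key` arguments (not the actual training code): the dropout is computed unconditionally, and `jax.lax.cond()` only selects which tensor to return, so both branches have identical shapes and dtypes.
```python
import jax
import jax.numpy as jnp

def maybe_dropout(x, key, rate, is_training):
    # Compute the dropped-out version unconditionally.
    keep = jax.random.bernoulli(key, 1 - rate, x.shape)
    dropped = jnp.where(keep, x / (1 - rate), 0.0)
    # cond() just picks which result to return; the trace is the same either way.
    return jax.lax.cond(is_training, lambda _: dropped, lambda _: x, None)
```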
elderfalcon#4450: Hayyyy, congrats, dude/dudette/dxdes-that-there-aren't-good-words-for-yet! :D
alstroemeria313#1694: lol wtf Nvidia I do not need to enable 2FA on an "account" I only use to download cudnn.
alstroemeria313#1694: What is someone going to do, impersonate me and download cudnn.
EricHallahan#1051: If they wanted to make it truly secure then they wouldn't force us into making an account to download stuff.
kurumuz#5695: I could hate NVIDIA just for locking their cuDNN downloads
kurumuz#5695: like holy shit
alstroemeria313#1694: They must want to collect email addresses. That's the only reasonable explanation.
jbustter#5167: in case someone is interested, i made a "one-line-game-of-life", it saves every frame in a single tensor https://cdn.discordapp.com/attachments/729741769738158194/896124716295790592/game_of_life.py
genetyx8#7543: ||*laughs in APL*:
```APL
life←{↑1 ⍵∨.∧3 4=+/,¯1 0 1∘.⊖¯1 0 1∘.⌽⊂⍵}
```||
jbustter#5167: the actual calculation is 149 characters but there are probably easy ways to shrink it further 🤷♂️
Deleted User#0000: there are so many reasons to hate nvidia
genetyx8#7543: inb4 implementing an NN framework in APL just to beat the smallest ImageNet code
Deleted User#0000: u could use import accumulate as a
Deleted User#0000: just use single character variable names too lol
Deleted User#0000: i assume ur not concerned with readability
jbustter#5167: true
jbustter#5167: just a sec
genetyx8#7543: ```Python
import torch as t
import matplotlib.pyplot as plt
from itertools import accumulate as a
g = torch.zeros([1,1,156,156]) # define board
g[0,0,78:81,78:81] = t.tensor([[1,1,0],[0,1,1],[1,1,0]]) # add inital condition
n_frames = 200 # number of frames
b = t.cat([i for i in a([g for i in range(n_frames)], lambda x, y: t.clamp(1.5-2*t.pow(t.conv2d(x,t.tensor([[1,1,1],[1,0.5,1],[1,1,1]]).unsqueeze(0).unsqueeze(0),t.Tensor([-3]), padding=1),2),0,1))]).squeeze()
```
genetyx8#7543: sorry for sniping :guilty:
Deleted User#0000: gotta shorten n_frames
jbustter#5167: i think i have you beat https://cdn.discordapp.com/attachments/729741769738158194/896130373145206794/game_of_life.py
genetyx8#7543: could you put it in a code block instead of a file? It's short enough for that
jbustter#5167: No. It needs to be shorter.
Deleted User#0000: @jbustter replace the -3 index with whatever the equivalent positive index would be
jbustter#5167: it's not an index
jbustter#5167: it's part of the calculation, but i think i can remove it actually
Deleted User#0000: oh cool
Deleted User#0000: srry didnt look super close
jbustter#5167: moves the -3 so it's not the bias of the conv
EricHallahan#1051: This version runs:```python
import torch as t
import matplotlib.pyplot as plt
from itertools import accumulate as a
g = t.zeros([1,1,156,156]) # define board
g[0,0,78:81,78:81] = t.tensor([[1,1,0],[0,1,1],[1,1,0]]) # add inital condition
n = 200 # number of frames
b = t.cat([i for i in a([g for i in range(n)],lambda x,y:t.clamp(1.5-2*(t.conv2d(x,t.tensor([[[[1,1,1],[1,0.5,1],[1,1,1]]]]),padding=1)-3)**2,0,1))])[:,0,:,:]
f, axarr = plt.subplots(3,figsize=(12, 12))
axarr[0].imshow(b[0])
axarr[1].imshow(b[100])
axarr[2].imshow(b[199])
```
jbustter#5167: yeah
jbustter#5167: now that line is 158 characters instead of 253
genetyx8#7543: `[i for i in a([g for i in range(n)]` looks like it has room for improvement
jbustter#5167: you're right!
jbustter#5167: ```python
import torch as t
import matplotlib.pyplot as plt
from itertools import accumulate as a
g = t.zeros([1,1,156,156]) # define board
g[0,0,78:81,78:81] = t.tensor([[1,1,0],[0,1,1],[1,1,0]]) # add inital condition
n = 200 # number of frames
b = t.cat(list(a([g for i in range(n)],lambda x,y:t.clip(1.5-2*(t.conv2d(x,t.tensor([[[[1,1,1],[1,0.5,1],[1,1,1]]]]),padding=1)-3)**2,0,1))))[:,0,:,:]
f, axarr = plt.subplots(3,figsize=(12, 12))
axarr[0].imshow(b[0])
axarr[1].imshow(b[100])
axarr[2].imshow(b[199])
```
jbustter#5167: now it's 151
chilli#5665: lol
chilli#5665: time for me to submit some PRs to PyTorch to facilitate this
chilli#5665: oh, here's an easy improvement
chilli#5665: replace `t.clamp` with `t.clip`
chilli#5665: one character saving
chilli#5665: clip is just an alias for clamp 🙂
jbustter#5167: wait that doesn't work D:
chilli#5665: 😮
jbustter#5167: https://cdn.discordapp.com/attachments/729741769738158194/896136630971277363/Code_9NoWfuLL75.png
chilli#5665: do you have an old version of pytorch?
jbustter#5167: 1.4.0
kurumuz#5695: oh wow
kurumuz#5695: that's so old!
jbustter#5167: i just used anaconda :S
Sid#2121: are we code golfing GOL in pytorch now
jbustter#5167: anaconda gives me the 1.4.0 version of pytorch
EricHallahan#1051: You could make the grid size smaller for a saving of two chars
chilli#5665: My suspicion is you have some other dependencies that end up constraining PyTorch to be 1.4.0
chilli#5665: https://pytorch.org/get-started/locally/
EricHallahan#1051: I think that was meant for @chilli :)
jbustter#5167: i don't think so https://cdn.discordapp.com/attachments/729741769738158194/896137942882476042/Code_4PGJuGouPZ.png
chilli#5665: :thonk:
jbustter#5167: conda always has old versions of libraries...
chilli#5665: I'm not sure that conda will let you update PyTorch if there's other dependencies that constrain it
chilli#5665: one of the "standard" ways of installing the newest PyTorch is through conda
BoneAmputee#8363: yeah pytorch.org defaults to conda install instructions
`conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia`
jbustter#5167: ill try to uninstall and reinstall it, i think it might have happened because i installed torch, and not pytorch
EricHallahan#1051: Three chars saved:```python
import torch as t
import matplotlib.pyplot as plt
from itertools import accumulate as a
g = t.zeros([1,1,99,99]) # define board
g[0,0,49:52,49:52] = t.tensor([[1,1,0],[0,1,1],[1,1,0]]) # add inital condition
n = 200 # number of frames
b = t.cat(list(a([g for i in range(n)],lambda x,y:t.clip(1.5-2*(t.conv2d(x,t.tensor([[[[1,1,1],[1,0.5,1],[1,1,1]]]]),padding=1)-3)**2,0,1))))[:,0,:,:]
f, axarr = plt.subplots(3,figsize=(12, 12))
axarr[0].imshow(b[0])
axarr[1].imshow(b[100])
axarr[2].imshow(b[-1])
```
cr7#3542: Hello Guys, do you think it would be feasible to reduce the inference time down to 1s for 400 tokens ? What do you think is the absolute speed limit ?
chilli#5665: can't you replace `[g for i in range(n)]` with `[g]*n`?
EricHallahan#1051: Yep, it works:
```python
import torch as t
import matplotlib.pyplot as plt
from itertools import accumulate as a
g = t.zeros([1,1,99,99]) # define board
g[0,0,49:52,49:52] = t.tensor([[1,1,0],[0,1,1],[1,1,0]]) # add inital condition
n = 200 # number of frames
b = t.cat(list(a([g]*n,lambda x,y:t.clip(1.5-2*(t.conv2d(x,t.tensor([[[[1,1,1],[1,0.5,1],[1,1,1]]]]),padding=1)-3)**2,0,1))))[:,0,:,:]
f, axarr = plt.subplots(3,figsize=(12, 12))
axarr[0].imshow(b[0])
axarr[1].imshow(b[100])
axarr[2].imshow(b[-1])```
jbustter#5167: oh nice!
jbustter#5167: now the entire line fits on my discord window
Sid#2121: ```python
import torch as t,matplotlib.pyplot as plt;from itertools import accumulate as a;g=t.zeros([1,1,99,99]);x=[1,1,0];g[0,0,49:52,49:52]=t.tensor([x,x[::-1],x]);n=200;b=t.cat(list(a([g]*n,lambda x,y:t.clip(1.5-2*(t.conv2d(x,t.tensor([[[[1]*3,[1,0.5,1],[1]*3]]]),padding=1)-3)**2,0,1))))[:,0,...];f,a=plt.subplots(3,figsize=(9,9));[a[i].imshow(b[j])for i,j in zip([0, 1, 2], [0, 33, -1])]
``` let's take this seriously now
chilli#5665: I thought we were just golfing the computation line
chilli#5665: lol
jbustter#5167: anaconda is giving me a headache, i updated anaconda navigator multiple times to no effect, and working with any new environment on vscode sometimes bugs out
genetyx8#7543: just don't use anaconda :chad:
jbustter#5167: i have the same issue with conda
spruce#1680: conda pissed me off way too much i just use WSL for programming on
jbustter#5167: I think I might just need to update cuda, I have cuda 11.0
jbustter#5167: the best one so far
bmk#1476: >let's take this seriously
>doesn't even remove unnecessary whitespace after commas
Teemochu#8740: you need to force your version then
Teemochu#8740: when you do the update do pytorch=1.9.1 cudatoolkit=11.1.1
Teemochu#8740: 1.9.1 I think, ~~might be 1.9.0~~ it's 1.9.1
Teemochu#8740: also you can do --debug if it takes over a few minutes
Teemochu#8740: if it's stuck in solving you may need to dig a bit and/or delete some things
gollark#3909: Surely you could shorten `plt`, remove some spaces, and swap out the `itertools` line (as it only appears to be using accumulate once).
Teemochu#8740: the solver is not multithreaded btw; I wish it was, it's just a SAT solver lol
inox#5400: do you have the right channels enabled? you need `-c pytorch -c nvidia`
Teemochu#8740: I use pytorch conda-forge and a couple of others but not nvidia
jbustter#5167: what is the rationale behind this? why would I need to specify the updated pytorch version for conda?
Teemochu#8740: it tries to solve your environment and it says "oh yeah I'd rather downgrade pytorch instead of all this other stuff"
Teemochu#8740: but 1.4 is old enough that something is very wrong there
Teemochu#8740: you probably have at least one very old thing that depends on pytorch
Teemochu#8740: I personally had to force for a bit because it kept insisting on a version of cudatoolkit that doesn't even work on ampere
Teemochu#8740: oh one thing what's your python version?
jbustter#5167: that's a little weird because pytorch was the first thing i installed
Teemochu#8740: it will go to hell and back to keep from updating python
Teemochu#8740: you may need to force 3.8
Teemochu#8740: (don't force anything else [or do, it's faster if you guess right], and it will take a while to solve; use --debug if you want to watch the pretty numbers)
jbustter#5167: i used python 3.8.something
jbustter#5167: i guess it's probably cuda, i have version 11.0
kurumuz#5695: that is new.
Teemochu#8740: yeah try `conda update --all pytorch=1.9.1 cudatoolkit=11.1.1 --debug`
Teemochu#8740: make sure you're fine with it updating everything, and it will ignore the pinned file
Teemochu#8740: I have a ginormous set of things in my main env (no tensorflow though) and haven't had many issues with the occasional update --all, other than sometimes having to specify channels or force versions due to a slow solver
jbustter#5167: oh, i can update cuda through conda?
Teemochu#8740: yes, but you need the latest drivers
Teemochu#8740: generally speaking I wouldn't go up a cudatoolkit level without first updating your nvidia drivers
chilli#5665: I'm most interested in golfing the central line 🙂
jbustter#5167: the latest drivers from nvidia? would updating through geforce experience be enough?
Teemochu#8740: (seriously why can't the conda solver be multithreaded)
EricHallahan#1051: I thought I could be creative and leverage the `bias` argument of `torch.conv2d` and input the padding as a positional, but the result was longer because it requires a 1D tensor of shape `(out_channels)`.
EricHallahan#1051: I even tried to set `t.t=t.tensor` and that wasn't enough to save it. :grimberk:
jbustter#5167: you can try to optimize the equation i came up with and make it shorter https://www.desmos.com/calculator/krd2art3kd
jbustter#5167: a is the value for the center of the kernel
jbustter#5167: what channel are you using?
Teemochu#8740: not at the computer but I know I have conda-forge and pytorch
jbustter#5167: im getting this error https://cdn.discordapp.com/attachments/729741769738158194/896158290130665482/cmd_un6Owqh115.png
Teemochu#8740: and I think I have fastai but that doesn't matter
Teemochu#8740: also I'm on win10
Teemochu#8740: yeah add conda-forge and pytorch to your channel list
jbustter#5167: im amazed how far behind the official channels are
jbustter#5167: darn... https://cdn.discordapp.com/attachments/729741769738158194/896161272075280404/b88807a9-205e-4dae-b3bd-5275bf47ac65.png
bernaise#6161: i switched to using mamba instead of anaconda a while ago and i've been very satisfied
inox#5400: fastest way to solve 90% of conda environment problems is to use this alias:
```
conda-remove-env() { conda remove --name $1 --all ;}
```
bmk#1476: conda bad
bmk#1476: pyfra environnent management good
inox#5400: is anyone using poetry?
bmk#1476: i tried it at some point
bmk#1476: i dont use it anymore though
bmk#1476: venv is good enough for me
bmk#1476: plus i let pyfra manage that for me anyways
jbustter#5167: @EricHallahan ```python
b=t.cat(list(a([g]*n,lambda x,y:t.clamp(3-(t.conv2d(x,t.tensor([[[[2]*3,[2,1.,2],[2]*3]]]),padding=1)-6)**2,0,1))))[:,0,...] ```managed to make it even better
jbustter#5167: down to 124 or 123 if your pytorch is updated
bmk#1476: is that list necessary
jbustter#5167: if there is an alternative to accumulate it's not
jbustter#5167: or a better way of using accumulate
bmk#1476: you could also do [*whatever]
bmk#1476: saves 3 chars
jbustter#5167: can you write it? i don't know that feature
bernaise#6161: ```>>> a = range(5,10)
>>> [a]
[range(5, 10)]
>>> [*a]
[5, 6, 7, 8, 9]
>>> [*a] == list(a)
True
```
jbustter#5167: ohh
jbustter#5167: ```python
b=t.cat([*a([g]*n,lambda x,y:t.clip(3-(t.conv2d(x,t.tensor([[[[2]*3,[2,1.,2],[2]*3]]]),padding=1)-6)**2,0,1))])[:,0,...]``` down to 120
bmk#1476: 119
```py
b=t.cat([*a([g]*n,lambda x,y:t.clip(3-(t.conv2d(x,t.tensor([[[z:=[2]*3,[2,1.,2],z]]]),padding=1)-6)**2,0,1))])[:,0,...]```
kurumuz#5695: pyfra on your local computer?
bmk#1476: can do
bmk#1476: just use `local`
bmk#1476: is the `,...` necessary?
bmk#1476: it shouldnt do anything
chilli#5665: I feel like there's gotta be a more efficient way to write this part
chilli#5665: `t.tensor([[[[2]*3,[2,1.,2],[2]*3]]])`
bmk#1476: since it's at the end of the thing
jbustter#5167: https://discord.com/channels/729741769192767510/729741769738158194/896169839108947979 bmk sort of found a better version
chilli#5665: yeah I saw that
chilli#5665: but I feel like there's gotta be an even better way
bmk#1476: also i love how code golf has become part of the eleuther lore now
jbustter#5167: also ```python
b=t.cat([*a([g]*n,lambda x,y:t.clamp(3-(t.conv2d(x,t.tensor([[[z:=[2]*3,[2,1.,2],z]]]),padding=1)-6)**2,0,1))])[:,0]
``` does work XD
jbustter#5167: so now it's 115
bernaise#6161: that's numberwang
bmk#1476: mornington crescent
Sid#2121: I didn't realise people outside of the UK knew about mornington crescent wtf
Sid#2121: how do you know about that lol
bmk#1476: i also know about look around you
Sid#2121: sure, but that's a viral yt video
Sid#2121: mornington crescent is an obscure BBC radio 4 in joke
bmk#1476: isnt mornington crescent also viral
Sid#2121: i mean, if i search it on youtube the most viewed video i get is like 50k
Sid#2121: so unless you know something i don't, not really??
BoneAmputee#8363: look around you was on adult swim too
bmk#1476: huh
bmk#1476: i dont even know how i found out about mornington crescent
bmk#1476: all i know is ive known about it since forever
alstroemeria313#1694: Was it from Douglas Hofstadter
alstroemeria313#1694: Probably not actually
jbustter#5167: ```python
b=t.cat([*a([g]*n,lambda x,y:t.clip(3-(t.conv2d(x,t.tensor([[[z:=[2]*3,[2,1.,2],z]]]),None,1,1)-6)**2,0,1))])[:,0]
``` 114
jbustter#5167: k, good night
Kia#2550: This is the best flex lol
inox#5400: getting this cool new colab crash where it instantly restarts the kernel at the same time
Teemochu#8740: for a while I enforced it to only use conda-forge
elderfalcon#4450: Ah the memories haha.
elderfalcon#4450: Yes, the hacker known as 4chan has been doing this a lot recently, unfortunately.
elderfalcon#4450: Yeah I was hoping someone could figure out the kwarg issues, nice. Also, I love the walrus operator switch up above. Our BDFL would be proud. :D
bmk#1476: "who is this hacker 4chan"
Teemochu#8740: kuru
nostalgiahurts#3408: 91. also saving of 4 on the import, if that counts
```python
from functools import reduce as r
b=r(lambda x,y:t.cat([y,z==3+(4==x*(z:=t.conv2d(x,t.ones(1,1,3,3),None,1,1)))]),[g]*n)[:,0]
```
nostalgiahurts#3408: oh, we can use an avgpool2d instead to save 10, which lets us go back to the accumulate strategy, saving 1, giving 80
for some reason `avg_pool1d` is in `torch` but `avg_pool2d` isn't
```python
b=t.cat([*a([g]*n,lambda x,y:z==3+(4==x*(z:=t.nn.AvgPool2d(3,1,1)(9*x))))])[:,0]
```
also the `z==3+(4==x*z)` trick comes from a J solution I saw, so I can't take credit for it
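For reference, a readable (un-golfed) sketch of the update these one-liners implement; the function name is just for illustration. `AvgPool2d(3, 1, 1)` applied to `9*x` recovers the exact 3x3 neighbourhood sum including the cell itself, and the `z==3+(4==x*z)` trick folds the two Life cases (sum of 3, or alive with a sum of 4) into one comparison.
```python
import torch as t

def life_step(x):
    # x: (1, 1, H, W) float tensor of 0s and 1s.
    # 3x3 sum over each neighbourhood *including* the centre cell:
    # averaging nine values that are each 0 or 9 gives back the integer sum.
    z = t.nn.AvgPool2d(3, 1, 1)(9 * x)
    # With the centre cell counted, Life's rule is: the next cell is alive
    # iff the sum is 3, or the cell is alive and the sum is 4.
    return ((z == 3) | ((x == 1) & (z == 4))).float()
```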
SysD | 12many#3843: https://t.me/ml_world/834
Paxilius#2291: hey guys, I have a 1080ti do I have a chance to fine-tune GPT-NEO with DeepSpeed?
Paxilius#2291: it has 11Gb VRAM of which it seems 500Mb are taking by the OS by default
Paxilius#2291: tbh I know it's a silly question, and I'm expecting a no. I got really excited and trying to get knowledgeable, even being pointed in the right direction would make me super happy 🙂
jbustter#5167: We need a sort of code obfuscation competition here
jbustter#5167: I think there will be interest
Louis#0144: We went with this btw incase you're curious
Deleted User#0000: we need a goose pun
Kazumi#1297: I'm happy to see some codegolfology happening
jbustter#5167: you can create a video from the array with `torchvision.io.write_video("game_of_life.mp4",b.view(n,156,156,-1).repeat(1,1,1,3)*255,fps=30)` https://cdn.discordapp.com/attachments/729741769738158194/896385680199024720/game_of_life.mp4
Deleted User#0000: my bud and i are gonna try and make our own FOSS implementation of pluribus (poker ai)
jordiae#4107: Can you recommend any cheap cloud provider for storing datasets? S3 is very expensive and it’s not designed for the use case of ML data (not many requests, but wanting good speed when you do them, and potentially storing TBs of data)
OccultSage#3875: These requirements are nearly mutually exclusive.
Teemochu#8740: get a seedbox
Teemochu#8740: try to beat this https://seedbox.io/shared-seedbox/ (the "shared storage" ones are cheaper and also fine if you are ok with sharing your gigabit line and not opening up links to the public)
Deleted User#0000: check out the eye’s server. a lot of people there can help u set up a seedbox
Teemochu#8740: also $23/mo/tb (s3 cost) is highway robbery
Teemochu#8740: (and the transfer cost looks even worse... $90/tb outbound)
Deleted User#0000: oof
Louis#0144: That name is so awful
Teemochu#8740: you, too, were an early redditor?
jordiae#4107: Many thanks!
Yerren#1954: My god I have peaked in life
Louis#0144: https://twitter.com/ak92501/status/1446273443612217351?s=21
Louis#0144: Went viral too
Louis#0144: LMAOO
Louis#0144: You know what we need
Louis#0144: The pile but for critiques
Louis#0144: Like a giant dataset of like aligned text, critique of text pairs
Louis#0144: For tons of domains
Louis#0144: @bmk make it so data monkey
Louis#0144: Jkjk
Louis#0144: Lmaooo
bmk#1476: be the change etc etc
Louis#0144: I don't even know if it would make sense though
Louis#0144: Cross domain preferences is kinda weird
Louis#0144: Pretty sure all carp would do is learn to separate the modes lol
Awesome_Ruler_007#7922: What models preserve spatial information in images? I read a few primers on CapsNets which is almost what I am looking for, but it's too computationally heavy
Awesome_Ruler_007#7922: maybe putting it on TPUs might be great project?
alstroemeria313#1694: what is your input and what is your output?
Awesome_Ruler_007#7922: plain images
alstroemeria313#1694: oh, like image to image translation?
Awesome_Ruler_007#7922: mostly gaining an encoder that considers other attributes of the object in the projection - just like Capsule networks
alstroemeria313#1694: oh
alstroemeria313#1694: you could take VGG-19 or something and take the linear layers off the end
alstroemeria313#1694: that would encode a 3x224x224 to a 512x7x7
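A sketch of that suggestion using torchvision, keeping only VGG-19's convolutional `features` module; the variable names are just for illustration.
```python
import torch
from torchvision import models

# Drop the classifier head and keep the convolutional feature extractor.
encoder = models.vgg19().features    # maps (N, 3, 224, 224) -> (N, 512, 7, 7)
x = torch.randn(1, 3, 224, 224)
print(encoder(x).shape)              # torch.Size([1, 512, 7, 7])
```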
Awesome_Ruler_007#7922: nah, its mostly theoretical stuff 🙂 I was just looking for the best way to model object interaction with vectors
alstroemeria313#1694: oh
Awesome_Ruler_007#7922: Kinda going towards the idea of building world models, starting off simply with a toy dataset and getting Transformers to work
Awesome_Ruler_007#7922: however, I was exploring other ways of representing the data
𓅬 gabriel_syme 𓅬#3220: in other news, this is cool. I'd love to play around with this model a bit. Did we ever guide it?
https://github.com/dorarad/gansformer/issues/17#issuecomment-939376900
𓅬 gabriel_syme 𓅬#3220: also, @jack thanks for the write up on the diffusion + VQGAN thing! really cool 🙂
jack#8178: huh?
𓅬 gabriel_syme 𓅬#3220: damn different jack 😄 sry!
jack#8178: lol
𓅬 gabriel_syme 𓅬#3220: there's two jack's I think, like same name, missed the coinflip
𓅬 gabriel_syme 𓅬#3220: oh wait, I think it is you? was it you that did the diffusion work?
𓅬 gabriel_syme 𓅬#3220: I was talking about this: https://github.com/lucidrains/DALLE-pytorch/discussions/375 (apologies if it's someone else)
jack#8178: different guy
jack#8178: interesting work
jack#8178: definitely a direction I've considered, fun that another guy with the same name is working on it lol
jack#8178: ...and w/the same kinda unusual hardware config
jack#8178: (4x3090 - unusual bc it is actually kinda awkward to power in a residential/office setting, and rack units tend to have datacenter cards instead)
𓅬 gabriel_syme 𓅬#3220: yeah it fooled me
!!Puffy Bird!!#7496: Hey so here's an idea, We have EleutherAI and LAION, One for NLP, one for vision, what about one for SOTA Audio and Art?
bmk#1476: who said EleutherAI is for just NLP
!!Puffy Bird!!#7496: fair
!!Puffy Bird!!#7496: I'd assume NLP is a big focus
!!Puffy Bird!!#7496: i mean in the ideal world I think we as an open source community should band together to form something that'll go past OpenAI
!!Puffy Bird!!#7496: see how much we can pull our shoe-string budget
bmk#1476: well
bmk#1476: "going past OA" isn't really the goal, nor is it feasible
!!Puffy Bird!!#7496: That's true
!!Puffy Bird!!#7496: but
!!Puffy Bird!!#7496: I'm basically saying I want to start something too
!!Puffy Bird!!#7496: to try to go further with research
bmk#1476: trying to beat OA would only worsen the AI race dynamics
!!Puffy Bird!!#7496: that's true
bmk#1476: you can start anything anytime, nobody can stop you
bmk#1476: except for the irs
!!Puffy Bird!!#7496: the only thing I ask of... is some experience first, yes I know this isn't formal but I want to join EleutherAI
!!Puffy Bird!!#7496: I need some direction (I'm pretty young) and I think the best thing would be to be in the field
!!Puffy Bird!!#7496: LAION and I aren't doing so well
!!Puffy Bird!!#7496: need to figure out what to do lol
!!Puffy Bird!!#7496: is there anyone that can help give me first hand experience?
bmk#1476: to join eleuther all you need to do is just start writing code
!!Puffy Bird!!#7496: any recommendations on what to work on?
bmk#1476: uhh
bmk#1476: what are your skills
cfoster0#4356: Audio has a ton of low hanging fruit
cfoster0#4356: I suspect major players are intentionally witholding in this regard :guilty:
cfoster0#4356: Because of the chance it would get policymakers and media involved
!!Puffy Bird!!#7496: yup, I mean I can figure out copyright later, using "copyright free" stuff can't be too bad
!!Puffy Bird!!#7496: my main skills are sadly conceptual
cfoster0#4356: A lot of TTS and voice conversion papers do not release model weights, probably for fear of misuse
EricHallahan#1051: I guess text and image generation are like "oh that's kinda cool", audio and video is "oh that's dangerous stuff, we can't have people be running around with that".
EricHallahan#1051: e.g. DeepFakes
EricHallahan#1051: *cough* *cough* AutoVC *cough* *cough*
Teemochu#8740: IRS can't stop you, they just want their cut
!!Puffy Bird!!#7496: lmao
Teemochu#8740: oh copyright?
EricHallahan#1051: Yeah that's another angle.
Teemochu#8740: I mean the music industry is infamous for enforcement
cfoster0#4356: I was thinking more, "impersonate anyone with a minute recording of them, practically free"
!!Puffy Bird!!#7496: I was thinking more music and art
!!Puffy Bird!!#7496: making art and music more automated
!!Puffy Bird!!#7496: I used to have a cringe name for this dream company
!!Puffy Bird!!#7496: Simp Labs
!!Puffy Bird!!#7496: making Art and Music easier and more streamlined to produce
bmk#1476: be more specific
bmk#1476: algebraic topology is conceptual
bmk#1476: Goose species identification is conceptual
!!Puffy Bird!!#7496: I'll be honest I just think of ideas and then never do anything
bmk#1476: what kind of ideas
!!Puffy Bird!!#7496: I'm working on trying to be better at implementing
bmk#1476: are they about goose identification
bmk#1476: or transformer models
bmk#1476: or audio
bmk#1476: or vaes
cfoster0#4356: or goose *generation*?
!!Puffy Bird!!#7496: **what about goose video generation**
!!Puffy Bird!!#7496: the goose movie
!!Puffy Bird!!#7496: using NLP and Sketching GANs
!!Puffy Bird!!#7496: CLIP might be a good start for this, CLIP would predict directionality
!!Puffy Bird!!#7496: (or some other network who knows)
!!Puffy Bird!!#7496: ideally I would be able to make a direction and space model, along with something to pull up image classes
𓅬 gabriel_syme 𓅬#3220: I'd be way more stoked if some people here started focusing towards DM more than OAI tbh. Although, I understand that'd be a bit niche work since a lot of the work is in NLP. That said, my Transformer/RL folder has 20 papers in it now
!!Puffy Bird!!#7496: whats that company again?
!!Puffy Bird!!#7496: oh
!!Puffy Bird!!#7496: deepmind
𓅬 gabriel_syme 𓅬#3220: yes sorry, deepmind
!!Puffy Bird!!#7496: on an unrelated note
!!Puffy Bird!!#7496: Robotics have been in high demand
!!Puffy Bird!!#7496: ok so I have some LEGO and a tablet on hand
!!Puffy Bird!!#7496: what if I just make a "cost efficient" robotics lab
!!Puffy Bird!!#7496: and try to use mostly vision
!!Puffy Bird!!#7496: maybe make a cheaper version of CLIPort
bmk#1476: robotics is kinda pointless imo
bmk#1476: personal opinion tho
!!Puffy Bird!!#7496: I mean it honestly doesn't matter, I'm not trying to win anything here
bmk#1476: I mean yeah if it's for learning go for it
!!Puffy Bird!!#7496: yea, I think i need to learn more, so I'll be doing that
mkualquiera#3484: why?
bmk#1476: well
bmk#1476: robot boring
bmk#1476: big transformer go brrr
mkualquiera#3484: I mean ultimately you need some way to interact with the real world
bmk#1476: no you dont
bmk#1476: source: me
bmk#1476: i havent interacted with the real world in like months
mkualquiera#3484: even the best transformer is really just a transformer if it can't catgirl
bmk#1476: but what if no catgirls
mkualquiera#3484: hmnmmmm\
mkualquiera#3484: I guess that's fair enough
mkualquiera#3484: arguably big transformer could catgirl much better than we ever could
EricHallahan#1051: AI robotics is pointless.
AI_WAIFU#2844: nah, we're just absolute trash at it
!!Puffy Bird!!#7496: lmao
EricHallahan#1051: Other fields of robotics are totally non-pointless (like those for exploration and those for manufacturing).
bmk#1476: lemme rephrase, the marginal value of an hour spent working on robotics is puny compared to like a bunch of other things
mkualquiera#3484: Yeah
AI_WAIFU#2844: Only if you're good at prioritizing
mkualquiera#3484: It's just too inefficient
AI_WAIFU#2844: If you go do RL and give your bot 100 lifetimes of experience, then you're mostly wasting everyone's time.
bmk#1476: yeah but like say working on something like, uh, big model training
bmk#1476: or thinking about alignment
bmk#1476: or working on [redacted] at OA
!!Puffy Bird!!#7496: I'm just trying to think about a project to do
!!Puffy Bird!!#7496: like dang this is hard
AI_WAIFU#2844: Like I said, if you're good at prioritizing and keeping things in context, there are better things to work on. But the good thing about robotics is how *real* it is. It's much harder to bullshit robotics.
bmk#1476: it's also pretty hard to bullshit LMs
bmk#1476: it's easier to compare two LMs than compare two robotics systems
Parker#3197: any goal in mind?
AI_WAIFU#2844: I don't think so. The large variance in generalization from pretraining from architectures with identical losses would suggest otherwise.
AI_WAIFU#2844: Also reminder that modern LMs are trained on 2-3 OOMs more text than humans will ever see in their lifetimes.
bmk#1476: oh I don't mean comparing architectures
bmk#1476: I literally mean comparing this specific set of weights to another specific set of weights
bmk#1476: just like toss it in eval harness and like bold down your column without regard
bmk#1476: also doing downstream gets around the loss problem
bmk#1476: well not entirely
bmk#1476: but like makes it harder to have a model that sucks a lot at something without noticing
Parker#3197: like, has to be related to machine learning, has to improve on what is available currently, has to make some process easier, has to look good for a resume, has to be profitable, has to be entertaining, has to help research, etc
!!Puffy Bird!!#7496: sorry for not responding
Parker#3197: anything that you're trying to do more so than others?
Parker#3197: or is it literally anything that involves programming and you're just looking for something that people will find useful?
AI_WAIFU#2844: Yeah but now you need a whole damn eval harness to compare LMs.
AI_WAIFU#2844: So that you can't BS it
!!Puffy Bird!!#7496: I just wanted to contribute to research for the sake of it
!!Puffy Bird!!#7496: I'm really inexperienced with actual AI/ML research and wanted to try to contribute so i can get better
!!Puffy Bird!!#7496: don't have any clear goal
bmk#1476: the tangibleness of robotics eval harnesses is a downside
EricHallahan#1051: Contribute to existing projects then. Maybe you'll come up with one in time.
mkualquiera#3484: Help us with cleaning up the Laion dataset
bmk#1476: it costs me like next to nothing to run more tasks on a LM
bmk#1476: just need to wait like 5 extra seconds
bmk#1476: testing a real robotics thing is much more expensive and involved
AI_WAIFU#2844: I got one for you:
prevent the dissassembly of the planet by malign AGI.
Go.
!!Puffy Bird!!#7496: I mean that's fair, I'll most likely stay with LAION for now, their project is the most interesting to me, I just need to figure out how to better contribute
EricHallahan#1051: I had no idea what I was doing either when I arrived.
!!Puffy Bird!!#7496: lmao
bmk#1476: we don't fundraise, laion shouldn't either
!!Puffy Bird!!#7496: yup
bmk#1476: actually I don't know what they're up to now
bmk#1476: sounds kinda wat
!!Puffy Bird!!#7496: filtering would be a good job for me, they said they needed lots of detectors
bmk#1476: :Virgin: gofundme
:chad: no fundraising at all
EricHallahan#1051: Thank god we don't deal with this kind of stuff at all.
bmk#1476: anyways they're their own thing so they can do whatever they want
bmk#1476: I just find it weird
!!Puffy Bird!!#7496: I can do hate and violence, since chris is already making an nsfw detector
bmk#1476: could != should
EricHallahan#1051: I find NSFW detectors kinda sus.
bmk#1476: everyone I've talked to is kinda thonk about that paper
EricHallahan#1051: Most people consider it to be a bad paper.
!!Puffy Bird!!#7496: yup, I agree
!!Puffy Bird!!#7496: we should be slightly more sensational
bmk#1476: how about not
Teemochu#8740: It's basically a PRD lol. Seen plenty in my time, those are never meant to do anything but convince the stakeholders to go in on something.
!!Puffy Bird!!#7496: hahahahahah LAION-400M:CLIP is all you need
AI_WAIFU#2844: Yes and you can also get 100x more data. This is great if you've got the bigger picture in mind. But also if you're not careful you'll end up incrementally pushing benchmarks because it's really easy to spend all your time doing A/B tests and seeing which tweaks make the numbers go down lower.
bmk#1476: or at least if you want to be sensationalist, please don't do that here
!!Puffy Bird!!#7496: yeah, I mean filter the life out of them and it could be useful
Teemochu#8740: TBH Alamy is probably the best dataset
bmk#1476: I mean.. I guess that's a problem, but also I think the overfitting to benchmarks problem is overrated
bmk#1476: I mean, imagenet classifiers do mostly generalize to imagenet
bmk#1476: and imagenet is about as goodharted as real metrics get
Teemochu#8740: It's professionally labeled and all the filtering or whatever is taken care of before it's uploaded, no need to filter out stuff like you would if you download from sketchville.pw
Teemochu#8740: (Please tell me that site doesn't exist)
AI_WAIFU#2844: Maybe, but IMO we waste a tremendous amount of time making small tweaks to transformers when 1 iteration of moores law gives you the same benefit.
bmk#1476: I don't disagree with that honestly
guac#4716: we gotta do something while the real heroes do their work sheesh
EricHallahan#1051: We should just start scaling S3 then. :ultraberk:
Teemochu#8740: The Amazon thing?
guac#4716: has any one started an implementation?
Teemochu#8740: Use a seedbox, way cheaper
bmk#1476: what if we train a transformer to design EUV machines
EricHallahan#1051: Not that I am aware, @lucidrains wants to test it though.
cfoster0#4356: Really? The content is solid imo, even if the underlying rhetoric is questionable
!!Puffy Bird!!#7496: chip design is starting to be made by AI
!!Puffy Bird!!#7496: google made the paper already
EricHallahan#1051: Ill stated, I meant they dislike the paper.
bmk#1476: most of the difficult work is still being done by wizards in Taiwan tho
!!Puffy Bird!!#7496: yup
AI_WAIFU#2844: all
bmk#1476: I'll be totally honest I haven't read the paper in its entirety
bmk#1476: I don't think anyone has
guac#4716: that's because it's a textbook lol
cfoster0#4356: No one has
bmk#1476: not even the authors (probably)
cfoster0#4356: I would recommend it as a reference for someone who knows a little but isn't as wrapped up in this as EAI folks
AI_WAIFU#2844: All the shit we do is a fucking joke compared to the magic bs that goes on between TSMC and its suppliers.
bmk#1476: but I did skim the alignment section
bmk#1476: and nothing really stood out to me
guac#4716: yeah i just try to mentally block that out though lol
bmk#1476: what would happen when ML actually starts feeding back into the chip design and mfg process significantly
bmk#1476: does that cause even faster exponential growth
AI_WAIFU#2844: Look I need to be able to understand this stuff if I want my forecasts to be accurate.
!!Puffy Bird!!#7496: no it just makes it the same as before
guac#4716: isn't that the most legitimate case of "paperclipping"
bmk#1476: .. so you're going to go learn how to do chip layout wizardry?
AI_WAIFU#2844: Significantly? The planet gets disassembled.
bmk#1476: ok so shilling transformers to TSMC is a bad idea gotcha
AI_WAIFU#2844: The chip layout stuff is still pretty easy. It's the fabrication process that's complete magic.
!!Puffy Bird!!#7496: yeah I agree that stuff is crazy
bmk#1476: ok are you going to learn fabrication process magic
AI_WAIFU#2844: Like the truth is that for the kinds of computations we do, computers don't actually need to be that complicated. Even the OS mostly just gets in the way.
AI_WAIFU#2844: So the whole software stack is comparatively simple.
AI_WAIFU#2844: So yes, I've been getting acquainted with the fabrication magic.
AI_WAIFU#2844: But there's serious limits
AI_WAIFU#2844: Most of it is truly secret knowledge.
bmk#1476: ok lmk when TSMC gives you a $1M offer to work as a chip wizard
AI_WAIFU#2844: And even if you had someone write it all down for you, it doesn't mean you could replicate it.
AI_WAIFU#2844: There are layers and layers and layers of subcontractors that make this stuff possible.
bmk#1476: I mean where are you getting this stuff from anyways? like papers from academics working with like proof of concept stuff or what
bmk#1476: are there even a lot of useful papers out there about how fab processes work
AI_WAIFU#2844: Each with their own trade secrets, oral tradition, and tricks that the practitioners are only aware of subconsciously.
bmk#1476: honestly from the outside view this sounds a lot like ML
Parker#3197: idk like cpu production is complicated
Parker#3197: idk how any new business is ever supposed to enter
AI_WAIFU#2844: Yeah, because it's like ML, except at least 1000x more complicated and much more heavily funded.
bmk#1476: makes sense
AI_WAIFU#2844: People just tweak processes over and over and over
AI_WAIFU#2844: very similar to ML people fucking with NNs
Parker#3197: there has to be just so much stuff involved with manufacturing. we don't even see clear IP theft really
AI_WAIFU#2844: except that instead of it being just an NN with a few tricks, each step of the fabrication process is like that, and there are a hundred steps, and each of the inputs and machines are subject to the same situation such that each source of feedstock or machine is built using the same kind of dark magic.
AI_WAIFU#2844: recursively
bmk#1476: so uh then how the heck are you learning about any of this
bmk#1476: sounds like a big clusterfuck
AI_WAIFU#2844: I learn about the stuff from several decades ago
bmk#1476: where do you even begin
bmk#1476: ah
AI_WAIFU#2844: just to get the most basic gist
bmk#1476: and assume that stuff now is like that but like even more fucked up
AI_WAIFU#2844: presumably
AI_WAIFU#2844: Just looking at the basic operating mechanism of EUV machines is insane
bmk#1476: so uh sounds like the upshot is this entire thing is really brittle
AI_WAIFU#2844: I don't spend a lot of time doing it
AI_WAIFU#2844: I just do it to have some understanding of the backbone that makes all of this possible.
bmk#1476: like I know software people like to complain about how brittle software is but this sounds 100x worse
AI_WAIFU#2844: You know that recursive dark knowledge fuckfest that applies to each step of the process? There's another recursive dark knowledge fuckfest that applies to keeping disturbances out of the system so that the extremely delicate manufacturing process keeps going without interruption.
AI_WAIFU#2844: Several, actually
bmk#1476: so, basically DevOps on steroids
bmk#1476: what are some examples of the shit they do to keep it going?
guac#4716: holy shit you need to snipe a ~30um of tin to generate extreme uv lmao
bmk#1476: also I kind of want to claim that software is almost as bad except we're just all more used to it so it doesn't feel as bad
guac#4716: this stuff seems cooler than ml hmmm
bmk#1476: I mean docker sort of is kinda recursively fucky in a way for example
AI_WAIFU#2844: Don't know much actually. But for instance, I did read that radio waves from cell phones would fuck up the calibration of some of the equipment. So presumably something to avoid that.
bmk#1476: our legacy code is too brittle so we make a box to put the code in so we never have to touch it directly
AI_WAIFU#2844: No, you have to do it twice in precise and quick succession.
AI_WAIFU#2844: And you gotta do it several times a second
AI_WAIFU#2844: no fuckups allowed
guac#4716: *grabs joint* alright let me pull up some wikis for the night
AI_WAIFU#2844: I found this youtube series to be informative https://www.youtube.com/watch?v=bUJiMJweI8M&list=PLRhTt8ZaQryvF6WvXDJagC2p2waeHQORz
bmk#1476: or the fact that half of the world's software is weird legacy stuff in dead languages that nobody understands
bmk#1476: or that a lot of stuff like databases are layers on layers of dark magic
bmk#1476: I hear Oracle db is particularly terrible
AI_WAIFU#2844: that's if you're lucky, if you're unlucky they're object files for a dead architecture written in dead languages with missing source code running in a vm.
bmk#1476: I'm thankful I don't have to deal with that shit lmao
AI_WAIFU#2844: Or just entire machines, filled with unknown dependencies, cloned over and over, over the course of a decade, and kept around because nobody wants to foot the bill to rewrite all the shit that runs on them.
bmk#1476: anyways I think software is probably a good fraction of the cursedness of silicon, just with less secrecy because >0% of software is oss
guac#4716: software is just innately viral lol
wabi-sabi#5811: There's a series of learn to code books out there somewhere written from the frame of software archaeologists maintaining million year old code, can't remember the name but was really enjoyable to read
AI_WAIFU#2844: Taiwanese and european wizards spend billions making shit run close to the limits of what physics will allow, all to be undone by javascript developers
Zippy#1111: Happy to be of service :peepoBlanket:
bmk#1476: and god bless js for performing this important role in slowing timelines down
AI_WAIFU#2844: I've seen code that makes network calls half way around the planet to generate a random number.
guac#4716: those taiwanese seeds too sheesh
Zippy#1111: I mean the time it takes to get around the world a couple times could be a decent rng source :ThonkRotate:
AI_WAIFU#2844: Microservice architectures are the worst thing to ever be invented
Zippy#1111: Although tbh I've seen a lot worse python code than js code :overfloosh:
AI_WAIFU#2844: You take all this speed and power and memory, and then you shit all over it by introducing 10ms of latency between every 5 lines of code
wabi-sabi#5811: *laughs in Galactus*
AI_WAIFU#2844: and that's if you do shit correctly
Zippy#1111: Well I mean, with node.js, that's actually one of the benefits, it doesn't have to wait for some request to finish before it can start working on other tasks in the task queue.
AI_WAIFU#2844: If you don't do shit correctly, everything becomes a synchronous call and all of a sudden it takes a whole fucking second to process a request.
Zippy#1111: So it's not really true wrt javascript in node.js
𓅬 gabriel_syme 𓅬#3220: Yeah but I guess much less depends on python code?
wabi-sabi#5811: https://youtu.be/y8OnoxKotPQ
Zippy#1111: I mean, some of the most mega open source code you can use are python (ML libraries)
Zippy#1111: And I've seen some absolutely horribly coded ML libraries lol
𓅬 gabriel_syme 𓅬#3220: I meant the vast world outside ML, which even now is probably tiny compared to everything else
Zippy#1111: Yeah. I do kind of wish there was another lang we could use (I mean there is, julia) but the community is just too small at this point.
Zippy#1111: For actually fast code.
Zippy#1111: Which could help speed up a ton of things.
AI_WAIFU#2844: I'm not laughing
AI_WAIFU#2844: nvm I got to the bit about dying alone, this shit is hilarious
BoneAmputee#8363: one of my fav internet vids :berk:
zphang#7252: *someone write an analogy about how attention is really like a bunch of microservices talking to each other*
𓅬 gabriel_syme 𓅬#3220: Lol that's funny
𓅬 gabriel_syme 𓅬#3220: Reminds me of that Tenet video
zphang#7252: oh which one
𓅬 gabriel_syme 𓅬#3220: https://youtu.be/s2FXfFeRtJo
𓅬 gabriel_syme 𓅬#3220: Hilarious
𓅬 gabriel_syme 𓅬#3220: Now I have to watch it again obviously
ilovescience#3282: i enjoyed tenet and this is still funny lol
ilovescience#3282: (i've seen this video before actually)
𓅬 gabriel_syme 𓅬#3220: yeah lol
𓅬 gabriel_syme 𓅬#3220: I liked tenet too, this does hit it on the head
𓅬 gabriel_syme 𓅬#3220: kidnapped the wrong elephant lol
ElectricLizardFren#8378: hello
EricHallahan#1051: Welcome!
ElectricLizardFren#8378: I am new yes! Hi!
bmk#1476: I swear I read a blog post once that goes like this but I can't find it again:
company decides to break big monolithic application into microservices. when they're done, they turn it on and try to load a page and.. nothing happens. they're all like fuck it and put it on the biggest cluster they have with an enormous high bandwidth network plane and it finally works but it takes 5 minutes to load a single webpage and it totally saturates the network
bmk#1476: can someone please help find
EricHallahan#1051: @joaogui1 🙂
ElectricLizardFren#8378: Dumb question, but would you be able to feed an AI with official Smash bros ultimate renders and then make it able to turn any cartoon image to the shading and image style of that? Or am I putting my expectations too high or overthinking?
StellaAthena#3530: No that's easy
ElectricLizardFren#8378: Oh wait really?
ElectricLizardFren#8378: I mean I know it is but how high quality can it be?
StellaAthena#3530: Extremely
StellaAthena#3530: The main limit is induced by the amount of official renders you have
StellaAthena#3530: Are we talking 10, 100, 1000?
ElectricLizardFren#8378: Let's seee....
ElectricLizardFren#8378: Now I have to do math for every smash bros render there is
StellaAthena#3530: This is from a recent paper. The large images (except the top left) are AI-generated https://cdn.discordapp.com/attachments/729741769738158194/896770543649447947/Screen_Shot_2021-10-10_at_10.44.25_AM.png
ElectricLizardFren#8378: So there's 89 Smash bros characters
ElectricLizardFren#8378: But every character has 8 costumes
ElectricLizardFren#8378: So 89 × 8...
ElectricLizardFren#8378: ....712?
ElectricLizardFren#8378: Woah! So cool!
ElectricLizardFren#8378: I decided to "draw" the most simple thing I can use to experiment this idea on
ElectricLizardFren#8378: (note the lack of shading cause I didn't know if I should add detail) https://cdn.discordapp.com/attachments/729741769738158194/896771064376487956/Untitled116_20211010074424.png
AI_WAIFU#2844: Recent? Pretty sure this was 2017
AI_WAIFU#2844: Sorry, 2015
ElectricLizardFren#8378: Oh god that was when FNAF was released
AI_WAIFU#2844: https://arxiv.org/abs/1508.06576
StellaAthena#3530: oops wrong image
StellaAthena#3530: I pulled up a couple and thought I had grabbed a recent example
StellaAthena#3530: Weirdly all the examples I'm finding are older
StellaAthena#3530: At least, the best ones. The more recent stuff makes the problem a lot harder deliberately.
EricHallahan#1051: Style transfer is effectively a solved problem.
ElectricLizardFren#8378: Yikes
EricHallahan#1051: Researchers aren't very interested in it anymore because there isn't much to gain from it.
StellaAthena#3530: @ElectricLizardFren Post one image and one reference image
ElectricLizardFren#8378: Here?
StellaAthena#3530: yeah
ElectricLizardFren#8378: https://cdn.discordapp.com/attachments/729741769738158194/896773389660856420/Untitled116_20211010074424.png,https://cdn.discordapp.com/attachments/729741769738158194/896773390021582879/1200px-Mario_SSBU.png
ElectricLizardFren#8378: Y e a h
cfoster0#4356: I would say transfer of this kind isn't solved yet
ElectricLizardFren#8378: That's what I'm saying!
ElectricLizardFren#8378: Unless we use a different render
ElectricLizardFren#8378: Smash bros has a lot of different art styles
cfoster0#4356: I think we know how to go from the smash image to the flat image, but the other direction likely doesn't work very well
ElectricLizardFren#8378: True true
EricHallahan#1051: Also alpha channel.
𓅬 gabriel_syme 𓅬#3220: maybe something around nerf could eventually help
𓅬 gabriel_syme 𓅬#3220: adding texture at a later stage, I'm guessing there's been a couple of papers I forget
𓅬 gabriel_syme 𓅬#3220: I think there might have been one in the iclr dump
EricHallahan#1051: I am not too well versed in NeRF, but why would NeRF be relevant? I don't see implicit 3D representations as something that would help such a model to generalize.
flowpoint#7450: <https://www.youtube.com/watch?v=gfh-VCTwMw8>
bmk#1476: lmao yes that's the one
bmk#1476: thanks
cognomen#6297: vqgan+clip is surprisingly good at transfer
cognomen#6297: https://discord.com/channels/729741769192767510/730510538060071043/860359515940585483
BoneAmputee#8363: there is a lot of depth/texture already there though. easy mode
BoneAmputee#8363: https://cdn.discordapp.com/attachments/729741769738158194/896795713688727552/1633879658_.jpg
BoneAmputee#8363: it's not there yet :guilty:
BoneAmputee#8363: maybe pix2pix would be better
ElectricLizardFren#8378: Amazing
BoneAmputee#8363: or uhhhh, some gaugan-based thing
ElectricLizardFren#8378: This is god like
ElectricLizardFren#8378: Making me cry
ElectricLizardFren#8378: Truely we have peaked with AI
ElectricLizardFren#8378: Everyone wrap it up we can go home AI is over, maybe we can wait for the sequel
BoneAmputee#8363: I also didn't feel like trying more than one setting. it's kind of like an audio synth. gotta fiddle with the knobs a bit for a desired effect
ElectricLizardFren#8378: Ohh
alstroemeria313#1694: we should try diffusion gaugan at some point
alstroemeria313#1694: like why not
cognomen#6297: less than ideal https://cdn.discordapp.com/attachments/729741769738158194/896802370510553128/an_attempt.png
cognomen#6297: I tried lol
Awesome_Ruler_007#7922: Huh,
model is kinda overfitting during training, **and** the 10% validation set
Awesome_Ruler_007#7922: https://tenor.com/view/wazowski-mike-mike-sulivan-meme-monster-inc-gif-19634164
Awesome_Ruler_007#7922: like overall `val` loss is in 2e-2...? but at least loss for bounding box of 1 class is 7.34~ish
Awesome_Ruler_007#7922: bruh, how's that even possible
Awesome_Ruler_007#7922: because my model sure as hell can't be *that* good
Awesome_Ruler_007#7922: also, over a couple of epochs memory climbed a teeny-tiny `9062` to `9711` :thonk:
Louis#0144: smol
Awesome_Ruler_007#7922: dunno, but with doubling batch size I don't expect OOMs certainly
Awesome_Ruler_007#7922: it is mysterious
Awesome_Ruler_007#7922: especially if it can do the first 50 steps with no problem at the edge, then suddenly OOMs (with double batch size :sus:)
ElectricLizardFren#8378: Amazing
ElectricLizardFren#8378: I do like how it got the hair right
ElectricLizardFren#8378: What about something more detailed?
ElectricLizardFren#8378: Is the thing I drew too simple?
gollark#3909: My friend just sent me this. It sounds interesting. Thoughts? https://arxiv.org/abs/2110.03501
bmk#1476: my impression from a 30 second skim is it seems kinda bad
bmk#1476: so they pretrained translation (??) models and then fine tune them for theorem solving?
bmk#1476: this would be more ok if the focus of the paper was the transfer learning and they explored different pretraining and tasks
bmk#1476: but they don't
bmk#1476: all they do is they compare a number of different translation pairs and conclude that there's no effect
bmk#1476: also.. it looks like their data is synthetically generated anyways
bmk#1476: so the whole sample efficiency angle is kinda wat too
bmk#1476: "literally just generate more data lmao"
bmk#1476: would make more sense if the data was like expensive human created data
bmk#1476: overall this seems like a really weird and bizarre paper
Sphinx#2092: Lol English Romanian, what a random pair to use
Sphinx#2092: I wonder why not use the base one. The writing is a bit unclear though
StellaAthena#3530: I don’t see what language the math content is in. It might be interesting if you could pretrain a translator then train on English math and have it learn \_\_\_ math
StellaAthena#3530: But IDK if they’re doing that
gollark#3909: I think it's just expressions to integrate, not mathematical proofs or whatever with language in them.
𓅬 gabriel_syme 𓅬#3220: because I thought this was a task for 2d->3d->texture and style 🙂
alstroemeria313#1694: what's this https://www.cs.umd.edu/~tomg/projects/stable_gans/
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/897088235946266654/Screen_Shot_2021-10-11_at_4.png
alstroemeria313#1694: They have some sort of better attracted-to-saddle-points method than alternating/partially sign-flipped gradient descent?
alstroemeria313#1694: But that isn't like. As expensive as Newton's method or smth?
alstroemeria313#1694: I actually want to know for other stuff too, like constrained optimization
alstroemeria313#1694: Since you can do constrained optimization by forming the Lagrangian and doing gradient descent on the main objective and gradient ascent on the Lagrange multipliers to find a saddle point that corresponds to a minimum of the original problem satisfying the constraints.
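A toy sketch of that recipe on a made-up problem (minimize ||x||^2 subject to sum(x) = 1), doing simultaneous gradient descent on x and gradient ascent on the multiplier; nothing here comes from the paper above or from any GAN code.
```python
import torch

x = torch.zeros(3, requires_grad=True)
lam = torch.tensor(0.0, requires_grad=True)  # Lagrange multiplier
opt = torch.optim.SGD([x, lam], lr=1e-2)

for _ in range(2000):
    # Lagrangian: objective plus multiplier times constraint violation.
    L = x.pow(2).sum() + lam * (x.sum() - 1)
    opt.zero_grad()
    L.backward()
    lam.grad.neg_()  # flip the multiplier's gradient: descent on x, ascent on lam
    opt.step()
# x ends up near [1/3, 1/3, 1/3], the constrained minimum.
```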
alstroemeria313#1694: So their thing is some sort of damping?
Louis#0144: Should be yes
alstroemeria313#1694: gonna reread https://papers.nips.cc/paper/1987/file/a87ff679a2f3e71d9181a67b7542122c-Paper.pdf and compare
alstroemeria313#1694: Which just uses a quadratic penalty that tries to push the infeasibilities toward 0.
alstroemeria313#1694: And that damps the oscillations.
alstroemeria313#1694: I guess there are two questions I have, one, how well does this actually work for GANs, and two, can I apply it to Lagrangians
alstroemeria313#1694: So one advantage of this method over Platt and Barr seems to be that it uses the *actual* velocity of u.
alstroemeria313#1694: Because it is supposed to work with momentum and adaptive learning rate optimizers.
alstroemeria313#1694: So it just takes the actual step the optimizer performed and predicts forward with it.
nshepperd#2316: i wonder if this is better or worse if you use an EMA of the recent steps instead of just the last step?
alstroemeria313#1694: I noticed weird effects trying to use the Platt and Barr method with anything but SGD w/o momentum.
alstroemeria313#1694: Bc it assumed this in its construction.
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/897095083114004500/Screen_Shot_2021-10-11_at_5.15.32_AM.png
alstroemeria313#1694: Why not use prediction for both actually?
alstroemeria313#1694: anyway i am going to try this
nshepperd#2316: hmm... you could modify equation 7 to add that. and re-solve it to see what both does
nshepperd#2316: or yeah, just try it lol
alstroemeria313#1694: my mnist gan still collapsed with lr 1e-3
alstroemeria313#1694: seems to be training with 5e-4
alstroemeria313#1694: it sometimes randomly fails though
alstroemeria313#1694: 45 epochs https://cdn.discordapp.com/attachments/729741769738158194/897102768827953152/demo-114.png
nshepperd#2316: that's using the prediction for one?
alstroemeria313#1694: yes
alstroemeria313#1694: no EMA either
alstroemeria313#1694: just predicting G
alstroemeria313#1694: for D's loss
alstroemeria313#1694: as in the paper
|
nshepperd#2316: *nods*
alstroemeria313#1694: This is seriously clumsy to do detailed experiments with in PyTorch
alstroemeria313#1694: Can I just have jax.tree_map() pls
nshepperd#2316: eheh
nshepperd#2316: that is the one thing jax is nice for
nshepperd#2316: plugging the differential equation into mathematica made a monstrosity of an equation but it kind of looks like predict-both... won't obviously not work at least?
nshepperd#2316: like predict-G makes a thing with a negative exponential, and predict-both makes a thing with a negative exponential
alstroemeria313#1694: hm should i be using the same random z for both g and d steps?
alstroemeria313#1694: i wasn't and probably should be?
nshepperd#2316: i think last time i did gan things i found that using the same z was worse
alstroemeria313#1694: yeah
nshepperd#2316: but
nshepperd#2316: idk with this method
alstroemeria313#1694: well it's still collapsing at lr 3e-4 on a simple mnist gan
alstroemeria313#1694: idk this method
alstroemeria313#1694: it doesn't really resolve the fundamental issues with gans, it's just another stabilizing trick
alstroemeria313#1694: back to diffusion
alstroemeria313#1694: i'll get around to trying something like it for constrained optimization at some point
nshepperd#2316: ~
alstroemeria313#1694: Predicting with an EMAed step seems to work
|
alstroemeria313#1694: IDK if better
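The EMA variant mentioned above amounts to keeping a running average of those per-parameter steps and predicting with that instead of the single latest step; a small illustrative helper (the decay value is made up):
```
import torch

@torch.no_grad()
def update_step_ema(step_ema, last_step, decay=0.9):
    # step_ema starts as None; afterwards it tracks an EMA of the recent steps
    if step_ema is None:
        return [s.clone() for s in last_step]
    return [decay * e + (1.0 - decay) * s for e, s in zip(step_ema, last_step)]
```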
alstroemeria313#1694: Usually when this collapses it's because D became too strong?
alstroemeria313#1694: So that failure mode still exists.
alstroemeria313#1694: D not giving useful gradients to G.
alstroemeria313#1694: (I turned gradient penalty off for this)
nshepperd#2316: yeah i assume that's why most "fixes" tend to involve limiting D somehow
nshepperd#2316: this paper seems to sort of implicitly claim that their thing will prevent either D or G from getting too strong
nshepperd#2316: but doesn't actually explain how it helps G compensate for a too strong D
alstroemeria313#1694: i started predicting both (with no EMA) and the run is working at lr 1e-3
alstroemeria313#1694: need to rerun it w/ a few random seeds
alstroemeria313#1694: It's mode collapsing though.
alstroemeria313#1694: Mostly 1s
alstroemeria313#1694: Maybe it does if you predict both.
alstroemeria313#1694: idk
alstroemeria313#1694: Maybe you still need gradient penalty.
nshepperd#2316: oh, mode collapse :/
alstroemeria313#1694: Ahah
alstroemeria313#1694: I'm surprised it ran *at all* at lr 1e-3
nshepperd#2316: eheh
nshepperd#2316: well, maybe this means that predict both would be a good idea for lagrange
|
alstroemeria313#1694: (i also kind of think the "too strong D" framing hides some critical stuff)
alstroemeria313#1694: (The crux of the problem is that D doesn't have paths to follow to get from fakes to reals with gradients that don't vanish or explode.)
alstroemeria313#1694: (These paths often explode as a result of D learning sharp enough class boundaries)
nshepperd#2316: ahh
nshepperd#2316: yes
alstroemeria313#1694: Thus gradient penalty methods.
alstroemeria313#1694: the problem, now that i think about it, is that the infeasibilities aren't parameterized
alstroemeria313#1694: They are just functions of the main parameters and the... oh
alstroemeria313#1694: And the data.
alstroemeria313#1694: lol https://cdn.discordapp.com/attachments/729741769738158194/897109794207453184/demo-115.png
nshepperd#2316: ahah
alstroemeria313#1694: ...Which means they *are* parameterized and they just share parameters.
alstroemeria313#1694: And normally we just do simultaneous gradient descent/ascent.
nshepperd#2316: you mean it involves alternating updates to the same parameters?
nshepperd#2316: i forget how that worked
alstroemeria313#1694: it's simultaneous
nshepperd#2316: oh
alstroemeria313#1694: ...Wait.
alstroemeria313#1694: I forgot how this worked lol.
alstroemeria313#1694: Every constraint has one parameter.
|
alstroemeria313#1694: The Lagrange multiplier.
alstroemeria313#1694: i.e. the thing we actually do gradient ascent *on*.
alstroemeria313#1694: (And it's simultaneous)
alstroemeria313#1694: So yeah. We can predict the main parameters when updating the Lagrange multipliers and predict the Lagrange multipliers when updating the main parameters.
nshepperd#2316: ahh
alstroemeria313#1694: This is going to slow things down right?
alstroemeria313#1694: Bc we need to switch to alternating and then do two evaluations of the main parameters per main parameter optimizer step.
nshepperd#2316: it's two backward passes instead of one?
alstroemeria313#1694: But might work
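A rough sketch of what predict-both could look like in the Lagrangian case: minimize f(x) subject to c(x) = 0 via L(x, lam) = f(x) + lam @ c(x), descending on x and ascending on lam, each against a copy of the other extrapolated by its last step. Plain SGD/SGA for clarity, with c returning a vector of constraint values matched to lam; everything here is illustrative rather than a worked-out method.
```
import torch

def lagrangian(f, c, x, lam):
    return f(x) + lam @ c(x)

def solve(f, c, x, lam, lr_x=1e-2, lr_lam=1e-2, steps=1000):
    dx = torch.zeros_like(x)      # last step taken on x
    dlam = torch.zeros_like(lam)  # last step taken on lam
    for _ in range(steps):
        # descent on x against a predicted lam
        x = x.detach().requires_grad_()
        gx, = torch.autograd.grad(lagrangian(f, c, x, (lam + dlam).detach()), x)
        x_new = x.detach() - lr_x * gx
        dx = x_new - x.detach()

        # ascent on lam against a predicted x
        lam = lam.detach().requires_grad_()
        glam, = torch.autograd.grad(lagrangian(f, c, (x_new + dx).detach(), lam), lam)
        lam_new = lam.detach() + lr_lam * glam
        dlam = lam_new - lam.detach()

        x, lam = x_new, lam_new
    return x, lam
```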
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/897111976239571005/demo-116.png
alstroemeria313#1694: Wow.
alstroemeria313#1694: The GAN *converged*.
alstroemeria313#1694: ```
350 0.6679608821868896 1.381277322769165 0.00014080807159189135
400 0.6931367516517639 1.3859772682189941 8.23522077553207e-06
450 0.6985084414482117 1.3863229751586914 0.0
500 0.6946711540222168 1.3862966299057007 0.0
550 0.6933348774909973 1.3862944841384888 0.0
600 0.6931635737419128 1.3862943649291992 0.0
```
|
alstroemeria313#1694: This is with zero-centered GP
nshepperd#2316: ahah
alstroemeria313#1694: And predicting both.
nshepperd#2316: i hate that this is something you always have to be surprised by
nshepperd#2316: "it actually worked"
alstroemeria313#1694: So the GP was able to force the discriminator's gradients to vanish and then it settled into the Nash equilibrium.
alstroemeria313#1694: Those outputs look bad though
alstroemeria313#1694: That is not an optimal equilibrium
alstroemeria313#1694: Gonna try 1-centered GP.
alstroemeria313#1694: Which forces them to neither vanish nor explode.
alstroemeria313#1694: (Normally GANs don't converge even with zero-centered GP of that weight)
alstroemeria313#1694: Nah the 1-centered GP died somehow.
alstroemeria313#1694: Like how is that even possible.
nshepperd#2316: it died?
alstroemeria313#1694: Yeah, produced an all black output and never recovered
alstroemeria313#1694: I guess enforcing the GP on straight lines between fakes and reals wasn't good enough
alstroemeria313#1694: Bc the actual path it would need to take vanished or exploded
nshepperd#2316: huh
alstroemeria313#1694: oh, GANs
alstroemeria313#1694: They are such a hack
|
nshepperd#2316: ^^;;
nshepperd#2316: i hope i never have to train a GAN again lol
alstroemeria313#1694: Like, the way it actually enforces it on straight lines is by sampling random points on random straight lines
nshepperd#2316: lerp between a random real and a random fake, right?
alstroemeria313#1694: So probably D learned a sharp boundary between all black and having any nonzero values
alstroemeria313#1694: Yeah
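The penalty in question, as a sketch: sample random points on the lines between reals and fakes and push the gradient norm of D there toward the chosen center (0 or 1). Assumes NCHW image batches and a discriminator returning one score per sample; the weight and where you add it to the D loss are up to you.
```
import torch

def gradient_penalty(D, reals, fakes, center=0.0):
    # random points on straight lines between fake/real pairs
    t = torch.rand(reals.shape[0], 1, 1, 1, device=reals.device)
    x = torch.lerp(fakes, reals, t).requires_grad_()
    grad, = torch.autograd.grad(D(x).sum(), x, create_graph=True)
    norms = grad.flatten(1).norm(dim=1)
    return ((norms - center) ** 2).mean()
```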
kurumuz#5695: i hate GANs
kurumuz#5695: tbh
alstroemeria313#1694: If the boundary happens there then you probably won't sample near it enough to make a path from fakes to reals
alstroemeria313#1694: I guess
nshepperd#2316: (what you need is a adversarial^2 net, that seeks out the regions where the gradient is furthest from 1 so you can apply the gradient penalty there)
nshepperd#2316: (probably a bad idea)
alstroemeria313#1694: Ahah
alstroemeria313#1694: friendship ended with GANs, now diffusion is my best friend
nshepperd#2316: 100%
nshepperd#2316: eheh
Kia#2550: Why not both :thinkies:
alstroemeria313#1694: idk how to make an adversarial diffusion model, the one time i tried it was bad
cfoster0#4356: There was a paper I think we missed at ICLR that did this
alstroemeria313#1694: Ohh?
|
cfoster0#4356: One moment
Kia#2550: Ow:surprise:
Kia#2550: Oww
alstroemeria313#1694: No Alias-Free GAN yet
alstroemeria313#1694: But it's still 6:43 AM Pacific time.
cfoster0#4356: https://openreview.net/forum?id=JprM0p-q0Co
alstroemeria313#1694: So they aren't at work
alstroemeria313#1694: ty :blobcutehappy:
Kia#2550: Probably 12am
alstroemeria313#1694: Probably after 9 AM
Kia#2550: True
alstroemeria313#1694: (Nvidia is Pacific time right)
alstroemeria313#1694: Yeah it's Santa Clara.
alstroemeria313#1694: > Through extensive evaluations, we show that denoising diffusion GANs obtain sample quality and diversity competitive with original diffusion models while being 2000× faster on the CIFAR-10 dataset.
alstroemeria313#1694: Faster how
alstroemeria313#1694: I'm training ones in only like 50 epochs now
alstroemeria313#1694: MNIST diffusion often trains faster than an MNIST GAN
alstroemeria313#1694: Wait do they mean sampling
cfoster0#4356: Yes
alstroemeria313#1694: Oh
|
cfoster0#4356: They do 4 steps or something
nshepperd#2316: I just realized why my 4x diffusion upscaler wasn't working. I changed the demo grid to scale down 4x. but forgot to actually change the training loop. so it was still training 2x -_-
alstroemeria313#1694: ^^;;
Kia#2550: Owww
Kia#2550: Yey
alstroemeria313#1694: We can apparently get to 4 steps with progressive distillation, without using adversarial training
cfoster0#4356: I think what it shows is you should be able to get down to those very low step counts with a single training stage
alstroemeria313#1694: *nods*
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/897118648194449428/Screen_Shot_2021-10-11_at_6.49.11_AM.png
Kia#2550: Interesting
alstroemeria313#1694: @nshepperd so this setup has GAN-type small latents!
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/897119281140084746/Screen_Shot_2021-10-11_at_6.51.42_AM.png
alstroemeria313#1694: It uses them as a condition
nshepperd#2316: oooh?
nshepperd#2316: where does z come from? is it just random normal?
alstroemeria313#1694: I think so
alstroemeria313#1694: They tried taking it out and their sample diversity suffered
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/897120163512594482/Screen_Shot_2021-10-11_at_6.55.12_AM.png
alstroemeria313#1694: This is complicated, non-adversarial diffusion models have perfectly fine diversity without this
alstroemeria313#1694: I guess you need the conditional normalizations anyway for stuff like CLIP conditioning though.
|
alstroemeria313#1694: Is this thing still, like, stable
nshepperd#2316: ok so. take a real. noise it (x_t-1) then noise it further (x_t). then feed that through a diffusion model conditioned on random z to get pred. noise that pred conditional on x_t and then... apply discriminator to the two real/fake noised images?
nshepperd#2316: discriminator is also conditional on x_t
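Pseudocode for that setup as described in the messages above, with `q_sample` / `q_step` / `posterior_sample` as placeholder noising functions (not the paper's actual API) and a 128-dim latent picked arbitrarily:
```
import torch

def diffusion_gan_step(G, D, x0, t, q_sample, q_step, posterior_sample):
    z = torch.randn(x0.shape[0], 128, device=x0.device)  # GAN-style latent
    x_tm1_real = q_sample(x0, t - 1)                # noise a real to x_{t-1}
    x_t = q_step(x_tm1_real, t)                     # noise it one step further to x_t
    x0_pred = G(x_t, t, z)                          # generator's prediction from x_t and z
    x_tm1_fake = posterior_sample(x0_pred, x_t, t)  # re-noise the prediction conditional on x_t
    d_real = D(x_tm1_real, x_t, t)                  # discriminator is also conditioned on x_t
    d_fake = D(x_tm1_fake, x_t, t)
    return d_real, d_fake  # feed these into whatever GAN losses you use
```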
alstroemeria313#1694: Does it train stably and you can do validation loss and does it work on small datasets etc
nshepperd#2316: yeah i'm not sure it's worth it just to get z if you need to bring all the gan tricks back
nshepperd#2316: the inputs to the gan are noised which i think is supposed to be good for stability
nshepperd#2316: but i doubt that's enough by itself
𓅬 gabriel_syme 𓅬#3220: woot that is coming today?
alstroemeria313#1694: yep
𓅬 gabriel_syme 𓅬#3220: that's going to be intense, wonder if it'll be better than diffusion
nshepperd#2316: it's already the 12th here. ~~i have bad news from the future~~
𓅬 gabriel_syme 𓅬#3220: almost here as well :sadge:
𓅬 gabriel_syme 𓅬#3220: I have to go to sleep again, just when the server is awake
alstroemeria313#1694: The thing is. It's going to have all of the usual GAN training instabilities still.
𓅬 gabriel_syme 𓅬#3220: I was thinking CLIP +
𓅬 gabriel_syme 𓅬#3220: not sure it's possible?
alstroemeria313#1694: It probably is
alstroemeria313#1694: Gonna try it
alstroemeria313#1694: (I was looking at the MetFaces dataset yesterday and noticing the modes in it that were dropped from the Nvidia trained MetFaces StyleGAN2)
𓅬 gabriel_syme 𓅬#3220: so we have alias free and soon hopefully improved vqgan
|
𓅬 gabriel_syme 𓅬#3220: that's nice
𓅬 gabriel_syme 𓅬#3220: also reminds me, I should try the smaller codebook dim thing on a new vqgan huh
nshepperd#2316: just want to steal alias free gan's ideas for diffusion tbh
alstroemeria313#1694: what are we going to do for edge padding
alstroemeria313#1694: AFG generates at a higher resolution and then does padding-less convolutions
alstroemeria313#1694: But we use U-Nets
alstroemeria313#1694: We could condition diffusion on a Fourier Features coordinate grid fine though
nshepperd#2316: hmm idk
alstroemeria313#1694: AFG uses learnable layers to derive rotation and translation parameters for the grid from the latent
alstroemeria313#1694: Which I think may be hard with diffusion
nshepperd#2316: reflect pad the noisy input image, then do padding-less convolutions throughout the network
alstroemeria313#1694: ahh
alstroemeria313#1694: wait won't there be problems with the skip connections
alstroemeria313#1694: bc the decoder stages will be smaller
nshepperd#2316: the skip connections will need to trim off the edges
alstroemeria313#1694: ah
nshepperd#2316: instead of just being Identity
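Something like this for the trimmed skips, assuming NCHW tensors: just a center crop of the encoder features down to the decoder's (smaller) spatial size before concatenation. Illustrative helpers, not from any particular repo.
```
import torch

def crop_to_match(skip, target):
    # center-crop skip so its H and W match target's
    dh = skip.shape[-2] - target.shape[-2]
    dw = skip.shape[-1] - target.shape[-1]
    return skip[..., dh // 2 : skip.shape[-2] - (dh - dh // 2),
                     dw // 2 : skip.shape[-1] - (dw - dw // 2)]

def fuse_skip(skip, decoder_feat):
    return torch.cat([crop_to_match(skip, decoder_feat), decoder_feat], dim=1)
```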
StellaAthena#3530: @bmk https://twitter.com/lorenlugosch/status/1447586939784282115?s=20
StellaAthena#3530: (This tweet links to my discussion of your criticisms of token-based normalization)
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/897146530371358720/Screen_Shot_2021-10-11_at_11.39.58_AM.png
|
bmk#1476: based and antitokenizationpilled
EricHallahan#1051: My one problem with tokenization is that it cannot adapt to context.
ersatz#0001: more fuel to the arms race narrative: https://www.reuters.com/technology/united-states-has-lost-ai-battle-china-pentagons-ex-software-chief-says-2021-10-11/
Awesome_Ruler_007#7922: exactly the stuff we need
Awesome_Ruler_007#7922: AI funding looking juicy 🤤
TastyBucketOfRice#8796: The DeepSpeed+Megatron project was trained on the pile:
https://www.microsoft.com/en-us/research/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/
StellaAthena#3530: Discussed in #research
StellaAthena#3530: If anyone wants to do a couple hours of work and get a paper out of it, I could use help with a thing. The only skills required are the ability to read English and experience with WandB hyperparameter sweep agents.
Alternatively if papers don’t appeal to you, it’ll allow me to start training a suit of models that open source NLP researchers will find exceptionally helpful 🙂
bmk#1476: just use pyfra instead /s
StellaAthena#3530: You keep telling me pyfra isn’t an experiment manager
bmk#1476: if this is for the distillation thing, have you figured the batch and lr stuff out, or is this the stuff you're using to figure that out
StellaAthena#3530: This is the scaling laws thing. I need someone to go look through the scaling laws papers and the GPT-3 paper and define a parameter range to sweep for models of each size from e^10 through e^21
bmk#1476: and the purpose of this is to figure out the optimal batch/lr/etc?
StellaAthena#3530: Yup
bmk#1476: makes sense
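In sweep terms that works out to something like the config below, one sweep per model size; the parameter names and ranges are placeholders, and `train` would be whatever entry point launches a run.
```
import wandb

sweep_config = {
    "method": "grid",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "lr": {"values": [1e-5, 3e-5, 1e-4, 3e-4, 1e-3]},
        "batch_size": {"values": [32, 64, 128, 256]},
    },
}

sweep_id = wandb.sweep(sweep_config, project="scaling-laws")
# wandb.agent(sweep_id, function=train)
```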
bernaise#6161: the scaling laws paper and the GPT-3 paper are "Scaling Laws for Autoregressive Generative Modeling" and "Language Models are Few-Shot Learners", respectively?
|
StellaAthena#3530: There's also "scaling laws for neural language models" which focuses on language modeling
StellaAthena#3530: Those three should cover the relevant info
Orz#3023: Did you guys know about MT-NLG before it was released?
StellaAthena#3530: If the answer were yes, this probably isn't something we would talk about publicly.
Orz#3023: yeah
makes sense
ersatz#0001: *really*?
ersatz#0001: I hope this is a joke lol
Awesome_Ruler_007#7922: well, ww1 and ww2 have been the biggest stimulus to scientific research - with plenty of evidence supporting this
Awesome_Ruler_007#7922: ideally, an indefinite stalemate is the best path for humanity to have an incentive to progress science (IMO) - though we kinda have entered one with nukes and mutually assured destruction 🤔
Awesome_Ruler_007#7922: Unless the western world feels Chinese research has overtaken western science, funding towards science is going to be decent at best - especially AI, which could *really* do with some billions
gollark#3909: I think the fear is that it will spur lots of research without much regard for safety.
Awesome_Ruler_007#7922: The tradeoff I am talking about is to not divert resources from vital programs to CS/AI but rather to encourage, incentivise and re-distribute resources to better recognize the importance of AI research
Awesome_Ruler_007#7922: that's something to be regulated by governments - stopping science just for the sake of safety is *not* practical
gollark#3909: Maybe, but historically governments have been really terrible at this stuff.
Awesome_Ruler_007#7922: remove goverments, return to monke :think:
gollark#3909: I see.
Awesome_Ruler_007#7922: glad to have such early backing! would you like to join our facebook group? 🤗
gollark#3909: I don't use Facebook unironically, I'm afraid.
Awesome_Ruler_007#7922: very smart
|
oreo#2740: @StellaAthena I've read the scaling laws papers and have experience with wandb sweeps. still need someone like that?
StellaAthena#3530: Sure!
Kia#2550: Did the new Nvidia billion param language model cite the pile and eval harness?
Kia#2550: Or is it from MS:thonk:
EricHallahan#1051: both :bigbrain:
just some guy on the internet#7637: Based on the release of 530B parameter Megatron what year do you think human level AGI will be made and why ?
Kia#2550: 2030-2040 :v just wait
Kia#2550: If we can get out of this Chip shortage it would be better
Aviv#2135: looking at the results of the 530B model, it doesn’t look that much better than GPT3, no?
Kia#2550: *┐( ∵ )┌*
Kharr#7888: scaling laws suggest it needs to be 10x to see a decent improvement
just some guy on the internet#7637: Would you say nothing crazy will happen before 2025 so I can avoid AI news until then ?
Kharr#7888: The billions of $ being pumped into improving AI suggest otherwise
Kia#2550: Probably yeah...Also making Multimodal architecture and training it would take to long
Kia#2550: And expensive
Kia#2550: Take a look at CLIP, no one's gonna invest that much cash into it again just to tell what different images are or are not
Kharr#7888: CLIP has dramatically improved guided generation, video and image search, etc. It has opened up a whole new area of research
Kia#2550: (Training one in different data would be inefficient and probably need a whole lot of compute and resources)
Kia#2550: Ye
Kia#2550: If we want something bigger, efficient and really effective, we probably want to invest a whole lot of cash and research
|
Kia#2550: And it wouldn't come easy and definitely not in 2025
Kia#2550: Im so sick:goose10:
AI_WAIFU#2844: no
chilli#5665: huh, I didn't realize that Stylegan3 was done in PyTorch
chilli#5665: I guess it's about time for Nvidia
kindiana#1016: what did you expect?
chilli#5665: well, stylegan2 was done in tensorflow
kindiana#1016: oh
kindiana#1016: haha
ilovescience#3282: yeah i was thinking the same thing, but apparently stylegan2-ada was done in pytorch too so this isn't the first time they are working in pytorch
chilli#5665: well, nvidia as a company has generally switched from Tensorflow to Pytorch
EricHallahan#1051: I think that is why they spent so much time perfecting StyleGAN2-ADA.
EricHallahan#1051: Because they knew that StyleGAN3 was going to be PyTorch no matter what.
ari#9020: While StyleGAN2 was originally implemented in TensorFlow, the port to PyTorch happened after StyleGAN2-ADA but before StyleGAN3: https://github.com/NVlabs/stylegan2-ada-pytorch
ersatz#0001: I’ve given up on trying to predict the speed of progress of the field at this point lol
ersatz#0001: I thought we would be at 1T params by the mid 2020s but no one seems to think it will take that long anymore
minhaaj#4955: EleutherAI is mentioned in the latest State of AI report. Great achievement. stateof.ai
minhaaj#4955: I am new by the way and would love to help out any way possible. thats me just in case https://www.linkedin.com/in/minhaaj/
minhaaj#4955: 🙂
StellaAthena#3530: What skills do you have? There’s lots to do, it’s just a matter of matching you to a task
|
bernaise#6161: @StellaAthena is this on the right track for the scaling laws sweep? https://cdn.discordapp.com/attachments/729741769738158194/897479753194438666/unknown.png
StellaAthena#3530: Yes
Orz#3023: My skill is that I enjoy anything tech released after 2010
Is that even a skill?
:sadge:
genetyx8#7543: even COBOL?
Orz#3023: guess I've gotta be a bit more specific
Ravna#1831: Most programming languages are not "tech". They share the same good old imperative PL semantics. Most differences are superficial and can be summarized as design choices in a relatively small design space.
Ravna#1831: PL designers are more like UI artists instead of engineering people.
Ravna#1831: :berk:
genetyx8#7543: I'mma have to disagree on this one. There's plenty of weird languages around. Most of them trying to achieve a design that makes modular programs easy to write and maintain
Louis#0144: o man
Ravna#1831: Yes, but they are not the targets of my criticism.
Ravna#1831: I'm more like making fun of those people who chant "pythonic way" as if there's anything important about that
genetyx8#7543: I see
raccoonrebels#1792: I'm currently part of an academic partnership with 4 of the big-5 publishers of scientific articles (Elsevier, Wiley, Springer, and Taylor and Francis) that lets us do bulk text analysis of scientific articles for **non-commercial** purposes. We have a stack of about 14 million **full-text** docs (i.e., the whole shebang, the refs, the figs, the tables, the abstract, and the body) from across 14 thousand journals (growing at about 8k a day). I think the next closest comparable academic dataset is the PubMed Central bulk text library, which only has 2.75 million articles.
Anyway, I thought our non-commercial mission might be a good fit for the open-source philosophy of the EleutherAI group. Would anyone be interested in using our data for any of your projects?
bmk#1476: nice, where can I download the data
bmk#1476: would be nice to add to the next Pile
|
raccoonrebels#1792: That's the tricky part, you can't just download it. You have to go through this somewhat complex registration process - this is insisted upon by the publishers to make sure the data isn't being used for commercial purposes
bmk#1476: that sucks
raccoonrebels#1792: It's a bunch of annoying hoops I know
raccoonrebels#1792: but I still think its a great resource
bmk#1476: it doesn't sound very open to me
raccoonrebels#1792: correct. How can I put this, it is controlled-access data that can only be used to create open-source products
bmk#1476: I guess it is better than the alternative, but the alternative sucks thanks to Elsevier et al in the first place
raccoonrebels#1792: exactly!
raccoonrebels#1792: Its amazing they let us get even this far to be honest
raccoonrebels#1792: It's taken years of negotiation
bmk#1476: i guess id be down for like fine tuning a model on it
bmk#1476: so, how involved is this registration process
raccoonrebels#1792: https://xdd.wisc.edu/adept/ here's the starting browser site
raccoonrebels#1792: the registration is pretty much easy, I just approve you
raccoonrebels#1792: but its the pipeline that's kinda weird
raccoonrebels#1792: https://github.com/ngds/xdd-docker-recipe/blob/master/README.md#objective
bmk#1476: oh man
bmk#1476: I'll uh look into it later
raccoonrebels#1792: Yeah its daunting. I won't blame you if it doesn't look like a good fit.
bmk#1476: btw I looked at the thing and I couldn't find the registration thing
|
raccoonrebels#1792: I just figured it was worth a shot letting people know its out there.
raccoonrebels#1792: Oh, I'll change that button to say Login/Register not just Login
raccoonrebels#1792: Honestly the site is a little jenk. We built it on a shoestring budget, but we're trying to make the hoops as navigable as possible... jury out on whether we succeeded or not
bmk#1476: ok I registered
bmk#1476: I'm a bit busy recently so I probably won't have time to actually do anything with the data for a while though
raccoonrebels#1792: Sure, the xdd staff will have to approve your registration anyway, so there's time
Deleted User#0000: Hey guys, I’m 2nd year CS student, planning to make a presentation on AI use in drug discovery. Found this one book on amazon which isn’t uploaded anywhere: https://www.amazon.com/Artificial-Intelligence-Drug-Discovery-Beginners/dp/B08YDTLMZT/ref=sr_1_2?dchild=1&keywords=Artificial+Intelligence+drug+discovery&qid=1634064325&s=books&sr=1-2 would anyone in US want to group buy it? You’d be expected to receive it and make pdf out of it, so I can read it and we can upload it somewhere to help if anyone would be searching
Deleted User#0000: I’m in europe so don’t want to ship to myself as that will take too long
!!Puffy Bird!!#7496: random idea, what if someone tried to make a VQGAN type thing for audio
!!Puffy Bird!!#7496: something that reconstructs audio to enable easier modification
!!Puffy Bird!!#7496: using TransGAN or something
EricHallahan#1051: We've been vector quantizing audio for decades.
!!Puffy Bird!!#7496: I mean something thats not a waveform
!!Puffy Bird!!#7496: I mean audio2latentspace
EricHallahan#1051: I posted this code a month ago that takes in a Codec2 binary and transforms each frame into an embedding.
alstroemeria313#1694: I tried once and it was kinda bad but I also didn’t try very hard
alstroemeria313#1694: Well, it was quite bad
wav#9464: this is pretty much what jukebox is
wav#9464: a couple VQGANs with transformers that upsample you up the latent hierarchy until you get a full waveform
inox#5400: soundstream is hierarchical VQ-VAE for audio <https://arxiv.org/abs/2107.03312>, it's in the citations for lucidrain's vector quantize repo <https://github.com/lucidrains/vector-quantize-pytorch>
|
EricHallahan#1051: Did anyone ever reproduce their results?
Awesome_Ruler_007#7922: I would have thought an autoencoder approach to audio + transformers would have yielded good results.
Maybe you are looking to research some new architecture/method? 😁
!!Puffy Bird!!#7496: yup
!!Puffy Bird!!#7496: I am
Yourself#9837: hey all, any robot learning / RL efforts in this group?
kurumuz#5695: We're training stylegan3 on 4xA100 on anime portraits dataset by gwern. it learned structure first
kurumuz#5695: going crazy fast
kurumuz#5695: just been an hour
Louis#0144: Based and GAN pilled
kurumuz#5695: im doubling it to 8xA100 soon
kurumuz#5695: this was step 30 https://cdn.discordapp.com/attachments/729741769738158194/897827252941770772/fakes_40_6e62746bd698cb537e01.png
kurumuz#5695: https://cdn.discordapp.com/attachments/729741769738158194/897827271677722634/fakes_90_11c357258c867f63c14c-2.png
kurumuz#5695: step 90
StellaAthena#3530: There’s some interest, but nobody has sat down and made it happen. If you have ideas I would be happy to talk about them
kurumuz#5695: @alstroemeria313 might be interested
SecondMover#8029: That gives a new meaning to "moe blob"
Louis#0144: Honestly I'm unimpressed by stylegan3
Louis#0144: lol
kurumuz#5695: but slime anime girls!
|
kurumuz#5695: bet i could already sell those as NFT
Louis#0144: Lmaooo
kurumuz#5695: seriously, why are you unimpressed tho
kurumuz#5695: its doing awfully good at anime rn
m_wAL99#1923: fixed seed sample?
sg3 has a good visualization util, maybe can try
Vrööm#4253: Anyone here has heard of ddln.ai or has some info of them? I've been contacted by a recruiter and I wanna make sure they're legit before continuing.
Vrööm#4253: https://cdn.discordapp.com/attachments/729741769738158194/897836631325818950/Screenshot_20211013-154920_LinkedIn.jpg
EricHallahan#1051: The company is legitimate, because they made a pretty big splash presenting PilotEye at AirVenture this year. I cannot say anything about the legitimacy of the recruiter, but if you trust the platform overall then I would trust the overall message as legitimate.
Vrööm#4253: Ayyyyy thank you man 👍
Vrööm#4253: The platform in question is LinkedIn so I suppose I trust it haha
triggerhappygandi#0001: how do you know this trivia
EricHallahan#1051: I watched a video about it a few months ago.
EricHallahan#1051: lol
triggerhappygandi#0001: small world lol
Yourself#9837: there are a lot of interesting papers recently using pretrained LMs for representation learning that transfer *really* well zero-shot to robotics datasets. a few interesting papers in CoRL as well as a few ICLR submissions. I'm more interested in transfer the other way: can embodiment + learning physical priors transfer knowledge back to language understanding? what are the physical understanding gaps of current LMs, and how can finetuning in dynamic environments address these gaps?
ethan caballero#6044: y'all saw this? Feels like "huggingface of efficiency pareto frontiers" to me:
https://twitter.com/ethancaballero/status/1448351525198307346
StellaAthena#3530: Are you familiar with the phrase “multimodal grounding” or “grounded language modeling”? There’s some existing work on mixing info from language and from the real world, but the stuff I’ve seen has mostly been about image datasets, not robotics.
Yourself#9837: to be honest i am not familiar with LM that much, my background is in multitask RL and robot learning; interested in learning more though. IIUC there's been a lot of interest in vision, language, video, audio as common modalities - but i think action as a modality is fairly interesting, since a lot of inductive biases we have are grounded in the laws of physics
|
chilli#5665: damn that's pretty great
Awesome_Ruler_007#7922: stumbled upon this vid https://www.youtube.com/watch?v=m3vIEKWrP9Q
Awesome_Ruler_007#7922: not to say anything against tom scott (after all, he is not a comp sci genius) but I am pretty sure he used GPT2 wrongly - because it was fine-tuned for DnD story generation etc.?
Awesome_Ruler_007#7922: ~~I find it offensive when people claim our overfitted parrots can't complete basic tasks~~
bmk#1476: i havent watched the video and i dont feel like watching it because i can already guess what he's going to say, but i think this is a really interesting phenomenon: influencers who are significantly more knowledgeable than normies, but not really expert tier
bmk#1476: i mean, tom scott was a programmer like ages ago
bmk#1476: so his content about software related stuff is way better than random normies who just read off press releases
Awesome_Ruler_007#7922: yea, but still I have some respect for him as a content creator
Awesome_Ruler_007#7922: AI was pretty new then too, so I guess we can cut him some slack
Awesome_Ruler_007#7922: general buzz didn't really go around until GPT3 where every non-tech was on the waitlist
EricHallahan#1051: *Schmidhuber is visibly angry.*
Awesome_Ruler_007#7922: anywho, how do we do it properly? how do I frame the prompt?
someKindaBean#8471: ugh, i don't like watching videos of things, they feel like the least efficient way to take in information
someKindaBean#8471: but the dataset he talked about is cool. https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html
bmk#1476: lol
bmk#1476: i knew it would be winograd schemas
someKindaBean#8471: it's also unclear from skipping through the video if he actually tested them or just is relying on second hand knowledge that GPT-2 can't answer the majority of them
someKindaBean#8471: He shows a clip of AI Dungeon where he tries to put in a winograd-esque prompt when talking about GPT-2, so idk
Awesome_Ruler_007#7922: he apparently did. on some public DnD GPT2 access
bmk#1476: i chuckle every time someone speaks about winograd in hushed tones, and meanwhile to me it's just another number go brrr on the big list of tasks i can like throw at my model if i feel like it
|
someKindaBean#8471: why would one speak of winograd in hushed tones?
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/897952537548840980/unknown.png
someKindaBean#8471: also, isn't it part of GLUE tasks?
bmk#1476: idk it gets the whole "one weird trick to make a task that robots cant solve!!1" crowd going
Dromarion#3383: I watch a lot of YouTube videos at 2x speed because everyone talks so slow lol
bmk#1476: idk why it's winograd that always gets the spotlight either
someKindaBean#8471: thanks for sharing it though, it motivated me to read about winograd
bmk#1476: models also suck at stuff like piqa or lambada
someKindaBean#8471: is there a good review paper that covers the majority of the fun tasks that exist?
bmk#1476: or hellaswag
Awesome_Ruler_007#7922: GPT3 can do it apparently
```
Q: in the sentence, "The trophy would not fit in the brown suitcase because it was too big ", what does "it" refer to?
A:
A: trophy
```
Awesome_Ruler_007#7922: big F U to scott
Awesome_Ruler_007#7922: I will bet 5 cents GPT2 can do it too
bmk#1476: man I wonder why hellaswag isn't more popular given its attention attracting name
bmk#1476: just scroll down the list of tasks in eval harness lol
|
someKindaBean#8471: lol, fine. but then i have to look each one up
Awesome_Ruler_007#7922: is it? lol
bmk#1476: oh youll be at it for a while either way
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/897953534836883466/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/897953556852768828/unknown.png
someKindaBean#8471: https://aclanthology.org/W18-5446.pdf yep - listed as WNLI
someKindaBean#8471: ffs, how did you compile this list?
bmk#1476: man sometimes I get the different Winograd derivative tasks mixed up
bmk#1476: there are like 3 different ones in eval harness
bmk#1476: WSC, WSC273, WNLI
bmk#1476: plus Winogrande ofc
Awesome_Ruler_007#7922: ```
Q: In the sentence, "A suitcase fell on the dog, and it was taken to the hospital", what does "it" refer to?
A: suitcase
```
maybe its overfitted after all
Awesome_Ruler_007#7922: and scott's right 😠
someKindaBean#8471: WNLI is true/false hypothesis questions, but there's different versions of winograd schemas
bmk#1476: no need to try single cases by hand
bmk#1476: just use eval harness
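Something along these lines from the command line; the exact flags vary between harness versions, so treat this as an assumption rather than a reference invocation:
```
python main.py --model gpt2 --model_args pretrained=gpt2-xl --tasks winogrande --num_fewshot 0
```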
|
someKindaBean#8471: here's a big ol' survey of winograd schema datasets and approaches: https://arxiv.org/pdf/2004.13831.pdf
Awesome_Ruler_007#7922: maybe for a more comprehensive test, but if it can't perform well on the test set then... 🤷♂️ ?
bmk#1476: I mean
bmk#1476: you have a sample size of 2 right now
bmk#1476: here https://cdn.discordapp.com/attachments/729741769738158194/897954473236922428/unknown.png
bmk#1476: enjoy
Awesome_Ruler_007#7922: huh
bmk#1476: 63.5% accuracy
Awesome_Ruler_007#7922: still not that good, but didn't expect more ig
someKindaBean#8471: lol, nice. how does that benchmark against other models?
bmk#1476: idk I'm too lazy to go run more lol
Awesome_Ruler_007#7922: how is the task framed BTW? seq2seq?
someKindaBean#8471: lol, i'm not curious enough to do it myself
Awesome_Ruler_007#7922: ahh, SOTA is T5 @ 83%
Awesome_Ruler_007#7922: doesn't seem too bad for GPT3 then
someKindaBean#8471: T5 paper says 93.8% on WSC
bmk#1476: lambada is still my favorite rule of thumb task
ilovescience#3282: lol same
Awesome_Ruler_007#7922: my bad lol, I was seeing the small models
Awesome_Ruler_007#7922: huh. Neo actually does "better" than GPT3 and ...... makes sense? :ultrathonk:
|
Awesome_Ruler_007#7922: ```
“The suitcase fell on the animal, so it had to be taken to the hopsital”, here "it" refers to:
[…] the suitcase, which was very heavy, had fallen and was under the animal,
```
*does* demonstrate having a world model :berk: this what Imma show the GOFAI people
bmk#1476: :goose2: you can't just make inferences based on one datapoint
Awesome_Ruler_007#7922: can't convey the tone, but it was supposed to be sarcastic
Awesome_Ruler_007#7922: wait lemme do try to show them this :ultraberk:
bmk#1476: oh k
nostalgebraist#3542: spent all day at work wondering why my contrastive learning models all sucked... then i realized i was initializing the log temperature to 0.07 when it should have been log(1/0.07) 🤦♂️
nostalgebraist#3542: you love to see it https://cdn.discordapp.com/attachments/729741769738158194/898022132951429160/fixed_temp_loss.png
bmk#1476: f
EricHallahan#1051: f
Ernomeyer#6988: Hello everyone! I am 19 years old. I'm from Argentina and I like to learn and try projects AI on my free time. I found this community in the EleutherAI while researching more about GPT-J. Looking forward to help people and learn.
EricHallahan#1051: Welcome!
Some Point Process#3793: It looks like distill.pub is back from their hiatus: https://distill.pub/2021/gnn-intro/
chilli#5665: that was posted a while ago
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/898064214982287380/unknown.png
ersatz#0001: I’m still hoping that Google or another of the big boys will fund the journal
|
coffeelatex#3040: iirc there were a few core tensions that they had w/ the current arrangment of Distil (like editor impartiality), doubt that's sth they could resolve w/ funding
chilli#5665: I don't think the constraint is funding
ersatz#0001: I mean to hire people
cognomen#6297: or start a discord
chilli#5665: I don't think that's a constraint either
chilli#5665: the constraint is the people who can actually do a good job editing/filtering the posts
ersatz#0001: yeah they are expensive
ersatz#0001: that's why money is needed
ersatz#0001: people are doing far more boring stuff for far less money than what Google could throw at them
chilli#5665: not the right people
Parker#3197: https://distill.pub/2021/distill-hiatus/
Parker#3197: they said why they were taking a break there
Parker#3197: some of the reasons include
```
- Mentorship is in Tension with Being a Journal
- Editor Articles are in Tension with Being a Journal
- Neutral venues can be achieved in other ways
- Self-Publication Seems Like the Future (in most cases)
- A Half-hearted Distill May Cause Harm
|
Why a Hiatus?
- Burnout```
ersatz#0001: Yeah burnout
Louis#0144: https://twitter.com/lcastricato/status/1448673227123871750?s=20 posting this here bc its relevant to eleuther stuff
Louis#0144: ah actually looks like people noticed in #gpt-j
Louis#0144: lol
EricHallahan#1051: This has been all over this server yesterday lol
Louis#0144: im dumb
Louis#0144: dont mind me
alstroemeria313#1694: https://twitter.com/rahiment/status/1448459166675259395 well that's a conjecture
alstroemeria313#1694: Permutations only?
Louis#0144: how is this applicable to us
Louis#0144: maybe im missing the importance tbh'
Some Point Process#3793: I think there was an earlier result shown that if models are mode connected in the loss landscape then you can interpolate between those models somehow
Some Point Process#3793: so, since, transformers are permutation invariant -> easily interpolate transformers(?)
Artia#1759: *How could it be pushed back a week when there’s no ETA?*
bmk#1476: it gets pushed a *month* back every time *this* question is asked
cfoster0#4356: See also: https://twitter.com/ak92501/status/1435050375434878976?t=_64lCRy9OmKThbvLDloGuA
alstroemeria313#1694: the individual dimensions of d_model would be the things that they are talking about permuting
Some Point Process#3793: Oh wow, hmm
|
Some Point Process#3793: It looks like it was permutation of hidden units, which still sounds like it should perturb the model a lot even tho they selected for certain "valid permutations"; but there was an earlier result showing that resnets in some cases can have their layers permuted so I'm not surprised that these exist
alstroemeria313#1694: so the latest AWS deep learning image is Ubuntu 18.04?
alstroemeria313#1694: is there a 20.04 or should i use the 18.04
alstroemeria313#1694: `Deep Learning AMI (Ubuntu 18.04) Version 50.0 - ami-0050625d58fa27b6d`
alstroemeria313#1694: Hey does haiku mixed precision like, work.
alstroemeria313#1694: I can't find a loss/gradient scaler
alstroemeria313#1694: in haiku
alstroemeria313#1694: (This will be on GPU)
guac#4716: https://github.com/deepmind/jmp haven't used it but maybe it leads you in the right direction
alstroemeria313#1694: oh ty
alstroemeria313#1694: was maybe figuring there wasn't one because you don't need one with bf16
alstroemeria313#1694: but i will be using V100s
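Roughly what dynamic loss scaling with jmp looks like, going from memory of its README, so treat the exact names as assumptions; `loss_fn`, `params`, and `batch` are whatever your haiku-transformed model defines:
```
import jax
import jax.numpy as jnp
import jmp

loss_scale = jmp.DynamicLossScale(jnp.float32(2 ** 15))

def train_step(params, loss_scale, batch):
    # scale the loss before taking grads, then unscale the grads
    grads = jax.grad(lambda p: loss_scale.scale(loss_fn(p, batch)))(params)
    grads = loss_scale.unscale(grads)
    grads_finite = jmp.all_finite(grads)
    loss_scale = loss_scale.adjust(grads_finite)
    new_params = jax.tree_map(lambda p, g: p - 1e-4 * g, params, grads)
    # only take the step when the grads were finite
    params = jmp.select_tree(grads_finite, new_params, params)
    return params, loss_scale
```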
alstroemeria313#1694: um
alstroemeria313#1694: `WARNING: infoROM is corrupted at gpu 0000:00:16.0`
alstroemeria313#1694: Is this bad.
alstroemeria313#1694: Should I tell AWS
bmk#1476: that sounds bad
bmk#1476: :goose6: https://cdn.discordapp.com/attachments/729741769738158194/898268035318677524/unknown.png
alstroemeria313#1694: Where is like. Any recent Python on this box.
alstroemeria313#1694: guess i should email aws then
|
random_lurker99b#8614: I dont think this lib is used much so would tread with caution/not spend too much time on it fyi
guac#4716: thanks for the heads up 👍
alstroemeria313#1694: hi, how can i get an AWS image with newer CUDA than 10.0 on it
alstroemeria313#1694: I need to run JAX on the thing and it doesn't support older than 11.1
alstroemeria313#1694: Does Nvidia have AMIs that are up to date.
alstroemeria313#1694: I can bring the entire rest of the environment up myself, I'm just wary of trying to upgrade CUDA inside virtual machines since I've broken VMs/containers that way so often.
Louis#0144: do you know a good alternative
Louis#0144: i imagine mixed precision is probably quite important to deepmind lol
StellaAthena#3530: Dunno if this is useful to anyone, but I just found a source of >150 college-level math problems and proofs: https://orion.math.iastate.edu/ehjohnst/PoW/PoW.html
inox#5400: do you have to pay for the deep learning AMI? <https://docs.aws.amazon.com/dlami/latest/devguide/features.html> it's supposed to have the most recent CUDA on it
alstroemeria313#1694: it actually had 10.0
alstroemeria313#1694: I found an Nvidia one that had 11.2
inox#5400: that problem sounds familiar
alstroemeria313#1694: Also the Nvidia one used Ubuntu 20.04
alstroemeria313#1694: Instead of 18.04
inox#5400: I think I installed the cuda drivers manually and saved that image
alstroemeria313#1694: So it already had recent Python
inox#5400: did you get AWS research credits?
alstroemeria313#1694: Someone is letting me use their credits for free rn
alstroemeria313#1694: ^^;
|
alstroemeria313#1694: I did actually get it to work
alstroemeria313#1694: JAX on the AWS box that is
𓅬 gabriel_syme 𓅬#3220: nice!
𓅬 gabriel_syme 𓅬#3220: I need to use my credits too, but the interface is very alien to me dammit
𓅬 gabriel_syme 𓅬#3220: not sure if this is interesting for the things you're doing alstro: https://jaxopt.github.io/stable/
alstroemeria313#1694: I um. Still don't know how to use the JAX memory profiler for even as simple a task as determining how much is in use.
alstroemeria313#1694: How do I tell.
EricHallahan#1051: The fact that this task is non-trivial tells you a lot about JAX lol
alstroemeria313#1694: > The JAX device memory profiler emits output that can be interpreted using pprof
alstroemeria313#1694: No
alstroemeria313#1694: No
alstroemeria313#1694: > At the time of writing, installing pprof requires first installing Go and Graphviz, and then running
alstroemeria313#1694: No
EricHallahan#1051: Just no
alstroemeria313#1694: Can I make the training loop.
alstroemeria313#1694: Print out the amount of memory in use.
alstroemeria313#1694: > The output of device_memory_profile() is a binary protocol buffer that can be interpreted and visualized by the pprof tool.
alstroemeria313#1694: Guess not lol
EricHallahan#1051: :grimberk:
𓅬 gabriel_syme 𓅬#3220: installing graphviz yikes
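For what it's worth, the docs being quoted boil down to something like this: dump the profile to a file and inspect it with pprof afterwards; there's no simple in-loop 'bytes in use' number exposed this way.
```
import jax

# write the current device memory profile to disk
jax.profiler.save_device_memory_profile("memory.prof")
# then, outside Python: pprof --web memory.prof  (needs Go + Graphviz, as above)
```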
|
aero#1357: @chilli kuru said I should message you about this
I've been training stylegan3 on 4x a100 and randomly getting a strange pytorch distributed error after a while. Doesn't break training but pretty weird `RuntimeError: open(/tmp/tmpwzag0na8/.torch_distributed_init): No such file or directory`
Any idea what thats about? I can send full stack trace too
aero#1357: only found 1 post online with the same error but no info there
aero#1357: <https://githubmemory.com/repo/mit-han-lab/data-efficient-gans/issues/80>
Kia#2550: 4 A100:surprise:
chilli#5665: I'd post on GitHub or on the pytorch forums
chilli#5665: I can ping the appropriate parties about it if you do
Louis#0144: ikr lmao i didnt realize NAI was so cheap
Louis#0144: *only* 4 A100s
Louis#0144: pfft
aero#1357: will do
EricHallahan#1051: > ~~We are not the place to ask for technical support or beginner questions~~
We can't really help you too much with PyTorch here, I would definitely open an issue though.
kurumuz#5695: this is for hyperparameter sweeps
kurumuz#5695: lol
kurumuz#5695: is it a beginner question though :berk:
Louis#0144: i would have DM'ed chilli
|
aero#1357: well its not a blocking issue since training continues anyway just bizarre
Louis#0144: tbh
Kia#2550: *lol*
AI_WAIFU#2844: anything that involves less than 8 A100s is a beginner question here
kurumuz#5695: 😔
AI_WAIFU#2844: for tpus the number is 32
aero#1357: yay 👏
kurumuz#5695: I argue it should be at least 256
AI_WAIFU#2844: Yeah that's pretty true, I said 32 because that's when you gotta start doing spmd
StellaAthena#3530: It’s 32 if you’re doing something fancy, 256 if you’re doing something pedestrian
kurumuz#5695: what do we say about 8xA100s then
elderfalcon#4450: Time to get @ LucidRains one too. 😂😂😂😂
elderfalcon#4450: Come on NVIDIA reps in the chat, you can do it! We believe in youuuuuu~~~~!
elderfalcon#4450: We gotta get our open source transformer magic tested #fasterer and #gooderer. It's the only way. Help us save the world, magical NVIDIA chat lurkers, please.
EricHallahan#1051: With Interpretability Reading Group moving to a fortnightly cadence, we have had the desire to restore weekly events. To remedy this we are therefore proud to announce the pilot meeting of **EleutherAI Show & Tell**, scheduled for Saturday, October 16, 2021 2:00 PM and currently planned to recur fortnightly between meetings of Interpretability Reading Group.
Show & Tell is envisioned as an informal event for the discussion of projects and research by EleutherAI contributors. Presentations and demonstrations will be given in an impromptu manner by contributors who volunteer to do so on a first-come, first-serve basis. Projects can be anywhere in the gamut from projects marked as TODO on the Task Board to established projects like those found in the project channels: It is all fair game! Presentations covering small-scale experiments and research results are also encouraged.
Once the active list of volunteers is exhausted, the session will fallback upon general voice chat (with an emphasis towards topics commonly discussed in #research) until another volunteer voices their desire to present.
We hope that Show & Tell will promote the discussion of active projects and provide exposure to projects that might otherwise be overlooked. We hope to see you there!
|
elderfalcon#4450: Fantastic idea. Two thumbs up! 👍
StellaAthena#3530: Oops didn't see you post this so I made a (lightly edited) announcement
EricHallahan#1051: You also didn't copy the date code. :P
StellaAthena#3530: I did, I think it just doesn't copy right
Dwarf#6935: wait so this is just *my* time-zone?
StellaAthena#3530: What's the code
EricHallahan#1051: \Saturday, October 16, 2021 2:00 PM
EricHallahan#1051: Need to remove the slash lol
StellaAthena#3530: @Dwarfyes it shows up in your time zone
StellaAthena#3530: (assuming you've set your time zone in Discord correctly)
Dwarf#6935: :goose7:
EricHallahan#1051: I don't know whose idea it was to develop this system and not be able to easily construct timecodes within the application.
EricHallahan#1051: Anyway we have been working to put together Show & Tell after I originally pitched the idea around a month ago and this is the first opportunity we had. We wanted to make sure there were at least a couple of projects at the ready to be presented for the first event so we could ensure it runs relatively smoothly.
EricHallahan#1051: So I am pretty excited at the prospect at giving this a shot.
Kazumi#1297: aaa 3am
MicPie#9427: Yes, sounds very interesting!
Is there already a list of the presentation topics?
𓅬 gabriel_syme 𓅬#3220: Hello! Have a look at the project board here: https://github.com/EleutherAI/project-menu/projects/1
And also, just hang around with us, plenty of cool ideas are thrown around here from time to time 🙂
|
ersatz#0001: is there an EleutherAI podcast?
StellaAthena#3530: No
StellaAthena#3530: With Interpretability Reading Group moving to a fortnightly cadence, we have had the desire to restore weekly events. To remedy this we are therefore excited to announce the pilot meeting of **EleutherAI Show & Tell**, scheduled for Saturday, October 16, 2021 2:00 PM and currently planned to recur fortnightly between meetings of Interpretability Reading Group.
Show & Tell is envisioned as an informal event for the discussion of projects and research by members of the EleutherAI community. Projects can be anywhere in the gamut from ideas marked as TODO on the Task Board, to pet projects with promising results, to established projects like those found in the project channels: It is all fair game! Presentations covering small-scale experiments, research results, and papers-in-progress are also encouraged.
Our goal with Show & Tell is to promote the discussion of active projects, provide exposure to projects that might otherwise be overlooked, and make it easier to stay in touch with everything that's going on. We hope to see you there!
-- The EleutherAI Team
aaronrmm#3198: Does this mean when we are done learning one domain of data we can change to another or focus on a subdomain that has bad accuracy? That'd be huge
StellaAthena#3530: I mean, this has been true for close to a decade 😛
aaronrmm#3198: without losing accuracy on the original domain?
AI_WAIFU#2844: yeah remember when telling cats apart from dogs was hard?
StellaAthena#3530: https://xkcd.com/1425/
aaronrmm#3198: I don't think I've ever heard of an algorithm that removes items from the training set when those items become accurate enough
StellaAthena#3530: This specific algorithm is new, but your comment described transfer learning
random_lurker99b#8614: not open source one no
Louis#0144: https://tenor.com/view/let-me-in-eric-andre-wanna-come-in-gif-13730108
Furk#5259: I'm trying to download pile_subset. But I realized the-eye is still down. Is there a different host/server for pile ?
StellaAthena#3530: We have it on a Google Bucket, but they charge egress fees >.>
|
I have a copy that's been pre-processed for use with Megatron. Dunno what your application is, but if you're down to learn how to use Megatron's dataloader that may be an option.
Furk#5259: Yeah, I'm tinkering GPT-neoX rn.
StellaAthena#3530: Oh perfect! It's ready made for that 🙂 Do you have somewhere you'd like me to upload it?
Furk#5259: It would be great if you can upload it to Kaggle.
StellaAthena#3530: @Furk Is there a way to do that upload via the command line?
Furk#5259: Uh I found this:
```
Create a New Dataset
Here are the steps you can follow to create a new dataset on Kaggle:
Create a folder containing the files you want to upload
Run kaggle datasets init -p /path/to/dataset to generate a metadata file
Add your dataset’s metadata to the generated file, datapackage.json
Run kaggle datasets create -p /path/to/dataset to create the dataset
```
and `datapackage.json`:
|
```
{
"title": "My Awesome Dataset",
"id": "timoboz/my-awesome-dataset",
"licenses": [{"name": "CC0-1.0"}]
}
```
Furk#5259: These might cause some authentication issues though. If you have a easier place to upload, I can download from there.
ersatz#0001: is there something like a text model but with voices? that is trained on English speaking voices for example, I know it's a strange question but I wonder
CRG#8707: https://ai.facebook.com/blog/textless-nlp-generating-expressive-speech-from-raw-audio/
catkage#7500: https://ai.facebook.com/blog/textless-nlp-generating-expressive-speech-from-raw-audio/
ersatz#0001: it's exactly that! and it's very recent too.
alstroemeria313#1694: `params = jax.tree_map(lambda x: x / 2, params)`
alstroemeria313#1694: I didn't want to think too hard about how to make haiku do what I wanted for the init.
alstroemeria313#1694: And the default init made an untrainable model
|
chilli#5665: lol
inox#5400: that kinda makes me like jax
fe#0483: 😡 cuda/nvidia/driver pit of doom. no one likes to be there.
fe#0483: gah!
fe#0483: kernel: NVRM: Xid (PCI:0000:85:00): 79, pid=0, GPU has fallen off the bus.
kernel: NVRM: GPU 0000:85:00.0: GPU has **fallen off the bus.**
kernel: [133B blob data]
kernel: NVRM: A GPU crash dump has been created. If possible, please run
NVRM: nvidia-bug-report.sh as root to collect this data before
NVRM: the NVIDIA kernel module is unloaded.
fe#0483: sigh.
fe#0483: ok it's friday and I give up: wandb.vendor.pynvml.pynvml.NVMLError_GpuIsLost: GPU is lost
bmk#1476: ouch
bmk#1476: is this your own hardware?
bmk#1476: if so try reseating the gpu
fe#0483: hrm
fe#0483: yes, here in the basement
fe#0483: I'll try
bmk#1476: take it out of its slot, make sure the slot and connector are clean, then put it back making sure that the connector is all the way in and the latch at the back is engaged with the card
fe#0483: how did you know my 3090 was on a sketchy pci extender?
|
fe#0483: :berk:
bmk#1476: lmao
bmk#1476: thats.. rprobably why
bmk#1476: why cant you just put your card directly into the slot? airflow? not enough space in your case?
fe#0483: it's actually not a low quality one, but I'll go reseat it
bmk#1476: in general pcie risers are sketch
bmk#1476: even good ones
bmk#1476: i wouldnt use one unless i totally had to
fe#0483: Yea, me neither.
fe#0483: I love how it pegs one of the 24 cores at 100% after dying and I have to restart as well.
bmk#1476: to be clear, are you using one of *these* things? https://cdn.discordapp.com/attachments/729741769738158194/898687929512259624/unknown.png
bmk#1476: or these https://cdn.discordapp.com/attachments/729741769738158194/898687999833952288/61j6rxvtt9L.png
EricHallahan#1051: This is #general sir.
Louis#0144: :berk:
fe#0483: I leave the mining to the professionals
fe#0483: no go. hrm. I'll have to look at it later. too tired.
iOhadRubin#3747: Can anyone recommend a good library for visualizing cross attention?
iOhadRubin#3747: Preferably with nice integration to HF
StellaAthena#3530: You visualize self attention, and then cross your eyes. Duh /s
이온#9962: Is GPT-J better than GPT-Neo?
|
이온#9962: What is GPT-J-6B?
StellaAthena#3530: !faq
Carl-bot#1536:
EricHallahan#1051: Uhhh... that isn't covered by the FAQ?
kindiana#1016: maybe it should
kindiana#1016: 😉
EricHallahan#1051: :guilty:
EricHallahan#1051: Welcome! Depends on your definition of "better". It has better downstream performance on tasks but it is larger and therefore harder to run on less-powerful hardware.
EricHallahan#1051: I suggest reading https://arankomatsuzaki.wordpress.com/2021/06/04/gpt-j/
이온#9962: Thanks!
philip_rhoades#7596: Self Introduction:
People, I am retired (I started off in BioMedical Research but then spent 35 years in IT) but one of my two non-profit projects is The Neural Archives Foundation:
http://neuralarchivesfoundation.org
It has been operating quietly since 2008 but now, as well as preserving neural tissue, we want to start doing our own research - whether in-house or outsourced. The two main projects are creating AI Avatars for our clients who have their brains already frozen - but also for our still-breathing people (starting with Avatar Phi Rho for me). Our other (more ambitious, longer-term) project is setting up a facility for HiRes brain scans - although if we make progress with that exercise, we will need some serious compute and network power - so this will likely be of use for the Avatar project too of course. I am interested in chatting about anything and to anyone which / who might be relevant to these projects.
PS I was quite interested in GPT-3 of course but very unenthusiastic about the MS connection . . so I am very happy I have found this project which suits my sensibilities much more . .
Zeronix#2793: hello! I am a recent CS/AI grad, interested in using AI to solve interesting problems. My current interests lie in reinforcement learning and embodied AI for robotics, however I can always get interested in a challenging problem. I am fluent in Python and PyTorch and am interested in potentially contributing to a project here. nice to meet you all 🙂
richardblythman | Algovera.ai#3425: Hi everyone, I just quit my job as an ML engineer in a big tech company. I'm really interested in how we can do AI without the middle man (i.e. the extractive tech company) and how decentralization of AI can help to mitigate some of the risks of AI. This server is the most interesting use case I've come across. I'm interested in investigating how we can go even further than the amazing work you're doing. How can we continue to coordinate as the scale of this group increases? How can we reward contributors so that they can do this full time? How can we do the commercial side of what a tech company does but where the value flows back to the community? I'd be really interested in chatting to people here if they have ideas on this? Also, where can I find a team here and start doing some research work?
𓅬 gabriel_syme 𓅬#3220: Welcome! For the last question, you can take a look at the project board: https://github.com/EleutherAI/project-menu/projects/1 A lot of ideas there that you might like. Of course, you are also welcome to submit your own ideas, either in the chat or there. The best way to get involved I feel is to stay involved, hang around in the chat, exchange thoughts and maybe grab one idea that you like from the many great ideas floating around this server from time to time.
𓅬 gabriel_syme 𓅬#3220: Welcome! Take a look at the above and hang around with us in here 🙂
𓅬 gabriel_syme 𓅬#3220: This is a rather quiet time on the server (timezones and such), although that will change any minute now
rom1504#5008: Coordination of open source initiative is indeed an important and difficult topic. If that's something you're interested in, i bet there would be a bunch of interesting work to do around listing and organizing the various projects that people talk about around here, creating links, proposing to build common components, figuring out what papers are related,...
It's mostly done in discord around here, but I bet it can be even better
Awesome_Ruler_007#7922: > AI is now an actual arms race rather than a figurative one. AI researchers have traditionally seen the AI arms race as a figurative one -- simulated dogfights between competing AI systems carried out in labs -- but that is changing with reports of recent use of autonomous weapons by various militaries.
:ultrazucc:
𓅬 gabriel_syme 𓅬#3220: great, let's make an arena, let the autonomous weapons fight themselves and the winner wins the spoils of war
𓅬 gabriel_syme 𓅬#3220: imagine if people actually did something smart like that vs destroying each other.
genetyx8#7543: anyone here know much about infinite-width NNs/NNGPs? I'm specifically looking for papers to read/cite
rom1504#5008: The next step is to do it in a simulation
rom1504#5008: Win the war game, win the war
Awesome_Ruler_007#7922: `robowars`?
Awesome_Ruler_007#7922: damn, I wanna research military uses of AI
Awesome_Ruler_007#7922: Imagine a Robo-cop human/AI hybrid - a HUD which uses sound and mini-drones to maintain a 3D structure of a building highlighting objects of danger and allowing the soldier to predict hostiles and encounters in advance
Awesome_Ruler_007#7922: you wouldn't even need an expensive exoskeleton - all you need is a helmet and a swarm of drones.
and what if you had small long-range rockets placed strategically around a city, such that the cop can call a swarm of drones to any location within minutes..?
Awesome_Ruler_007#7922: :ultraberk:
thenightocean#6100: yeah, like in this old movie: https://en.wikipedia.org/wiki/Robot_Jox
𓅬 gabriel_syme 𓅬#3220: I think I glimpsed at one in ICLR
𓅬 gabriel_syme 𓅬#3220: https://openreview.net/forum?id=tUMr0Iox8XW
𓅬 gabriel_syme 𓅬#3220: I understand nothing 😄
genetyx8#7543: thx
genetyx8#7543: I looked at the Neural Tangents library, they have a few references
𓅬 gabriel_syme 𓅬#3220: there's some NTK papers I think in there
Kazumi#1297: I ended up never getting TRC access
nev#4905: rip
circuit10#0158: I have it but all I did was start one TPU, install neofetch, post a screenshot, make sure GPT-J worked on it then turn it off
circuit10#0158: I haven’t come up with a good use yet
circuit10#0158: Well I have some idea for fine tuning but I’d need to gather data first
𓅬 gabriel_syme 𓅬#3220: My TPUs have been running non stop since August 10. TRC has literally fueled (i.e. funded) almost all of my latest research. Pretty insane
StellaAthena#3530: If you’re looking for suggestions I have a list of cool projects to do
EricHallahan#1051: https://board.eleuther.ai
circuit10#0158: I would like to do projects but I’m not a proper AI researcher, just someone who played with Talk To Transformer back in the GPT-2 days and got hooked on it
circuit10#0158: So I probably don’t have the experience for anything complicated
Zeronix#2793: could I get the details 👀
StellaAthena#3530: 1. You could take VQGAN-CLIP and instruction-tune it (see https://ai.googleblog.com/2021/10/introducing-flan-more-generalizable.html?m=1 https://openreview.net/group?id=ICLR.cc/2022/Conference)
2. You could implement “A Fast Fourier Transform for Fractal Approximations” by Calvin Hotchkiss, Eric S. Weber
3. You could train a Transformer to play a game (chess, Rubik's cube)
4. You could take an adversarial attack against text models and use it to attack GPT-NeoX
Zeronix#2793: @StellaAthena the FFT paper seems interesting. what applications does the proposed method have?
StellaAthena#3530: How much do you know about how CNNs work?
Zeronix#2793: I understand the basic principle of CNNs. Given an input 3D tensor (or 4D for batch), a Conv2D layer applies a linear mapping to 3D subpatches of fixed size at each location in the input.
Zeronix#2793: I also know there's lots of additional tricks used by modern computer vision NNs but not really familiar with those
StellaAthena#3530: CNNs are MLPs with certain geometric structure physically built into them. If you're not working in a space with a Euclidean geometry, you need to modify how CNNs are constructed from the "default" implementation which is specifically designed for euclidean space.
StellaAthena#3530: The way you do this is closely connected to Fourier Analysis. I hope to use this fractal fourier transform to compute CNN with a *fractal geometry* built in, allowing CNNs to be applied to new types of data where they are currently ineffective. This is primarily a theoretical pursuit for me, but I do have a dataset of independent interest to CS researchers that natively carries a fractal geometry. I plan to show that this "fractal CNN" is more effective on the dataset than a traditional CNN
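The Fourier connection in a nutshell: on an ordinary periodic Euclidean domain, convolution is diagonalized by the FFT, and the fractal FFT would play the analogous role on a fractal domain. A minimal NumPy check of the usual convolution theorem (illustrative only, not the paper's algorithm):
```python
import numpy as np

# Circular convolution two ways: directly, and as pointwise
# multiplication in the Fourier domain (the convolution theorem).
rng = np.random.default_rng(0)
n = 16
x = rng.standard_normal(n)  # signal
k = rng.standard_normal(n)  # filter

direct = np.array([sum(x[m] * k[(i - m) % n] for m in range(n)) for i in range(n)])
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real

assert np.allclose(direct, via_fft)
```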
Zeronix#2793: interesting. do you have constraints on the language / framework used to implement it? if I did it I would do it in Python + Numpy / Numba / Pytorch
StellaAthena#3530: That would be my ideal actually
Zeronix#2793: cool beans. I'll take a shot at it
Zeronix#2793: have you previously contacted the author(s) of this paper to ask if he already implemented it?
StellaAthena#3530: Yup. They have not done so
StellaAthena#3530: @Zeronix There's some more details and optional reading if you want to learn about the theoretical background here: https://github.com/EleutherAI/project-menu/issues/39
StellaAthena#3530: But really for implementing the paper the theory is pretty optional IMO
Zeronix#2793: @StellaAthena thanks, that's helpful. I'm currently reading the paper. It's a bit annoying that they don't have a concise summary of the proposed algorithm. Looks like I'll have to tease it out slowly.
I also need to read up on what iterated function systems are and how they generate fractals.
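For the IFS part, the "chaos game" is a quick way to build intuition: pick a set of contraction maps and repeatedly apply one at random; the visited points approximate the attractor. A tiny sketch for the Sierpinski triangle (illustrative, not the paper's specific IFS):
```python
import random

# IFS for the Sierpinski triangle: f_i(p) = (p + v_i) / 2 for each vertex v_i.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
x, y = 0.25, 0.25
points = []
for _ in range(10_000):
    vx, vy = random.choice(vertices)   # pick one map at random
    x, y = (x + vx) / 2, (y + vy) / 2  # apply the contraction
    points.append((x, y))
# `points` now traces out the attractor (scatter-plot it to see the fractal).
```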
𓅬 gabriel_syme 𓅬#3220: if I found time I'd try #3
jordiae#4107: Which is the standard way of selecting the learning samples in few-shot learning settings?
jordiae#4107: Are they fixed for all the test samples? Are they randomly selected for each case?
Deno#3312: Ok
Kazumi#1297: oh right, there's show and tell today, and I slept through it
MicPie#9427: Still going on!
Kazumi#1297: joininng
alstroemeria313#1694: Hey does anyone have the FFHQ original images?
alstroemeria313#1694: Like so we can make the unaligned dataset.
alstroemeria313#1694: Nvidia put them on Google Drive and they are, of course, over quota
alstroemeria313#1694: Internet Archive has them but they're incredibly slow.
alstroemeria313#1694: Academic Torrents doesn't have them (only the preprocessed aligned ones)
EricHallahan#1051: These?
https://drive.google.com/drive/folders/1ZX7QOy6LZuTLTnsOtQk-kmKq2-69l5hu
alstroemeria313#1694: Yes, `in-the-wild-images`
alstroemeria313#1694: I tried their download script and it got 9000/70000 before Google Drive IP banned us for a day.
fe#0483: @alstroemeria313 have you tried https://github.com/Janspiry/Image-Super-Resolution-via-Iterative-Refinement ?
alstroemeria313#1694: not yet
fe#0483: Wondering your impression of you have
fe#0483: K
alstroemeria313#1694: We are trying to do similar things
alstroemeria313#1694: Can compare
alstroemeria313#1694: ...What kind of loss function do they use.
cfoster0#4356: Pretty sure it's the same as usual, L2 on eps
alstroemeria313#1694: i found it hard to find
alstroemeria313#1694: also they maybe used L1
alstroemeria313#1694: I mean this repo not the SR3 paper.
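For reference, the standard objective in this family of models is just a regression loss between the true noise and the predicted noise; swapping MSE for L1 gives the variant the repo may be using. A rough PyTorch sketch (the model signature and schedule name are placeholders):
```python
import torch
import torch.nn.functional as F

def diffusion_loss(model, x0, alphas_cumprod, use_l1=False):
    # x0: clean images [b, c, h, w]; alphas_cumprod: 1-D noise schedule.
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    eps = torch.randn_like(x0)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps  # corrupt x0 to timestep t
    pred = model(x_t, t)                        # assumed to predict eps
    return F.l1_loss(pred, eps) if use_l1 else F.mse_loss(pred, eps)
```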
fe#0483: Hrm misinterpreted your sentence.
Furk#5259: I'm tokenizing pile_subset. And it takes 1 second to tokenize 0.8 mb. So will it take 5 hours to tokenize pile_subset, or is my math wrong? https://cdn.discordapp.com/attachments/729741769738158194/899176475708755988/unknown.png
Orz#3023: probably right
But the amount of time required for training/evaluation on the Pile makes this negligible
Orz#3023: I'd suggest you do this at runtime in a separate thread
It's not really that much of an overhead imo
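One way to do that in PyTorch is to tokenize inside DataLoader workers so it overlaps with training instead of being a separate 5-hour pass. A rough sketch assuming a HF-style tokenizer and an iterable of raw documents (names are illustrative):
```python
from torch.utils.data import DataLoader, IterableDataset

class TokenizeOnTheFly(IterableDataset):
    # Yields tokenized examples as they are consumed instead of pre-tokenizing.
    def __init__(self, docs, tokenizer, max_len=2048):
        self.docs, self.tokenizer, self.max_len = docs, tokenizer, max_len

    def __iter__(self):
        # With num_workers > 1 each worker sees a full copy of `docs`,
        # so shard per torch.utils.data.get_worker_info() in practice.
        for doc in self.docs:
            yield self.tokenizer(doc, truncation=True, max_length=self.max_len)["input_ids"]

# loader = DataLoader(TokenizeOnTheFly(docs, tokenizer), batch_size=None, num_workers=4)
```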
richardblythman | Algovera.ai#3425: Do you ever do things like design sessions on a Miro board around here e.g. how to improve ML models or even brainstorm on issues of coordination/organisation? Do you have a calendar to subscribe to with all of the reading groups etc?
richardblythman | Algovera.ai#3425: Also, do you use a forum like discourse at all as a more formal way to propose changes?
Furk#5259: yea you are probably right
Furk#5259: it took 4.5 hours to tokenize just one jsonl.zst
Furk#5259: it would be better if I do it separately.
StellaAthena#3530: @richardblythman | Algovera.ai We aren’t quite as organized as you seem to think 😛 We currently have one reading group, which you can follow via the link in #interpretability-reading-group
StellaAthena#3530: Generally people just chat about what’s working, what’s not working, etc. there’s also semi-regular community surveys where people are explicitly asked to give feedback. Largely this group is community driven though: if you can convince people it’s worth doing something people will do it. That’s how our project board started, for example.
triggerhappygandi#0001: There's a hacky workaround for that. Let me see if it works
Lord Parfington#0012: does anyone here have any experience working with tensorflow lite?
Vrööm#4253: what do you need specifically?
Lord Parfington#0012: well, i was just curious about how i might be able to try converting one of these visual models into onnx and then trying to map a tflite inference with it
Vrööm#4253: map a tflite inference?
Vrööm#4253: so you're trying to do something like meta-learning?
cfoster0#4356: I truly don't understand Ben Goertzel's positioning in the AI landscape
cfoster0#4356: Like on the one hand he's got a "AGI on the Blockchain" type project and on the other, he gets serious speakers like Chollet and Bengio at his conference :thonk:
StellaAthena#3530: "Influencer"
AI_WAIFU#2844: grift, his positioning is grift
AI_WAIFU#2844: It's pretty effective actually
AI_WAIFU#2844: he's gotten millions
ethan caballero#6044: He co-coined the term "AGI", @3:04:
https://www.youtube.com/watch?v=4X2xYyIk5x0&t=184s
Furk#5259: Is he the CEO of the company that built Sophia?
Awesome_Ruler_007#7922: What
https://www.reddit.com/r/MachineLearning/comments/q9kbkm/researchbiologicallyinspired_neural_networks_for/
Awesome_Ruler_007#7922: how is it so robust on so less parameters?
cfoster0#4356: IIRC they use a convolutional neural network to feed into those neurons. So in total the system is actually in the many thousands of parameters
Awesome_Ruler_007#7922: still
Awesome_Ruler_007#7922: https://pub.towardsai.net/a-new-brain-inspired-intelligent-system-drives-a-car-using-only-19-control-neurons-1ed127107db9
cfoster0#4356: They're also dealing with an extremely reduced version of the driving problem
Awesome_Ruler_007#7922: the point is that even slightly biologically inspired nets actually work outside toy datasets
Awesome_Ruler_007#7922: from my skim, it seems NCPs are also task agnostic - though I don't really know how they achieve that
cfoster0#4356: There's some major survivorship bias here. For every biologically inspired architecture that solved the task, how many equally biologically inspired architectures failed to?
Awesome_Ruler_007#7922: I may argue that the degree to which they were biologically inspired would be a strong variable in their chances of failure
Awesome_Ruler_007#7922: I don't see how it's extremely reduced - it seems decent, and they aren't touting it as autopilot. IMO it's much better than MNIST at least
cfoster0#4356: I think I'd agree with you, except I would expect the sign of the correlation to be flipped!
Awesome_Ruler_007#7922: biologically-inspired architecture just need more data and researchers :10IQ:
We are waiting for another "schmidhuber" to get more people into it :chadhuber:
cfoster0#4356: I'm sure it'll get more data and researchers once it starts delivering results
cfoster0#4356: If it works it works
Awesome_Ruler_007#7922: DL was in the same state before; imagine if it didn't get the same. we would still be doing least squares
cfoster0#4356: That sounds backwards. DL has gotten the level of investment it has the past decade or so because it worked at the compute & data scales of the day
cfoster0#4356: I haven't heard Hawkins or the like saying, "well, I think this stuff will work but only with 10x more data or more researchers working on it"
cfoster0#4356: He in particular seems pretty darn confident that they'll figure it out in the next couple years, at existing investment levels
Some Point Process#3793: https://trueagi.io/
Seems to be related to the ongoing 2021 agi conference: http://agi-conf.org/2021/
Louis#0144: Omg the opencog guy
Louis#0144: That's so funny!!!!
kurumuz#5695: is he going to make waifus tho
ABCD#7698: https://www.reddit.com/r/MachineLearning/comments/q9vdhy/d_if_one_of_the_faang_companies_offers_you_a_ml/ Curious to know what others think of this
Awesome_Ruler_007#7922: > Dr. Ben Goertzel
> Father of Artificial General Intelligence,
huh?
nev#4905: ah yes
nev#4905: FoAGI
bmk#1476: I genuinely can't tell if this site is sincere or if it's mockery of goertzel
Awesome_Ruler_007#7922: pretty sure it's sincere (typical silicon valley 🙄) but
> pioneer of AI for humanoid robotics, genomics, finance
seeing this is proof of that
cfoster0#4356: We made it, y'all 💜 😭 💜 https://cdn.discordapp.com/attachments/729741769738158194/899407974961909760/Screenshot_20211017-172520.jpg
rb#3159: hey everyone, i am currently reading about binary-neural-networks, i have tried experimenting with binary-layers (0/1 weighted) in the mlp blocks of transformer.
rb#3159: https://cdn.discordapp.com/attachments/729741769738158194/899422073775095808/real.png
rb#3159: attaching screenshots for train-loss for real and binary weighted transformer https://cdn.discordapp.com/attachments/729741769738158194/899422253387767848/binary.png
rb#3159: currently i am on a limited compute budget so i have tried with a really small (2.538291e+07, i.e. ~25M params) model. I want to check if binary layers in transformers perform on par with, or only slightly worse than, regular ones with a larger model,
rb#3159: need opinion of people who have experience with binary/ quantized neural networks
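For context on how a layer like that is usually trained: binarize in the forward pass and use a straight-through estimator in the backward pass, so the underlying real-valued weights still receive gradients. A minimal PyTorch sketch of a 0/1-weight linear layer (illustrative; rb's exact setup may differ):
```python
import torch
import torch.nn as nn

class BinaryLinear(nn.Module):
    # Linear layer whose weights are binarized to {0, 1} in the forward pass.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)

    def forward(self, x):
        hard = (self.weight > 0).float()
        # Straight-through estimator: forward uses `hard`, gradient flows to `weight`.
        w = self.weight + (hard - self.weight).detach()
        return x @ w.t()
```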
𓅬 gabriel_syme 𓅬#3220: does it scale is the question?
Vrööm#4253: so you basically replaced the linreg in the mlp with logreg?
rb#3159: Logistic regression weights need not be binary; I am talking about binary weights
Vrööm#4253: gotcha
Vrööm#4253: @rb wouldn't that indirectly mimic dropout?
richardblythman | Algovera.ai#3425: Thanks @StellaAthena. Do you think more scheduled weekly events would be something that people would be interested in e.g. design sessions, hacking sessions, town hall meetings, social calls? Discord is great but if working for EleutherAI was my only job I reckon you would want other forms of communication too. For example, one of the few things I miss from the office are the serendipitous conversations. Unexpected conversations bring new interesting ideas. Also, more events would prob build even more community here.
rb#3159: no, in dropout the weights are still real-valued. here they are strictly restricted to binary
Vrööm#4253: the weight nature is irrelevant, dropout randomly selects connections to be discarded
rb#3159: Okay so?
Vrööm#4253: I'm curious whether binary weights would act more or less as a learnable (?) dropout layer
nev#4905: that's dropconnect
nev#4905: dropout drops random units
Vrööm#4253: True true, my bad, thank you for the clarification
chilli#5665: what's the difference 🤔
CRG#8707: Dropout removes entire rows/columns, dropconnect removes individual weights
chilli#5665: ah
chilli#5665: wasn't clear what "connections" meant, but seems obvious now
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/899614154715979786/unknown.png
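In code the distinction looks roughly like this (training-mode masks written out explicitly; sketch only):
```python
import torch

def dropout(x, p=0.5):
    # Dropout: zero whole units of the activation, so entire rows/columns
    # of the adjacent weight matrices effectively stop contributing.
    mask = (torch.rand_like(x) > p).float()
    return x * mask / (1 - p)

def dropconnect(weight, p=0.5):
    # DropConnect: zero individual entries of the weight matrix (connections).
    mask = (torch.rand_like(weight) > p).float()
    return weight * mask / (1 - p)
```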
rb#3159: shouldn't dropout/drop-connect still have weighted edges? the main reason for binary NNs is that arithmetic operations can be replaced with bitwise ones.
Vrööm#4253: the weights don't matter since the units or weights get removed randomly
rb#3159: How would the weights not matter?
Vrööm#4253: because dropout does not need them, it decides *randomly* which units to drop
StellaAthena#3530: I’m certainly not going to discourage anything like what you’re suggesting. We held a “show and tell” this weekend where people gave short presentations on their work-in-progress that was very successful and went on for three hours. If you want to hold a research jam, or just hang out with people in voice chat, go ahead. There are no special permissions necessary to do so.
As far as I am aware, there is nobody who is working for EAI as their only job. The closest we come to that are a couple students (@𓅬 gabriel_syme 𓅬 and @Louis for example) whose grad school research is closely tied with their participation in EAI. However every single person with moderation powers (myself and the people with purple names) and the majority of people who have contributed to EAI projects have full time jobs. We do not pay people to do research because we don’t have the money to do so.
Louis#0144: I did EAI as my only job for a few months
Louis#0144: It was fun
bmk#1476: I'm NEET currently, but because of immigration bullshit, not because nobody wants to hire me
StellaAthena#3530: Wait, so you quit your old job and are just sitting around waiting to be allowed to move to SF?
StellaAthena#3530: That makes sense, I just hadn’t consciously realized that
Vrööm#4253: why not ~~yolo~~ toptal it?
EricHallahan#1051: > Do you think more scheduled weekly events would be something that people would be interested in e.g. design sessions, hacking sessions, town hall meetings, social calls?
Show & Tell and Interpretability Reading Group are exceptions, not the norm. We use high-bandwidth forms of communication in a very limited manner, as they tend to be synchronous (something that is not desirable for a worldwide collective). It also isn't very documentable---one of the things we have had consistent positive feedback upon is the accessibility of the accumulated knowledge here that has been naturally documented over time.
> Discord is great but if working for EleutherAI was my only job I reckon you would want other forms of communication too.
Like Stella said, this isn't something we do as our only jobs. I might spend a lot of time here, but that is in my free time and I am still a full-time undergraduate student.
> For example, one of the few things I miss from the office are the serendipitous conversations. Unexpected conversations bring new interesting ideas.
The discussion channels (especially #research and #off-topic) plus #art fill this role for us.
> Also, more events would prob build even more community here.
This was part of my motivation to come up with Show & Tell: I thought it would be beneficial to have something else to fill the schedule with and so I proposed the idea. I was honestly blown away by the positive response it had. However, I do not see us adding any more events in the near future just because of the aforementioned synchronous nature and scheduling issues.
𓅬 gabriel_syme 𓅬#3220: I wish hanging here would be my job 🙂 But yeah, [un]fortunately the grad school stuff is part time next to a full time job.
EAI is all voluntary though, a wonderful and free flowing exchange of information, knowledge and goose memes.
𓅬 gabriel_syme 𓅬#3220: If you want to get involved in stuff within or around EAI, my go to advice is to just hang out in here
richardblythman | Algovera.ai#3425: Amazing, thanks to everyone for all the info. I guess one of the main questions I'm looking into at the moment is how do we make it possible for people to work full time for groups like EAI? These distributed open source groups doing AI are a relatively new thing. Is it possible to not even need centralized tech companies (and would this be better for the world)? Great points on the importance of async and documentation. Agree for the most part but still think video can be good (especially if you can record the streams, document the outcomes). I've been working with people around the world for a while (US, Europe, Vietnam, Australia) and it seems to work fairly well (although Asia + Oz tend to keep later hours).
richardblythman | Algovera.ai#3425: For sure, I'm going to dive in, try organise some events, help out with some research and continue to ask some annoying questions 😆
StellaAthena#3530: The number one limit is money. I've said for a while that if someone wanted to come and pay me to do EAI full time I would. If someone has the money and wants to fundamentally change the research landscape of AI they can. Hell, they can do that "merely" by funding a GPT-3 replication with the purpose of publicly releasing the trained weights.
sl24#8080: Hey, any plans for a code model like Copilot? Sorry if this has already been answered.
Parker#3197: someone commented about something they created earlier that is related to this.
https://discord.com/channels/729741769192767510/747850033994662000/899649970263711824
https://discord.com/channels/729741769192767510/730095596861521970/863074714305429534
but, I don't think there are any plans here at the moment to do this
rb#3159: https://discord.gg/68NZFfxHxD
StellaAthena#3530: We fucked around with something like this but IDK if it was ever finished or released @researcher2
EricHallahan#1051: We could not be where we are now without the generous support of the Google-backed TRC and CoreWeave (who relies on NVIDIA technology). Both paths lead to centralized tech companies. If you think new AI semi startups make that irrelevant, trace back even farther and you'll quickly realize that the high-performance semiconductor supply chain they rely on is highly centralized upon fabs like TSMC (something I am sure @AI_WAIFU would be happy to explain in detail if asked).
TL;DR: This space is centralized tech companies all the way down.
richardblythman | Algovera.ai#3425: This is the dream. I think it's fantastic that you would dive into this full time 🙂 There's quite a few funding mechanisms out there for open source projects these days (currently exploring ways to use them for my own project). Maybe we could have a call to brainstorm on how EAI could work towards this direction some time?
Awesome_Ruler_007#7922: it's kinda new, so I can't find better resources on it - apart from the papers
Awesome_Ruler_007#7922: so I only have a high level idea of it - can't comment at all
StellaAthena#3530: We get a lot of people who say stuff like this and then nothing happens, and so we've generally found such meetings to be a waste of time. If you genuinely know how to marshal a million dollars or more of unrestricted funding and can prove that we can talk, but I am too busy to take speculative or brainstorming meetings
kurumuz#5695: @chilli looking at new macbooks you guys need to make pytorch GPU work at m1x lol
kurumuz#5695: 64 gigs of VRAM with 400GB/s
kurumuz#5695: it will be so crazy
Louis#0144: Oh did the announcement happen
Louis#0144: 64gb is crazy
chilli#5665: there's ... stuff happening
kurumuz#5695: like that bandwidth is so crazy
kurumuz#5695: more than A100 40GB RAM GPU can access lol
Louis#0144: That's actually insane though
Louis#0144: People will use their macs for ML
Louis#0144: Didn't think it would ever happen
chilli#5665: what are its flops?
kurumuz#5695: dunno. probably close to 3070-3080 i assume, it's so big
gollark#3909: I wonder if that would make it economically viable, given the high cost of discrete GPUs, to have mining farms made of MacBooks.
gollark#3909: It would be very funny and also somewhat horrific.
gollark#3909: Anyway, it's a shame the PyTorch Vulkan support doesn't seem to actually be... used for anything.
kurumuz#5695: people will buy macbooks to do ML
kurumuz#5695: lol
kurumuz#5695: at least if it supports pytorch properly
kurumuz#5695: i will
kurumuz#5695: new macbooks are so thicc though
kurumuz#5695: im so happy apple got over making stuff thin
kurumuz#5695: proper cooling too
gollark#3909: It's really annoying to me that you can only get the best CPUs with Apple's ridiculous ecosystem and design.
gollark#3909: Apparently they did lose most of their CPU design team to some other company recently, so who knows.
Orz#3023: well
is ARM better than traditional cpu?
gollark#3909: That's not a very valid comparison. But Apple's cores are somewhat better than available x86 ones.
gollark#3909: ARM is an instruction set. "Traditional CPU[s]" use the x86 instruction set. People argue a lot over which design is best but broadly speaking there doesn't seem to be *that* much difference, although x86 has some advantages like I think greater code density and downsides like variable length instructions being annoying to decode.
gollark#3909: They had designed ARM CPUs for ages for their phones. Recently they got good enough and/or Intel annoyed them enough that they switched over.
Orz#3023: yeah
makes sense
richardblythman | Algovera.ai#3425: OK that's fair and sorry to hear about the wasted time (it's probably a tough problem to solve). I'm at the more speculative stage for sure. My approach is more bottom up e.g. apply for smaller monthly grants to gradually support individuals. Anyway, thanks for all the tips 🙂 I'll hang around and dive into all the stuff you and others mentioned!
alstroemeria313#1694: did they bring magsafe back
alstroemeria313#1694: I only use my laptop for testing super small MNIST stuff
alstroemeria313#1694: No usable GPU.
kurumuz#5695: ""The Apple M1 Max is powered by a 32-Core GPU which features 4096 Execution units and delivers up to 98,304 concurrent threads. The GPU offers 10.4 TFLOPs compute, 327 GTexels/s and 165 GPixels/s rates. The 16-core GPU on the M1 Pro features 2048 Execution units and delivers up to 49,512 concurrent threads. Its performance is rated at 5.2 TFLOPs compute, 164 GTexels/s, & 82 GPixels/s rates." @chilli
faraday#0862: is T0 old news around here? did you guys discuss it somewhere ?
cfoster0#4356: I don't think we necessarily discussed it, but folks did post about it here and in #research
cfoster0#4356: Seems to corroborate some other recent papers that find substantial gains to zero shot performance by training with prompts, or even just training with unrelated examples in context
faraday#0862: thanks. I don't know if it'll be #off-topic if I try to discuss any practical observations here. T0 looks amazing to me in terms of the quality and performance.
faraday#0862: https://huggingface.co/bigscience/T0pp
cfoster0#4356: Not off topic at all. Go right at it
StellaAthena#3530: Definitely not off topic. I'm excited to hear your thoughts
kurumuz#5695: 10 TFLOPS, so fast
kurumuz#5695: it's crazy
Kharr#7888: 10.4 TFLOPS vs 100+ on RTX 3080+
kurumuz#5695: GPU cores only
kurumuz#5695: 3080+ shader cores dont do 100 TFLOPS
kurumuz#5695: they use tensor cores too
kurumuz#5695: which you can use on the m1X too
Kharr#7888: I didn't see anything about tensor cores in the description?
kurumuz#5695: they do have 12 core neural engine, i dont have performance details about it
kurumuz#5695: but they have a chunky piece of silicon, it's as big as the CPU
kurumuz#5695: just need pytorch support 😢
kurumuz#5695: then we will fly
Kharr#7888: I guess we will see. I will be quite surprised if it is anywhere close to existing silicon for ML.
bw#3136: 10 tflops puts it under a desktop 3060's 12 tflops for fp32. not too bad for a laptop gpu.
Kharr#7888: Depends on cost and how long it can sustain it without cooking the machine. You can get laptops with full RTX cards too. Odds are you'll be able to buy a full RTX 3090 or two + a high end desktop for the same price.
kurumuz#5695: RTX laptops dont have 64gigs of VRAM
kurumuz#5695: and 400GB/s bandwidth from that RAM to the GPU
kurumuz#5695: also these laptops will have really good cooling systems
kurumuz#5695: they're quite thicc too
kurumuz#5695: i will benchmark it when it arrives on my hands
Awesome_Ruler_007#7922: atleast TF2.5 works
Awesome_Ruler_007#7922: and pretty well from what I hear. I might buy an M1 iMac if I get the money
Awesome_Ruler_007#7922: This here https://towardsdatascience.com/yes-you-can-run-pytorch-natively-on-m1-macbooks-and-heres-how-35d2eaa07a83 says they have native support for Pytorch
alstroemeria313#1694: that's for cpu
alstroemeria313#1694: Meaning it is compiled for ARM and is not an emulated Intel binary.
kurumuz#5695: TF runs on the GPU
alstroemeria313#1694: yep
kurumuz#5695: for m1
kurumuz#5695: oh
kurumuz#5695: yeah they dont have GPU torch support
kurumuz#5695: sorry read it wrong
alstroemeria313#1694: How is JAX support
Awesome_Ruler_007#7922: ohh, so it doesn't work for M1?
Awesome_Ruler_007#7922: well it does kinda help in development - a couple of iterations on CPU and then the rest you can put on the cloud
alstroemeria313#1694: it runs on M1 CPU
kurumuz#5695: ...
kurumuz#5695: CPU speeds are not really usable
Awesome_Ruler_007#7922: like Neural Engine
Awesome_Ruler_007#7922: which I assume is like tensor cores or something....?
kurumuz#5695: neural engine is not exposed via public api
kurumuz#5695: george hotz reverse engineered the binaries and made it work in his tinygrad repo though
alstroemeria313#1694: ahah
Awesome_Ruler_007#7922: damn
Awesome_Ruler_007#7922: doesn't make sense to me why they would do that
kurumuz#5695: <https://github.com/geohot/tinygrad>
kurumuz#5695: dunno
Awesome_Ruler_007#7922: more packages for M1 and AI means more sale of M1s for data science? 🤔
kurumuz#5695: definitely
rb#3159: unable to run pretrain script for neox due to this error, i think the LocalSlidingWindowSparsityConfig was removed from deepspeed, is this a known error? https://cdn.discordapp.com/attachments/729741769738158194/899764981497737286/Screenshot_from_2021-10-19_02-23-33.png
kurumuz#5695: Apple actually did a TF port
kurumuz#5695: we need pytorch support tho who uses TF anymore
kurumuz#5695: pytorch metal backend wen
Awesome_Ruler_007#7922: yeah, but a lot of SOTA stuff does use pytorch unfortunately
Awesome_Ruler_007#7922: buncha dummies. still, keepin my eye on that Mini 👀
kurumuz#5695: or core ML ig
kurumuz#5695: https://github.com/pytorch/pytorch/issues/47688#issuecomment-899019103
cfoster0#4356: 👀 https://twitter.com/percyliang/status/1450188510330122240?t=hqKHImaB0jt3-qEpg7a5TA&s=19
StellaAthena#3530: Well that’s disappointing
cfoster0#4356: I suppose. I wasn't expecting anything more
cfoster0#4356: This is pretty similar to the way Stanford institutionally responds to critiques, more generally
StellaAthena#3530: Fair. Maybe “sad” would be more accurate
StellaAthena#3530: Oh? I don’t know anything about the topic more generally
StellaAthena#3530: I pinged someone I know at Stanford and they said
> I'd die for Percy-as-advisor, but Percy-as-CRFM-head is different entity
StellaAthena#3530: Strategically I don’t understand what purpose it serves.
Ass covering to people who know the words “you should respond to public criticism” but don’t actually care about the details beyond making sure a box is checked? Like, nobody would actually read this and think it was a meaningful engagement with the public criticism received.
So it doesn’t appease people who are actually invested. And it doesn’t matter to people who don’t care about the criticism. That just leaves you with people who want to *appear* to care about criticism, it seems.
cfoster0#4356: Institutionally there's 0 appetite for "activism", but there's a substantial willingness to acknowledge critique even they don't plan on changing anything
cfoster0#4356: > That just leaves you with people who want to appear to care about criticism, it seems.
:gameryes:
cfoster0#4356: Remember who keeps the lights on, at the end of the day
elderfalcon#4450: Maybe version pinning? Don't know if u have cross version features tho
elderfalcon#4450: Also, totally unrelated but the main🧵 seems to have lulled, is chilli basically a level X here and also mod over on the r/machinelearning subreddit? How do you guys find the time hahaha
𓅬 gabriel_syme 𓅬#3220: Oh NO YOU DONT
𓅬 gabriel_syme 𓅬#3220: Did they discuss battery? My worry is that we end up with a loud, hot, heavy laptop that has limited battery which is everything the M1 was avoiding with Air
kurumuz#5695: all of them are better than m1, 16 inch is phenomenal though
kurumuz#5695: 22 hours on video
kurumuz#5695: so, yeah...
𓅬 gabriel_syme 𓅬#3220: Hmm ok
𓅬 gabriel_syme 𓅬#3220: I'll wait for someone to buy it first lol
𓅬 gabriel_syme 𓅬#3220: How much money is it btw, did they say?
StellaAthena#3530: Also a full time ML dev
elderfalcon#4450: https://c.tenor.com/mZZoOtDcouoAAAAM/stop-it-get-some-help.gif
elderfalcon#4450: Well, I need to move to whatever anomalous low energy density zone they live in so I can get that magic time dilation too haha. Very impressive.
elderfalcon#4450: The decision I think to go CHONK and have a neverending battery life was a total breath of fresh air for me. What a battery life, that's insane.
kurumuz#5695: very expensive. look into 13" air/pro if you dont need this much power
kurumuz#5695: i will pay 4k$ for my machine, starts from 2000 i think
𓅬 gabriel_syme 𓅬#3220: 13" pro or Air though?
kurumuz#5695: which one to buy? idk i got air personally
𓅬 gabriel_syme 𓅬#3220: yeah a bit less than $4k if you max out, dang
𓅬 gabriel_syme 𓅬#3220: good news I don't need any of that so I'll compare the pro and air
𓅬 gabriel_syme 𓅬#3220: are the thunderbolt ports for vga out?
𓅬 gabriel_syme 𓅬#3220: no right? I think that's the only problem of Air. doesn't support 2 monitors
kurumuz#5695: hmm, it doesnt?
kurumuz#5695: didnt know tbh
kurumuz#5695: new pros can do up to like 5 monitors
kurumuz#5695: pretty crazy
𓅬 gabriel_syme 𓅬#3220: think so
𓅬 gabriel_syme 𓅬#3220: that said, that's not a problem for me but for many who use this for work
nshepperd#2316: so good news, passing `settings=wandb.Settings(console="off")` seems to have stopped wandb from costing me hundreds of GB of egress on GCP
nshepperd#2316: bc it was in fact repeatedly uploading the entire 17M console logs every two seconds :goose10:
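For anyone hitting the same thing, the switch goes into `wandb.init`; with console capture off, wandb stops re-uploading stdout/stderr:
```python
import wandb

# Disable console log capture so the (large) console output is never synced.
wandb.init(project="my-project", settings=wandb.Settings(console="off"))
```
(`project="my-project"` is just a placeholder.)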
Ernomeyer#6988: Hey, did some ever use the ChestX-ray8 dataset?
I have doubts about whether the coordinates are correct when I draw a bounding box... I think not. There are two labels which are the original image pixel spacing for the x and y axes
Sorry if this question is not for this channel/group
𓅬 gabriel_syme 𓅬#3220: So if you don't mind working for facebook and want to be/move in Europe, this might be a good time for it
researcher2#9294: We did the sublime plugins, however a competitive model hasn't been trained yet. Several others were working on that but haven't heard much recently.
PeterSchmidtNielsen#8484: Hey folks. Is https://github.com/EleutherAI/gpt-neo/blob/master/configs/gpt3_2-7B_256.json the config file that corresponds to `EleutherAI/gpt-neo-2.7B`, as I would download from huggingface? More specifically, I'm trying to find the _exact_ architecture of this model.
이온#9962: RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:79] data. DefaultCPUAllocator: not enough memory: you tried to allocate 67108864 bytes.
이온#9962: Does anyone know how to solve this? I think I should allocate the memory manually.
𓅬 gabriel_syme 𓅬#3220: I don't but google might
StellaAthena#3530: @이온 It seems like you don’t have enough space.
Ernomeyer#6988: What was the name of that project (if you know it)? I would love to contribute to a code/text generation model
StellaAthena#3530: What info do you need that the HF repo doesn’t provide
PeterSchmidtNielsen#8484: @StellaAthena Aha, thanks for the suggestion to check the HF repo, I don't know why that didn't occur to me. This shows the architecture quite clearly, thanks. And I guess the above config doesn't correspond.
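For the record, the exact architecture hyperparameters can be read straight off the Hugging Face config, e.g.:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("EleutherAI/gpt-neo-2.7B")
print(config)  # hidden_size, num_layers, attention_types, vocab_size, etc.
```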
researcher2#9294: So right now we have ghpy https://huggingface.co/lg/ghpy_20k which is python only, rb is working on eval tasks for code models for benchmarking. We've had the ghpy and plugin sitting around for a while now, I'll open this repo up sometime in the next few days (it's got eleuther specific tunneling for colab currently).
At that point you could either fine-tune a better model (+ add other language support), or work on distilling ghpy so it could run faster locally on a consumer GPU, etc.
faraday#0862: Explain this like I'm five: "What happens when an unstoppable force meets an immovable object?" ... Now here is an answer from T0: "The object is moved" https://huggingface.co/bigscience/T0pp?text=Explain+this+like+I%27m+five%3A+%22What+happens+when+an+unstoppable+force+meets+an+immovable+object%3F%22 all those years.... I can cross this off my bucket list
faraday#0862: or... directly asking "What happens when an unstoppable force meets an immovable object?" yields the answer "The force is transferred to the object" https://huggingface.co/bigscience/T0pp?text=What+happens+when+an+unstoppable+force+meets+an+immovable+object%3F whichever answer you like... 🙂
rom1504#5008: Asking meme questions getting meme answers
faraday#0862: exactly right
faraday#0862: https://huggingface.co/bigscience/T0pp?text=There+are+two+ducks+in+front+of+a+duck%2C+two+ducks+behind+a+duck+and+a+duck+in+the+middle.+How+many+ducks+are+there%3F "There are two ducks in front of a duck, two ducks behind a duck and a duck in the middle. How many ducks are there?" T0pp answer: "five"
EricHallahan#1051: This discussion may be better suited to #prompting. 😉
bernaise#6161: T0: `I'm sorry Eric, I'm afraid I can't do that...`
Dwarf#6935: try monitoring your ram usage while it's running. Often times these errors can be a bit confusing cause they'll tell you they ran out of memory while trying to allocate a small amount of data, but in reality, they had allocated a ton of data, and that was just the latest allocation that ran over the limit
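A quick way to check is to log resident memory around the failing step (a rough sketch, assuming `psutil` is installed):
```python
import os
import psutil

proc = psutil.Process(os.getpid())
print(f"RSS before: {proc.memory_info().rss / 2**20:.0f} MiB")
# ... run the step that raises the allocator error ...
print(f"RSS after: {proc.memory_info().rss / 2**20:.0f} MiB")
```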
mrShiba#4412: does anyone know any discord groups for neural machine translation (NMT)?
𓅬 gabriel_syme 𓅬#3220: sphinx might, they'll be around a bit later I guess
Sphinx#2092: I do not, unfortunately. If you find one, let me know. You can probably discuss nmt in this server as well.
Louis#0144: You know
Louis#0144: It's harder and harder to find NMT people
Louis#0144: Is the field shrinking?
Sphinx#2092: I think its the opposite. NLP is just super hot right now and lots of applications are popping up.
Sphinx#2092: MT was just one of the first so now it seems overshadowed. There's still plenty of people doing it, WMT is still active, research is thriving, etc.
StellaAthena#3530: I would bet heavily that the percentage of NLP people doing NMT is at the lowest it’s been in four or so years, but would also expect that the actual number had gone up
mrShiba#4412: I guess the NMT community is kinda fractured because everyone worked mostly on the language pair they know
mrShiba#4412: like my main interest is in Japanese to English translation
Louis#0144: i love shibas omg