CRG#8707: https://twitter.com/OpenAI/status/1471529745498075144
StellaAthena#3530: Paper: https://cdn.openai.com/WebGPT.pdf
StellaAthena#3530: > We would like to thank Leo Gao, Hyeonwoo Noh and Chelsea Voss for working on future directions
zphang#7252: Nice that Leo is the first author of future directions
bmk#1476: :guilty:
bmk#1476: nah that's just alphabetical
nev#4905: I'm trying ruDALL-E on a v3-8 and timing loading times, is 10s for only one layer normal? looking at mtf-jax, it should be more like 5s
StellaAthena#3530: Maybe their code is less efficient?
nev#4905: it's my code :berk:
nev#4905: hmm, what could be the source?
StellaAthena#3530: I haven't really looked at ruDALL-E, but you're sure it's not something dumb like you're failing to do an apples-to-apples comparison of sizes?
nev#4905: might be that I'm loading only one layer and multiplying that.
nev#4905: anyway it probably won't matter for now
nev#4905: 1min 19s for compiling the entire model
nev#4905: which is a little less than `mtf`'s 2.3B
m_wAL99#1923: 0.json fix https://cdn.discordapp.com/attachments/729741769738158194/921110050196561970/16samples_widget.zip
m_wAL99#1923: 0.json https://cdn.discordapp.com/attachments/729741769738158194/921113826068410388/unknown.png
m_wAL99#1923: 5.json reach "max_num_actions": 100
"question": "Why are almost all boats white?" https://cdn.discordapp.com/attachments/729741769738158194/921113888831967342/unknown.png
m_wAL99#1923: Should develop a browser for the Internet-Augmented task, rather than lynx (a text-based browser) :thinkies:
nev#4905: why does running a 12-layer model take 100x as long as a 6-layer model
nev#4905: ah, it was a fluke
nev#4905: 819ms per batch with gradients :pog:
nev#4905: overfitting an 8-layer ruDALLE https://cdn.discordapp.com/attachments/729741769738158194/921134168497344512/unknown.png
chilli#5665: @Sid I did my talk btw - here's my slides
https://docs.google.com/presentation/d/1rTt0BR2KChDQQTks2hHUtvHxtHQKwgQHVNrmbhj0byk/edit?usp=sharing
Sid#2121: nice! any idea when/if a recording will be available?
chilli#5665: uh
chilli#5665: probably won't be for a while lol
chilli#5665: also, not sure how good my recording was - should probably have rehearsed a bit more :blobsad:
Sid#2121: this is awesome https://cdn.discordapp.com/attachments/729741769738158194/921137412112015380/unknown.png
Sid#2121: is this stuff i can use rn?
Sid#2121: given an fx module
Sid#2121: same q with the fusing
Sid#2121: also - did you have a chance to test fusing with larger scale models? and maybe compare to hand written fused kernels?
Sid#2121: also the big graph is beautiful, is that with the draw_graph function from functorch.compile?
Sid#2121: can you perform the same kind of optimizations with `aot_module` as you can with `aot_function`? sorry for all the Qs hah
chilli#5665: yeah, `aot_module` is just a wrapper around `aot_function` that passes in all of the parameters/buffers as arguments.
Sid#2121: hm, does that work even with modules that the tracer would fail to trace?
chilli#5665: yeah, and `draw_graph` is just a wrapper around the one in FX core lol https://cdn.discordapp.com/attachments/729741769738158194/921139011815358464/unknown.png
chilli#5665: oh, no
Sid#2121: ah sadge
chilli#5665: it only works with modules/functions you can trace
chilli#5665: but you can apply it to an arbitrarily small submodule
chilli#5665: and it still works fine in training
Sid#2121: ok so it's just calling trace then passing it into aot_function
chilli#5665: mmm, well, `aot_function` is doing the tracing, but yeah
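For reference, a rough sketch of this usage with the functorch API as it existed around this time (`ts_compile` was the TorchScript/NVFuser compiler functorch shipped; names and import paths may have changed since):
```py
import torch
from functorch.compile import aot_function, ts_compile

def f(x):
    # pointwise chain that NVFuser can fuse into a single kernel
    return torch.sin(x).cos() * x

# AOTAutograd traces the forward *and* backward graphs and hands each to a compiler
compiled_f = aot_function(f, fw_compiler=ts_compile, bw_compiler=ts_compile)

x = torch.randn(1024, device='cuda', requires_grad=True)
compiled_f(x).sum().backward()
```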
chilli#5665: yeah, we have some comparisons to hand-written fused kernels
chilli#5665: If you've seen lightseq, we can match their hand-written pointwise ops with AOTAutograd + NVFuser
chilli#5665: (well, match or exceed)
chilli#5665: I've tried fusing larger models, with varying degrees of success
chilli#5665: Got something like 6% improvement on a ViT
chilli#5665: 15% improvement on a TransformerEncoder layer
chilli#5665: You can also vary what you're willing to recompute, so if I allow recomputing some fairly cheap ops, I can get results like 50% memory reduction on a Resnet for 1% runtime increase
chilli#5665: same for a vision transformer iirc
chilli#5665: well... the offloading thing was just an example, there's probably a lot more work to be done to actually make it production ready
chilli#5665: For example, one issue there is that it assumes there's only one output from your function
Sid#2121: deepspeed inference fused kernels claim something like a 2-4x speedup for transformer inference, i'm not sure about lightseq tho - where do you think the gap in performance is coming from mainly?
chilli#5665: oh, I'm just talking training
Sid#2121: ah, ok can i pose the same question wrt inference
Sid#2121: (but yeah, a 15% speedup on an encoder is pretty nontrivial, that's awesome)
chilli#5665: `aot_function` doesn't really do that much special for inference compared to say, `jit.trace`
chilli#5665: although personally I've still found it to work better lol
chilli#5665: but I guess... less inherent advantages
uwu1#4864: love the graph vis! do u think there's some way to reconstitute the individual ops into e.g modules or the user defined functions?
chilli#5665: wdym?
uwu1#4864: like for the visualising your model example, I presume the graph is of the autograd ops in the model right? i was wondering if there was some way to map those ops back to the e.g nn.Module that caused them
chilli#5665: oh
chilli#5665: hmmm
chilli#5665: I think it is...
chilli#5665: requires some amount of infra work though
chilli#5665: but yes, definitely possible in theory
chilli#5665: (but not totally sure it's as easy in the backwards pass)
uwu1#4864: ah okay :) just imagining something like tensorboard graph vis that shows profiling and embeddings harvested from the modules and stuff
uwu1#4864: https://pypi.org/project/awkward/
this seems fun, also it supports AD? not sure how they make it work for records and stuff although maybe bc it only supports elementwise grad
uwu1#4864: https://indico.cern.ch/event/894127/attachments/1996570/3331173/6_-_pivarski-irishep-poster.pdf
StellaAthena#3530: https://discord.com/channels/729741769192767510/729741769738158194/920380132139626617
uwu1#4864: this seems like it would be really nice for FPGA/ASICs to reduce the memory requirements
uwu1#4864: although I'm not sure I understand how it connects to awkward
aูด#8803: Is pytorch unanimously considered better than tensorflow for developing DL now?
AI_WAIFU#2844: I think so with the exception of mobile applications
bmk#1476: no, Francois Chollet doesn't like pytorch
aูด#8803: jax?
alstroemeria313#1694: Keras
aูด#8803: Fair enough
aูด#8803: I'm using keras rn but people keep shilling pytorch to me so I kinda want to see if there's any reason why I should pick it up
aูด#8803: Fresh install so it's a decent time to try something new
alstroemeria313#1694: Chollet is the creator of Keras :)
Sid#2121: (i'm 99% certain bmk and alstro are trolling, pls use pytorch)
bmk#1476: hey, he asked for *unanimous*
Sid#2121: https://www.youtube.com/watch?v=hou0lU8WMgo
aูด#8803: ik
aูด#8803: majority opinion would be cool
random person#5234: Whats the recommended way to deploy pytorch models
random person#5234: Django?
someKindaBean#8471: what is with papers reusing names from other similar papers? i was looking at this paper on a method called PRIMER (which isn't even an acronym for the method) that uses an interesting sentence masking training strategy, and its name collides with the Google Primer paper
someKindaBean#8471: paper i was looking at: https://arxiv.org/abs/2110.08499v1
someKindaBean#8471: also, has anyone tried masking on the sentence level in their personal experiments?
𓅬 gabriel_syme 𓅬#3220: I think we're running out of acronyms
𓅬 gabriel_syme 𓅬#3220: or imagination
𓅬 gabriel_syme 𓅬#3220: or both, maybe
glazgoglabgalab#5255: This gave me an idea: what if we combined (next) sentence prediction masking with a perceptual loss? Sorta like https://arxiv.org/abs/2111.12710
kurumuz#5695: pytorch
𓅬 gabriel_syme 𓅬#3220: I wanted to try masking on sentence level for architext, the idea being that I'd mask whole spaces. Sort of an architecture-MLM training
𓅬 gabriel_syme 𓅬#3220: haven't tried it yet, or any seq2seq models seriously
chirp#4545: If your performance requirements are not very demanding, you can use any Python web server library
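A minimal sketch of that kind of deployment (FastAPI here, but any Python web framework looks similar; the checkpoint path and input shape are made up for illustration):
```py
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from typing import List

app = FastAPI()
model = torch.jit.load("model.pt")  # assumed TorchScript checkpoint
model.eval()

class Features(BaseModel):
    values: List[float]

@app.post("/predict")
def predict(features: Features):
    with torch.no_grad():
        out = model(torch.tensor(features.values).unsqueeze(0))
    return {"prediction": out.squeeze(0).tolist()}
```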
someKindaBean#8471: that's a neat concept
someKindaBean#8471: i don't know enough about perceptual loss, but it sounds like an interesting combination. thanks for the link
guywhoknowsnothing#0218: https://wandb.ai/eleutherai/gpt-thicc/reports/20B-Pretraining--VmlldzoxMTk3NjEy
guywhoknowsnothing#0218: I don't want to be a "20b when?" person, but I am curious about something.
guywhoknowsnothing#0218: What is the distinction between pretraining and training, technically?
guywhoknowsnothing#0218: Is pretraining like a run of smaller training to see if anything goes majorly wrong before training proper might begin?
cfoster0#4356: In this case pretraining = training
cfoster0#4356: It used to be that folks would "pretrain" on a general task before "training"/"finetuning" on a specific task. The lingo is just a holdover
guywhoknowsnothing#0218: Is the latter now "finetuning"?
cfoster0#4356: Yes, edited
AI_WAIFU#2844: yeah, terminology is crap
guywhoknowsnothing#0218: @cfoster0 Well thanks very much for the answer.
guywhoknowsnothing#0218: Exciting news.
guywhoknowsnothing#0218: It's been hotly anticipated, to say the least.
guywhoknowsnothing#0218: What kind of hardware is required to run a 20b model as compared to 6b?
kindiana#1016: about 3 times more
kindiana#1016: maybe 4
StellaAthena#3530: Yeah basically
bmk#1476: more parameters mean approximately linearly more memory usage
StellaAthena#3530: 3.5x as much VRAM is necessary to get the same token/s performance
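As a rough back-of-envelope (assuming 2 bytes per parameter for fp16 inference weights; activations, KV cache, and framework overhead come on top):
```py
def weight_memory_gb(n_params, bytes_per_param=2):
    # fp16/bf16 weights only - excludes activations and other buffers
    return n_params * bytes_per_param / 1024**3

print(weight_memory_gb(6e9))   # ~11 GB
print(weight_memory_gb(20e9))  # ~37 GB, in line with the ~3.5x figure above
```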
guywhoknowsnothing#0218: Can the model run in parallel across multiple GPUs? Or does it more or less need to "fit" on one.
guywhoknowsnothing#0218: (I'm guessing in VRAM)
bmk#1476: everything can be parallelized
guywhoknowsnothing#0218: Ah.
StellaAthena#3530: No you can parallelize it across machines if you really want to
kindiana#1016: you just gotta write the code for it
kindiana#1016: haha
StellaAthena#3530: (There's no need to)
StellaAthena#3530: (but you could)
guywhoknowsnothing#0218: Is that in part what CoreWeave offers?
bmk#1476: no
bmk#1476: they sell hardware
StellaAthena#3530: CW sells hours on actual GPUs
bmk#1476: you gotta figure out the parallelization yourself
guywhoknowsnothing#0218: I mean, hardware you could use for parallelization. Wasn't aware parallelization isn't needed or advised.
guywhoknowsnothing#0218: Anyway I'll see myself out before I ask something even dumber.
Congrats on the progress, looking forward to whatever it brings.
StellaAthena#3530: Oh, that's my bad. Parallelism is needed. Parallelism *across machines* is a more extreme form of parallelism than parallelism *between GPUs of the same machine*
guywhoknowsnothing#0218: Ah, the latter was what I was referring to.
StellaAthena#3530: You need about 65 GB of VRAM
guywhoknowsnothing#0218: I *think* NAI may run their 6b model on a bunch of smaller GPUs tied together as "nodes".
Spacecraft1013#5969: yeah parallelism is pretty necessary in this case, there's no single gpu (at least that i know of) that could fit the entire 20b parameter model in it + optimizer states and other stuff
bmk#1476: nodes is a super general term
StellaAthena#3530: Well, they're asking about inference which is way smaller
StellaAthena#3530: nodes = thingies that compute and are networked together, tbh
EricHallahan#1051: thingies = :thinkies:
guywhoknowsnothing#0218: I had heard the chip shortage had impacted EAI's progress towards larger models, is that accurate?
StellaAthena#3530: I'm sleepy and apparently vaguely delirious so Imma take my leave now
cfoster0#4356: Yes
guywhoknowsnothing#0218: Shit sucks.
guywhoknowsnothing#0218: My GPU just died a few weeks ago. 😵‍💫
guywhoknowsnothing#0218: Greaaaaat timing.
guywhoknowsnothing#0218: Yes please, sign me up to buy a mediocre GPU at 250% markup.
guywhoknowsnothing#0218: If I can even find one.
EricHallahan#1051: Not having a dGPU gives you exclusive access to the no dGPU club. :think:
guywhoknowsnothing#0218: I learned a fascinating thing.
guywhoknowsnothing#0218: Intel GPUs are crazy fast at video decode.
guywhoknowsnothing#0218: So my piece of shit onboard GPU is doing well with cloud PC stuff.
EricHallahan#1051: Yeah Quick Sync
guywhoknowsnothing#0218: After I finally found a subscription.
guywhoknowsnothing#0218: Because, of course, the shortage extended to them as well.
EricHallahan#1051: Quick Sync is really good at what it does.
bmk#1476: the real solution is to just not use a gpu
guywhoknowsnothing#0218: Can't get my gaming fix then.
StellaAthena#3530: learn to play chess
guywhoknowsnothing#0218: Too dumb.
bmk#1476: read a book
guywhoknowsnothing#0218: I solved the dilemma.
guywhoknowsnothing#0218: With a cloud PC.
bmk#1476: develop an extremely unhealthy work life balance and spend all of your free time on work
guywhoknowsnothing#0218: For $0.50 an hour I can play with 65ms latency.
guywhoknowsnothing#0218: In roughly 2,500 hours I would finally eclipse the price of just buying a decent GPU.
StellaAthena#3530: Get a hobby that consumes your life
Make the hobby your job
Feel restless and vaguely confused with this new concept of "Free time"
bmk#1476: :guilty:
guywhoknowsnothing#0218: Neo J 6b with a good finetune is already really impressive, I can scarcely imagine how 20b will turn out.
bmk#1476: mfw I wake up and go do ML for work, then in the afternoon when I get off work I fire up my other laptop to do even more ML
guywhoknowsnothing#0218: I am kind of wondering if the point of diminishing returns in quality of output vs cost to run is way, way below 175b parameters.
bmk#1476: and then I spend all night dreaming about ML
StellaAthena#3530: Diminishing returns starts at like 60M
bmk#1476: it's a flawless strategy, I know
guywhoknowsnothing#0218: Really?
EricHallahan#1051: I occasionally have dreams of interacting on this server lmao
bmk#1476: just recently I had a research idea come to me in a dream
EricHallahan#1051: Does it involve geese?
StellaAthena#3530: y'all're basic. I dream about conquering nations on the back of a dragon, or having the government put a hit out on me
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/921250435719901184/Screenshot_20211216-210000_Discord.jpg
EricHallahan#1051: I take that as a no.
guywhoknowsnothing#0218: @StellaAthena May I ask why do you say 60m?
bmk#1476: :goose10:
EricHallahan#1051: :goose10:
StellaAthena#3530: I eyeballed a plot
StellaAthena#3530: the curve is extremely concave
kindiana#1016: idk looks pretty straight on a log-log to me
guywhoknowsnothing#0218: ~2b to 6b felt like a huge difference to me.
cfoster0#4356: It's all diminishing returns
cfoster0#4356: Every additional increment in compute gets you a smaller increase in loss
EricHallahan#1051: Like there is a pretty big difference between knowing grammar and not knowing grammar lol
StellaAthena#3530: I think you guys aren't talking about diminishing returns
StellaAthena#3530: Diminishing returns means that performance gain costs more compute each time you increase size
StellaAthena#3530: These plots show diminishing returns https://cdn.discordapp.com/attachments/729741769738158194/921255218547687494/Screen_Shot_2021-12-16_at_11.19.12_PM.png
StellaAthena#3530: That doesn't mean you don't get more performance and it doesn't mean it's not worthwhile
StellaAthena#3530: It just means that the performance gain going from 2.7B params to 6B params is greater than going from 100B to 103.3B.
StellaAthena#3530: Which is true whether the x axis is params or is compute required to train to a fixed number of tokens
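A toy illustration of that concavity, using a power-law fit of the Kaplan et al. (2020) form (the constants are their reported fit values; treat the exact numbers as illustrative):
```py
def lm_loss(n_params, n_c=8.8e13, alpha=0.076):
    # L(N) = (N_c / N)^alpha, the parameter-count scaling law
    return (n_c / n_params) ** alpha

print(lm_loss(2.7e9) - lm_loss(6e9))      # ~0.13 loss improvement
print(lm_loss(100e9) - lm_loss(103.3e9))  # ~0.004: far smaller for the same +3.3B
```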
guywhoknowsnothing#0218: @StellaAthena Yes I was asking informally.
guywhoknowsnothing#0218: In terms of, would a say 20b model have minimal loss in output quality for the profit margin you run it at, compared to 175b.
guywhoknowsnothing#0218: For a storyteller app or a chatbot or whatever.
bmk#1476: dunno
EricHallahan#1051: Nobody knows.
guywhoknowsnothing#0218: Well all I can say is the most used implementation of you guys' Neo J 6b is giving the most used versions of OAI's and AI21's ~170b models an insane run for their money, lol.
kurumuz#5695: if you are talking about sigurd its downstream tuned
kurumuz#5695: meanwhile OAI and other models are not. not a fair comparison
guywhoknowsnothing#0218: @kurumuz Dragon is """finetuned""". :^)
guywhoknowsnothing#0218: At a whopping 100mb of data.
StellaAthena#3530: What's this "dragon" and why do you think it's the most used GPT-J model?
guywhoknowsnothing#0218: I think NAI's Sigurd is likely the most used implementation of GPT-J right now.
guywhoknowsnothing#0218: Dragon is AIDungeon's use of OAI's GPT-3 and AI21 Jurassic-Large.
StellaAthena#3530: Ah
kurumuz#5695: i dont think so
EricHallahan#1051: I highly doubt that.
kurumuz#5695: though i dont have data points for it
EricHallahan#1051: I bet the most used application is one which isn't even user facing.
guywhoknowsnothing#0218: Fair enough.
guywhoknowsnothing#0218: I suppose I meant public, commercial service.
EricHallahan#1051: Still doubt. There are a lot of services that popped up for GPT-J, and it is hard to tell which is the most popular.
guywhoknowsnothing#0218: Do you have some examples?
guywhoknowsnothing#0218: Just curious.
EricHallahan#1051: Neuro, Forefront and Hugging Face all compete pretty directly.
bmk#1476: our main objective isnt user facing stuff anyways, we mostly make these models for research
EricHallahan#1051: Research is where the ROI is for all of our models.
guywhoknowsnothing#0218: Understandable.
StellaAthena#3530: https://gpt3demo.com/apps/gpt-j-6b
https://aws.amazon.com/marketplace/pp/prodview-h5vz457l5i3lw
https://www.helloforefront.com/blog-posts/gpt-j-6b-an-introduction-to-the-largest-open-sourced-gpt-model
https://hub.getneuro.ai/model/nlp/gpt-j-6B-text-generation
https://apps.aixsolutionsgroup.com/
https://writeholo.com/write/
https://6b.eleuther.ai/
StellaAthena#3530: Here's a couple I found immmediately on google
EricHallahan#1051: Commercialization isn't too interesting to us, just a side effect.
chilli#5665: I kinda get the feeling that a lot of people don't really understand what's happening if you do something like
```
del tensor
```
chilli#5665: I think a lot of people assume that this means that the memory for `tensor` is going to be freed on the GPU
random person#5234: I mean if you do torch cuda free memory command afterwards
random person#5234: Wouldnt it?
chilli#5665: no
chilli#5665: it only frees the memory for the tensor if this is the last reference to it
chilli#5665: For example
```
val = {'a': torch.randn(3, device='cuda')}
x = val['a']
del x
print(val['a']) # Still works, which it wouldn't if `x` had been freed
```
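For a more direct view of the same thing, a small sketch using the allocator's counters (sizes are illustrative; the caching allocator makes the exact numbers fuzzy):
```py
import torch

t = torch.randn(1024, 1024, device='cuda')   # ~4 MB block
alias = t
print(torch.cuda.memory_allocated())  # block is live
del t
print(torch.cuda.memory_allocated())  # unchanged: `alias` still references it
del alias
print(torch.cuda.memory_allocated())  # now the block is actually reclaimed
# note: torch.cuda.empty_cache() only returns *cached* blocks to the driver;
# it can never free memory that live tensors still reference
```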
nev#4905: this got me so many times when I forgot to delete the scheduler
chilli#5665: I'm actually running into a really annoying issue right now with Python...
chilli#5665: I'm curious if anybody has any ideas
chilli#5665: so, basically I'm doing something like
```
args = [large number of tensors]
f(*args)
```
chilli#5665: and the thing is, the tensors are used sequentially inside of `f`
chilli#5665: so after they've been called, they can be freed
chilli#5665: but... since I'm passing these in as arguments to the function, there's inherently another refcount bump from the context above yours
chilli#5665: very annoying
chilli#5665: the only obvious solution I can think of is to make it a C++ function or something where I have finer-grained control over the memory ownership
chilli#5665: The code essentially looks something like
chilli#5665: ```
def f(*args):
    new_vals = []
    for arg in args:
        new_vals.append(arg.clone())
        del arg  # only unbinds the local name; the `args` tuple still holds a reference
    return new_vals

args = [large number of values that occupy a lot of memory]
f(*args)
```
Kharr#7888: Can your f accept a list or are you stuck using *args which is immutable?
chilli#5665: it can accept a list, sure
chilli#5665: oh, hmm
chilli#5665: lemme think about that
Kharr#7888: [].pop() will remove items from the source list within your f
chilli#5665: unfortunately, I don't think I can
chilli#5665: well, in reality, I have an actual function
chilli#5665: with arguments
chilli#5665: like
```
def f(a, b, c, d):
    ...
```
Kharr#7888: based on your description, abcd are all tensors that you want to be able to clear, right? And Python refuses to let you do that if there is any reference to the source tensor
chilli#5665: yeah
chilli#5665: Basically I wanna pass ownership to the function
Kharr#7888: Hmm, not sure how to do that when using *args which creates an immutable object. :blobsad: Lists contain only references so you can monkey with the original data and delete it.
chilli#5665: I wonder if I can do this by mucking with the interpreter state
chilli#5665: you can do this if you're passing it to a C++ function
chilli#5665: like, through pybind
random person#5234: @chilli can you do copy deepcopy?
chilli#5665: that doesn't do what I want
chilli#5665: that's very different
chilli#5665: tbh
chilli#5665: that's just generating an entirely new copy
random person#5234: I see. Sry.
nshepperd#2316: ```py
def move(xs):
    result = list(xs)
    del xs[:]  # empty the caller's list so it drops its references
    return result

args = [large number of tensors]
f(*move(args))
```
lmao
chilli#5665: apparently this doesn't even work
random person#5234: I think i misread
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/921400337158520873/unknown.png
nshepperd#2316: oh
chilli#5665: this has the same problem, no?
nshepperd#2316: yeah
chilli#5665: peak memory will basically be double your input argument size
chilli#5665: https://stackoverflow.com/questions/67010386/delete-reference-of-variable-passed-to-a-function-in-python
chilli#5665: maybe this is impossible
chilli#5665: ...
chilli#5665: what a pain in the ass
nshepperd#2316: you can wrap each of the individual arguments in a container or something
nshepperd#2316: that lets you .move() the contents out of it
chilli#5665: hmm
nshepperd#2316: and then the caller only maintains a reference to the empty container
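A minimal sketch of the container idea being described here (`Box` and `move` are illustrative names, not an existing API):
```py
import torch

class Box:
    # single-item container whose contents can be moved out, transferring ownership
    def __init__(self, value):
        self._value = value

    def move(self):
        value, self._value = self._value, None
        return value

def f(*boxes):
    outs = []
    for box in boxes:
        t = box.move()         # `t` is now the only live reference to the tensor
        outs.append(t.clone())
        del t                  # so deleting it here actually frees the memory
    return outs

args = [Box(torch.randn(1024, 1024)) for _ in range(8)]
results = f(*args)             # the caller's Boxes are empty after the call
```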
chilli#5665: damn
chilli#5665: ideally I want to do this generically
chilli#5665: though
chilli#5665: but
chilli#5665: hmm
chilli#5665: does that even work
chilli#5665: I guess maybe, yeah
chilli#5665: I think I need to do something at the C++-level for this
chilli#5665: only question to me is what a clean solution would look like
Kharr#7888: If you pass in a list of items, the list is mutable so you can empty it.
```
a = [1, 2, 3, 4]

def f(args):
    for _ in range(len(args)):
        args.pop()  # mutates the caller's list in place

f(a)
print(a)  # prints [] - the source list has been emptied
```
chilli#5665: yeah, you're right, I was confused what they were referring to
chilli#5665: hmm
chilli#5665: seems like I can manually decrement the references to the input
chilli#5665: maybe
chilli#5665: nvm
chilli#5665: I think this is possible with a pybind wrapper though
alstroemeria313#1694: @chilli can you use weak references
alstroemeria313#1694: https://docs.python.org/3/library/weakref.html
chilli#5665: interesting
chilli#5665: seems like it could work
chilli#5665: if I was able to change my function
chilli#5665: hmm
chilli#5665: also, not sure how I'd use it here
chilli#5665: yeah, nvm, I don't think it's possible to use it here
chilli#5665: the problem is that the callstack looks something like
```
def outer():
    a, b, c = ...
    function(a, b, c)  # `outer` still holds references to a, b, c for the whole call
```
chilli#5665: It doesn't matter if I make a,b,c weak references since they're still referenced outside the function
chilli#5665: and then if I replace the values with weak references then there's no way to pass it in
chilli#5665: what I effectively need is an "unique reference"
alstroemeria313#1694: oh
alstroemeria313#1694: yeah i was meaning to make the outside references weak
alstroemeria313#1694: but i guess that doesn't work
chilli#5665: yeah, but then they just get deleted
alstroemeria313#1694: bc you would need to... yeah
chilli#5665: I suspect you can do it in pybind
alstroemeria313#1694: Can you manually call `__del__()`
alstroemeria313#1694: On the tensors
alstroemeria313#1694: This sounds terrible lol
chilli#5665: hmmm
chilli#5665: I think what I'd like to do
chilli#5665: and i think it's possible
chilli#5665: is basically use pybind
chilli#5665: and make a wrapper like (not real code)
```
wrap_function(py::args args, py::function function) {
    owned_args = convert_to_cpp_owned(args)
    return function(convert_to_py_owned(owned_args))
}
```
chilli#5665: this actually has nothing to do with dynamic graph execution lol
chilli#5665: this problem only actually shows up since I have a static graph...
alstroemeria313#1694: where do the large number of tensors come from
chilli#5665: if you're interested in more context it's basically something like
```
def backward_pass(*saved_activations):
    .....
```
alstroemeria313#1694: Oh.
alstroemeria313#1694: Saved activations.
chilli#5665: so usually your activations get cleared as you use them in your backwards pass
alstroemeria313#1694: I was going to suggest passing in a generator instead of a list so you could lazy eval inside the function
alstroemeria313#1694: And then the only copy would be inside.
chilli#5665: but in this case, I have some problems since the owning args from the function call increase my peak memory usage...
chilli#5665: zzzzzzz
alstroemeria313#1694: yeah
chirp#4545: Is this why people want to rewrite stuff in Rust
chilli#5665: No you can solve this in C++ too
chilli#5665: I do think that the excessive use of shared_ptr is why people use rust though
chilli#5665: Like, in C++, people often don't trust their understanding enough to use another memory management system lol
chilli#5665: while in rust you can just do more risky things and trust the compiler will save you
uwu1#4864: people def use Rc/Arc in rust a lot too, usually the only alternative is to use an arena and index or go unsafe
Dashiell#8739: Is a shared_ptr in C++ the same thing as Rc/Arc in rust?
Dashiell#8739: I've never used C++
uwu1#4864: yeah
uwu1#4864: std::shared_ptr and std::atomic<std::shared_ptr>
uwu1#4864: i guess the cpp version is also mutable by default
Motive#9128: o/
EricHallahan#1051: Welcome!
Motive#9128: Im super interested in all this, but I havent wanted to be an issue.
chilli#5665: Oh, for some purposes yeah, shared ptr is most convenient.
But there are plenty of cases where people don't know what's safe and just default to shared ptr
chilli#5665: (including for me to be clear lol)
AI_WAIFU#2844: unique ptr ftw
uwu1#4864: true yeah, ive been following this language that uses ref counting by default but then uses rust like analysis to statically elide it. https://www.roc-lang.org/
uwu1#4864: you can also turn algos that run on immutable data to operate in place when you know the refcount is 1
chilli#5665: Unique ptr is kind of crappy though
chilli#5665: Unique ptr can be null
chilli#5665: At least compared to CS
guac#4716: (they're both crappy lol)
AI_WAIFU#2844: That's fair, one of the big weaknesses of c++ IMO is that it doesn't have good support for tuples or algebraic data types
AI_WAIFU#2844: if you try it ends up being some big::fucked<up,mess>::of variables
chilli#5665: Well, I'm not sure that's the only issue - it's more that unique ptr doesn't guarantee safety
chilli#5665: Like, people like shared ptr since for the most part
chilli#5665: If you're just passing shared ptrs around
chilli#5665: You're not gonna have memory leaks
chilli#5665: You're not gonna have null dereferences
chilli#5665: Etc.
chilli#5665: But those aren't guaranteed with unique ptr
uwu1#4864: i think they're referring to the lack of safety around moved from unique pointers (altho I don't see how that would leak rather than null ptr)
chilli#5665: I was referring to the second one with unique_ptr
chilli#5665: If anything, shared_ptr is more memory leak prone than unique_ptr
zphang#7252: https://arxiv.org/abs/2112.08429
zphang#7252: Also I didn't know putting affiliations in arXiv was a thing
Louis#0144: Who's this Horace guy I keep hearing about
bmk#1476: never heard of him
bmk#1476: if only we could convince him to write a paper with us
EricHallahan#1051: Seems like a pretty cool dude as far as I can tell.
chilli#5665: Citation needed indeed
Deleted User#0000: looks like an mlsys submission, how were the reviews?
𓅬 gabriel_syme 𓅬#3220: Reviewer 1 loved the chicken costume but Reviewer 2 said it was a bit too cocky. Fck reviewer 2, imho.
Louis#0144: https://github.com/EleutherAI/magiCARP CARP training API now available
Louis#0144: btw magiCARP works out of the box with a ViT encoder
Louis#0144: in case anyone wants to do something with CARP that involves images
Louis#0144: (ok not out of the box but it's like ~10 lines you need to change and it's super obvious what to do once you learn the encoder's subAPI)
alstroemeria313#1694: Hey can I just like, optimize a model with L-BFGS
zackt1234#6754: @Softology just discovered your blog (Visions of Chaos), Christmas came early
alstroemeria313#1694: Like it is tiny (an `nn.Linear(512, 10)`) and the whole dataset fits into GPU memory
alstroemeria313#1694: loss is squared earth mover's distance
Sid#2121: pytorch introduced L-BFGS recently
Sid#2121: i think
alstroemeria313#1694: yeah but their implementation was bad last i checked
alstroemeria313#1694: did they ever fix it
Sid#2121: never used it personally
alstroemeria313#1694: or do i need to find a scipy wrapper
Sid#2121: last time i used L-BFGS I was still using sklearn
alstroemeria313#1694: i want the actual optimal model lol
alstroemeria313#1694: If I can't do L-BFGS feasibly maybe I can just train for a while using full batch gradient descent
EricHallahan#1051: Oh are you switching the output to a categorical?
alstroemeria313#1694: yep
AI_WAIFU#2844: They used to do that a really long time ago.
alstroemeria313#1694: right but. am i going to run into problems with like, the squared EMD loss not having a nice Hessian
alstroemeria313#1694: It's squared so it's *probably* nice?
alstroemeria313#1694: L-BFGS notoriously fails hard on L1 type losses.
AI_WAIFU#2844: no idea tbh
alstroemeria313#1694: Even if they are convex
alstroemeria313#1694: I'm pretty sure normal EMD would be really bad.
alstroemeria313#1694: i could still do it w/ gradient descent with decaying step sizes
alstroemeria313#1694: .
alstroemeria313#1694: except i have to make it take multiple param tensors.
alstroemeria313#1694: Fortunately there are only two.
alstroemeria313#1694: oh, this is better
alstroemeria313#1694: so i just need to split/unsplit the tensors to optimize
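A minimal sketch of that split/unsplit wrapping around SciPy's L-BFGS-B (`full_batch_loss` is an assumed stand-in for the squared-EMD objective, not her actual code):
```py
import torch
from scipy.optimize import minimize

model = torch.nn.Linear(512, 10)
shapes = [p.shape for p in model.parameters()]
sizes = [p.numel() for p in model.parameters()]

def objective(flat):
    # unsplit the flat float64 vector back into per-parameter tensors
    chunks = torch.tensor(flat).split(sizes)
    params = [c.view(s).clone().requires_grad_() for c, s in zip(chunks, shapes)]
    loss = full_batch_loss(params)   # assumed: loss over the whole in-memory dataset
    grads = torch.autograd.grad(loss, params)
    return loss.item(), torch.cat([g.reshape(-1) for g in grads]).numpy()

x0 = torch.cat([p.detach().reshape(-1) for p in model.parameters()]).double().numpy()
res = minimize(objective, x0, jac=True, method='L-BFGS-B')
```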
nshepperd#2316: oh this is for the aesthetic classifier?
alstroemeria313#1694: yep~
nshepperd#2316: is 512 small enough to do full newton's method? ehehe
alstroemeria313#1694: 5130
nshepperd#2316: ohh yeah
nshepperd#2316: maybe not ^_^
alstroemeria313#1694: converged w/ l-bfgs in 1184 iterations
alstroemeria313#1694: using full batch gradients
alstroemeria313#1694: er, 1081 iterations
alstroemeria313#1694: 1184 fevals
alstroemeria313#1694: time taken: 5.1213s
alstroemeria313#1694: on V100
alstroemeria313#1694: Why does it not reach the same optimal solution in all cases...
alstroemeria313#1694: Like with different starting random seeds.
alstroemeria313#1694: bet it's the convergence check
alstroemeria313#1694: like it expects the gradient to be kinda larger than it is
alstroemeria313#1694: it uses the absolute value of the gradient elements to decide when to stop.
alstroemeria313#1694: `CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH`
alstroemeria313#1694: ```
CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH
Warning: more than 10 function and gradient
evaluations in the last line search. Termination
may possibly be caused by a bad search direction.
```
alstroemeria313#1694: fuck it i have a V100 i will do it in float64
alstroemeria313#1694: ```
STOP: TOTAL NO. of f AND g EVALUATIONS EXCEEDS LIMIT
time taken: 94.52088689804077
final loss: tensor(0.0572, device='cuda:0', dtype=torch.float64)
```
alstroemeria313#1694: `F = 5.72413119638972845E-002`
AI_WAIFU#2844: Is your objective convex?
alstroemeria313#1694: i don't know.
alstroemeria313#1694: it involves a softmax so you can like, change all the biases on the output layer by the same amount and get the same loss
alstroemeria313#1694: solutions starting from different random seeds are *very* different
alstroemeria313#1694: so i am just assuming it is not convex.
alstroemeria313#1694: mb i can try gradient descent with decaying lr as a check
alstroemeria313#1694: to make sure i really can't get much lower loss
alstroemeria313#1694: `F = 5.72417576984257173E-002`
chilli#5665: Pretty good
Louis#0144: Hey if anyone wants to check over magiCARP and give feedback
Louis#0144: I'd appreciate it
Louis#0144: https://github.com/EleutherAI/magiCARP
Louis#0144: Even just like skimming the design
alstroemeria313#1694: `5.72417506662234454E-002`
alstroemeria313#1694: oh if i use a sane init i get solutions that are much closer
alstroemeria313#1694: i was just using N(0, I) as an init
alstroemeria313#1694: but then i used pytorch's default init and copied it into the flat params tensor i was optimizing
IDK#1046: Where are you getting your V100?
alstroemeria313#1694: datacrunch.io
tpapp157#3643: Came across this interesting dataset of ~3B online chess games recently https://database.lichess.org/. Time to grind it down with contrastive learning and see what falls out the other side because why not.
alstroemeria313#1694: Hey so. If I wanted to get large amounts of like, human ratings of aesthetics of images. Over very diverse types of images.
alstroemeria313#1694: How would I go about getting this
ilovescience#3282: i'd assume you'd write up some instructions to put on amazon mechanical turk?
alstroemeria313#1694: yeah but $$$
ilovescience#3282: maybe you could put a form for people to fill out on twitter?
EricHallahan#1051: I want to say that you are thinking along the wrong path. What sources could you go out and scrape to get that data?
uwu1#4864: make the images into NFTs
alstroemeria313#1694: NFT value has very little to do with aesthetic quality
tpapp157#3643: You'd also probably need to track uids for raters because different people are going to like different types of aesthetics.
ilovescience#3282: how would that help? clearly the people who buy NFTs are not worried much about the aesthetics of the image
alstroemeria313#1694: ...I could train a classifier on CLIP embeddings where the two classes are "came from cc12m" and "came from shutterstock/alamy"
alstroemeria313#1694: I would need non-watermarked scrapes of the latter sites ofc.
tpapp157#3643: In other words certain aesthetics are going to be very polarizing. You probably need to be careful about simply taking the mean rating.
alstroemeria313#1694: the models i want to train can predict the *distribution* of ratings
alstroemeria313#1694: like logits for each class from 1 to 10.
tpapp157#3643: Ok that's more interesting. Still might want take more of a recommender approach if possible.
EricHallahan#1051: https://en.wikipedia.org/wiki/Wikipedia:Featured_pictures
EricHallahan#1051: could be useful for a small sample
uwu1#4864: I feel like some ways of gathering this will give you Thomas Kinkade and others would give you Mark Rothko
tpapp157#3643: Don't art websites have rating or like systems?
EricHallahan#1051: I think she is looking for natural images?
tpapp157#3643: Photography websites?
CRG#8707: Reddit engagement?
alstroemeria313#1694: i am looking for *extremely diverse* data so not just art and not just photos
tpapp157#3643: I'm not sure you'll find anywhere to scrape that.
BoneAmputee#8363: reddit
kurumuz#5695: do you mean NSFW
alstroemeria313#1694: ah
kurumuz#5695: :berk:
kurumuz#5695: also danbooru has scores right
alstroemeria313#1694: that's kind of outside the scope of the thing i was thinking of
alstroemeria313#1694: but any techniques i come up with will probably be applicable
kurumuz#5695: i assume danbooru scores would work perfectly for this
uwu1#4864: Pinterest?
kurumuz#5695: danbooru also has SFW subset
kurumuz#5695: so you can work on there but dunno if diverse enough
finetune#0907: db scores are probably strongly correlated with percentage of skin colored pixels tbh
finetune#0907: might not be the dataset for that
Dashiell#8739: Reddit will be biased in a lot of ways--the front page of r/Art is like 98% naked women
Dashiell#8739: but I think is probably your best bet
Dashiell#8739: you can do something like the Anthropic preference pre-training and just do paired "which of these pictures got more upvotes" as a task
alstroemeria313#1694: oh?
alstroemeria313#1694: paired how
Dashiell#8739: https://arxiv.org/abs/2112.00861
Dashiell#8739: just pair them randomly
alstroemeria313#1694: you mean you have to input two images to the model to use it?
Dashiell#8739: and train the model to predict which one did better
Dashiell#8739: well, Anthropic was training a text assistant, so they used texts
Dashiell#8739: but yeah
alstroemeria313#1694: oh
alstroemeria313#1694: how do i use it in inference then
Dashiell#8739: I think the idea is that you finetune your model with this task and thereafter it's more likely to generate the "better" / "ranked higher" things?
Dashiell#8739: I'd have to re-read the paper
alstroemeria313#1694: oh
alstroemeria313#1694: i need a thing i can backprop through though
alstroemeria313#1694: idk where i would get the second embedding from
Dashiell#8739: what do you mean "second" embedding?
cfoster0#4356: *Clippy voice*: It looks like you're wanting to backprop through a learned reward model trained with binary preference data
alstroemeria313#1694: well if i have to use two embeddings as input
alstroemeria313#1694: in inference
alstroemeria313#1694: i was going to train a model that took a single embedding as input and output predicted reward or a predicted reward distribution
alstroemeria313#1694: then backprop through it
uwu1#4864: maybe KNN to the training set to find a bunch to compare against?
alstroemeria313#1694: this is what i am doing now
uwu1#4864: you could also apply link prediction to the preference graph to fill it in and then learn a model that predicts the global aesthetic preference vector for each item
cfoster0#4356: I think the model would take in a single embedding and produce a score for it, but you'd supervise it by training it to output a higher value on the more upvoted picture?
Dashiell#8739: right
cfoster0#4356: Like RLHF
Dashiell#8739: in the finetuning you'd run each image through the same model separately
uwu1#4864: that could run into problems if the binary ratings are not transitive right
Dashiell#8739: probably, but part of the cool thing about the Anthropic results is that they were pretty robust to domain shift
Dashiell#8739: at least in terms of human preferences for text
tpapp157#3643: Right you just run images through the model twice, output your aesthetic distribution for each and compute your loss between them. Maybe add an additional projector model on top to output a binary logit.
cfoster0#4356: They do `loss = log(sigmoid(reward(better) - reward(worse)))`
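Written out as a loss to minimize, a minimal sketch (where `reward_model` is any scalar-output network; the names are illustrative):
```py
import torch.nn.functional as F

def preference_loss(reward_model, better, worse):
    # maximize log sigmoid(r_better - r_worse), i.e. minimize its negative
    margin = reward_model(better) - reward_model(worse)
    return -F.logsigmoid(margin).mean()
```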
Dashiell#8739: what is RLHF?
uwu1#4864: I wonder if you could do it self supervised by having augmentations that make the image worse
cfoster0#4356: Reinforcement learning from human feedback https://arxiv.org/abs/2009.01325
EricHallahan#1051: Reinforcement Learning from Human Feedback.
cfoster0#4356: I think the abbreviation is only a thing around this server...
Dashiell#8739: ty ty
EricHallahan#1051: Nah it must exist on LW.
Dashiell#8739: this actually reminds me that I want to try using r/PhotoshopBattles to do Image --> Funny transformation training for a generative model
Dashiell#8739: I should start scraping that
tpapp157#3643: Could do a bit of this (jpeg artifacts etc) but you'll be biased toward the augmentations you use.
ilovescience#3282: is this done with feedback from a single user?
cfoster0#4356: No
cfoster0#4356: I think they hired some labelers?
ilovescience#3282: oh, it would be interesting if you could make a model more personalized with these sorts of approaches
EricHallahan#1051: That sounds like something primed for wireheading to me.
Sid#2121: Question: when people do model EMA - does it have to be the last (contiguous) k steps? i.e you're EMA-ing steps n, n-1, n-2, ..., n-k? or can I EMA steps n, n-100, n-200, ... or some similarly distant steps, and still see benefit?
kindiana#1016: i think its usually called something different
Some Point Process#3793: An EMA is characterized by underdamped motion over a gradient field and can just be thought of as a function of the current velocity and the current gradient
EricHallahan#1051: Just view `n, n-100, n-200, ..., n-100k` as `n, n-1, n-2, ..., n-k`, problem solved?
Sid#2121: Ok, I could have phrased the question better lol
Sid#2121: "Within what kind of bounds (in terms of number of samples between the latest and earliest weights) is doing model EMA still useful?"
alstroemeria313#1694: You can do that but it's easy to just do it every step
Sid#2121: Is it still easy in a large model distributed across a load of GPUs
Sid#2121: I guess so actually lol
Sid#2121: i'm just lazy
alstroemeria313#1694: You mean just doing an EMA update every k steps?
Sid#2121: I mean, i just really want the saved checkpoint to be EMA'd
alstroemeria313#1694: I have done it every five and it was fine
Sid#2121: unless it's super beneficial to EMA during training too?
Sid#2121: I haven't really read much about it tbh
alstroemeria313#1694: Bc I was using a lookahead optimizer with k=5
EricHallahan#1051: EMA is used extensively in GANs.
alstroemeria313#1694: EMA weights typically only used in inference
Sid#2121: yeah that's what i thought
Some Point Process#3793: the only info you should need is just the current velocity or "momentum" term, the "damping" coefficient (which also gives you 1-d), presumably a constant, and the saved weights
alstroemeria313#1694: Except for like momentum contrastive schemes
Sid#2121: ohh yeah ofc you can use the momentum from the optimizer states
alstroemeria313#1694: No I mean contrastive where the other net is an EMA of the current net
alstroemeria313#1694: Optimizer state momentum is different
Sid#2121: responding to @Some Point Process
Some Point Process#3793: Yes I do think so with maybe 50% confidence
Some Point Process#3793: I think adam has more states though than just momentum tho. But if we're talking just vanilla momentum-sgd I'm fairly sure
Some Point Process#3793: Each weight will need its own momentum term ofc
Sid#2121: yeah and also the momentum is an ema of the past gradients, not the parameters
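For reference, a minimal sketch of parameter EMA as it's typically done (update cadence `k` and decay are whatever fits your budget; this is the standard recipe, not NeoX code):
```py
import copy
import torch

model = torch.nn.Linear(512, 10)      # stand-in for the real model
ema_model = copy.deepcopy(model)

@torch.no_grad()
def ema_update(ema_model, model, decay=0.999):
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        # ema <- decay * ema + (1 - decay) * current
        ema_p.mul_(decay).add_(p, alpha=1 - decay)

# inside the training loop:
# if step % k == 0:
#     ema_update(ema_model, model)
```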
kurumuz#5695: are there good image to image pretrained models
EricHallahan#1051: Paired or unpaired?
kurumuz#5695: https://arxiv.org/abs/2111.14887v1
seems quite strong
kurumuz#5695: Paired meaning translated image will be similar to the original?
Some Point Process#3793: The whole idea (IMV) is that the issue with sgd (also GD) is it's only doing finite difference approximations, so these are linear approximations of the gradient of the loss at the current coordinates (current weights). Modeling the current weights as a "true" function of the "analytical" gradients calculated by backprop requires an integral over infinitesimal increments in the weights which aren't available. But SGD+momentum works well enough (maybe even just sgd)
Some Point Process#3793: > "true" function
if we were to think of the optimization procedure as acting on a physical system via this gradient of the loss (which describes some continuous flow diff-eq)
Some Point Process#3793: I haven't taken ODE though so I might be totally wrong or "off" on my vocabulary etc
alstroemeria313#1694: You mean integrating the gradient flow ODE is done with a finite step size?
alstroemeria313#1694: Is there an SDE corresponding to SGD?
Some Point Process#3793: Yeah. Finite as in a numerical approximation (which no longer makes it an analytical solver)
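In symbols, the picture being gestured at (standard notation, not from the conversation): gradient descent is the explicit-Euler discretization of the gradient flow ODE,
```latex
\frac{d\theta}{dt} = -\nabla L(\theta(t)),
\qquad
\theta_{k+1} = \theta_k - \eta\, \nabla L(\theta_k)
```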
Some Point Process#3793: Covariance matrix adaptation (evolutionary algorithm) comes to mind
Some Point Process#3793: But it's not a differential equation, I think
alstroemeria313#1694: Thatโs second order I thought, too
Some Point Process#3793: Huh looks like you're right (<https://en.wikipedia.org/wiki/CMA-ES>)
𓅬 gabriel_syme 𓅬#3220: paired is usually domain adaptation but supervised (e.g. Pix2Pix), or at least that's the one I'm used to calling paired im2im
rom1504#5008: I think you should first define what you mean by aesthetic
rom1504#5008: Here is one example of definition: something valuable to people because of how it looks
rom1504#5008: For this definition, there is a simple way to collect massive amounts of data
rom1504#5008: The price of things
rom1504#5008: For example, clothes, jewelry, bags,...
rom1504#5008: But also cars, boats,...
rom1504#5008: Or even hotel rooms, houses, apartment
rom1504#5008: Price is going to be pretty correlated with valuable
Some Point Process#3793: ehh, I think basic aesthetic sensibility can be identified with some low dimensional manifold. Even a baby can pick out more "beautiful" looking pictures, people, drawings etc
Some Point Process#3793: just a guess tho
Some Point Process#3793: so maybe instead of having to collect lots of extra data you can do some exploratory data analysis on existing examples (semi supervised etc)
Some Point Process#3793: with the help of learned models ofc
rom1504#5008: I'm not sure what you mean
rom1504#5008: What is "existing examples"
Some Point Process#3793: Whatever was used to train clip
Some Point Process#3793: With existing LMs you can do sentiment analysis (positive words/"evaluations" might be clustered similarly in latent space)
Some Point Process#3793: for instance
rom1504#5008: Ah so you mean take an image/text pairs dataset then find the positive texts and consider these pairs to be more beautiful?
Some Point Process#3793: Yeah
rom1504#5008: Yeah idk
rom1504#5008: Sentiment analysis is pretty bad still, no ?
rom1504#5008: There are some very direct signal of good/not good that is given directly in numbers by people in the internet
Some Point Process#3793: Depends what you mean by bad then, I guess. and how much alstro wants to improve the existing baseline (seems to already be working well actually but generating similar looking images)
rom1504#5008: Among those : prices, likes, number of views, number of comments, number of chats about them,...
rom1504#5008: I mean that sentiment analysis is ok at identifying good/not good but not that great for more nuanced sentiments, afaik
Vertex#8056: Hi everyone. I'm an experienced Python developer with basic math (mainly college statistics) who wants to learn more about transformers. I was looking into getting a book, perhaps machine translation is a good place to start? Any recommendations would be greatly appreciated!
ilovescience#3282: these are some resources I found helpful when studying about Transformers:
https://twitter.com/iScienceLuvr/status/1471032149100797954
HanakoMasaki[Cactuar]#0015: You should have seen it too nsheppard we were hyping InfoLOOB up like it was the next great coming when it was literally doing nothing
HanakoMasaki[Cactuar]#0015: I'd love to hear about how to effectively get results from it though
HanakoMasaki[Cactuar]#0015: also because it needs to be said amazing colab you've put together
nshepperd#2316: the power of confirmation bias hah
nshepperd#2316: i usually generate batches of four with a single difference between prompts, like this
```py
template = 'a {item} phoenix, trending on artstation'
items = ['glitch art','impressionist','watercolour','statue of a']
all_title = template.format(item=','.join(items))
title = [template.format(item=item) for item in items]
```
nshepperd#2316: `InfoLOOB(vit32.embed_texts(title), clip_guidance_scale, vit32, make_cutouts, 0.5, cut_batches)` and i set the `lm` parameter to 0.5 here
HanakoMasaki[Cactuar]#0015: thank you for the reply and the advice. Here's hoping CLOOB can even be part of the equation for this colab eventually
HanakoMasaki[Cactuar]#0015: what does {} do when a term is put in it ?
HanakoMasaki[Cactuar]#0015: I've never seen someone include that in a prompt before
HanakoMasaki[Cactuar]#0015: it basically slots one of those in at random or in order?
nshepperd#2316: that's python syntax
nshepperd#2316: it's just slotting those items in in order
nshepperd#2316: it is the same as `title = ['a glitch art phoenix, trending on artstation','a impressionist phoenix, trending on artstation', 'a watercolour phoenix, trending on artstation', 'a statue of a phoenix, trending on artstation']`
HanakoMasaki[Cactuar]#0015: but without eating up the character limit for one
HanakoMasaki[Cactuar]#0015: which is a little thing that irks me with diffusion compared to VQGAN, the stringent character limit
HanakoMasaki[Cactuar]#0015: thanks for taking the time to even talk to a hobbyist like me
HanakoMasaki[Cactuar]#0015: I've gotten lots of enjoyment from the use of your colabs
flowpoint#7450: the code looks good to me,
running pure on cpu doesn't work (at least for me) though because it depends on Nvidia drivers
sometimes you just wanna test/debug on cpu, so that could be improved
StellaAthena#3530: It is not supposed to run on CPU
Louis#0144: we do need to make amp optional though
Louis#0144: lmao
Louis#0144: the deepspeed refactor @Dashiell is working on will do that
Louis#0144: cpu support would be helpful for debugging
Louis#0144: anyway
Louis#0144: i gotta get back to cohere work
Louis#0144: cant talk about this rn
Louis#0144: (Something we should consider eventually is adding neox support to magicarp)
Gurkenglas#7362: Where'd #the-book go?
EricHallahan#1051: We archived it as there was minimal activity over the past few months.
Gurkenglas#7362: I don't see it in the archive.
EricHallahan#1051: Oh then @Daj must have forgotten to make it visible.
StellaAthena#3530: You can see the archive?
Gurkenglas#7362: Why are we forbidden from writing into archived channels?
EricHallahan#1051: Some channels in the archive are visible to all members.
StellaAthena#3530: "archived" = "deleted, except we don't like destroying information"
Allowing people to comment in archived channels defeats the purpose
EricHallahan#1051: They are inactive or completed projects.
Gurkenglas#7362: What's the purpose? I thought it is to not clutter up the channel list.
EricHallahan#1051: They are always available to pull back out of storage should they become relevant again.
EricHallahan#1051: A large draw of the server is the immense amount of knowledge found in past conversations. If we were to hide them all that would make them inaccessible to search.
Gurkenglas#7362: I'm not saying they should be hidden, I'm saying one should be able to write into them.
EricHallahan#1051: That would effectively defeat the point of archiving?
Gurkenglas#7362: I repeat, what is the point of archiving?
EricHallahan#1051: To clean up the list of active channels?
Gurkenglas#7362: It is clean irrespective of whether we can write into archived channels, though.
Spacecraft1013#5969: if you could write into archived channels then they would be active channels, which would therefore clutter the list of active channels
StellaAthena#3530: They'd just be harder-to-find active channels
skoetje#4856: Short-time lurker, first-time poster here! Looking into using the Pile. The site seems down lately, but for now I'm mostly interested in what the data looks like. Is there a small sample that I can check before downloading the entire dataset?
Spacecraft1013#5969: I believe the github has links where you can download the individual components of the dataset, so you can probably download a few of the smaller components
chilli#5665: https://twitter.com/chhillee/status/1472693287857262592?s=21
chilli#5665: I think this is an interesting discussion
crypdick#8564: @alstroemeria313 possibly a good use-case for the Optometrist Algorithm (https://www.nature.com/articles/s41598-017-06645-7). it just requires pairs of samples to be ranked against each other
(sorry if someone already recommended something in a similar vein, I cant keep up with the post volume here)
alstroemeria313#1694: Ah
uwu1#4864: the wiring can be turned into a wifi antenna which software on the laptop could access
StellaAthena#3530: By colluding with the company that made your laptop, yes. Otherwise, no.
EricHallahan#1051: This seems pretty #off-topic, no?
alstroemeria313#1694: They would probably just hack it and access it through the internet
uwu1#4864: probably after the power Brick you could do it directly w the laptop power consumption as has been shown https://en.m.wikipedia.org/wiki/Power_analysis
uwu1#4864: oops this isn't off topic
alstroemeria313#1694: Power cable stuff is a specialized attack, I'm not sure it happens unless someone very sophisticated has an interest in you specifically.
StellaAthena#3530: Hacking isโฆ
EricHallahan#1051: I've heard rumors that #general is just a facade for the true #off-topic.
EricHallahan#1051: Or maybe it was the other way around. ¯\_(ツ)_/¯
alstroemeria313#1694: I'm still paranoid ever since the Snowden leaks
kurumuz#5695: off-topic is the true general
EricHallahan#1051: Nah just setup an X10 bridge and connect it to the laptop.
EricHallahan#1051: Easy powerline communications lol
cfoster0#4356: Really grown tired of this Great LLM Understanding Debate... feels like a wedge issue/"scissor statement" at this point
<https://slatestarcodex.com/2018/10/30/sort-by-controversial/>
bmk#1476: which debate is this in particular? the one that's "but do LMs *really understand*"
cfoster0#4356: Yeah. There have been a couple of articles rekindling it the past week or so, from folks I otherwise respect
bmk#1476: I seem to have missed all of them
cfoster0#4356: Probably good for your health lol
tpapp157#3643: Get all the major LMs and make them have the debate for us. That way it's settled.
tpapp157#3643: I think the argument kind of misses a key distinction. Understanding is something only an intelligent system can do and a model on its own is not an intelligent system. It's simply a mathematical function that structures information. But a powerful model, when incorporated as a part of an intelligent system, can enable powerful understanding. That's like asking an encyclopedia if it understands, obviously not, it is an inanimate object. But the structured knowledge it contains, when incorporated as part of an intelligent system, can enable powerful understanding. Which also brings up the point that understanding is a spectrum ranging from zero to infinity, not a binary state.
𓅬 gabriel_syme 𓅬#3220: I like to think of those 2 things in conjunction, the intuitionistic and classical point of view. I feel it's true that there is an intuitionistic spectrum to things, a continuous scale of intensity as you say. It makes sense to have such a thing for these systems. But I also feel there are classical points in the world, a specific threshold above which something new enters it. So I do expect these systems to have that moment, that threshold. I don't think they are there yet, or if they are we don't know how to coax it out of them.
cfoster0#4356: There isn't a right or wrong to the intentional stance, but people feel very strongly about how it should be applied
cfoster0#4356: If we weren't confused about what makes a thing an agent, we might have better grounds to say we shouldn't treat models as if they were agents. But atm I don't think it's ruled out
cfoster0#4356: But we humans are *absurdly* passionate about what cases we feel it's (not) appropriate to model a system as an agent, hence the recurring debate ๐
𓅬 gabriel_syme 𓅬#3220: Unironically, the alternatives proposed by people who blame these AI models for not being intelligent are absolutely focused on tasks where these AI models thrive. I always found that funny as hell (albeit a bit besides the above point)
Louis#0144: If you are improving InfoLOOB shouldnt NTXEnt also improve?
Louis#0144: (and if it doesnt, does that mean you're weighting your negative examples too high?)
tpapp157#3643: An 'agent' or AI is an intelligent system, a model is not. Intelligence requires four things. 1. The ability to observe the environment (current state), 2. The ability to understand what actions can be taken in the current state, 3. The ability to make predictions about those possible actions, 4. The ability to choose and take an action based on those predictions. ML models are often incorporated into several or even all of those pieces of an intelligent system but they're certainly not necessary.
bmk#1476: it's easy to make a definition of agency or intelligence. it's hard to make a *good* definition
cfoster0#4356: Yes, for example an LM would fulfill the above requirements and yet I think applying the intentional stance to LMs is definitely *weird*, as we've all remarked
tpapp157#3643: No it doesn't. An LM is a model, a mathematical function. It cannot perceive the environment, it cannot take actions. It simply has inputs and predetermined outputs.
alstroemeria313#1694: maybe. you are not just optimizing infoloob, you are optimizing infoloob on hopfielded latents
Louis#0144: hm
Louis#0144: maybe I should look at InfoNCE on hopfield latents
alstroemeria313#1694: *accuracy* should improve but i am not 100% sure infonce will improve monotonically with infoloob+hopfield
alstroemeria313#1694: the cloob paper tried it as a loss and it wasn't better than plain infonce
Louis#0144: yes but I want some way to measure the tail distribution
Louis#0144: lol
alstroemeria313#1694: ahh
cfoster0#4356: Fine. An LM wrapped in a sampling loop (as LMs almost always are)
tpapp157#3643: Yeah it's reasonable to classify that as an intelligent system. It perceives the prior context, the LM provides predictions, some sampling logic chooses the best option.
bmk#1476: I think the line between "function" and "function wrapped in a loop" is not the dividing line that cuts reality at the joints
tpapp157#3643: I think you give the concept of intelligence too much credit. Intelligence ranges from inanimate to omniscient. There's a lot of room in there for all different levels of intelligence.
bmk#1476: I mean, you're arguing that the LM alone is not agentic at all
bmk#1476: and only when put into a loop is it a nontrivial amount agentic
tpapp157#3643: Right, it's the loop which defines intelligence.
bmk#1476: and I think that's not at all a thing that a useful definition should do
bmk#1476: to be blunt I think that's a really dumb definition of intelligence
bmk#1476: I think under a reasonable definition the agenticness or intelligence of the loop+function should be close to that of just the function
bmk#1476: on the 2x2 grid of things that {perform well on eval tasks, perform poorly on eval tasks} x {have a loop, have no loop}, I think the former correlates much more with my intuitive definition of which part of intelligence is really the part we care about
bmk#1476: not saying that the former *is* intelligence, but it's a lot closer to what I'd want from a reasonable definition
bmk#1476: or put differently, I think a function that perfectly simulates one timestep of a human but doesn't have a loop attached should be pretty high on the intelligence spectrum
tpapp157#3643: A function cannot by definition "perfectly simulate one timestep of a human" on its own. It requires an intelligence loop to do that.
tpapp157#3643: Intelligence at its core is ability to perceive and react to the current environment with intention. That requires a loop.
tpapp157#3643: Without the loop of perceiving the environment, predicting the outcome of actions, choosing and taking an action, that's simply inanimate.
tpapp157#3643: I think it's important to be precise about terminology to manage these sort of misconceptions. People throw around phrasing like "this model is intelligent" when what they really mean is "the decision making process which this model enables is intelligent". Because a model is inanimate, it cannot gather its own inputs, it cannot act on its own outputs.
tpapp157#3643: And if you think that the other steps in the loop are not necessary for intelligence then I recommend you replace those steps with a random input generator and a random action selector and reevaluate the performance of your system.
Cable#9899: I wanted to ask if there is a way to get VQGAN to produce a higher resolution. For reference, I have VQGAN+CLIP with a 3090 yet I am limited on resolution. I see the bot in this server is capable of producing 2048x2048 if not higher. If someone could help me, let me know and I would be glad to learn, thank you so much!
BoneAmputee#8363: `I see the bot in this server is capable to produce 2048x2048` this must have been from an `.esrgan` use? or an optimistic user's input
BoneAmputee#8363: the vqgan gpus are set to put out something around 512x512
BoneAmputee#8363: though you can go higher if you do vqgan decoding in sections
BoneAmputee#8363: without using more vram
Cable#9899: sorry for the @ but thats what i mean
Cable#9899: im not sure how because even then i see people do it on different servers and such on personal machines yet they never say anything
Cable#9899: I understand the ability of using upscaling and such but it can only look so good
tpapp157#3643: I forget if VQGAN is fully convolutional or not. If it is then scaling output resolution is just a matter of scaling the input dimensions.
StellaAthena#3530: Itโs not, itโs a transformer + convolution
Cable#9899: I know this might sound dumb, but how does it do that? If it creates the picture in 512x512, for example, how would it accurately scale to a higher resolution? I know some things like DLSS on Nvidia's side will glitch out or have major artifacts
Cable#9899: sorry once again, im new to this and im just curious
EricHallahan#1051: Depends what part of the model we are talking about.
EricHallahan#1051: The encoder/decoder easily scales.
gabriel_syme#3220: woah this adjacency-driven, architectural layout data augmentation technique is really crazy
gabriel_syme#3220: I reached about 100million unique layout descriptions now, after 4 generative steps
onealeph0#1502: the-eye seems down without any signs of recovery for two weeks now... any updates when `the pile` will be available again if at all?
skoetje#4856: this worked, thanks!
nshepperd#2316: idk if it's just me but i can't make any sense of the webdataset docs
nshepperd#2316: there are lots of examples but they don't actually explain what anything is
StellaAthena#3530: @skoetje @onealeph0 If either of you are looking to train on GPUs with GPT-NeoX I have a copy of the processed dataset I can share.
nev#4905: what is wrong with kaggle https://cdn.discordapp.com/attachments/729741769738158194/922906408679260200/Screen_Shot_2021-12-21_at_20.40.17.png
nev#4905: is that how it used to look like?
skoetje#4856: Thanks @StellaAthena! I was able to download the opensubtitles dataset and now have an idea of what the data should look like. I'm trying to collect data for other languages besides English, as I have access to quite a lot of high quality text data through my work. So I don't need the full English data set right now. Hope I can convince my bosses to make it public at some point hehe
Samip#6468: Hey guys, approximately when was the GitHub data for the Pile dataset collected? I wanted to do some analysis on the new code from GitHub gpt-neo is not pretrained on.
StellaAthena#3530: All repositories that were **created after** October 2020 should be good.
Samip#6468: Ok thanks. Just to be clear, including October 2020 right?
StellaAthena#3530: Yes
StellaAthena#3530: After September 30th 2021 to be precise
Samip#6468: awesome, thanks!
bmk#1476: keep in mind that there still may be code overlap
bmk#1476: so you should do filtering anyways
Samip#6468: yeah, I will keep that in mind
Samip#6468: lol it's quite obvious but you meant sept 30th 2020 right and not 2021?
StellaAthena#3530: Yes lol
wingated#7362: @Sid Hi Sid - I'm putting together a large NSF grant to fund the creation of a GPU cluster dedicated to training large-scale LMs. I'm wondering if ElutherAI would be interested in participating?
Sid#2121: Absolutely
Sid#2121: Let me start a group chat with you and some eleuther folk, and you can give us more details?
wingated#7362: Cool! It's an MRI grant, due in January. We're calling the cluster the "LanguageLens", and the idea is that we would make it available to academic researchers and also community organizations like Eleuther.
wingated#7362: Group chat would be perfect.
StellaAthena#3530: If anyone has a research project that's ready to go right now and needs ~2,000 V100 hours DM me ASAP. I have a lot of highly constrained credits that will expire at the end of the year that I won't be able to use. Unfortunately you can only get a single V100 pod because the grant is dumb.
Kia#2550: Ow wow, this is some neat things :thinkies:
Spacecraft1013#5969: 2000 hours in 240 hours until end of year ๐
Spacecraft1013#5969: wait... i read "V100 pod" as "V100" lol nvm
StellaAthena#3530: 8 * 240 ~ 2,000 ๐
Spacecraft1013#5969: although you are running out of hours to use your hours
ilovescience#3282: could this be the Spell Grant? ๐
minimario#1709: any ocaml fans here? thinking of finally trying out some of these ocaml-torch or ocaml-tensorflow bindings to do some small project if anyone wanted to play a bit with me ๐
un.known#2928: Hi, does anybody know the algorithm that nft makers use to combine multiple assets and get the maximum number of unique combinations? I want to do it just to create some overlay packs for photoshop but can't find anything about it.
EricHallahan#1051: This is probably not the place to ask for that.
un.known#2928: Thought it may be related to AI in any way
TY#4345: i am testing training speed of a GPT-XL (~1 billion params) on A100s (80GB). Interestingly, I found using **half of the GPU memory with batch size 22** gives faster speed (32.8 samples/second) than using **max GPU memory with batch size 55** (29.8 samples/second). Does this sound reasonable?
Sid#2121: That's pretty surprising to me
EricHallahan#1051: Yeah that seems pretty odd.
TY#4345: i will try some more different batch sizes.
TY#4345: some more numbers
TY#4345: https://cdn.discordapp.com/attachments/729741769738158194/923141609405698058/unknown.png
naclbbr#9203: Just curious, did you see any spike or valley in GPU usage or GPU memory usage (like one process waits for another)?
TY#4345: haven't checked that yet, i only recorded the above numbers so far. i will do some more comprehensive tests later.
Kia#2550: Nope.
rom1504#5008: Well that just says you have a bottleneck somewhere
rom1504#5008: Maybe the data loader ?
alstroemeria313#1694: i'm not sure, compute VGG feature maps or CLIP embeddings of the images and find/flag the ones with lowest pairwise distances?
alstroemeria313#1694: at some point i am going to have to figure something like it out for deduplicating training sets of images.
alstroemeria313#1694: also checking which items in the training set a generated image is closest to, to check for memorization/overfitting.
tpapp157#3643: Use any sort of pretrained network you want to calculate image latent vectors. Then sample your images inversely proportional to the local density. This can help a lot if you have an unfiltered dataset with significant variance in local density across the distribution.
tpapp157#3643: It's pretty typical for large scraped datasets to have orders of magnitude difference in local density between common and uncommon regions of the distribution. This in turn will strongly bias the features your model allocates capacity to learning.
tpapp157#3643: I'm honestly kind of baffled why the DL community seems so stuck on uniform random sampling as the basis for model training.
alstroemeria313#1694: hm how do i get local density
alstroemeria313#1694: like some sort of radial basis function kernel?
alstroemeria313#1694: mean distance to the k closest neighbors?
tpapp157#3643: There are a number of ways you can estimate it. A relatively simple one (that I believe UMAP uses) is to measure the distance between a point and its K-th nearest neighbor.
tpapp157#3643: There's not really a singular correct option, since all you're really looking for is a relative weighting to smooth out dense and sparse regions of the dataset.
alstroemeria313#1694: *nods*
tpapp157#3643: Don't stress too much about it, find something that seems to work pretty well for your dataset, and you'll be fine.
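A minimal sketch of the k-th-nearest-neighbor weighting described above, assuming a matrix of precomputed embeddings (CLIP, VGG, or anything similar); the function name and the choice of k are illustrative:
```python
import torch

def inverse_density_weights(embeddings, k=16):
    # Estimate local density from the distance to the k-th nearest neighbor
    # (the same trick UMAP uses); points in sparse regions get larger weights.
    d = torch.cdist(embeddings, embeddings)            # O(n^2), fine for modest n
    kth = d.topk(k + 1, largest=False).values[:, -1]   # +1 skips the zero self-distance
    return kth / kth.sum()

# weights = inverse_density_weights(clip_embeddings)
# sampler = torch.utils.data.WeightedRandomSampler(weights, num_samples=len(weights))
```
The same k-NN distances can double as a near-duplicate flag for the deduplication use case above: rows whose nearest-neighbor distance is near zero are likely duplicates.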
Louis#0144: oh yeah this sounds likely
onealeph0#1502: I was going to train my perceiver based model, and I believe a copy of the processed dataset is better than nothing. I appreciate it very much.
onealeph0#1502: can I DM you?
StellaAthena#3530: Sure!
rom1504#5008: Did you find that making the density more "uniform" (probably not the right word) has good consequences on training?
I understand it will make it so the model will allocate its capacity to more diverse things, but is that good? Maybe the fact that some example clusters are more dense means they matter more?
rom1504#5008: Or maybe you'd advise a more advanced density reduction by explicitly defining what cluster should have some density based on the model objective?
tpapp157#3643: Yeah it's a tradeoff, like all things. And ultimately the best option is going to depend on the specifics of your dataset and what you're trying to do with it. But in general it's pretty typical for there to exist very dense regions of the sampled data distribution with lots of near-duplicate data points and very sparse regions of the sampled data distribution. What you really want to avoid is the model putting a ton of effort into learning tiny meaningless variations of the dense regions at the expense of ignoring the more sparse regions and balancing the dataset is a tool to help with that. The core problem is that maximum likelihood will tend to collapse the learned distribution to the dense regions.
rom1504#5008: Yeah i see, that makes sense.
I think we would probably benefit from such an analysis/rebalancing for laion datasets. We have 400m clip embeddings precomputed (and soon a lot more), so it could be done on top of that.
tpapp157#3643: Especially for tasks like pretraining, the goal is to train the model such that it learns the entire data manifold. In this context, the sampled distribution of the dataset is incredibly imbalanced relative to an "ideal" uniform distribution, assuming that all regions of the manifold are equally important to learn.
tpapp157#3643: True uniformity isn't really necessary, of course, you can get a lot of benefit by just smoothing out the extremes. If you've got sampling imbalances of 1000:1 in your dataset between dense and sparse regions then even reducing that by a few orders of magnitude will result in significant gains.
hotgrits#0196: I'm tryna disable asserts in Python in Colab, which I believe should be done with `!export PYTHONOPTIMIZE="True"`, but `!python -c "assert False"` still gives an AssertionError when testing it. Any idea what's going wrong?
bmk#1476: are you not able to add flags?
bmk#1476: you could also just literally iterate over all the python files and remove the asserts with a regex
bmk#1476: bit of a hack but it would work
EricHallahan#1051: You cannot set the environment variable in a separate shell than you run the program in and expect it to apply.
hotgrits#0196: `%env PYTHONOPTIMIZE="Yes"` Made it stop, but asserts in an imported module still go off...
EricHallahan#1051: Are you trying to run python code in the cell or launch a separate process?
hotgrits#0196: Within a cell I've imported piqa for a loss function, and within a cell I'm using it. But the code for that function from piqa is not within a cell.
nev#4905: so it's in the same process.
bmk#1476: just grep through site_packages and remove all asserts
hotgrits#0196: Yeah it's lookin that way
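For reference, the grep-and-patch hack bmk suggests could look something like this; it is destructive, will miss multi-line assert statements, and the paths and regex are illustrative, so treat it as a last resort:
```python
import pathlib
import re
import site

# Comment out top-level assert statements in every installed package.
for path in pathlib.Path(site.getsitepackages()[0]).rglob("*.py"):
    src = path.read_text(errors="ignore")
    patched = re.sub(r"^(\s*)assert\b", r"\1pass  # assert", src, flags=re.M)
    if patched != src:
        path.write_text(patched)
```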
wilbown#7317: Anyone else here working along either of these specific paths?
https://sebastianrisi.com/self_assembling_ai/
https://bair.berkeley.edu/blog/2021/01/05/successor/
strikingloo#8381: Hi Everyone!
I'm new to this discord, I'm a Computer Science graduate (hopefully finishing my studies in a few months with my dissertation) who loves deep learning and reads papers/watches courses as a hobby.
I have experience working with Python and training big models, but never at the scale of Eleuther or doing anything half as interesting.
I was wondering if I could maybe contribute to the projects in any way! Right now I'd say I'm at a level where I know most of the Deep Learning related algorithms published in recent years in theory (I especially like NLP and Generative Models), but haven't actually implemented them or tried them out (usually due to budget or compute constraints).
As a reference I did Berkeley's online Unsupervised Learning course.
Louis#0144: Self ass
Louis#0144: Lmao
Deleted User#0000: Hey, does anyone here know how one would run GPT-Neo or GPT-J-6B on two GPUs in parallel, a 3060Ti and Tesla P4 specifically, in the same system without SLI or NVLink.
bmk#1476: heterogeneous compute bad
AI_WAIFU#2844: painfully
StellaAthena#3530: You don't
Deleted User#0000: I tried using DeepSpeed with GPT-Neo-2.7B but it loaded more into the 3060Ti than the P4 and OOM'd, and even with Neo-1.3B (which either gpu can run fine) it filled their VRAM and OOM'd, when the model on just a single gpu didn't get near to filling it.
bmk#1476: sell your GPUs and buy two identical ones
Deleted User#0000: Both are 8GB cards.
AI_WAIFU#2844: or just get 1 card with enough vram
Deleted User#0000: probably not worth it at all with the prices these days.
bmk#1476: doesn't matter, heterogenous is a bad idea and cursed
Deleted User#0000: and I use the 3060Ti as my main GPU
bmk#1476: well that's your problem lmao
StellaAthena#3530: An airplane and an oil trucker both have 10 wheels, but that doesn't mean you can drive them side by side
bmk#1476: your system is eating up memory on the 3060ti
bmk#1476: just get a headless machine with two identical GPUs
Deleted User#0000: I know, but even a small model used up all the vram on both gpus, when it didn't even come close to filling up a single gpu on its own
StellaAthena#3530: Sounds like you should do more debugging
Deleted User#0000: Currently strapped for cash at the moment sadly.
bmk#1476: well we can't help with that
Deleted User#0000: Yeah, otherwise I would definitely get identical gpus or a single GPU with enough VRAM and run it headless in my server.
Deleted User#0000: If anyone knows some insane optimization tweaks or anything else for running GPT-J-6B within 8GB of VRAM, please let me know.
Deleted User#0000: Probably not possible though.
kurumuz#5695: run the 8bit model
kurumuz#5695: should fit
Deleted User#0000: ?
StellaAthena#3530: **Welcome!** A good way to get started is to become familiar with using these models in practice. There's a lot of things you could get involved with in terms of doing engineering work, if that's something you're comfortable with learning more about. I recommend playing around with some of our codebases; even if you can't train a large model, getting an understanding of how they operate and running them for a bit will make contributing to our work a lot easier.
You can find the codebase we have developed for training large models on GPUs here: https://github.com/EleutherAI/gpt-neox
You can find the codebase we have developed for training large models on TPUs here: https://github.com/kingoflolz/mesh-transformer-jax
You can find the codebase we have developed for evaluating models here: https://github.com/EleutherAI/lm-evaluation-harness
You can query GPT-J through your browser here: https://6b.eleuther.ai/
#interp-archive and #contrastive are the main channels focused around projects that involve creating new methodologies right now. I'm leading #interp-archive, so feel free to ping me with questions after you check out the pinned posts.
We also have a rather large list of engineering to-dos that are intended to lead to papers once someone gets around to them. Off the top of my head, we have projects involving training LMs to play games, examining how adversarial examples scale, and some stuff with Decision Transformers waiting for someone to pick them up. You can find a list of some ideas here: https://github.com/EleutherAI/project-menu/issues
Deleted User#0000: I haven't heard of an 8 bit model of it, just float16
kurumuz#5695: @Deleted User probably move this to #gpt-j as well
strikingloo#8381: Thank you Stella! I'll check GPT-NeoX and lurk around the #contrastive channel/read the pinned post.
Are the to-dos the ones on the *issues* link?
StellaAthena#3530: > Are the to-dos the ones on the issues link?
yup
strikingloo#8381: thanks!
tpapp157#3643: Something I've been thinking about. It's common these days in Chess to use the latest greatest AI agents to evaluate the board state (logit of winning likelihood) and suggest best next moves. This is great but the core problem is that these agents are trained under the assumption of optimal play (that both players are able to identify the optimal moves no matter how obscure). This, in turn, means that the board evaluations and move suggestions are completely unable to account for non-optimal play (in other words, human-level play) and paradoxically this makes the agent's evaluations and suggestions non-optimal and a rather poor learning tool.
So I've been mulling over that it may be fairly straightforward to train an autoregressive LM on strings of moves from a game database, but conditioned on the rating of the players and the game outcome. The idea being that the model could provide board evaluations and suggest moves that are relevant to the skill level of the actual players.
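As a sketch of what that conditioning could look like, the game header below is made up (it is not from any existing dataset or engine): prepend both ratings and the outcome as control tokens so the model can later be sampled conditional on a skill level:
```python
def format_game(white_elo, black_elo, result, moves):
    # Hypothetical control-token header; everything after it is the move list.
    header = f"<welo:{white_elo}> <belo:{black_elo}> <result:{result}>"
    return header + " " + " ".join(moves)

# format_game(1500, 1480, "1-0", ["e4", "e5", "Nf3", "Nc6"])
# -> '<welo:1500> <belo:1480> <result:1-0> e4 e5 Nf3 Nc6'
```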
Some Point Process#3793: AlphaZero for one is conditioned on the past trajectory of moves (up to a point). maybe this has the effect of indirectly conditioning on the strategy (or quality thereof) of the opposing agent etc. But yeah it seems to me there's some sort of a "distributional" assumption that's ignored through self-play etc where you're just drawing from capable agents (opponents)
Louis#0144: Contrastive is mostly focused on building out our training infra rn
Louis#0144: And data collection
Louis#0144: Like we're doing some experiments I guess
Louis#0144: But nothing big rn
StellaAthena#3530: > This, in turn, means that the board evaluations and move suggestions are completely unable to account for non-optimal play (in other words, human-level play) and paradoxically this makes the agent's evaluations and suggestions non-optimal and a rather poor learning tool.
This is kinda true, but only to a very limited extent and it depends a lot on what you mean by 'learning tool" (e.g., humans? What level? Inverse reinforcement learning?).
People who are grandmasters and deeply study very niche and weird lines can beat engines sometimes by deploying special anti-engine strategies, but chess engines will beat any chess player who hasn't specifically studied defeating engines from any objectively balanced "normal" position (or, indeed, from a losing position with high frequency). They certainly won't self-destruct against suboptimal play.
StellaAthena#3530: From the POV of teaching humans, this can be problematic because the engine makes a move that prophylactically defends against a stratagem that no human would have come up with. But it's nevertheless true that that is a very good move to make in the position. It may not win fastest, but it wins most frequently which is what matters.
StellaAthena#3530: I don't see how this is fundamentally different from saying a beginner can't usefully learn from studying grandmaster games though.
StellaAthena#3530: @tpapp157 Maybe I'm misunderstanding your point though? Can you help me understand which of the several options I touch on you have in mind?
tpapp157#3643: Not really. The assumption of optimal play leads to a number of shortcomings. First, as you point out, it leads to a strong bias toward extremely defensive play because it must assume the opponent plays optimally, and that weeds out any strategy that cannot guarantee a win (even if in normal human play that strategy is often very successful). Second, it assumes future optimal play on the part of the player. It's common in these scenarios for an engine to suggest a move that is very good, but only if the player can identify an obscure but optimal ten move sequence of future moves; otherwise the move is terrible.
StellaAthena#3530: Yes, making a bad move after making a good move can make the two-move combination a net bad. But that doesn't mean that the initial good move was wrong.
tpapp157#3643: Also worth noting that modern engines are far beyond even the best players. In the recent world championship, which was lauded as featuring the highest level of chess play by humans ever, the engine analysis was routinely showing better move options that the humans missed.
StellaAthena#3530: It feels like you're implying that a chess engine isn't able to defeat someone who plays well but suboptimally. Is that something you believe?
bmk#1476: empirically, this doesn't matter
bmk#1476: sure, the value function will underestimate the reward at a board state by overestimating the opponent, but honestly, who cares
bmk#1476: this provides a lower bound on how much reward the model can get in that state
StellaAthena#3530: Are you a chess player? At what level? Where did you hear this?
There were a couple phenomenal *individual games* but it wasn't anywhere near the "highest level of chess play by humans ever" and the FIDE commentators basically said as much on stream. Magnus was far from his best, and Nepo made multiple game-losing one-move blunders that other top GMs saw immediately.
tpapp157#3643: No not at all. The engine will win because it is better. What matters is that the engine's suggestion to the player is not useful because the player cannot meet the assumption of optimal play. For example, it's common for an engine evaluation to say 100% chance for white to win because it has identified a 10 move sequence that guarantees checkmate, but this evaluation is incorrect because no normal human would be able to identify that sequence.
StellaAthena#3530: If what you care about is teaching mediocre humans how to play better, then why doesn't the exact same complaint apply against studying grandmaster games?
You can also literally turn the thing that bothers you off. You can simply train it to play good but not amazing moves, or limit its look-ahead, or explicitly program it to assume suboptimality.
StellaAthena#3530: People don't do this the overwhelming majority of the time because most research is interested in producing very good chess engines as opposed to mediocre ones, but this is a component of how chess.com does cheating detection, for example.
tpapp157#3643: It does apply to studying grandmaster games for a beginner as well. Many GM level moves are really terrible unless you understand exactly what you're doing and can play at the GM level. If you're a beginner and try to play a GM line rather than a simpler but more robust line you'll usually lose.
gabriel_syme#3220: I wonder if jina's finetuner can work nice for this. It's a super straightforward library
gabriel_syme#3220: so..a Decision Transformer?
StellaAthena#3530: I think training transformers to generate moves conditional on a rating is interesting, but I think your stated motivation makes no sense as you can literally turn the thing that bothers you off
tpapp157#3643: You seem to be completely misunderstanding what I'm trying to suggest. The point is that top level chess engines are not a useful learning tool for normal human play. But a more useful engine conditioned on player rating could be developed that actually understands the expected level of play at that rating level and can suggest evaluations and moves relevant to that level.
gabriel_syme#3220: i'm up for that, my dataset is also ready and i'll finally give it a shot
StellaAthena#3530: I'm confused because I'm under the impression you think you're proposing something novel, but what you're talking about exists and is widely used
tpapp157#3643: No I haven't seen any tool of this sort. People either use top engines like the latest stockfish, or they use play databases that simply show common moves.
StellaAthena#3530: Chess.com has a dozen or more
StellaAthena#3530: Streamers play them all the time
tpapp157#3643: Are you just talking about AI opponents? That's not the point at all.
StellaAthena#3530: What's the difference between "AI opponents" and "engines for playing chess"?
StellaAthena#3530: I have to run but don't let my lack of understanding prevent you from pursuing this.
bmk#1476: there are different strengths of stockfish
bmk#1476: you can choose which level you want to play against
Sid#2121: Ehhh kinda. The way Stockfish lowers its rating is iirc to just add some randomness to the moves - it's not at all like playing a human player with the same elo. but there was an engine specifically designed to emulate how lower level humans play / make mistakes and I think a lot of the AI opponents stella is referring to may use that? :citationneeded:
Sid#2121: I want to say its called meena or something but I'm pretty sure that's a lm
Sid#2121: Ah, maia. https://arxiv.org/abs/2006.01855
gabriel_syme#3220: ah that's a nice one, thanks!
Sid#2121: I'm reading this convo from the bottom up lol so apologies if this has already been mentioned and/or is completely irrelevant
Sid#2121: OK seems pretty relevant lol. Although I think a DT conditioned on ELO of both players would be even more interesting
EricHallahan#1051: Obligatory http://www.tom7.org/chess/weak.pdf, Section 2.5
bmk#1476: I thought weaker stockfishes did like less search or something
Kia#2550: Did Retro Use The Pile?
bmk#1476: so unrelated but the alignment angle of this paper is pretty :thonk:
bmk#1476: so what they're saying is they are trying to get AZ to behave like a human, right?
bmk#1476: so.. why not just.. do behavior cloning, like the original alphago?
bmk#1476: this has the same vibes as that one paper that talks about how CNNs are too translation invariant and proposes a hack to break the invariance
rom1504#5008: it seems to me that tpapp157 was talking about an engine that assists a human with some moves while playing against humans or bots
and stella was talking about a human playing alone against a bot
hence the misunderstanding
are there engines that can help you during a game on chess.com?
alstroemeria313#1694: the which
alstroemeria313#1694: Did they try inputting Fourier Features of a coordinate grid on aux channels?
bmk#1476: worse
alstroemeria313#1694: Oh
bmk#1476: they added one plane with the x coordinate and one with the y coordinate (normalized to [0,1] or something)
alstroemeria313#1694: Oh.
alstroemeria313#1694: Yeah that's not as good.
bmk#1476: anyways the way they package up their solution is probably a major contributing factor to why I find it so frustrating
bmk#1476: they show a test case where it would obviously not work very well given how CNNs work, and then they claim this failure is surprising
cfoster0#4356: https://eng.uber.com/coordconv/ ?
bmk#1476: yeah that one
EricHallahan#1051: https://arxiv.org/abs/1807.03247
EricHallahan#1051: There's the arXiv link for anyone too lazy to scroll all the way to the bottom of the blog post.
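For reference, the coordinate-plane trick bmk describes amounts to something like this before a convolution (a minimal sketch, normalizing both planes to [0, 1]):
```python
import torch

def add_coord_channels(x):
    # x: (batch, channels, height, width); append a y-plane and an x-plane
    # so the convolution can condition on absolute position.
    b, _, h, w = x.shape
    ys = torch.linspace(0, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
    xs = torch.linspace(0, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
    return torch.cat([x, ys, xs], dim=1)
```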
cfoster0#4356: Huh this is from the (now-) MLC crew
bmk#1476: ~~yet another reason eleuther is better than MLC~~
ilovescience#3282: So I want to do speech-to-text on a custom domain but I don't have any audio for this custom domain. I instead have plenty text data from my custom domain. Is there a way to incorporate this text data to improve the quality of speech-to-text on my custom domain?
StellaAthena#3530: How unique is the "custom domain"? Is it just words that aren't uttered in normal conversation (e.g., technical terms) or does it have unique sounds (e.g., translation)
ilovescience#3282: yeah technical terms... scientific terms, acronyms, etc.
nostalgiahurts#3408: I was thinking of Maia too. for reference, here's their website: https://maiachess.com/
nostalgiahurts#3408: a common technique I've seen is to train an LM on the text, then use it to guide inference. e.g. "fuse" an n-gram model into the beam search by adding the LM score to the acoustic model score. (this is shallow fusion--there's also deep fusion, cold fusion, etc.) you can also use an LM for rescoring candidates. and so on
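At its core, shallow fusion is just a weighted sum of log-probabilities when ranking beam candidates; a minimal sketch, where the weight `lam` is a tunable assumption:
```python
def fused_score(acoustic_logprob, lm_logprob, lam=0.3):
    # The LM is trained on in-domain text only; lam trades off how much
    # its score is allowed to steer the acoustic model's hypotheses.
    return acoustic_logprob + lam * lm_logprob
```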
ilovescience#3282: how well does that work when I have new words in my custom domain that the original speech-to-text model will have not seen (but an LM will have seen)?
nostalgiahurts#3408: uh, I've never actually tried it myself, so I don't know if it's the best method. but it is a very common one. maybe page 9, section B of this paper is helpful? https://arxiv.org/abs/2111.01690
it talks about domain adaptation for end to end ASR models
ilovescience#3282: this is interesting, lots of good references... some of those papers do look pretty outdated though
nostalgiahurts#3408: hmm, I'd have thought they'd be recent since the paper is new, but I didn't look too closely at that section
umami#2304: Probably a long shot but is anyone here good at building things with cmake (preferably in combination with pytorch/tensorflow)?
chilli#5665: Depends on what you need to build
chilli#5665: And whether itโs Pytorch
chilli#5665: lol
umami#2304: I guess it's more of a distribution question
https://pytorch.org/cppdocs/installing.html#minimal-example
There's always this gotcha about copying dlls in most examples on windows
kurumuz#5695: >gives gpt an interpreter
kurumuz#5695: i guess i should do this in a VM
umami#2304: On macos it's easy to bundle the libs in the executable since they already have the structure for it, but is that doable on Linux and Windows?
kurumuz#5695: time to build the AI waifu innit
asara#0001: could grab one of those VMs that are used for pentesting examples which intentionally have a lot of security vulnerabilities, and then mess around with GPT (or codex/copilot, of course) until they successfully make an exploit on their own (bonus points for escaping the VM!), what could go wrong
kurumuz#5695: yeah that is interesting for sure. I want to finally build myself the all knowing waifu though
asara#0001: I think that is being worked on by many groups right now, for better or worse
kurumuz#5695: :berk:
kurumuz#5695: yeah for sure
AI_WAIFU#2844: it's getting to be about that time eh
uwu1#4864: I feel like wasm would be a good choice
uwu1#4864: isolated and deterministic and a simple ISA
uwu1#4864: or python so they can read their own code
bmk#1476: this is hilarious out of context https://cdn.discordapp.com/attachments/729741769738158194/923757496852709386/unknown.png
Louis#0144: its funnier with context
Louis#0144: ngl
uwu1#4864: all that matters is the context the future AI sees it in
kurumuz#5695: Fully serious btw. The path is more clear to me than ever.
Spacecraft1013#5969: I am in full support of this endeavor so long as it's open source (definitely not because i want it)
bmk#1476: am mildly opposed to waifutech
bmk#1476: reject wireheading
gabriel_syme#3220: I don't even know what it is
kurumuz#5695: I do reject it as well
kurumuz#5695: it's different.
nshepperd#2316: where's the waifutech that will make me into a waifu
AI_WAIFU#2844: that already exists
bmk#1476: Hudson River Trading
nshepperd#2316: i'm trying that but it converges too slow. need more params
AI_WAIFU#2844: what? no just get a vtuber avatar
nshepperd#2316: lmao
AI_WAIFU#2844: and a good voice changer + voice lessons
AI_WAIFU#2844: it'll take you 6 months
AI_WAIFU#2844: it's only wireheading if you make it wireheading
bmk#1476: ~~something something searle's anime girl room~~
Kazumi#1297: https://mobile.twitter.com/VectorOfBasis/status/1473688040539361283
Twitter is freaking out because of WebGPT
ColdCall#4288: That seems to be jumping the gun a little bit
Kazumi#1297: that's what I've been trying to tell my friends, but they're convinced this is the final trigger for AGI and that we're doomed now
Kazumi#1297: that because it had access to internet by clicking links, it could have made money by doing turk jobs and make money, then buy its own servers
thenightocean#6100: really a cool story that uses this as a plot element. How AGI restricted to web browsing found a way to break through this limitation https://www.amazon.co.uk/Crystal-Society-1-Trilogy/dp/1530773717/ref=sr_1_1?crid=2ULL1NTEFQ2QO&keywords=crystal+society&qid=1640359193&sprefix=crystal+societ%2Caps%2C117&sr=8-1
Sid#2121: If you remember to *cite* crystal society to signal you're aware, that's AI safety :think:
nshepperd#2316: it is a mildly :firealarm: thing to do i think
AI_WAIFU#2844: It's pretty bad, they've set a precedent for just connecting shit to the internet
nshepperd#2316: and doing reinforcement learning on it too...
Kazumi#1297: it's the logical next step from document retrieval
cfoster0#4356: The road to ๐๏ธ is paved with logical next steps
Louis#0144: Oh dear AGI overlord pls turn me into a goose paperclip
Louis#0144: I volunteer
StellaAthena#3530: Plz keep off topic to #off-topic
cfoster0#4356: I don't think it's unreasonable to respond that way. Like, the WebGPT folks say out loud "Unboxing your AI makes it more capable, empirically", other groups copy and dangerously extend that approach (as they've done with other major developments from labs like these), and then whatever OAI-internal safeguards exist no longer matter
quinn#9100: does anyone know any entrepreneurship servers?
nostalgebraist#3542: obviously a big can of worms, but i'd be *more* scared of an AGI if i knew all its predecessors had been boxed away from the real world. i don't want the entire world outside the box to be OOD
nostalgebraist#3542: if boxing is eventually going to fail because the thing is too smart, then i'd prefer the thing to have some built-in robustness to unboxing at the time it fails
alstroemeria313#1694: Hey does the classifier-free guidance trick work on language models.
alstroemeria313#1694: Like can you get a continuation that is *more* conditioned on a prompt
alstroemeria313#1694: By doing two forwards, one with the prompt and one without it.
alstroemeria313#1694: And sampling from `uncond_logits + (cond_logits - uncond_logits) * guidance_scale` with `guidance_scale` > 1
alstroemeria313#1694: And then you append the sampled token to both the prompt and the neutral thing and repeat.
alstroemeria313#1694: i guess at some point the cond and uncond logits would stop differing that much
alstroemeria313#1694: Because they would both be conditioned on the same sampled tokens.
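A sketch of that sampling loop, assuming a HF-style causal LM whose output exposes `.logits` (the scale, step count, and plain multinomial sampling are illustrative):
```python
import torch

@torch.no_grad()
def guided_sample(model, cond_ids, uncond_ids, scale=2.0, steps=64):
    for _ in range(steps):
        cond_logits = model(cond_ids).logits[:, -1]
        uncond_logits = model(uncond_ids).logits[:, -1]
        # The guidance formula from above, applied to next-token logits.
        logits = uncond_logits + scale * (cond_logits - uncond_logits)
        tok = torch.multinomial(logits.softmax(-1), num_samples=1)
        # Append the same sampled token to both contexts, as described.
        cond_ids = torch.cat([cond_ids, tok], dim=-1)
        uncond_ids = torch.cat([uncond_ids, tok], dim=-1)
    return cond_ids
```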
ayushkaushal#1786: I am trying to access the Pile Dataset: https://the-eye.eu/public/AI/pile/, but it is taking forever to load and gives the error - "`This site can't be reached the-eye.eu took too long to respond.`". Can anyone suggest how I can troubleshoot this? (My internet speed is > 400 Mbps, tried on multiple browsers and incognito as well.)
alstroemeria313#1694: it's down rn
ayushkaushal#1786: Okay.
ayushkaushal#1786: Any alternative ways of accessing it?
bmk#1476: http://eaidata.bmk.sh/data/pile/ but this is only temporary and not officially supported at all so please don't spread it too far and don't depend on it existing
bmk#1476: also it's slower than the eye
ayushkaushal#1786: Thank you.
gdawg16#0493: hello my favorite discord of ai geniuses
Spacecraft1013#5969: Merry Christmas to all who celebrate
Teemochu#8740: this tbh
nev#4905: where is kuru when we need him so much
Sparkette#4342: I'm not sure whether I personally would do it (depends on the specifics I guess) but one thing I definitely would be opposed to is banning it. If someone wants to do something with their own body/mind, then IMO that's their business and their business only, and nobody else has any place making rules.
Sparkette#4342: But I do realize that governments have a terrible track record when it comes to respecting this, so I'm not particularly optimistic here.
Daj#7482: Merry Christmas everyone!
ColdCall#4288: Merry Christmas!
Tinytitan#5596: ๐
Teemochu#8740: :padorufast:
gabriel_syme#3220: Enjoy the holidays everyone! And stay safe
alstroemeria313#1694: @chilli hey is there an easy way to do "lazy tensors" with pytorch
alstroemeria313#1694: like they have metadata but the data is loaded into memory from disk on first access
alstroemeria313#1694: (the use case is not running out of main memory when loading a big model from disk, they can be loaded on transfer to GPU then the copy in main memory discarded)
alstroemeria313#1694: we could replace dict/ordereddict with a lazy version but then the tensors would load from disk on metadata access
StellaAthena#3530: @kurumuz @finetune have a way to do this
alstroemeria313#1694: is it manually split checkpoints?
Louis#0144: Make an announcement with this pls https://cdn.discordapp.com/attachments/729741769738158194/924305629823254558/a_42d5d89057e7d0352e894e2bd5430629.gif
StellaAthena#3530: I mean yes, but they also have a lazy loading thing
finetune#0907: fsvo manually: https://github.com/finetuneanon/transformers/blob/gpt-neo-localattention3-rp-b/src/transformers/modeling_utils.py#L419-L468
instantiating this with device="cuda" will load the data straight to gpu when accessed
finetune#0907: split checkpoint format is saved like this:
```python
import os
import torch

def save(model):
    os.makedirs("chpt", exist_ok=True)
    checkpoint = {}
    # Save each parameter/buffer to its own file; m.pt just maps names to filenames.
    for i, (name, tensor) in enumerate(model.state_dict().items()):
        checkpoint[name] = f"b{i}.pt"
        torch.save(tensor, f"chpt/b{i}.pt")
    torch.save(checkpoint, f"chpt/m.pt")
```
Spacecraft1013#5969: what would be the point of this? if you're gonna save the whole state dict into the `m.pt` file then why have the b files?
finetune#0907: m.pt only has the b...pt filenames
chirp#4545: I think torch supports memory mapped tensors
chirp#4545: But not sure if it works for loading models
Spacecraft1013#5969: ohh nvm i read it wrong lmao
Heav#5118: why does the vqgan thing produce such vague strange objects?
ColdCall#4288: What thing?
Heav#5118: whatever thing is used for the .imagine command of the bot.
ColdCall#4288: Oh right.
Heav#5118: i saw something something GLIDE recently and it seemed to make actual coherent objects to my surprise, and even something as precise as pixel art.
Heav#5118: which the other thing seems to fail at for whatever reason.
ColdCall#4288: GLIDE is impressively coherent given that it doesn't utilize CLIP (for the main one at least).
ColdCall#4288: I think CLIP conditional guidance through GANs is always a bit wonky right?
Heav#5118: i don't actually know how any of this works nor do i have any experience whatsoever in machine learning/ai things.
Heav#5118: what does CLIP do?
ColdCall#4288: Its a classifier.
Heav#5118: oh. yes, i can imagine why things might be slightly off, then.
Heav#5118: what does GLIDE actually do to achieve coherence and "lines that exist" and other such concepts?
ColdCall#4288: My best guess is that CLIP isn't perfect in how it learns representations (especially the smaller models) and the guidance through a GAN's latent space can be improved.
ColdCall#4288: I believe the paper said that the CLIP-free model performed better and that they believed the model was stronger without depending on a separate (and expensive) classifier.
Heav#5118: so, just no classifier and arbitrary upscaling?
ColdCall#4288: They use conditional diffusion models. I have no idea how diffusion works besides haha noise goes brrr
ColdCall#4288: Yeah I think its all just the conditional diffusion model and separate upscaler.
Heav#5118: cool.
cfoster0#4356: Yeah also with CLIP-based generation we need to take random cutouts at each iteration and feed those through a model that looks at patches, which creates artifacts and coherence issues
cfoster0#4356: For some models, at least. For really constrained GANs like stylegan it works pretty well
EricHallahan#1051: Here is an image I put together yesterday to look at how the gradients from CLIP act on an image generated with StyleGAN 2. https://cdn.discordapp.com/attachments/729741769738158194/924341029270880326/B6mKpEASy0qlAAAAAElFTkSuQmCC.png
nshepperd#2316: lol
nshepperd#2316: those gradients are so bad
EricHallahan#1051: And yet the generator doesn't care.
nshepperd#2316: the nice thing about diffusion is that it can fix it up on the next step
CRG#8707: It'd be interesting to accumulate random crop gradients to see with how many (if any) they converge to something nice.
EricHallahan#1051: Oh, my plan is to apply this to VQGAN for the paper we are working on.
EricHallahan#1051: So yes, absolutely.
EricHallahan#1051: I also really want to visualize how the gradients influence the input in *z*.
EricHallahan#1051: Since the job of the VQGAN decoder is to structure the image to something resembling real images.
EricHallahan#1051: To add context to what you are seeing, the gradients are showing the direction of color in which it is moving.
So regardless of whether it is getting darker or lighter, if all channels are doing so simultaneously the output is white.
EricHallahan#1051: And if it is minimizing the red channel, it shows it moving in the direction of cyan.
jbustter#5167: im having an annoying problem, I installed cudatoolkit 11.3, and used the command `conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch` to install pytorch. yet pytorch still keeps giving me the message "Torch not compiled with CUDA enabled". Is there a better way of solving this?
Spacecraft1013#5969: install cuda before pytorch
Spacecraft1013#5969: i always have issues with it when i install them at the same time
jbustter#5167: you're right, apparently i just needed to completely uninstall pytorch first
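For anyone hitting the same thing, a quick sanity check that conda actually installed a CUDA build (conda can silently resolve to the CPU-only package):
```python
import torch

print(torch.__version__)           # build string; pip CPU-only builds often say "+cpu"
print(torch.version.cuda)          # None means a CPU-only build was installed
print(torch.cuda.is_available())   # should be True once the CUDA build is active
```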
gabriel_syme#3220: sometimes, some repos also require cuda to be installed at the system level but I guess that's not the case here.
Spacecraft1013#5969: that's only needed if you're building something for cuda when it needs the header files which aren't in the runtime version
gabriel_syme#3220: like the custom kernels stuff i guess?
gabriel_syme#3220: yeah I seem to remember custom cuda kernels requiring that or I was doing something wrong heh
Spacecraft1013#5969: yeah custom cuda kernels usually need those headers
HanakoMasaki[Cactuar]#0015: can anyone help me with this? I'm trying to run a colab of Jax diffusion local and I get RecursionError: maximum recursion depth exceeded while calling a Python object
HanakoMasaki[Cactuar]#0015: I've tried increase the recursion limit up to the point just before it crashes out but that didn't seem to help
EricHallahan#1051: I suggest asking #art
HanakoMasaki[Cactuar]#0015: okay sorry for the really large message I've just been told it's best to post the whole code thing
chilli#5665: Yes ๐
chilli#5665: Thereโs actually kind of a nice way to do it imo
chilli#5665: Using torch dispatch or torch function
chilli#5665: What exactly do you need this for?
alstroemeria313#1694: i want to be able to load state dicts of huge models w/ optimizer states in from disk and load the tensor data from disk on first access
chilli#5665: In training?
chilli#5665: Or inference?
alstroemeria313#1694: both
chilli#5665: What do you want to happen to the tensor after you load it in?
chilli#5665: Just turns into a normal tensor?
alstroemeria313#1694: it is in memory from then on and is normal
alstroemeria313#1694: yes
chilli#5665: How robust does this need to be right now to be useful?
alstroemeria313#1694: and if the first op we did on it was moving it to a gpu, it shouldn't continue to use main memory
chilli#5665: /how much effort are you willing to spend on it
alstroemeria313#1694: idk, i am considering stuff for like, a safe model serialization format that can't execute arbitrary code
alstroemeria313#1694: and lazy loading would be really nice to have in any new serialization format
chilli#5665: I think lazy loading is orthogonal to the model format
chilli#5665: Basically, the approach Iโm thinking of is
chilli#5665: Actually, I think this is even simpler ๐ค
chilli#5665: Hmmm
chilli#5665: Well, Iโll say what Iโm thinking
chilli#5665: Basically, turn it into a tensor subclass with the ability to load the memory
chilli#5665: And then, upon any op executing on it, load in the data and return a regular tensor
chilli#5665: I was thinking about why it even needs to be a tensor subclass at all
chilli#5665: As opposed to a generic object
chilli#5665: But I guess you want to be able to load it into parameters and stuff
chilli#5665: I can probably write a code sample sometime later
alstroemeria313#1694: ahh ty :)
alstroemeria313#1694: well we could also replace the dicts/python containers with lazy loading versions but then it would load the tensors into main memory if you queried their metadata.
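A minimal sketch of the subclass idea chilli describes, via the `__torch_function__` protocol. The one-tensor-per-file layout is an assumption (similar to the split-checkpoint format further up), and a production version would need to handle tensor methods, kwargs, and device placement properly:
```python
import torch

class LazyTensor:
    """Holds a file path instead of data; materializes on first use in a torch op."""

    def __init__(self, path):
        self.path = path
        self._data = None

    def materialize(self):
        if self._data is None:
            self._data = torch.load(self.path)  # hypothetical per-tensor file
        return self._data

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # Swap lazy arguments for real tensors, then dispatch normally.
        args = tuple(a.materialize() if isinstance(a, cls) else a for a in args)
        return func(*args, **kwargs)
```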
StellaAthena#3530: @BeatriceBernardo this is for general conversation
chilli#5665: Fwiw, people are kinda doing something like this for lazy initialization of large models
BeatriceBernardo#5504: got it, thanks
BeatriceBernardo#5504: I was about to delete my post to get rid of the clutter, but you already did, thanks and sorry
BeatriceBernardo#5504: Hi, I have a noob question.
CLIP has no validation?
they train for exactly 32 epochs coz ... why not?
no early stopping or whatever, just 32 epochs and that's it?
When they are doing hyperparameter search, are they optimizing for lowest error on the training set?
(plz ping reply)
tpapp157#3643: Self-supervised learning often has no validation set because there is no metric/task to test against. Sometimes you can use a known downstream task (like imagenet classification) though this has its own drawbacks. Otherwise you can monitor metrics of the learned embedding space during training and see when they might converge. But really there's no easy way to decide when to stop training in self-supervised learning.
uwu1#4864: If you memmap the buffer it won't be loaded until the actual values are read, vs just when the buffer is read. I think you'd need like CUDA Unified Memory to be able to do this for GPU memory though
alstroemeria313#1694: i don't need to for gpu memory
alstroemeria313#1694: for gpu memory i want to be able to load it into pinned memory on the host, transfer host to gpu, then release the host memory
alstroemeria313#1694: like on transfer to gpu
alstroemeria313#1694: since if you are transferring it to gpu you mean to use it and you know which gpu to put it on etc
uwu1#4864: oh then numpy load memmap and then torch from_numpy should do it? I think this would still incur a copy from the mmap pages to the pinned pages but the mmap pages will load and unload from ram as needed. Probably some kernel call will let you manually recycle them too
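Roughly what that looks like (a sketch; the filename is hypothetical, and mmap_mode="c" gives copy-on-write pages so `from_numpy` doesn't complain about a read-only array):
```python
import numpy as np
import torch

arr = np.load("weights.npy", mmap_mode="c")  # nothing is read from disk yet
t = torch.from_numpy(arr)                    # zero-copy view over the mapped pages
gpu_t = t.to("cuda")                         # pages fault in during the host-to-device copy
```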
BeatriceBernardo#5504: got it, thanks!
The Captain#9813: Hey everyone, I'm happy to join and begin working with all the open source NLP
The Captain#9813: I have a question, if I'm looking to collaborate or hire someone, is that allowed on this server?
bmk#1476: not the right place to hire someone
bmk#1476: collaborating is ofc fine
The Captain#9813: Well my collaboration would involve a commercialized product, where could I go to discuss that on the server? Or even mention it to ask for PMs
cfoster0#4356: I don't understand what you're asking about. If you want to hire someone, I would mention it in the #job-postings channel of the "MLC: Open Collective" server instead
The Captain#9813: Am I allowed to mention collaboration for commercialized products based off gpt open source
The Captain#9813: Or only collaborations for variants of the NLP itself
cfoster0#4356: If you want to contribute to open source projects (or to open source one of your own), let us know what you're working on and I think we'd be happy to point you in the right direction
The Captain#9813: Gotcha, thanks @cfoster0 . I'll be back tomorrow morning while at work
cfoster0#4356: Is the embeddings API not available to everyone?
kindiana#1016: private beta for now afaik
kindiana#1016: https://docs.google.com/forms/d/e/1FAIpQLSdwo9_cQ8d125D_AYtgOSvNJ0JuCBYjUmROkFHrfMnz3lsSCg/viewform
cfoster0#4356: Oh :sadge:
Hayleecs#3651: Hi! I'm new to the server. I am a computer scientist from Italy and I will be a ML student in Tuebingen. I have worked on video prediction for fluid dynamics and I am currently studying language models in multi-modal learning. I hope I soon will be able to contribute to this community! ๐
The Captain#9813: What're the current hardware requirements for the smaller version of GPT Neo?
The Captain#9813: Is it possible to get one of the Eleuther models onto a raspberry pi or something similar?
random person#5234: Jetson is a maybe
random person#5234: Pi would be a no for any transformer models
ym#0104: I'm trying to figure out how to efficiently search The Pile for things like 'EntityName, aka AnotherPotentialEntityName' (e.g., 'Bernie Sanders, aka The Bern'), and all the variations with punctuation. I see that the original The Pile paper used MinHash --- is that something I should try as well? Or would something like ripgrep be enough? (Or, something heavier duty like cLucene?)
I do not have any prior experience with LSH or any other text search algos
chirp#4545: if the pattern is that simple i expect regex will work well
chirp#4545: might need to do some bash ninja-ing to parallelize though
ym#0104: gotcha, thank you!!!!
chirp#4545: np!
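As a sketch, something along these lines over the Pile's jsonl shards (the pattern and filename are illustrative and would need tuning for punctuation variants):
```python
import json
import re

# "<Capitalized Name>, aka <Alias>" with a few a.k.a. spelling variants.
pattern = re.compile(r"([A-Z][\w.'-]+(?:\s+[A-Z][\w.'-]+)*)\s*,\s*a\.?k\.?a\.?\s+([A-Z][^,.;\n]{1,40})")

with open("pile_shard.jsonl") as f:  # hypothetical shard filename
    for line in f:
        text = json.loads(line)["text"]
        for m in pattern.finditer(text):
            print(m.group(1), "->", m.group(2))
```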
sigroll#1250: For those having trouble accessing The Pile, it's now available on HF datasets https://huggingface.co/datasets/the_pile
guywhoknowsnothing#0218: How many steps is 20b being trained to?
cfoster0#4356: Recipe says until golden brown
mgostIH#0245: Ayo I am training a model I wrote myself
mgostIH#0245: It sometimes got stuck in local minimas of my problem
mgostIH#0245: Moreover it also got NaNs after some amount of iterations??
mgostIH#0245: I tried multiple stuff and rewriting it, seems to be an issue of JAX's JIT
mgostIH#0245: Now I am training it with optax.zero_nans() together with the previous optimizer
mgostIH#0245: And it works better than before!?
mgostIH#0245: NaN based regularization :bigBrain:
StellaAthena#3530: Note that this is only about half the dataset.
I have a copy that's been preprocessed for working with Megatron and with GPT-NeoX I can send people if that's your intended use-case.
MicPie#9427: Does somebody know if there are word frequency tables, #words/#docs stats available for The Pile and C4?
MicPie#9427: (I need these stats for some low level NLP.)
chirp#4545: @mgostIH have you tried reducing the learning rate
mgostIH#0245: Yes
mgostIH#0245: Tried multiple things, but it seems to be an issue of JIT
chirp#4545: Weird
Deleted User#0000: what makes you think it's a JIT issue?
mgostIH#0245: Turning off JIT doesn't cause any NaN
mgostIH#0245: Given that I control for randomness (it's seeded) there was nothing else of a difference
Deleted User#0000: what device type are you running under?
mgostIH#0245: GPU on kaggle
Deleted User#0000: I mean xla:cpu and xla:gpu/tpu can produce different outcomes this is correct, but not compiling is not usually the reaction here
mgostIH#0245: Well, I think that without JIT it runs on CPU
mgostIH#0245: I should try again maybe but if zeroing NaNs works too why bother ๐ฉ
mgostIH#0245: Maybe it'll get fixed as time goes on
chirp#4545: Have you checked the gradient magnitudes?
chirp#4545: That could easily be the issue
chirp#4545: If the gradients are huge
mgostIH#0245: I clipped gradients too!
Deleted User#0000: you can check where your arrays are for some DeviceArray x with x.device().platform
mgostIH#0245: And checking the norm of the grad didn't seem to show anything worth of note either
mgostIH#0245: There was no weird behaviour of the NN too as it got one iteration before that
Deleted User#0000: what kind of model are you running?
mgostIH#0245: A feed forward with sin activations (basing it off of SIREN)
Deleted User#0000: if I had to guess you are somewhere not handling some state correctly in the jitted function, unless you have some extreme inputs or cornercases I would doubt xla is messing up that much
mgostIH#0245: I just have no clue on what might cause it
mgostIH#0245: It takes some thousand iterations before it abruptly gets a NaN
mgostIH#0245: Removing JIT makes the problem disappear but of course I can't have that, would take too long to do anything
Deleted User#0000: do you have a snippet
mgostIH#0245: Not really a snippet but the code isn't too much either hmmm
mgostIH#0245: The only thing I'm doing that may seem a bit different from other NNs is that I'm calculating gradients both with respect to the parameters and the inputs of the network too
mgostIH#0245: Essentially treating the NN as a function y(x) and calculating y'(x) via JAX's grad
mgostIH#0245: Feeding that to the loss too
Deleted User#0000: if you remove that bit of the loss, do you still get nan?
Deleted User#0000: I would check these if responsible individually
mgostIH#0245: I could try but at that point I worry that the loss is simply too trivial and then the error vanishes from some other issue
Deleted User#0000: I mean that second gradient computation sounds suspect then, would zero in on the behaviour there and log some grad values during training for both components at the minimum
mgostIH#0245: I still get NaN even if I remove that part of the loss
Deleted User#0000: aha!
Deleted User#0000: ok then something even more basic must go wrong
Deleted User#0000: can you overfit to a single batch without getting nan?
mgostIH#0245: Hm actually there's something weird going on
mgostIH#0245: Ah, no, it still NaNs even if I remove completely the step of "calculate the grad of y(x)"
mgostIH#0245: No, it seems that the NaN just appears after a bunch of training iterations independently of anything I try; it just changes the number of iterations it takes
mgostIH#0245: I'm more convinced it's just an issue of the JIT by now
mgostIH#0245: Lemme see, I should be able to share with you the kaggle notebook if you wanna check it out
Deleted User#0000: how do the training curves differ otherwise
Deleted User#0000: if I don't have to log in anywhere I can have a quick look
mgostIH#0245: Without JIT?
Deleted User#0000: yeah, because the only reason this would make sense without an error is if compilation actually generated something causing meaningfully different numerics, which eventually accumulate in parameters/training progress to something that goes NaN somewhere, so you'd expect this to be visible somewhere in loss/grads/parameter stds
mgostIH#0245: Losses do seem to be different in JIT and non JIT
mgostIH#0245: the first losses (parameters tested without any training) are exactly the same
mgostIH#0245: It's only later that it's different, so I assume JIT behaviour is different enough to change how the network evolves
mgostIH#0245: But it's not like I expect optimized APIs to be the exact same accuracy bit by bit
mgostIH#0245: So it may be irrelevant from the cause
mgostIH#0245: Testing whether without JIT it NaNs will take some time
Deleted User#0000: yeah I mean I would play a bit with main learning params to see if that affects nan on jit
mgostIH#0245: @Deleted User Oh turns out I was wrong!
mgostIH#0245: It does indeed NaN even without JIT
mgostIH#0245: It's just that it takes much more time to get to iterations where that happens and I didn't explore that before in depth
mgostIH#0245: Makes me more hopeful then, must be something weird going on with my architecture :thinkies:
mgostIH#0245: Tomorrow I'll look more into this
chilli#5665: My experience is that people are often quite quick to blame various bogeymen haha
chilli#5665: Like... any source of randomness
Deleted User#0000: haha yes, tbf it's occasionally true for xla/tpu changes especially for whatever the latest hw generation is on large programs, but for a small colab setting probably not
cigolog#9920: When to expect a FOSS model comparable to GPT-3? What limitations are there in the way?
Ravna#1831: Do you have a specific use case right now which only GPT-3 can do and GPT-J can't?
domeemod#6422: Yo guys, I'm looking for API or serverless solutions to use other models (like GPT-3) or to deploy my own NLP application.
I'm working on a Discord moderation tool that uses NLP, and I want to deploy it as an API endpoint.
So my question is: does anyone know the most cost-effective way to do it?
Like using Curie from OpenAI is already quite cheap, it's professional (or even use GPT-J ... :D), and you don't have to care about maintaining your code and renting an instance... although it has quite a few restrictions...
Thank you in advance!
Sorry if this is an irrelevant question here...
chirp#4545: I've heard of banana.dev
domeemod#6422: thanks, it seems quite nice, I'll check it :)
chirp#4545: It's quite hard to find serverless GPU offerings lol
domeemod#6422: yeah, and it is extremely expensive if you find any:D
domeemod#6422: I was genuinely thinking about deploying it from my own rig xd
domeemod#6422: (ofc it could serve just a few users)
cigolog#9920: It's obvious GPT-J is shit compared to GPT-3
mo#0466: is it?
mo#0466: I think it depends on the use-case
bmk#1476: it's, like, 30 times bigger, of course gpt3 is just better at most things
bmk#1476: scalepill, etc
bmk#1476: other than for the handful of things in pile that aren't in the gpt3 set, gpt3 is better than gptj
cigolog#9920: It is, dummo.
Teemochu#8740: maybe use bf16 instead of fp16?
mo#0466: what exactly is wrong with you?
mo#0466: I'm a pretty friendly person and haven't done anything to you.
mo#0466: not sure why you believe you can go around and insult people
Annas#3911: just set temperature 0.5, repetition penalty 1.12, top-k 40, top p 1.0 and you will get really high quality output
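Those settings map onto the usual Hugging Face `generate` kwargs; a sketch (the model choice here is an illustration, not necessarily what was meant):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

inputs = tok("Once upon a time", return_tensors="pt")
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.5,
    repetition_penalty=1.12,
    top_k=40,
    top_p=1.0,
    max_new_tokens=100,  # assumed output length
)
print(tok.decode(out[0]))
```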
ari#9020: The closest thing to an answer to this is in the FAQ https://www.eleuther.ai/faq/#gpt-neo-and-gpt-neox :
> As a collective of volunteer researchers and engineers who contribute in our free time, we are unable to commit to either a timeline or a roadmap for future models.
... and my impression is that the main thing in the way of going faster is the GPU shortage
EmeraldOdin ๐#1991: Hey everyone
EmeraldOdin ๐#1991: I got interested in eleutherAI after using the GPT-Neo model in my own project. Been on the fence about joining for a while but think I can help out
EmeraldOdin ๐#1991: My interest in democratizing AI came from around 2 years ago when I needed to make a model for a medical application. There was no data available (publicly, and privately, the ethical commission of said hospital was causing trouble, even when all patients agreed to participating).
I then made synthetic data in Blender and managed to teach the model to work IRL with some interesting things
EmeraldOdin ๐#1991: Nowadays, I mainly hope to improve access to AI to everyone by either giving them data (or in my case simply remove the need for "real" data) or by finding ways to either make big models more accessible, or make them small enough but still maintain their specific function.
EmeraldOdin ๐#1991: While I like the contributions that OpenAI makes, I believe first and foremost in open science, and EleutherAI seems to come the closest to that ideology.
kurumuz#5695: if you can already model the real world behavior with heuristics super accurately what is the need for a DL model
kurumuz#5695: if you don't model it super accurately, then it's not good data?
kurumuz#5695: GPT-J is already super capable and comparable to GPT-3, and you have the weights so you can go wild about things
EmeraldOdin ๐#1991: I can't say too much about it
EmeraldOdin ๐#1991: But, uhm, basically, we used synthetic data to train a model to extract features, later validated it IRL, and it turned out to work. It wasn't the end of the story, but that's what convinced me I could do these things and it motivated me to keep going
kurumuz#5695: i see
EmeraldOdin ๐#1991: Anyway, idk if I might be of any help any time in the near future, but I'm crazy and try a lot of different things and I think that some might be valuable here. I'll be posting some videos later this week of recent stuff I did
EmeraldOdin ๐#1991: I just look at things like the recent OpenAI CLIP model and thinking: "How can this quality be public the fastest way possible?" and just try to build on that in my free time
Kia#2550: You Should Join The DALL-E Server, There's A Community effort to replicate different CLIP models + Scaling the original CLIP model (Yes, Public too)
Kia#2550: head of the project is @spirit-from-germany if You're interested for more info
EmeraldOdin ๐#1991: Thanks! I'll stay here as well ~~totally not distracted by #the-faraday-cage-archive~~
EmeraldOdin ๐#1991: Anyway I just recently found a method to finetune models for specific tasks in a way that doesn't require training them, but I'm careful about sharing my things, not because I want to profit from it or because I think the dangers are bigger than the advantages. I'm not a data scientist with a degree or anything, and I'm afraid of being laughed at if it turns out to be not much after all
EmeraldOdin ๐#1991: I'm just smart and dumb at the same time, and sometimes it leads from one thing to the other
EmeraldOdin ๐#1991: I can't instantly help replicate a model like that, but it could be feasible to make it apply for people's needs fast
Kia#2550: It's totally fine^^ Any Contribution will be appreciated
EmeraldOdin ๐#1991: Thanks, I think now is my time to dip my toes in the water, because with whatever I did share internally, either for projects or within my team, it got pointed out to me that I might be onto something after all
EmeraldOdin ๐#1991: I was working on my own model because a model we were using was for research purposes only and not for commercial use, while our intern maintained the older model (the one that was licensed for research only), just in case we could use it after all
EmeraldOdin ๐#1991: that older model was from CMU, and my model outperformed the CMU version (at least in what we needed it for)... I have no proof of that because I can't share, so I don't expect anyone to believe me, but it was an indication that now was the time to try some more public efforts
Kia#2550: By the way @EmeraldOdin ๐ Here's the link of the server if you're interested https://discord.gg/3sXs66Xx
EmeraldOdin ๐#1991: yeah I'll join!
EmeraldOdin ๐#1991: I'll also link to it in my server
Kia#2550: Sure sure
EmeraldOdin ๐#1991: oh great it's in pytorch
EmeraldOdin ๐#1991: I love PyTorch... easier to make custom loss functions, which is something i always do :hidden_smile:
EmeraldOdin ๐#1991: tbh I have no idea yet how diffusion models work and have yet to read the papers...
EmeraldOdin ๐#1991: but I'll get to that
Kia#2550: It would definitely help you to be honest
EmeraldOdin ๐#1991: Yeah I just didn't have the time yet. I'm trying to find a balance between learning new things and working on my own project
StellaAthena#3530: Not really
StellaAthena#3530: Yes a stripped down version in a toy environment
StellaAthena#3530: But on anything more than that it's not really feasible to do the computation
tpapp157#3643: Well you wouldn't be able to compute the actual probabilities but that's what NNs are used for in RL, to estimate the probabilities. There have been a variety of RL papers over the years exploring this direction, predicting not just expected reward but also adding additional tasks like predicting the next state (both direct inputs like images and also in latent space), etc.
Louis#0144: BELBIC is the closest thing you could implement in a real world environment
Louis#0144: Coming from a computational cog neuro standpoint
anthony_fuller#1075: CV noob here. Does anyone have a deconvolutional resnet implementation? I'm using the standard torchvision model for conv
alstroemeria313#1694: you mean one that has learned upsampling?
alstroemeria313#1694: or, hm
alstroemeria313#1694: that's transposed convolution.
anthony_fuller#1075: what is typically used for a decoder in an autoencoder?
alstroemeria313#1694: transposed convolution with stride=2, or fixed upsampling ops like nearest or bilinear
alstroemeria313#1694: you can make resnets with these fine, i have done it a lot
anthony_fuller#1075: hmm ok, do you have an implementation?
alstroemeria313#1694: i have U-Nets that do fixed upsampling ops
alstroemeria313#1694: https://colab.research.google.com/drive/1rKa8P8Sg1C8q2fzyiM4WMWpZ6sq86423
anthony_fuller#1075: thanks I'll check it out!
alstroemeria313#1694: if you remove the long range skip connections from a U-Net
alstroemeria313#1694: and break it in two so you can get at the latents in the middle
alstroemeria313#1694: you have an autoencoder
alstroemeria313#1694: this is a residual net
alstroemeria313#1694: so should help :)
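A minimal sketch of a decoder residual block built that way: fixed (nearest) upsampling followed by convolutions, with a 1x1 conv on the skip when channel counts change (layer sizes are assumptions):
```python
import torch
from torch import nn

class UpResBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        self.main = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1),
        )
        self.skip = nn.Conv2d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):
        x = self.up(x)  # upsample first so both branches see the same size
        return self.skip(x) + self.main(x)
```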
Deleted User#0000: Weird how i haven't stepped foot in here before
nev#4905: depends on the type of images - Looking Glass is really good for realistic or homogenous images
alstroemeria313#1694: huh, the Cauchy distribution is like some troll probability distribution
alstroemeria313#1694: You can fit them with maximum likelihood but how reliable are the estimates of the parameters...
StellaAthena#3530: Yeah, lacking a mean or standard deviation fucks things up lol
alstroemeria313#1694: `log1p((x - y)**2)` seems to be the corresponding nll loss function for a fixed scale parameter of 1, in the same vein as the L1 loss for Laplace and 1/2 squared L2 loss for normal
alstroemeria313#1694: also i say "you can fit them with maximum likelihood" but that isn't a convex objective any more is it.
alstroemeria313#1694: let me try this...
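A sketch of that experiment (the data and optimizer settings are made up):
```python
import torch

# bimodal data; fit the location of a Cauchy with scale fixed at 1
x = torch.cat([torch.randn(500) - 4, torch.randn(500) + 4])
loc = torch.zeros([], requires_grad=True)
opt = torch.optim.Adam([loc], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = torch.log1p((x - loc) ** 2).mean()  # the NLL from above
    loss.backward()
    opt.step()
# non-convex objective: the fitted loc depends on the initialization
```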
Sphinx#2092: Once you stray from the exponential family, there is only pain.
alstroemeria313#1694: eheh~
alstroemeria313#1694: Oh no it's getting stuck in local minima.
AI_WAIFU#2844: Does it follow that because there's no mean and standard deviation, that the posterior over parameters would also not have a mean or standard deviation? In general I don't think that's true, even with a cauchy distribution for the prior.
alstroemeria313#1694: oh no https://cdn.discordapp.com/attachments/729741769738158194/926138002583977994/Screen_Shot_2021-12-30_at_7.41.43_AM.png
alstroemeria313#1694: nll vs the location parameter on a bimodal dataset.
StellaAthena#3530: @naclbbr @kurumuz have you tried downloading datasets from the Japanese government? Governments produce absurd amounts of text: laws, reports, press releases, patent databases, ...
kurumuz#5695: hmm, can't say i am the most literate on japanese
naclbbr#9203: IIRC there are a few - glanced when I was trying to gather the first version of our datasets - but most of them are semi-private (not available for general public)
naclbbr#9203: some of them even seemed dead projects
naclbbr#9203: I wanted medical journals in the set but ended up crawling science- and medical news sites
kurumuz#5695: lol when i tried to find a corpus for turkish back when I was interested in that, it was all locked under academic access bullshit
kurumuz#5695: so i just decided to not give a fuck
StellaAthena#3530: @naclbbr Schools might be another option? Schools generate lots of text. It hasn't been worth our while yet, but I've been thinking on and off about trying to collect homework assignments or something as a dataset
naclbbr#9203: Also we definitely need a good book corpus for jp. Only had about 5GB of books when I trained the previous model. I have been brute force buying, decrypting and normalizing Kindle books for a while
alstroemeria313#1694: oh no, Laplace w/ location and scale both unknown isn't exponential family?
kurumuz#5695: 5G seems quite good
alstroemeria313#1694: or even with scale known but location not?
kurumuz#5695: not gonna get so much bigger when your dataset is extremely sterile and high quality
alstroemeria313#1694: i.e. L1 loss.
alstroemeria313#1694: ...Wait how does a "bilinear" layer work
alstroemeria313#1694: Without activations.
cfoster0#4356: What are you referring to?
alstroemeria313#1694: Like the GLU thing but you just leave the activation off.
cfoster0#4356: Oh, without a nonlinearity?
alstroemeria313#1694: Yep
cfoster0#4356: Ax (o) Bx, where (o) is an elementwise product
alstroemeria313#1694: Yep
alstroemeria313#1694: A single nn.Linear(2, 2) followed by multiplying the two outputs can learn XOR
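A hand-set demonstration of that claim: with weights [1, -1] in both rows, the product of the two outputs is (x1 - x2)^2, which equals XOR on {0, 1} inputs:
```python
import torch
from torch import nn

lin = nn.Linear(2, 2)
with torch.no_grad():
    lin.weight.copy_(torch.tensor([[1.0, -1.0], [1.0, -1.0]]))
    lin.bias.zero_()

x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
a, b = lin(x).unbind(dim=1)
print(a * b)  # tensor([0., 1., 1., 0.]), i.e. XOR
```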
alstroemeria313#1694: eheh i nearly got a neural net to learn the 16-bit parity function
alstroemeria313#1694: it had 99.99% accuracy
alstroemeria313#1694: just an MLP
alstroemeria313#1694: ooh i got it!
alstroemeria313#1694: 100% accuracy
kurumuz#5695: how did you solve it
kurumuz#5695: like was it not solvable before
alstroemeria313#1694: oh no you can do it
alstroemeria313#1694: i was doing it bc i was bored/as an exercise
kurumuz#5695: ohh ic
alstroemeria313#1694: the trick is to use more than single bit supervision
alstroemeria313#1694: like have an output *for each prefix* in the bitstream
alstroemeria313#1694: and compute the loss vs the parity for each prefix too
kurumuz#5695: ah, that makes sense
alstroemeria313#1694: then it can solve 1-bit parity, then 2-bit, etc.
alstroemeria313#1694: and it gets the loss to go down by doing so
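A sketch of that supervision scheme (names are made up; assumes the model already emits one logit per prefix):
```python
import torch
import torch.nn.functional as F

def prefix_parity_loss(logits, bits):
    # bits: (batch, n) in {0, 1}; logits: (batch, n), one output per prefix
    targets = torch.cumsum(bits, dim=1) % 2  # parity of every prefix
    return F.binary_cross_entropy_with_logits(logits, targets.float())
```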
chilli#5665: I think that's too easy :P
alstroemeria313#1694: yes, doing it w/ single bit supervision is trickier
chilli#5665: I'd be interested in an approach to training that allowed learning 16 bit parity with *only* single bit supervision
alstroemeria313#1694: satnet claimed to be able to do it w/ a special network but satnet was kind of bad
alstroemeria313#1694: and did not work on more complex stuff when i tried it
alstroemeria313#1694: they may have also tied the layers' weights
chilli#5665: Yeah, I feel like approaches like that are kinda interesting
alstroemeria313#1694: i could not actually reproduce their results using their colab notebook they put out to reproduce their results with.
Some Point Process#3793: hmm, I was able to
alstroemeria313#1694: ohh?
alstroemeria313#1694: did it work right away or did you have to tweak things?
chilli#5665: Even if they don't work right now
alstroemeria313#1694: i did a bunch of runs with different random seeds and then started tweaking things and i couldn't make it work at all
Some Point Process#3793: No tweaks, just wait a while (2h maybe, back when v100s were available)
alstroemeria313#1694: oh
alstroemeria313#1694: got 18 bits.
alstroemeria313#1694: which is essentially just testing the ability of the network to memorize bc the network is actually bigger than the truth table for 18 bits
alstroemeria313#1694: hmmmm
Some Point Process#3793: :thinkies: did you try it on a held out test set?
Some Point Process#3793: but yeah I know you were aiming to debunk the whole perceptrons can't even converge on xor spiel, etc
mgostIH#0245: It reminds me of this paper; it argues that it's quite a hard problem for neural networks, mostly because of the poor information given by gradient descent https://arxiv.org/abs/1703.07950
chilli#5665: Yeah I think it's obviously a hard problem
chilli#5665: But I'm interested in how we can address it
CRG#8707: Hm, I'd expect something like grokking would be able to solve it.
CRG#8707: Like, high WD and enough epochs
cfoster0#4356: Have Codex write a parity checker?
mgostIH#0245: But they seem to prove that it's gradient descent itself failing to provide information as the dimension gets higher
mgostIH#0245: So maybe you need a different learning algorithm for it?
mgostIH#0245: I am not talking of something explicit
mgostIH#0245: Just think something like GPT-3 few shotting
chilli#5665: Maybe, but I'd expect the number of epochs to grow exponentially with the number of bits
CRG#8707: Yeah, sounds reasonable
mgostIH#0245: A meta learner may be the only approach for parity problems
chilli#5665: Yeah, maybe, would be interesting to me
mgostIH#0245: The thing is that surely you don't want something that is just tailored to problems like parity learning
mgostIH#0245: It has to still be able to solve problems gradient descent is good at too
mgostIH#0245: But if (I'm not 100% sure) the paper proves some problems just provide a gradient that's too distorted to be meaningful, then we might need something else entirely
chilli#5665: Yeah
chilli#5665: I think something that combines search with gradient descent would help a lot
chilli#5665: But for the parity problem I'm not sure how you would solve it for an arbitrarily long parity sequence
chilli#5665: Maybe fundamentally too difficult
mgostIH#0245: Well, the paper makes the case for a parametrized parity problem, where essentially you consider the parity together with a xor of a hidden string
mgostIH#0245: If you know that's the underlying problem can you solve it given some random samples? If not a full solution how much information can we extract from each sample?
mgostIH#0245: Bypassing the whole gradient thing, I think this should be a solvable problem, after all it should amount to inverting a system of linear equations in GF(2)
bmk#1476: the parity problem sounds (in spirit) a lot like the Rubik's cube problem I proposed
chilli#5665: Not sure what it is, but im guessing it's some kind of search-like problem
alstroemeria313#1694: 97.6% accuracy on 16-bit parity with single bit supervision
cfoster0#4356: Your move :goose9:
CRG#8707: Is this with embedding + transformer or something different?
alstroemeria313#1694: just an mlp
alstroemeria313#1694: a resnet
CRG#8707: Just projecting from 16 to d_model?
alstroemeria313#1694: no i input one bit per residual block
alstroemeria313#1694: like i forgot what RNNs were and just used a different model for each timestep
alstroemeria313#1694: I had to tweak it a lot to get it to train
alstroemeria313#1694: I put stuff like layernorms and gated leaky relu in
CRG#8707: Yeah, ime parity tends to be very spikey
alstroemeria313#1694: Wow LSTMs involve a bunch of sigmoids and tanhs applied in sequence
alstroemeria313#1694: This looks bad
Some Point Process#3793: Well that way it avoids the whole collapse of nonlinearities to a single linear transformation matrix. Sigmoids also zero out certain parts of the hidden state
alstroemeria313#1694: yeah i mean but vanishing gradients
Some Point Process#3793: Sigmoids and tanhs are also monotonic/invertible unlike rectifiers tho :/
alstroemeria313#1694: that's ok you can use leaky relu
alstroemeria313#1694: My best parity-learning design so far uses a GLU variant where the nonlinearity is leaky relu
alstroemeria313#1694: Which is one of those weird squared activation functions
alstroemeria313#1694: I have to use layernorms to keep it stable
Some Point Process#3793: But then ganguli et al proved that rectifier-type nonlinearities don't benefit as much from orthogonal regularization (orthogonal init)
gabriel_syme#3220: Is anyone aware of model architectures that learn to predict the output from all their layers during training? I mean each layer trying to predict the same output
Some Point Process#3793: (in their attempt to solve the vanishing/exploding gradient problem)
cfoster0#4356: Does LayerNorm + ReLUs have issues?
Some Point Process#3793: My guess would be empirically not (the original bert or transformers paper iirc used relu instead of gelu)
alstroemeria313#1694: don't transformers do that a lot
cfoster0#4356: Yeah I mean for the parity learning thing
alstroemeria313#1694: oh
Some Point Process#3793: but yeah there's the whole pre/post norm debate etc etc with the residual connections
alstroemeria313#1694: idk i put the layernorm in to stabilize the squared activation functions.
AI_WAIFU#2844: I find this hilarious because the whole point of LSTMs was that they mitigated that.
alstroemeria313#1694: ?
CRG#8707: Pondernet did this
gabriel_syme#3220: Aha, thank you! Time to revisit
CRG#8707: Or: https://openreview.net/forum?id=SJg7KhVKPH
gabriel_syme#3220: I know you played a lot with Pondernet, CRG, is there any codebase that is decent?
gabriel_syme#3220: also looking now :guilty:
CRG#8707: Hm, not sure tbh, I just did my own thing.
gabriel_syme#3220: cool, thx!
cfoster0#4356: I'd think a net designed like `x = x + norm(net(x))` would work for recurrence bc you'll keep gradients flowing and the accumulation is 0 centered
alstroemeria313#1694: yeah
alstroemeria313#1694: could work
AI_WAIFU#2844: from the abstract of the original 1997 paper
> Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM).
alstroemeria313#1694: oh it's just better than rnn
cfoster0#4356: Maybe even adding an explicit conditioning on `t` a la diffusion
CRG#8707: https://openreview.net/forum?id=c6JbopW0sOS didn't use any norms and managed to get a convergent output.
CRG#8707: IIRC the trick was starting the network at some extra random number of iterations and feeding the input in at each layer.
alstroemeria313#1694: ah
alstroemeria313#1694: dumb rnn trick
alstroemeria313#1694: Keep around three copies of its parameters, use one for the first timestep, one for the middle timesteps, and one for the last timestep
alstroemeria313#1694: I have gotten a self-designed recurrent thing to work doing this just now
alstroemeria313#1694: Which is resnet based
alstroemeria313#1694: And the weights of all the residual blocks but the first and the last are tied
alstroemeria313#1694: Learning parity with it rn
CRG#8707: This is in addition to an initial and final linear layer? (Like embedding and unembedding)
chilli#5665: Is this just with one bit total?
alstroemeria313#1694: i just use -1 and 1 as the inputs but i have a final linear layer.
alstroemeria313#1694: i would use an initial layer if i had anything complicated to feed in.
alstroemeria313#1694: tied over all timesteps.
Some Point Process#3793: Reminds me of echo state (reservoir) nets heh
alstroemeria313#1694: i compute binary cross entropy loss vs the parity function computation for the whole string of bits
alstroemeria313#1694: not on any prefixes.
alstroemeria313#1694: 99.34% accuracy now with my tied weight thing on 16 bit parity.
chilli#5665: Oh interesting
chilli#5665: I'm a bit surprised this works
alstroemeria313#1694: i had to run it like 100 times with different tweaks to get it to work at all ok
chilli#5665: Haha
alstroemeria313#1694: lol it's learning 18 bits now
alstroemeria313#1694: i took ritalin and just tried a bunch of things
CRG#8707: Is there a sudden drop in loss at the end? Or is it more gradual?
chilli#5665: And it's just a regular RNN-type architecture?
alstroemeria313#1694: no it's a weird custom thing
chilli#5665: :thinkies:
alstroemeria313#1694: it is a resnet with tied weights on most of the residual blocks.
alstroemeria313#1694: and i feed in one bit per residual block.
alstroemeria313#1694: it tends to be flat at the start then start going down a bit then start going down faster
alstroemeria313#1694: lol it is learning 20 bit parity rn
alstroemeria313#1694: Can it just do any length...?
Some Point Process#3793: tbc this is checking if the bitstring has equal numbers of 1s and 0s rite?
alstroemeria313#1694: yes
alstroemeria313#1694: er
alstroemeria313#1694: if it has an odd number of ones the output is 1
alstroemeria313#1694: if not, 0
alstroemeria313#1694: 99.99% acc on 20-bit parity
CRG#8707: Something with VQ quantization would probably be able to extrapolate indefinitely, right?
alstroemeria313#1694: ooh
chilli#5665: What's one bit per residual block mean?
alstroemeria313#1694: i concat each bit to the previous res block's output and feed that in to the current one.
alstroemeria313#1694: where "bit" means -1 or 1
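A rough sketch of the structure described above; her actual code is in the gist linked just below, and every detail here (widths, the gated block, the norm placement) is an assumption:
```python
import torch
from torch import nn
import torch.nn.functional as F

class Block(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.proj = nn.Linear(d + 1, d * 2)  # +1 for the current bit
        self.norm = nn.LayerNorm(d)

    def forward(self, state, bit):
        x = self.proj(torch.cat([state, bit[:, None]], dim=1))
        a, b = x.chunk(2, dim=1)  # gated leaky-relu unit
        return state + self.norm(a * F.leaky_relu(b, 0.2))

class ParityNet(nn.Module):
    def __init__(self, d=10):
        super().__init__()
        # untied first and last blocks, one shared block for all the middle steps
        self.first, self.middle, self.last = Block(d), Block(d), Block(d)
        self.out = nn.Linear(d, 1)
        self.d = d

    def forward(self, bits):  # bits: (batch, n) floats in {-1, 1}
        state = bits.new_zeros(bits.shape[0], self.d)
        n = bits.shape[1]
        for i in range(n):
            block = self.first if i == 0 else self.last if i == n - 1 else self.middle
            state = block(state, bits[:, i])
        return self.out(state).squeeze(1)  # logit for BCE-with-logits
```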
alstroemeria313#1694: code https://gist.github.com/crowsonkb/235da42b237e869602db38243e6191c2
ethan caballero#6044: https://twitter.com/ethancaballero/status/1476683698850484225
@chilli @kindiana
chilli#5665: Hmmm, interesting
alstroemeria313#1694: when i tied the weights it broke it
alstroemeria313#1694: i was using separate submodels for all residual blocks to begin with
alstroemeria313#1694: when i untied the weights for the first and last block and tied the rest it started working *really* well.
alstroemeria313#1694: so you could take this arch and *use it as a single layer RNN type thing* right?
alstroemeria313#1694: er, you need a thing to output to the second RNN-type layer with.
alstroemeria313#1694: like a separate projection.
CRG#8707: Just another residual might work
CRG#8707: Double residual
chilli#5665: 10 billion/200k (for an DGX A100) * 5 petaflops = 250 exaflops
chilli#5665: Iโm not really accounting for other infra needed to set this stuff up, but also not accounting for any discounts Nvidia would give
ethan caballero#6044: Would the monetary cost of high bandwidth communication between all the compute be negligible?
Some Point Process#3793: If ASI was readily achievable with 250 exaflops, then it'd have been achievable for 100s of the top billionaires
Some Point Process#3793: (as in individuals, right?)
chilli#5665: Not negligible, but wouldnโt change it by an order of magnitude imo, especially if you factor in Nvidia discounts
uwu1#4864: which of $10 billion worth of data or compute would be more valuable
Some Point Process#3793: By definition I think it'd be data since compute depreciates over time
Some Point Process#3793: unless the data in question depreciates too
uwu1#4864: you could pay a half a million ppl collecting data just for you for a year
chilli#5665: You could start with a bigger system like a DGX superpod to do these extrapolations
I see a price of 60 million for the max config (which seems to be 140 DGXA100), so that would be
10 billion/60 million * 140 * 5 petaflops = 116 exaflops
chilli#5665: So different by a factor of 2
uwu1#4864: as a lower bound the new 0.5 exaflop supercomputer in Japan cost $1 billion
chilli#5665: But also incurring more Nvidia overhead
Some Point Process#3793: Are people still using tpu pods?
chilli#5665: They are using cpus...
chilli#5665: also seem to be measuring fp64
uwu1#4864: oh yeah you're right
uwu1#4864: then the Berkeley "Perlmutter" one used GPUs and gets 3 exaflop over fp16 and cost $144 million according to this random post on the doe site
chilli#5665: Maybe I'm underestimating then
chilli#5665: But not sure about its communication
chilli#5665: Oh lol
chilli#5665: Perlmutter is using the 600-teraflops-per-GPU number
uwu1#4864: it's weird to think even with a few teraflops you could compute a function over every single fp32 float in a second or two
chilli#5665: So 10 billion/144 million * 1.9 exaflops = 132 exaflops
chilli#5665: So right in the range of my previous estimate
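The three estimates side by side (list prices, no discounts):
```python
dgx_a100   = 10e9 / 200e3 * 5e15       # $200k DGX A100 at ~5 PF fp16
superpod   = 10e9 / 60e6 * 140 * 5e15  # $60M SuperPOD of 140 DGX A100s
perlmutter = 10e9 / 144e6 * 1.9e18     # $144M Perlmutter at ~1.9 EF fp16

for name, flops in [("DGX A100", dgx_a100), ("SuperPOD", superpod),
                    ("Perlmutter", perlmutter)]:
    print(f"{name}: {flops / 1e18:.0f} exaflops")  # 250, 117, 132
```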
chilli#5665: @ethan caballero there's another methodology for a similar number
uwu1#4864: they both use a100s I think too
chilli#5665: Yeah so their real number is 1.9 exaflops
chilli#5665: Why do you need teraflops to do that? You just need gigaflops
chilli#5665: 2^32 = 4 billion
uwu1#4864: well I was imaging a more complex function not just a single float op
chilli#5665: You'd need a reasonably substantial op before you need teraflops to do it in a couple of seconds
mgostIH#0245: I might actually need to do this to test my network :thinkies:
mgostIH#0245: I still need to find what causes that NaN hmmm
uwu1#4864: that's interesting. So we could actually prove that generated functions are equivalent by evaluating them over every float
chilli#5665: Btw @kindiana was meaning to ask you this - how do MI-200s have such high fp64 flops?
chilli#5665: Well, assuming they only take one float
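That brute force is actually feasible for single-float functions: there are only 2^32 fp32 bit patterns, so you can enumerate them all in chunks. A sketch (`f` and `g` must be vectorized numpy functions; names are made up):
```python
import numpy as np

def equivalent_on_all_floats(f, g, chunk=1 << 24):
    for start in range(0, 1 << 32, chunk):
        # enumerate bit patterns as uint64 to avoid overflow, then reinterpret
        bits = np.arange(start, start + chunk, dtype=np.uint64).astype(np.uint32)
        x = bits.view(np.float32)
        fx, gx = f(x), g(x)
        ok = (fx == gx) | (np.isnan(fx) & np.isnan(gx))  # treat NaN == NaN
        if not ok.all():
            return False
    return True
```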
chilli#5665: And why do Nvidia's GPUs have such low fp64 flops
mgostIH#0245: Worth noting that Z3 supports theories on floating point numbers
mgostIH#0245: So you could actually test stuff like this probably faster than just bruteforcing numbers
uwu1#4864: part of this is market segmentation on purpose
chilli#5665: https://twitter.com/chhillee/status/1475764577186955266?s=21
chilli#5665: I like that example
chilli#5665: I was more curious about the underlying hardware causes
mgostIH#0245: A use I like for Z3 is inverting RNGs
mgostIH#0245: I wonder if in the future we'll get neural networks that can crack simple RNGs without knowing their code :thinkies:
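A toy example of that kind of inversion with Z3's bitvector theory (the constants are the classic Numerical Recipes LCG; the target output is made up):
```python
from z3 import BitVec, Solver, sat

seed = BitVec('seed', 32)
out = seed * 1664525 + 1013904223  # one LCG step, wrapping mod 2**32
s = Solver()
s.add(out == 0xDEADBEEF)
if s.check() == sat:
    print(s.model()[seed])  # a seed that produces the observed output
```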
chilli#5665: @uwu1 some interesting numbers on the discount Nvidia gave for previous V100 supercomputers
chilli#5665: https://www-nextplatform-com.cdn.ampproject.org/c/s/www.nextplatform.com/2021/12/06/stacking-up-amd-mi200-versus-nvidia-a100-compute-engines/amp/
chilli#5665: > If you do some rough math backwards, the V100 GPU accelerators used in the Summit supercomputer listed for around $7,500 and sold for around $4,000 in the system.
chilli#5665: Seems like Nvidia is willing to discount by ~50% if you bought in bulk
ethan caballero#6044: @chilli @kindiana
https://twitter.com/ethancaballero/status/1476696958253383688
chilli#5665: This is easier to calculate: DGX superpod gets about 20 gigaflops/watt, so you need about 7.5 gigawatts if you get 150 exaflops.
chilli#5665: So depends on what price you can get for that
chilli#5665: Tbh, Iโm not sure what it would take to build a data center with 7.5 gigawatts
chilli#5665: That's probably the harder part lol
chilli#5665: 7.5 gigawatts is a fuck ton of energy
uwu1#4864: oh interesting, so their margin is probably like 50% on that too
alstroemeria313#1694: looks like i have 94% accuracy on 30-bit parity now
alstroemeria313#1694: And it's still training
alstroemeria313#1694: using no supervision on intermediate sequences. just binary cross entropy vs the ground truth parity for the whole sequence.
alstroemeria313#1694: I did length 30 bc it is used as an example in "Failures of Gradient-Based Deep Learning" (https://arxiv.org/abs/1703.07950) as a thing they couldn't do.
alstroemeria313#1694: As in, their net utterly failed to do better than random chance ever.
alstroemeria313#1694: No matter how long they trained it.
alstroemeria313#1694: Has anyone done length 30 since that paper
alstroemeria313#1694: the model i am using is 3167 params.
alstroemeria313#1694: it works bc each residual block learns to compute xor on its input and its input bit from the sequence.
alstroemeria313#1694: But apparently this is difficult!
alstroemeria313#1694: Without deep supervision.
alstroemeria313#1694: also, the thing i am doing passes a 10-dim state from previous blocks to future blocks so it could presumably learn to compute things that are not as factorizable, i think?
chilli#5665: I guess I'm still surprised it has enough supervision for this
alstroemeria313#1694: like if you made the d_model bigger etc.
alstroemeria313#1694: yeah, you cannot normally do that, SATNet was able to do it and it was a major selling point of their method
alstroemeria313#1694: i wonder if it helps that the hidden state is overparameterized...
alstroemeria313#1694: like it is not simply the probability of a bit being 1.
ethan caballero#6044: According to this post, industrial electricity in California costs $.11/kwh:
https://www.quora.com/What-is-the-annual-power-cost-to-power-a-crazy-supercomputer/answer/Louis-Vaught
($0.11/kWh)(8,760 hours/year)(10^6 kW/GW)(7.5 GW) = $7,227,000,000
So an ML supercomputer that cost $10 billion to build would additionally cost about $7 billion to run at max capacity for a year.
Some Point Process#3793: For one, google iirc is running on full green energy from their own wind/solar infra, so I guess they might have it easier with that portion of the upkeep cost
Some Point Process#3793: but yeah it's still a lot of windmills probably
ethan caballero#6044: https://www.energy.gov/eere/articles/how-much-power-1-gigawatt
Some Point Process#3793: yeah that too lol
Some Point Process#3793: A nuclear power plant is only like a few gigawatts
alstroemeria313#1694: https://arxiv.org/pdf/1807.06399.pdf Eheh... these people handcrafted weights for networks that solve parity for large n
alstroemeria313#1694: And showed that if you added a tiny amount of noise and trained you could end up at the ideal solution.
And that if you added more noise it totally failed to learn it.
Bc it couldn't find the right local minimum.
chilli#5665: I can also handcraft weights
chilli#5665: Lol
chilli#5665: Trivial to do so, in fact
alstroemeria313#1694: the handcrafted weights show that there exists a local minimum that does the thing.
alstroemeria313#1694: And that there is a v small basin of attraction around it, in their architecture, and if you do not start in it your net just does not ever learn.
alstroemeria313#1694: so i seem to have found an architecture where i can get to a solution with a random init. or at least something that has high accuracy.
Some Point Process#3793: And your soln was the recurrent connections right?
alstroemeria313#1694: it's kind of RNN like but with residuals
Some Point Process#3793: Ah. Yeah RNNs (specifically lstmss) have been shown to solve it. It came up a few times in my readings as a test problem
alstroemeria313#1694: I couldn't get an LSTM to for this length
alstroemeria313#1694: Or for like, length 20
alstroemeria313#1694: I got it for like 10.
Some Point Process#3793: But yeah the hidden state can be seen as a residual term although tbf I haven't seen pure residual connections
alstroemeria313#1694: the model got to 99.7% accuracy after 100k training steps.
uwu1#4864: is it like a resnet but at block t you also input sample t?
Some Point Process#3793: hmm, yeah I'd imagine that passing in the initial input at a particular unrolled layer would make it easier to learn whatever function (since it's just a "residual term" that has to be learned, as in normal resnets). That was the idealized picture for DEQs as well
alstroemeria313#1694: the token at timestep t, yes
Some Point Process#3793: Is the residual connection between the input token embedding at time t and the output of the recurrent cell or smth?
gabriel_syme#3220: what is the range of # of tokens for a dataset to be called decently sized? Thinking of task-specific datasets
Some Point Process#3793: Oh I see it's learning some accumulate over timesteps (state)
alstroemeria313#1694: yeah
alstroemeria313#1694: bc you can factorize parity into xor of the parity of the prefix and the current bit.
alstroemeria313#1694: so you just need to be able to learn a block that can do this
Some Point Process#3793: Yeah
alstroemeria313#1694: But apparently even this is hard
Some Point Process#3793: that's pretty clevah
alstroemeria313#1694: i am cleaning the code up to post it/tweet it
alstroemeria313#1694: and making sure it still trains
alstroemeria313#1694: it takes like an hour on an A100 rn
Some Point Process#3793: Oh wow
alstroemeria313#1694: but you can see it do better than chance way before
Some Point Process#3793: Is that colab?
alstroemeria313#1694: it's a random cloud box i am paying for
alstroemeria313#1694: datacrunch
uwu1#4864: i wonder if differentiable PCA/other dim reduction could take the place of quantization. Or does quantization have better expressive power?
anthony_fuller#1075: @alstroemeria313 another noob CV question if you don't mind. Using a ResNet, I have 12 input channels and 64 output channels, would it make sense to immediately go 12->64 in the channel dim, then most of the network? Or slowly go from 12->24->32->48->64, i.e. gradually up to 64 channels.
And is there any sort of width to depth ratio like there is in transformers? Or channels to depth? Or should I just add as many layers as I want?
alstroemeria313#1694: i normally do stuff like go up right away to 64 or smth then go to 128 then 256 etc, then down to the output channel count
anthony_fuller#1075: ah ok, I wasn't sure if going up to 256 then back down was common
alstroemeria313#1694: there probably is some sort of optimal width to depth ratio but i do not actually know it
kindiana#1016: because amd dedicated a lot of silicon to it
kindiana#1016: and nvidia doesn't
kindiana#1016: I think nvidia added fp64 tensor cores though
kindiana#1016: but it seems like amd is targeting more traditional hpc
kindiana#1016: whereas nvidia is pretty heavily bought into ml
kindiana#1016: maybe we will see 2 different skus for server gpus out of them at some point
chilli#5665: The hardware consideration is that it's quadratically harder to do multiplication based off of the... mantissa bits?
kindiana#1016: yes
chilli#5665: How much can the hardware between fp16/32/64 be shared?
kindiana#1016: its theoretically possible to share quite a lot I believe
kindiana#1016: but at some point the additional stuff you need to add to the datapath slows both down, so it's better off just to do different multipliers
chilli#5665: So for Nvidia GPUs, it's mostly just completely separate?
kindiana#1016: afaik yes
alstroemeria313#1694: is there some way to make SGD/Adam/etc avoid saddle points better
kindiana#1016: higher order optimizers
alstroemeria313#1694: ok but i have stochastic gradients
chilli#5665: I'm not sure that's true
chilli#5665: :thinkies:
chilli#5665: Don't a lot of people think higher order optimizers tend to fall into saddle points a lot more?
alstroemeria313#1694: Did anyone ever figure out how to do low rank saddle-free Newton without storing the dense Hessian estimate in memory
alstroemeria313#1694: Newton is attracted to them
alstroemeria313#1694: You can do trickery to make it repulsed though.
alstroemeria313#1694: Like if you take the eigendecomposition of the Hessian and take the absolute values of the eigenvalues and put it back together.
alstroemeria313#1694: Then you get a method that does Newton's in directions of positive curvature and reverse Newton's in directions of negative curvature.
alstroemeria313#1694: So it is actively repulsed from the saddle points and only finds minima.
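A minimal dense sketch of that trick (only viable when the Hessian fits in memory, as noted just below; the damping constant is an assumption):
```python
import torch

def saddle_free_newton_step(grad, hessian, damping=1e-4):
    # H = Q diag(evals) Q^T; replace the eigenvalues with |eigenvalues|
    evals, evecs = torch.linalg.eigh(hessian)
    abs_evals = evals.abs().clamp(min=damping)
    # step = Q |Lambda|^-1 Q^T g
    return evecs @ ((evecs.T @ grad) / abs_evals)
```
Subtracting this step gives Newton's method along positive-curvature directions and reversed Newton along negative-curvature ones, so saddle points repel rather than attract.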
alstroemeria313#1694: Oh wait this model is super small I can actually just store it
fclore22#7397: Hi! I know this discord is not supposed to be for tech support questions and stuff, but I was wondering if this group is really in charge of the content AI called "AI Article Spinner"?
https://aiarticlespinner.co/ai-content-gnenerator
fclore22#7397: Also I'm new here
alstroemeria313#1694: ESGD is supposed to help deal with saddle points but I have never gotten it to work well on any real problem
cfoster0#4356: No, it isn't
fclore22#7397: Well, thank you for responding.
I thought it would be related because this group works on AI code, but it must be otherwise.
AI_WAIFU#2844: in what context?
alstroemeria313#1694: well this model gets stuck at the same loss value for a long time
alstroemeria313#1694: and then starts moving down
AI_WAIFU#2844: hmm, maybe try adding some more noise and then slowly dialing it down
AI_WAIFU#2844: simulated annealing style
alstroemeria313#1694: mm
alstroemeria313#1694: i am trying a new model type now with only 1083 parameters
alstroemeria313#1694: it has in fact moved off of the saddle point at this point
alstroemeria313#1694: and making slow progress
ilovescience#3282: does SAM help?
alstroemeria313#1694: what's that?
ilovescience#3282: sharpness-aware minimization
alstroemeria313#1694: oh
alstroemeria313#1694: idk
alstroemeria313#1694: haven't tried yet
Spacecraft1013#5969: What's a good place to learn about diffusion models? I couldn't really find any good resources
cfoster0#4356: This is probably the best intro blog https://yang-song.github.io/blog/2021/score/
cfoster0#4356: This is a pretty good video on the concepts behind GLIDE in particular https://youtu.be/lvv4N2nf-HU
Kia#2550: This to https://youtu.be/gwI6g1pBD84
Kia#2550: GLIDE is general,But still a interesting video^^
Spacecraft1013#5969: thanks! I'll look into these
Kia#2550: <https://youtu.be/W-O7AZNzbzQ>
Kia#2550: Yannic's video on the paper from OpenAI "Diffusion models beats GAN's on image synthesis"
nostalgiahurts#3408: I also liked https://lilianweng.github.io/lil-log/2021/07/11/diffusion-models.html
Since I never studied SDEs, I found the DDPM formulation easier to understand
nev#4905: ah, don't you love a new year `2021-12-31 08:24:28.365997: F external/org_tensorflow/tensorflow/core/tpu/tpu_library_init_fns.inc:34] TpuEmbeddingEngineState_Create not available in this library.`
CRG#8707: Apparently it's more efficient to increase channels linearly instead of exponentially: https://arxiv.org/abs/2007.00992 https://cdn.discordapp.com/attachments/729741769738158194/926430633474002974/Screenshot_20211231-120255.png
StellaAthena#3530: At its current rate of growth, Attention is All You Need will surpass AlexNet in year-on-year citations next year https://cdn.discordapp.com/attachments/729741769738158194/926465509443129424/IMG_8708.jpg,https://cdn.discordapp.com/attachments/729741769738158194/926465509657022514/IMG_8707.jpg
StellaAthena#3530: From Twitter: https://twitter.com/jcjohnss/status/1476666328887009295?s=20
alstroemeria313#1694: i need to tweet the parity thing/post code
alstroemeria313#1694: i have gotten it to train reliably on different random seeds now at length 30
alstroemeria313#1694: apparently it works considerably better with hinge loss?
alstroemeria313#1694: i am using weight decay on it so
alstroemeria313#1694: eheh i can make the little model bigger now and it trains faster
alstroemeria313#1694: got it to 99.99% accuracy
tpapp157#3643: There was a NAS paper from Facebook a few years ago which suggested a sqrt-like channel curve was optimal.
alstroemeria313#1694: All examples correct on most batches now
CRG#8707: What examples does it fail on?
CRG#8707: Ime it usually is like all ones
alstroemeria313#1694: don't know yet, i have never seen accuracy high enough on the 30-bit problem to bother investigating before
alstroemeria313#1694: also since it is 30 bits it almost never samples all the same bit
alstroemeria313#1694: i switched back to binary cross entropy loss and got higher accuracy faster
alstroemeria313#1694: this is probably bc BCE never actually has a point where an example has a zero gradient
alstroemeria313#1694: whereas hinge is "satisfied" and has a hard zero gradient for examples classified correctly with enough margin.
alstroemeria313#1694: so you actually have to sample an incorrect example to get a nonzero gradient
alstroemeria313#1694: though at some point it just underflows
alstroemeria313#1694: ...I wonder if this works without an activation function
alstroemeria313#1694: apparently it is not training if i do that
alstroemeria313#1694: by "no activation function" i mean a tensor network type thing.
tpapp157#3643: Found it: https://arxiv.org/abs/2003.13678
alstroemeria313#1694: it also doesn't work with gelu
alstroemeria313#1694: i have to use leaky relu for some reason
alstroemeria313#1694: by "no activation function" i mean in a layer like this: ```python
class GatedUnit(nn.Module):
def __init__(self, act=None):
super().__init__()
self.act = act if act else nn.Identity()
def forward(self, input):
|
a, b = input.chunk(2, dim=1)
return a * self.act(b)
```
alstroemeria313#1694: i have tried several things for the activation in this and leaky relu has done best.
alstroemeria313#1694: (alpha=0.2, i did not try different values for alpha)
alstroemeria313#1694: mb should try prelu on it
tpapp157#3643: The entire purpose of the doubling channel width paradigm was originally to keep constant flops across a network's layers as the image resolution was decreased through striding. Nothing really profound about it.
tpapp157#3643: I guess it also makes a network's for-loop code pretty simple and clean.
chilli#5665: I assume it should work for the parametrized version too, right?
alstroemeria313#1694: which?
chilli#5665: Like, output the parity of the (input string xor secret fixed string)
chilli#5665: Or err, maybe not?
chilli#5665: Since your weights are tied?
alstroemeria313#1694: yeah.
alstroemeria313#1694: it would probably not work so well for that.
alstroemeria313#1694: i would have to untie them or add a second learnable untied input or something.
alstroemeria313#1694: My original untied version could probably do it
alstroemeria313#1694: But it didn't scale to length 30
alstroemeria313#1694: actually.
alstroemeria313#1694: If the net were overparameterized enough.
alstroemeria313#1694: It might still be able to do it idk.
alstroemeria313#1694: i can try it next.
alstroemeria313#1694: it... seems able to learn it for 20 bits
alstroemeria313#1694: loss going down on the 30 bit version now
alstroemeria313#1694: it has gotten to 99.9% accuracy on that.
alstroemeria313#1694: @chilli ...this doesn't do anything interesting though
alstroemeria313#1694: Since I only compute the parity for the whole string either I am learning parity(x) or not(parity(x))
alstroemeria313#1694: and which one is fixed
alstroemeria313#1694: and they are equally easy to learn.
chilli#5665: Hmm, nvm, I don't think it should be xor, it should be a masking operation or something like that
alstroemeria313#1694: Ah
chilli#5665: Since xor is commutative
alstroemeria313#1694: But masking is like, parity on a random subset
chilli#5665: I originally thought it should be masking but somebody mentioned that the paper said xor
chilli#5665: So I just said that :sadge:
alstroemeria313#1694: that should be easier too right
alstroemeria313#1694: but i can try it
chilli#5665: Yeah
alstroemeria313#1694: so i just change adding the secret to multiplying by it
alstroemeria313#1694: ooh, this is harder
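For reference, a sketch of data generation for that masked variant (shapes and names are assumptions):
```python
import torch

n, batch = 30, 32768
mask = torch.randint(0, 2, (n,)).bool()  # secret fixed subset of positions
x = torch.randint(0, 2, (batch, n))
y = x[:, mask].sum(dim=1) % 2            # parity over the masked subset only
inputs = x.float() * 2 - 1               # bits as -1/1 for the model
```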
alstroemeria313#1694: i can get it for 20 bits
chilli#5665: Interesting
chilli#5665: Same architecture?
alstroemeria313#1694: 30 is harder
alstroemeria313#1694: i untied the weights
chilli#5665: I'm still kinda surprised this works
alstroemeria313#1694: eheh~
chilli#5665: It's basically just an RNN right?
chilli#5665: Looks like an RNN :thinkies:
alstroemeria313#1694: yeah
alstroemeria313#1694: let's see if i can get 30 with random parity
chilli#5665: I assume the random string is just randomly sampled?
chilli#5665: I imagine it's be somewhat harder if it was biased towards 1
alstroemeria313#1694: it is uniform
chilli#5665: How many bits could you get for the non-random one?
alstroemeria313#1694: i have tried up to 30
chilli#5665: How many samples do you usually need to converge?
alstroemeria313#1694: like 15k steps maybe, each step is 32768 samples
alstroemeria313#1694: for non-random parity
StellaAthena#3530: What the actual fuck: https://twitter.com/karlhigley/status/1477004028785659905?s=20
Louis#0144: WTF
guac#4716: the default np type is float64 tho what else would it be lol
chilli#5665: Lol this also bit me when benchmarking pytorch and Jax code
chilli#5665: Jax default converts numpy arrays (even if fp64) to fp32
chilli#5665: Or fp16
chilli#5665: While Pytorch preserves the dtype
chilli#5665: Or something like that
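A quick check of that behavior (JAX downcasts because x64 is disabled by default):
```python
import numpy as np
import jax.numpy as jnp
import torch

x = np.zeros(3)                   # numpy defaults to float64
print(jnp.asarray(x).dtype)       # float32: JAX downcasts by default
print(torch.from_numpy(x).dtype)  # torch.float64: PyTorch preserves dtype
```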
uwu1#4864: then why not have np.float64 just be the default value for the arg rather than none
jbustter#5167: maybe it's a case for default dtypes for a function?
jbustter#5167: also, why the heck would you use np.dtype(None)?
guac#4716: not sure lol i was thinking more of floating type array creations and then i think the default precision is just your cpu arch setup so it's obv not going to be 32 :sus:
guac#4716: nulls are weird in general tho lol
jaredmadere#8538: Are there any notebooks or other projects that can translate an illustration/painting into an image that looks photographic?
I have a lot of images from different GANs/diffusion models where I love the composition but wish the textural qualities of the image resembled a photographic reality rather than something full of weirdly smooth gradients that resembles an illustrator painting- wondering if there is another nb I can pipe them through to translate them into something texturally photographic
StellaAthena#3530: Try #art
quinn#9100: has anybody seen the chart that's like "traditional programming is questions and rules to get answers, machine learning is questions and answers to get rules" or something like that?
quinn#9100: there's a nice visual way of describing it
cfoster0#4356: https://cdn.discordapp.com/attachments/729741769738158194/926617208065511464/images_1.png
quinn#9100: fantastic thanks
gabriel_syme#3220: I think Karpathy's blog talks a bit about that, only with different words and at a different level
gabriel_syme#3220: https://karpathy.medium.com/software-2-0-a64152b37c35
gabriel_syme#3220: I have an image, let me see if I can find it
gabriel_syme#3220: these ones. I typically hate 1.0/2.0/... stuff but I felt it fits here https://cdn.discordapp.com/attachments/729741769738158194/926682770992930826/unknown.png,https://cdn.discordapp.com/attachments/729741769738158194/926682771613679677/unknown.png
gabriel_syme#3220: (those were from Xander's YouTube channel btw)
Kia#2550: why does this remind me of Web3 things
gabriel_syme#3220: nah it's not really, also it's from 2017 (quite cool)
casejp#1265: (Sorry, wrong channel)
nshepperd#2316: one hot diffusion... sort of works, but this is bad https://cdn.discordapp.com/attachments/729741769738158194/926786755049451540/2022-01-01-213840_1829x434_scrot.png
nshepperd#2316: convolutions may be useless for text
nshepperd#2316: need to try transformers instead
alstroemeria313#1694: Anyone know where I can get a 2x 80GB A100 box?
alstroemeria313#1694: It has to be 80GB
alstroemeria313#1694: datacrunch is going to notify me when one becomes available
random person#5234: Like physically?
alstroemeria313#1694: no i just need access to one for some hours
alstroemeria313#1694: 80GB is harder to find
alstroemeria313#1694: i need to do some super high resolution style transfers for a commission
alstroemeria313#1694: and I am OOMing on a single 80GB A100
random person#5234: Yea.... I think A2 instances are all 40gb