zphang#7252: We have a project page here: https://github.com/EleutherAI/project-menu/issues
but I recommend just starting by hanging around the server!
camel_case#8962: what are the chances we could train a model on github code (and/or comments) to get something like this? https://copilot.github.com
bmk#1476: we already did
bmk#1476: https://huggingface.co/lg/ghpy_20k
camel_case#8962: wow
bmk#1476: and we plan on doing a bigger one soon™
camel_case#8962: incredible work
camel_case#8962: mind if i get a quick tutorial on how to use this model?
bmk#1476: basically the same as any other LM, except this is tuned on python code
StellaAthena#3530: @bmk That's supremely unhelpful. It would be much more helpful if you provided info about what HF class to use to import the model, and maybe even a code snippet showing how to do generation.
bmk#1476: well it's exactly the same as using any other HF model
StellaAthena#3530: Not all HF models are used the same way as each other.
bmk#1476: go google `huggingface model generate`
StellaAthena#3530: So saying "it's the same as a HF model" without saying *which* model is less helpful
bmk#1476: all HF models that can generate use the GenerationMixin which presents a common interface
StellaAthena#3530: Is this a GPT-Neo model?
StellaAthena#3530: Should some other class be used to load it?
bmk#1476: yes and it's exactly the same as a gpt2 model, just use AutoModel
StellaAthena#3530: FYI, that's information that you have never shared to anyone asking for help with this
bmk#1476: just like any other HF model
bmk#1476: you can literally google and see it
camel_case#8962: So i'll clone the transformers library, write a python script to load the ghpy model with transformers, and run?
Daj#7482: Should probably also clarify this is "just" a model trained to predict python code
Daj#7482: No plugin or fancy tricks
Sid#2121: which is exactly what copilot is :berk:
Daj#7482: But a plugin could be constructed around it
bmk#1476: I did, I said it's just the same as any other LM except tuned on python
camel_case#8962: https://huggingface.co/transformers/main_classes/model.html?highlight=generate
camel_case#8962: anyone have a spare boilerplate code snippet?
Sid#2121: ^
Sid#2121: just sub in neo 2.7B for whatever the ghpy ident is
camel_case#8962: thanks!
bmk#1476: https://huggingface.co/blog/how-to-generate
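For reference, a minimal boilerplate sketch along the lines of that blog post. This assumes the `lg/ghpy_20k` checkpoint loads like any other GPT-2/GPT-Neo-style causal LM via `AutoModelForCausalLM` (which is what's stated above); the sampling settings are illustrative, not recommendations.

```python
def generate_python(prompt, model_name="lg/ghpy_20k", max_new_tokens=64):
    """Sample a completion from a causal LM on the HF hub.

    model_name defaults to the checkpoint discussed above; any GPT-2/GPT-Neo
    style checkpoint should work the same way via the common generate() API.
    """
    # Imports are local so this sketch can be read without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    inputs = tokenizer(prompt, return_tensors="pt")
    # GenerationMixin provides .generate() on every HF model that can generate.
    outputs = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.8,
        max_new_tokens=max_new_tokens,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Usage (downloads the checkpoint, so not run here):
# completion = generate_python("def fibonacci(n):")
```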
bt#7597: if you want to make an extension out of it the code generation model there's this repo which is a good starting off point: https://github.com/hieunc229/copilot-clone
here's something someone has done using the hf inference api: https://github.com/ncoop57/code-clippy-vscode
bt#7597: is there any more information on the https://huggingface.co/lg/ghpy_20k and the other ghpy_x models anywhere? i'm assuming it's fine-tuned on the Python files from the GitHub portion of The Pile and the 20k means 20k training steps? or batches? any script on how the fine-tuning was performed? i.e. what hyperparameters were used and how the Python files were recognized.
tin482#5219: Has anyone seen this? AlphaFold2 generalizes zero-shot to multi-protein complexes with the simple addition of a long linker https://twitter.com/Ag_smith/status/1417063635000598528
natedog#8669: @camel_case we have been attempting to make an open source version of Copilot (model and vscode extension). Our model training didn't go well, but the extension did. You should be able to plug the ghpy model into it super easily to get it working with python only. Here's the link to the extension if interested: https://github.com/ncoop57/code-clippy-vscode
natedog#8669: Would people here be interested in getting multiple models fine-tuned on all Langs, not just python? We have the dataset already for it (https://the-eye.eu/public/AI/training_data/code_clippy_data/) and some training scripts, but the training was unstable and we didn't have that much compute (just single TPUv3 8 core) and we definitely want to continue the project and would appreciate any help
StellaAthena#3530: @natedog What’s the dataset you’re using?
bmk#1476: what filtering did you do to get this data?
bmk#1476: also how much data is this?
bmk#1476: I can train a 6B on this data if you want, but I'm worried it might not be big/clean enough
bmk#1476: even the pile GitHub data wasn't clean enough in retrospect
Indestructible Virgin#0777: Could anyone point me on the right direction on learning how to train models/fine tuned models
Indestructible Virgin#0777: Very new to this, but I've been messing around with prompts for the past week and I finally want to start training models for more accurate results
Indestructible Virgin#0777: Anything on scraping data and training it would be amazing.
Thank you in advance for your time.
AI_WAIFU#2844: Step 1. Read the FAQ
AI_WAIFU#2844: Step 2. Download the pile
EricHallahan#1051: (https://eleuther.ai/faq)
AI_WAIFU#2844: Step 3. Acquire a million+ dollars in compute.
Indestructible Virgin#0777: Thank you
Indestructible Virgin#0777: Also apologies for not reading the rules
natedog#8669: We created it
natedog#8669: Here is the info. We use this tool: https://seart-ghs.si.usi.ch/; it only has about ~1 million repos max to filter on. We added some additional filtering:
- <= 10 stars
- <= 2 contributors
- < 1 commit
- no forks
- must have a license
- > 70708 bytes repo size

This gives us about 500,000 repos, and then we merge these with the original repos from the Pile (removing dups), which gives around 670,000 repos that we ended up downloading (around a 99.6% success rate). We did a bit of testing for duplicate code in a subset of our dataset and found it was quite bad: ~28% near duplicates. We haven't finalized the dedup process yet, so we don't know how bad it will be for the entire dataset. Yeah, I'm guessing there are tons more, I'm just not sure how to get them easily. Maybe a ton of personal tokens to do all of the API calls to GitHub?
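As a toy illustration of what "near duplicates" means here, a shingle-Jaccard check over code strings (a simplified sketch, not the actual pipeline; real dedup at this scale would use something like MinHash/LSH rather than this O(n^2) comparison):

```python
def shingles(text, k=5):
    # Character k-grams; word-level shingles are also common for code.
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def jaccard(a, b, k=5):
    sa, sb = shingles(a, k), shingles(b, k)
    return len(sa & sb) / len(sa | sb)

def near_duplicates(docs, threshold=0.7):
    """Toy version: return index pairs whose shingle similarity exceeds threshold."""
    pairs = []
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            if jaccard(docs[i], docs[j]) >= threshold:
                pairs.append((i, j))
    return pairs

docs = [
    "def add(a, b):\n    return a + b\n",
    "def add(a, b):\n    return a + b  # sum\n",   # near duplicate of the first
    "import os\nprint(os.getcwd())\n",              # unrelated
]
dupes = near_duplicates(docs)
```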
natedog#8669: That would be awesome!!
kindiana#1016: Have your looked into ghtorrent and gharchive?
bmk#1476: did you use the big 600GB archive or the smaller one?
bmk#1476: for pile github
natedog#8669: Smaller one
bmk#1476: maybe try with the big one
bmk#1476: also i can get you more github repo urls, would that help?
natedog#8669: That would help a lot, but we also just need stable training script. Like our training was all over the place even with this small set
bmk#1476: I think I can get you a link to every single GitHub repo ever
bmk#1476: I can do training for you
bmk#1476: btw I looked at your thing on the eye and it looks like you committed lmd once every single file
bmk#1476: you should probably commit once every like 10k or 100k files or something
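The batched-commit pattern bmk is describing looks roughly like this (a stdlib-only stand-in for an lm_dataformat-style archive writer; the class and method names here are illustrative, not lm_dataformat's actual API):

```python
class BatchedWriter:
    """Buffer documents and commit (flush) once every `commit_every` files
    instead of once per file, which keeps the archive from fragmenting."""

    def __init__(self, commit_every=10_000):
        self.commit_every = commit_every
        self.buffer = []
        self.commits = 0

    def add_data(self, doc):
        self.buffer.append(doc)
        if len(self.buffer) >= self.commit_every:
            self.commit()

    def commit(self):
        if self.buffer:
            # A real writer would compress and append self.buffer to disk here.
            self.commits += 1
            self.buffer = []

writer = BatchedWriter(commit_every=10_000)
for i in range(25_000):
    writer.add_data(f"file_{i}.py contents")
writer.commit()  # flush the remaining tail
```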
natedog#8669: Okay then how about this. We do data stuff and get it ready for you and then you can handle the training?
bmk#1476: sounds good
bmk#1476: do you still want that list of repos?
natedog#8669: Yeah we were on a deadline and so the servers we had were ending soon so we patched something together
bmk#1476: ah yeah
bmk#1476: what are you guys doing, anyways?
AI_WAIFU#2844: If you could do that that'd be great
natedog#8669: Yeah please share
bmk#1476: do you guys have your own server or something where you coordinate
AI_WAIFU#2844: We were literally about to train one ourselves but our data wasn't up to snuff
natedog#8669: We're trying to make an open source Github Copilot essentially
Indestructible Virgin#0777: actual mad lads
natedog#8669: Yeah it was for this huggingface community effort and so they gave us a section to discuss in their discord, but we were gonna break off into our own that is dedicated to all things code generation
bmk#1476: ah nice
bmk#1476: invite pls? or do you not have it made yet
AI_WAIFU#2844: same
AI_WAIFU#2844: The discord AI community grows ever larger
natedog#8669: Will do, should be up soon
bmk#1476: actually @kindiana tells me he already has a list of all gh repos
bmk#1476: so uh he can send that over
natedog#8669: Sweet!!
bmk#1476: but yeah make as big a dataset as you feasibly can
AI_WAIFU#2844: Then send it to us
bmk#1476: (without needing to compromise on quality ofc)
bmk#1476: like literally multiple TBs is fine
bmk#1476: @kindiana with custom tokenizer, code probably compresses more than text
AI_WAIFU#2844: Also compute optimal training =/= deployment optimal training
natedog#8669: Yeah @Arto had a cool idea for how to handle the tokenizer
bmk#1476: also we can always literally just leave it training longer yeah
natedog#8669: Yeah we have a few ways of trying to control for quality. We were thinking of course deduping but there is also near deduping that's shown to be important for source code and we were thinking of running a vulnerability checker as well
natedog#8669: Also lot of debate over licensing, we were thinking just going for MIT and Apache licenced code if there is enough
CRG#8707: Looks like the code for the deduplicating LM paper is out: https://github.com/google-research/deduplicate-text-datasets
AI_WAIFU#2844: oh how convenient
CRG#8707: It'd be interesting to see what % of the pile is near duplicate: https://discord.com/channels/729741769192767510/747850033994662000/865554360637456404
StellaAthena#3530: A lot. Even if we assume the source data has 0 duplication the train set would have about a third duplication
CRG#8707: Because of the upsampling?
natedog#8669: that's about what we found for just the github
StellaAthena#3530: Yup. 100 units of text become 152 units of text via upsampling.
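A quick sanity check of the "about a third" figure, assuming upsampling turns 100 units of unique source text into roughly 152 units in the train set:

```python
# Everything past the first copy of each unit is duplicated content.
unique = 100
upsampled = 152
duplicated_fraction = (upsampled - unique) / upsampled  # ~0.34, i.e. about a third
```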
StellaAthena#3530: Vulnerability checkers are 💩
natedog#8669: T^T true, the tool also can check if the repo has CI/CD so that might be another thing we can filter by
Zac-HD#7996: Thinking about https://www.fast.ai/2021/07/19/copilot/
Zac-HD#7996: The "much of the training code is out of date" problem could be mitigated by refactoring tools like https://pypi.org/project/pyupgrade/ or https://pypi.org/project/shed/
Zac-HD#7996: Costs a little compute to set up, but not _that_ much.
bmk#1476: I think another thing worth trying is doing some kind of RL thing to make it write only code that compiles
bmk#1476: I swear there was a paper about something like this
bmk#1476: but I can't find it rn
natedog#8669: we had a member discussing that, not sure how it worked. I'll ping them to see how they were thinking of doing it
Cheesy#0202: @natedog :peepoWave:
Louis#0144: https://arxiv.org/abs/2106.04985
chase#6932: I saw something on twitter using EBMs to only make code that compiles
chase#6932: was just going to post that lol
AI_WAIFU#2844: EBMs or "we throw out all the samples that don't compile and call it an EBM."
Louis#0144: It’s genius
Cheesy#0202: Oh ya I saw this on twitter looking forward to the presentation. I think it's getting presented in NLP4P @ACL 2021
bmk#1476: yeah that's the one
bmk#1476: I'm adding it to my reading list
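The "throw out all the samples that don't compile" baseline is easy to sketch. For Python you don't even need a toolchain, since the built-in `compile()` gives you a syntax check without executing anything (this is only the rejection-filter half, not an actual EBM or sampler):

```python
def compiles(src):
    """Syntax-check a Python source string without executing it."""
    try:
        compile(src, "<generated>", "exec")
        return True
    except SyntaxError:
        return False

def rejection_filter(samples):
    # Keep only samples that parse; everything else is discarded.
    return [s for s in samples if compiles(s)]

samples = [
    "def f(x):\n    return x + 1\n",   # valid
    "def f(x)\n    return x + 1\n",    # missing colon -> rejected
    "print('ok')\n",                    # valid
]
kept = rejection_filter(samples)
```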
Teemochu#8740: wow a post by Jeremy
Teemochu#8740: oh there have been 3 this year
Teemochu#8740: last time I checked was May and he hadn't posted any since October
Teemochu#8740: oh I should add fastchan to my channel list
Teemochu#8740: The API usage part looks nice
Teemochu#8740: being able to search 100k people's code is underrated until you can do it
uwu1#4864: > I think another thing worth trying is doing some kind of RL thing to make it write only code that compiles
@bmk
one way to do this is to have an interpreter which generates the instructions as it executes :schmid:. If you stare at that for a bit it turns into the PLT problem of filling typed holes with values
uwu1#4864: also equivalent to the "amb" operator in a language with runtime codegen/eval
uwu1#4864: i have a wasm interpreter that does this but i haven't done the data collection and cleaning and hook up to ML model part...
uwu1#4864: (maybe wasm is a bad choice too and extending Common Lisp or some ML that already supports holes is better... but if one wants the model to metasearch to better search algos might as well have a low level language and a potentially huge set of programs to learn from)
AI_WAIFU#2844: This is the level of simplicity that I aspire to:
```py
def main():
    optimizer = optax.adam(0.001)
    dataset = MNISTDataset(64)
    network = MNISTDiffusionNetwork([300, 100, 1])
    dm = DiffusionModel(network, optimizer, dataset)
    for i in range(1000):
        loss = dm.train()
        if i % 10 == 0:
            print(i, loss)
    samples = dm.sample()
    display(samples)
```
bmk#1476: just the right amount of abstraction
bmk#1476: beautiful
bmk#1476: thank goodness there's no `dm.optimize(optimizer="adam", num_iters=1000, metric="acc",log_every=10)` bullshit
AI_WAIFU#2844: Yeah, it makes me realize that the one thing jax is really missing is a thin wrapper library that's really easy to use for the standard ML workflow where you're doing some flavor of SGD.
guac#4716: ah yes hiding `params` would be nice lol
bmk#1476: wen jax monads
AI_WAIFU#2844: I think you can basically have that with coconut no?
uwu1#4864: that code is like a well tended garden
chilli#5665: It’s not easy tbh
chilli#5665: It’s not like the flax developers are idiots
chilli#5665: It just isn’t an easy problem
zphang#7252: flax deprecating flax.optim in favor of optax is funny
uwu1#4864: i always feel uneasy though when you don't see the for loop over the data because you know one day you'll need it
bmk#1476: i love the for loop
bmk#1476: the `i % 10` however, im fine not seeing that
guac#4716: 1000 epochs on mnist bold
AI_WAIFU#2844: Like I think there needs to be a few levels of abstraction.
chilli#5665: I would also prefer not to have `i % 0`, since that'll throw a divide by zero 🙂
chilli#5665: but the logging itself seems ... harder to remove
chilli#5665: in a satisfactory way
AI_WAIFU#2844: There should be a jax library that does bog standard nn training workflow and does it well.
bmk#1476: i think you should have a separate function or a class or something that extracts the % whatever garbage
bmk#1476: it always clogs up my code and i dislike it
chilli#5665: and then that separate function doesn't compose with your code
chilli#5665: lol
chilli#5665: I think you're underestimating the complexities in a "bog standard nn training workflow"
chilli#5665: if people were really happy with something like that
chilli#5665: more researchers would like Keras
AI_WAIFU#2844: yeah but
1. Keras is just bad
2. The point is to do the most standard workflow, if you need to do something more complicated you use something else, just like when you need to do anything non-trivial you don't use keras.
zphang#7252: the flax examples are pretty good, I learned a lot from just studying them closely
zphang#7252: it's easier to see google libraries as "libraries developed for internal use, that are incidentally open-sourced"
uwu1#4864: i want like a tqdm for ml that has plotting support
AI_WAIFU#2844: Like even just look at optax:
you instantiate an optimizer,
you call opt.init(pytree) and then you get a state,
then you gotta manually lug this state and the network state around. I don't have to do that in pytorch, and neither jax nor its ecosystem will let you put it somewhere sane.
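The state-threading pattern being complained about looks like this (a stdlib-only caricature with toy SGD and plain floats in place of pytrees; the shape of the API mirrors optax's init/update pair, but this is not optax):

```python
def sgd(lr):
    """Return (init, update) pure functions with explicit optimizer state."""
    def init(params):
        return {"step": 0}          # optimizer state lives outside any object
    def update(grads, state, params):
        new_params = {k: p - lr * grads[k] for k, p in params.items()}
        new_state = {"step": state["step"] + 1}
        return new_params, new_state  # caller must lug both states around
    return init, update

init, update = sgd(lr=0.1)
params = {"w": 1.0}
opt_state = init(params)
for _ in range(3):
    grads = {"w": 2.0 * params["w"]}          # gradient of w**2
    params, opt_state = update(grads, opt_state, params)
```

Every training step has the `new_state1, new_state2 = function(old_state1, old_state2)` flavor described later in this discussion.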
AI_WAIFU#2844: You gotta make your own containers
AI_WAIFU#2844: Oh and same for the rng
guac#4716: coming from pytorch that shit pisses me off lol
uwu1#4864: put it in a closure
uwu1#4864: or a few closures
AI_WAIFU#2844: no, you put it in an object
AI_WAIFU#2844: a fucking class
guac#4716: does anybody actually like functional programming
uwu1#4864: i never took that one
bmk#1476: :gameryes:
AI_WAIFU#2844: This is the primary benefit/downside of OOP. You don't need to see everything. You can just mutate the state of things.
uwu1#4864: you can do the same here too
uwu1#4864: by newtyping for example
bmk#1476: honestly having state in exactly one context object isnt the worst idea
EricHallahan#1051: :thisup: guy.
guac#4716: this is just react/redux for DL
bmk#1476: just keep the state mostly slim
bmk#1476: you dont want.. *this*:
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/867249887414321182/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/867250133421916201/unknown.png
bmk#1476: this object stole months of my life
AI_WAIFU#2844: Like the way around this is what I did in my code. You make sub items for the different related configs, and then pass those into your bigger container.
zphang#7252: I tried that for a bit, until all the weird research ideas broke down all my abstractions
zphang#7252: that's when I abandon library and rewrite
zphang#7252: I no longer know what a task is
bmk#1476: need a new context object? have fun :mesh: https://cdn.discordapp.com/attachments/729741769738158194/867250545786224670/unknown.png
uwu1#4864: we use attrs for that works well for me but i prefer the dynamic ones where you request hps at runtime
AI_WAIFU#2844: yeah, a possible solution might be context locals
AI_WAIFU#2844: ```py
with getConfig(path) as cfg:
#your code here
```
uwu1#4864: like
```
@cooldataclass
class mycoolexperiment:
lr = loglinear(some range)
lm = lm_config(defaults)
decoder = choose([beam_search, greedy])
```
but its true @zphang that this method is an ideal rather than reality...
AI_WAIFU#2844: yeah, research code will tend to break abstractions anyways
AI_WAIFU#2844: but 99% of people don't need to do cutting edge research
zphang#7252: those people can use keras :berk:
AI_WAIFU#2844: and that's my point
AI_WAIFU#2844: there's no keras for jax
uwu1#4864: one could make a node editor gui for jax
AI_WAIFU#2844: although I feel like if you told the jax devs this they would vomit
zphang#7252: I guess I would say: jax isn't meant for those folks, not yet anyway
uwu1#4864: since visual editing for dag languages is normal it seems now a days
zphang#7252: I don't think any of the pytorch keras-type wrappers really took off either
AI_WAIFU#2844: pytorch is good enough
AI_WAIFU#2844: Again, it manages state cleanly with modules and optimizers
AI_WAIFU#2844: nothing in jax does this
guac#4716: haiku/flax modules are a lieeeee burn em all down
AI_WAIFU#2844: The reason keras exists at all is because of tf.Session
AI_WAIFU#2844: that was bullshit
zphang#7252: I think it's just too early for one to emerge yet
zphang#7252: JAX itself is pretty cutting edge, and even its NN libraries are still in flux
EricHallahan#1051: I never wrapped my head around `tf.Session`.
AI_WAIFU#2844: I think that people writing libraries for jax feel they need to keep things pure functional because that's hip, but in practice it's really inconvenient, especially in python, a language that's not equipped for it.
bmk#1476: idk I dislike context managers personally
bmk#1476: isn't tf.Session needed for holding the graph
AI_WAIFU#2844: I can't remember and I don't want to
AI_WAIFU#2844: But I think it was somehow worse than that
bmk#1476: uh oh
AI_WAIFU#2844: like there was a separate graph object and session object
zphang#7252: mm isn't JAX functional by default though? it feels like if you want to make it "inherently stateful" like pytorch, you need to go out of your way to hide the passing around of state from the user
bmk#1476: but then what did session do
zphang#7252: a session can use a graph, or something
AI_WAIFU#2844: yeah and that's exactly what i did in the code I posted
AI_WAIFU#2844: the dataset, network, and optimizer all need to be stateful
AI_WAIFU#2844: and that state is managed behind the scenes
guac#4716: there's not that many stateful pieces so it shouldn't be much to hide
zphang#7252: I don't know how useful I find that personally, but maybe I'm not in the right frame of mind for it
AI_WAIFU#2844: exactly
guac#4716: readability soars lol
zphang#7252: I think choosing the right things to hide is not so easy
cfoster0#4356: ~~just hide the fp stuff~~
cfoster0#4356: Minus vmap/pmap
AI_WAIFU#2844: like with the traditional method, your opt-state, rng, and net-state all need to be front and center, and each function needs to do some flavour of `new_state1, new_state2 = function(old_state1, old_state2)`
AI_WAIFU#2844: when you could just do `container.function()` and be done with it
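The `container.function()` alternative can be sketched in plain Python: keep the pure `new_state = f(old_state)` step on the inside, but wrap it in a class so callers never see the threading (toy SGD on floats; names are made up for illustration):

```python
class Trainer:
    """Container that hides functional state threading behind method calls."""

    def __init__(self, params, lr=0.1):
        self._params = dict(params)   # all mutable state lives in the container
        self._lr = lr
        self._step = 0

    def _pure_step(self, params, grads, step):
        # Inside, everything is still new_state = f(old_state)...
        new_params = {k: p - self._lr * grads[k] for k, p in params.items()}
        return new_params, step + 1

    def train_step(self, grads):
        # ...but from outside, the caller just mutates the container.
        self._params, self._step = self._pure_step(self._params, grads, self._step)
        return self._params

trainer = Trainer({"w": 1.0})
for _ in range(3):
    grads = {"w": 2.0 * trainer._params["w"]}   # gradient of w**2
    trainer.train_step(grads)
```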
bmk#1476: ~~what if you put the state in a State monad~~
AI_WAIFU#2844: Well that's my point, in haskell you can do that and it's clean.
bmk#1476: then you can just do `state.bind(function)`
AI_WAIFU#2844: in python you can't
AI_WAIFU#2844: hmm
bmk#1476: why not
zphang#7252: I think they all get wrapped up in the optimizer
zphang#7252: which carries the params, the opt-state, and depending on the library also handles the RNG
AI_WAIFU#2844: elaborate
bmk#1476: no idea lol do i look like i understand how monads work
chilli#5665: I don't agree
AI_WAIFU#2844: ok then I stand by my point that python already provides a mechanism for when you need a few functions operating on some related bits of state
chilli#5665: It really isn't easy to do this
zphang#7252: I think pytorch is unusually well-designed as far as libraries go
chilli#5665: Even for regular users, they'll often find that they need to break the training loop abstraction
chilli#5665: and it really isn't a good experience to have this discontinuous jump from "wow everything's so easy" to "wtf is all this stuff I need to manage"
chilli#5665: If your answer to people who want to say, reuse the first 5 layers of their network is
chilli#5665: "haha go switch your code over to haiku"
chilli#5665: it's not going to be very popular
AI_WAIFU#2844: that's fair but I still stand by my point that nothing built on top of jax containerizes state, and that holds the library back.
AI_WAIFU#2844: pytorch is really good about that
AI_WAIFU#2844: gradients, parameters, rng, optimizer state...all hidden away from you
chilli#5665: I mean, part of the issue here
uwu1#4864: it complicated the pytorch serialisation story
chilli#5665: is that some of it is impossible
uwu1#4864: but they eventually made a nice compromise and made it work together
chilli#5665: I'm not sure you can make an optimizer API like PyTorch
chilli#5665: where you pass a reference of the model's parameters to an optimizer
chilli#5665: and then you call `optimizer.step()`
AI_WAIFU#2844: why not?
uwu1#4864: params += optimizer.step()
chilli#5665: Because that would require you to hold 2 references to a tensor
chilli#5665: and to allow aliasing + in-place updates
chilli#5665: Like, you basically need this
```
a = param_tensor
...
a += 0.1 * grad
<param_tensor reflects updates to a>
```
uwu1#4864: params = params + optimizer(loss(model(params), data), params)
chilli#5665: and this is explicitly not possible in Jax
chilli#5665: because you don't have aliasing
AI_WAIFU#2844: can you go into more detail?
chilli#5665: There is no notion of in-place updates or aliasing in Jax
chilli#5665: Like, to have 2 tensors that both refer to the same memory
uwu1#4864: you don't need the model to access the tensors when the optimizers update is being applied tho
zphang#7252: I think he means you can't have the params stored both in a model object as well as a separate optimizer object?
chilli#5665: yes, that's one of the things that no aliasing implies
AI_WAIFU#2844: Like JAX must have some sort of inplace update mechanism
zphang#7252: whoops meant "can't"
chilli#5665: no
uwu1#4864: Ohhhh i see what you mean now
chilli#5665: they do not
chilli#5665: and this is an explicit design decision
uwu1#4864: if they didn't choose to go that path then they might as well just be pytorchnumpychainergluon
AI_WAIFU#2844: then they should have some kind of optimization to reuse buffers right?
uwu1#4864: well for autograd you can't have inplace updates
AI_WAIFU#2844: How do they keep from having to allocate new space every time you call a function?
chilli#5665: yeah, but this is on the XLA side
AI_WAIFU#2844: Well there you go then
AI_WAIFU#2844: just leverage that
chilli#5665: what, no?
bmk#1476: even if there is, im guessing that optimization is probably intentionally tucked away in a way to be hard to use
chilli#5665: XLA does not represent it
uwu1#4864: like pytorch has an impure api but it builds up a dynamic computation graph made up of pure operators
chilli#5665: XLA does it at the end
chilli#5665: none of the stack in the middle can handle aliasing/in-place updates
AI_WAIFU#2844: right, but that doesn't matter so long as the compiler can deal with it
AI_WAIFU#2844: just write you code without inplace updates
Teemochu#8740: No no no no no *runs through the door and bursts outside*
AI_WAIFU#2844: but in such a way that the compiler can optimize away allocating a new buffer every time
Teemochu#8740: This is worse than my PHP from high school
chilli#5665: these semantics still aren't possible
uwu1#4864: why not you can do it in the same way as pytorch
Teemochu#8740: (Self-made project, gave me a few grand in ad revenue over the years once you deduct hosting costs)
majnemer#9957: XLA provides operations like `dynamic_update_slice` and `scatter`
uwu1#4864: have an impure api that supports aliasing, build up a dynamic graph of pure operators (jax) and run it
uwu1#4864: so the user thinks they're inplace updating but its not
majnemer#9957: Those operations behave as-if they produce a new array which is backed by a new buffer.
AI_WAIFU#2844: like in your api just go
```py
class optimizer:
    @jax.jit
    def step(self):
        self.state = function(self.state)
```
majnemer#9957: However, the compiler tries to unify the source and destination where possible.
majnemer#9957: It is possible to construct cases where a copy must appear in order to handle uses which want the original buffer but must be scheduled after the update.
majnemer#9957: ```
x = ...
x_prime = lax.dynamic_update_slice(x, ...)
y = x * x_prime
```
uwu1#4864: we need jax in rust so we can use the linear types and do it at the compiler level :3
cfoster0#4356: Possibly of interest to this crowd, given the discussion https://youtu.be/XuTzJCvE62M
cfoster0#4356: (event is in about a week)
AI_WAIFU#2844: well there we go, should totally be possible to write an API that from outside looks stateful, inside is written with pure functions and no aliases, and at the compiler level behaves statefully just like how it looks from outside.
uwu1#4864: bear in mind that you probably don't actually want in-place updates with mutable aliasing at the bottom "most compiled" level, even if the semantics allow it: when you want speed, writing to where you're reading from could be worse than writing to somewhere with fewer cache conflicts
AI_WAIFU#2844: right?
chilli#5665: I still disagree
chilli#5665: lol
chilli#5665: First of all, those functions that david mentions still don't give you the right semantics
chilli#5665: AFAIK
chilli#5665: Like, we're not talking about performance considerations here
chilli#5665: XLA can reuse buffers perfectly fine (much better than dynamic PyTorch), but it still won't give you the right semantics.
majnemer#9957: What are the desired semantics?
chilli#5665: Like, you have code that basically looks like this:
```
class Foo:
    def __init__(self, params):
        self.a = params

class Bar:
    def __init__(self, params):
        self.b = params
    def update(self):
        self.b += 1
```
And you want
```
params = jnp.zeros()
foo = Foo(params)
bar = Bar(params)
bar.update()
print(foo.a)  # prints out 1
```
AI_WAIFU#2844: Why is that hard? Just make a wrapper for params at the python level such that both foo and bar point to it, but it points at params
chilli#5665: Then params is no longer a jnp.ndarray 🤔
AI_WAIFU#2844: yeah but who cares since you're only interacting with it indirectly through bar.update()
AI_WAIFU#2844: which can deal with the indirection
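One hypothetical shape of that wrapper: a one-element mutable cell that both objects hold a reference to, so an update made through one side is visible through the other, without any array-level aliasing (pure stdlib sketch; `Cell`, `Model`, and `Optimizer` are made-up names):

```python
class Cell:
    """Mutable box holding an (immutable) value; shared by reference."""
    def __init__(self, value):
        self.value = value

class Model:
    def __init__(self, cell):
        self.a = cell
    def forward(self, x):
        return x + self.a.value   # reads the params through the indirection

class Optimizer:
    def __init__(self, cell):
        self.b = cell
    def update(self):
        # Functionally *replace* the value; no in-place array mutation needed.
        self.b.value = self.b.value + 1

params = Cell(0)
model = Model(params)
opt = Optimizer(params)
opt.update()
# model now sees the updated params, because both hold the same Cell.
```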
Louis#0144: PipeMare is about horses right?
AI_WAIFU#2844: \*cough\* pytorch \*cough* Variables \*cough\*
chilli#5665: It's not just `bar.update()`
chilli#5665: Presumably, Foo (which is the model in this case), also needs to use params when it's actually doing the forward pass
uwu1#4864: you could have
```
def wrap(model, params):
    curr = [params]
    def wmodel(x):
        return model(x, curr[0])
    return wmodel, optimizer(curr)

model, optim = wrap(model, params)
loss = model(x)
optim.step(gradient(loss))
```
AI_WAIFU#2844: Sure, and it also has a reference to the wrapper that it can leverage to evaluate the forward pass
AI_WAIFU#2844: I don't see the problem.
chilli#5665: You'll still need to pull out these values in order to pass them to the JIT and stuff as arguments. In which case, I think you lose a lot of your advantage.
chilli#5665: For example, you can't do this
chilli#5665: ```
a = [jnp.zeros(3)]
b = a
def f(x):
a[0] += 1.0
f(a) # a = 1, b= 1
|
jit(f)(a) # a = 1, b = 1
```
Teemochu#8740: What's PipeMare? ~~Sounds sexy.~~
uwu1#4864: I don't think jit would even work in that case?
AI_WAIFU#2844: yeah well don't jit functions with side effects
chilli#5665: yeah, so now you need to make sure that you're only jitting code that doesn't modify your wrapper
uwu1#4864: your whole model can still be jitted
chilli#5665: and since you want to jit your model's forward pass/optimizer steps, you need to make sure that you pull your params out of your wrapper before your optimizer/model
AI_WAIFU#2844: I don't see how that's a problem for 99% of workflows. Just put the wrapper around stuff last.
chilli#5665: yeah, I agree
chilli#5665: but at this point, you have an API that looks a lot more like the existing (Jax) ones
chilli#5665: wdym
bmk#1476: just dont jit your wrapper
bmk#1476: jit first and then wrap
uwu1#4864: non-interior mutability is just not composable
bmk#1476: you dont need to, just put it on the very outside
AI_WAIFU#2844: ^
chilli#5665: the issue remains that this now imposes pretty strict constraints on what you can and cannot function transform. If you ever jit one of the wrappers, this'll cause all of your aliases to go out of sync.
chilli#5665: If you only want to put this on the top, then sure, I guess it would work.
chilli#5665: I'm not sure that it would be any better than the existing Jax nn APIs
chilli#5665: Like, this is the antithesis of a composable system lol
nshepperd#2316: You folks might be interested in my https://github.com/nshepperd/jaxtorch, although it's really more of a concept than an actual library at this point
kindiana#1016: jaxtorch
functorch https://cdn.discordapp.com/attachments/729741769738158194/867285062650953728/C-658VsXoAo3ovC.png
kindiana#1016: :berk:
kindiana#1016: I do find it funny that people are interested in going both ways from stateful to functional with both pytorch and jax
nshepperd#2316: the idea is that you can have 'parameters' in pytorch-like modules that are just identifiers used to look the actual parameters up in the actual dictionary that contains all tensors
nshepperd#2316: so instead of `def forward(self, x): return x @ self.w + self.b` it's `def forward(self, cx, x): return x @ cx[self.w] + cx[self.b]`
nshepperd#2316: `cx` have all sorts of stateful-like stuff, including parameters, rng state and mutables buffers included into it in a way that the overall computation is pure and jit-able
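A toy stdlib rendering of that idea: modules hold parameter *identifiers*, and `forward` looks the actual values up in a context dict, so all state flows in through `cx` and the computation stays pure (plain floats instead of tensors; names are made up, not jaxtorch's actual API):

```python
import itertools

_counter = itertools.count()

class Param:
    """An identifier, not a value: the weights live in the context dict."""
    def __init__(self, name):
        self.key = f"{name}_{next(_counter)}"

class Linear1D:
    def __init__(self):
        self.w = Param("w")
        self.b = Param("b")
    def forward(self, cx, x):
        # No hidden state: everything comes in through cx.
        return x * cx[self.w.key] + cx[self.b.key]

layer = Linear1D()
cx = {layer.w.key: 3.0, layer.b.key: 1.0}   # the "actual dictionary" of params
y = layer.forward(cx, 2.0)                   # 2 * 3 + 1
```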
bmk#1476: i think the problem is that functional and imperative styles both have some areas they do *really good* in, but they dont mix very well with each other
bmk#1476: you have languages like haskell that are full functional with some imperative tacked on by piping state through monads, and languages like python which are imperative but tacked a bit of fp on with map/lambdas/etc
bmk#1476: i wonder if anyone's come up with a language that's mainly fp with imperative tacked on but also isnt as much of a headache as haskell
bmk#1476: clojure *might* fit the bill but i dont know enough about it
nshepperd#2316: well there is imperativeness at different levels. a python function that iterates over a loop doing x += f(x,y) is imperative but from the jax.jit point of view (tracing) it's more like an imperative procedure which creates/defines a pure function
nshepperd#2316: the imperativeness of repeatedly updating the locals dictionary is 'erased' by execution/tracing
nshepperd#2316: i guess that's what you'd call interior mutability
chilli#5665: Right, that’s what I mean by the “borders”
chilli#5665: Well, tbh, most of the functorch functionality is not necessarily related to going from stateful to functional
chilli#5665: Like, I think I want to provide just a BatchedTensor construct
chilli#5665: That you can construct anywhere
|
chilli#5665: That way you don’t need to wrap things in a vmap in order to get autobatching functionality
chilli#5665: It’s kinda like the implicit autograd in Pytorch vs a grad transform in Jax
chilli#5665: Tracing is actually very nice in that way
chilli#5665: If you think about how pytrees + tracing works it’s very elegant imo
zphang#7252: scala is... something
Gurkenglas#7362: if one has some precise data and infinite data with some noise added, when in training should one use the better data?
noodlecake#7091: Hi! Thought I'd introduce myself. I mostly just joined to have a peruse. I'm an artist who know very little about coding or machine learning beyond a very basic surface level, but I have been playing around with novelAI and some text to image generators to find inspiration/a direction for some of my work.
Fight The Power#4451: Hi!
Fight The Power#4451: Is anyone alive...
Louis#0144: no
Fight The Power#4451: Oh no....
Louis#0144: we're all dead inside
Louis#0144: have been for a while now
Fight The Power#4451: https://tenor.com/view/kool-aid-man-kool-aid-juice-gif-8291586
inox#5400: https://cdn.discordapp.com/attachments/729741769738158194/867425390552547379/E6J05yKVkAI6ohV.jpg
StellaAthena#3530: Excuse me, you’re committing the heresy of partialism
inox#5400: honestly heretic is a good aesthetic
StellaAthena#3530: 10/10 do recommend
Fight The Power#4451: https://tenor.com/view/philosophy-lobster-gif-13404154
CarsonPoole#0640: are there plans to do an Eluther version of DALLE?
|
CarsonPoole#0640: sorry if this has been discussed before, there're just a lot of channels in here
EricHallahan#1051: I'll quote this comment from a few days ago which explains it better than I can.
https://discord.com/channels/729741769192767510/730484623028519072/866801285738790922
Fight The Power#4451: Whats dalle
EricHallahan#1051: https://openai.com/blog/dall-e/
https://arxiv.org/abs/2102.12092
Candle#3905: I am very new to AI. Is there some sort of AI playground that will teach you everything you need to know?
Candle#3905: I mean not
Candle#3905: EVERYTHING
ym#0104: fastai
Candle#3905: Okay I am looking into it right now thanks
EricHallahan#1051: You can also look at the resources in #communities.
AI_WAIFU#2844: Idea: Do diffusion on top of a rev-net
inox#5400: rev-net or flow?
AI_WAIFU#2844: does one preserve volume or something?
cfoster0#4356: I don't think either of them necessarily preserves volume
cfoster0#4356: What are you trying to do? Or is this just for fun
AI_WAIFU#2844: Just tossing around ideas
cfoster0#4356: Ah ok. Yeah diffusion on a revnet seems desirable for the usual reasons
Mandelion#1648: Anyone familiar with normalizing flows who's brain I could pick? Been stuck on something for a bit
|
AI_WAIFU#2844: what's up?
Mandelion#1648: This is the writeup, fairly simple but feels like there must be a better way than what I am doing now https://cdn.discordapp.com/attachments/729741769738158194/867581892567367710/unknown.png
Mandelion#1648: The points are actual points in a point cloud btw, the set being the whole cloud
AI_WAIFU#2844: what is the relationship between x and y?
AI_WAIFU#2844: is x just a point in the cloud?
Mandelion#1648: x and y are point clouds sampled at different times, x_n is a point in the cloud
Mandelion#1648: So conditioning on previous time to do change detection under the conditional distribution
Deleted User#0000: what i dont understand is the p(y0,y1,...|y) part
Deleted User#0000: that looks like p(y|y) ?
flowpoint#7450: :harold: im not sure if you're aware, but i was digging through the pile's val.jsonl and found some pretty insane samples.
line 91037 is like 100k characters repeating `The game` (sample is from openwebtext)
and 91054 is i believe korean (ca 4 mb of it) and looks like `\u00ec\u0097\u0090 \u00eb\u008c\u0080\u00ed\u0098\u0095\u00ec\u0084\u00a0 `(sample is from ubuntu-irc)
^ it also has <imsu> tags like : `<imsu> \u00ec\u0095\u0088\u00eb\u0085\u0095\u00ed\u0095\u0098\u00ec\u0084\u00b8\u00ec\u009a\u0094 ^^\n<imsu>`, i couldn't find the meaning,:harold:
wanted to let yall know
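A sketch of the kind of scan that turns these samples up (field name assumed: Pile documents are JSON objects with a `"text"` key; the thresholds are arbitrary illustrations):

```python
import json

# Walk .jsonl lines and flag suspiciously long or highly repetitive documents.
def flag_weird(lines, max_len=1_000_000, min_unique_ratio=0.01):
    for lineno, line in enumerate(lines, start=1):
        text = json.loads(line)["text"]
        words = text.split()
        if len(text) > max_len:
            yield lineno, "too long"
        elif words and len(set(words)) / len(words) < min_unique_ratio:
            yield lineno, "highly repetitive"

# e.g. a document that just repeats "The game" thousands of times:
doc = json.dumps({"text": "The game " * 10_000})
print(list(flag_weird([doc])))  # → [(1, 'highly repetitive')]
```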
kommy#6565: What GPUs does the batbot mchorse bot use? Sorry if this information is already somewhere else in the server.
Mandelion#1648: Yeah that's what it is essentially, the points are put through the flow conditioned on a latent representation of the whole cloud. Seems odd but that is why I'm looking for an alternative 😅 . Only using this to get a standard deviation and mean to judge the likelihoods by.
Deleted User#0000: i mean p(y|y) is not a thing that makes a lot of sense to use. the p(x|y) where x and y are two diffrent time points does make sense tho
Deleted User#0000: but then im not sure why you are looking for an alternative to NFs. What's failing with NFs?
Mandelion#1648: Yeah I'm not being clear enough, my bad. Basically I have a trained NF that conditions on some cloud t_0 and can then evaluate individual points (log likelihood). So I can calculate P(t_1 | t_0), which gives me those likelihoods for each point, but I need a way to classify each of these as changed/anomalies. In order to do that I need some kind of statistic to compare the likelihoods to, and what I am using right now is the mean and std of p(t_0 | t_0), so basically the likelihoods the model assigns to the points of the initial time given the same cloud.
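That statistic amounts to a z-score against the reference likelihoods. A toy sketch of the thresholding step (numbers invented, standing in for per-point log-likelihoods from the flow):

```python
import statistics

# Flag points of t1 whose log-likelihood under p(. | t0) falls more than
# k standard deviations below the reference stats computed from p(t0 | t0).
def flag_changed(ref_loglik, new_loglik, k=3.0):
    mu = statistics.mean(ref_loglik)
    sigma = statistics.stdev(ref_loglik)
    return [ll < mu - k * sigma for ll in new_loglik]

ref = [-1.0, -1.2, -0.9, -1.1, -1.0]   # likelihoods of t0 points given t0
new = [-1.05, -9.0]                    # second t1 point is way off-distribution
print(flag_changed(ref, new))  # → [False, True]
```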
|
CarsonPoole#0640: quick question about huggingface--what's the best way to generate text for GPTNeo or something, but only using the forward method (meaning I can't use the generate method)
bmk#1476: why can't you use generate?
CarsonPoole#0640: mostly just as a learning exercise
CarsonPoole#0640: like obviously I _can_, I just want to implement a simple version of the behavior there
CarsonPoole#0640: so another way to phrase the question would be: what does the generate function actually do?
cfoster0#4356: ~~that's something only Thomas Wolf himself knows~~
cfoster0#4356: The generate function does a million and one things
cfoster0#4356: The most basic is running a token through the model and sampling from the output logits, in a loop
CarsonPoole#0640: https://huggingface.co/transformers/internal/generation_utils.html#transformers.LogitsProcessor
CarsonPoole#0640: is this function related?
CarsonPoole#0640: `LogitsProcessor`
CarsonPoole#0640: can you elaborate on this?
cfoster0#4356: I can't help with understanding exactly how HuggingFace does it, you'd have to ask them. The basic concept behind sampling from an autoregressive language model, though, is pretty simple
cfoster0#4356: Lemme find a link
bmk#1476: yeah the huggingface code is a bit of a mess
bmk#1476: mostly because the generate function does everything but the kitchen sink
bmk#1476: so there's just a ton of complexity
CarsonPoole#0640: yeah adding things like temperature/etc seems like a lot
cfoster0#4356: Maybe this https://medium.com/deep-learning-with-keras/sampling-in-text-generation-b2f4825e1dad
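The loop described above, stripped to its essentials: run the ids through the model, sample from the next-token logits, append, repeat. This sketch uses a toy stand-in model; with HF you'd take `model(input_ids).logits[:, -1, :]` for the logits (and `generate` adds caching, beam search, logits processors, etc. on top).

```python
import math
import random

# Temperature sampling from a list of raw logits.
def sample_next(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]  # subtract max for stability
    probs = [e / sum(exps) for e in exps]
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# The core autoregressive loop behind any `generate`-style method.
def generate(model, prompt_ids, max_new_tokens, temperature=1.0):
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        ids.append(sample_next(model(ids), temperature))
    return ids

# Toy stand-in model that strongly prefers token (last_id + 1) mod 4:
toy = lambda ids: [100.0 if i == (ids[-1] + 1) % 4 else 0.0 for i in range(4)]
print(generate(toy, [0], 3))  # → [0, 1, 2, 3] (with overwhelming probability)
```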
nev#4905: I made a voice cloning colab using a different method from Real-Time-Voice-Cloning
|
nev#4905: but it's more for voice conversion
nev#4905: where can I share that?
EricHallahan#1051: #sp3 is kinda in hibernation right now (:guilty:), so I wouldn't mind if you dropped it there.
nev#4905: I'll tidy it up tomorrow then
nev#4905: ofc if I don't have to wake up early to get on a train :guilty:
nev#4905: otherwise I'll DM it to you
alexyz#3459: OOH! please share 😄
alexyz#3459: i've been looking for something like that for a while now
nev#4905: https://drive.google.com/drive/folders/1ZWRJbtwpDsJPM3cViHj_IAQcfYeypwFT?usp=sharing
nev#4905: results for now
nev#4905: it's more of a voice conversion actually
kurumuz#5695: you made voice to voice?
nev#4905: yep
nev#4905: but it requires text lol
kurumuz#5695: 🤔
kurumuz#5695: how so
nev#4905: it can be paired with normal tacotron to make voice cloning tts
nev#4905: uhhh
implementation details
kurumuz#5695: so it's not directly voice -> voice
|
nev#4905: it can be replicated in like 2 hours if you know what you're doing
kurumuz#5695: well cant be a waifu yet ig
nev#4905: you can finetune a network for realtime
nev#4905: but that's waaay later on
nev#4905: it struggles with rickrolls https://cdn.discordapp.com/attachments/729741769738158194/867883995903557642/download_15.wav
nev#4905: better AI rickroll uploaded to gdrive
nev#4905: ugh, if only I had this one year ago
cfoster0#4356: I can't tell if he's bitterpilled or not https://twitter.com/ID_AA_Carmack/status/1418263964211982346?s=19
kindiana#1016: only a million people? obviously not
alstroemeria313#1694: https://twitter.com/Love2Code/status/1418268943215730692
alstroemeria313#1694: We... don't have that much compute
alstroemeria313#1694: I have tried CMA-ES with my CLIP methods and it just kind of can't explore well enough to produce good results quickly
alstroemeria313#1694: That was with the 512-dim StyleGAN Z
alstroemeria313#1694: Trying it on the 9216-dim StyleGAN W+ space just kind of fails.
EricHallahan#1051: StyleGAN exploration is nearly impossible lol
EricHallahan#1051: It is stupidly difficult.
EricHallahan#1051: I can throw in hundreds of thousands of vectors to look up from and it can still suck.
inox#5400: where we're going we don't need to optimize weights https://arxiv.org/abs/1906.04358
inox#5400: (it's sort of a genetic algorithm)
alstroemeria313#1694: Embrace the GPU, sample 512 latents and pick the best one, then use a few hundred gradient descent steps
|
EricHallahan#1051: I pick the top-*k* out of a million or so instead.
EricHallahan#1051: I've already embraced the GPU.
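That recipe (sample a batch of latents, keep the lowest-loss one, then refine it with gradient descent) looks roughly like the following, with a toy 1-D quadratic standing in for the CLIP objective and an analytic gradient instead of autograd:

```python
import random

# Toy stand-in for the CLIP loss over a latent z.
def loss(z):
    return (z - 3.0) ** 2

# Step 1: sample many latents and keep the best one.
def init_best(n_samples=512, rng=random.Random(0)):
    candidates = [rng.uniform(-10, 10) for _ in range(n_samples)]
    return min(candidates, key=loss)

# Step 2: a few hundred gradient descent steps from that starting point.
def refine(z, steps=200, lr=0.1):
    for _ in range(steps):
        grad = 2 * (z - 3.0)   # analytic gradient of the toy loss
        z -= lr * grad
    return z

z = refine(init_best())
print(round(z, 3))  # → 3.0
```

The batch sampling handles exploration (where CMA-ES was struggling) and the gradient steps handle exploitation, which is why it plays well on a GPU.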
𓅬 gabriel_syme 𓅬#3220: QD is better anyways, can't change my mind
𓅬 gabriel_syme 𓅬#3220: but yeah, I guess I could see open-endedness as the sort of dream-state of the Bitter Lesson
𓅬 gabriel_syme 𓅬#3220: truly OE process learning for a gazillion GPU hours or smth
sweg#8920: yo guys does anyone know if any projects here need pytorch/tf/something code converted to jax
sweg#8920: now that ive learned jax
sweg#8920: i want to contribute
sweg#8920: 🥺 👉 👈
xcodevn#9003: I wonder if anyone has also noticed that the VQ-VAE model from the paper "Neural Discrete Representation Learning" is not *really* a variational autoencoder. it is just an autoencoder. There is no KL loss in the loss function.
xcodevn#9003: While OpenAI's discrete VQVAE is a variational autoencoder. It has a KL loss.
guac#4716: https://cdn.discordapp.com/attachments/729741769738158194/868035907884843038/image0.jpg
guac#4716: Idk why it wouldn’t be a VAE lol
xcodevn#9003: In that sense, any deterministic AE is also a VAE.
guac#4716: What lol not sure what to make of that
xcodevn#9003: let me make it a bit clearer. In the original VQ-VAE, we assume all probability mass is at a single code
guac#4716: The prior isn’t even deterministic lol
xcodevn#9003: and there is no variational loss to make it close to the uniform distribution.
guac#4716: it's still a variational loss lol like the kl div of a uniform distribution is just a constant right? so there's no need to keep that term in the elbo for optimization purposes
guac#4716: hmm i guess i see your perspective tho
|
guac#4716: the framework is very well VAE so i wouldn't say it's not really a vae lol
xcodevn#9003: It is a VAE by following the paper logic, but it isn't *really* a VAE in the sense that there is no penalty when the posterior is far from the prior.
xcodevn#9003: and as I said, following the paper logic, any deterministic AE, aka a traditional AE, is also a VAE.
Daj#7482: Interesting offer, I'm not sure any project has considered that. Generally, any project willing to use JAX and TPUs can make use of the many TPUs we usually just have sitting around idly, so that can be a big win
Kia#2550: TPU'S laying around👀
Daj#7482: Yeah, we have more TPUs than we can generally put to use at a given time
Kia#2550: Ow wow, Probably the Dalle pytorch group can use it
Kia#2550: But The TPU'S EleutherAI Property:berk:
Daj#7482: Pytorch doesn't play nice with TPUs
Daj#7482: That's one of the big drawbacks of TPUs
Kia#2550: Ow yeah
guac#4716: the penalty is `log(num_latents)` though:berk:
mgostIH#0245: Unbased genetic algorithms
mgostIH#0245: If people really had that much compute use full bayesian inference 😤
xcodevn#9003: i know, but it is ... meaningless 😂
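That constant is easy to check numerically: for a categorical posterior q over K codes, KL(q || Uniform(K)) = log K − H(q), so a one-hot (deterministic) posterior pays exactly log(num_latents), which gradient descent can do nothing about. A tiny sketch:

```python
import math

# KL divergence of a categorical distribution q against the uniform
# distribution over len(q) outcomes: sum_i q_i * log(q_i * K).
def kl_to_uniform(q):
    k = len(q)
    return sum(p * math.log(p * k) for p in q if p > 0)

one_hot = [1.0, 0.0, 0.0, 0.0]
print(kl_to_uniform(one_hot), math.log(4))  # both ≈ 1.386
```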
Kia#2550: Forgot someone made this
https://github.com/tgisaturday/dalle-lightning
xcodevn#9003: seriously, I think the KL loss in DALL-E VAE has a negative impact on the image quality.
guac#4716: yeah i'm pretty sure that's a thing people notice in practice with *discrete* VAEs. See this from a local https://discord.com/channels/729741769192767510/730484623028519072/861326573557514260
xcodevn#9003: oh, thank you for that info.
|
𓅬 gabriel_syme 𓅬#3220: after my TRC runs out I might poke you for a few experiments with semantic generation 🙂
Daj#7482: Sure, we'll see what we can do 👍
SpaceX&OpenAI#9998: When will OpenAI release GPT-4?
Kia#2550: Did they ask 4 channels when will Gpt-4 be released:surprise:
Kia#2550: They didn't even care to ask in #off-topic :surprise:
kommy#6565: and it's a new account
kommy#6565: did they make this account for the sole purpose of asking when GPT-4 will be released?
Kia#2550: Probably
Kia#2550: Now they did ask...
voxs#0001: can i have the regular role pls
Daj#7482: The regular role is generally given out at sort of random when someone has contributed significantly to technical discussions, projects and server culture in a positive way over several months
voxs#0001: ah alright
voxs#0001: i see the majority of ppl here are roleless
Daj#7482: Yeah, don't take the roles too seriously
voxs#0001: k
voxs#0001: are there like special channels
Daj#7482: There is an administrative channel for level-5, there is no special regulars chat
Daj#7482: We don't want things to be exclusive
Kia#2550: Ow wow :o
Aran Komatsuzaki#5714: i hope one day i will make it to level-5
|
Daj#7482: I have good news for you :berk:
Kia#2550: Ow what's the new Art Mod role?
Kia#2550: The only way getting that by Spamming #art I guess:guilty:
Daj#7482: We haven't actually used that yet, we were considering handing it out to help moderate #the-faraday-cage-archive
Kia#2550: Owww,👀
Kia#2550: Sounds lovely to be honest
alstroemeria313#1694: oh?
alstroemeria313#1694: I want to train Gumbel VQGANs and they have a KL loss
alstroemeria313#1694: (But it's usually at a really low weight, so...)
xcodevn#9003: have you tried to replicate the OpenAI DALL-E KL loss with beta=6.6?
xcodevn#9003: it seems to me that with such a large penalty on the KL divergence, the model will have to predict a distribution which is close to uniform
xcodevn#9003: I personally think beta should be large at beginning of the training
xcodevn#9003: that will force the model to explore all the codes in the codebook
xcodevn#9003: however, at later steps, beta should decay to zero which allows the model to learn
xcodevn#9003: useful (deterministic) code
alstroemeria313#1694: i have not tried this with VQGAN training yet
alstroemeria313#1694: i did train a Pokemon sprite discrete VAE (much smaller, could do several training runs in a day) and i warmed the KL loss weight up over time
alstroemeria313#1694: the OpenAI value of 6.6 needs to be taken in context of the relative scaling of their other losses, i think
alstroemeria313#1694: i warmed mine up to 1e-2 or 2e-2 depending on the training run, which i think given our different normalizations for the other losses ended up being close to the OpenAI value
xcodevn#9003: as I understand, reconstruction loss is the sum over H W C, 256x256x3
|
𓅬 gabriel_syme 𓅬#3220: next time wish for a million dollars !
xcodevn#9003: while KL is sum over 32x32 latents
alstroemeria313#1694: yes, for mine they were both means
alstroemeria313#1694: reconstruction loss was mean over 56x56 pixels, KL loss was mean over 7x7 latents
alstroemeria313#1694: also my reconstruction loss was cross-entropy
alstroemeria313#1694: (since i was outputting 4 channels per pixel where each channel was the logit for that palette index)
xcodevn#9003: i see, openai actually compute mean reconstruction loss
xcodevn#9003: by dividing both losses by 256x256x32
alstroemeria313#1694: yes
alstroemeria313#1694: For VQGAN the reconstruction loss is LPIPS which is a mean
alstroemeria313#1694: And the KL loss is also computed as a mean
alstroemeria313#1694: So relative scaling should be more in the ballpark of mine
alstroemeria313#1694: But in practice VQGAN actually uses a KL weight of 1e-8.
alstroemeria313#1694: IDEK why, is it even going to do anything if it's so low?
alstroemeria313#1694: Also they don't warm up over time
alstroemeria313#1694: (nb, there are two VQGAN types, the original is vector quantization and has no KL loss, like VQVAE, the new one is Gumbel quantization, like OpenAI's dVAE)
alstroemeria313#1694: The original VQGAN type suffers badly from codebook collapse
flowpoint#7450: i am working on the pile for EAIrnie 3.0, so i would like to know in what way the pile version from the-eye is preshuffled.
is it only the document order? (basically the jsonl lines?)
StellaAthena#3530: @bmk
|
bmk#1476: yeah only document order
AI_WAIFU#2844: Ok I fiddled with the architecture a bit and I'm now getting things that are starting to look like MNIST digits.
AI_WAIFU#2844: It looks like if the rank of your MLP isn't large enough and you don't use enough brrr, you'll get some weird generations where the diffusion model throws out completely out-there samples, like individual pixels with magnitudes of 1e8 when the training data was normalized
Dashiell#8739: > I'm worried my supervisors are going to think it's science fiction and immediately stop listening
is this in a business / industry context?
Dashiell#8739: are your supervisors technically (w/r/t ML) literate?
Dashiell#8739: basically, why and in what sense do you need to justify what you want to do to your supervisors?
Dashiell#8739: does the model not perform according to their metrics?
Dashiell#8739: basically I think this is more of a business / bad manager problem than a technical one
AI_WAIFU#2844: I'm 3 sections into the neural operator paper and I still can't tell if they're taking the fourier transform in time or space
Dashiell#8739: if the model sucks and they don't care then there's nothing you can say
wabi-sabi#5811: There's a blog post for it that's much more helpful, I also had to read the FNO authors' paper on Graph Neural Networks to get the context necessary to understand it.
cfoster0#4356: Space
AI_WAIFU#2844: k that's what I thought
Dashiell#8739: are they ok with you spending _any_ time on improving the model?
Dashiell#8739: if they want you to improve the model, do something more straightforward quickly and show them it doesn't work and you need to try something else
Dashiell#8739: if they don't want you to improve the model then that's really their problem, and you as a cog in their machine are fucked vis a vis trying to actually do something interesting and useful
Dashiell#8739: speaking as a fellow cog
AI_WAIFU#2844: Can't you just throw more brrr at it?
flowpoint#7450: also, how should one go about splitting the documents into sequences.
|
do you delimit the sequences with `\n\n` , is this correct?
or what did you do for gpt-neo/ gpt-neox?
i read the gpt-neox preprocessing code but i couldn't find the `--split-sentences` argument from the readme implemented in `tools/preprocess_data.py`
cfoster0#4356: Actually it looks like they also have a version that does it in spacetime
wabi-sabi#5811: I'm terrible at paying attention to meetings, closing Discord now.
AI_WAIFU#2844: Yeah I had to get to section 5.3 before I figured out wtf they were doing
AI_WAIFU#2844: Ok after having looked at it I wouldn't start with FNOs. Make a roadmap of how you intend to tackle the problem. If I were you I would start with something simple and dumb like gluing Fourier features to an existing solution or making the network bigger and regularizing harder. If that works then you're done. If it doesn't then you can put forth the idea of using an FNO. Your justification is then "everything else didn't work, our data is cyclic, and the paper says they work better than everything else".
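The "glue Fourier features on" step is cheap to try: map each input coordinate t to sines and cosines at a few frequencies and feed those to the existing network instead of (or alongside) raw t. A sketch of one common variant, frequencies 2^k * pi:

```python
import math

# Fourier feature encoding of a scalar coordinate:
# [sin(2^0*pi*t), cos(2^0*pi*t), ..., sin(2^(n-1)*pi*t), cos(2^(n-1)*pi*t)]
def fourier_features(t, n_freqs=4):
    feats = []
    for k in range(n_freqs):
        w = (2 ** k) * math.pi
        feats.extend([math.sin(w * t), math.cos(w * t)])
    return feats

print(len(fourier_features(0.3)))  # → 8 features for 4 frequencies
```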
Teemochu#8740: regulars chat is called #memes (similarly, a coconut is a mammal)
wabi-sabi#5811: https://www.math3ma.com/blog/matrices-probability-graphs
EricHallahan#1051: Wait, Transformers are GNNs?
Always have been.
🌍 🧑🚀 🔫 🧑🚀
cfoster0#4356: Yes you can draw out the structure of your computation graph and make the parameters edge weights, but framing them that way isn't always a useful abstraction, especially when people have something else in mind by "neural network connections"
wabi-sabi#5811: I think it's useful to recognize that the edges live underneath the other concepts, because it invites us to think about whether there might be other foundations that could be used.
EricHallahan#1051: I almost never think of an NN as edges and nodes.
bmk#1476: I always think of the parameters as nodes
wabi-sabi#5811: I think about NNs as edges because I think of them as a composition of linear transformations and pointwise nonlinearities. The edges are the linear part.
wabi-sabi#5811: https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/
wabi-sabi#5811: When I think about the higher levels of abstraction, I'm thinking about the kinds of distortion we want to be doing to the previous layer's manifold.
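Spelled out, that "linear maps plus pointwise nonlinearities" view is just the following (hypothetical weights, plain Python so the edges stay explicit):

```python
# One "edge bundle": a linear transformation given by a weight matrix and bias.
def linear(w, b, x):
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

# The pointwise nonlinearity applied between the linear parts.
def relu(v):
    return [max(0.0, vi) for vi in v]

# A 2-layer net is literally a composition: linear -> relu -> linear.
def mlp(x):
    h = relu(linear([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0], x))  # layer-1 edges
    return linear([[1.0, 1.0]], [0.0], h)                       # layer-2 edges

print(mlp([2.0, 1.0]))  # → [2.5]
```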
kurumuz#5695: same here
|
wabi-sabi#5811: Like, technically what I think about is the method of adjoints, because I've been doing Neural ODE stuff, but that's just a continuous version of discrete edges.
wabi-sabi#5811: https://towardsdatascience.com/graph-neural-networks-as-neural-diffusion-pdes-8571b8c0c774
wabi-sabi#5811: So I think about what happens "in between" layers too, but still basically like the edges version.
CRG#8707: Re: The QKV matrices. You can think of the attention projections as connections building another set of connections, but it's a bit of a mess to draw. (Here drawn one token attending to itself :berk: ) https://cdn.discordapp.com/attachments/729741769738158194/868280782475628595/IMG_20210724_015441196.jpg
cfoster0#4356: The weight sharing between items makes it even messier to think of in this way
bmk#1476: just think of parameters as nodes in the computational graph
wabi-sabi#5811: This is what Schmidhuber's "I invented transformers in 1991" paper presents transformers as, if anyone wants elaboration.
wabi-sabi#5811: Why is that nicer than thinking about them as linear transformations?
bmk#1476: because that's how you implement it I guess
wabi-sabi#5811: That's fair. I spend a lot more time with the math than I do programming.
cfoster0#4356: If I'm explaining what parameters are to a newbie I wouldn't go the route of "edges in a graph". The easier model is probably more like "knobs you can tune" or something
cfoster0#4356: Like, I'd much rather have an analogy to the familiar, particularly one that highlights the affordances instead of the implementation
wabi-sabi#5811: I'd do knobs you can tune for linear regression, yeah.
uwu1#4864: i like the way the tf playground visualises it - both as a graph and also as a nonlinear warping of decision boundaries
xcodevn#9003: @alstroemeria313 my implementation of Discrete VQ-VAE has codebook collapse when the annealing temperature is close to 1/16. I use large KL weight at beginning and decay to zero when training. Do you have similar experience with codebook collapse?
alstroemeria313#1694: you mean Gumbel-Softmax temperature? decay it slower
xcodevn#9003: I use exponential decay from 1 -> 1/16 over 100k steps.
alstroemeria313#1694: I used exponential decay for mine
alstroemeria313#1694: Oh hm
alstroemeria313#1694: Yeah mine worked
|
alstroemeria313#1694: I decayed from 1 to 1/16 over 5000 epochs
alstroemeria313#1694: (There are 24 steps per epoch)
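The exponential decay described (temperature from 1 down to 1/16 over a fixed number of steps) can be written as a one-liner; this is a generic sketch, not code from either training run:

```python
# Exponential interpolation: temp(t) = start * (end/start) ** (t / n_steps),
# clamped at `end` once t reaches n_steps.
def temp_schedule(step, n_steps, start=1.0, end=1 / 16):
    return start * (end / start) ** (min(step, n_steps) / n_steps)

print(temp_schedule(0, 5000), temp_schedule(2500, 5000), temp_schedule(5000, 5000))
# → 1.0 0.25 0.0625
```

The same shape works for warming a KL weight up instead of down: swap `start` and `end`.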
xcodevn#9003: one thing I did differently is decaying KL weight (instead of increasing it) to zero in 50k steps. So, after 50k steps, KL divergence goes up to maximum value.
xcodevn#9003: interestingly, I just noticed that Open AI use cosine schedule https://cdn.discordapp.com/attachments/729741769738158194/868302314715377664/unknown.png
chirp#4545: https://twitter.com/JordanTeslaTech/status/1418413307862585344?s=20
𓅬 gabriel_syme 𓅬#3220: I saw this, kind of hilarious and so scary
𓅬 gabriel_syme 𓅬#3220: wonder how many adversarial attacks one can do to these cars right now
kurumuz#5695: feature engineering :berk:
kurumuz#5695: not much
kurumuz#5695: its one of the classic safety arguments again self driving cars
kurumuz#5695: very dumb though
kurumuz#5695: "well if we mark the stop sign with this image, then it doesn't see it!"
kurumuz#5695: lol, you can do that for humans
𓅬 gabriel_syme 𓅬#3220: I get what you're saying, although I don't think it's entirely correct
𓅬 gabriel_syme 𓅬#3220: you can paint any single sign in the streets I use every day and it wouldn't matter
𓅬 gabriel_syme 𓅬#3220: but yeah, I can imagine there are way bigger issues than this with self-driving cars. I certainly have different issues with them (and ind. mobility in general), just thought this was a funny example
kurumuz#5695: it's funny because it clearly shows how feature engineering doesn't work
kurumuz#5695: waymo is this but 100000% worse
kurumuz#5695: and people somehow think they're winning self driving cars
bmk#1476: https://xkcd.com/1958/
|
kurumuz#5695: they already lost but too scared to explain as they already burned through billions of capital
kurumuz#5695: their approach makes 0 sense
bmk#1476: :brr:
kindiana#1016: https://xkcd.com/1897/
kurumuz#5695: LOL
bmk#1476: I still think the xkcd I posted is more relevant to the conversation
kindiana#1016: I agree
kurumuz#5695: also you cant force me to sit in a vehicle with 30 people
kurumuz#5695: nope, not happening
kurumuz#5695: cars are based
bmk#1476: is this a pandemic thing
kurumuz#5695: nah, about this
bmk#1476: what's wrong with being in a vehicle with other people
kurumuz#5695: i dont like to be seen
kurumuz#5695: i like being alone
zphang#7252: a well-run subway is the most based of all
bmk#1476: don't worry nobody gives a fuck about anyone else on the bus
kurumuz#5695: ofc they don't, my brain is not capable of understanding that though
zphang#7252: where else would I be able to play my switch games
bmk#1476: also trains are based
|
kurumuz#5695: I agree with that
kurumuz#5695: well, trains with rooms
bmk#1476: the light rail is based too
kurumuz#5695: anyway other than social anxiety problems, it's not really comfortable.
bmk#1476: I even saw a goose poster in an LRT station once
kurumuz#5695: Cars are ideal transportation tbh
bmk#1476: disagree
kurumuz#5695: :shrug:
kurumuz#5695: trains cant go everywhere
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/868383127658188840/20210627_202311.jpg
bmk#1476: check out this cool fucking goose poster
kurumuz#5695: yo that is cool asf
kurumuz#5695: i want one
bmk#1476: or this really nice view from a platform https://cdn.discordapp.com/attachments/729741769738158194/868383438598717441/20210628_185832.jpg
bmk#1476: unfortunately no goose in this picture
𓅬 gabriel_syme 𓅬#3220: yeah I literally love a city with one. Athens completely transformed from a shithole to an amazing place with the metro
𓅬 gabriel_syme 𓅬#3220: I miss outside 😦 it really seems like a great place once you can go out
Buckion#6619: Apologies if this is the wrong channel. Does anyone know if there are any (proposed or demonstrated) advantages of BPE as opposed to raw utf8 bytes from an output quality perspective for generative text to text models? https://arxiv.org/abs/2105.13626 ByT5 has got me very excited for a token free future in theory, I imagine the performance story is very solvable
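For context on the tradeoff being asked about: byte-level models drop the learned vocabulary entirely (every document is just its UTF-8 bytes, vocab size 256), at the cost of much longer sequences than a subword scheme like BPE produces. A quick illustration:

```python
# Raw UTF-8 "tokenization": one token per byte, no vocabulary to learn.
text = "tokenization"
byte_ids = list(text.encode("utf-8"))
print(len(byte_ids), byte_ids[:4])  # → 12 [116, 111, 107, 101]
```

A BPE tokenizer would typically cover the same word in one or two subword tokens, which is most of why byte-level models need more compute per document.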
Daj#7482: Hey everyone, we're doing an AMA on reddit. This is your chance to ask us anything you want, from the biggest of :bigbrain: to the smallest of :smallbrain:
https://www.reddit.com/r/Futurology/comments/oqriew/we_are_eleutherai_a_decentralized_research/
|
Kia#2550: Ow god wish not a single soul asked about gooses there
Daj#7482: gj guys https://cdn.discordapp.com/attachments/729741769738158194/868509238505390110/Screenshot_2021-07-24_17-05-30.png
Kia#2550: Well
Kia#2550: Tell them :v
Louis#0144: I wonder if someone is gonna ask about KGs or CARP
Louis#0144: lmao
Steven4547466#1407: Someone forgot a `#` https://cdn.discordapp.com/attachments/729741769738158194/868537373259079770/t66dfe_95911.png
nanowell#3224: I think I made a github copilot
nanowell#3224: using gpt-j
nanowell#3224: so powerful
nanowell#3224: but I can't make vscode suggestions
nanowell#3224: only in debug output
nanowell#3224: but it's too fast
nanowell#3224: and accurate
StellaAthena#3530: Congrats
nanowell#3224: thank you
𓅬 gabriel_syme 𓅬#3220: cool, want to share some examples maybe? maybe in #the-faraday-cage-archive
Kia#2550: Congratulations🥳
alexyz#3459: i really want a channel that's for showcasing outputs seperate from #the-faraday-cage
alexyz#3459: because #the-faraday-cage is just filled with bots
|
EricHallahan#1051: That checks out based on the channel description.
> Dedicated botspam channel and playground. Strictly SFW only, always follow our #rules. Don't open the AI Box!
alexyz#3459: then I just want a channel for showcasing outputs lol
nanowell#3224: Yeah
nanowell#3224: Lets gooooo
nanowell#3224: I made auto generation
alexyz#3459: waw
nanowell#3224: It is a very simple solution
kurumuz#5695: :thonk:
nanowell#3224: I will upload it on github
EricHallahan#1051: :thonk:
kurumuz#5695: threads go brrrr https://cdn.discordapp.com/attachments/729741769738158194/868661822499221544/unknown.png
kurumuz#5695: (multithreaded single file tokenization)
kurumuz#5695: man tokenizers are slow
kurumuz#5695: gonna make them faster
alexyz#3459: "Activate Windows"
kurumuz#5695: :smugS:
EricHallahan#1051: 🪟🪟🪟🪟
kurumuz#5695: fine, i will do it
EricHallahan#1051: Who cares? It is just a watermark.
|
alexyz#3459: no but like why not use linux
EricHallahan#1051: I don't use Linux. I probably should, but I don't.
kurumuz#5695: ssd too smol, i wanna play vr games
alexyz#3459: ah
EricHallahan#1051: I digress though, too #off-topic for #general.
alexyz#3459: ye
kurumuz#5695: i have wsl anyway
kurumuz#5695: and I can get a supercomputer on cloud with linux whenever I want to :ultraberk:
bmk#1476: just stop playing games
kurumuz#5695: just stop doing linux
bmk#1476: games were not meant to be played
alexyz#3459: TPU gaming
kurumuz#5695: i like flying planes tho
bmk#1476: wanted to play games anyways for a laugh? we had a tool for that, it was called debugging
kurumuz#5695: its really fun
kurumuz#5695: in vr
EricHallahan#1051: Same. I used to do X-Plane but I have a potato of a laptop and uninstalled it. Now I use FlightGear, and still get abysmal performance. `:|`
EricHallahan#1051: Just doesn't have the rasterization performance I guess.
quinn#9100: Hi -- I'd like to create a new community around multi-multi delegation. Multi-multi delegation is simply problems arising in scenarios of multiple human stakeholders and multiple AIs. I want this community to be
1. not a generic alignment and ML community, but focused on multi-stakeholder and/or multiple AI scenarios
|
2. friendly to beginners in the multi-* space, but expecting prior exposure to the broader alignment, ML, and/or AI Governance communities.
3. a source of networking, collaboration, socials/VC, with weekly research updates. Eventually a paper reading club in the computational social choice space?
**Please DM me if you'd like to join.**
bmk#1476: how much commitment is expected?
bmk#1476: id like to lurk but i probably wont be able to dedicate too much time to it
quinn#9100: you're welcome to lurk.
AI_WAIFU#2844: that's some weak shit https://cdn.discordapp.com/attachments/729741769738158194/868701725983404062/unknown.png
guac#4716: `brr.py` lmao
kurumuz#5695: I can do that as well :berk:
kurumuz#5695: what are you running there
AI_WAIFU#2844: 🤐
EricHallahan#1051: `brr.py` obviously.
𓅬 gabriel_syme 𓅬#3220: diffusion scaling laws
Teemochu#8740: waifu.py
uwu1#4864: not on my level https://cdn.discordapp.com/attachments/729741769738158194/868736985022480384/unknown.png
kurumuz#5695: weak.
HAMZA#5616: Hello 👋
TruGerman#6672: So instead of comparing their junk, people have switched to comparing thread counts or whatever. Curious.
TruGerman#6672: :squint: alright, who ghost pinged me?
kurumuz#5695: @TruGerman
|
TruGerman#6672: :SquidWoke:
kurumuz#5695: :goose6:
TruGerman#6672: IT WAS YOU?! :catgun:
EricHallahan#1051: :paperhonk:
alexyz#3459: https://cdn.discordapp.com/attachments/729741769738158194/868872207139504198/image.png
alexyz#3459: @TruGerman
alexyz#3459: i assume it's the deleted comment :thonk:
TruGerman#6672: So it was Louis...gah, if I ever catch that guy he'll be sorry!
Louis#0144: Honk
kurumuz#5695: :goose6:
TruGerman#6672: :pepegun: Get back here! :xqcDitch:
fenton#9978: I'm giving a talk next week about transformers and their importance. I am looking for basic introductions that I can send to the audience as pre-reading.
Ideally this will include the best articles or short videos explaining what it is, strengths, limitations and potential applications to automation of misinformation.
Do you know of any great resources? 🙏
companioncube#0123: Is there a channel for The Pile?
StellaAthena#3530: Not anymore
companioncube#0123: Is the project considered complete?
nanowell#3224: Hello!
|
nanowell#3224: Anyone tried ERNIE 3.0?
Louis#0144: working on it
Louis#0144: There’s a separate discord
Louis#0144: They’re making the pile KG right now
Louis#0144: https://discord.gg/6E6EJV3z
nanowell#3224: I've made a baidu account to test it
nanowell#3224: good results so far
Louis#0144: Nice
norman#1944: has anyone looked into buying used mining GPUs from China? https://www.theblockcrypto.com/post/110638/chinese-crypto-miners-dump-gpu
DivisibleByZero#7650: > buyers have to pick them up at a power station along the Yarlung Tsangpo river.
Damn
Sid#2121: yep! We don't plan to update it, but we may make a V2 one day
EricHallahan#1051: It is marked as complete on the website by the way if you need even more evidence.
https://www.eleuther.ai/projects/pile
Untouch#9150: miner GPUs tend not to last very long
uwu1#4864: the random faults can be used as a source of entropy and regularisation
MasterScrat#6910: true story, i once saw my score improve in an RL challenge after one of my GPUs half-burned and started giving back partially corrupted images (on the MineRL env)
Louis#0144: LMAOO
AI_WAIFU#2844: Alright progress update, my diffusion model has gone from taking ~3 days to converge, to roughly 5 mins
alstroemeria313#1694: Oh?
|
AI_WAIFU#2844: Yeah, always norm your networks kids.
alstroemeria313#1694: Ohh
alstroemeria313#1694: MNIST diffusion?
AI_WAIFU#2844: Yep, although I haven't tapped into any serious compute yet
AI_WAIFU#2844: Still just debugging on a 1650
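To illustrate "always norm your networks" (the actual architecture isn't shown anywhere above; LayerNorm is just one choice of norm, used here purely as an example):

```python
import torch
import torch.nn as nn

# Toy sketch: the same MLP block with a normalization layer inserted
# before the nonlinearity, which typically keeps activations well-scaled
# and speeds up convergence.
def block(dim, normed=True):
    layers = [nn.Linear(dim, dim)]
    if normed:
        layers.append(nn.LayerNorm(dim))  # the "grease" on the connection
    layers.append(nn.GELU())
    return nn.Sequential(*layers)

net = nn.Sequential(block(64), block(64), nn.Linear(64, 1))
out = net(torch.randn(8, 64))
# out has shape (8, 1)
```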
chilli#5665: haha
chilli#5665: one of the blogs talking about alphafold
chilli#5665: described it as having "layernorm everywhere to grease the connections"
inox#5400: ran the flax pixelcnn++ example and it converges way faster than the README says but also it's at 10 times the loss
AI_WAIFU#2844: That is an amazing analogy, especially if you've worked with anything mechanical that required lube.
inox#5400: I love that the initial "reducing internal covariate drift" eventually got vindicated after years of sneer
Louis#0144: Did anyone ever explain what this means
AI_WAIFU#2844: Nobody denies that it works
chilli#5665: *internal covariate shift
chilli#5665: I don't think it really got vindicated lol
chilli#5665: partially because it never had a great definition
AI_WAIFU#2844: Did the BN paper ever get published?
chilli#5665: yeah?
inox#5400: there was a paper that gave "internal covariate drift" a reasonable interpretation and showed that it made sense
chilli#5665: I think it was accepted the first time
|
chilli#5665: which one
inox#5400: fuck there's no way I'm gonna find it now
chilli#5665: lol
AI_WAIFU#2844: I could have sworn it didn't
chilli#5665: from what I remember the batch norm theory papers I saw demonstrated that it didn't reduce "internal covariate shift" for a reasonable definition of that
inox#5400: ok I think I got this take from this blog post https://myrtle.ai/learn/how-to-train-your-resnet-7-batch-norm/
inox#5400: so yes I get my opinions on batch norm from random blog posts instead of the actual theory papers
chirp#4545: https://twitter.com/tszzl/status/1419461306747285507
cfoster0#4356: Every additional term in the loss function doubles the odds I throw aside the paper
𓅬 gabriel_syme 𓅬#3220: cool stuff lie somewhere in the middle
zphang#7252: b-but inductive biases!
𓅬 gabriel_syme 𓅬#3220: tbh, I love the AF2 paper and the tons of details and domain specific knowledge
𓅬 gabriel_syme 𓅬#3220: it makes people like me optimistic we can offer help to all this
𓅬 gabriel_syme 𓅬#3220: if it was just matmuls and scale, all you'd need is developers making scalable code and hardware people building stuff in between. That's cool if it happens, I'd rather there's more 🙂
AI_WAIFU#2844: bigger model -> better inductive biases
AI_WAIFU#2844: Solmonoff induction go brr
Louis#0144: is this true in general
bmk#1476: well, it works for gpt
mingerz#7355: hey guys. any thoughts on how we can add continual learning to gpt-j. To add our own learning data on top of what was already trained
mingerz#7355: saw an example here from a company providing a SaaS product of it
https://continual.ai/post/introducing-continual
Louis#0144: Unimpressed
Louis#0144: Continual learning is massively unsolved
Louis#0144: I’m sure most continual learning methods that work with any autoregressive model would work with GPT J
Louis#0144: But I don’t know anyone who has looked into it
Rainmaker#5609: Quick question, any license for the pile dataset?? I don't see any and since data is from open source,
Rainmaker#5609: One more thing, What was the latest paper data for the arxiv dataset? Like 2020. Dec?
mingerz#7355: ahh i see. but any reason why it remains unsolved? what are the key challenges?
Louis#0144: I can’t speak to that
Louis#0144: It’s not my domain
mingerz#7355: alright. got it. thanks for the input
cfoster0#4356: ~~just train on more data~~
Adnan Fakhar#4238: I'm (Adnan Fakhar) a senior ML engineer at Inabia AI. We are headquartered in Redmond, WA and have offices in Karachi and Islamabad, Pakistan and Jaipur, India.
We are working with Fortune 500 companies in the Seattle area, developing NLP models and providing data annotation services.
I'd love to contribute to building GPT-Neo. Looking forward to learning from you all, contributing to this great project, and making my company Inabia and my country Pakistan proud.
Kia#2550: New people 👋
TruGerman#6672: More 5heads :pogu:
Kia#2550: Hm:hyperthonk:
|
Kia#2550: More 5heads I guess
TruGerman#6672: Guess I'll invite some 3head friends to balance the scales
sweg#8920: hello fellow pakistani!
Adnan Fakhar#4238: hello
alstroemeria313#1694: CLIP's loss function was v well-chosen. But there was just the one term in the loss.
alstroemeria313#1694: Like compare CLIP to ALBEF or smth
alstroemeria313#1694: ALBEF has a considerably more complicated loss function and training scheme
alstroemeria313#1694: That I think is mostly to try to make it work better w/ less data and compute for training.
ethan caballero#6044: :morelayers: :brr: :
https://discord.com/channels/729741769192767510/785968841301426216/855739185428168714
Chr0my#0173: Hi, (not sure if this belongs here or #gpt-neox-devs) just to check (I can't remember), are these the recommended settings for tuning 125M Neo? For runtime the options are: gptneo, python3. For hardware accelerator the options are: TPU, GPU. Thanks in advance! https://cdn.discordapp.com/attachments/729741769738158194/869215457599717386/unknown.png
StellaAthena#3530: Depending on what code you’re using you might get better results from TPUs, but this is good
cfoster0#4356: @CarsonPoole asked
>>> how would this be translated to torch:
```
sin, cos = map(lambda t: repeat(t[offset:x.shape[1]+offset,:], "n d -> () n () (d j)", j=2), sincos)
```
cfoster0#4356: let's ignore the outer `map` because that's just calling the same inner thing on the sine part and the cosine part
cfoster0#4356: Whenever you see a () axis added on the right hand side, that's equivalent to unsqueezing that axis
CarsonPoole#0640: and then the `(d j)` part is merging those axes?
|
cfoster0#4356: And then since you're calling `repeat`, the (d j) with j=2 means you're repeating every element in the last dimension twice. So if the last dimension was 300, it'll end up being 600
CarsonPoole#0640: ~~would~~ could it be written as `(d 2)` instead of passing that as a constant argument?
CarsonPoole#0640: or is it necessary to have it as it is
cfoster0#4356: Yes
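Putting the pieces together, the einops call can be mirrored with plain tensor ops — a sketch, assuming each `sincos` entry is a 2-D `(seq, dim)` tensor (the function name `rotary_repeat` is made up for illustration):

```python
import torch

def rotary_repeat(t, offset, seq_len):
    """Torch equivalent of
    repeat(t[offset:seq_len+offset, :], "n d -> () n () (d j)", j=2).
    Each element of the last axis is duplicated in place (interleaved)."""
    t = t[offset:offset + seq_len]         # (n, d)
    t = t.unsqueeze(0).unsqueeze(2)        # (1, n, 1, d)  <- the two () axes
    return t.repeat_interleave(2, dim=-1)  # (1, n, 1, 2d) <- the (d j) merge

# tiny check on a 3x2 tensor
t = torch.arange(6.).reshape(3, 2)
out = rotary_repeat(t, 0, 3)
# out.shape is (1, 3, 1, 4); the first row along the last axis is [0., 0., 1., 1.]
```

`repeat_interleave(2, dim=-1)` matches `(d j)` with `j=2` because that pattern is d-major, j-minor: each original element is repeated in consecutive positions.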
chilli#5665: @CarsonPoole I'd recommend reading an einsum tutorial to start
leom#9779: Could anyone link me to some journals, publications, magazine articles, etc. related to the use of ML in solving mathematical problems, like nonlinear/linear systems of DEs/PDEs?
leom#9779: Unsure of where I could post this really. Sorry if this is the wrong place 😅🙏
genetyx8#7543: arxiv might be a good first start. maybe have a look at https://arxiv.org/abs/2010.08895v1
genetyx8#7543: if you've ever heard of Koopman analysis, there's also a project called Deep Koopman
Daj#7482: You might also be interested in this if you're looking for formal mathematics too https://arxiv.org/abs/2009.03393
Louis#0144: That’s a rly good paper
Louis#0144: Would recommend
genetyx8#7543: imagine using that with gh copilot/<similar model> to generate provably correct code, or to automatically prove program correctness
Louis#0144: How
Louis#0144: lol
Louis#0144: I don’t think that is trivial at all
kurumuz#5695: automatically prove program correctness?
kurumuz#5695: we have compilers and unit tests for that
kurumuz#5695: :berk:
genetyx8#7543: languages like Haskell or Coq (taken as a programming language) use strong type systems so that it's almost always the case that "if it compiles, it's correct", and I vaguely remember seeing something about a team in facebook working on automatic correctness proving, so it might not be too far fetched.
|
genetyx8#7543: of course that model would have to be called Dijkstra. Just imagine his angry ghost yelling at you to prove correctness :berk:
leom#9779: @ genetyx8 Thanks so much, these are both really interesting!!
aze#1010: auto unit test writing would be cool
genetyx8#7543: that one might be more like a variant of copilot where you generate tests from the docstring
sea_snell#0243: Use copilot to generate tests to check its own implementation and score a tree search or something
sea_snell#0243: That would be wild if it worked
genetyx8#7543: wdym by score a tree search?
sea_snell#0243: ig this isn't a full blown tree search with scoring at intermediate nodes. But I meant, just sample a function, run it on the generated unit tests and rank samples by unit test performance
genetyx8#7543: ah ok
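The sample-and-rank idea can be illustrated in a few lines — a toy sketch, not how Copilot actually works; the candidate sources, the name `f`, and the tests are all invented for the example:

```python
def _safe_call(f, x):
    try:
        return f(x)
    except Exception:
        return None  # a crashing candidate just fails that test

def rank_candidates(candidates, tests):
    """Score each candidate function source by how many unit tests it
    passes, then return (score, source) pairs sorted best-first.
    `candidates` are source strings defining `f`; `tests` are
    (input, expected) pairs, standing in for model-generated tests."""
    scored = []
    for src in candidates:
        namespace = {}
        try:
            exec(src, namespace)  # compile the sampled function
            f = namespace["f"]
            passed = sum(1 for x, want in tests if _safe_call(f, x) == want)
        except Exception:
            passed = 0  # samples that don't even define f score zero
        scored.append((passed, src))
    scored.sort(key=lambda pair: -pair[0])
    return scored

# toy example: two sampled "square" implementations, one buggy
samples = ["def f(x):\n    return x * x", "def f(x):\n    return x + x"]
tests = [(2, 4), (3, 9), (0, 0)]
ranking = rank_candidates(samples, tests)
# the correct implementation passes all 3 tests and ranks first;
# the buggy one still passes (2, 4) and (0, 0), so it scores 2
```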
CarsonPoole#0640: I forgot to thank you. I really appreciate the help. Definitely helped me get on the right track
alstroemeria313#1694: datacrunch.io has 80GB A100 instances now, btw
alstroemeria313#1694: 1x and 4x (no 8x)
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/869332996250021948/Screen_Shot_2021-07-26_at_2.png
Louis#0144: Damn
Louis#0144: That’s cheap too
alstroemeria313#1694: They have 8x A6000 for $8.80/h too
𓅬 gabriel_syme 𓅬#3220: damn
𓅬 gabriel_syme 𓅬#3220: is that 320gb VRAM
𓅬 gabriel_syme 𓅬#3220: :chonk: models coming up
kurumuz#5695: still cant compare with vast ai i guess
|
alstroemeria313#1694: they have persistent storage though
kurumuz#5695: @alstroemeria313 doesnt work with api and cant plug to more than one pod
kurumuz#5695: kinda useless for us
𓅬 gabriel_syme 𓅬#3220: this is pretty cool yeah, are they easy to use?
kurumuz#5695: theyre a pain in the ass
kurumuz#5695: theyre not shared FA
kurumuz#5695: fs*
alstroemeria313#1694: they are p easy i think but if you run out of money they delete your storage
𓅬 gabriel_syme 𓅬#3220: oh no
alstroemeria313#1694: i have mostly switched to them from vast so i don't have to keep setting up the container and downloading stuff/pushing code to them
𓅬 gabriel_syme 𓅬#3220: can I not put smth to stop the machine before that
𓅬 gabriel_syme 𓅬#3220: yeahj that was annoying
alstroemeria313#1694: the thing is, they charge for storage even when the machine is off
alstroemeria313#1694: you can autopay if you want, i don't
Zac-HD#7996: https://hypothesis.readthedocs.io/en/latest/ghostwriter.html
is pretty good at writing test headers that could be completed by a generative model 😉
Deleted User#0000: weirdest thing happened, got dmed by a guy claming he's made a semi prime factoring algorithm and solving 1024 bit rsa keys
Deleted User#0000: no clue if i should even believe this guy
bmk#1476: would you believe me if i told you that just yesterday i found a suitcase of unmarked bills totalling $1 million lying right on the sidewalk of main street
alexyz#3459: yes
|
Deleted User#0000: yes
Deleted User#0000: yes i would
Louis#0144: Yeah totally
alexyz#3459: the geese brought them
Deleted User#0000: beautiful
bmk#1476: would you believe me if i said that gullible was written right on the inside of the suitcase
Louis#0144: Ye
alexyz#3459: sure
Deleted User#0000: i was in that suitcase
alexyz#3459: along with the 1 million dollars?
bmk#1476: nick_wild is a goose confirmed
bmk#1476: thats the only way he could have fit in the suitcase
Deleted User#0000: man sent me this https://cdn.discordapp.com/attachments/729741769738158194/869375013336281088/unknown.png
Deleted User#0000: just said yes
Deleted User#0000: and yes i am a goose
bmk#1476: tell him to factor 21
Deleted User#0000: ill try 4 first
bmk#1476: way too easy, all the cryptographers i know advise at least 2 digit long keys
bmk#1476: i dont know any cryptographers but
bmk#1476: of the ones i do know, they all say that
|
Deleted User#0000: yes ofc
Deleted User#0000: 2 digit long keys
Deleted User#0000: there any cryptographers on this server anyways?
alexyz#3459: probably
bmk#1476: for the purposes of this joke, no
bmk#1476: not a single one
Deleted User#0000: imma just flow with that
Teemochu#8740: give him your number
Deleted User#0000: what
Teemochu#8740: and by that I mean a large semiprime
Deleted User#0000: no
Teemochu#8740: tell him to factor it
Deleted User#0000: so bascially
Deleted User#0000: https://en.wikipedia.org/wiki/RSA_numbers
Deleted User#0000: this website
Deleted User#0000: go down to the biggest number and ask him?
Teemochu#8740: rsa-309 should be enough
Deleted User#0000: alright the man is playing team fortress 2, i think he's a joke now
bmk#1476: 21
chilli#5665: obviously he's a joke lol
|
bmk#1476: factor *that*
chilli#5665: I can also factor arbitrarily large numbers
chilli#5665: that I generated myself
bmk#1476: exactly, and since he can't factor my large semiprime (21) he's lying
Deleted User#0000: oh your 21
Deleted User#0000: your own personal number
EricHallahan#1051: So would you say you are factoring that fact into the situation?
Deleted User#0000: my man, how much you pay for the number 21 to be your own?
Deleted User#0000: great joke
bmk#1476: schmidhuber already bought it back in 1991 and wont sell it
Deleted User#0000: better go for a new number then
Deleted User#0000: ive taken 69 btw
alexyz#3459: how bout 22
Deleted User#0000: brilliant
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/869379228691468368/2018.png
Deleted User#0000: alright bois, we got till february
bmk#1476: i bet 2 imaginary internet points that nobody will factor 21 by february
Teemochu#8740: 621
Teemochu#8740: ...of which 69 is a factor
Teemochu#8740: huh til
|
Deleted User#0000: did you seriouly just google that
bmk#1476: my favorite food additive
Teemochu#8740: well kinda
Teemochu#8740: I did 621/3/3
bmk#1476: teemo googles it all the time
Teemochu#8740: since I knew it was divisible by 9
Teemochu#8740: due to sum of digits
Deleted User#0000: https://tenor.com/view/calculation-math-hangover-allen-zach-galifianakis-gif-6219070
alexyz#3459: https://cdn.discordapp.com/attachments/729741769738158194/869381010679296060/2018.png
Teemochu#8740: *flavor enhancer*
EricHallahan#1051: This is galaxy brain territory right here.
Teemochu#8740: 10x ≡ x (mod 9)
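The divisibility trick in play: since multiplying by 10 doesn't change a number's residue mod 9, every number is congruent to its digit sum mod 9 — e.g.:

```python
def digit_sum(n):
    """Sum of decimal digits of a non-negative integer."""
    return sum(int(d) for d in str(n))

# 621 -> 6 + 2 + 1 = 9, so 9 divides 621
assert 621 % 9 == digit_sum(621) % 9 == 0
assert 621 // 9 == 69  # hence 69 is a factor: 621 = 9 * 69

# the congruence n ≡ digit_sum(n) (mod 9) holds in general
assert all(n % 9 == digit_sum(n) % 9 for n in range(1, 1000))
```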
Louis#0144: yeah lol thats not very galaxy brain tbh
Louis#0144: most first year math students learn that
Louis#0144: in like week 1
bmk#1476: i never knew that digits could be added
bmk#1476: what next, are we going to start adding *entire numbers*?
bmk#1476: oh the horror
Teemochu#8740: we will start to multiply matrices with tens of millions of elements like they are made of candy
bmk#1476: i need to read aluffi at some point to learn addition
|
xcodevn#9003: i have an *interesting* question: how can we do `random_crop` in jax with `image`, `size` and `rng_key`?
chilli#5665: lol
EricHallahan#1051: I don't see any issue with only having those?
chilli#5665: do you need to jit it?
chilli#5665: and do you need a crop of different sizes?
xcodevn#9003: yes, inside a jitted `update_fn`
chilli#5665: hmm
chilli#5665: that seems much harder
EricHallahan#1051: yeah, that changes a lot
kindiana#1016: https://github.com/google/jax/blob/97a5719fcb40af7231b5f803f965063538282f8e/jax/_src/image/scale.py#L136
chilli#5665: wait, is it a yes to this too?
xcodevn#9003: i only need a fixed shape output
chilli#5665: oh, that's easier then
EricHallahan#1051: Wait, size controls the input window size or the output?
xcodevn#9003: `size` is the output shape.
xcodevn#9003: this differs from random crop, i think.
kindiana#1016: what exactly do you want? you have a variable sized image which you want to take a random, fixed size crop from?
xcodevn#9003: i have a fixed sized input image, and want a fixed sized randomly cropped output image.
kindiana#1016: scale and translate should do the trick
kindiana#1016: just give it a random translation
|
guac#4716: https://github.com/deepmind/dm_pix/blob/458e86f28df3f72017dc00b5449bc9ede3e0f566/dm_pix/_src/augment.py#L401
guac#4716: if you want dm_pix has random crop
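For the fixed-output-size case, the same thing can also be expressed directly with `jax.lax.dynamic_slice`, since only the start offsets need to be traced while the slice sizes stay static — a sketch (the helper name and shapes are illustrative; dm_pix does essentially this internally):

```python
import jax
import jax.numpy as jnp

def random_crop(rng_key, image, size):
    """Jit-compatible random crop: fixed input shape in, fixed `size` out."""
    h, w, c = image.shape
    ch, cw = size
    key_y, key_x = jax.random.split(rng_key)
    # offsets are traced scalars, but the slice *sizes* are static -> jittable
    y0 = jax.random.randint(key_y, (), 0, h - ch + 1)
    x0 = jax.random.randint(key_x, (), 0, w - cw + 1)
    return jax.lax.dynamic_slice(image, (y0, x0, 0), (ch, cw, c))

# `size` must be static for jit, so mark that argument
crop = jax.jit(random_crop, static_argnums=2)

img = jnp.arange(8 * 8 * 3).reshape(8, 8, 3)
out = crop(jax.random.PRNGKey(0), img, (4, 4))
# out has shape (4, 4, 3) regardless of the sampled offsets
```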
Chr0my#0173: Hey! Is there a way to save models with happy gen on google colab? i just have no idea how.
nev#4905: is torch.nn's builtin transformer a meme
cfoster0#4356: Yup
nev#4905: what's the fastest way to get roformer in vanilla pytorch
Ravna#1831: it's a meme in the original sense so it gets to spread and mutate like genes? awesome!
nev#4905: the word meme itself is a meme in the original meaning
nev#4905: nvm I'll just add x-transformers
nev#4905: training the world's first dancing transformer
inox#5400: like transformer that produces dances? https://metagen.ai/transflower https://google.github.io/aichoreographer/
nev#4905: I'll pretend they don't exist.
𓅬 gabriel_syme 𓅬#3220: back-propagation through the network
https://twitter.com/i/status/1417491099137028117
Kia#2550: This is really cool wow
nev#4905: I'm trying an autoregressive architecture with binning for motion generation to see if it will work
nev#4905: need it for a project
𓅬 gabriel_syme 𓅬#3220: maybe you could share thoughts with guillefix, I know he's done quite a bit on that
nev#4905: hm?
nev#4905: training has started
|
nev#4905: the loss is dropping pretty quickly
nev#4905: it might be overfitting
nev#4905: loss looks good for now https://cdn.discordapp.com/attachments/729741769738158194/869572426625843280/EzgAAAABJRU5ErkJggg.png
nev#4905: https://cdn.discordapp.com/attachments/729741769738158194/869574920466726933/GA3fGecPYAs73vzPr5RKlGR3ZPHv0XEYkR0dblIiIiZVBBFxGJESroIiIxQgVdRCRGqKCLiMQIFXQRkRihgi4iEiPH0X1oW1vuFl.png
nev#4905: I just realised I don't have a validation dataset
nev#4905: 0.02 loss
it's definitely overfitting
nev#4905: let's hope for grokking
nev#4905: https://cdn.discordapp.com/attachments/729741769738158194/869584042381705276/unknown.png
CRG#8707: https://deepmind.com/blog/article/generally-capable-agents-emerge-from-open-ended-play
CRG#8707: > Analysing the agent’s internal representations, we can say that by taking this approach to reinforcement learning in a vast task space, our agents are aware of the basics of their bodies and the passage of time and that they understand the high-level structure of the games they encounter. Perhaps even more interestingly, they clearly recognise the reward states of their environment. This generality and diversity of behaviour in new tasks hints toward the potential to fine-tune these agents on downstream tasks. For instance, we show in the technical paper that with just 30 minutes of focused training on a newly presented complex task, the agents can quickly adapt, whereas agents trained with RL from scratch cannot learn these tasks at all.
> By developing an environment like XLand and new training algorithms that support the open-ended creation of complexity, we’ve seen clear signs of zero-shot generalisation from RL agents. Whilst these agents are starting to be generally capable within this task space, we look forward to continuing our research and development to further improve their performance and create ever more adaptive agents
CRG#8707: https://youtu.be/lTmL7jwFfdw
Daj#7482: https://pbs.twimg.com/media/E5EYLdOWYAAHk7j.jpg:large
Daj#7482: :worried_partying:
AI_WAIFU#2844: You know, I actually think DM is much closer to AGI than we give them credit for
alexyz#3459: can we add this emoji
Daj#7482: If someone makes it transparent and the right size yes
alexyz#3459: doin' that rn
|
alexyz#3459: https://cdn.discordapp.com/attachments/729741769738158194/869593152665829466/E5EYLdOWYAAHk7j.png
alexyz#3459: @Daj
Daj#7482: :worried_partying:
Daj#7482: @alexyz
Daj#7482: Thanks
alexyz#3459: np
Ravna#1831: Connor you do realize you are using perfect euclidian shapes as superstimuli for the facial recognition part of the brain right? It's almost like you are doing the a-word yourself.
Ravna#1831: :berk:
Daj#7482: I don't coom to emojis
Daj#7482: Stop projecting
Ravna#1831: No I'm not on the gwern and ai_waifu faction
Ravna#1831: I'm on the anti-hyperbole-of-either-side maybe
Daj#7482: There is only with us or against us :tribalism2:
Daj#7482: Clearly I unironically care about this completely arbitrary aesthetic preference
Daj#7482: on a deeply moral level
Daj#7482: lol
Ravna#1831: Could someone explain how this new work of deepmind is special? It's neither a breakthrough of methodology nor an impressive show-off of PR-worthy results.
Louis#0144: this is my face when I publish a paper proving my own hypothesis wrong
Louis#0144: (its happened twice :berk: )
AI_WAIFU#2844: I don't think it's special by itself. It's more just all their work put together that's kinda :firealarm:
|
AI_WAIFU#2844: Now why they don't just make a minecraft mod instead of all this rigamarole is beyond me.
AI_WAIFU#2844: Actually it's spectacular that there isn't a good minecraft mod that lets you plug in an agent.
AI_WAIFU#2844: MineRL doesn't count because it's a cut down singleplayer non-RT version.
Daj#7482: I shudder at imagining running hundreds of instances of minecraft in parallel
AI_WAIFU#2844: Start a bunch of TPUs, start a bunch of minecraft instances, done.
Daj#7482: Seems extremely inefficient is what I mean
AI_WAIFU#2844: Those CPUs were gonna sit idle anyway
AI_WAIFU#2844: But yeah, I want to see a minecraft client I can plug a python agent into.
mgostIH#0245: Minecraft could be rewritten to remove a lot of mechanics that aren't really a thing for RL
AI_WAIFU#2844: I disagree
Daj#7482: Who needs vtubers, I want AGI to generate minecraft lets plays :bigbrain:
Louis#0144: someone explained to me the otherday unironically minecraft on a TPU would be really useful
Louis#0144: :berk:
mgostIH#0245: regarding efficiency? Just to name one chunk loading could be made smaller in caves or whatnot
alexyz#3459: That exists actually
alexyz#3459: Minecraft Pi Edition allows python scripting
AI_WAIFU#2844: Link?
alexyz#3459: it's a very... limited version of Minecraft but it's Minecraft
Ravna#1831: Just make your NNs bigger and bigger so that the TPU count can never catch up with your slow CPU simulators
mgostIH#0245: It also depends on what you want from a minecraft RL agent
|
Daj#7482: Just port minecraft to TPUs
mgostIH#0245: Mine around and find diamonds or even discover trading and use that to its advantage?
Daj#7482: Differentiable minecraft :bigbrain:
AI_WAIFU#2844: How limited are we talking about?
alexyz#3459: like really limited, imagine something like Minecraft Classic
mgostIH#0245: Even the world should be finite
flowpoint#7450: not long before minecraft simulated in gamegan is faster than the java version
AI_WAIFU#2844: that's kinda pointless then
AI_WAIFU#2844: You need the mechanics and the physics
alexyz#3459: it has the "physics"
mgostIH#0245: Well, it's finite in the game too, but not something the average player can get to
alexyz#3459: like sand'll fall and that stuff
alexyz#3459: and you can do the mining and all that
alexyz#3459: but there's no survival
alexyz#3459: it's more just creative stuff
alexyz#3459: so it is kinda pointless, yea
mgostIH#0245: Also what about offline RL for something like minecraft, just get the gameplay of thousand of users in servers and whatnot
mgostIH#0245: I wonder how good offline RL will become in the future 🤔
AI_WAIFU#2844: Sure but that's not really interesting
mgostIH#0245: But it might make online learning more efficient too
|
mgostIH#0245: I wonder if offline learning is enough to get a sort of differentiable model of the game for example
mgostIH#0245: Or what about giving the network the entire machine state too, so it doesn't just look at the game but at the entire memory and code running
kurumuz#5695: should learn vr games with 6dof controllers
kurumuz#5695: now that would be fun
mgostIH#0245: Might make offline learning much more informative on the core mechanics and being able to find exploits very efficiently
AI_WAIFU#2844: All of this makes it too easy
AI_WAIFU#2844: Realtime, same interface as a human.
mgostIH#0245: That way you could give it an actual "go and explore" objective
mgostIH#0245: You specify that the more code paths it sees in the executable, the more it has discovered of the game behaviour
mgostIH#0245: Would be an interesting kind of reward 👀
AI_WAIFU#2844: Yeah but look, you're already hacking the reward
kurumuz#5695: it should be rewarded on hacking it
mgostIH#0245: no it's kind of removing it altogether, usual game rewards are "finish this game" aka "reach this specific point"
mgostIH#0245: But we usually play games for fun too and some games may have a much harder to define goal
AI_WAIFU#2844: Sure, but the hard part is making up your own rewards. If you use the game state directly, that's cheating. IRL you don't get access to the game state, you gotta figure that shit out from observations alone.
AI_WAIFU#2844: The reward should be internal to the agent, not a function of the environment
tgrady#9501: At some point you need to at least have some sort of singular differentiable objective function that defines a meta-goal though, right? Even if not to optimize it directly, at least to attempt to minimize it via some meta-process that generates reward functions acting on data and patterns which have been grouped through some sort of unsupervised model (probably attention based). I assume in our heads that we evolved things like curiosity as meta learning functions that describe to our brain how to take the absolutely huge amounts of information being received every hundred milliseconds or so and translate it reward functions. The search space here is daunting though.
Dexxus - President of Books#8184: So I'm thinking of setting up a set of two pairs of adversarial model NNs for a bimodal conversion and generative program to specifically handle trading cards, one being a language processing model for the text, the other being an image processing model for the card art, wherein it will attempt to either generate appropriate an text/image from scratch, or generate a text/image file corresponding to an image/text file it is given. Is there perhaps a better, more efficient form of NN architecture for this task that I am unaware of?
nev#4905: malmo
nev#4905: /minerl
|
nev#4905: that's almost exactly minerl
nev#4905: but this will actually be illegal
AI_WAIFU#2844: .
nev#4905: :thonk:
nev#4905: yeah I'm surprised no one made a python api mod
AI_WAIFU#2844: right?
natedog#8669: @bmk @AI_WAIFU here is the link to our discord community that I was telling you about where we discuss all things code AI related. Anyone interested is welcome to geek out 🤓 https://discord.gg/BYcBnaTKRg. It is also where we are working on getting the super ridiculously big dataset to train on
nev#4905: also deepmind misspelled "competetive" https://cdn.discordapp.com/attachments/729741769738158194/869632542435835934/unknown.png
nev#4905: also the timing is very :thonk:
nev#4905: deepmind might be making a copycat of openai's thing for hype
nev#4905: the few-shot learning is a valuable addition
Drexler#4006: Dell is cancelling Alienware gaming PC shipments to several US states - https://www.pcgamer.com/dell-is-cancelling-alienware-gaming-pc-shipments-to-several-us-states/
TruGerman#6672: Of course, everyone knows that gaming computers are the real energy guzzlers...
nshepperd#2316: the real energy guzzler was the artificially low energy prices inside us all along
gdawg16#0493: https://tenor.com/view/cat-reads-reading-cat-reading-cat-cats-gif-17859557
Zac-HD#7996: This is just fuzzing! It doesn't really work to solve games if you only use execution (coverage + state) feedback, but adding a custom metric for eg "rightward progress at each altitude" is sufficient to solve Super Mario - https://github.com/RUB-SysSec/ijon
Zac-HD#7996: For efficiency it's even better if you can checkpoint and restore, saving the time it takes to replay up to an unexplored state.
dr_moonface#4048: how should we feel about openai gym being maintained again by someone totally not associated with openai https://github.com/openai/gym/issues/2259
I'm super happy it's happening but kind of depressed it's not something they're putting work into
dr_moonface#4048: And I guess unrelated q but what frameworks are people using for their gyms these days? I'm working on one rn and I'm trying to make it gym compatible but with its future up for grabs that feels a little silly
|
ethan caballero#6044: OpenAI be like:
https://cdn.discordapp.com/attachments/747850033994662000/859138650046595082/RL.png
𓅬 gabriel_syme 𓅬#3220: big win for Ken Stanley I guess. But yeah, I've long believed this is the real way towards AGI. Not RL specifically, but the open-ended approach in general.
𓅬 gabriel_syme 𓅬#3220: there's a minecraft competition that I thought offered an environment for that, I'll look for it when I'm back on my PC
AI_WAIFU#2844: .
uwu1#4864: > For efficiency it's even better if you can checkpoint and restore, saving the time it takes to replay up to an unexplored state.
@Zac-HD
Is there a generic fast way to do this? E.g. CRIU exists but it's more for moving around VMs than lightly forking a state
𓅬 gabriel_syme 𓅬#3220: iiuc, I think go-explore did something similar and they described a bit of the approach
𓅬 gabriel_syme 𓅬#3220: but it really depends on the environment I guess, whether its deterministic or not?
Zac-HD#7996: People tend to build custom hypervisors, e.g. https://github.com/gamozolabs/orange_slice or https://github.com/gamozolabs/applepie
mgostIH#0245: Ye but I mean RL for fuzzing in that sense, although now I wonder how good classical fuzzers are at it
xcodevn#9003: i have this very simple idea and i want to hear your comments. Like VQGAN/DALL-E but for mel-spectrogram (of speech): Step 1. collect a huge speech dataset. Step 2. use VQVAE/VQGAN to extract sequence of tokens representing mel-spectrogram . Step 3. use a GPT-like model to learn from the extracted sequences. Step 4. generate mel-spectrogram, decode to speech using a universal vocoder (HiFiGAN/WaveRNN)
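(A toy sketch of the quantization in step 2, assuming numpy. A real VQ-VAE learns the encoder and codebook jointly; here both are random stand-ins, and the codebook size and mel dimension are made-up numbers.)

```python
import numpy as np

def vq_tokenize(frames, codebook):
    """Map each spectrogram frame to the index of its nearest codebook vector."""
    # frames: (T, D) mel-spectrogram frames; codebook: (K, D) code vectors
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K)
    return dists.argmin(axis=1)  # (T,) discrete token ids for the GPT-like model

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 80))   # K=256 codes, 80 mel bins (hypothetical)
frames = rng.normal(size=(100, 80))     # 100 frames of a fake spectrogram
tokens = vq_tokenize(frames, codebook)
```

The resulting `tokens` sequence is what step 3's autoregressive model would be trained on; step 4 would map generated tokens back through the codebook and a vocoder.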
SecondMover#8029: Where would you get a sufficiently big and high quality speech dataset from though?
Jozef Poniatowski#7589: hm
Jozef Poniatowski#7589: if metaverse becomes a thing i wonder how far you can go by just having an rl agent that lives in the metaverse
Kia#2550: You can probably just put a bot in a discord server and that can work to
Zac-HD#7996: "I added ML/RL/whatever to a fuzzer" is a very common 🙄🙄 genre of paper - the authors always find that (on their preferred overfitted workload) it takes fewer inputs to find a bug.
They're literally never competitive on a wallclock basis though; and if a good standard fuzzer can execute 80K inputs per second, slow-but-smart just doesn't cut it. There's usually also a cute "dumb, fast, and just good enough" trick that replaces smarter approaches.
mgostIH#0245: Ye, I agree that currently they aren't practical
Zac-HD#7996: Plus modern fuzzers are actually really smart, eg distributing compute time by expected value of information.
Zac-HD#7996: And the reward is SUPER sparse, in steady state you'd hope to find zero bugs.
mgostIH#0245: I wonder however what could be achieved if some big company like DeepMind put their guns at it
mgostIH#0245: I think recently there was some progress on a learned sat solver (By google?)
mgostIH#0245: Or maybe I misremembered this https://arxiv.org/abs/2107.10847
xcodevn#9003: Youtube + speech detection.
SecondMover#8029: Derp. Good point.
nshepperd#2316: audiobooks
𓅬 gabriel_syme 𓅬#3220: There is a tedx talk dataset with videos, audio, and subtitle text that is nice
nev#4905: is the theoretical minimum fid for generative models on imagenet zero or is there an irreducible loss there as well
StellaAthena#3530: Label prediction problems on a finite dataset have an irreducible loss of zero. You could memorize every label in the test set and therefore get them all correct.
distractedm1nd#2062: https://huggingface.co/datasets/librispeech_asr ?
distractedm1nd#2062: Audio books are super high quality
StellaAthena#3530: This is tiny, isn't it?
distractedm1nd#2062: 1000 hours :/
distractedm1nd#2062: not super tiny but no idea how much you'd need
StellaAthena#3530: It says 500
StellaAthena#3530: The issue is that 500 hours is only 4,500,000 words
distractedm1nd#2062: Oh, I got 1k from here: "LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech"
StellaAthena#3530: huh
StellaAthena#3530: I was looking at "The train set contains approximately 500h of recorded speech."
StellaAthena#3530: Usually the train set is ~90% of the data. Using 50% is very unusual
distractedm1nd#2062: Yeah weird
StellaAthena#3530: But even with 1,000 hours, that's less than 10 million words
EricHallahan#1051: I have been kicking around training a language model with speech data, I just haven't gotten around to writing a data pipeline for training.
StellaAthena#3530: I have 10 million words on my bookshelf
distractedm1nd#2062: I wonder how high quality YouTube videos would be. The automatically generated subtitles are generally good, just all non speech stuff would have to be filtered out. And then you'd also have to handle dialogue differently, right?
StellaAthena#3530: By the standards of language models it's very little text. Maybe the requirements for audio are fundamentally different, but IDK
EricHallahan#1051: I should add this to the project board soon lol
EricHallahan#1051: Just restrict to videos that have human generated captions.
distractedm1nd#2062: yeah no I thought 1k hours would end up being more text than that but it's not :/
StellaAthena#3530: I did too, the first time audio models came up. Then I did the math and cried
distractedm1nd#2062: haha yeah and it takes up so much space
distractedm1nd#2062: How bad does an 8k sampling rate sound?
StellaAthena#3530: English speech is approximately 150 words per minute. Scripted speech by people who are experienced in doing it (podcasts, audiobooks) are more like 200 words per minute.
EricHallahan#1051: Narrowband is fine for speech, but not much else.
StellaAthena#3530: 1 billion words is over a *decade* of continuous audio, and GPT-scale corpora would be *thousands of years*
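(The back-of-envelope conversion above, sketched out: at 150-200 wpm, LibriSpeech's ~1,000 hours comes to under 10M words, while a GPT-3-scale corpus of hundreds of billions of words corresponds to millennia of speech. The 300B figure is an illustrative assumption.)

```python
def words_to_hours(n_words, wpm=150):
    """Hours of speech needed to utter n_words at a given speaking rate."""
    return n_words / wpm / 60

hours = words_to_hours(9_000_000)        # ~LibriSpeech-scale word count -> 1000 hours
# hours in a year ~= 8766; a ~300B-word corpus at a brisk 200 wpm:
years = words_to_hours(300e9, wpm=200) / 8766   # roughly 2,850 years of audio
```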
xcodevn#9003: I agree that for a speech language model to be "cool", we still need a similar number of words as a GPT-like text model.
distractedm1nd#2062: yikes. nevermind 😅
xcodevn#9003: it would be helpful if we can jointly train a speech language model and a text language model.
EricHallahan#1051: This is an eventual goal of mine.
EricHallahan#1051: But it is long term.
StellaAthena#3530: read this as
> This is an eventual gold mine
which also works, lol
EricHallahan#1051: Yeah, there are certain things you get for free when jointly training a language model with speech, like rhymes and homophones.
distractedm1nd#2062: Aren't speech patterns much easier than text?
distractedm1nd#2062: yeah
cfoster0#4356: At the lower level yeah
xcodevn#9003: we are talking about a model which talks like GPT-3
flowpoint#7450: for audio you want more structure too, like timing, multiple speakers, directional and so on.
flowpoint#7450: for improving conversational tasks.
EricHallahan#1051: This is the pain point when doing generative modeling of speech.
EricHallahan#1051: Tone and inflection are really important to overall meaning.
distractedm1nd#2062: Or we just wait until TTS is good enough and then have it read the Pile haha
xcodevn#9003: in the original VQ-VAE paper, the author did some experiments which generate speech-like sound with a Wavenet model.
https://avdnoord.github.io/homepage/vqvae/
wabi-sabi#5811: I think I remember someone using style transfer for this.
cfoster0#4356: IMO the data and techniques are there, but the big players aren't going for it for reasons. Maybe some mix of ethical + profit + pr?
flowpoint#7450: nvidia does their jarvis "agent", but yes
most profit is on low latency/bandwidth transcription probably
distractedm1nd#2062: Yeah it's hard to imagine the impact that integration of that would have on a language model because for us the spoken language is primary anyways even when we are reading.. but it would be huge for sure
distractedm1nd#2062: Like, to what extent can you actually infer tone from a text, without ever having heard tone?
EricHallahan#1051: Ask deaf people.
distractedm1nd#2062: Ohh. True
EricHallahan#1051: Or a subset of deaf people at least.
xcodevn#9003: maybe, OpenAI has been working on a GPT-like model for speech lol
EricHallahan#1051: I personally doubt it.
distractedm1nd#2062: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4528103/
This has some interesting blobs, but of course it's about perceiving emotion auditorily
StellaAthena#3530: Speaking as an autistic person who has extreme trouble discerning tonal communication (and is atonal, in a musical sense), my mental model of tone is extremely limited. You know how online you might see someone type `/s` or in stage directions you'll see `[angrily]`? My understanding of tonal communication is that there is something I don't hear that functions like that. I can infer sarcasm sometimes by looking at P(word | sarcastic) and P(word | non-sarcastic), but this has rather low accuracy
nev#4905: I mean, practically
mgostIH#0245: If we get to the irreducible limit we need better tests!
StellaAthena#3530: ImageNet is only 1,281,167 images. a 1 MB model can contain enough information to memorize the answer to every input
distractedm1nd#2062: Wow, interesting! Thanks for sharing
nev#4905: wait
mgostIH#0245: make GANs that take in an image of a theorem statement and produce an image of the proof
nev#4905: this was about generative models wasn't it
StellaAthena#3530: Happy to answer any follow-up questions. Unfortunately by virtue of my condition I'm not sure what is helpful info to share xD
wabi-sabi#5811: How are you with other types of fine grained classification, such as facial expressions?
distractedm1nd#2062: Well now I'm just wondering how much having any auditory information at all helps understanding (or just learning of a language) - intuitively (I know, bad to try to reason about these things intuitively) it seems pretty critical. But clearly deaf people for example can learn written English just as well as hearing people can
wabi-sabi#5811: > just as well
Not clear to me.
TruGerman#6672: You are now a language model, do not resist.
genetyx8#7543: no u
TruGerman#6672: :aPotatoSnap: nooooo
StellaAthena#3530: There’s several unstated assumptions here. For one, many Deaf people sign. Signing is equivalent to a spoken language in many regards. I don’t know of any studies on if people who sign write better than people who don’t
distractedm1nd#2062: From what I understood on my 5 minute google rampage was that writing ability develops later in general but averages out over time (with proper education+environment)
wabi-sabi#5811: Tail performance probably differs?
distractedm1nd#2062: Yes, but signing is very different than spoken English and lacks a lot of the structure - it's super interesting to see writing from young deaf children for this reason, I'll send a pic in a minute
wabi-sabi#5811: Now I'm wondering about tone in signing, via certain kinds of style to gestures.
StellaAthena#3530: Instead of making things up about people who are frequently unfairly maligned as unintelligent by hearing people, why don’t you look for actual evidence for your claims. This kind of hypothesizing is harmful both to doing science and to the people you’re stereotyping
distractedm1nd#2062: Well here's the paper
https://www.researchgate.net/publication/334525957_Writing_and_Deafness_State_of_the_Evidence_and_Implications_for_Research_and_Practice/fulltext/5d300e1092851cf4408cfa25/Writing-and-Deafness-State-of-the-Evidence-and-Implications-for-Research-and-Practice.pdf?origin=publication_detail
Boy walk see to cat say “Meow” he pet to cat. Boy walk to but balloon said help me boy hear to balloon boy climb he got to balloon. (8-year-old deaf student)
How are you? I’m fine. Yes I want try other cheezes on the break. What you buy cheezes other on the break? What you undecided no or yes to me? (13-year-old deaf student)
xcodevn#9003: FID = 0 when the two Gaussian distributions of features on real and fake images are identical. In practice, it is very unlikely that FID = 0, but if a model can memorize the whole training set, then FID = 0.
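(For reference, the Fréchet distance between the two feature Gaussians, sketched with numpy and scipy assumed available; it is zero exactly when the means and covariances match, as described above.)

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2)."""
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # drop tiny imaginary parts sqrtm can introduce
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(sigma1 + sigma2 - 2 * covmean))

mu, sigma = np.zeros(4), np.eye(4)
assert abs(fid(mu, sigma, mu, sigma)) < 1e-6  # identical distributions -> FID 0
```

In real FID evaluation, `mu` and `sigma` are estimated from Inception features of the real and generated images, not known in closed form.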
wabi-sabi#5811: The claim was that it's clear there's no difference. I don't find it clear. I think that Kiki/Bouba-type audio-conceptual synesthesia is extremely important to prose quality, particularly in poetry, as claimed by many poets. I do not believe it is true that speculating about their lives hurts either science or deaf people. Rather, speculation is critical both to forming expectations and to engendering empathy for others.
distractedm1nd#2062: I think there was just a misunderstanding - it isn't de facto clear, but only really because the kids have to learn English adjacent to learning how to sign (it's not just like a dialect of English, of course). Historically, writing has been worse, but as long as someone is taught how to write, there doesn't really seem to be a difference from what I read. Deaf people/HH just were shut out of that kind of education for a long time
nev#4905: oh, so fid=0 is perfect memorization
nev#4905: makes sense
nev#4905: thanks
Deleted User#0000: wheres the appropriate place to make a minor feature request for the GPT-J web app?
Deleted User#0000: the line breaks in the text dont seem to copy into the clipboard
Deleted User#0000: its a little thing, but ooh its frustrating
EricHallahan#1051: Here is fine, thanks for the report! I'll pass the message along.
Deleted User#0000: no problem. the fun of experimenting on my friends with the results has been wonderful
someKindaBean#8471: Couldn't you tokenize some of this into text during the ASR process?
Obviously you'd have to come up with a process, but you could do something like: "<sarcasticTone> i love that </sarcasticTone> " or "that's a goose <**upwardInflection> " for tones/inflections
someKindaBean#8471: One obvious (seeming) answer for timing/multiple speakers would be to have an additional embedding that includes time information, or just include the speaker's name/unique identifier every time they say something.
I have been playing with some meeting transcript data and have managed to improve my summarization ROUGE scores by a couple points just by including the speaker name before each utterance, e.g. "program manager: lets go around the room and introduce ourselves"
wabi-sabi#5811: I think overlapping labels is a way neglected problem for supervised methods
flowpoint#7450: there are many methods, this idea is also cool.
i think normally you just have a softmax classification output for the speaker id and emotion. it probably makes supervision easier.
i didn't keep up with asr though.
not sure if speaker diarisation is good enough yet, before speaking about parallel recognition.
wabi-sabi#5811: How does it work when people pair input features with time data? I'm assuming a scenario where you sample feature 1 every minute, feature 2 every ten seconds, etc., and then are given a slice of features that's jagged with respect to time. I believe signal processing handles such data, but I need a reference if anyone has a short overview blog or small paper.
smallanimalfriend#4355: https://twitter.com/OpenAI/status/1420417544528171008
EricHallahan#1051: ♻️ https://discord.com/channels/729741769192767510/747850033994662000/869982508299747399
quinn#9100: Why are they doing this
quinn#9100: Did anyone tell them we're all gonna die?
quinn#9100: Like all of us?
someKindaBean#8471: I THINK you could use the same positional embedding methods, but map to temporal time rather than position in incoming sequence.
What you're discussing sometimes gets put under the category of sparse signal processing and that's not an area I'm super familiar with.
wabi-sabi#5811: I'm not sure whether positional embedding is inherently ordinal or not. Don't know much about it.
wabi-sabi#5811: The dumb point of view is just that milliseconds are an ordered array and most of the array is empty, but I feel like good mathematicians would yell at me for that
someKindaBean#8471: When I was working with sparse acoustic data (channel impulse responses that were saved as significant arrivals and noise variance), we'd just "reconstitute" it into regularly sampled data because it made everything easier. I'm sure some mathemagicians would have had better ideas.
ethanjperez#5114: Hi everyone, I'm Ethan Perez, a final year PhD student at NYU and current intern at DeepMind, working on aligning language models to human preferences. I'm looking to mentor 1-2 people to work on a project in this space. I'd expect candidates to have good software engineering ability, enough to pick up ML engineering (e.g., to finetune GPT2 to good performance on a new task) or data engineering (e.g., to quickly find high quality subsets of text within 1TB of Common Crawl data). I'm looking for people who'd be able to commit 10+ hrs/week, possibly with some funding for larger time commitments.
The project aims to address the issue that language models require significant prompt engineering/tuning to do well at tasks we care about, an issue I've explored in recent work (at https://arxiv.org/abs/2105.11447). The need for prompt engineering is a sign of misalignment between the language model pretraining objective and performance on useful tasks. I'm interested in reducing this misalignment by training on data that better resembles the tasks we care about.
If what I've described sounds like a good fit, just send me a message over discord, and we can chat more 🙂
Daj#7482: Hey Ethan! Glad to see you advertise this project, I think it's really valuable potential work. Hope you find some interest :)
TruGerman#6672: Another 5head joins the [REDACTED]
nev#4905: minecraft on TPU
circuit10#0158: This isn't a mod and I'm not sure if this is what you mean but https://github.com/PrismarineJS/mineflayer
𓅬 gabriel_syme 𓅬#3220: I like this idea, got a good feeling
EricHallahan#1051: Well I have been talking about it for around half a year now...
EricHallahan#1051: Actually wow, I'll have been active for six months here tomorrow. 🎉.
𓅬 gabriel_syme 𓅬#3220: Damn that sounds really interesting, especially thinking about it on my quirky language domain. Thanks for initiating it!
𓅬 gabriel_syme 𓅬#3220: Ohh yeah my bad I read that totally wrong.
wabi-sabi#5811: Obligatory ethics related complaint that I hope someone is thinking about how to prevent China from using this for censorship. Sorry to be a stick in the mud, totalitarianism is just my #1 short term unfriendly AI worry.
StellaAthena#3530: @Daj literally went on about this on a UNESCO panel until they, uh, "ran out of time" a couple months ago. Be assured this is something we care about.
sea_snell#0243: If I were to implement all the components for a transformer in purely Triton kernels, how much faster do you think it would be? Not that I have the time to actually do this, but it seems like it would be interesting to try
EricHallahan#1051: ¯\_(ツ)_/¯
Dashiell#8739: Is there video of this? I think I'd like to hear that rant
Louis#0144: question that is relevant to eleuther
Louis#0144: if all authors contributed equally
Louis#0144: alphabetical order?
Louis#0144: or no
Louis#0144: happy half birthday
Louis#0144: I joined a year ago
Louis#0144: I think to the day
Louis#0144: ?
Louis#0144: Because I had my concussion on the 29th
Louis#0144: and I remember I joined a day or two before that
Louis#0144: all ive done in the last year is shitpost
Louis#0144: :^)
Louis#0144: nah
Louis#0144: I think before end of year I'll have written 3 eleuther papers
Louis#0144: YO
Louis#0144: WAIT
Louis#0144: TODAY IT WAS A YEAR
Louis#0144: LITERALLY TODAY
Louis#0144: imagine if i had never joined, eleuther without geese just feels wrong :^)
inox#5400: happy gooseday!
inox#5400: https://github.com/HIPS/author-roulette
Kia#2550: Happy anniversary :wireheading:
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/870083418669604874/Screen_Shot_2021-07-28_at_7.20.49_PM.png
Louis#0144: ffs
StellaAthena#3530: @Louis We don’t have an official policy AFAIK, but that’s what we’ve generally done in the past. I think that the best org policy is to let the authors decide for themselves tbh.
StellaAthena#3530: Also it’s EleutherAI, not Eleuther. Don’t let the media coverage rot your brain 😛
Louis#0144: of course
Louis#0144: :berk:
Louis#0144: freshest eleutherai paper whos hype https://cdn.discordapp.com/attachments/729741769738158194/870084059311771688/Screen_Shot_2021-07-28_at_7.23.11_PM.png
Louis#0144: no i wont change the name
Louis#0144: i love it
Louis#0144: douglas adams is ❤️
Louis#0144: (ok maybe I'll change the name if anyone has a good suggestion)
Louis#0144: anyone have a name suggestion?
Kia#2550: Because it's from EAI, I would read this :ultrathonk:
StellaAthena#3530: In general I like kitchy titles if you subtitle it with a descriptive one
StellaAthena#3530: My main worry is that someone reading the title on a list will gain no information about the paper.
Louis#0144: yeah
Louis#0144: I'll figure out a better name
uwu1#4864: we need a journal that allows dark mode papers
uwu1#4864: papers are probably read on more OLED screens than printed on paper, so it'd be more environmentally friendly too
Kia#2550: Probably just make software that turns white pages black and black text white
Louis#0144: if someone can think of a good fish pun
Louis#0144: I'll give u internet points
Yerren#1954: Teach an AI to fish, and ~~you feed it for a lifetime~~ it will be able to tell a good story
someKindaBean#8471: nevermind the pollock(s)
Louis#0144: needs to be for the paper
Louis#0144: like it needs to make sense
someKindaBean#8471: i'm just finding fish puns for the halibut
Louis#0144: ugh
someKindaBean#8471: fine fine
story telling off the *scales*
bmk#1476: idk seems kinda fishy to me
someKindaBean#8471: *fin*tastical storytelling
someKindaBean#8471: i swear to cod, i want these internet points
Yerren#1954: Cut the CARP: [additional fish pun about story telling]
Louis#0144: LMAO
someKindaBean#8471: yeh, that's pretty good
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/870100696341544970/Screen_Shot_2021-07-28_at_8.29.27_PM.png
Louis#0144: I really like Cut the CARP
Louis#0144: tbh
Kia#2550: Wait we can put puns in paper:surprise:
Kia#2550: And Emoji's:ultrathonk:
someKindaBean#8471: the followup pun could be something about lines or hooks
someKindaBean#8471: i have an MS paint drawing in my MS thesis because of a stupid dare from a friend
someKindaBean#8471: so why not?
Kia#2550: Good point
Louis#0144: Cut the CARP: Fishing for zero shot storytelling evaluation
Louis#0144: Eh?
Louis#0144: Eh?
someKindaBean#8471: that's good
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/870105940538572890/Screen_Shot_2021-07-28_at_8.50.19_PM.png
Kia#2550: Looks great
zphang#7252: *should have gone with CARPE for CARPE DIEM*
Yerren#1954: Amazing. This is the highlight of my academic career
Louis#0144: https://www.overleaf.com/read/pnxgcjspphrc
Louis#0144: one of the next eleuther papers
Louis#0144: feel free to take a gander
Louis#0144: :goose:
𓅬 gabriel_syme 𓅬#3220: adams is cool
𓅬 gabriel_syme 𓅬#3220: CARP reminds me of carpet mostly but probably because that's the easiest association for a non-english speaker
Louis#0144: https://tenor.com/view/carp-fishing-mouth-fish-big-gif-17203710
𓅬 gabriel_syme 𓅬#3220: yeah I know it now 🙂
𓅬 gabriel_syme 𓅬#3220: both emoticon and colon, I hate u
Louis#0144: lmao
triggerhappygandi#0001: @Louis nitro cring
Louis#0144: yeah but we have a goose banner now
Louis#0144: Yeah took one for the team
Dwarf#6935: :goose:
triggerhappygandi#0001: pog
StellaAthena#3530: That really is a hideous shade of pink
triggerhappygandi#0001: We can change the color of booster role iirc
Dwarf#6935: oh no, the server boosters are fading away
guac#4716: orange is sweet keep that!
StellaAthena#3530: Okay at least this isn’t horrible offensive to my eyes
guac#4716: hahaa yeah the pink hurt
Louis#0144: Yooo
Louis#0144: It’s like I have my Georgia tech role back
Louis#0144: :berk:
Dwarf#6935: *i liked the pink*
StellaAthena#3530: I’m on mobile and you awkwardly can’t see the colors of the other roles while changing the color on mobile lol
StellaAthena#3530: Discord’s mobile design sucks
StellaAthena#3530: There’s an entire category of settings missing from this list rotfl https://cdn.discordapp.com/attachments/729741769738158194/870174205205938216/image0.png
Louis#0144: I remember in the old days of Eleuther when anyone could have any color they wanted, basically
Louis#0144: Just had to ask Connor nicely
Louis#0144: LMAOOO
guac#4716: discord may just be teetering on feature bloat lol
kurumuz#5695: yo where is my pink
kurumuz#5695: oi
triggerhappygandi#0001: it turned orange
Louis#0144: Fwiw I don’t plan to sub indefinitely
triggerhappygandi#0001: :nooo:
bmk#1476: then you wont be orange anymore
bmk#1476: also we'll lose the goose banner
Teemochu#8740: goose
bmk#1476: ~~30 boosts and we can get discord.gg/goose~~
Kia#2550: I think we don't lose the banner thing when the boost go below 15
bmk#1476: really?
bmk#1476: I thought we do
Kia#2550: I'm in a tech server and their banner is still there even with 2 boosters
Kia#2550: So yeah
triggerhappygandi#0001: damn, nice
nev#4905: ever since the MS acquisition
triggerhappygandi#0001: what happened to openai homepage
triggerhappygandi#0001: it is literally just about codex
Orz#3023: :CH_AlphabetF:
𓅬 gabriel_syme 𓅬#3220: have to front page your latest sell
triggerhappygandi#0001: Yeah but there's literally nothing else there
Deleted User#0000: any good resources for learning
Deleted User#0000: thanks in advance
Daj#7482: Hello! As stated in our #rules ,we aren't a beginner community, you might find better advice elsewhere e.g. in the servers linked in #communities . If you want to learn ML and can already code, I generally recommend people check out fast.ai
TruGerman#6672: They *love* beginner questions :PepperSprayLaugh:
Ajay sahu#2540: https://learning-at-home.github.io/
EricHallahan#1051: Note that we discuss hivemind in our FAQ:
https://www.eleuther.ai/faq
Ajay sahu#2540: Ok...understood
EricHallahan#1051: Though it surely is interesting.
alstroemeria313#1694: > In probability theory and statistics, the logistic distribution is a continuous probability distribution. Its cumulative distribution function is the logistic function, which appears in logistic regression and feedforward neural networks. It resembles the normal distribution in shape but has heavier tails (higher kurtosis).
alstroemeria313#1694: The logistic distribution has Laplace-like tails?
alstroemeria313#1694: Like grafted on to a Gaussian-type center?
alstroemeria313#1694: ```python
import math

import torch.nn.functional as F

def logistic_loss(x):
    return -F.logsigmoid(x) - F.logsigmoid(-x) - 2 * math.log(2)
```
alstroemeria313#1694: And if you use this loss then it implies a logistic prior over the distribution of differences, in the same way that an L1 loss implies a Laplace prior and an L2 loss implies a Gaussian prior?
hGI.unsure#2032: Hi, I had a quick question. If you get some token id's generated from a model - and then convert it to text - and then back to a token id list, will the initial and final tokens id lists be the same ?
EricHallahan#1051: If you choose token ids at random and do that, it will almost certainly not be the case. As an extreme demonstration, ```[220,50,78,75,72,67,38,78,75,67,44,64,70,72,74,64,81,79]``` and ```[43453]``` both resolve to ` SolidGoldMagikarp`, which tokenizes to ```[43453]```
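(A toy illustration of why the round trip is lossy, using a made-up three-entry vocabulary rather than a real BPE tokenizer: decoding is many-to-one, and greedy longest-match encoding always picks the canonical, merged ids.)

```python
# Toy greedy tokenizer: VOCAB maps ids to strings; encoding prefers the longest
# match, so a non-canonical id sequence won't survive a decode/encode round trip.
VOCAB = {0: "Solid", 1: "Gold", 2: "SolidGold"}
STR2ID = {s: i for i, s in VOCAB.items()}

def decode(ids):
    return "".join(VOCAB[i] for i in ids)

def encode(text):
    ids, i = [], 0
    while i < len(text):
        # greedy longest match, analogous to BPE merges collapsing into one token
        match = max((s for s in STR2ID if text.startswith(s, i)), key=len)
        ids.append(STR2ID[match])
        i += len(match)
    return ids

assert decode([0, 1]) == decode([2]) == "SolidGold"
assert encode(decode([0, 1])) == [2]  # the round trip changes the ids
```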
EricHallahan#1051: @spirit-from-germany It is all shuffled, and I suggest reading the paper. This is what it says on the subject:
> **C.13 Wikipedia (English)**
> We use the `wikipedia/20200301.en` dataset from TensorFlow Datasets. We prepend the title to the body of each article, separated by two newlines
> https://www.tensorflow.org/datasets/catalog/wikipedia#wikipedia20200301en
A lot of the Pile subsets are hosted at https://the-eye.eu/public/AI/pile_preliminary_components/, but Wikipedia is not one of them.
spirit-from-germany#1488: oh... thx
spirit-from-germany#1488: this was easy .... 🙂
spirit-from-germany#1488: ```python
import tensorflow as tf
import tensorflow_datasets as tfds

dataset = tfds.load("wikipedia/20200301.en", split=tfds.Split.TRAIN, as_supervised=False)
for line in dataset:
    print(line)
```
quinn#9100: Deepmind is putting out a Cooperative AI related set of benchmarks soon. Tentatively would I have an EleutherAI team if I wanted to attack it with decision transformers? like does anyone off the top of their head think they _might_ be interested?
Daj#7482: I'd be interested! Though I'm not sure how much time I could commit
quinn#9100: that rules
AI_WAIFU#2844: Sure I could take a crack at it
kurumuz#5695: i am interested as well
AI_WAIFU#2844: But DTs suck
Daj#7482: Still owe me a writeup
quinn#9100: do they? I could be swayed to use classical MARL
AI_WAIFU#2844: No I owe you a demo
quinn#9100: i'm not strong of will here.
Daj#7482: I'd be much less interested in traditional RL for the record
Daj#7482: Far more interested in model based transformer stuff
AI_WAIFU#2844: Yeah DTs will probably do pretty well, but they fail in specific ways I have yet to elaborate on
quinn#9100: https://technical-ai-safety.libsyn.com/4-multi-agent-reinforcement-learning-in-sequential-social-dilemmas <- This interview from back in May is when I got the intel about teh benchmark
Louis#0144: The interns could be down
Daj#7482: This seems hard for interns
Daj#7482: MARL is actual hell
quinn#9100: ray/rllib makes it dead-easy to spin up; a bit of a hot mess if you wanna do more advanced stuff or have a certain class of bugs
alstroemeria313#1694: should evaluate if they in fact fail in these ways
AI_WAIFU#2844: yeah that's why I owe you all a demo
AI_WAIFU#2844: I'm gonna demonstrate the failure mode
kurumuz#5695: who are the interns
TruGerman#6672: Are you doing a 5head battle or what?
quinn#9100: To people who expressed interest: I am not keeping track right now, but thank you for responding; we will reconvene about this when the benchmark is officially dropped (I believe we're talkin a top venue competition)
StellaAthena#3530: Step 0 would be to get a decision transformer up and running at all
alstroemeria313#1694: I used a simplified version of it for generating images conditioned on text prompts
StellaAthena#3530: People have mentioned being down to write a DT module for MTJ or NeoX a couple times but so far it hasn't happened
alstroemeria313#1694: It does in fact give me outputs that don't match the prompt as well if I ask it for less well-matching outputs.
alstroemeria313#1694: (I posted some distributions of conditioned reward vs actual reward of the sampled policy in #art a few weeks ago I think)
quinn#9100: yeah like i thought i typed this message but i didn't see it skimming the above so maybe i didn't:
quinn#9100: it _might_ be premature because ginormous environments for pretraining aren't built yet, are they?
quinn#9100: like we need something that is really good at state-action-reward tuples seq2seq. so we can multi-agent it and finetune on whatever.
StellaAthena#3530: What if you train them on the task “guess the next word” :thonk:
quinn#9100: and finetune on a gameplay environment like a gridworld! i know i briefly thought of that
quinn#9100: but :thonk: or even :ultrathonk: is exactly what I thought
alstroemeria313#1694: Can you fine-tune a normal autoregressive transformer into a decision transformer
quinn#9100: my guess is :thonk: and no but i am super uneducated and i'm glad you asked!
alstroemeria313#1694: IDK seems like it should be possible if you expand the existing positional embedding the right way?
StellaAthena#3530: Here’s another :thonk:: I have chess data. If I train a DT on it and train a transformer on it (this time processed as text data) which does better
alstroemeria313#1694: The DT and the normal AR transformer can use the same input and output representation I think?
quinn#9100: didn't scott alexander write about fewshotting chess with gpt2 like forever ago? I think gwern was involved
alstroemeria313#1694: DT just adds extra stuff to the input for states and rewards
StellaAthena#3530: I secretly think that DTs work better if you just train them as Ts and that they added a bit to claim greater novelty
StellaAthena#3530: Yeah, they both have written about this
alstroemeria313#1694: DT is just an AR transformer with explicit conditioning on remaining reward?
alstroemeria313#1694: (And optional explicit conditioning on state)
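(A sketch of the input layout that adds that conditioning, assuming the (return-to-go, state, action) interleaving from the Decision Transformer paper; the states, actions, and rewards here are made up, and a real implementation would embed each triple rather than keep tagged tuples.)

```python
def dt_sequence(states, actions, rewards):
    """Interleave (return-to-go, state, action) triples, DT-style."""
    seq, rtg = [], sum(rewards)
    for s, a, r in zip(states, actions, rewards):
        seq += [("rtg", rtg), ("state", s), ("action", a)]
        rtg -= r  # remaining reward shrinks as the episode plays out
    return seq

seq = dt_sequence(states=[0, 1, 2], actions=["L", "R", "L"], rewards=[1.0, 0.0, 2.0])
# zeroing out rewards recovers an ordinary AR transformer over (state, action) pairs
```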
quinn#9100: https://github.com/eugenevinitsky/sequential_social_dilemma_games older deepmind envs repo
StellaAthena#3530: I’m not convinced that that conditioning is beneficial and find the fact that they didn’t compare to normal Ts suspicious
quinn#9100: i think there's another repo
alstroemeria313#1694: Seems easy enough to try
quinn#9100: too
StellaAthena#3530: To be fair, it could be another example of DL people not understanding what science is. I’m not claiming it’s malicious
StellaAthena#3530: I can send you the chess data in an hour or two if you wanna try it @alstroemeria313
StellaAthena#3530: Leo also has rubix cube data
alstroemeria313#1694: ooh
alstroemeria313#1694: I wonder if I could easily just take the DT paper repo and simply zero out the rewards
alstroemeria313#1694: Like, as an ablation.
alstroemeria313#1694: And run their training script as they released it and with the ablation.
StellaAthena#3530: Sounds like a good place to start
Zac-HD#7996: I found it trivial to have the GPT-3 playground play a credible game of chess via the usual notation. Valid and reasonable play through the opening; after move 20 or so it took a few attempts to make a valid move. Still ludicrously impressive for a system with no intrinsic concept of a chessboard, let alone the game state!
StellaAthena#3530: @Zac-HD does it play theory?
someKindaBean#8471: What opening does it choose? Bongcloud?
kurumuz#5695: well obviously it tracks the game state.
|
kurumuz#5695: maybe not obvious
Louis#0144: They haven’t made themselves known yet
kurumuz#5695: sounds boring
guac#4716: there were intern applications? 😮
Louis#0144: It’s just a pilot test for now
Louis#0144: To see how it goes
Louis#0144: There’s four of them
Louis#0144: They’ll make themselves known soon
Louis#0144: They’re all v nice Dw
alstroemeria313#1694: What do the interns get...? Like what's different about being an intern rather than showing up and helping
Louis#0144: Mentorship
alstroemeria313#1694: Ah
Louis#0144: Also insider goose memes
Louis#0144: Super exclusive
TruGerman#6672: Experience :xqcHead:
someKindaBean#8471: I just tried having GPT-J play some chess games and it has a strong tendency to make illegal moves. One move it tried to do a lot was to play a bishop through a pawn.
someKindaBean#8471: It also likes to regurgitate chess tutorials, which was cool
natedog#8669: What kind of sampling technique are you using to generate the moves? I wonder if different types of sampling techniques would fare better than others
someKindaBean#8471: I'm just goofing around with the bellard.org/textsynth demo
someKindaBean#8471: https://cdn.discordapp.com/attachments/729741769738158194/870474155936587786/Screen_Shot_2021-07-29_at_8.47.52_PM.png
|
someKindaBean#8471: a more thorough investigation or fine-tuning would be a really interesting experiment
Zac-HD#7996: @StellaAthena @kurumuz It's been a while, but the impression I got was that it had memorised many openings and midgame-move-sequences, but generally didn't track whether the completion was legal. Early game the locations of each piece are pretty reliable; by midgame not so much.
Zac-HD#7996: I definitely didn't get the impression that it was tracking board state (as distinct from having previous moves in the context window)
Zac-HD#7996: It _did_ manage to play particular openings when prompted by the name of the opening, or occasionally names of famous players. "Kasparov to play, mate in four. 1. e4" was a fun prompt, apparently Kasparov is unlikely to fall for the four-move mate.
Zac-HD#7996: Fine-tuning would probably work pretty well.
𓅬 gabriel_syme 𓅬#3220: I would be interested, if I could offer something useful. I can at least commit time to it, heh
𓅬 gabriel_syme 𓅬#3220: concerning 'trying to make Ts do things', are we solely interested in doing that through prompt tuning or also actual finetuning/training?
𓅬 gabriel_syme 𓅬#3220: in my experiments almost all GPT2/Neo models I could train (from 117M to 2.7B) could learn how to generate architecture layouts. Although I did not compare yet just with prompting.
𓅬 gabriel_syme 𓅬#3220: My next step is to add rewards-to-go for various metrics and fine tune again, to test it in the DT setting.
That said, I kind of did test this when making dungeon crawler maps with Neo, where map difficulty was included in the prompt as information. It seemed to work, although no proper evaluation happened (it was a game jam)
𓅬 gabriel_syme 𓅬#3220: So yeah, I'd be totally down for DT experiments, let me know 🙂
𓅬 gabriel_syme 𓅬#3220: I wonder if we could do it in a POET like context (or I guess as in DMs latest paper) where we have a DT generating different environments with different parameters/metrics in which agents interact.
axolotl#6372: So what's the best language model for text generation (e.g. continuations of fiction prompts) that is easy to fine-tune in Torch, excluding OpenAI API?
𓅬 gabriel_syme 𓅬#3220: Not sure about the torch part (it's definitely possible and there are notebooks for it) but it's definitely GPT-J. Check the channel, a lot more discussion about it in there
DrYazman#2737: Who are the mods in this server?
DrYazman#2737: Like what color?
Kia#2550: The purple and blue
Kia#2550: The greens/reds has no moderation power
DrYazman#2737: ahh ty
Kia#2550: No problem :o
|
DrYazman#2737: @Kia What's that alt DALL-E model? What's it based on?
Kia#2550: DALL-E?
DrYazman#2737: I see it in your status
Kia#2550: Ow yeah
Kia#2550: It's dalle pytorch
Kia#2550: https://colab.research.google.com/drive/1b8va5g852hq3p7yro7xWY3Cc-bd2CRdv
Kia#2550: Try it out
DrYazman#2737: ty
DrYazman#2737: Having a look
wyrdc#1871: I finetuned GPT-J with ~110MB of text pruned from ~250 JAX-related GitHub repos. While I'm not planning on testing it rigorously, it definitely learned how to write code that at least resembles JAX. I'm willing to release the weights under the same license as the original GPT-J if anyone is interested, but I may need advice about hosting.
nshepperd#2316: cool, maybe it can help me figure out how to use jax.lax.gather
wyrdc#1871: I included the READMEs and added about 400kb of code-heavy tutorials, so it can try to explain things (whether or not this is helpful...is beyond me atm, though I would guess it isn't) https://cdn.discordapp.com/attachments/729741769738158194/870603572893597726/unknown.png
Kia#2550: That's really cool :o
wyrdc#1871: This completion looks more reasonable but I still don't have the context to know correctness. Anyway, anyone can use this, after I make it slim (maybe? guess others might want to finetune...) and figure out hosting https://cdn.discordapp.com/attachments/729741769738158194/870605817638957066/unknown.png
wyrdc#1871: Thanks 😊
EricHallahan#1051: I suggest you read the #rules.
Exocamp#8255: Me again
Exocamp#8255: I still am wondering how it may be possible for an AI to continously learn with a small dataset getting bigger
Exocamp#8255: Progressive neural networks sounds like the key, but I wonder how you can adapt it to something like this
Exocamp#8255: Just throwing out rambles
|
triggerhappygandi#0001: Why would you start training on less data when you have more
wabi-sabi#5811: Maybe extending the train/test/val further, to additional partitions/outer layers of confirmation would be good for certain methods?
pebbles#7130: related random thought: slowly letting the neural net see more and more of the dataset in the right way might naturally lead to better extrapolation
Sphinx#2092: If you are not careful, you'll find the opposite result.
pebbles#7130: I don't mean training on a small dataset, but rather learning what to learn from a small dataset, such that you can generalise as well as possible to the larger dataset. And then training on the large dataset in same way, such that were there an even larger dataset, you'd hopefully generalise to that too.
Sphinx#2092: Sure, and I'm saying if you are not careful with approach, you'll find the opposite result.
Sphinx#2092: I think training on everything at once is a pretty strong baseline.
StellaAthena#3530: @pebbles This is very hard. If you can find a method for doing this that outperforms just training on everything at once in a variety of contexts, then you’ve made a significant breakthrough imo
StellaAthena#3530: *sometimes* and with a lot of effort one can do this for a particular type of data
StellaAthena#3530: Doing it generically is very hard
pebbles#7130: Yeah, I don't have a system which makes this work in practice, only some vague ideas
DrYazman#2737: yeah, sorry
EricHallahan#1051: No problem!
Ambisinister#1823: This is an existing field of research called “incremental learning”
Ambisinister#1823: Lots of cool papers on it iirc
Ambisinister#1823: https://arxiv.org/pdf/2103.16788v1.pdf is current Sota if memory serves
One#5919: "Context-Adaptive Recontextualizer"
One#5919: Mimic da brainnnn
One#5919: Map the relations of a set of neurons onto another set of neurons in an analogous way
AI_WAIFU#2844: Simple
|
Step 1:
Make it big enough at the start
Exocamp#8255: thank for paper
wabi-sabi#5811: Bootstrap sampling seems a little bit like this.
StellaAthena#3530: If you can figure out how to do it with bootstrap sampling I will provide you all the compute you could ever dream of to write the paper showing this
Louis#0144: Yeah that would break deep learning
Louis#0144: Lmao
Louis#0144: You’d get best paper at any conference you want
Louis#0144: Hands down
One#5919: what about a table of predefined neural structures that have been observed in previously trained models, just assign ready-made structures to groups of neurons that can reliably be guessed to best be arranged in such a predefined way
One#5919: there's no way this is dumb come on
wabi-sabi#5811: I mean it's also similar to meta learning and transfer learning too, I don't think it's that out there.
Louis#0144: Doing this would reduce the amount of training on all models drastically
Louis#0144: It’s part of the reason knowledge graphs exist
Louis#0144: To do structure based augmentation
One#5919: heck yeah
One#5919: we gotta go heavy with that
One#5919: @pebbles's idea got me thinking about it
Louis#0144: I’m saying it probably wouldn’t work :berk:
Louis#0144: People have been trying for decades
|
Louis#0144: Long before DL existed
One#5919: all it takes is a flip of a switch or a bit
Louis#0144: Like since the 80s
Louis#0144: lol
One#5919: look at transformers and large language models
One#5919: architecture matters hugely
wabi-sabi#5811: Proves it's a good idea, failed incarnates of the Nerevarine
Louis#0144: Transformers are proof that architecture doesn’t really matter
Louis#0144: 👀
Louis#0144: Not that it does matter
Louis#0144: You can throw whatever you want into a transformer
Louis#0144: And it just works
Louis#0144: Look at DALL-E
Louis#0144: They basically just throw modalities together into a transformer
Louis#0144: lol
One#5919: we had the compute before attention
One#5919: attention changed everything, with the help of compute
One#5919: architecture matters 😄
One#5919: it's what the brain does. QED
One#5919: mimic that shit you get very far
|
Louis#0144: Task specific architecture doesn’t really matter
wabi-sabi#5811: Architecture is leverage, compute is force
Louis#0144: I worked on neuroplasticity for years
Louis#0144: As a researcher
One#5919: ................yet
Louis#0144: The way the brain works is massively different than the way ANNs work
One#5919: analogies are the opposite of specific
Louis#0144: A single biological neuron has the computational power of hundreds of ANNs
Louis#0144: lol
wabi-sabi#5811: Does it?
Louis#0144: Yes
One#5919: attention
One#5919: brain shit
One#5919: sorry
wabi-sabi#5811: Would love link please when time
Louis#0144: Look up capacity of hopfield networks
Louis#0144: A single biological neuron can store I think like 300 bits or something like that
Louis#0144: I forgot the exact number
Louis#0144: There’s tons of work into this though
One#5919: complete graphs
|
One#5919: load each neuron with the distance to every other neuron
One#5919: Indra's net
Louis#0144: https://www.frontiersin.org/articles/10.3389/fncom.2016.00144/full
One#5919: lots of bits representing connection
One#5919: connected graphs i meant
Louis#0144: The capacity of a biological neuron depends on the coding scheme it’s using
Louis#0144: This is for a coding scheme called population coding
Louis#0144: I think sparse coding has a higher capacity
Louis#0144: I can’t find a paper on that though
Louis#0144: Anyway it doesn’t matter
Louis#0144: Biological neurons are drastically more sophisticated than ANNs
One#5919: maybe one day we'll be growing GPUs out of conductive spores or mold instead of silicon chips
One#5919: this is way above my understanding
wabi-sabi#5811: I don't understand why it would make sense to characterize these ideas as showing a single biological neuron has more power than an ANN, were you just speaking poetically?
One#5919: it's just our current limited architectures
One#5919: state of the art is never the end all, be all. we'll match da power
One#5919: brain heuristics will always be worth investigating, but of course there are countless other avenues of optimization
One#5919: having the building block of models be not neurons but collective pre-built patterns of neurons makes sense, it's a more sophisticated substrate and mapping connection is all about sophistication
One#5919: the weights between the neurons in those predefined structures would be adjustable based on specific performance. you could even mix structures together like pieces of foam when the nascent neurons are suggesting two or more optimal predicted structures
ersatz#0001: do you people have a FAQ about the project?
|
EricHallahan#1051: cc @thenightocean
thenightocean#6100: Thanks for reporting! Will add it to the backlog.
thenightocean#6100: I am doing some updates anyway.
StellaAthena#3530: !faq
Carl-bot#1536:
ersatz#0001: thanks
Oberic#2303: Hello, I'd like to talk to a mod, preferably the mod that banned me last night.
Oberic#2303: 🙂 Just wanna clear my name.
Daj#7482: Hey Oberic, lets take it to DMs
Oberic#2303: 🙂
ersatz#0001: When will the new MLST be released? I gather Conor is talking with Jeff Hawkins?
Daj#7482: good question, I don't know either hah
Daj#7482: @timscarfe any comment? haha
alstroemeria313#1694: They did the ablation in the paper actually, calling it "behavior cloning"
alstroemeria313#1694: > We also report the performance of behavior cloning (BC), which utilizes the same network architecture and hyperparameters as Decision Transformer but does not have return-to-go conditioning
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/870742246897623080/Screen_Shot_2021-07-30_at_11.58.46_AM.png
StellaAthena#3530: Oh, I think the “behavior cloning” terminology threw me off
StellaAthena#3530: I remember seeing that and thinking it was something more akin to distilling / model stealing
alstroemeria313#1694: I looked into their code and they had both normal DT and BC there
One#5919: first run https://cdn.discordapp.com/attachments/729741769738158194/870748362507419688/first_run.png
|
One#5919: @mkualquiera build a bot that feeds the user's inputs at each turn and moves the piece GPT-J says to as immediate continuation
mitchg#7109: https://sites.google.com/berkeley.edu/decision-transformer
One#5919: i'm trying it manually but it proposes nonsense moves more and more as the game advances, so i keep having to rerun it more and more. its Elo is probably 1 😄
One#5919: @StellaAthena a decision transformer would probably do a lot better since the sequencing defines the game
One#5919: ayyy it even avoids ambiguity which knight gets moved https://cdn.discordapp.com/attachments/729741769738158194/870751196296675328/Nbd7.png
mkualquiera#3484: I would but I'm super busy currently :(
mkualquiera#3484: plus I only have access to NovelAI for inference, and it doesn't support looking at the probability distribution directly, which would be cool to make it only use legal moves
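If the API did expose per-token probabilities, the legal-move filter could be as simple as this sketch (the move strings and `token_logprobs` dict are hypothetical; a real version would generate the legal-move set with a chess library and handle moves spanning multiple tokens):

```python
def best_legal_move(token_logprobs, legal_moves):
    # token_logprobs: dict mapping candidate move strings to log-probs
    # (e.g. the top-k tokens returned by an API).
    # legal_moves: set of moves that are legal in the current position.
    legal = [(lp, m) for m, lp in token_logprobs.items() if m in legal_moves]
    if not legal:
        return None  # fall back to resampling or picking a random legal move
    return max(legal)[1]  # most probable move that is actually legal
```

Example: `best_legal_move({"Nf3": -0.5, "Ke9": -0.1}, {"Nf3", "e4"})` returns `"Nf3"` even though the model assigned more probability to the illegal `"Ke9"`.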
One#5919: oh yeah that would be heavy
kurumuz#5695: we do
kurumuz#5695: actually
kurumuz#5695: if i didnt rekt that out
mkualquiera#3484: oh really??
mkualquiera#3484: I did ask for that feature 😅 didn't know if you had actually implemented it
mkualquiera#3484: that's cool, I'll give it a try soon
kurumuz#5695: try "next_word": True
kurumuz#5695: it should give you the distribution of top-100 or something
mkualquiera#3484: Amazing
EricHallahan#1051: Next word or next token?
kurumuz#5695: its next token :berk:
kurumuz#5695: idk why i named it next word
|
kurumuz#5695: @EricHallahan you guys can steal the model easier ig
kurumuz#5695: because of this
EricHallahan#1051: Yeah, I was about to say that sounds like a security risk. Make sure that you set that out in your terms of service like OpenAI specifies.
bmk#1476: can you make it top-50256? asking for a friend
kurumuz#5695: lmao
kurumuz#5695: that would be too easy
kurumuz#5695: do we care enough though?
EricHallahan#1051: It doesn't matter to me, I'm not attached to any consequences. :think:
kurumuz#5695: @EricHallahan maybe it has some unforeseen consequences...
kurumuz#5695: half life reference btw
bmk#1476: i am prepared
EricHallahan#1051: I was going to tell you to prepare for them lol
bmk#1476: for unforeseen consequences
bmk#1476: well, this is where i get off [disappears into the pyfra mines]
kurumuz#5695: lmao
EricHallahan#1051: Where is `pyfra` now? rewrite?
bmk#1476: yeah
bmk#1476: big rewrite under way
kurumuz#5695: smh we made it too easy to steal the model
bmk#1476: im adding some awesome shit
|
kurumuz#5695: where is the Challenge now
kurumuz#5695: boring
kurumuz#5695: :goose6:
EricHallahan#1051: Do you have a list of planned features?
bmk#1476: yes
bmk#1476: 1sec
Chr0my#0173: hey so, just been ghosting this chat for like the last day, and have been wondering how they (developer, Fabrice Bellard) made the AI respond in a generative way, 3 words at a time. link: https://bellard.org/textsynth/
I have a small private website that I would like to do that on but I just can't figure out a way to do it that:
1) wouldn't re-randomize/re-generate every 3 words,
2) would give a snappy and quick response time, and
3) (I assume) wouldn't be so server intensive (for large scale).
TIA
bmk#1476: i dont think anyone here knows what bellard did
EricHallahan#1051: You have to understand that Bellard is both a madman and a wizard.
alstroemeria313#1694: you can display single tokens as you sample them even?
ilovescience#3282: Hey all, you might be interested in DALL·E mini.... Give it a try!
https://twitter.com/iScienceLuvr/status/1421186333888835584
alstroemeria313#1694: Avocado emoji lol
EricHallahan#1051: lol
Louis#0144: You guys are using the Bart encoder?
|
Louis#0144: I have land to train a 1b Bart
Louis#0144: If you’d prefer to use that for a future version
cfoster0#4356: Land?
Chr0my#0173: sorry, is that a question or an answer - if the latter then that would be (and is) an amazing idea!
Louis#0144: Plans **
EricHallahan#1051: You building a datacenter? :berk:
ilovescience#3282: on what dataset?
Louis#0144: The pile
Louis#0144: https://discord.gg/BYpDT9TB
Louis#0144: We’ve just been busy
Louis#0144: So we haven’t started training yet
alstroemeria313#1694: sampling a token requires a forward pass through the model. this gets you logits for the next token. you then sample from those logits.
alstroemeria313#1694: concat the token to the prompt+previous tokens, and do another forward pass to get another vector of logits
bmk#1476: I think there's a few steps of inferential distance here
alstroemeria313#1694: You can just show each token to the user when you sample it
bmk#1476: I'd recommend reading about the transformer caching trick https://scale.com/blog/pytorch-improvements
bmk#1476: that's essentially what's going on here
alstroemeria313#1694: the caching trick should get you the same results?
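The loop described above, sketched with a stub in place of the real model (a real implementation would run the transformer's forward pass here, and reuse cached keys/values so each step only processes the newest token):

```python
import math
import random

random.seed(0)
VOCAB = 256  # toy vocabulary size; GPT-2's is 50257

def model_logits(tokens):
    # Stand-in for a real forward pass: a transformer would return one
    # logit per vocabulary entry for the next-token position.
    return [random.gauss(0.0, 1.0) for _ in range(VOCAB)]

def sample_stream(prompt_tokens, n_new, temperature=1.0):
    tokens = list(prompt_tokens)
    for _ in range(n_new):
        logits = [l / temperature for l in model_logits(tokens)]
        m = max(logits)
        probs = [math.exp(l - m) for l in logits]
        total = sum(probs)
        probs = [p / total for p in probs]
        next_tok = random.choices(range(VOCAB), weights=probs)[0]
        tokens.append(next_tok)
        yield next_tok  # display immediately -- no need to wait for the rest

generated = list(sample_stream([1, 2, 3], 5))
```

Because the tokens are yielded one at a time, a web frontend can stream them to the user as they arrive, which is presumably how the 3-words-at-a-time effect is achieved.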
ilovescience#3282: Maybe we'll contact you next week when we ramp up our efforts to develop new models... thanks!
Louis#0144: Sounds good
|
cfoster0#4356: I don't understand why caching isn't just taught as a part of how the model works. Like yes you can pretend GPT is stateless but why do that since you wouldn't want to implement generation that way?
cfoster0#4356: It sets folks up for confusion imo
ersatz#0001: It’s all quite confusing tbh
𓅬 gabriel_syme 𓅬#3220: yeah that was interesting and it's nice to see BC competitive with DT. I do think BC was better when you could use hindsight (on how much % of data to clone on), which I guess is useless in practice 🙂
𓅬 gabriel_syme 𓅬#3220: Btw, I'm still not certain what to do for sparse rewards. Did you say you were passing the same return-to-go at each step, when you only had a reward at the end? I guess I should try, this week I'll start simulating for rewards
kurumuz#5695: something secret, it steers us 😳
supercharge19#7165: I am looking for a way to use a conversational bot to answer questions from data that is in a database. There are some table question answering models, but they are not able to hold a conversation, and I want it to seem as human as possible. Any ideas, guys?
iczero#8740: There are a few NLU libraries (rasa, snips) that can do that
alstroemeria313#1694: I omitted the return-to-gos entirely, except for the first one
𓅬 gabriel_syme 𓅬#3220: Hmm interesting. Not sure if that works for me, I'd like to be able to generate from intermediate states smh
alstroemeria313#1694: this should still work i think? if your reward comes only at the very end
alstroemeria313#1694: then you prompt with desired reward + your partial sequence
𓅬 gabriel_syme 𓅬#3220: So it's always the same reward with different state and action right
𓅬 gabriel_syme 𓅬#3220: Damn it I just really need to run it and get it over with lol
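For the sparse case being discussed, the standard return-to-go computation does put the same value at every step (a minimal sketch):

```python
def returns_to_go(rewards):
    # R_t = sum of rewards from step t to the end of the episode,
    # computed by a reverse cumulative sum.
    out = []
    total = 0.0
    for r in reversed(rewards):
        total += r
        out.append(total)
    return out[::-1]

# Sparse reward: only the final step is rewarded, so every step's
# return-to-go equals the episode return.
print(returns_to_go([0.0, 0.0, 0.0, 1.0]))  # [1.0, 1.0, 1.0, 1.0]
```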
APi#7462: Hi, are there pretrained character-level language models? To perform spelling correction. Pretrained multilingual models preferred
Louis#0144: ByT5
EricHallahan#1051: If you are asking if we have any models that are byte or codepoint-level, the answer is no. byT5 would probably be what you are looking for.
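For spelling correction specifically, ByT5's byte-level tokenization is the draw: there is no subword vocabulary for misspellings to trip over. A tiny sketch of the encoding (the +3 special-token offset is from memory, so verify against the actual `ByT5Tokenizer` before relying on it):

```python
def byt5_encode(text):
    # ByT5 has no learned vocabulary: the model consumes raw UTF-8 bytes.
    # Ids are byte values shifted by 3 to leave room for the pad/eos/unk
    # special tokens (assumption -- check the official tokenizer).
    return [b + 3 for b in text.encode("utf-8")]

print(byt5_encode("hi"))
```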
Louis#0144: It’s pretty good
Louis#0144: I don’t like T5 though :berk:
MasterScrat#6910: How come?
|