gwern#1782: there are a lot of impossibility and no-go theorems in cooperative game theory & mechanism design 😦
gwern#1782: ('thinking fast and slow' was great... but which parts? like half of it is probably non-replicable)
3dprint_the_world#6486: I love the D. The D Kahneman. 😜
3dprint_the_world#6486: I read it a long time ago, don't remember.
chilli#5665: this actually isn't that hard
AI_WAIFU#2844: Now do it in a circle
chirp#4545: https://twitter.com/ankesh_anand/status/1336520539171590145
gwern#1782: if you can get the money for a 'CERN for AI' why not just give it to OA? they're already a nonprofit
bmk#1476: someone needs to reply, "have you heard of this thing called eleutherai? they could definitely use a big budget"
gwern#1782: ("OpenAI. You've invented OpenAI.")
bmk#1476: i think the main complaint is "but OA isn't open anymore, we need the money for something actually open"
bmk#1476: to which i say
bmk#1476: "have you heard of this thing called eleutherai?"
3dprint_the_world#6486: How would a CERN like model work for AI though? People have a hard time agreeing what the right path is; a publicly funded project would probably just get mired in squabbles
3dprint_the_world#6486: Seems to me the reason CERN works is that pretty much everyone in fundamental physics agrees that 'smashing things together at high energies is a viable path forward'
3dprint_the_world#6486: although even that may not be true anymore. Hence why we may not get a bigger collider for the foreseeable future.
bmk#1476: 10T or bust
3dprint_the_world#6486: what's 10T
triggerhappygandi#0001: May not be true anymore? How?
triggerhappygandi#0001: You can _always_ smash stuff at higher energies
3dprint_the_world#6486: I mean energies that would be feasible with current technology
triggerhappygandi#0001: Ah.
3dprint_the_world#6486: particle accelerator technology has pretty much stagnated.
triggerhappygandi#0001: But last I checked they were working on incorporating the existing accelerator with a bigger one
triggerhappygandi#0001: Sadly not all things can grow at the pace of Technology.
3dprint_the_world#6486: there's been lots of discussions, I don't think they've committed to a path yet.
triggerhappygandi#0001: I heard it would be ready in like
triggerhappygandi#0001: 2050
bmk#1476: 10 trillion params
bmk#1476: literally gpt3 but even bigger
3dprint_the_world#6486: oh
bmk#1476: 100x tbe
triggerhappygandi#0001: Yes.
bmk#1476: i mean, it's the most promising approach
triggerhappygandi#0001: Google's next LM would be T6. T5 with an extra T for trillion
bmk#1476: unfortunately, i can neither confirm nor deny whether google already has a 1T model
3dprint_the_world#6486: @bmk is it though? I've heard arguments to the contrary. i.e. where are we going to get the data, etc.
bmk#1476: > where are we going to get the data
*cracks knuckles*
triggerhappygandi#0001: They probably do. They made a 600B translator just to flex the hardware@bmk
bmk#1476: *Pile v2*
bmk#1476: moe dont count
triggerhappygandi#0001: I know. It's not a language model but they literally made it to flex
triggerhappygandi#0001: They probably have a 10x GPT-3 almost ready.
bmk#1476: Whether it's a lm is irrelevant
3dprint_the_world#6486: seems to me that 'You will get new physics by going to higher energies' is way more obvious and agreed-upon than 'you can achieve AGI by scaling up models'
bmk#1476: It's moe which means 600B params is more like 20B normal params in effect
3dprint_the_world#6486: the latter seems highly debated and contested
bmk#1476: We will get new info from it, for sure
triggerhappygandi#0001: A TPU v4-4096 exists, outside of GCP. It's probably working on making GPT-3 look feasible
bmk#1476: A 10T model would tell us a lot about scaling laws and what happens near the crossover
bmk#1476: Nobody is claiming that it will solve agi any more than particle accelerators will give a theory of everything
3dprint_the_world#6486: but that's the thing though. Given a large enough particle accelerator, it *will* give us a theory of everything.
3dprint_the_world#6486: we know this with near-certainty. Or at least people agree on this with near-certainty.
3dprint_the_world#6486: of course it may have to be the size of the solar system....
triggerhappygandi#0001: But unless you employ very efficient VAEs you will never be close to agi
triggerhappygandi#0001: *the galaxy
bmk#1476: Given a solar system sized computer you can near certainly make agi too
triggerhappygandi#0001: Probably. But we don't know for sure.
triggerhappygandi#0001: For ToE we know it for sure
3dprint_the_world#6486: no I don't think the argument is equivalent.
triggerhappygandi#0001: If we can accelerate 2 particles to near Planck mass and boom, it will definitely solve a lot of problems
3dprint_the_world#6486: given a solar system sized computer, you'd still need to program it in some as-yet-unknown way to get AGI. Hutter showed that AGI isn't as simple as just 'throw more compute at it.' Even if the scaling hypothesis is true, *you still need way more data than you actually have*
3dprint_the_world#6486: but given a galaxy-sized particle accelerator, physicists can give you a theory of everything *today*
bmk#1476: I don't think I know enough about physics to know the validity of claims about physics accelerator toe
3dprint_the_world#6486: or, at the very least, eliminate a huge class of speculative theories of everything
bmk#1476: This is literally not the same thing
3dprint_the_world#6486: because in physics the problem isn't that we don't have a ToE. The problem is we have *too many* ToEs.
3dprint_the_world#6486: and as of right now we have no way of figuring out which one is the right one.
triggerhappygandi#0001: Larger accelerator = heavier particles = potential to reach Planck mass where general relativity and quantum mechanics probably combine@bmk
3dprint_the_world#6486: ^
bmk#1476: "probably"
triggerhappygandi#0001: I say that because I'm not an expert. But I'm willing to bet they do
bmk#1476: I don't see how there are any fewer potential unknown unknowns than agi
triggerhappygandi#0001: Planck mass basically means mass of a sand grain put into a proton
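For scale, the standard numbers (a back-of-the-envelope; the sand-grain comparison is order-of-magnitude only):
```latex
m_P = \sqrt{\frac{\hbar c}{G}} \approx 2.18\times10^{-8}\,\mathrm{kg} \approx 22\,\mu\mathrm{g},
\qquad
E_P = m_P c^2 \approx 1.22\times10^{19}\,\mathrm{GeV}
```
The LHC reaches about 1.3×10^4 GeV per collision, some 15 orders of magnitude short of the Planck energy, which is why the hypothetical colliders in this thread are solar-system-sized.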
triggerhappygandi#0001: There are. We know what we gotta do. The only bottleneck is the actual thing.
triggerhappygandi#0001: With AGI we don't know if simple scaling is the solution, or do we need far better unsupervised learning elements in it than we have now, or if the dimensionality reduction offered by AR/VAEs is even enough.
bmk#1476: I don't think I can really make any progress on this because i don't know anything about physics
bmk#1476: ~~Like what is a proton lol~~
triggerhappygandi#0001: Nucleus of Hydrogen atom
3dprint_the_world#6486: but all I'm saying is: if you gather a room full of physicists and ask, "Would y'all like a BIGGER COLLIDER??" they would unanimously say yes.
3dprint_the_world#6486: but if you gather a bunch of ML people, and ask them if they would like GPT-N, many would say "no, we want something else"
bmk#1476: Im sure everyone would say yes to a bigger computer too
3dprint_the_world#6486: sure but what do you run on the bigger computer
bmk#1476: I mean.. accelerators can run multiple different experiments too, no?
3dprint_the_world#6486: yeah well that's a better argument.
triggerhappygandi#0001: Surely not just put `n_heads = 1024` and `d_model = 65536` and train the GPT-3 again.
3dprint_the_world#6486: maybe we could have a huge computer and the GPT-N people get to train as big a LM as they want, and the other people get to do some experiments too.
3dprint_the_world#6486: that way everyone's happy.
3dprint_the_world#6486: although I don't think everyone is as optimistic about the GPT-N path as you are @bmk
3dprint_the_world#6486: especially when you think about the data bottleneck
bmk#1476: I literally spend my time doing nothing but collect data lol
3dprint_the_world#6486: personally I would like a more multi-modal approach
bmk#1476: There is a whole lotta data out there
3dprint_the_world#6486: yeah... and GPT-3 is trained on basically all of it 😂
bmk#1476: This is completely false
3dprint_the_world#6486: and still only achieves a compression factor of like 3
bmk#1476: Citation needed
bmk#1476: What do you mean by compression factor
bmk#1476: Perplexity?
3dprint_the_world#6486: ratio of input data / model size
bmk#1476: I can't even tell if you're trolling
3dprint_the_world#6486: anyway I would like a more multi-modal approach. Something that we could train on not just text but also videos, images, audio, etc.
bmk#1476: Ratio of input data to model size means literally nothing
triggerhappygandi#0001: Iirc Yann LeCun said something like "we live for 3 billion seconds at max. That's not enough time to learn simply from large amount of data"
triggerhappygandi#0001: Meta learning+very high levels of abstraction are needed. More than just raw data.
bmk#1476: 1. There exist multiple orders of magnitude more data than gpt3 was trained on.
3dprint_the_world#6486: @bmk seems like it's actually 2.8 https://lambdalabs.com/blog/demystifying-gpt-3/
triggerhappygandi#0001: Indeed. But is it accessible? @bmk
bmk#1476: 2. I have a serious grudge against that lambda labs blog post
3dprint_the_world#6486: ok fair enough
bmk#1476: What do you think i do for eleuther lol
triggerhappygandi#0001: Like sure internet has probably 10 exabytes of text in total
bmk#1476: The ratio of data to model size means absolutely nothing
bmk#1476: It's completely meaningless
bmk#1476: I can train a model of a single parameter to convergence on Pile
bmk#1476: Huzzah! 3000000:1!
bmk#1476: You can train any model on any amount of data
bmk#1476: Nobody decreed that gpt3 had to be trained for exactly 300b tokens
3dprint_the_world#6486: sure there's some hidden assumptions there, i.e. training and testing loss etc.
bmk#1476: Heck, i don't think it's even converged yet
3dprint_the_world#6486: but all else being equal, if you can train a smaller model on some data and get the same results, the model is 'better', no?
bmk#1476: Sure, but the key word is "same results"
3dprint_the_world#6486: I agree.
triggerhappygandi#0001: I had questions about it. Can you train GPT-4 on the same amount of data then without a problem?
bmk#1476: Gpt2 does not have better compression ratio or something
bmk#1476: Whatever the heck that is supposed to mean
bmk#1476: Like, same amount of data that gpt3 was trained on?
triggerhappygandi#0001: A 10T model trained on 300B tokens
bmk#1476: Sure, no problem
triggerhappygandi#0001: Yes
bmk#1476: You can do that
bmk#1476: It will have lower loss than gpt3
3dprint_the_world#6486: anyway as I said I would be more interested in multi-modal learning rather than just bigger and bigger LMs
triggerhappygandi#0001: But is it not possible that it can just memorize the entire thing
bmk#1476: There's a graph in gpt3 paper I'm too lazy to find rn
bmk#1476: Tokens trained vs loss
bmk#1476: Larger model loss is strictly lower than smaller model at any point in training
triggerhappygandi#0001: In autoencoders you have to actively stunt them in hidden layers so that they don't just learn the entire image. I thought something like that would affect autoregressive models too, if not in the exact same way
triggerhappygandi#0001: But if you say so. Then indeed the 10/1.5 and 300/175 is an irrelevant comparison
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/786104824776622090/training_curves.png
triggerhappygandi#0001: And that's not how you calculate bits/dim anyway
bmk#1476: This graph shows that larger models are more data efficient too
bmk#1476: And no memorization
bmk#1476: Anyways i have to sleep now
3dprint_the_world#6486: I don't think memorization is the problem. Actually I'm saying the opposite. OAI had to do a lot of work in curating their data.
3dprint_the_world#6486: To get those results.
triggerhappygandi#0001: Nats/token is `ln(p(x|x_prev_entire_sequence))` right?
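For reference, the usual definition carries a minus sign and an average over positions:
```latex
\text{nats/token} = -\frac{1}{T}\sum_{t=1}^{T} \ln p\left(x_t \mid x_{<t}\right)
```
Dividing by ln 2 gives bits/token.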
3dprint_the_world#6486: The point is that with a larger model, you need more data to make use of it.
3dprint_the_world#6486: Based on that graph you can see that the final performance of smaller models on the full dataset is much better than the performance of GPT-3 on a small amount of data.
3dprint_the_world#6486: and that's exactly what I'm saying
3dprint_the_world#6486: but anyway
joshy#1952: gpt neo when
Dal#7192: Hello all. While pondering AI structures I've come across a challenge. So far I've often mentally drawn a distinction between raw associations and set criteria (bounds). Is anyone aware of computational or logical structures that draw that distinction at their most fundamental? I'm considering throwing it out and treating sets as simple criteria associations to check against when determining eligibility
StellaAthena#3530: Can you say this in different words? I don’t understand the question.
Dal#7192: What is the most performant logical structure for defining a set and determining whether a given element belongs to any given set?
bmk#1476: Can you elaborate further
bmk#1476: Like describe a toy example
Dal#7192: I have a box of toys. The toys are spheres, cubes, blocks, or irregular, and blue, green, red, or yellow. What's the cheapest criteria I can apply to sort out only the ones that are yellow cubes?
Dal#7192: That's just a 2-dimensional example, I'm hoping to get a better grip on the arbitrary-dimensional case
bmk#1476: (toy example doesn't literally mean the example has to be of toys, just means a small simple to understand case)
Dal#7192: I know 😛
kindiana#1016: what's your definition of "cheap"?
Dal#7192: Cheap*est*
bmk#1476: I mention it because i am now even more confused
bmk#1476: Set membership can be done in O(1)
StellaAthena#3530: @Dal do you start out with the categories, or would you like to find the categories with the greatest explanatory power about the difference between objects
Dal#7192: If I have my own question right that's part of what I'm trying to figure out
StellaAthena#3530: The “category” I am talking about here is the fact you want to sort them into “yellow cubes” and “everything else”
Dal#7192: I'm basically trying to determine if categories can be treated as extensions of topologies or if they are separate mechanisms
Dal#7192: I think.
Dal#7192: Based on whichever mechanism produces the result at lower cost
StellaAthena#3530: So you have a pile of objects. You want to sort them into categories.
If you have a fixed categorization of them, you should inspect them one-by-one and sort them according to their category. This is what @bmk is suggesting.
If you know that there are structural differences between the objects but don’t know what categories you should divide them into, then this is what ML researchers call “clustering.” You know that objects have colors and shapes and some are electronic and some are not. You have three boxes and want to put “the most similar objects in each box.”
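A minimal sketch of the first case (fixed categories), where each check is an O(1) hash lookup; the `Toy` type and the criteria set are illustrative, not anything from the original question:
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Toy:
    color: str  # "blue", "green", "red", "yellow"
    shape: str  # "sphere", "cube", "block", "irregular"

def sort_toys(toys):
    """Split toys into 'yellow cubes' and 'everything else' in one O(n) pass."""
    wanted = {("yellow", "cube")}  # hash-set membership is O(1) per toy
    matches, rest = [], []
    for toy in toys:
        (matches if (toy.color, toy.shape) in wanted else rest).append(toy)
    return matches, rest

box = [Toy("yellow", "cube"), Toy("red", "sphere"), Toy("yellow", "block")]
yellow_cubes, others = sort_toys(box)
print(yellow_cubes)  # [Toy(color='yellow', shape='cube')]
```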
Dal#7192: That second part is very much relevant as well to what I'm exploring. I'll look into clustering. Thank you.
StellaAthena#3530: “k-nearest neighbors” is the name of the most popular clustering algorithm for “general problems”
StellaAthena#3530: What it does is assign each object a score based on the score of the k objects that are most similar to it. Picking larger k makes the algorithm slower but can make it more accurate. 5-NN or 7-NN tend to work pretty well, even on high dimensional data
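A minimal sketch of that with scikit-learn, assuming the objects have already been encoded as numeric feature vectors (the data here is made up):
```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# toy data: 2-D feature vectors (e.g. encoded color/shape) with known labels
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y = np.array([0, 0, 0, 1, 1, 1])

knn = KNeighborsClassifier(n_neighbors=5)  # the "k": score from the 5 most similar
knn.fit(X, y)
print(knn.predict([[0.5, 0.5], [5.5, 5.5]]))  # -> [0 1]
```
Larger `n_neighbors` means more distance computations per query (slower) but smoother and often more accurate decisions, as described above.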
Dal#7192: So the general model goes from raw data, to clustered data, to established categories that objects are sorted into one-by-one
Dal#7192: Do I have that roughly correct?
StellaAthena#3530: Yup
StellaAthena#3530: The process of discovering the categories (training the algorithm) sorts the training data into the categories as a side effect. But the model can then also be applied to new data to tell you what category it is
Dal#7192: Given that that seems a pretty fundamental part of any dynamic (Generally Intelligent) brain, do you know of anyone working on dedicated hardware for that?
Dal#7192: I suspect biology still has a lead on that one
andyljones#7746: sounds like kolmogorov complexity to me
StellaAthena#3530: It’s K-NN
Dal#7192: Thanks. I've been thinking a lot about symbolic storage and that fits very nicely in
chirp#4545: https://cdn.discordapp.com/attachments/729741769738158194/786450450508939274/unknown.png
chirp#4545: Currently watching an AWS presentation on large-scale training
bmk#1476: imagine training on aws
chirp#4545: apparently you can now spin up a 256-GPU cluster on demand https://cdn.discordapp.com/attachments/729741769738158194/786450693527568404/unknown.png
chirp#4545: well... if you have the GPUs provisioned already
bmk#1476: this would actually be kind of cool if we had literally unlimited amounts of cash
bmk#1476: 40gb per gpu, probably a bit faster than a v3 core each, and they're fucking gpus not tpus thank heavens
chirp#4545: the GPUs in this demo add up to $1000 per hour
bmk#1476: hm
chirp#4545: i'm guessing that's a lot?
bmk#1476: it is a significant amount considering our entire bankroll is like a few hundred dollars rn lol
chirp#4545: it's more than i make
chirp#4545: lol
bmk#1476: if any investors want to give us a few hundred k to a few million that would be kinda nice
chirp#4545: Did a bit of browsing and found an awesome presentation by OpenAI's head of compute: https://youtu.be/DLw-wC4zntw?t=390
- AI at scale is like HPC
- "And in our particular case of GPT-3 we saw a really big benefit from switching to an InfiniBand interconnect... we got well over a 50% improvement in performance, and just out of the box"
chirp#4545: https://cdn.discordapp.com/attachments/729741769738158194/786454562378743808/unknown.png
Imperishable_NEET#1969: There have been a few AI-skeptic op-eds lately. Is this Azure presentation good news?
chirp#4545: in the grand scheme of things, i don't think it moves the needle too much
chirp#4545: biggest takeaway i guess is that ai models are continuing to scale up, and that multiple cloud providers are investing in enabling it
Deleted User#0000: @Imperishable_NEET what were the op-eds saying, in broad strokes?
Deleted User#0000: Multi-power-law learning curves in humans https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4577530/
"Our results suggest at least two processes at work in individual learning curves: locally, a gradual, smooth improvement, with diminishing gains within a specific strategy, which is modeled well as a PL; and globally, a discrete sequence of strategy shifts, in which each strategy is better in the long term than the ones preceding it. The piecewise extension of the classic PL of practice has implications for both individual skill acquisition and theories of learning."
gwern#1782: handwavy analogy: NNs do the same thing, but when we train them on large datasets there are so many different subproblems that any individual strategy shift is invisible until we look at benchmarks and can observe various transitions where the model suddenly 'gets it', hence all the GPT curves where the overall loss is super-smooth but then specific things like BPE-arithmetic 'jumps' going from 1b to 175b etc
Deleted User#0000: Yes. I've also argued for that as an explanation for the power law learning curves (https://discord.com/channels/729741769192767510/747850033994662000/783531083017879583)
Deleted User#0000: (plus in some cases it can be made non-handwavy)
cfoster0#4356: Aside from arithmetic, the GPT curves are all pretty smooth, no?
Deleted User#0000: yeah but the gpt3 ones for certain skills show a relatively abrupt jump?
cfoster0#4356: Looking back I'm not seeing any
cfoster0#4356: At least not in the few shot context
Deleted User#0000: an interesting thing to study could be the connection between curriculum learning/transfer learning <> learning curves. If you have two tasks with certain learning curves, and they are completely unrelated, they would combine into a predictable learning curve when mixed together. But if they are related, the combined curve will probably look different (steeper).
Deleted User#0000: i think gwern was talking abou eg https://cdn.discordapp.com/attachments/729741769738158194/786658268348809246/unknown.png
cfoster0#4356: Yeah I think there are curriculum/active learning connections waiting to be found
cfoster0#4356: Right which is why I excepted arithmetic
Deleted User#0000: ah
Deleted User#0000: there's a bit of a jump here https://cdn.discordapp.com/attachments/729741769738158194/786658524456943686/unknown.png
Deleted User#0000: i mean its not like totally abrupt but
Deleted User#0000: its not power law either
cfoster0#4356: Maybe
Deleted User#0000: i find it interesting how curriculum learning seems to be more important for humans (i think?) than for (supervised learning) neural nets
Deleted User#0000: i havent seen much significant benefit of CL in SL
gwern#1782: also thinking of https://github.com/nyu-mll/pretraining-learning-curves/blob/main/When%20Do%20You%20Need%20Billions%20of%20Words%20of%20Pretraining%20Data.pdf
Deleted User#0000: thats a cool paper
now the active learning question would be: how few words could the LM learn syntactic/semantic knowledge from, if you chose the right sentences/documents?
gwern#1782: probably asymptotically much closer to linear. if you think of it in terms of datapoints introducing 'extreme' problems beyond the current range of solved problems ("NNs are just interpolation!"), as you go to n datapoints, the probability of getting a more extreme datapoint on the next sample is going to be 1/n
gwern#1782: whenever I look at data, it seems to fall very much into either (a) garbage or (b) super-redundant with a bazillion samples before it
Deleted User#0000: im not sure i fully get what you are saying, nor how it answers my question
gwern#1782: well, let me try it this way. imagine a simple model trying to predict the maximum of some integers where 'training' equals 'max of all points I've seen so far'. you can sample a random 'datapoint'. if you sample 100 integers, your max is say, 26. you can draw one more random datapoint. what is the probability you would learn anything from it changing your max? well, it's 1/100: it has to be bigger than the 100 before it. then once you sample 1000 integers, it's 1/1000 and so on. each new datapoint is only 1/nth likely to be relevant. as n gets bigger, each new datapoint is increasingly worthless. if you could prescreen your random samples with 'if new datapoint > current_max', you would change from 1/n to a constant: now every new datapoint does something useful, it bumps your max. (your active learning process might have to screen _n_ more datapoints before it finally finds an integer worth returning to the main training loop but we're thinking of scenarios where the prescreen is effectively free compared to 'training'.)
gwern#1782: given the manifold interpretation and how people like to say 'NNs are *just* interpolation!' you could imagine something similar. your bigass LM only learns from samples which are 'extreme' in some sense, extreme in a way not represented in the previous _n_ datapoints. but if you're just sampling random data, each new datapoint has to beat out the _n_ datapoints you already have, whereas you could instead be filtering them through a process to ensure that they do *something* new. this could be asymptotic, like in my example above, where you go from requiring n more data each time just to bump the max once, vs a constant single datapoint to bump the max
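A quick simulation of the max-of-integers toy model, just to make the 1/n claim concrete:
```python
import random

def p_improve(n, trials=2000, hi=10**9):
    """Estimate P(one more random draw beats the max of n previous draws)."""
    hits = 0
    for _ in range(trials):
        current_max = max(random.randint(0, hi) for _ in range(n))
        if random.randint(0, hi) > current_max:
            hits += 1
    return hits / trials

random.seed(0)
for n in (10, 100, 1000):
    print(n, p_improve(n))  # ~1/(n+1): the new draw must beat all n before it
```
With the prescreen ('if new datapoint > current_max'), every sample that reaches the training loop is a guaranteed improvement; the 1/n factor moves into the cheap screening pass.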
mgostIH#0245: I envision it a bit like bayesian optimization
mgostIH#0245: Except under the iid assumption you can't choose what points to train on
gwern#1782: active learning is just bayesian optimization for minimizing a final learning-related loss 🙂
mgostIH#0245: I assume that humans cleaning the data can be thought of a way of exploring more different regions of the space to learn
Sid#2121: is there *any* research on active learning for generative models? It seems like low hanging fruit
Sid#2121: is it just too compute intensive to pre-screen every training item?
Sid#2121: presumably the screening / filtering process would require a forward pass through the model anyway, to measure its uncertainty in some way, at which point you might as well be training on the data
gwern#1782: oh, there's some. there was a nice paper on improving biggan by filtering. embarrassingly, their strategy was mostly to throw out unusual images... it seems even biggan still struggles with underfitting/overfitting and generalization
gwern#1782: (pytorch, unfortunately, so we can't use it (yet))
gwern#1782: (it also kinda cheated by turning out to use an imagenet classifier to get embeddings)
cfoster0#4356: Related: https://arxiv.org/abs/2010.06682
cfoster0#4356: In short, for contrastive learning, hard negatives are both necessary and sufficient for full accuracy
gwern#1782: there's also a lot of observations about gradients too - you can throw away most of a gradient. good trick for reducing bandwidth
cfoster0#4356: So in principle you could throw out all the garbage and easy ones without loss of accuracy
cfoster0#4356: Yeah, I'd heard through the grapevine that the sign of the gradient is enough for adjusting parameters?
gwern#1782: well, that's what I meant by embarassingly - you'd think you'd want to throw out the *easy* ones (as redundant & waste of compute) but what that particular paper does to improve biggan is throw out the *hard* ones
gwern#1782: which is unflattering to biggan, since it implies it is too unstable and learning badly to benefit from the most informative hard examples
cfoster0#4356: Ouch
cfoster0#4356: I see
gwern#1782: GANs are not a solved problem, needless to say
cfoster0#4356: The paper I posted also did find that the *hardest* of hard negatives are actually detrimental
gwern#1782: sure, but that seems kinda reasonable to me. a lot of those contrastive examples might very well be garbage. what if the random subcrop leaves out the key features?
gwern#1782: in contrast, this biggan paper is just operating on whole images. labels can be garbage, but imagenet's images are generally fine qua images
gwern#1782: (I think they also show some of the dropped images, and they're fine. they're hard, perhaps, like a dog shot from an unusual angle, but they're certainly not random static noise or corrupted JPGs)
Deleted User#0000: the need for negative sampling is being challenged at the moment in self-supervised learning land https://arxiv.org/abs/2011.10566
Deleted User#0000: tldr: they found a simple siamese network will not collapse if you just stop gradient one of the two branches
Deleted User#0000: no one knows why
StellaAthena#3530: *exactly* one?
Deleted User#0000: perhaps they could give it the hard examples back after its learnt well the easy ones. Then it'd be like one form of curriculum
Deleted User#0000: or gradually interleave them among the easy ones later in training
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/786688560417472512/Screenshot_from_2020-12-10_12-19-11.png
gwern#1782: yes, exactly one. if you stop gradient of both, obviously neither can learn, and if you train both simultaneously, they just collapse as expected
Deleted User#0000: what gwern said
Deleted User#0000: its a big surprise to everyone in the field
gwern#1782: it's pretty cool. my personal interpretation was that it's sort of adversarial. when one half is frozen, the other has to 'fight it' and find signal in order to make the greedy 1-step improvement via gradient descent. if you have two trainable branches simultaneously, they can 'conspire' to make the problem easy by updating the same way and collapsing towards each other. but if one is frozen, it can't 'cooperate' and remains rigid, so the other has to find an independent way to make progress to reduce its loss at all. you vs the clone of yourself: "Brother! We are the same! There's no need to fight, we can both win! We just need to cooperate and always answer '1'!" CLONE: "Cease your lies. There can only be one." schmidhuber would surely make an analogy to prediction-minimization with rival NNs
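A minimal sketch of the stop-gradient trick from that paper (SimSiam, arXiv 2011.10566), paraphrasing its pseudocode; `f` (encoder) and `h` (prediction head) are stand-ins for whatever backbone you'd actually use:
```python
import torch.nn.functional as F

def simsiam_loss(f, h, x1, x2):
    """Symmetrized negative cosine similarity; .detach() is the stop-gradient."""
    z1, z2 = f(x1), f(x2)  # encoder outputs for two augmented views
    p1, p2 = h(z1), h(z2)  # predictions from the trainable branch
    return -(F.cosine_similarity(p1, z2.detach()).mean()
             + F.cosine_similarity(p2, z1.detach()).mean()) / 2
```
Remove the `.detach()` calls and both branches are free to 'cooperate' and collapse to a constant output, per the interpretation above.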
Deleted User#0000: a lot of companies spend a lot of time designing negative samples
Deleted User#0000: not familiar with this literature. By collapse u mean it just makes everything have high similarity?
gwern#1782: yeah
Deleted User#0000: yea
Deleted User#0000: weird that that trick works then
Deleted User#0000: yea, it doesn't make sense
Deleted User#0000: esp since NNs are known to take shortcuts
Deleted User#0000: anyways, kind of off topic
Deleted User#0000: but that's what's going on there in self-supervised learning research land
Deleted User#0000: i should probably revisit balanced consistency regularization for GANs and do a stop gradient for one of the branches
Deleted User#0000: it didn't work for me the last time i tried it
gwern#1782: mm. they don't claim it's superior, just that it's simpler. and I'm not sure I'd expect it to work in D/G setups: the G helps keep D honest, after all, D doesn't need an 'internal opponent' like a frozen copy
Deleted User#0000: yea true, byol is still better (which also doesn't need negative sampling, but requires momentum on one branch)
Deleted User#0000: cool, i like this interpretation
bmk#1476: @carro made me this✌👑✌👑✌ wen C4
Sid#2121: we could always try and rebuild C4 with @-Archivist 's compute
LaDoze#9817: Hello, I am not an expert in the field, but I am interested in GPT-neo. By the way, I'm looking for a tool to create blog articles based on existing articles (e.g. on Wikipedia). Could someone help me to find this tool and use it? If you have good information about these, I am willing to pay you. Contact me privately if you have an idea, thank you in advance ! 🙂
gwern#1782: (that sounds rather spammy)
Louis#0144: Has anyone gotten literally any of the huggingface examples to ever run...
Louis#0144: They all stall on colab
Louis#0144: Like most of the time
Louis#0144: Seq2seq and finetune stall 100% of the time from a fresh transformers installation
Louis#0144: https://stackoverflow.com/questions/64635072/huggingface-transformers-run-clm-py-stops-early
Louis#0144: This
Louis#0144: Even with a sufficiently small batch size
Louis#0144: Beyond that, if you use run_clm with PG19, it has a memory leak
Louis#0144: And doesn’t progress past initialization
LaDoze#9817: I really don't want to spam and I'm really looking for a tool like this. I'm just saying that you'll be more interested in providing it to me if you have a reward in return. If I can get it for free it's obviously better.
Louis#0144: lol you aren’t going to get far unless you are willing to wait like everyone or contribute
Louis#0144: It takes time
Louis#0144: It’s gonna be many more months before GPT neo is ready
LaDoze#9817: I'm not necessarily talking about gpt neo. gpt-2 or whatever can do the trick... As long as it reformulates articles correctly. Do you have this at hand?
chilli#5665: Lol
StellaAthena#3530: @LaDoze welcome to EleutherAI! While we welcome people of all levels, we are a research group and not a place to hire people or get introductory-level help. If you want to learn how to obtain GPT-2, I recommend r/machinelearning, quora, or www.google.com
If you are interested in talking about machine learning research we are more than happy to do so.
chilli#5665: Uh, I would suggest r/learnmachinelearning
tsongz#1949: @Louis no. Especially on TPUs. It took nearly an hour for the first step. https://cdn.discordapp.com/attachments/729741769738158194/786739884894584882/unknown.png
Louis#0144: gggggg
tsongz#1949: I've literally given up ever trying to use their trainers. I only use their models and tokenizers. 😐
truongsinh#6988: repost from #deleted-channel :
Hi Folks, so I learned about EleutherAI from Stella. A little about myself: my current job pays really well, but it does not have any “publishing paper” part. I'm currently pursuing MSCS@GaTech, ML specialization. I have a decade of software engineering experience, but stopped writing papers when I graduated. I'd like to get involved, but don't know how or where to start. I am aware that you guys are focusing on GPTNeo and The Pile at the moment...
Aran Komatsuzaki#5714: We're also focusing on attracting yet another GT students 🙂
Louis#0144: LMAO
bmk#1476: Awesome! We can use a lot of help with putting together papers, great to have you on board
Louis#0144: give him the tag!
bmk#1476: Enorangificationing complete
Louis#0144: @truongsinh
Louis#0144: ty
bmk#1476: @truongsinh anyways, the tldr of our current state of affairs:
gptneo codebase is mostly done, but there are a few bits that still need to be done, mostly relating to getting evaluation working, and also some optimization. the gptneo code can train models up to about 10B params and we plan on running a bunch of experiments using it and writing papers, however it is fundamentally limited in terms of how big we can scale.
the pile v1 is almost done and we're in the home stretch (freeze on major changes begins on the 21st, and we post on arxiv by the end of the month) but we could still use some help with doing dataset analysis; if you can think of an analysis we should run but aren't currently and implement and run it before the 21st, you can get a minor authorship on the paper.
truongsinh#6988: oh, btw, after classes in Gatech, R Markdown is my latest writing tool, and I love it
Louis#0144: ....
Louis#0144: what a masochist
bmk#1476: as for planned projects:
we are going to be working on a successor to gptneo in parallel with using gptneo for various experiments. not much is set in stone yet but if you're interested i can elaborate on what i'm thinking (it will need a shitload of engineering effort)
we're also working on a v2 successor to pile v1, again it's not entirely set in stone but we have some early stages in motion. again, i can elaborate is youre interested, and it will also need a lot of engineering effort
Louis#0144: Tell me a story like im five is DONE
Louis#0144: automatic plot generation is complete
Louis#0144: Now just to write the paper
Louis#0144: Im so happy
Louis#0144: ngl… I didnt realize how close I was to finishing LMAO
…kinda took me by surprise when everything just started working
I had an “oh yeah im done… cool.” moment
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/786747499087134720/screen_shot_2020-12-10_at_7.08.49_pm.png
Louis#0144: but yeah it writes stories
Louis#0144: with little to no degradation
Louis#0144: it can go for hundreds of sentences
Louis#0144: plot holes start showing up by sentence 15
Louis#0144: but Im working on extending that to roughly sentence 100
Louis#0144: any how
Louis#0144: @truongsinh whos lab are you in?
tsongz#1949: that's pretty cool, so it's contextually coherent pretty consistently
Louis#0144: yeah!
Louis#0144: because it can interface with a symbolic reasoner
Louis#0144: and it writes new sentences by doing information extraction over reference stories
Louis#0144: so it tries to chain plot points together that it knows are coherent
Louis#0144: rather than write its own
Louis#0144: it has a database of a few thousand plot points which is more than enough for most stories
chilli#5665: Can I do this too? :P
bmk#1476: sure
StellaAthena#3530: @truongsinh glad you made it :)
If you ~~don’t hate yourself~~ aren’t into engineering, there is also science to be done (though it doesn’t involve making neat guns).
@bmk and I are doing a quick project to enable the study of neural scaling laws without spending years on computations.
I also have a RL project that I want to spin up, building off a recent ICML paper: http://proceedings.mlr.press/v119/reddy20a/reddy20a.pdf https://github.com/rddy/ReQueST
bmk#1476: we're always looking for more helpers
chilli#5665: So you're not looking for more datasets
Louis#0144: dont encourage them
Louis#0144: that shit is like
chilli#5665: Just analyses
Louis#0144: heroin
bmk#1476: we are looking for more datasets, but only for v2
bmk#1476: no new datasets will be added to v1
Louis#0144: "just give me another hit of those sweet GBs"
chilli#5665: Where can I see what's already been done?
bmk#1476: for v1?
chilli#5665: Yeah
Louis#0144: 💉
bmk#1476: https://www.overleaf.com/read/wgmnfqvzckjz
bmk#1476: paper draft
tsongz#1949: I have a dataset but I don't think google would be happy
bmk#1476: what is it
truongsinh#6988: i'm not in any Lab (if you consider Eleuther AI a lab). I'm still checking GPTNeo and The Pile (if you mean these are the 2 labs)
Louis#0144: Oh I assumed youre in a lab at GT
Louis#0144: im in riedl's lab
tsongz#1949: 1M questions and answers collected from google searches
bmk#1476: gptneo and pile are projects
bmk#1476: that sounds tiny lol
tsongz#1949: throttled
chilli#5665: I have a dataset
bmk#1476: threshold for pile v2 inclusion is **100 GB**, unless you can make a damn good case for inclusion
chilli#5665: The first trillion numbers of pi
bmk#1476: no
truongsinh#6988: no, I'm just a student at Gatech, haven't joined any Prof's lab or VIP yet. Only TA for SDP next sem.
Louis#0144: @bmk ok I have this 2kb text document that my cat wrote when she stepped on my KB
bmk#1476: no
Louis#0144: loser
bmk#1476: no u
Louis#0144: "yeah our LM is cat powered"
Louis#0144: I wonder what % of data in the pile is written by nonhuman mammals
bmk#1476: after deliberation i have decided to target 100TB instead of 10TB total size for v2, mostly because mC4 is already bigger than 20TB and we need to be The Biggest
bmk#1476: and because given the help of archivist i believe we can get at least 30TB ish from CC even post filtering
Deleted User#0000: the georgia tech takeover continues
Louis#0144: join us
chilli#5665: is this going to be submitted to a conference?
chilli#5665: based off of the 8 page limit
bmk#1476: yes, we will make a trimmed down summary version of the paper
bmk#1476: but the full version will go on arxiv
Louis#0144: the paper is actually 100TB
Louis#0144: :^)
AI_WAIFU#2844: Yo can we throw in a few GBs of raw program output into the pile v2?
bmk#1476: i'm sorry, you must be mistaken
bmk#1476: did you mean a few *hundred* GB
AI_WAIFU#2844: :ultrazucc:
AI_WAIFU#2844: Also are we going to throw in a few hundred genomes in there?
bmk#1476: Absolutely
bmk#1476: The goal is set, 100TB
cfoster0#4356: Rly
cfoster0#4356: We gonna ship people boxes of hard drives? 😅
bmk#1476: mC4 is 22TB, we gotta do better than that
cfoster0#4356: truly a pole measuring contest
bmk#1476: Well, compression, so 30TB
bmk#1476: That's like only 3 hard drives
bmk#1476: I have more than 30TB sitting on my desk at this moment
cfoster0#4356: This is why people think we're nuts
AI_WAIFU#2844: Do we have youtube video transcripts in there?
bmk#1476: That's already in v1
bmk#1476: Oh, also we are going to try and scrape All Of Github
bmk#1476: Every single public repo
StellaAthena#3530: That’s in V1
AI_WAIFU#2844: That's dangerous.
bmk#1476: That's gotta be a handful of tb
bmk#1476: We already have 600GB of github sitting around, but that's peanuts
bmk#1476: We can go b i g g e r
cognomen#6297: can you call it the hoard
bmk#1476: I've been considering the names Pile v2, Mound, Heap, and Stack
aquajet#7800: Heap and stack imply order
bmk#1476: Mound then?
aquajet#7800: Yeah
AI_WAIFU#2844: with v3 as the mountain
bmk#1476: v3: 1PB or bust
AI_WAIFU#2844: missed opportunity to name different iterations after different mountains across the solar system
triggerhappygandi#0001: Man. My piss poor pc won't even accept the name of v4 without warning about storage running out.
3dprint_the_world#6486: surely 99% of github is just copy-pasted code
bmk#1476: we can dedupe
3dprint_the_world#6486: yeah but I mean I suspect the actual size is much smaller than the apparent size
bmk#1476: ah
bmk#1476: still, it's pretty honking large
triggerhappygandi#0001: _Where does this hunger for data end? _
gwern#1782: at a BPC loss of ~0.6
olives ❀#2305: ayyy! hi guys!
olives ❀#2305: the custom-domain website with its custom scrollbar is cool, but why use Google Sites
olives ❀#2305: Google Apps Script doesn't even support the new version of gsite
olives ❀#2305: also, what is this `GPT3_XL` i hear?
olives ❀#2305: a pretrained gpt3 model?
StellaAthena#3530: Hi! We used Google sites because none of us are web devs and I could make the site in the time it takes to learn a framework
olives ❀#2305: woah you sound like a professional
StellaAthena#3530: Coming not-so-soon to a torrent near you, yes.
StellaAthena#3530: A professional what?
olives ❀#2305: > not-so-
😢
truongsinh#6988: can you give more information about the project building off ReQueST?
olives ❀#2305: Your professionalism makes you sound like the CEO/CTO/C*O of this company, in which case I should not be wasting your time.
StellaAthena#3530: It's a leaderboard for LMs trained on the Pile. ReQueST's source code is free for anyone to use, so we modified their leaderboard instead of making our own.
StellaAthena#3530: It's 10 pm and I'm chatting with friends on discord while playing *Magic: the Gathering* in another tab and drinking. This is what I do in my free time.
olives ❀#2305: is that what pros do? 😍
StellaAthena#3530: Relatedly, we are not a company. We don't have leaders.
truongsinh#6988: Let me rephrase to see whether I understand correctly, so the new project is to create the leaderboard for language models, and the scores are harvested from human feedback. Out of curiosity, how do we incentivize humans to give valuable/meaningful feedback :-?
olives ❀#2305: Then who is `@Daj` ?
truongsinh#6988: (off topic, I wish Discord has thread feature, like Slack)
StellaAthena#3530: We are publishing the largest open source English language modeling dataset. We don't need to do anything to incentivize people to train models on it.
StellaAthena#3530: Daj, BMK, and Sid are the people who founded the channel. They are "in charge" in the sense of "people listen to them when they have good ideas" rather than "they are boss men that we must obey"
truongsinh#6988: yeap, agree with that part, though maybe what i missed is how "train models" can give EleutherAI feedback. Or maybe, what is the reward function?
truongsinh#6988: or actually, if my questions seems noobs, should we move this discussion to another channel ?
StellaAthena#3530: @truongsinh I don't understand. Leaderboards are a standard feature of ML datasets. We aren't doing this for feedback.
StellaAthena#3530: The thing that gets scored is their model's performance on a held-out test dataset, if that's what you're after
truongsinh#6988: let me ask it in 2 different ways,
- this project is a RL, so... what is the reward function in this case, or
- we have USPTO and Wikipedia(en) datasets, how USPTO can gain points to be in higher rank than Wikipedia, and then what Wikipedia can do to gain points and reclaim the rank
truongsinh#6988: oh yeah, sorry didn't see this before hit "send" button 😄
cfoster0#4356: (aside: here and #deleted-channel are usually good places for "noob" questions)
StellaAthena#3530: The leaderboard is not a project and it’s not RL. People train models on our data. They tell us about it. We post the results so others who use the data can compare against them.
truongsinh#6988: damn, I think I misunderstood the whole thing
> I also have a RL project that I want to spin up, building off a recent ICML paper: http://proceedings.mlr.press/v119/reddy20a/reddy20a.pdf
>
is different from ReQueST 🤦
StellaAthena#3530: 100% completely unrelated
StellaAthena#3530: The leaderboard serves the exact same purpose as this website: https://paperswithcode.com/task/object-detection
bmk#1476: @truongsinh have you read the abstract and intro of the pile paper yet? that probably explains a lot of potential questions
truongsinh#6988: sorry I wasted 20 minutes of your time 😅
StellaAthena#3530: Again, this is being triple tasked with booze and *Magic*. My time is already “wasted” (aka spent doing things that make me happy)
StellaAthena#3530: Questions are good
StellaAthena#3530: Keep asking them.
olives ❀#2305: what is machine learning?
olives ❀#2305: someone is typing please dont actually answer that stupid question
StellaAthena#3530: Okay, how about questions *about research*
truongsinh#6988: hmm, reading that paper's abstract, it sounds similar to Inverse reinforcement learning, in which reward function (among others) is learned
StellaAthena#3530: Yes it is
truongsinh#6988: cool, i think this is really interesting. when/if it starts, let's see how I can help.
truongsinh#6988: meanwhile, I think I can deep dive into that paper, and some of its references
StellaAthena#3530: Step 1 would be to read the paper and get comfortable with their code
StellaAthena#3530: The main thing I've been thinking about is a slight modification to their framework: they are basically saying "do IRL except demonstrations have an associated cost." I think an interesting twist on this is to allow *partial queries*. To use the example of learning the right route, instead of asking "is this a good route?" you can ask "is this a good route from a to b?" where the cost is proportional to the distance between a and b.
StellaAthena#3530: I think this framework is very applicable to real-world scenarios and has the potential to be highly interesting, mathematically.
Deleted User#0000: >chungus.webp
um
bmk#1476: it is exactly as it appears
Deleted User#0000: i love that you included the enron emails
Sid#2121: ^
Sid#2121: woops responding in the wrong channel
Louis#0144: god I love writing drafts https://cdn.discordapp.com/attachments/729741769738158194/787037867620302888/Screen_Shot_2020-12-11_at_2.27.22_PM.png
Louis#0144: trying to figure out a good name for my LM
Louis#0144: thoughts on EDGAR?
Louis#0144: ELI5 Drama Generation and Recall
gwern#1782: EDGAR is already the SEC thing. very confusing
gwern#1782: ie one of the most used government databases in the world
Louis#0144: pfft
Louis#0144: who cares about SEC
Louis#0144: smh
gwern#1782: it's also a common name to begin with. you may not be interested in edgar but edgar may be interested in you given his prolificness: https://www.google.com/search?num=100&q=EDGAR%20site%3Aarxiv%2Eorg
gwern#1782: _ventures a suggestion that using a name with 12k hits already on arxiv alone may not be the best choice from a SEO or memorability perspective_
Louis#0144: hm
Louis#0144: I need a horror novelist name is the thing
Louis#0144: bc it writes horror
Louis#0144: like my last model was named gary
Louis#0144: im not one for SEO
gwern#1782: _has never heard of 'gary'. 🤔_
bmk#1476: maybe call it OORT
bmk#1476: find something that fits the backronym lol
gwern#1782: 'oh yeah, Gary. The snail from Spongebob, right?'
bmk#1476: I mean, I know of a certain Gary of the Marcus variety...
gwern#1782: are we sure they're not the same? his thoughts certainly do evolve at the speed of a snail
bmk#1476: @Louis Ominous Ordered Routing Transformer
Louis#0144: LMAOOO
asparagui#6391: you could run them on a distributed kubernetes cluster
asparagui#6391: then it would be the oort cloud
Louis#0144: O fuck
olives ❀#2305: oh so swearing is allowed in this server? `@Louis` is not banned yet
Louis#0144: im special
Louis#0144: tyvm
Louis#0144: also lmao no swearing what is this like a server for middleschoolers?
olives ❀#2305: why because you are radioactive?
Louis#0144: have you SEEN the other channels
chilli#5665: The fuck
AI_WAIFU#2844: I think there's an implicit norm that swearing is lazy, and you should be creative with your profanity. If there isn't one I've just established it.
bmk#1476: ~~fuck norms, my homies use general vector spaces~~
StellaAthena#3530: Swearing is like sex. Most people finish in seconds but it’s far more satisfying when you spend 30 minutes doing it properly
olives ❀#2305: WHAT
olives ❀#2305: UM
gwern#1782: swearing is like sex. if you do it right, it is a powerful source of motivation and might, but if you foreswear your geas, it will destroy your manna and leave you a pitiful prey of a losel
olives ❀#2305: I'm going to use GPT-3 to predict using that as the prompt
olives ❀#2305: 🙂
acertain#1646: what's sota for tiny text classification models? LSTMs?
gwern#1782: for classification, I'd think it'd be CNNs |
triggerhappygandi#0001: It would probably be BERT.
triggerhappygandi#0001: :zucc:
gwern#1782: tiny by normal-people standards, not eleutherai standards, acertain doubtless meant
AI_WAIFU#2844: Are there any studies looking at what happens when you hold compute and data constant but lower the number of parameters?
Aran Komatsuzaki#5714: yes
Aran Komatsuzaki#5714: my paper did it, but so did T5 paper and openai scaling paper.
AI_WAIFU#2844: when holding both compute *and* data constant?
AI_WAIFU#2844: Also link to your paper?
Aran Komatsuzaki#5714: the latter two are more thoroughly studied, so you don't have to read my paper.
Aran Komatsuzaki#5714: let me find the relevant parts.
Aran Komatsuzaki#5714: take a look at this image from kaplan's paper. pick a certain point from x-axis. https://cdn.discordapp.com/attachments/729741769738158194/787366675531038820/fig1.png
Aran Komatsuzaki#5714: ok this shows that loss decreases as you increase the number of parameters for a given compute budget and data size, but you want to know more, right?
AI_WAIFU#2844: I want to know what happens when you keep the number of FLOPS and data points constant, but change the number of parameters. Pretty sure that with Kaplan's paper, as you move up and down the x-axis, the amount of data changes along with the number of parameters.
AI_WAIFU#2844: Specifically, I'm interested in what happens when you move across the spectrum from neural GPUs/convolutions, all the way to MoE
Aran Komatsuzaki#5714: no in this case the size of dataset used is the same in this graph
Aran Komatsuzaki#5714: do you mean the number of tokens processed?
AI_WAIFU#2844: I guess? Is there a difference? The number of tokens processed is only the same at the end of those training curves right?
AI_WAIFU#2844: If you pick a vertical line in that graph, the number of tokens processed will be inversely proportional to the number of params.
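One way to make that precise, using the standard dense-transformer approximation from the scaling-law papers (training compute ≈ 6 × params × tokens):
```latex
C \approx 6ND \quad\Longrightarrow\quad D \approx \frac{C}{6N} \propto \frac{1}{N} \quad \text{at fixed compute } C
```
so halving the parameter count at constant FLOPs roughly doubles the tokens processed.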
Aran Komatsuzaki#5714: ah right near at the end of the curves
Aran Komatsuzaki#5714: sorry i guess i'm too sleepy to see that lol
Aran Komatsuzaki#5714: i'll go to bed
AI_WAIFU#2844: yeah, what I'm looking for is how loss changes with increased parameter reuse.
Louis#0144: hello ladies
Louis#0144: whats SOTA on whole word masked LMs?
Louis#0144: Electra?
Deleted User#0000: Yea prob electra
Deleted User#0000: It'll get you very close at least
Louis#0144: I need it to do coreference resolution
Louis#0144: Like identifying pronouns to who those pronouns refer to
Louis#0144: so I mask out the pronoun and Im using whole word masking to predict the name from a set of names I identified in the text
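A minimal sketch of that setup with a whole-word-masking BERT (the model name is real; the scoring scheme is just one plausible reading of the approach, and it only handles single-token names):
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "bert-large-uncased-whole-word-masking"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

text = "Alice met Bob at the station. [MASK] was late."
candidates = ["alice", "bob"]  # names already extracted from the passage

inputs = tok(text, return_tensors="pt")
mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]
scores = {c: logits[tok.convert_tokens_to_ids(c)].item() for c in candidates}
print(max(scores, key=scores.get))  # highest-scoring candidate = predicted referent
```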
Louis#0144: it doesnt rly work well
Louis#0144: I feel like there must be a better way
Deleted User#0000: https://www.aclweb.org/anthology/2020.emnlp-main.687.pdf
Deleted User#0000: there's a lot of pretraining objectives out there
Deleted User#0000: ur avatar is gross
Louis#0144: LMAO
Louis#0144: ok true
Louis#0144: I should change it
Louis#0144: I need to find like a pic of a goose in a lab coat
gwern#1782: 'who would win, the entire ML community or one naughty goose'
Louis#0144: ok I asked an arts major to draw one for me
Louis#0144: shes my bff from uni
Louis#0144: She owes me a favour anyway
gwern#1782: just steal some _untitled goose game_ fanart imo
Louis#0144: no
gwern#1782: no u
Louis#0144: all non-Canadian geese are STRICTLY inferior
Louis#0144: Canadian geese > all other geese
Louis#0144: wow this became goose racism really fast...
gwern#1782: you think the fanartists care? you all look the same to them
Louis#0144: 0 to 60 in 2 discord messages
Louis#0144: is there a good way to batch the perplexity computation
Louis#0144: like i know it can obv be done
Louis#0144: but is there a helper function that would save headache
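There's no standard helper for this, but a hand-rolled batched version is short. A sketch, assuming a HuggingFace-style causal LM that returns `.logits` and right-padded batches:
```python
import torch
import torch.nn.functional as F

def batch_perplexity(model, input_ids, attention_mask):
    """Per-sequence perplexity over a padded batch of token ids."""
    with torch.no_grad():
        logits = model(input_ids, attention_mask=attention_mask).logits
    # shift so token t is predicted from tokens < t
    logits, labels = logits[:, :-1], input_ids[:, 1:]
    mask = attention_mask[:, 1:].float()
    nll = F.cross_entropy(logits.transpose(1, 2), labels, reduction="none")
    nll = (nll * mask).sum(dim=1) / mask.sum(dim=1)  # mean nats/token per row
    return nll.exp()  # perplexity per sequence
```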
Deleted User#0000: https://arxiv.org/pdf/2012.03837.pdf PARALLEL TRAINING OF DEEP NETWORKS WITH LOCAL UPDATES
AI_WAIFU#2844: > Figure 4: Total compute cost vs. serial compute cost (walltime) Pareto curves computed from
> validation loss for a 6M parameter parameter transformer. We find that for high loss cutoffs (e.g.
> 5.0), significant speedups (around 4×) can be obtained. For cutoffs of 4.0, and 3.9 speedups (around
> 2×) are still possible, but only with the overlapping method. For even lower cut offs, 3.8, we find the
> majority of our models are unable to obtain this loss. In the bottom table we show the best achieved
> validation loss for each training method maximized across all hyperparameters.
olives ❀#2305: i made a question-answering bot https://pastebin.com/XuMjfQba
olives ❀#2305: 😄
tin481#8570: Unfortunately, local in the sense of "within a layer", so same FLOPs. Method is designed for if you have ungodly amounts of compute and are willing to trade in for a real time speedup.
gwern#1782: well, if you're training on a GPU cluster where bandwidth/overhead is your problem...
Deleted User#0000: yeah perhaps it could also make crowdsourced/distributed training a bit easier?
chirp#4545: so I've wondered for a while how much money GPT-3 is making
just found this data point: there's a GPT-3-based startup that now does $10k MRR, after just a few months
[1:10] https://www.listennotes.com/podcasts/building-fires/19-paul-yacoubian-co-founder-ComtZIYLlnu/
gwern#1782: wonder if they're still running at a loss like AID
kindiana#1016: copywriting seems to have higher willingness to pay per token
frank cilantro#9153: does anyone know of any, say, books, on more recent deep learning techniques, in particular transformers, new types of autoencoders, GANs, etc?
frank cilantro#9153: or should i just go to the papers
frank cilantro#9153: while slowly breadth-first-searching the relevant citations to build up background..
StellaAthena#3530: @frank cilantro Yeah just read the research
frank cilantro#9153: is there a nice published sequence of papers to read
frank cilantro#9153: i have a tendency of being scattershot and forgetful about what i've read
frank cilantro#9153: guess i oughta get my elbows greasy haha
bmk#1476: 1. are you looking for math too or are you good there
2. are you looking for more broad stuff on DL or stuff very specific to what we talk about around here
frank cilantro#9153: 1. math's no problem, and 2. i have been using bert in particular without much understanding of how it works, but GPT type stuff is also interesting to me
bmk#1476: here's a list of stuff i compiled quite a while back for papers related to our particular direction, not sure if this is what you're looking for
http://jalammar.github.io/illustrated-transformer/
https://arxiv.org/abs/1706.03762
https://arxiv.org/abs/1811.02084
https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
https://arxiv.org/abs/2005.14165
https://arxiv.org/abs/1811.06965
https://arxiv.org/abs/2006.16668
frank cilantro#9153: looks like a great launching off point actually
frank cilantro#9153: tyvm
frank cilantro#9153: putting this in me notes :^)
bmk#1476: some more recent additions:
https://arxiv.org/abs/2001.08361
https://arxiv.org/abs/2010.14701
frank cilantro#9153: ty 🙂
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/787517194866655263/image0.jpg
bmk#1476: Perfect
bmk#1476: We need to publish something with the acronym GOOSE eventually
olives ❀#2305: https://i.redd.it/6b9fuyqaxub51.png
olives ❀#2305: General Autonomous Object Organization - Second Edition
Louis#0144: Generalized Objection based Objectives Storytelling Engine
Louis#0144: or Generative
bmk#1476: This is actually perfect lol
Louis#0144: LMAO
olives ❀#2305: k lets make it
Louis#0144: too late I am dming my advisor the name
Louis#0144: >:)
bmk#1476: Better than the oort thing tbh
olives ❀#2305: "oort thing"?
bmk#1476: I mean who else here is building a storytelling engine lol
olives ❀#2305: :smiley:
bmk#1476: And other than me but i would never be allowed to name a system that lol
Louis#0144: LMAO
Louis#0144: Generative Ordered planning intrOspection Storytelling Engine
Louis#0144: Using that
Louis#0144: unironically |
Louis#0144: G.O.O.S.E.
bmk#1476: maybe the second O can be Ominous
bmk#1476: since you said it's a horror engine
Louis#0144: its QA based storytelling
Louis#0144: I have 200GBs of models on my google drive
Louis#0144: I need to clear some of these out
Louis#0144: @bmk yo that reminds me
Louis#0144: for the pile v4
Louis#0144: literally include all the weights for GPT-Neo
Louis#0144: LMAO
Louis#0144: as text
Louis#0144: e.g.
Louis#0144: 5 becomes five
Louis#0144: 0.5123 => zero point five one two three
bmk#1476: why the hell would you do that
Louis#0144: 🙂
Louis#0144: sadism
bmk#1476: why not just, like, encode it normally
Louis#0144: SADISM
bmk#1476: newsflash gpt can handle numbers |
gwern#1782: @frank cilantro oh, it's easy. just read everything in https://www.reddit.com/r/mlscaling/
Noa Nabeshima#0290: Say you want to have links in papers do the right thing: instead of taking you to the works cited section, they download and open the paper that's referenced.
Noa Nabeshima#0290: You could maybe use scientific paper APIs to get the works the paper you're reading cites (as JSON). Then, when you click a link with a name like (Silver 2009), your program searches through the JSON for papers that match Silver 2009, somehow finds the DOI, and downloads it from sci-hub (there are other routes you could try in parallel to keep failure rates low)
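(A minimal sketch of the resolution step described above, assuming the Semantic Scholar Graph API for the reference list; the matching heuristic, the example paper ID, and the field names are illustrative assumptions, not a tested pipeline:)
```python
# Hypothetical sketch: resolve a citation label like "Silver et al. 2016" to a
# DOI by matching it against the reference list from the Semantic Scholar
# Graph API. The heuristic (first-author surname + year) is naive and will
# miss plenty of real-world citation formats.
from typing import Optional

import requests

API = "https://api.semanticscholar.org/graph/v1/paper/{}/references"

def fetch_references(paper_id: str) -> list:
    """Fetch the works a paper cites; paper_id may be e.g. 'arXiv:1712.01815'."""
    resp = requests.get(API.format(paper_id),
                        params={"fields": "title,year,authors,externalIds"})
    resp.raise_for_status()
    return [e["citedPaper"] for e in resp.json().get("data", [])]

def resolve_citation(label: str, references: list) -> Optional[str]:
    """Match a label like 'Silver et al. 2016' and return a DOI if found."""
    tokens = label.replace(",", "").split()
    surname, year = tokens[0].lower(), tokens[-1]
    for ref in references:
        authors = ref.get("authors") or []
        if (str(ref.get("year")) == year and authors
                and surname in authors[0]["name"].lower()):
            return (ref.get("externalIds") or {}).get("DOI")
    return None

refs = fetch_references("arXiv:1712.01815")  # AlphaZero paper, as an example
print(resolve_citation("Silver et al. 2016", refs))
```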
gwern#1782: yeah, you could. it's a wicked problem because there are so many differences in formatting and metadata and lots of weirdness
Noa Nabeshima#0290: I think this is super doable except for the part where a program is triggered when you click (Silver 2009)
StellaAthena#3530: @Noa Nabeshima isn’t this rather trivial in LaTeX / BibTex?
gwern#1782: for example, for my own auto-citation code, yesterday or so I discovered that PLOS assigns DOIs to individual tables, figures, or supplements
Noa Nabeshima#0290: D:
Noa Nabeshima#0290: how so?
Noa Nabeshima#0290: I would pay good money to talk to a PDF standard guru
StellaAthena#3530: @Noa Nabeshima it seems like you should be able to write a simple `bib.sty` file to do it.
Noa Nabeshima#0290: I want to use this myself for other people's papers, not just for my own, if that's what you mean
StellaAthena#3530: Oh
StellaAthena#3530: I thought you meant format your papers to do this
StellaAthena#3530: *that* is easy
StellaAthena#3530: Other people’s is not
Noa Nabeshima#0290: yeah
gwern#1782: PDFs do allow popups on hover, but parsing arbitrary PDFs is super hard
3dprint_the_world#6486: In a previous job I had to write some tools to generate and edit pdfs. It's worth keeping in mind that pdf is an awful format that doesn't even specify layout or anything. Basically a pdf is just a list of characters and xy coordinates.
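(You can see this for yourself: pdfminer.six will dump every character together with its page coordinates. A rough sketch; the file name is a placeholder:)
```python
# Dump every character in a PDF with its xy position on the page, which is
# more or less all the "text" a PDF actually contains.
from pdfminer.high_level import extract_pages
from pdfminer.layout import LTTextContainer, LTChar

for page in extract_pages("paper.pdf"):
    for element in page:
        if not isinstance(element, LTTextContainer):
            continue
        for line in element:
            for obj in line:
                if isinstance(obj, LTChar):  # skip virtual spaces (LTAnno)
                    x0, y0, _, _ = obj.bbox
                    print(f"{obj.get_text()!r} at ({x0:.1f}, {y0:.1f})")
```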
Noa Nabeshima#0290: How do I access that format? |
gwern#1782: to point back to our earlier discussions about why we don't just dump PDFs or PDF OCR into a LM despite there being so many millions of PDFs available, PDFs aren't text, they're barely even glyphs or pixels arranged arbitrarily in 2D
3dprint_the_world#6486: Yet this awful format is still better than html for scientific papers
Noa Nabeshima#0290: How are links represented?
gwern#1782: ho ho you think citations are even hyperlinked!
Daj#7482: PDFs are turing complete
Daj#7482: ¯\_(ツ)_/¯
Noa Nabeshima#0290: https://cdn.discordapp.com/attachments/729741769738158194/787761568111788052/unknown.png
Noa Nabeshima#0290: I mean this blue stuff
gwern#1782: gosh life sure would be easy if citations in PDFs in the wild always had a hyperlink associated with them
Daj#7482: The PDF standard is several thousand pages
gwern#1782: you can maybe reverse-engineer a specific tex template...
Noa Nabeshima#0290: Arxiv PDFs have this
Daj#7482: There are probably 0-5 people on the planet that actually understand PDF I would reckon
3dprint_the_world#6486: @Noa Nabeshima are you asking about the pdf format in general? It's highly documented. It's a complex format though, so be forewarned.
bmk#1476: so the entire pipeline from tex to pdf is incredibly cursed
gwern#1782: if you do it right, HTML can be pretty good. look at my website, or arxivvanity/sanity
3dprint_the_world#6486: @gwern yeah because you don't have any math 😜
Noa Nabeshima#0290: I'm restricting the scope to PDFs that have this clickable blue thingy. I'm assuming that the link has the text data 'Silver et al. 2016'
Daj#7482: Whenever anyone brings up webdesign I eagerly show them gwern's site like a pushy Jehova's Witness
bmk#1476: does there exist another document format that could realistically replace pdf (modulo any network effects) for the specific niche of papers? |
gwern#1782: @3dprint_the_world I have excellent math support via compile-time mathjax. and if I needed arbitrary latex support, there are pandoc plugins to compile latex docs to PNG/SVG which can be inlined/linked.
3dprint_the_world#6486: yeah I guess mathjax is alright.
3dprint_the_world#6486: it's a lot of overhead though just to display some math.
Noa Nabeshima#0290: @3dprint_the_world How would you recommend learning the format? The main thing I want to do is have a pdf viewer set up so that when I click the blue thing it triggers a function that takes as input the string in blue.
Daj#7482: Literally HTML?
bmk#1476: html is also mildly cursed
gwern#1782: it's some CSS and fonts... and overhead, as opposed to an entire PDF viewer?
Daj#7482: HTML is like your grandma cursing at you for not eating your vegetables, PDF is like you accidentally defiling the grave of an entire dead Indian civilization
3dprint_the_world#6486: @bmk I actually don't think there's anything that could replace PDF for papers.
Daj#7482: This, modulo network effects, makes negative sense to me
3dprint_the_world#6486: PDF and HTML are two completely different things. HTML, bad as it is, is at least some kind of content-based markup language. PDF is just a series of instructions for a printer, essentially.
Daj#7482: PDFs are so absurdly bloated and don't even reliably support simple things like _copy pasting_
Daj#7482: PDFs were developed for _printing_
Daj#7482: They're _image files_
3dprint_the_world#6486: yes
Daj#7482: Look at distill.pub
Daj#7482: Now look at your pdf
Daj#7482: Now back to distill.pub
bmk#1476: honestly, i totally think developing a new format specifically for papers would be really cool
Noa Nabeshima#0290: distill is cool |
bmk#1476: it could compile to pdfs and html i guess, but the generated html would be basically binary output
3dprint_the_world#6486: I actually think the reason PDFs took off for papers initially was *because* of the fact that it's basically an image format.
Daj#7482: bmk, you have a problem with "huh, this absurdly complex technical problem that literally billions of dollars have been spent on seems easy to fix"
3dprint_the_world#6486: Academics were used to sending each other physical papers that were professionally typeset.
bmk#1476: wasn't there someone on hn who said that someone developing a latex replacement would be a really valuable target for philanthropy
bmk#1476: or something like that
Daj#7482: haha
Daj#7482: Network effects
bmk#1476: i am not saying it's *easy*
3dprint_the_world#6486: Even in the 90's, when my dad was doing his phd, the process for submitting a paper was entirely based on physical paper and snail mail. Few journals did email or online correspondence.
Daj#7482: It's the inadequate equilibria problem
Daj#7482: HTML5 is actually really good if you use it right
Daj#7482: But Facebook still uses PHP
bmk#1476: if this format can compile to pdfs and is less than half as cursed as latex, i don't know about others but i will start using it right away
Daj#7482: Airlines still use IBM mainframes
Daj#7482: HTML can be easily compiled to PDFs, basically every web browser does this
Daj#7482: We already have _vastly_ superior formats
bmk#1476: is html *less* cursed than latex?
Daj#7482: But LaTeX has first mover advantage
Daj#7482: Absolutely |
Daj#7482: Just copy gwern's CSS lol
3dprint_the_world#6486: PDF is also basically first-mover advantage.
gwern#1782: but seriously, the HTML version of like 99% of arxiv papers on https://www.arxiv-vanity.com is better than the PDF. it looks about as good, reflows, has working copy-paste, is machine readable, much more handicap-accessible, encourages hyperlinks... heck, even dark mode mostly works
Daj#7482: Comparing modern HTML/CSS/JS to PDF is like comparing a not-perfect-but-modern-ish language like C# to FORTRAN
3dprint_the_world#6486: @gwern I like your site but I strongly, strongly disagree with arxiv-vanity being better than pdf
gwern#1782: (would it look as good printed out? no. but I haven't printed out an arxiv paper since 2008, and such skeuomorphisms are not a use-case that should be catered to at the expense of infinitely more important ones like 'can be read on a smartphone')
3dprint_the_world#6486: the images are all the wrong size and wonky
3dprint_the_world#6486: figures often come out totally wrong
3dprint_the_world#6486: sometimes incomprehensible
3dprint_the_world#6486: however the math rendering is good, I'll give them that
Daj#7482: Wow, it's almost like if we taught our scientists to use a WYSIWYG website builder, or just some kind of startup medium.com/distill.pub type platform, instead of a language made by a backslash fetishist who somehow thinks adding a digit of pi to your release version makes any sense outside his own cartoon world, we wouldn't have this problem
Daj#7482: (for the record: I love the guy, but he is absurdly detached from reality lol)
3dprint_the_world#6486: he thinks it's a good idea to write your own machine language
3dprint_the_world#6486: he's nuts
3dprint_the_world#6486: (I love him too, but he's nuts)
Daj#7482: Seems like the kind of guy to develop and promote the most userfriendly, modern solutions
bmk#1476: honestly, this is a good thing, because the difficulty of typesetting latex is probably a major factor in slowing down timelines
3dprint_the_world#6486: I think I fundamentally disagree with the idea of a website-like format being suitable for scientific papers
Daj#7482: man
3dprint_the_world#6486: like I don't want my papers to resize to fit my screen. It sounds like a good idea in theory but it doesn't really work. |
Daj#7482: If only I could use a convenient HTML5 editor, and then export it to some kind of format like...PDF or something...
Daj#7482: Or, you know, HTML absolutely supports static sizes
Daj#7482: Resizing is actually rather tricky
gwern#1782: _notes gwern.net resizes very well_
3dprint_the_world#6486: yeah I know. I'm not talking about HTML per se. I'm more talking about website-like look and feel
Daj#7482: Also HTML/CSS/JS is a full programming ecosystem. You just need one plucky startup or OSS project to make an idiot-proof plug-n-play system
3dprint_the_world#6486: having a paper manually formatted for different screen sizes is probably good, but that is way too much effort usually. So I'll settle for being formatted for one screen size (laptop/desktop)
Daj#7482: My argument is, with modest retraining, I can recreate everything Latex and PDF give me at a fraction of the effort, and am suddenly able to, you know, publish things that have advanced at least a little since the 1500s...
Daj#7482: touché
Daj#7482: distill.pub is the only 21st century paper publishing site, period
Daj#7482: Everything else is mid 20th century at best
3dprint_the_world#6486: One worry I have with the idea of academics going to HTML for writing papers is that it's a bit too unconstrained
Daj#7482: Oh yeah, because Latex totally isn't
3dprint_the_world#6486: not everyone has the discipline of @gwern
Daj#7482: Have you not implemented your own OS in Latex yet?
Daj#7482: I'm sure Knuth has
Daj#7482: At least HTML pretends to be consistent
gwern#1782: you wouldn't need my discipline, you'd just use the tool and it'd be highly opinionated. if you look at the Markdown source of any gwern.net page, it looks a lot like any Pandoc Markdown source. there's a few HTML classes you won't recognize, a sprinkling of custom syntax (always link-based starting with '!'), but other than that, all of the gwern.net-ness lives in the build code / JS / CSS and metadata database.
Daj#7482: Latex is literally DECADES of organic code decay
3dprint_the_world#6486: do you really want to see web 3.0 'design' in papers? |
3dprint_the_world#6486: huge screen-filling background scrolls?
Daj#7482: distill.pub
Daj#7482: Nice strawman you just defeated
Daj#7482: "Some people have terrible handwriting, we shouldn't write things down"
Daj#7482: ~~Also, we all know the one true webdesign is raw .txt files hosted on Apache Webserver~~
bmk#1476: But why use html when you have the opportunity to invent a new shiny format that nobody wants to use?
3dprint_the_world#6486: I dunno, I just feel like people would be too encouraged to make things ever-more-flashy at the expense of actual content
Daj#7482: bmk, this is an intervention
Daj#7482: As if academics had the time or nerve
Daj#7482: Or as if conferences wouldn't police that
3dprint_the_world#6486: but that's exactly what I mean!
Daj#7482: I wish academics were _much_ more experimental
3dprint_the_world#6486: I don't.
3dprint_the_world#6486: I've seen what happens when academics get experimental.
3dprint_the_world#6486: All I want is that researchers just communicate their findings or results in the most plain unobfuscated non-flashy way.
Daj#7482: Look at your math
Daj#7482: Now look at mine https://cdn.discordapp.com/attachments/729741769738158194/787768412880502814/IMG_20201209_152110.jpg
Daj#7482: Science advances one funeral at a time, and I can't wait for us to free ourselves from the archaic shackles of our printing press caveman ancestors
3dprint_the_world#6486: honestly this just looks tacky to me
3dprint_the_world#6486: and it wouldn't even work for most math |
Daj#7482: Look at your block of text
3dprint_the_world#6486: I'm sorry 🤷♂️
Daj#7482: Look at distill.pub
bmk#1476: I feel like this specific example only works if you already understand it
Noa Nabeshima#0290: huh actually as I'm looking at it I'm starting to appreciate the notation
Daj#7482: Haha I don't really care ofc, but I do think that the amount of reverence and deference to authority when it comes to paper publishing is _astounding_
Daj#7482: Has no scientist ever taken a public communications class?
Daj#7482: Wait, don't answer that
Daj#7482: Totally disagree, this makes things so much easier to follow
Daj#7482: You're all just scared of something new and better lol
3dprint_the_world#6486: it doesn't work for me, I'm colorblind.
bmk#1476: What the heck does "spin your signal around a circle" mean to someone who doesn't already understand this
Daj#7482: haha, fair
3dprint_the_world#6486: inb4 you say that colorblindness is just me being archaic
gwern#1782: I think it's tacky because it's a bad color scheme esthetically, and bad for colorblind people; but the idea of colorcoding notation is extremely useful and we only don't do it because color printing was so expensive historically. I mean, try applying any of the arguments against colorcoding complicated equations to, say, complicated graphs...
Daj#7482: "This weird E sign makes no sense to people who haven't seen it before!"
bmk#1476: That's absolutely not the same thing
gwern#1782: (color printing anything is super expensive, and remember why Knuth made the tex symbol for math mode '$' - it was another one of his little jokes, that the text enclosed was "expensive" because you were charged several times more by the typesetter to deal with fscking math instead of nice easy english text)
Daj#7482: All of your arguments pattern-match for me to "Authority figure say tradition good, tradition therefore good!"
Daj#7482: It is though |
Daj#7482: I learned about the rotation thing before I learned the summation sign
Daj#7482: Don't ask lol
Daj#7482: Has any of you actually ever read a distill.pub paper? Or used a colab notebook?
Daj#7482: Why doesn't every paper come packaged with its own inline, reproducible code?
Daj#7482: notebook doesn't run? Reject from the conference
3dprint_the_world#6486: I'm not against good design, in fact I think we should have more good design and visual aesthetic in papers. The *problem* is when people who aren't really trained in good design attempt to do that. The results are often god-awful and in bad taste. There are people who are trained in good graphics design. In an ideal world every paper would get a good graphics design treatment. But it's not practical to hire graphics designers for every paper. So I'm happy to just fall back to 'be boring, but clear.'
bmk#1476: forget inline and reproducible, getting any code that runs at all would be good enough imo
bmk#1476: why require that the code be embedded into the thing
StellaAthena#3530: Because that would drastically cut down on the amount of publishing you can do
bmk#1476: that seems like the wrong emphasis
Daj#7482: "People who have bad handwriting attempt to write, so we should discourage those that are good at it from trying"?
3dprint_the_world#6486: no
Daj#7482: Ok, I'm depressed again
3dprint_the_world#6486: I really don't get your argument.... are you saying handwriting should be *encouraged* in publications?
bmk#1476: the number of publications at neurips would probably fall at least one order of magnitude, maybe two, given the state of current research code
StellaAthena#3530: I see no downside to this
bmk#1476: me neither
Daj#7482: :yes:
Daj#7482: 10x less papers with 10x the quality
3dprint_the_world#6486: lol |
Daj#7482: We did it boys
gwern#1782: 'dijkstra would approve'
bmk#1476: handwriting is *quality*?
Daj#7482: In this metaphor yes
Daj#7482: It's a standin for "easily communicating what it intends to"
3dprint_the_world#6486: I'm honestly confused now
Daj#7482: Eh, I'm basically just saying people aren't ambitious and creative enough
Daj#7482: It's why I left academia
3dprint_the_world#6486: creativity is overrated
Daj#7482: "Mr Advisor, this paper is complete garbage and has no scientific use, but it will take me 3 months to get ready, you really want me to spend all that time on this instead of doing something useful?" " :yes: "
Daj#7482: ^ Basically a real conversation I had multiple times
bmk#1476: you have a finite amount of creativity, do you spend it on the container or the contents
Daj#7482: You are missing the point
Daj#7482: but honestly I don't wanna continue this convo lol, need to get some last minute work done
gwern#1782: you can take the creativity from other designers, and you amortize container work over all the contents you can apply it to
Daj#7482: Summary: I'm an ADHD startup-kid that thinks y'all are too tradition-bound and should go, like, write a poem or take ayahuasca or something
gwern#1782: https://www.gwern.net/About#design "The great sorrow of web design & typography is that it all can matter just a little how you present your pages ... But the great joy of web design & typography is that just its presentation can matter a little to all your pages."
Daj#7482: Or read Robin Hanson
Daj#7482: And free yourself from the Elephant in the Brain that makes arguments like "this is how it's always been, it's good" not seem _instantly_ suspicious
gwern#1782: _declines to say how much time he spent working on that parallelism_ |
Daj#7482: And yes, people don't appreciate the perfection of gwern.net haha
3dprint_the_world#6486: I think you're fundamentally misunderstanding my argument @Daj , which is likely my fault since I probably haven't communicated it as well as I could have. My argument is that for the purposes of academic publishing, simplicity often beats 'creativity' and '30 pieces of flair', because many academics aren't really good at that nor do they have time to be good at it. I don't really care about tradition.
3dprint_the_world#6486: and also because hiring graphics designers or web designers isn't feasible for most papers either
3dprint_the_world#6486: like if everyone standardized on some kind of template, like maybe gwern's template, that would be cool. But I think it would take a lot of effort to make a generic template that satisfied everyone's needs.
3dprint_the_world#6486: and people's needs are often much more diverse than what we initially expect, in surprising and unintuitive ways
3dprint_the_world#6486: this is precisely why latex is such a bloated mess.
Daj#7482: A piece of context that might make the argument I am gesturing at make more sense: I find myself pretty convinced by the class of arguments that the overwhelming majority of useful scientific work is done by a minuscule minority of overperforming researchers, that adding a 50th-percentile scientist does little to nothing for overall progress, and that such a person would probably have more positive impact as the 99th-percentile guy's plumber. These kinds of traditions and old technology have a regression-to-the-mean effect: they force the worst scientists to publish passable papers, but restrict what the top people could do if they put their minds to it
Daj#7482: When I say "creativity", I don't mean "clever wordplay and flashy pictures", I mean "trying out lots and lots of new models of formatting, presenting information and publishing rapidly to find out what works", the hacker-spirit, fail fast
Daj#7482: distill.pub is, as far as I can tell, the single best collection of papers I have ever seen
Daj#7482: They're appealing, understandable, interactive, accessible...marvelous
Daj#7482: Not all papers would be like this
Daj#7482: But I'm confident that 99th percentile could figure out even _better_ ways to publish
Daj#7482: (and the rest can follow along the template, if necessary)
3dprint_the_world#6486: ok I agree with that generally.
acertain#1646: re: pdf parsing, semanticscholar/allenai probably has code & models on github to extract citations?
3dprint_the_world#6486: except I would change "the overwhelming majority of useful scientific work is done by a minuscule minority of overperforming researchers" to "the overwhelming majority of useful scientific work is done by a minuscule minority of overperforming researchers' grad students"
acertain#1646: @Noa Nabeshima
Daj#7482: Are you implying grad students aren't researchers?
Daj#7482: Triggered!
Daj#7482: haha |
Daj#7482: but yes I know what you mean
Daj#7482: But still, there's something about that weird, small group of far-end-of-the-bell-curve scientists that is so unfairly overperformant that it's just naturally insulting to our human sense of fairness
Daj#7482: The top 1% is _far_ more than twice as productive as the median
Noa Nabeshima#0290: I roughly know this, am assuming that I can use semantic scholar, etc. APIs (assumption might be false!). The main thing I don't understand right now is how to get a program to run when I click the blue link on PDFs that have them.
3dprint_the_world#6486: sure, my dad is one of those 'overperforming' researchers. hundreds of papers in his field and thousands of citations. he basically founded his field.
Noa Nabeshima#0290: that's cool
3dprint_the_world#6486: there's nothing magic about it. it's the winner-take-all effect.
3dprint_the_world#6486: if you have a good start to your career, you get access to the best students and grants, and from then on it's mostly just a management problem.
acertain#1646: so pdfs can have normal links in them, to other sites, so you could rewrite pdfs to modify/add links? idk how hard that would be, maybe you can just add rectangle links
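(A rough sketch of both halves of that with PyMuPDF, assuming the modern snake_case API; file names and the target URI are placeholders:)
```python
# List the link rectangles a PDF already has, recover the text drawn under
# each one, and add a new clickable rectangle of our own.
import fitz  # pip install PyMuPDF

doc = fitz.open("paper.pdf")
page = doc[0]

# Each link is a dict with a 'from' rectangle; the "anchor text" is just
# whatever happens to be drawn under that rectangle.
for link in page.get_links():
    anchor = page.get_textbox(link["from"]).strip()
    print(anchor, "->", link.get("uri") or link.get("page"))

# Add a rectangle link pointing wherever the citation resolver decided.
rect = fitz.Rect(72, 700, 200, 715)  # x0, y0, x1, y1 in points
page.insert_link({"kind": fitz.LINK_URI, "from": rect,
                  "uri": "https://example.org/resolved-paper.pdf"})
doc.save("paper-rewritten.pdf")
```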
Daj#7482: tbh, I'm thinking more "Eliezer" and "Von Neumann" than "Most cited researcher in their field"
acertain#1646: or you can try to script/modify zathura or maybe adobe acrobat
Daj#7482: No offense to your dad, I'm sure he's brilliant
3dprint_the_world#6486: but you don't know who my dad is 😁
3dprint_the_world#6486: trust me, he's one of the 1%
Daj#7482: I'm just applying social oiling shhh
bmk#1476: i mean, i think we're doing a good job here of taking advantage of large numbers of parallel researchers (and i don't believe any of us are the von neumanns of anything)
3dprint_the_world#6486: by any reasonable measure
Daj#7482: I guess I don't think it's _just_ luck. It can be
bmk#1476: pile was massively parallel (see: author list) and i have a few other ideas for similar massively parallel projects
3dprint_the_world#6486: no it's not luck, my dad is brilliant, when he was a kid he got like 150 on an IQ test or something |
Daj#7482: But like, I've met _truly_ smart people, and they put any professor with thousands of papers I've met to shame
Daj#7482: Sounds about right
Noa Nabeshima#0290: yes that's true. I have a weak preference against editing the pdf files, but that might be good
3dprint_the_world#6486: and as I said, he basically founded his own field
bmk#1476: none of us are those people, so we need to look for the next best thing
StellaAthena#3530: Is it a math or math-adjacent field?
Daj#7482: Let's be frank: Do you think any of us is gonna solve alignment, AGI, cure cancer, or any of that?
bmk#1476: no
Noa Nabeshima#0290: possible
bmk#1476: absolutely not
3dprint_the_world#6486: @StellaAthena yes, probability/stats more specifically.
StellaAthena#3530: Hmmm now I’m really curious lol
Daj#7482: tbh...I do think there are 1-3 people in this discord I _genuinely_ think are Eliezer level smart
bmk#1476: nobody here will do anything like that, and we need to accept that and go for the next best thing
Daj#7482: (not von Neumann level productive though, unfortunately)
Noa Nabeshima#0290: not yet*
3dprint_the_world#6486: wrote one of the very first seminal papers in the field in the 90's, everyone laughed at him originally, etc., now it's one of those 'attention is all you need' papers that everyone cites
3dprint_the_world#6486: lol
Daj#7482: This is the average-case-best-strategy, but can be false in "extremostan" areas
StellaAthena#3530: Now I really want to see Connor’s “EleutherAI intelligence power rankings” |
Daj#7482: I have decided to dedicate my life to being utterly crazy, for example, multiplying the minuscule chance I happen to be a genius after all by the astronomical payoff if my obsessive attempts to apply it to alignment actually pay off
Daj#7482: Huh now that you mention this...I do actually have such a thing in my head, lol
3dprint_the_world#6486: haha, don't want to dox myself here just yet. Also I don't really want to take any credit for my dad's achievements, lol. In comparison I'm just a failson 😢
Daj#7482: But it would be social kryptonite to say it publicly haha
StellaAthena#3530: Obviously
bmk#1476: i for one am content with any potential ranking
3dprint_the_world#6486: also, the Von Neumanns aren't the 1%, they're more like the 0.01%
StellaAthena#3530: Like, is there any chance you might not?
acertain#1646: @Noa Nabeshima https://github.com/allenai/scholarphi maybe?, looks to be based on pdf.js or maybe image rendering
Noa Nabeshima#0290: woww amazing, I misunderstood
bmk#1476: there is a striking correlation between my own internal ranking and the degree to which i have worked with the said person
3dprint_the_world#6486: anyway my point was that a large part of academic success just comes in being a good manager (a research group is basically a small company) and having good organizational skills. Obviously you need to be smart too, and capable of solving complex technical problems. But there are plenty of people who are really good at the latter who fail because they aren't good at the former.
bmk#1476: i'm not sure where the causation flows from yet
Daj#7482: -2. lucid's dog
-1. lucid
0. gwern when not anime
1. Me
2. Me
3. Me
4-6. Stella |
7. Unclaimed
8. Sid when high
9. Sid when sober
10-395. All the anime and pepe pfp people
396. Sid again
397. bmk, pushed down as he speaks anime language
398-1200. The mindless, terrible void
1201. Me
1202. gwern when anime
acertain#1646: i just found it :)
Daj#7482: I never explicitly formulated it in my head before just now
bmk#1476: ~~お前はもう死んでいる~~
gwern#1782: so you're saying us anime people could take over if we wanted to, because our collective IQ points are so much larger
StellaAthena#3530: I’ve come to the conclusion that there are about 1 mathematician per 10,000 people in a country like the US / UK / France
Daj#7482: The summation operation of anime-pfp-level-intelligence (also called a "Twitter") has the interesting property of becoming smaller with each addition
3dprint_the_world#6486: @StellaAthena I'm probably coming dangerously close to doxxing myself but the field in question is genomic selection, specifically marker-assisted selection
Daj#7482: actually, I made a few mistakes to that ranking, let me fix that
bmk#1476: placing bets now on my new spot being below the void
3dprint_the_world#6486: and on that note, I actually think that genetics/biology is a field with a lot of low-hanging fruit right now and a lot of people who are currently doing things like we're doing (ML, AI, mathematics, etc.) would have way more impact if they got into those areas
Daj#7482: I worked in a bio lab for a while |
3dprint_the_world#6486: an interesting example being AlphaFold
Daj#7482: someone help these poor souls learn math
bmk#1476: if you have any particular ideas of potentially high-impact things we should do, i'd absolutely love to hear them
3dprint_the_world#6486: figure out biology
3dprint_the_world#6486: lol
3dprint_the_world#6486: but more seriously,
Daj#7482: I think the single largest boost to human progress would be some educational or medical intervention that made people better at math
bmk#1476: have you seen how hard it has been to get me to learn math? making me learn bio would be impossible
Daj#7482: Wait...autistic people are good at math...vaccines cause autism...Bill Gates you bloody genius!
3dprint_the_world#6486: right now basically everything in biology is a huge mystery. Up until recently we didn't even know how proteins folded into shape. But that's only part of the puzzle.
3dprint_the_world#6486: We used to think that organisms evolved by randomly mutating new traits.
Daj#7482: Broke: figure out biology spaghetti code
Woke: Replace biology with superintelligence-designed virtual paradise scapes
3dprint_the_world#6486: Now we know that that is a total misunderstanding.
bmk#1476: wait, what? this is news to me
3dprint_the_world#6486: Instead we evolve mostly by turning genes on and off.
bmk#1476: but how did those genes get there in the first place
gwern#1782: @3dprint_the_world I think I know who you're referring to and I've read their papers many times. incidentally, I think we were discussing the other day brute force physicists/statistics/data vs credentialed self-recommending experts, and boy, the incursion of animal genetics methods into human genetics is that in *spades*.
Daj#7482: Broke: Organisms evolve to survive
Woke: Organisms evolve to evolve |
3dprint_the_world#6486: they were already there, just 'dormant' or 'junk'
Daj#7482: Genes invented humans in order to reproduce on the moon
gwern#1782: (it is quite dispiriting to watch an entire field turn out to be garbage and overturned by some simple statistical points)
3dprint_the_world#6486: @gwern wow you are eerily accurate on that point
gwern#1782: (but mega props to visscher, yang, the others, and those who got on board early like steve hsu for really demonstrating how the scientific method works even when ideology and prejudices and innumeracy blind an entire field)
3dprint_the_world#6486: @bmk Organism development works by a sequence of genes getting turned on and off, in a way similar to running a computer program. Over evolutionary time periods, we have a constant background level of mutation that is going on and modifying genes. Genes have a kind of ranking in terms of how critical they are to core biological processes. Really critical genes, like those that control DNA copying, rarely if ever get modified. Less critical ones, like those that control skin color for lack of a better example, get modified at a much higher rate. And then we have 'junk' DNA which is essentially just lots of old 'genetic code' that has long since mutated into barely recognizable form. Most evolution consists not of evolving new DNA sequences, but of bringing old code back to do new things ('uncommenting' code).
bmk#1476: how does the gene know how important it is and how does it make sure the important ones dont get mutated? dna raid? ecc?
gwern#1782: now, where does this junk DNA come from you ask? well, sometimes whole chromosomes get duplicated. or gene regions get duplicated creating a copy-number variation. if you literally have 3 copies of gene A, like AAA, well, the first A is probably enough to do whatever it is that A does, so now A2 and A3 can potentially evolve into something useful
3dprint_the_world#6486: there are explicit mechanisms to prevent certain genes getting modified, and also just selection processes (if you lose the ability to make ribosomes, you die)
bmk#1476: ah
bmk#1476: i was wondering how our wetware implements ECC
Daj#7482: Problem: Organism isn't working
Solution: Shut off organism and hope someone else got it right
3dprint_the_world#6486: https://www.nature.com/scitable/topicpage/dna-replication-and-causes-of-mutation-409/
bmk#1476: that honestly makes a lot more sense than "Everything is just bit flips accumulated" because it explains how these big complicated structures could come into existence
Daj#7482: Imagine every time our code didn't work we'd just randomize the code a bit and try again
Daj#7482: (this is what ML actually believes)
bmk#1476: :guilty:
bmk#1476: where did you get this perfectly accurate description of my debugging process
Daj#7482: Wait...are you just a pile of several trillion cells stacked on top of each other in a trenchcoat?! |
bmk#1476: oh shit i've been figured out
bmk#1476: i mean, uh
bmk#1476: gotta go make a research in the research factory
Sid#2121: nice, three spots in the list. very content
Sid#2121: also just skimming through the thread above and totally agree with your position re: paper publishing. Now put your money where your mouth is and make eleuther a distill.pub style platform
bmk#1476: why not just use my blog lol
Sid#2121: or we could just rip off @gwern 's site and change the background colour
Daj#7482: We're entirely organized through a discord, we're the most zoomer AI lab imaginable
bmk#1476: my site is *already* a cheap gwern.net ripoff
Daj#7482: We have big chungus in our paper
Sid#2121: can we publish our research on tiktok?
bmk#1476: absolutely not
Daj#7482: Only if you do the dancing
Sid#2121: all mathematical equations will be expressed through dance
gwern#1782: works for bees
Daj#7482: Put that B.A. to use
Daj#7482: I assume this is what you learn in a B.A.
Sid#2121: B.As are for big brains, ok
Sid#2121: we learnt... at least several things
bmk#1476: when's the last time a bee published an ML paper tho |
Daj#7482: Interpretive dance being among them, I imagine
Sid#2121: they're doing it all the time, in dance form
bmk#1476: actually scratch that the median bee can probably write better ML papers than humans because there's no math in them anyways
Daj#7482: How would you know? Maybe all of them are published by bees masquerading as humans in a trenchcoat?
bmk#1476: er, math paper?
Daj#7482: The woke twitter mob is definitely run by wasps
bmk#1476: see, bees will never replace humans because we manufacture the trenchcoats
bmk#1476: and bees cannot
Daj#7482: _Yet_
bmk#1476: therefore, by an unexplained leap of logic, i will conclude that humans will always be superior to bees
Daj#7482: Once they master the technology, our species is done for
Daj#7482: No, listen to me! We have to work on Bee Alignment!
Sid#2121: seriously tho @Daj can we do this?
Sid#2121: i'm pretty sure distill's template is OS?
bmk#1476: :smallbrain: AI alignment
:bigbrain: BI alignment
Daj#7482: Absolutely, I would _love_ to publish like distill or gwern
bmk#1476: y not use my blog template lol
Daj#7482: but I also can only express myself artistically through weird lovecraftian prose and science puns
bmk#1476: the source is available at `view source` |
Sid#2121: it's not as pretty as gwern's, sorry 😦
Daj#7482: Oh, and vaguely connected blog posts that turn into mildly-angry, rambling manifestos
bmk#1476: just a few minor tweaks to the horrific mobile defaults and it should be good to go
Daj#7482: I'll say it again: Raw txt on Apache webserver
Daj#7482: http://www.weidai.com/
You may not like it, but this is what peak webdesign looks like
bmk#1476: well, we have one already
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/787784938870014002/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/787785044155564083/unknown.png
Daj#7482: http://www.weidai.com/bmoney.txt Satoshi at his best
Sid#2121: i'll have you know this is peak web design https://cdn.discordapp.com/attachments/729741769738158194/787785244497149962/Screenshot_from_2020-12-13_21-56-42.png
Daj#7482: to be fair, you have to have an extremely high IQ to understand Schmidhuber's webdesign
gwern#1782: schmidhuber knows that the secret to esthetic enjoyment is steady compression progress, so he designs his websites to be maximally cluttered so you can derive maximal enjoyment from gradually figuring it out. you may not like it. but this is it. this is peak second derivative of compression performance.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/787785818512687154/unknown.png
Sid#2121: what the fuck
Daj#7482: Eventually, Schmidhuber's website is just an encrypted blob of text, the private key destroyed
3dprint_the_world#6486: Finally, to answer @bmk 's question, I think the fundamental open problem in biology is: Is there some simple underlying principle? Like right now biology seems really really complicated. There are all these genes with complicated expression networks, and proteins that each seem to fold into some unique shape, etc. But maybe this complexity is illusory. Maybe it's just because there's some underlying principle we're missing. Like if you tried to understand a computer by breaking apart a CPU under an electron microscope. It would all seem really complicated. But actually the fundamental principles are simple.
gwern#1782: (I like the 'circuit' approach. same as CNNs, the core logic may not be that complicated, it's just the embodiment in a chemically watery soup with randomness and parasites and stuff that's complicated)
bmk#1476: do you think making a really honking big model and giving it all the info we have is a viable strategy towards finding said underlying structure
3dprint_the_world#6486: it might be a step, sure. |
3dprint_the_world#6486: but it might be a huge red herring.
gwern#1782: I like WP a lot. one of my goals with gwern.net is to show how you could do 'WP but better': popups for everything, not just WP articles; much better link icons; all fulltext non-rotten links; nicer math rendering; etc
Daj#7482: something something https://slatestarcodex.com/2018/09/13/the-omnigenic-model-as-metaphor-for-life/
3dprint_the_world#6486: I agree with the premise of that article but not the conclusion.
3dprint_the_world#6486: I don't think building large polycausal models is the best we can do. (or at least, I hope not)
Daj#7482: :brr:
3dprint_the_world#6486: Even though biology probably isn't simple in a 'this ONE WEIRD GENE controls intelligence!' way, it's also probably much more simple than 'the best you can do is large approximate statistical correlations'
Daj#7482: I'm actually reasonably willing to bet that biology is _precisely_ that complicated
3dprint_the_world#6486: that's for sure what it might seem like *right now*. But maybe this complexity is illusory.
Daj#7482: Yep, it might be
Daj#7482: I'm feeling roughly 70/30 atm personally
Daj#7482: Maybe 60/40
3dprint_the_world#6486: the same way the apparent complexity of, e.g. animal taxonomy is illusory
bmk#1476: I expect the same techniques that solve the complexity of natural language will be hugely useful
Daj#7482: It really depends how much evolution actually optimizes for compressibility
Daj#7482: Since it has a _very_ large quantum computer to work with, in theory
Daj#7482: (and no ethical restraints)
bmk#1476: And when i say techniques i mean honking big models
3dprint_the_world#6486: @bmk yeah could very well be.
Daj#7482: I'm a pariah of course, but I _genuinely_ think we'll have custom synbio that is infinitely easier to work with and more powerful long before we figure out full control of old bio |
gwern#1782: https://www.lesswrong.com/posts/bNXdnRTpSXk9p4zmi/book-review-design-principles-of-biological-circuits https://www.nature.com/articles/s41467-020-19092-2 not quite what I'm thinking of but gives you an idea what I mean
gwern#1782: and of course you've seen the distill.pub posts on circuits
Daj#7482: Yep, I've updated hard away from my bio-pessimism
Daj#7482: I used to think there was <10% chance just studying genes would get us anything useful before AGI arrives at all
Daj#7482: (unless we allowed for _extremely_ unethical experiments)
3dprint_the_world#6486: purely studying genes might be the wrong approach, sure.
Daj#7482: I have some very strong, very weakly held beliefs here heh
Daj#7482: I think doing biology research is among the less efficient ways to advance biology tbh
Daj#7482: Not the worst of course, but far from the highest ROI
Daj#7482: Biology as a field is...a goddamn travesty
bmk#1476: The best way is to pull an alphafold?
Daj#7482: Yup
bmk#1476: I mean, honestly, i don't think i disagree
Daj#7482: It's just so bizarrely obvious to me and to almost no one else, which seems like a pretty strong hint I'm probably the crazy one
Daj#7482: But, want VR? Cure for cancer? Immortality? World peace? _Work on AGI_
bmk#1476: I've just always thought of applying ML to bio as an obvious consequence of advancing ML, and that my time would be better spent advancing ML itself
bmk#1476: Though i really have seriously considered pivoting to applying ML to immortality
Daj#7482: It's so incredibly close, and the moment it takes off, all problems will be solved on timescales that are blindingly fast on human scales
3dprint_the_world#6486: @Daj I actually think creating synthetic life might be the pathway to understanding biology.
3dprint_the_world#6486: So I don't think it's either/or. |
Daj#7482: I think _replacing_ the spaghetti code with a better framework is the way to go
3dprint_the_world#6486: again though, maybe it only *looks* like spaghetti code
Daj#7482: I don't expect any DNA to be around not all too long after AGI takes over
Daj#7482: Yup, this is an empirical question
bmk#1476: I think by the time we can replace and/or understand it, replacing and understanding will be equally easy
Daj#7482: As said, I'm something 65/35 it's spaghetti or not
Daj#7482: Making it basically moot
3dprint_the_world#6486: @Daj read the article gwern linked, it's quite good
Daj#7482: I've read it before
Daj#7482: As said, I used to be 90/10
Deleted User#0000: yea, personalized medicine is overhyped and bears little fruit besides its obvious use in cancer treatment
Daj#7482: but even if it isn't spaghetti code, I fully expect a superintelligence to effortlessly design a fully superior replacement for biological systems
Daj#7482: in every possible dimension, meaning there would be no reason to keep carbon-based life around except for sentimental reasons
Daj#7482: I genuinely don't think we will ever see any kinds of cyberpunk style gene mods or cyborgs
Daj#7482: Not because we couldn't do it, but because we'll get AGI first
Deleted User#0000: i mean, there has been big wins from genetics though https://www.sciencemag.org/news/2020/12/crispr-and-another-genetic-strategy-fix-cell-defects-two-common-blood-disorders
Deleted User#0000: there are disorders out there that can be pinpointed to single point mutations
3dprint_the_world#6486: your argument, taken to its logical conclusion, is that we shouldn't really bother to understand *anything* except for how to make an AGI.
Daj#7482: With all due respect to this massive scientific and medical success...this seems extremely quaint compared to the stuff happening in ML lol
Daj#7482: Yep, precisely |
Daj#7482: I'm willing to bite the bullet on that argument
StellaAthena#3530: He actually does believe that
bmk#1476: :yes:
Daj#7482: I think almost all scientific work other than AGI and AGI alignment is preeeetty low expected value
Deleted User#0000: hmm, i disagree, if you dive into biology, you will be humbled by the complexity
bmk#1476: Just use a bigger model
Deleted User#0000: deep learning will give us a new way of approaching that complexity, i'll give you that
Daj#7482: I know this argument is extreme, but I commit to being honest about my beliefs and biting the bullet on their consequences
Deleted User#0000: perhaps we'll see a renaissance in systems biology as the tools get adopted. i think alphafold2 is kind of a watershed moment
Daj#7482: I think that if AGI took another hundred years or more to arrive, we would see such a bio revolution
3dprint_the_world#6486: I mean I don't even necessarily hard-disagree. I just think it's hard to figure out what the *practical* consequences of that belief are, when people can't even agree on the route to obtaining AGI.
Daj#7482: My own models just predict it will happen before that and make biology beyond that point more like archaeology than engineering
3dprint_the_world#6486: like if you think Neuralink-style brain enhancement is the path to AGI, as many people do, then you have to care a lot about biology.
Daj#7482: lol, neuralink is so absurdly silly in my eyes
bmk#1476: We don't
3dprint_the_world#6486: sure, I don't either.
3dprint_the_world#6486: I'm just saying.
Daj#7482: like "completely incoherent" silly
bmk#1476: Fwiw I'm fully on board with connor in that solving AGI is possibly the best instrumental path for nearly everything
3dprint_the_world#6486: yes, but we just don't know the path for how to get there |
Daj#7482: Politically incorrect time: I think, as stated before, that the bottleneck is the 0.001% researchers dedicating themselves to alignment. I think the economy naturally optimizes so hard for AGI accelerationism that it will come no matter what anyone tries to stop it
3dprint_the_world#6486: it's like saying 'Fusion is best power source so we should stop work on solar and just work on Fusion'. It's like, ok sure, but we *need power now*, and we have no idea how long fusion will take. Need to diversify.
3dprint_the_world#6486: can't put all your eggs in one basket.
Daj#7482: I think there is almost no value in trying to speed up AGI development if you are an individual making that choice
Daj#7482: Maybe even negative value
Daj#7482: But this stuff _works_
Daj#7482: It's not like GPT3 has been promised for 30+ years
Daj#7482: Moore's law is still going strong
3dprint_the_world#6486: I'm just saying I'm not sure if 100% of humanity dedicating itself to the goal of making larger language models is a properly hedged strategy.
Daj#7482: that is not what I am suggesting
3dprint_the_world#6486: ok then what are you suggesting
bmk#1476: Language models != AGI
Daj#7482: I am not making any _prescriptive_ statements, I'm trying to sketch the decision making models I am personally using to arrive at my conclusions
bismarck91#5255: 🤔
Daj#7482: I am an individual with certain strengths and weaknesses, and with these models and beliefs
Daj#7482: So I'm trying to sketch how those models come to the rather fringe beliefs I hold moderately strongly
Daj#7482: You can use this to update your own beliefs, or not, however useful you find my models and information
Daj#7482: I commit rather strongly to being honest and explicit with these beliefs because I've seen a truly phenomenal amount of very smart people nod along with every single step of my reasoning, until I reach the punchline of "So AGI will make humans completely obsolete and accelerate research so much that there is no way we could make any significant contribution beforehand other than the starting of the actual takeoff and its alignment!" and they just stare at me blankly and say "No."
Daj#7482: haha
bmk#1476: i think this server has a very high concentration of people who say ":yes:" to that punchline |
Deleted User#0000: i agree, i think its just a matter of timescale
Daj#7482: Yup, we can all be smug about those stupid out-group people together!
Daj#7482: Imagine being out-group, lmao, what a bunch of losers
Deleted User#0000: i mean, the scientific method is just an algorithm..
Daj#7482: The "scientific method" is _mostly_ a shibboleth
Daj#7482: Some people consider frequentism science lol
bmk#1476: it's a shibboleth among.. a *certain* crowd
Daj#7482: and by "some people" I mean the entirety of the social, biological and medical sciences
Deleted User#0000: yea, there's a bunch of rituals around it
Daj#7482: The entirety of the social, biological and medical academia, you mean?
Deleted User#0000: i agree a lot of it is nonsense
bmk#1476: no, people there consider it a burden
Daj#7482: ***P VALUES***
bmk#1476: "ugh do we *really* have to show that our numbers are significant? they're clearly *bigger*"
Daj#7482: Again, no matter how cynical you are, you need to be more cynical
Daj#7482: Biology is _barely_ a science
Daj#7482: It's not not a science
Daj#7482: But I'd be willing to bet more than 50%, maybe 80%+ is closer to ritualistic offerings than hard bayesian evidence on the workings of biological systems
Deleted User#0000: yea, alas, we are hopelessly entangled with this 'biology', so we must figure it out
Daj#7482: It's not that the abstract idea of studying biological systems isn't scientific, the same goes for social psychology, parapsychology, and any other fields |
Daj#7482: The problem is in how actual real world humans execute on those"scientific" endeavors
Deleted User#0000: alphafold2 really is the best thing to happen to the biology field. it shows that the complexity is tractable
Daj#7482: My point is biology the object of study is of merit, but the way biologists attempt to tackle this problem seems absurdly inefficient
Daj#7482: This isn't unusual in the least
Daj#7482: The amount of resources humans waste on exclusively ritualistic activities, while convincing themselves they aren't rituals, is _astounding_
Deleted User#0000: it's inefficient because it is really that complex
Deleted User#0000: biology doesn't come with sets of rules like physics
Daj#7482: No, I don't think that's the only problem
Deleted User#0000: exception is the rule
Daj#7482: It's _one_ of the problems
Daj#7482: But not the only one
3dprint_the_world#6486: yeah, and what did I say at the very beginning of this discussion. that it's low-hanging fruit and a lot of people working in the fields we're working on could actually have more impact if they switched to biology.
Daj#7482: Yep, agreed!
gwern#1782: bring the fire down to the heathens
Daj#7482: I think if all the computer scientists and mathematicians started doing bio, the field would advance a decade a year
3dprint_the_world#6486: but you're saying, "let's not do that, let's just keep working on AGI"
Daj#7482: but the higher value sciences would be neglected, of course
Daj#7482: yup
3dprint_the_world#6486: 😃
Deleted User#0000: yea, i can get behind that, unfortunately you'll have to fight a bunch of post-docs insisting on using excel |
Daj#7482: I think it's a travesty if someone with the talent for working on AGI/CS/Alignment decides to go into bio
Daj#7482: But luckily/unluckily, culture kinda does the sorting for us
3dprint_the_world#6486: hmmm hard disagree on that
Daj#7482: There's very distinct _kinds_ of people that go into various fields (on average)
bmk#1476: Still better than fintech
Daj#7482: earning to give imo has higher ROI than doing bio research, usually
Daj#7482: not always
3dprint_the_world#6486: I think a lot of advancement can come from smart people with a certain type of brain going into a field that usually does not contain that type of brain
Daj#7482: I actually don't think that's true in the far limit
bmk#1476: This is assuming you're earning *to give*
3dprint_the_world#6486: fun fact: Ed Witten was originally a political activist and organizer, he didn't even like physics
Daj#7482: I think the 0.001% are so wildly smart and ambitious that nothing short of extreme poverty or debilitating disease can keep them out of the field they are best suited to work in
Daj#7482: but this might be a survivorship bias artifact
3dprint_the_world#6486: yea
Daj#7482: I am in favor of fully global healthcare, education, etc. not (only) to give every human the quality of life they deserve, but _especially_ to get more of those 0.001% into research
Daj#7482: can you imagine how many von Neumanns we have lost to malaria in Africa?
Sid#2121: hard agree, and the same argument applies to work
Daj#7482: So when I say working in bio is a travesty, I'm specifically referring to the unlikely scenario that someone like Eliezer suddenly decided to get a PhD in genetics
3dprint_the_world#6486: on a completely unrelated note, I feel you may be overly romanticising the role of the 0.001% in getting stuff done. There's usually lots of people laying the groundwork first. People like @bmk who dedicate themselves to gathering good quality data, for example.
Sid#2121: i bet a lot of geniuses are stuck in dead end jobs because they've never been given any opportunity to go beyond them |
Daj#7482: I know what you mean, my model is more nuanced than that
Daj#7482: It's not that we don't _need_ those other people
Daj#7482: It's that they're rather fungible
Daj#7482: On a gradient of fungibility
3dprint_the_world#6486: And everyone thinks "That's me!"
Daj#7482: Yea, but some _actually are that_
Daj#7482: I wasn't sure if people like this _actually_ existed until I met some
Daj#7482: These people are just different
Daj#7482: No professor you've ever met at a college is like these guys (except maybe if you go to Berkeley or MIT or something, and even there I expect there to be maybe 0-5 at most)
Sid#2121: you're right, some people can make world-changing, incredibly niche simpsons/ML crossover memes
3dprint_the_world#6486: I don't even really know where this discussion is going anymore
Daj#7482: Through an extremely convoluted set of coincidences, this was in fact the exact action that led to the full, flawless alignment of superintelligent AGI
Sid#2121: i'm not sure where it started, i'm just here to shit on wage labour and big up my meme making skills
Daj#7482: Did we ever have a point? lol
3dprint_the_world#6486: I guess not
Daj#7482: I just like talking to smart people
Daj#7482: Fun and high ROI activity
3dprint_the_world#6486: yeah I do too
3dprint_the_world#6486: but let's not jerk ourselves off too much here
Daj#7482: Eleuther is like the watercooler in a really good uni lab |
Daj#7482: I can do whatever I want in the privacy of my home
bmk#1476: i am indeed quite fungible
3dprint_the_world#6486: I mean yeah I do agree that obtaining AGI is a good instrumentally convergent goal, but I also think it's good to have instrumental diversification too
Daj#7482: I think fungibility is a very high-dimensional gradient. Different axes matter more for different tasks, and have higher or lower variance
3dprint_the_world#6486: like maybe it might turn out that the key to figuring out AGI is some obscure math problem that only 3 people in the world are working on right now
Sid#2121: Connor -> human translation: "everyone is replaceable, including you"
Daj#7482: Of course! Again, these are my models, not my suggestions. I happen to commit myself very strongly to this goal personally _only because I condition on my talents and interests._ I happen to have some rather weird interests and talents that just happen to maybe be useful for these kinds of unusual tasks, so I _think_ I have a rather large comparative advantage here
Daj#7482: Or not who knows lol
Daj#7482: Not quite: "_Most_ people are replaceable"
3dprint_the_world#6486: this is kind of the point of science trying to advance in every possible direction
Deleted User#0000: No, they will all be replaced. Look at what happened with David Baker and alphafold2
Deleted User#0000: And countless examples
Daj#7482: Oh yeah of course
Daj#7482: AGI will exceed humans on _all_ axes
Daj#7482: Obviously
Daj#7482: This is just in these last few decades we fleshy humans have
Deleted User#0000: Right, I think the narrow intelligence we have now is good enough for a lot of problems
Deleted User#0000: Nevermind making it general
Daj#7482: I guess I define general as "a single or collection of intelligences broad enough to fulfill all axes we care about above human level"
Daj#7482: If we have to wire up a hundred specialized models in parallel, but the resulting system outperforms the 0.001% humans on all axes, who cares? Game over, thanks for playing!
3dprint_the_world#6486: like who would have thought that obscure industrial research into making incandescent lamps brighter would wind up literally being the key to computers, lasers, etc.
Deleted User#0000: Yea I agree that's the goal. In the short term, I think having specialized attention nets for each domain would vastly complement and accelerate the way we solve problems now
Deleted User#0000: For fields that have clean accessible data
Daj#7482: I'm not arguing against basic research at all! I think we should go ahead and just defund all but the top 10% bio labs and give their funding to completely fundamental physics and math! Fat chance that'd happen....
Deleted User#0000: Institutions will never change, too much inertia
Deleted User#0000: Just find like minded people
3dprint_the_world#6486: isn't that what we're doing? 😃
Deleted User#0000: Yea I think Connor wants a broad change to society
Daj#7482: I don't think we need to invest actively in stuff like scaling models or hardware, as that is now fully buoyed by industrial economic incentives
Deleted User#0000: I think the writing is on the wall
Deleted User#0000: But people haven't read it yet
Daj#7482: I would want that, sure, but I think it's _much_ harder than solving AGI alignment
bmk#1476: so what do we invest in
Daj#7482: lol
Deleted User#0000: It'll take time
bmk#1476: we're basically all scaling atm
3dprint_the_world#6486: well, for people to read the writing on the wall, first we need to build a wall
bmk#1476: so if you want to change our direction, we really need to work at it
Daj#7482: I genuinely think it would take an order of magnitude or two more time and effort to fix institutions than to just straight up solve alignment before AGI arrives
Deleted User#0000: You can't even get people to believe viruses, how would they shift their approach to problem solving with AI? |
Daj#7482: I am working on it! Just not easy for my ADHD stressed brain to scale lol
3dprint_the_world#6486: the people who could even remotely contribute to AI believe in viruses.
3dprint_the_world#6486: so that's not really a problem.
Daj#7482: Let me be clear: If I could press a button to make every person on the planet understand AGI alignment, _I wouldn't press it_
Deleted User#0000: I disagree, AI will be accessible to the public like how people can operate steam engines
Daj#7482: I think these ideas of "oh! We need to raise awareness! We need to work together!" blah blah blah are well meaning but ultimately parasitic memes
Deleted User#0000: Eventually
Daj#7482: Nah, AGI will take over by then
Daj#7482: People will have no meaningful input on the trajectory of the future pretty soon
bmk#1476: ytho
bmk#1476: isn't that strictly better than not many people understanding it
Daj#7482: Because it's _all_ about those 0.001%
Daj#7482: I think that probably 50% or more of people, if they joined this field, would make it _worse_
Daj#7482: Exhibit A: Identity politics infiltrating academia
bmk#1476: yeah, but there are probably people out there who *could* be very impactful but who dont know of its existence
Daj#7482: It's actively counterproductive to scientific work
Daj#7482: This is what happens when people who can't _actually_ contribute _try_ to contribute
Daj#7482: I genuinely think most (not all) crazy toxic SJW type people _really are trying to help_
Daj#7482: I think most people ruining science and institutions genuinely think they're helping
3dprint_the_world#6486: or the ever-increasing infiltration of marketing/sales/'tech entrepreneur'/HR people into tech industry |
Daj#7482: People don't realize how bad the decay is
3dprint_the_world#6486: it's pretty bad.
Daj#7482: These people aren't _evil_, I'm serious
Daj#7482: They want to make the world a better place
Daj#7482: And it conflicts horribly with our inbuilt forager desire for fairness, but not all people are made equal
Daj#7482: There is a _tiny_ amount of people that I think even have a shot of making meaningful progress in Alignment single handedly
bmk#1476: so "pivot eleuther to do alignment" isn't really the strategy you're thinking of
Daj#7482: these people still need assistants, chip developers, plumbers, friends, family, etc etc, but while I can exchange their plumber with any other without any noticeable loss in productivity, _I can't replace this one person_
bmk#1476: because most of us probably couldn't contribute to alignment
Daj#7482: Yesn't
Daj#7482: Figuring out whether you're capable of helping is actually really hard
bmk#1476: if we pivoted to alignment, you'd be the only person left lol
Daj#7482: Usually requiring roughly a PhD or two of failure to make sure lol
Daj#7482: MIRI spent like 6 years doing absolutely nothing after it was founded
Daj#7482: To be fair, Eliezer was 19 at the time lol
bmk#1476: i'm sure if we spend 6 years doing nothing eleuther would kind of cease to exist well before 6 years was up
3dprint_the_world#6486: uhm, no offense to EY, but I wouldn't consider him part of the hypothetical 0.00001%
3dprint_the_world#6486: he's great at communicating stuff. his contribution is potentially attracting smart people to alignment.
Daj#7482: I think that a group of 10 80th percentile researchers won't do much for alignment, but those same people plus one 0.001% person has a real shot at making a big impact
3dprint_the_world#6486: not actually making a contribution himself |
Daj#7482: hard disagree. I know people will disagree with me, but I think EY is absolutely in that tier
3dprint_the_world#6486: nah
Daj#7482: It's hard for me to verbalize why I believe this
Daj#7482: But I am unfamiliar with any academic I would consider more raw intelligent than EY
Daj#7482: I could be duped ofc
Daj#7482: People generally can only recognize intellect up to one SD or so above their own
Daj#7482: So maybe EY could be one SD above me, or the multiple that I think he is
3dprint_the_world#6486: I think it's more complicated than just being one SD above or below someone.
3dprint_the_world#6486: EY definitely has talent in articulating complex ideas.
Daj#7482: I'm gesturing at a far more complicated model
Daj#7482: I think the contributions EY has made to alignment are more than any other person, period
Daj#7482: I think HPMOR is more valuable than the lifetime output of the average star MIT professor
Daj#7482: Fight me
3dprint_the_world#6486: But I've never seen anything to suggest he's actually made significant contributions himself. Maybe I'm wrong.
3dprint_the_world#6486: He's attracted a lot of smart people which is great.
andyljones#7746: @3dprint_the_world how'd you rank musk
3dprint_the_world#6486: lol
Daj#7482: I don't care about purity or prestige, I'm a consequentialist, I care about results
3dprint_the_world#6486: I've talked about my position on musk before
Daj#7482: If I could choose to entirely undo the entire contributions of MIT's top professor or undo HPMOR, I'd choose the former |
Daj#7482: If writing Harry Potter fan fic gets us better long term outcomes, you better set up that fanfiction.net account
3dprint_the_world#6486: I think Musk is actually a genius but *not* in the way people usually describe. He's a brilliant manager and a great public speaker.
3dprint_the_world#6486: But he's not some kind of aerospace engineering genius.
Daj#7482: I think genius does not have to be purely mathematical or technical
Daj#7482: A very smart but not genius scientist with a _lot_ of charisma will often have more overall power than the smartest researcher in the field
Daj#7482: Charisma is a force multiplier
3dprint_the_world#6486: But it's not even about power.
Daj#7482: And again, I only care about results, not how we get them
Daj#7482: It's _only_ about power
Daj#7482: There is nothing else in the game
3dprint_the_world#6486: Lots of people are great at obtaining power and influence but totally miserable at doing anything with it.
Daj#7482: The ability to shape the future is the only thing we care about acquiring
Daj#7482: Ok yes, true
Daj#7482: Need a better word for what I mean
Daj#7482: I mean something like "distance they have pushed us away from bad timelines towards good timelines in expectation"
3dprint_the_world#6486: In terms of being a manager, Musk is definitely in the top 0.1%, and I think this aspect often gets downplayed when people talk about Musk.
3dprint_the_world#6486: Instead people overplay his overall technical abilities, which honestly are probably just average.
Daj#7482: I would btw also trade any MIT professor for Musk
Daj#7482: I don't care how "pure" his intellect is, he has results
Daj#7482: He gets shit done |
3dprint_the_world#6486: yeah, his management style is almost alien if you consider 99.9% of companies. Like the idea of actively inviting *negative* feedback about your product from users and employees is anathema in most of the tech industry.
3dprint_the_world#6486: I honestly think most of the success in the tech industry comes *despite* management, and mostly just because there's lots of smart people.
Daj#7482: One of the most important lessons I have learned and worked for years to try to unlearn is just _how much people care about status hierarchies_
3dprint_the_world#6486: (who could manage themselves just as well)
Daj#7482: Almost the entirety of hate I see towards Musk is, imo, completely purely status attacks
Daj#7482: One of the signs of those 0.001%
Daj#7482: Most "widely accepted authoritatively smart" people hate them
3dprint_the_world#6486: And also generally only caring about making a good product and not what will please shareholders. The most famous example is the 'tesla stock is overpriced' tweet, lol
Daj#7482: If there's someone getting shit done or giving you really useful ideas, the more people hate them, and the more respected those haters are, the more attention you should pay, usually
Sid#2121: Musk's contributions to the aerospace industry are amazing, but i love shitting on his dumbass tesla tunnel
Sid#2121: what the fuck is that, cmon
3dprint_the_world#6486: But I do also think Musk has some really weird political ideas, and the hyperloop is awful
3dprint_the_world#6486: haha yeah was just about to say that
Daj#7482: Another important fact about the super fringe people: They are absolute fucking weirdos
3dprint_the_world#6486: I used to think Musk had distanced himself from hyperloop but recently he doubled down on it which is really disappointing
Daj#7482: No one that is neat, respectable and socially fully normal is that kind of genius
Daj#7482: No one
3dprint_the_world#6486: hyperloop is a really really bad idea
3dprint_the_world#6486: anyway
Sid#2121: I don't really think musk is super fringe, Just a slightly higher than average intelligence businessman who hit it rich |
Daj#7482: These people are _really_ difficult to deal with and often do and say really crazy shit
Sid#2121: and who loves to meme and doesn't really give a shit
Daj#7482: It's part of the package
Daj#7482: I disagree, I think Musk is an absolute outlier
3dprint_the_world#6486: He's a slightly higher than average intelligence businessman *who is actually good at managing people*, who hit it rich.
bmk#1476: Come join the "Musk is overrated but still kinda instrumentally useful i guess" club!
Daj#7482: There are tons of people in situations like Musk
Sid#2121: yeah this is absolutely me
Daj#7482: And they make it rich and don't get shit done
Sid#2121: yes, and i think most rich people are fucking idiots
Daj#7482: fair
3dprint_the_world#6486: yeah, I know quite a few rich people. Mostly idiots.
Daj#7482: I guess your definition of "slightly above average intelligent" is close to the upper limit in my model lol
Sid#2121: elon musk is not an idiot, but not "0.0001 percentile intelligent"
Daj#7482: I think there are very few if any people as instrumentally intelligent as Musk
Daj#7482: or more
Louis#0144: We don’t care if it gets caught, but we have to make sure it can’t crawl through our skin.
Your brain wants to know if something is going to try to enter or leave the room.
If it sees a person opening the door, it knows they are either trying to enter or exit the room.
Your brain is trying to prevent an intruder from getting in or out of the room. |
By thinking about it, he makes himself more aware of what could happen if he opened the door.
He has been conditioned to think that if he opens the door, someone will be able to get in or out of the room.
So he feels compelled to open the door.
He doesn’t need to open the door, it’s just that he is forced to do so by the fact that there is no other option.
The reason Hansel does not want to open the door is because
Hansel’s hand still trembles as he pushes open the twice-cooked door.
The last time he saw the house he was glancing back over his shoulder as he and his sister fled into the trees.
Louis#0144: Call this piece hansel introspects about the properties of a door while being hunted by a witch
3dprint_the_world#6486: about wealth: people romanticise the idea of individual accomplishment but the reality is that most wealth is inherited or due to luck. I'm sorry but it's true.
Daj#7482: I think my "maximum intelligence" just is rather low compared to your model lol
Sid#2121: I think we could all be referring to vastly different things when we say the word 'intelligence'
Louis#0144: I added a slider to make the stories more or less interesting
Daj#7482: Ye, I use the instrumental definition of intelligence
Daj#7482: Intelligence is how good you are at achieving your goals
Daj#7482: Lots of people want to get rich and go to Mars
Daj#7482: If they're so smart, why aren't they Musk?
3dprint_the_world#6486: intelligence: the ability to accurately predict the next token in a sentence
Sid#2121: i think this is a poor definition of intelligence. I think you underestimate how much achievement is down to circumstances, and luck
Daj#7482: I _genuinely_ think "If you're so smart, why aren't you rich?" is a legitimate challenge to anyone claiming to be smart
3dprint_the_world#6486: @Sid yeah I'm trolling |
Daj#7482: Expectation and uncertainty is implied
Sid#2121: no i'm talking to connor @3dprint_the_world
Sid#2121: your definition of intelligence isn't bad lmao
3dprint_the_world#6486: oh sorry
Daj#7482: I find any non-instrumental definitions of intelligence basically incoherent
Daj#7482: At least, I have yet to encounter one that isn't incoherent and still captures all the properties I care about
3dprint_the_world#6486: maybe, but "I'm rich therefore I'm smart" holds basically 0 weight to me, given my life experience.
Daj#7482: That's different
3dprint_the_world#6486: The vast majority of rich people I've known (and I've known a lot) are abject morons.
Daj#7482: and btw there are totally good reasons for a smart person to not be rich
Daj#7482: Bad life circumstances, or just not optimizing for money
3dprint_the_world#6486: wealth kind of amplifies stupidity in a way
Daj#7482: I'm talking about the reverse arrow of causality
3dprint_the_world#6486: yeah I know
Daj#7482: "Most really smart people could be rich if they wanted to"
3dprint_the_world#6486: although what's your definition of rich
3dprint_the_world#6486: it used to be you're rich if you have a million dollars
Daj#7482: It's just a kind of proxy for the amount of optimization pressure they can exert
3dprint_the_world#6486: nowadays a million dollar net worth might be borderline poor
Daj#7482: If you WANT to be rich and have lots of instrumental intelligence , you should easily become rich |
3dprint_the_world#6486: anyway I gotta go work
Daj#7482: I've seen this happen multiple times in fact lol, really smart people just being like "brb, getting a million dollars this year", and just...succeeding lol
Daj#7482: Alrighty, I should head to bed
StellaAthena#3530: I agree, but I also think “being rich is boring” is a legit answer.
bmk#1476: well, it's instrumentally useful for many exciting things
bmk#1476: i don't care *about* being rich but a load of money would sure be helpful for achieving my goals
3dprint_the_world#6486: another legit answer is that it's probably more worthwhile to get lots of people (including rich people) to agree with me rather than just putting all my goals on hold until I obtain money.
3dprint_the_world#6486: EY is not rich.
bmk#1476: having a load of money gives me more agency than being bound to other people
3dprint_the_world#6486: aha, but does it?
bmk#1476: almost by definition yes?
3dprint_the_world#6486: problem is, the threshold of money you need to actually start being completely free to pursue your own goal might actually be quite a lot of money (depending on the goal)
bmk#1476: well, it's not all or nothing
bmk#1476: the less you rely on others financially, the more agency you have
3dprint_the_world#6486: like if you have the goal of Elon Musk of getting to Mars, you need to be extremely wealthy before you can even start thinking about doing it yourself
bmk#1476: no, but, musk would have a hell of a lot harder job if he had a tenth of the money he has now
bmk#1476: and a hell of a lot easier job if he had 10x more money
3dprint_the_world#6486: but that's what I mean though
3dprint_the_world#6486: there's people like Musk, who do have the money to do it themselves, and that's great. But there's also people like Zubrin, whose time would mostly be wasted if they tried to do that path (make a lot of money, start a rocket company, etc.)
3dprint_the_world#6486: instead Zubrin decided to raise awareness and write books |
3dprint_the_world#6486: which influenced people like Musk
3dprint_the_world#6486: taking that path was probably way more effective for Zubrin's goals
3dprint_the_world#6486: and I'm citing Zubrin as an example because I'm pretty sure he could have run a successful rocket company if he wanted to (being an actual aerospace engineer with public following)
bmk#1476: i mean, if he had more money he could have made much more influential books
bmk#1476: he could have paid for much more pr to increase his reach
bmk#1476: he could have paid for ghostwriters
3dprint_the_world#6486: lol @bmk I think you're missing the point
bmk#1476: i'm saying more money always better
3dprint_the_world#6486: yes that's precisely *not* the point
3dprint_the_world#6486: you're ignoring the cost of obtaining the money
3dprint_the_world#6486: the cost in terms of time, aligning yourself to that goal, etc.
StellaAthena#3530: The purpose of money is to purchase happiness. I have a moderate amount of money, and doubling my income currently appears to require greatly decreasing my happiness. The increased money would not be able to buy me the lost happiness.
bmk#1476: i dont think i disagree
3dprint_the_world#6486: the question isn't "if I have a million dollars, how should I pursue my goals?", it's "is it more worthwhile to become an entrepreneur as an intermediate goal, or instead try to influence people to do my bidding?"
3dprint_the_world#6486: and sometimes the answer is "the latter"
3dprint_the_world#6486: sometimes those two might actually be the same goal.
3dprint_the_world#6486: (this is also why the concept that superintelligence will just translate any goal into 'make more money' is kind of bogus)
3dprint_the_world#6486: (even though instrumental convergence definitely is a useful concept to thinkabout)
bmk#1476: i mean, ML engineer isn't exactly a low paying job, would be high happiness for me, and also instrumentally useful in itself towards alignment goals
StellaAthena#3530: It was a big change to how I thought about the world when I realized the saying “money doesn’t buy happiness” is false |
bmk#1476: so i guess in my case it lines up
StellaAthena#3530: Sure
StellaAthena#3530: That’s perfectly reasonable. I currently make 90k/year, but have turned down offers for almost twice that much to work at a hedge fund
3dprint_the_world#6486: I think "money doesn't buy happiness" isn't intended to be taken literally. It's supposed to mean "you may think you'll be happy if you have more money, but that's actually not guaranteed"
StellaAthena#3530: This would make me unhappy in several ways:
1. More boring job
2. Worse ability to manage my disabilities
3. Less flexibility in when I work.
bmk#1476: i think vague truisms with tons of asterisks attached probably just aren't the best way to convey wisdom in general
3dprint_the_world#6486: yeah well
3dprint_the_world#6486: people like that shit
3dprint_the_world#6486: lol
3dprint_the_world#6486: personally I'm a huge fan of Yogi Berra-isms
bmk#1476: i think it's a great tragedy that speaking in riddles is associated with wisdom
StellaAthena#3530: I think a more accurate truism is “if you aren’t happy without money (and have enough to satisfy your needs), having more money won’t make you happy”
3dprint_the_world#6486: i.e. "You can observe a lot by just watching."
StellaAthena#3530: Vaguely styling this after a much more reasonable truism: if you can’t be happy single, you won’t be happy in a relationship.
3dprint_the_world#6486: but if you're not happy in a relationship, you may be happy single!
bmk#1476: we need to start normalizing clunky but accurate chunks of wisdom over poetic but kinda vacuous, inaccurate, and/or misleading truisms
bmk#1476: so yes that sounds much better |
bmk#1476: i can do one better: "You can observe a lot by just observing." :bigbrain:
3dprint_the_world#6486: lol
3dprint_the_world#6486: I suppose it's a play on the different meanings of 'observe' and 'watch'
3dprint_the_world#6486: kind of like "if you come to a fork in the road, take it"
3dprint_the_world#6486: I always took that to mean: "If you come to a fork in the road, take one option, instead of going back"
3dprint_the_world#6486: i.e. "don't let a tough decision discourage you from trying at least one option"
3dprint_the_world#6486: of course you can also interpret it as a dining fork stuck in the road, but that's just silly.
bmk#1476: "the world if people just said this: [insert picture of utopia]"
3dprint_the_world#6486: lol
3dprint_the_world#6486: maybe
3dprint_the_world#6486: but as I said, people like this shit
bmk#1476: but this is the Cool Rationalists Club
3dprint_the_world#6486: I guess maybe phrasing things in that way makes them more memorable
bmk#1476: clearly they just need to start using anki
3dprint_the_world#6486: it's why people like poems
3dprint_the_world#6486: if you think about it, the idea that words have more value if they rhyme is kinda weird and absurd
3dprint_the_world#6486: but I suppose it does something to the normie human brain
3dprint_the_world#6486: there's some trigger in there that if it 'sounds' aesthetically 'right', then the words themselves carry more importance/truth/meaning
bmk#1476: the entire idea of (LW) Rationalism™©️ is to find all the places where the normie human brain makes wrong judgements and try to compensate for them, right
Sid#2121: ah yes, the classic rationalist hot take of "aesthetics is useless" |
Sid#2121: very :bigbrain:
bmk#1476: this is entirely different from my take what the heck
bmk#1476: i'm saying that poetry is useless *for conveying practical, actionable, concrete advice*
Sid#2121: hard disagree
Sid#2121: poetry and epic poems have been used to convey concrete advice for millenia
Sid#2121: have you heard of dreamtime stories?
AI_WAIFU#2844: Meanwhile on LW https://www.lesswrong.com/posts/fwvr3fXdAFTdfszMB/the-steampunk-aesthetic
bmk#1476: if i want to tell someone "don't let a tough decision discourage you from trying at least one option" i should just say that and not some compressed poetry version
bmk#1476: that doesn't make it *good* at it
Sid#2121: it *does* though otherwise all of our folkloric stories would be dry and textbook-like
AI_WAIFU#2844: but they are though?
bmk#1476: people have been using leeches to cure diseases for ages
Sid#2121: bro folklore is fucked up, disgusting, weird, and beautiful. idk what you've been reading
Sid#2121: totally different thing
3dprint_the_world#6486: I think I ought to mention that my interpretation of Berra is actually probably not the typical normie version
3dprint_the_world#6486: I think a lot of normies probably just think of Yogisms as just being absurd/nonsense
3dprint_the_world#6486: But I think they fail to grasp the true meaning of Yogism
StellaAthena#3530: Does anyone know how well does GPT-3 work as a generator of text *for training ML models*? Has anyone taken 100 GB of output and tied training a model on that?
AI_WAIFU#2844: I think that's explicitly forbidden
bmk#1476: they cant do nuffin bout it |
AI_WAIFU#2844: but see all the distilling literature
Sid#2121: paying for 100GB of gpt3 outputs would probably cost about as much as just training a new model
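A rough back-of-envelope version of that claim; every number below is an assumption for illustration, not a quoted price:

```python
# Hypothetical cost of buying ~100GB of GPT-3 output through an API.
bytes_per_token = 4          # rough average for English text (assumption)
price_per_1k_tokens = 0.06   # assumed $/1k generated tokens (assumption)

tokens = 100e9 / bytes_per_token              # ~2.5e10 tokens
cost = tokens / 1000 * price_per_1k_tokens    # ~$1.5M under these assumptions
print(f"~{tokens:.1e} tokens, ~${cost:,.0f}")
```

Under these made-up numbers you land in the same ballpark as a large training run, which is the point being made.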
chirp#4545: @StellaAthena https://twitter.com/krandiash/status/1290151816705880066
bmk#1476: i expect it to be significantly worse than straight distillation
StellaAthena#3530: @chirp that’s exactly what I’m think about!
StellaAthena#3530: @bmk @AI_WAIFU @Sid I’m not interested in doing this, and I’m not interested in an alternative method with a similar result. For theoretical reasons, I am specifically interested in the exact question “if I ask a language model to generate a dataset for training a language model, will the resulting dataset be usable?”
bmk#1476: for the specific case where model 1 is much much bigger than model 2?
bmk#1476: er, it would probably work better than a control, but *how well* is another question and i'm not sure i can say with confidence that the result would be stellar
StellaAthena#3530: Yes.
bmk#1476: this is basically distillation but discretized, and the main advantage of distillation comes from the "dark knowledge" in the probabilities that *aren't* the correct class
bmk#1476: so i expect it to be strictly worse than distillation
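For concreteness, a minimal sketch of the distinction being drawn, in PyTorch (function names and the temperature value are illustrative, not from any particular codebase):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soft-label distillation: KL between temperature-softened teacher and
    # student distributions. The "dark knowledge" lives in the probability
    # mass the teacher assigns to the tokens that *weren't* sampled.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def hard_label_loss(student_logits, sampled_token_ids):
    # Training on sampled text keeps only one token per position and
    # throws the rest of the teacher's distribution away.
    vocab = student_logits.size(-1)
    return F.cross_entropy(student_logits.view(-1, vocab),
                           sampled_token_ids.view(-1))
```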
olives ❀#2305: i can explain quantum superposition and entanglement now
olives ❀#2305: quantum superposition
in their words: a superposition is something both 1 and 0
in my words: a superposition means "i dont know"
quantum entanglement
in their words: you can transfer data instantly by knowing the relationship between two quarks
in my words: the relationship cannot be established faster than the speed of light. hence, this is stupidly useless. it is the same as if i wrote one and zero on two papers. then, i tell you to take it to the other side of the world. once you look at your paper, you immediately know what my paper will say. 🙃
olives ❀#2305: conspiracy theory: quantum computing is fake. we are all being fooled like that time with the dihydrogen monoxide prank
olives ❀#2305: wait ||fuck||, my idea was already proposed by einstein
3dprint_the_world#6486: > a superposition means "i dont know"
This is incorrect.
3dprint_the_world#6486: Read up on Bell's theorem 😃
3dprint_the_world#6486: I would say that: A superposition is a *fundamentally different thing*, which the word 'both' can't really adequately capture.
olives ❀#2305: oh no... 🙈 🤯 💥
3dprint_the_world#6486: the usual sense of the word 'both' is just inadequate here. i.e. someone can be 'both' male and European, but a light bulb can't be 'both' on and off at the same time. The sentence just doesn't adequately convey the meaning of superposition.
3dprint_the_world#6486: the problem is just because you're trying to make sense of a quantum thing by analogy to e.g. light bulbs or coins.
3dprint_the_world#6486: but you have to instead try to think in a different way: the superposition (or, specifically, the wavefunction) *is* the fundamental thing.
3dprint_the_world#6486: so it's not like we are taking mutually incompatible states (spin up and spin down) and yet creating a state that is both. Instead, an electron in a superposition of spin up and spin down is just a basic thing in nature that exists. We *observe* what look like actual discrete states when we do measurements, sure, but that's really just an illusion.
olives ❀#2305: oh wow thank you; everyone is so nice here
olives ❀#2305: 🤗
olives ❀#2305: huggingface
3dprint_the_world#6486: nah we're not that nice, we're just happy to waste time shooting the shit with strangers lol
3dprint_the_world#6486: (also it's worth mentioning this isn't necessarily the view that everyone in the field of QM interpretations agrees on. But that's a very long discussion.)
gwern#1782: if you train on sampled output, you're also training on the really screwed up distribution which is said sampled output compared to the original logit distribution
gwern#1782: you're stacking flaw on flaw
StellaAthena#3530: Tbh I think it’s clearest in math
StellaAthena#3530: 1 + i is a superposition of 1 and i. It’s not both, it’s not an uncertain choice between them. It’s just another thing
3dprint_the_world#6486: also, QM actually distinguishes between 'I don't know if it's 0 or 1' and 'it's in a superposition of 0 and 1'. Those are distinct states and they have different properties. If you're interested, these are all captured in the so-called density matrix formalism. |
StellaAthena#3530: (Forgive my lack of normalization)
3dprint_the_world#6486: It was bothering me tbh, but you're forgiven
3dprint_the_world#6486: (1 + i)/sqrt(2)
StellaAthena#3530: If I had a square root key I would have used it
3dprint_the_world#6486: lol
StellaAthena#3530: EXCUSE ME
StellaAthena#3530: I think you mean [sqrt(2) + i sqrt(2)]/2
3dprint_the_world#6486: lol
bmk#1476: dont qubit values live on the entire complex projective plane
3dprint_the_world#6486: you mean the complex projective *space*
bmk#1476: basically the same thing
3dprint_the_world#6486: not really but ok
bmk#1476: anyways ok so complex projective space: doesn't that mean that every point, not just those with norm 1, maps to a different point on the bloch sphere
3dprint_the_world#6486: I think you're in a state of both understanding QM, not understanding QM, and everything in between
bmk#1476: bloch sphere is CP^1 right
3dprint_the_world#6486: yea
bmk#1476: so 1 + i and (1 + i) / sqrt(2) are in fact not the same point
3dprint_the_world#6486: oh I see what you're getting at. Yeah they're different but I think Stella meant the pure state (1+i)/sqrt(2)
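A small numpy sketch of the map onto the Bloch sphere being discussed; normalizing first is exactly what makes overall scale (and global phase) drop out, i.e. the CP^1 identification:

```python
import numpy as np

def bloch_vector(psi):
    # Map a (possibly unnormalized) qubit state [a, b] to Bloch coordinates.
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    a, b = psi
    return np.array([2 * (a.conjugate() * b).real,
                     2 * (a.conjugate() * b).imag,
                     abs(a) ** 2 - abs(b) ** 2])

print(bloch_vector([1, 1j]))           # [0. 1. 0.]
print(bloch_vector([1 + 1j, 1j - 1]))  # same point: a rescaled copy of the above
```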
olives ❀#2305: what have i caused
AI_WAIFU#2844: yeah this happens pretty much every day |
olives ❀#2305: im going to make a discord bot with this: https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313
olives ❀#2305: i'm open to people telling me that this is the stupidest idea i have ever had
olives ❀#2305: there we go bmk and AI_WAIFU is now ~~typing~~ shouting at me
bmk#1476: ~~at least we're not talking Catgirl Theory~~
olives ❀#2305: ...
AI_WAIFU#2844: Now it's been a while since I looked at QM, but if you ignore phase for a sec, a superposition can be interpreted as "I don't know" so long as you look at just the eigenstates.
AI_WAIFU#2844: Right?
cfoster0#4356: I'll bite. Not on this Discord, plz @olives ❀
olives ❀#2305: [custom emoji]
bmk#1476: This discord server is not for general ML help, r/learnmachinelearning is probably the place to go
AI_WAIFU#2844: That's a funny way to spell r/machinelearning
bmk#1476: r/ML is also probably not the right place to ask
AI_WAIFU#2844: Yeah but everyone posts their noob stuff there anyways.
AI_WAIFU#2844: which was the joke
bmk#1476: Ah, ok
bmk#1476: This server is like what r/ML is supposed to be but hyperfocused on scaling and with some alignment and miscellaneous and memes mixed in
AI_WAIFU#2844: Chilli might take offense to that
cfoster0#4356: Err I was also just saying "don't use our data or deploy the bot here"
bmk#1476: well, it's not possible to deploy a bot to a server you don't own
bmk#1476: but yes we will not add the bot |
bmk#1476: (and *please* do not scrape discord, that's against tos and also just generally a dick move)
olives ❀#2305: ~~userbot~~
olives ❀#2305: i was planning on making it DM-only 🤔 but i dont think its possible for someone to talk to bot without mutual server
olives ❀#2305: > I've like...never had to ban anyone, ever
well, thats good to know
StellaAthena#3530: While this is true, it’s not true in an interesting sense. You can make the same statement about an unobserved die roll.
AI_WAIFU#2844: I don't know, to me that really simplifies QM from "wobbly woblly superposition handwavy bullshit" down to "unobserved die roll"
StellaAthena#3530: Let’s define a new term: qchance. qchance is closely related to probability, similar to how standard deviation is closely related to variance. Our whole lives we’ve been used to talking about probability, but in QM it’s often more convenient to talk about qchance. The arithmetic relationship is that qchance • qchance* = probability, where x* denotes the conjugate of x. For any event, qchance(E) • qchance(E)* = probability(E)
StellaAthena#3530: (I wrote the wrong thing at first, but now it’s correct)
StellaAthena#3530: qchance(roll a 6 on a d6) = 0.4082... because 0.4082 • 0.4082* = 1/6
StellaAthena#3530: This is just an abstract concept right now. I’m not talking about QM, just about probability distributions
StellaAthena#3530: qchance(flip a coin and get heads) = sqrt(2)/2 because sqrt(2)/2 • (sqrt(2)/2)* = 1/2 = probability(flip a coin and get heads)
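The qchance-to-probability rule in a few runnable lines, purely as a numeric check of the examples above:

```python
def probability(qchance):
    # probability = qchance * conj(qchance), per the definition above
    return (qchance * complex(qchance).conjugate()).real

print(probability(0.4082))      # ~1/6: the d6 example
print(probability(2**0.5 / 2))  # 0.5: the coin flip
print(probability(0.6j))        # 0.36: complex qchances work the same way
```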
3dprint_the_world#6486: to see precisely why this is incorrect, we can talk about ~~Bell's theorem~~ the EPR paradox
StellaAthena#3530: Now, the core weird thing about quantum mechanics is that we are used to probabilities being between 0 and 1
StellaAthena#3530: for the examples I have mentioned so far, qchance is also between 0 and 1. However that turns out to not always be the case.
3dprint_the_world#6486: but basically, a superposition actually contains more information than "I don't know"
3dprint_the_world#6486: and you can even quantify this in terms of its entropy, in the density operator formalism
3dprint_the_world#6486: and, even more concretely, you can manipulate a superposition to actually yield a spin-up or spin-down state with 100% probability, without destroying information, something that wouldn't be possible if you just didn't know.
StellaAthena#3530: Some complex numbers satisfy the equation 0 <= x • x* <= 1. Specifically, all of the ones inside the unit disc.
StellaAthena#3530: When we say that something is in “superposition” this is really all we are saying: “the qchance of the event is a complex number” |
3dprint_the_world#6486: even though it seems weird, a superposition of two states is actually a pure state with entropy 0, whereas "I don't know if it's 1 or -1" is a mixed state with entropy 1.
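A minimal numpy sketch of that distinction in the density-matrix formalism: the equal superposition is a pure state with zero von Neumann entropy, while the 50/50 classical mixture ("I don't know") carries a full bit:

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho log2 rho), computed from the eigenvalues
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # convention: 0 log 0 = 0
    return float(-(evals * np.log2(evals)).sum())

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)  # the superposition state

rho_pure = np.outer(plus, plus.conj())
rho_mixed = 0.5 * np.outer(ket0, ket0.conj()) + 0.5 * np.outer(ket1, ket1.conj())

print(von_neumann_entropy(rho_pure))   # 0.0: pure, despite the superposition
print(von_neumann_entropy(rho_mixed))  # 1.0: one full bit of ignorance
```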
bmk#1476: do those both fall somewhere on the sphere
StellaAthena#3530: Yes, this is absolutely weird. Every example of a random variable that you can name probably has a real-valued qchance
3dprint_the_world#6486: @bmk the mixed state is inside the sphere. not on it.
bmk#1476: ohhhh
bmk#1476: so *the entire sphere* is pure states
3dprint_the_world#6486: ya
bmk#1476: so the space of all possible quantum states is actually a ball
3dprint_the_world#6486: well, for a single qubit, yes.
bmk#1476: yeah thats what i meant
StellaAthena#3530: But this is all that is happening. It’s just that the way the information is encoded is weird. And we can make use of this! There is quantum information theory and quantum probability theory! These are fields with really interesting work. You’re dealing with an unusual random variable, but you’re still just dealing with a random variable. You don’t need to throw up your hands and say “nothing makes sense anymore”
StellaAthena#3530: @dopa @AI_WAIFU @cfoster0 does that make sense?
olives ❀#2305: [custom emoji]
olives ❀#2305: wow yall are so smart
smh imagine
3dprint_the_world#6486: we're not smart we just spent years studying this shit, lol
StellaAthena#3530: ^^ tbh yes
3dprint_the_world#6486: ok well I mean Stella is definitely smart
AI_WAIFU#2844: hold on
3dprint_the_world#6486: btw this entropy stuff is basically the reason why making quantum computers is hard - your *pure* superposition states keep converting to high-entropy mixed states 😁 |
StellaAthena#3530: Doing computation with this is tricky, and requires some cleverness.
StellaAthena#3530: Designing quantum algorithms is hard, and as @3dprint_the_world says, making the computers actually *work* is also hard.
StellaAthena#3530: So, what I said about qchance is true but it’s not the whole story. The other important piece is that you can *encode* these qchances into electrons
StellaAthena#3530: Bits have two states: 0 and 1. However with a quantum system that’s not the only option. You can create a *superposition* which is a *probabilistic combination of the two*. Effectively, the state of the electron is a random variable.
AI_WAIFU#2844: @3dprint_the_world can you give an example of a 0 entropy superposition? What does that look like for a single qbit?
StellaAthena#3530: Random variables are not “independent” or “dependent.” A sampling methodology can be, but it doesn’t make sense to ask if a coin flip is independent right? The same is true here.
bmk#1476: a coin flip itself is not dependent or independent
bmk#1476: two coin flips are independent *of each other*
StellaAthena#3530: So the answer is that two qbits are *usually* independent
StellaAthena#3530: There is a second phenomenon – different from superposition – that can cause two qbits to be dependent
dopa#3178: this what I was not sure about heh
StellaAthena#3530: So back to computing. I’m going to switch notations now to what physicists tend to use, as its 99% awful but actually rather useful in this precise circumstance.
Let’s denote the two states |0> and |1>, the extra fluff is largely to distinguish them from the coefficients they are going to get. We write a|0> + b|1> to represent the idea that there are a-many outcomes that come up |0> and b-many that come up |1>
dopa#3178: does entanglement requires superposition ?
StellaAthena#3530: There’s one caveat here, which is that I’m again not talking about probability. I’m talking about qchance
StellaAthena#3530: Here they’re using “up” and “down” instead of “zero” and “one” but it’s the same idea https://cdn.discordapp.com/attachments/729741769738158194/787900877729955840/image0.png
StellaAthena#3530: |z|^2 = z•z* so this is the same way to convert qchance to probability that I introduced
StellaAthena#3530: So there’s a qchance of 3i/5 that the electron is up and a qchance of 4/5 that it is down
StellaAthena#3530: This gives a *probability* of 9/25 for up and 16/25 for down. |
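The same worked example in numpy (convention assumed here: first entry is the "up" amplitude, second is "down"):

```python
import numpy as np

state = np.array([3j / 5, 4 / 5])  # qchances for up and down
probs = np.abs(state) ** 2         # |z|^2 = z * conj(z)
print(probs)                       # [0.36 0.64] -> 9/25 and 16/25
print(probs.sum())                 # 1.0, as probabilities must
```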
StellaAthena#3530: It’s worth noting that another popular notation is to use vectors. One of the states is [0, 1] and the other is [1, 0]. The sphere @bmk was talking about is defined by these vectors
StellaAthena#3530: I prefer the |> notation for talking about qchance and probability because the coefficients correspond directly to the already familiar notion of counting outcomes in an obvious way.
bmk#1476: And the sphere connects nicely to complex numbers too right
dopa#3178: why electron can't be sideways (left or right) ?
bmk#1476: i think the tldr is "spin isn't actually a ball spinning, it's just weird"
bmk#1476: does that sound close enough to right
StellaAthena#3530: BMK is exactly right
StellaAthena#3530: It’s even spin!
StellaAthena#3530: “Spin” is a basic physical property like charge
StellaAthena#3530: And like charge, there are two options (for electrons)
StellaAthena#3530: We usually call them “spin up” and “spin down” but the name is somewhat artificial
StellaAthena#3530: The reason we don’t have three states is that electrons don’t have three possible spin values
dopa#3178: does spin represents entropy in way ?
StellaAthena#3530: No
StellaAthena#3530: Well, systems with many particles in superposition have a higher entropy than ones without
StellaAthena#3530: (Because they have probabilities. Certain events have low entropy and random events have higher entropy)
AI_WAIFU#2844: Stella, do you know what 3dprint_the_world was going on about with zero entropy superpositions?
dopa#3178: me thinks aspect of entropy is even harder to understand
StellaAthena#3530: I’m not sure, but my understanding is that (if we are only talking about the state of the particle) 0 entropy requires it to not be in superposition
dopa#3178: how state of the particle is defined ? |
AI_WAIFU#2844: Yeah, I can see it for superposition's of things that aren't eigenstates, but for eigenstates 0 entropy <==> not in superposition.
AI_WAIFU#2844: was my understanding
bmk#1476: i'm honestly still not entirely sure what the difference is between being on the sphere and being inside the ball
bmk#1476: so even superpositions are pure states too, i.e they're on the sphere
bmk#1476: then what the heck is a *mixed* state
AI_WAIFU#2844: I don't get the sphere thing at all tbh I find it confusing. I think in terms of vectors
bmk#1476: but spheres are nice and visual
StellaAthena#3530: Pure vs mixed states requires me to use the word “Hilbert space” several times and therefore it outside the scope of this conversation
bmk#1476: I think I know what a hilbert space is
AI_WAIFU#2844: can we just pretend a hilbert space is a regular 2 dimensional vector space
bmk#1476: Unless there's a physics specific meaning
dopa#3178: I tried to understand Hilbert space once
StellaAthena#3530: So far I’ve only been talking about pure states
StellaAthena#3530: Oh! I’m silly
StellaAthena#3530: So yes, everything I’ve mentioned is a pure state. A pure state is when a particle is either in state 0 or state 1
bmk#1476: A hilbert space is an inner product space where every sum of unit basis vectors where the sum of the squares of the coefficients exists converges in the space right
StellaAthena#3530: A *mixed* state refers to multiple qbits being “mixed” together. Aka entangled
AI_WAIFU#2844: oh
StellaAthena#3530: I wasn’t going to go there, but okay let’s
AI_WAIFU#2844: well that changes everything then |
bmk#1476: I'm probably going to sleep soon, I'll catch up on the convo tomorrow
AI_WAIFU#2844: cause now ya gotta look at the eigenstates of the system, not just the individual components
StellaAthena#3530: There is magic that you can do (ask a physicist to explain the how on a mechanical level, I don't know) that produces *correlated* qbits
dopa#3178: the systems in this case is one or multiple qubits ?
StellaAthena#3530: Multiple
StellaAthena#3530: We generally say “correlated” but it’s important to note that it’s **physically impossible** for two entangled particles to agree
StellaAthena#3530: One will be in state 0 and the other will be in state 1
StellaAthena#3530: There are no other options
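A sketch of a two-qubit state with exactly that behavior. The conversation doesn't pin down which entangled state is meant, so the singlet-like choice below is an assumption:

```python
import numpy as np

# Amplitudes over the two-qubit basis states |00>, |01>, |10>, |11>
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

probs = np.abs(singlet) ** 2
for label, p in zip(["00", "01", "10", "11"], probs):
    print(label, p)  # 01 and 10 each with probability 0.5
# P(00) = P(11) = 0: the two qubits can never agree.
```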
dopa#3178: I am confused; what does entropy represent in multiple qubits
StellaAthena#3530: @dopa this isn’t about entropy directly, but the answer is “the same thing as it always does”
StellaAthena#3530: In classical thermodynamics you can have several particles moving around, right? And talk about the entropy of the system?
dopa#3178: yes
StellaAthena#3530: Same thing here
dopa#3178: oh
StellaAthena#3530: Now, one of the really weird things about entangled photons is that you can move them far apart in space
StellaAthena#3530: Side note: we generally talk about electrons and photons when we talk about quantum particles. We aren’t required to restrict ourselves to them, but they’re easiest to use for computation and are the defaults.
StellaAthena#3530: Electrons encode their state with their spin, while photons encode their state with their polarity
dopa#3178: does polarity have more states than electrons ?
StellaAthena#3530: Nope
StellaAthena#3530: Still two classical states and then you can have superposition of them too |
dopa#3178: so the electron is moving inside the qubit and has an up/down state?
StellaAthena#3530: A qbit is not a physical object
dopa#3178: https://tenor.com/view/brule-what-ay-what-gif-14969459
StellaAthena#3530: Well
StellaAthena#3530: There is no particle called a qbit
bmk#1476: There exist physical objects that can represent a qbit
StellaAthena#3530: Just like there is no particle that is called a bit
dopa#3178: ok
StellaAthena#3530: The electrons in my shoes are not bits. The ones in my computer are.
StellaAthena#3530: Calling something a (q)bit is a statement about the fact that we are using it to encode information
StellaAthena#3530: It’s not a type of particle
dopa#3178: I understand this
dopa#3178: in a classical computer, multiple electrons represent a single bit
dopa#3178: if I am not mistaken
StellaAthena#3530: Idk
StellaAthena#3530: I’m a mathematician
dopa#3178: got it
dopa#3178: the confusion point I have is about the state of the system; is it the position and momentum of the particle + up/down state, or only the position and momentum of the particle ?
dopa#3178: not sure, may be it is irrelevant here
StellaAthena#3530: Up and down are not directly connected to position and momentum |
StellaAthena#3530: Particles have many properties. Spin is one of them, and position is another one of them
StellaAthena#3530: Just like the spin of an electron is a random variable, so is its position. However position is continuous, so now you have your desired many “state” situation!
StellaAthena#3530: I put state in quotes because there’s infinitely many of them and we don’t usually call it a state but oh well
StellaAthena#3530: Another term you may have heard is “wave function”
dopa#3178: yes, did not followed through to understand it in 2D space, using python script in grid world
StellaAthena#3530: A wave function tells you the qchance that a particle is in a given location, just like the expression |0> + |1> tells you the qchances
dopa#3178: ok
StellaAthena#3530: Like classical position, "quantum position" aka "wave functions" are the solutions to a certain differential equation (the Schrödinger Equation)
StellaAthena#3530: One other big buzzword I want to mention to make sure I hit all the common misconceptions about QM: observation.
dopa#3178: yep, there is github code for the Schrödinger Equation in 2D space, this is the only way I can understand mathematics 😦
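In the same spirit, a minimal 1D (rather than 2D) sketch: discretize the time-independent Schrödinger equation for a particle in a box with finite differences and diagonalize. Units (hbar = m = 1) and grid size are arbitrary choices:

```python
import numpy as np

n, L = 500, 1.0
dx = L / (n + 1)
# Kinetic operator -1/2 d^2/dx^2 as a tridiagonal matrix (V = 0 inside the box)
H = (np.diag(np.full(n, 1.0))
     - 0.5 * np.diag(np.ones(n - 1), 1)
     - 0.5 * np.diag(np.ones(n - 1), -1)) / dx**2

energies, wavefunctions = np.linalg.eigh(H)
print(energies[:3])  # close to (k*pi)^2 / 2 for k = 1, 2, 3
# |psi|^2 of the ground state peaks in the middle of the box:
print(np.argmax(np.abs(wavefunctions[:, 0]) ** 2))  # ~n/2
```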
dopa#3178: state of the system is position of particle and up/down particle state, did I understand this correctly ?
StellaAthena#3530: Yes
StellaAthena#3530: There are other things that can be quantum-y about a particle but those are the two things we are discussing.
dopa#3178: thank you!
StellaAthena#3530: The idea of observation / measurement in QM can sound really weird (what do you mean the state of the electron depends on if I look at it?!?!?) but in the qchance framework it’s quite simple
dopa#3178: I see it as temperature transfer of a sort, but it is probably wrong
StellaAthena#3530: You had a random variable (a coin spinning in the air) and then you *measured which outcome happened*
dopa#3178: this makes sense
StellaAthena#3530: Roughly speaking, when a particle is in superposition this means that it is still up in the air spinning.
StellaAthena#3530: When it gets “measured” (which really means “interacts with another particle in a certain way) its like the coin lands |
StellaAthena#3530: Do you know what “degrees of freedom” means?
dopa#3178: not sure, directional constrains ?
StellaAthena#3530: It’s more general than that
dopa#3178: range constrains?
StellaAthena#3530: It refers to how underdetermined a system is
dopa#3178: ok
StellaAthena#3530: When you give a description of a physical system, often times your description fits multiple possible systems
dopa#3178: underdeteminded in terms of positions and up/down state ?
StellaAthena#3530: Quantum mechanics is non-deterministic. When a particle is in superposition, it means that **physics doesn't uniquely determine its properties**
StellaAthena#3530: It is consistent with all of the laws of physics and the past states of the universe for it to be spin up, and it’s also consistent for it to be spin down
StellaAthena#3530: Likewise it’s consistent with physics for the particle to be in many different locations
StellaAthena#3530: (Not at once, just that there are multiple options)
dopa#3178: position is consistent with physical properties, right ?
StellaAthena#3530: Yes
dopa#3178: evolution of the universe does not care if electrons are up or down, got it 🙂
StellaAthena#3530: I’m tired and should pretend to go to sleep now.
dopa#3178: thank you very much and good night, this was super helpful conversation!
StellaAthena#3530: But if you want to understand a quantum algorithm, I recommend two sources on Shor's algorithm:
https://www.scottaaronson.com/blog/?p=208 |
https://youtu.be/lvTqbM5Dq4Q
StellaAthena#3530: My pleasure
dopa#3178: thx for links, btw I am more interested in context of TSP problems, not sure if encryption is related to TSP's
Airatak#7842: Did google stop working? Damn 2020.
Airatak#7842: Use spotify man
Airatak#7842: Youtube has low quality music
Airatak#7842: I mean spotify is also lossy but it is better than youtube
Airatak#7842: tidal is lossless i guess
Airatak#7842: Yea for me too
Airatak#7842: lagging
Airatak#7842: See, I am not an audiophile and can't really tell a difference between spotify's lossy and tidal's lossless
Airatak#7842: but Youtube is just really bad
Airatak#7842: there are people talking in the background at times
Airatak#7842: idk maybe
Airatak#7842: They may use GCP which also may have an outage?
Airatak#7842: https://www.reddit.com/r/discordapp/comments/7mv84u/does_discord_still_run_on_gcp_google_cloud/
Airatak#7842: This is old but if it is still correct then it explains why this Google outage makes discord slow
Airatak#7842: Yup
Airatak#7842: It is an issue with google servers
Airatak#7842: So GCP and all google services down
dopa#3178: everything is down that requires auth
Airatak#7842: Well colab works. I'm happy.
Airatak#7842: wait even colab is acting weird
Airatak#7842: Even GCP seems to be down
Airatak#7842: That is why discord lags
Airatak#7842: For me gmail is not working at all
dopa#3178: it partially down 😦
dopa#3178: I think discord and other app having issue because of surge of users
Airatak#7842: Gmail is back up!
Airatak#7842: not everything
Airatak#7842: Gmail still working weird
Airatak#7842: says contacts not available
Airatak#7842: Discord still lagging a bit
dopa#3178: google voice is still down
Airatak#7842: I feel like I witnessed a historic moment today
dopa#3178: hehe
Airatak#7842: Drive and Slides do not work yet
dopa#3178: something similar happened couple years ago also
dopa#3178: if I am not mistaken |
dopa#3178: only true backup I did not have is google voice
dopa#3178: 😦
dopa#3178: let hope it is not a gmail massive leak lol
dopa#3178: damm that would be a disaster, a delete-things-from-search-engines business would make a killing
StellaAthena#3530: https://twitter.com/SophosAI/status/1338486871169650688
olives ❀#2305: anyone recognize this ip 34.201.172.85
olives ❀#2305: http://34.201.172.85
StellaAthena#3530: It’s owned by HF
StellaAthena#3530: FYI it’s generally bad practice to link directly to an IP address of unknown origin.
olives ❀#2305: its huggingface's ip 🤣
olives ❀#2305: i got it from `dig huggingface.co`
olives ❀#2305: i guess no one memorized huggingface's ip but everyone memorized the youtube video id for every rick roll ever
StellaAthena#3530: ^^
StellaAthena#3530: **Please** don’t post random, unidentified IPs as links
bmk#1476: I don't think random ips are the end of the world since anyone can buy a domain anyways but I'm still not entirely sure how this particular one is relevant to anything
Louis#0144: http://127.0.0.1/
Anyone recognize this URL? I think it's hosting illegal content
bmk#1476: oh god oh fuck someone is hosting a shockingly large amount of images of joe biden eating a sandwich there
bmk#1476: what a weirdo |
Ken#8338: How good of a predictor are you? https://www.metaculus.com/ai-progress-tournament/
Ken#8338: $50,000 AI forecasting tournament:
Metaculus, an AI forecasting community and website, has announced an AI forecasting tournament, starting this month and running until February 2023. There will be questions on progress on ~30 AI benchmarks, over 6-month; 12-month; and 24-month time horizons. The tournament has a prize pool of $50,000, which will be paid out to the top forecasters
3dprint_the_world#6486: the state |0>, the state |1>, and the state (|0> + |1>)/sqrt(2) are all pure states with 0 entropy.
3dprint_the_world#6486: in fact you can do a basis change to say |a> = (|0> + |1>)/sqrt(2) and |b> = (|0> - |1>)/sqrt(2), and now the states |0> and |1> become the superposition states.
3dprint_the_world#6486: |0> = (|a> + |b>)/sqrt(2) 😃
3dprint_the_world#6486: an example of this in physics is that a photon can be either vertically or horizontally polarized, or it can be either left-circular or right-circular polarized.
3dprint_the_world#6486: or any superposition of these
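That basis change is just the Hadamard matrix; a quick numpy check, assuming the |a>, |b> definitions above:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # columns are |a> and |b>

ket0 = np.array([1.0, 0.0])
print(H @ ket0)        # [0.707 0.707]: |0> = (|a> + |b>)/sqrt(2)
print(H @ (H @ ket0))  # [1. 0.]: the change of basis is its own inverse
```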
StellaAthena#3530: Ohhhhh
StellaAthena#3530: This made things click 🙂
Deleted User#0000: i can only think of pipe operators from Elixir when i see |> https://elixirschool.com/en/lessons/basics/pipe-operator/
frank cilantro#9153: somebody should come up with probability amplitude neural networks haha
StellaAthena#3530: What do you mean?
StellaAthena#3530: How do these differ from quantum circuits?
frank cilantro#9153: i guess i am unfamiliar w them so maybe they do exist lol
dopa#3178: spiking neural network signals are encoded as sine waves, if I am not mistaken
StellaAthena#3530: What do you want these probability amplitude neural networks to do, precisely?
3dprint_the_world#6486: keep in mind that in quantum circuits all operations must be unitary, which implies they are reversible and no information loss occurs
3dprint_the_world#6486: whereas usually in neural nets there's substantial information loss from input->output (this is the whole point)
3dprint_the_world#6486: so you have to get around this using ancilla qubits |
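A small sketch of the unitarity constraint being described; the second matrix is a made-up example of an information-losing map that would be rejected:

```python
import numpy as np

def is_unitary(U, tol=1e-10):
    U = np.asarray(U, dtype=complex)
    return np.allclose(U.conj().T @ U, np.eye(U.shape[0]), atol=tol)

hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
projector = np.array([[1, 0], [0, 0]])  # throws away the |1> component

print(is_unitary(hadamard))   # True: reversible, no information loss
print(is_unitary(projector))  # False: loses information, not a valid gate
```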
Daj#7482: Attract VC money
Daj#7482: :berk:
frank cilantro#9153: oh nothing in particular, just was wondering if it was possible as a proof of concept
3dprint_the_world#6486: there's lots of work already in quantum machine learning, etc.
StellaAthena#3530: Okay, but what do you want those words to mean? When defining a new type of object you can't just give it a name. You need to give it a functionality
3dprint_the_world#6486: so far quantum machine learning hasn't really been shown to have major advantages because the bottleneck is getting data in/out of the system
3dprint_the_world#6486: which is just as slow in quantum computers as it is in classical computers (actually, slower, because of how delicate quantum states are)
Louis#0144: https://open.spotify.com/track/6BRUJVnmtsg3gwyzW7b6OP?si=TlNHRQg7Q6CLq8DecHADow misread this as luigi's song
Louis#0144: got v confused
Louis#0144: like dang luigi u goddamn bard
StellaAthena#3530: relatedly, @3dprint_the_world we were having a chat about quantum algorithms a month or so ago
Louis#0144: the quantum NN paper is really good
Louis#0144: erick loved it
Louis#0144: which means its top notch info theory
StellaAthena#3530: @3dprint_the_world You were talking about decomposing spaces with symmetry, but I wasn't quite able to follow. Can you explain that some more?
Louis#0144: I dont think anyone would argue that theyre ready *right now*
dopa#3178: @3dprint_the_world if TSP can be solved faster in QM it will be a breakthrough in machine learning
Louis#0144: I think the consensus is a like a decade or two awya
Louis#0144: doubt tbh
Louis#0144: TSP being solved would break NP entirely |
dopa#3178: haha
StellaAthena#3530: There is zero reason to believe this would happen though
dopa#3178: corrections, substantially improved solutions
Louis#0144: ML is already drastically improving TSP
Louis#0144: so I would agree there
StellaAthena#3530: What **reason** do you have for believing this
dopa#3178: many learning problems can be defined as TSP's
StellaAthena#3530: No AI algorithm will ever solve P vs NP because AI algorithms aren't perfectly correct. AI being good at something is basically unrelated to its computational complexity, as complexity theorists define it.
frank cilantro#9153: what is the current best bound on length of approximately optimal tsp
Louis#0144: Oh i meant w my statement like
Louis#0144: approximations
dopa#3178: but it might help to find heuristics to convert problems from NP-hard to NP-complete ?
Louis#0144: I think QC will improve approximations
frank cilantro#9153: isn't it like 1 < x << 2
frank cilantro#9153: ratio
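(For reference: the classic Christofides algorithm guarantees a tour within 3/2 of optimal for *metric* TSP, recently improved to 3/2 minus a tiny epsilon; general TSP has no constant-factor approximation unless P = NP. A toy sketch of what a heuristic tour looks like, using nearest neighbour, which carries no constant-factor guarantee:)
```python
import math

def nearest_neighbor_tour(points):
    """Greedy TSP heuristic: always hop to the closest unvisited city.
    No constant-factor guarantee, but fast and often decent in practice."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour  # visit order; close the loop by returning to tour[0]

print(nearest_neighbor_tour([(0, 0), (3, 0), (3, 4), (0, 4)]))
```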
dopa#3178: it's still unknown what is in NP-hard vs NP-complete, and it's largely based on creatively defining heuristics
StellaAthena#3530: @dopa No, nothing that you're saying is true.
dopa#3178: 😦
dopa#3178: maybe I'm saying it wrong heh
StellaAthena#3530: Every NP-complete problem is NP-hard. There are NP-hard problems that are not NP-complete. These statements follow immediately from the definitions. |
dopa#3178: There are NP-hard problems that are not NP-complete - that's what I meant to say, sorry
dopa#3178: for being a pleb 🙂
StellaAthena#3530: Heuristics have nothing to do with determining if a NP-h problem is NP-c
dopa#3178: hmm
StellaAthena#3530: Given a NP-h problem, to show that it's NP-c you need to show that there is a polynomial time algorithm for verifying a proposed (correct) solution
dopa#3178: I am not saying that it would be possible to formally verify a correct solution
StellaAthena#3530: aka, you need to show it's in NP
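(Concretely, the decision version of TSP, "is there a tour of total length at most k?", is in NP because a claimed tour can be checked in linear time. A sketch of such a verifier:)
```python
def verify_tsp_certificate(dist, tour, k):
    """Polynomial-time verifier: does `tour` visit every city exactly once
    with total length <= k?  dist[i][j] is the i->j distance."""
    n = len(dist)
    if sorted(tour) != list(range(n)):  # permutation check: each city once
        return False
    length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return length <= k
```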
Louis#0144: do you agree that the bounds on TSP approximation will greatly improve tho?
dopa#3178: but for example in the case of games like Go and StarCraft it is possible to find useful solutions
StellaAthena#3530: @dopa So what? Both of those problems are in P. From a complexity theory standpoint they are trivial.
dopa#3178: SC is in P ?
dopa#3178: SC is not NP-hard ?
Louis#0144: o boy
Louis#0144: 🍿
dopa#3178: I am an idiot? 😦
Louis#0144: https://tenor.com/view/dis-gonna-be-good-drama-pull-up-chair-happy-hopeful-gif-8009838
dopa#3178: I misunderstood everything 😭
StellaAthena#3530: Define "substantially." I believe the exponential time hypothesis, which says that there is no subexponential algorithm for some NP-C problems. In particular, I believe TSP to be an example of such a problem.
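(For reference, the hypothesis being cited, stated loosely, plus the best known exact bound for TSP:)
```latex
% Exponential Time Hypothesis (informal): 3-SAT cannot be solved in
% subexponential time, i.e. in 2^{o(n)} where n is the number of variables.
% Best known exact TSP algorithm, Held--Karp dynamic programming:
T_{\mathrm{exact}}(n) = O\!\left(n^{2}\, 2^{n}\right)
```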
3dprint_the_world#6486: I'm talking about theory, not actual practical computers. As long as the bottleneck is the amount of data you have to get in/out of your system, quantum machine learning will always be just as slow as classical machine learning.
StellaAthena#3530: You're not an idiot. You have probably just learned about complexity theory from pop writings. |
dopa#3178: but we can find approximate solutions to TSP that can't be verified as optimal but still improve operational efficiency or automate processes
dopa#3178: I try to read books more than anything, but I know that without formal education I can make stupid mistakes
StellaAthena#3530: On a scale of badness, this is worse than learning quantum mechanics from pop writing but probably a smidge better than learning Godel's Incompleteness Theorems from it 🙂
Louis#0144: what's that pop philosophy book that every 20 year old incel quotes for Godel's incompleteness theorem
Louis#0144: lmao
Louis#0144: I forgot the name
Louis#0144: something about bach and godel
frank cilantro#9153: godel escher bach
StellaAthena#3530: Godel Escher Bach
Louis#0144: LMAO
Louis#0144: YES
dopa#3178: how is SC not NP-hard, please explain to me, pretty please
Louis#0144: god that book is such a waste of paper
dopa#3178: also, to make it more complicated, what if the StarCraft game is a multi-agent system
dopa#3178: is it then also not NP-hard?
frank cilantro#9153: i mean.. what is the problem statement for starcraft
frank cilantro#9153: just given a game, is a set of user inputs going to result in a win?
dopa#3178: it is a partially observable game
dopa#3178: and can be defined as a POMDP
frank cilantro#9153: ok so the inputs are like.. user_input(t), fog(user, t), enemy_input(t), game_state(t) |
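(A minimal sketch of that POMDP framing, with illustrative names only, not any real StarCraft API: the policy only ever sees the fog-filtered observation, never the true state.)
```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class POMDP:
    """Partially observable MDP: the agent acts on observations, not true state."""
    transition: Callable  # T(state, action) -> next_state
    observe: Callable     # Z(state) -> observation (e.g. fog-of-war masking)
    reward: Callable      # R(state, action) -> float

def rollout(pomdp: POMDP, policy: Callable, state, steps: int) -> float:
    total = 0.0
    for _ in range(steps):
        action = policy(pomdp.observe(state))  # decide from the partial view
        total += pomdp.reward(state, action)
        state = pomdp.transition(state, action)
    return total
```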