sid#9193: cips?
kindiana#1016: lol nerf with moe :berk:
cfoster0#4356: https://arxiv.org/abs/2011.13775
𓅬 gabriel_syme 𓅬#3220: nvm I shouldn't pretend I know what I'm saying. (it's this btw: https://github.com/saic-mdal/CIPS)
sid#9193: Yeah there are a lot of similar things that fall under the category of “implicit function decoder + localized latents”
sid#9193: See nvidia’s recent paper on neural geometric level of detail
sid#9193: Unfortunately in 3D you have to store these localized feature codes at voxel corners (lots of memory) or in an octree (not differentiable)
𓅬 gabriel_syme 𓅬#3220: cool thanks for the paper! here for others: https://arxiv.org/abs/2101.10994
cfoster0#4356: Hmm by compositional do you mean something with an objectness prior?
sid#9193: If I understand CIPS correctly, it's similar in some way to https://arxiv.org/abs/2012.09161
sid#9193: Yeah. I mean something like what Hinton touched on in GLOM
𓅬 gabriel_syme 𓅬#3220: interesting this seems to be on the opposite side of the trend of discrete representations
𓅬 gabriel_syme 𓅬#3220: that zoom 👀
sid#9193: I think that an important connection to be made is the objects as graphs idea that's been put forward many times, and self attention as learning an adjacency matrix
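A concrete way to see that connection, as a sketch (sizes and names here are illustrative, not taken from any of the papers mentioned):
```python
import torch

# The row-stochastic matrix softmax(QK^T / sqrt(d)) can be read as a learned,
# soft, directed adjacency matrix: entry (i, j) weights the edge from
# token/object i to token/object j.
n, d = 16, 32
Q, K = torch.randn(n, d), torch.randn(n, d)
A = torch.softmax(Q @ K.T / d**0.5, dim=-1)  # (n, n) soft adjacency
assert torch.allclose(A.sum(dim=-1), torch.ones(n), atol=1e-5)  # rows sum to 1
```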
sid#9193: But that's all I know so far. I'm looking closely at the gan-transformer paper released yesterday
𓅬 gabriel_syme 𓅬#3220: was going to ask you if you think that one is a good generator for this stuff
EricHallahan#1051: Discrete representations are only useful for discrete problems IMO.
cfoster0#4356: *DALL-E:* :guilty:
sid#9193: Also, for what it's worth, I think that NVIDIA paper I mentioned earlier is not very impressive. It's basically the same quality & memory usage as radial basis functions https://www.semanticscholar.org/paper/Reconstruction-and-representation-of-3D-objects-Carr-Beatson/d06a86d22f9e205f34f69910c0b13f906d75f889
sid#9193: (but less interpretable)
𓅬 gabriel_syme 𓅬#3220: man my generative_models folder never seems to stop growing 😦 need a review of the last 6months or smth
sid#9193: I think, internally, the gan-transformer must be learning a compositional representation like hinton referenced in glom. The question is extracting it and reconstructing it
𓅬 gabriel_syme 𓅬#3220: I was really impressed by those attention maps they are showing, seemed to capture very fine detail (to my untrained eye). I need to try it on a dataset of my own though
𓅬 gabriel_syme 𓅬#3220: thankfully, in my domain, redundant datasets like above are not that hard to make 🙂
sid#9193: What are you trying to generate?
𓅬 gabriel_syme 𓅬#3220: hmm various things, but the simple cases imagine urban design
𓅬 gabriel_syme 𓅬#3220: simple volumes perhaps
𓅬 gabriel_syme 𓅬#3220: or maybe interior scenes of buildings like in the paper
𓅬 gabriel_syme 𓅬#3220: I can programmatically generate views that's easy actually, so making a toy dataset is possible
sid#9193: Hm. You might want to take a look at convolutional occupancy networks, but imo the quality is still not very good
sid#9193: Actually, that learns from 3D data, not images
𓅬 gabriel_syme 𓅬#3220: cool I have that
𓅬 gabriel_syme 𓅬#3220: one idea I had was to couple such a model with a Quality Diversity library I have that generates urban designs (in 2d heightmaps though). Thanks for the paper btw 🙂
EricHallahan#1051: People keep asking me about Wav2Vec and I say "we've been using vector quantization for decades in audio, so why are we not using VQ in high quality compression? Did you ever consider that?"
sid#9193: Do you think VQ provides a good path forward? My interpretation was that the reason vq-vae works well at all was just that it's effectively predicting a smaller image
sid#9193: but in this smaller image, the pixels are intelligently chosen patches, not solid colors
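A minimal sketch of the VQ bottleneck being described (all sizes are illustrative): each position of the encoder's feature map is snapped to its nearest codebook vector, so the "smaller image" is a grid of code indices rather than pixels.
```python
import torch

codebook = torch.randn(512, 64)           # 512 learned codes, 64 dims each
z = torch.randn(32, 32, 64)               # encoder output: a 32x32 feature grid
dists = torch.cdist(z.reshape(-1, 64), codebook)  # (1024, 512) pairwise distances
indices = dists.argmin(dim=1).reshape(32, 32)     # the "smaller image" of patch ids
z_q = codebook[indices]                   # quantized features fed to the decoder
```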
𓅬 gabriel_syme 𓅬#3220: really cool paper actually, probably because of how close to my domain the examples are. So can we do transformer occupancy networks? 🙂
Deleted User#0000: @sid what do you make of https://arxiv.org/abs/2102.13090
sid#9193: lemme take a lot
sid#9193: look*
sid#9193: reminds me a lot of https://arxiv.org/abs/2012.02190
Deleted User#0000: > But that's all I know so far. I'm looking closely at the gan-transformer paper released yesterday
@sid yea recently there's been a resurgence of old transformers ideas
Deleted User#0000: https://arxiv.org/abs/2102.08606
sid#9193: (at first glance I'm not even sure what the difference between ibrnet and pixel-nerf is)
Deleted User#0000: Both actually have their roots in the induced set attention block from the Set Transformers paper
Deleted User#0000: But they both carry the idea further
Deleted User#0000: I haven't read pixel nerf yet
Deleted User#0000: Is that a significant paper in your opinion?
sid#9193: no
Deleted User#0000: Ok
sid#9193: All of these come from pixel-aligned implicit functions, which trains an SDF on 3D models of humans, then is able to one-shot reconstruct images of people with clothes
sid#9193: then pixel-nerf uses a nerf instead of sdf, and manages to not need 3D training data at all
sid#9193: It's a decent approach but it's difficult to reason about occlusion when all your features are fixed on camera rays
EricHallahan#1051: VQ is a vetted technique in the low-bitrate packetization of speech, which is what I was referring to.
sid#9193: I meant in the context of dall-e
EricHallahan#1051: In that context it is more of a liability than an asset.
EricHallahan#1051: It's useful (JPEG is still around for a reason), but it has problems with global consistency without some sort of backend.
EricHallahan#1051: Technically, you could train on GIFs if you wanted to. They have the advantage in this context of a fixed color palette which is indexed. But I don't know why you would want to do that.
sofiane#0880: does anyone know what the dataset for something like https://same.energy would be
sofiane#0880: In the API i can see that each image has a source url, could it be just from web scraping himself, or from common crawl?
StellaAthena#3530: @sofiane Yeah it’s scraping the web.
genai (Immortal Discoveries)#0601: How many lines of code is GPT-2? I just want to make sure I am not mistaking what I think I see, any expert here?
cat_#4534: That's not a very useful metric
genai (Immortal Discoveries)#0601: still want to know
cat_#4534: 500-600
mgostIH#0245: Do you want to include its deps too?
genai (Immortal Discoveries)#0601: what are deps?
mgostIH#0245: dependencies
mgostIH#0245: Like tensorflow or pytorch, or the code loaded in GPUs to run
cat_#4534: It depends on how far you want to go. Pytorch or tensorflow, CUDA, python with libraries, your graphics driver, your OS...
genai (Immortal Discoveries)#0601: just the code itself of GPT, not the background stuff it calls or the cloud it runs on or whatever if you get me
mgostIH#0245: https://github.com/openai/gpt-2
mgostIH#0245: So aye, on the order of 500 lines
genai (Immortal Discoveries)#0601: yes i have looked but wanted to make sure i was right
genai (Immortal Discoveries)#0601: ok cool, i thought 350 maybe you overshot
cat_#4534: All the py files together are 500-600 for that implementation and gpt-2-Pytorch.
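A quick way to check that count yourself (assumes a local clone of https://github.com/openai/gpt-2, whose model code lives under src/):
```python
from pathlib import Path

total = sum(len(p.read_text().splitlines()) for p in Path("gpt-2/src").glob("*.py"))
print(total)  # on the order of 500-600 lines for the model + sampling code
```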
genai (Immortal Discoveries)#0601: Anyone here know what BPC evaluation is? How is it different from the Hutter Prize?
genai (Immortal Discoveries)#0601: I mean is it compression, or if not how does it know the bytes per char...seems like it suggests compression
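For reference: BPC is bits per character, a model's average negative log-likelihood expressed in base-2 bits. It does suggest compression, because an arithmetic coder driven by the model would spend roughly that many bits per character, which is what makes it comparable to Hutter Prize-style numbers. The conversion from the cross-entropy most frameworks report:
```python
import math

def bits_per_char(nats_per_char: float) -> float:
    # frameworks usually report cross-entropy in nats; divide by ln 2 for bits
    return nats_per_char / math.log(2)

print(bits_per_char(0.8))  # e.g. 0.8 nats/char ~= 1.15 bits/char
```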
triggerhappygandi#0001: https://github.com/openai/gpt-3
triggerhappygandi#0001: This has ***over 9 thousand*** stars
triggerhappygandi#0001: Why
triggerhappygandi#0001: It literally has _nothing_
mgostIH#0245: AI go brrr
Deleted User#0000: Actually, when you say "inevitably collapses", wdym here?
Because when I hear "collapse", the only thing I can think of with that name is mode collapse. But that's a GAN-specific thing and my proposal was a non-GAN proposal
GANs are pretty hard to train, so that's part of the idea with my proposal, to avoid dealing with the trickiness of GANs
StellaAthena#3530: That’s what happens when you’re famous. Same reason that everyone’s talking about Sutton’s paper despite the fact that it doesn’t do anything.
triggerhappygandi#0001: Papers that don't do anything are abundant. The new paper from Google that compares transformer modifications says so flat out. But it's jarring to see a repo that literally only has .md files and have nearly 10000 stars. What are people forking it for? Modifying the readme's?
EricHallahan#1051: Street cred
EricHallahan#1051: "I forked GPT-3. See... right here!" (points to screen)
Louis#0144: gm catboys
Louis#0144: exclusively catboys
Louis#0144: no one else
Louis#0144: (praying one day one responds)
triggerhappygandi#0001: Jail@Louis
Louis#0144: lmao
nz#9710: bonk
EricHallahan#1051: honk :goose:
Louis#0144: :goose:
Louis#0144: stan waterloo
Louis#0144: :goose:
Louis#0144: my pfp is a goose in a lab coat
Daj#7482: furry
StellaAthena#3530: Reminder to fill out the survey to give feedback on improving the community:
https://discord.com/channels/729741769192767510/729741769738158194/816294933392130068
Deleted User#0000: just trying it now, and google images still works better
Deleted User#0000: its impossible to beat google lol
Deleted User#0000: well they are just starting tho i guess
Daj#7482: https://www.nature.com/articles/d41586-021-00530-0
We're in Nature magazin :ultrazucc:
StellaAthena#3530: Brief, but cool:
> When OpenAI announced GPT-2 in February 2019, it originally said it wouldn’t release its model because of concerns about malicious use, although it did so nine months later. But before that release, university student Connor Leahy was able to replicate it using a couple of weeks of effort and some cloud-computing credits. Leahy, currently a researcher at the start-up firm Aleph Alpha in Heidelberg, Germany, now leads an independent group of volunteer researchers called EleutherAI, which is aiming to create a GPT-3-sized model. The biggest hurdle, he says, is not code or training data but computation, which a cloud provider called CoreWeave has offered to provide.
EricHallahan#1051: > GPT-3-sized model
Dat double hyphen
nz#9710: A pity that they don't mention that OpenAI licensed GPT-3 to microsoft
jrowe#5371: nice
jrowe#5371: they do
jrowe#5371: they talk about oa's for profit spinoff licensing to Microsoft
nz#9710: mmh they mention the partnership but nothing more
nz#9710: (but maybe I'm missing it)
jrowe#5371: nope, you're right, I assumed
triggerhappygandi#0001: wen Selene? :ultrazucc:
bmk#1476: ~~they didn't even cite us so i can't even put this down in my Google scholar smh~~
triggerhappygandi#0001: I'll put it in my resume "mention in Nature"
nz#9710: schmidhuber moment
bmk#1476: Résumé? What's a résumé? I only know citation count and h-index
EricHallahan#1051: The entire extent of my resume is "almost won VEX Robotics World Championship, twice."
bmk#1476: i wonder who has the most extreme ratio of résumé mention of eleuther and involvement in eleuther
bmk#1476: sorry, you didnt fool the judges in the reverse turing test well enough
bmk#1476: since we dont have any strict definition of membership, any lurker can technically say "member of eleutherai" and not *technically* be wrong
Daj#7482: https://i.imgur.com/OtWWKQd.png
EricHallahan#1051: In middle school, the joke was I *was* the robot, because I could pretty much fit within the 18" cubic starting size at the time.
EricHallahan#1051: membership wen
Daj#7482: You need to purchase EleutherAI Gold Membership™️ for access to GPT-Neo2
triggerhappygandi#0001: Okay fancy pants there are normies like me who don't have either
triggerhappygandi#0001: Costs $69.69/year
jrowe#5371: only 99 interweb points, limited time offer, act now!
triggerhappygandi#0001: Actually, give us AWS/Azure credits lmao
Maestro#7643: I want to ask about which ai algo is best for my use case, which channel should this go in?
EricHallahan#1051: Either here in #general or in #off-topic, depending on what it is about.
StellaAthena#3530: I recommend r/learnmachinelearning
EricHallahan#1051: Or reddit.
Maestro#7643: I'll go in off-topic first, then reddit if needed 🙂
Maestro#7643: Thanks
sid#9193: I mean the discriminator wins every time. The problem is not with the gan framework; the discriminator wins because the generator just can’t represent a dataset with a lot of geometric variation. So although something other than a gan would train stably, the results would still be bad.
Deleted User#0000: Wouldn't GANs also have a harder time learning things than something in the style of an autoencoder? Because when the discriminator wins, it stops being able to provide an informative gradient
I guess I should find time to implement it myself sometime somehow 🤔
sid#9193: GANs have the advantage that they sidestep the viewpoint estimation problem
sid#9193: The discriminator internally disambiguates the viewpoint, but in an autoencoder you either need pose estimation or a labeled dataset
sid#9193: In the unlabeled setting I would be surprised if an autoencoder performed better in practice since the pose estimator is also learned
Deleted User#0000: Wdym "viewpoint estimation problem"? It doesn't seem to be mentioned in the pi-GAN paper
sid#9193: So if you have an autoencoder, given an image, you need encode that into a latent code and then decode the latent code into an image again. But the decoder is volume rendering, so you need to pick a viewpoint (a point on the unit sphere) to cast rays from.
sid#9193: You could always pick the same point, but then internally the objects would be learned in different poses, which means interpolations stop making sense
Deleted User#0000: hmmm
I feel like that's a problem that'd be more relevant for an objectness prior than a 3D prior?
sid#9193: What do you mean by that?
Deleted User#0000: At least if I understand your idea correctly, what you're saying is that the representation of say, a table and a table fallen over
Would not necessarily be related by a rotation
But might instead be related by something like (going from table -> table fallen over) "the top surface now has a hole in it, and the left hole has now been filled up with a surface"
Or something like that
But this is only "wrong" because we know that tables are objects that can fall over; that the scenes in 3D space are a union of objects that each have separable shapes, positions and orientations
Purely geometrically, with no objectness prior, it'd be perfectly reasonable to say that you can't just rotate a table to get a table that is fallen over
So we would want to consider some different approach for encoding the knowledge of poses into the network than for encoding the knowledge of 3D scenes into it
sid#9193: That's true. However there's also a related problem where the far side of scenes isn't optimized
sid#9193: In the GAN you randomly pick the viewpoint for each generated object, so all sides are optimized in the limit
sid#9193: But in the autoencoder, even if your dataset contains multi-view data for each instance, that data will be mapped to different latents and be represented multiple times in the decoder
jrowe#5371: hierarchies of invariant representations
Deleted User#0000: Seems like the P(S'(E(x))) term should take care of part of this
Idk if it would do enough; I guess unless S' samples from every direction, it probably wouldn't? 🤷
Deleted User#0000: The "encode, render rotated, encode again, render rotated back should yield the original image" idea that I mentioned might also address this, but I don't want to emphasize it too much as part of the point of this was to reduce the complexity involving GANs, and throwing more and more losses onto the problem adds back complexity
sid#9193: The S(E(x)) portion is the problem; you'll end up with representations in S that are one-sided.
sid#9193: Like, for any image (& its associated latent code), only the pose associated with that image is ever optimized. Other poses are never dealt with by the loss fn.
Deleted User#0000: That doesn't sound correct? Since the P(S'(E(x))) term samples a random pose?
Deleted User#0000: Or like
sid#9193: As I understand your initial post S(E(x)) (not S') is just an autoencoder?
Deleted User#0000: E encodes things, S decodes/renders
S' decodes and renders rotated
sid#9193: Oh I see what you're saying
sid#9193: The P(S'(E(x))) deals with the other viewpoints
Deleted User#0000: Yes
Deleted User#0000: Like not necessarily by ensuring that there exist latent that code for different viewpoints. But by ensuring that renderings of the latents from different viewpoints become realistic
cfoster0#4356: High level recap for those watching:
x is an image. E(x) is an embedding, say a voxel grid or radiance field. S(E(x)) renders the embedding as is. S'(E(x)) rotates the embedding and renders it. P(...) evaluates the likelihood of a rendered image according to some image prior
Right?
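A minimal sketch of that objective in PyTorch-style pseudocode, using the names from the recap (the squared-error reconstruction term and the weight lam are assumptions for illustration; P is taken to return log-likelihoods):
```python
import torch

def total_loss(x, E, S, S_rot, P, lam=1.0):
    """E: image -> embedding; S: render as-is; S_rot: rotate then render;
    P: likelihood model over images (returns log-probabilities)."""
    z = E(x)
    recon = ((S(z) - x) ** 2).mean()   # S(E(x)) should reproduce x
    realism = -P(S_rot(z)).mean()      # maximize log P(S'(E(x)))
    return recon + lam * realism
```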
sid#9193: Are you sampling x from a distribution? Or using the fixed x associated with training examples?
Deleted User#0000: Yes
sid#9193: Sorry, E(x) is the encoder, never mind
Deleted User#0000: x is taken from the training examples
sid#9193: In this case you can only reconstruct the training examples
sid#9193: Because you don't know the distribution of E(x)
Deleted User#0000: You wouldn't be able to dream up new scenes because you don't know the distribution of E(x), I would agree with that, however I would hope that there would be some generalization to the test set so that you could also generate scenes for images in the test set
sid#9193: To make sure I understand correctly, E(x) is the latent code generated by a convolutional encoder?
Deleted User#0000: Yes
sid#9193: If you can find a tractable way to get P (which is probably the most difficult part) it might work.
cfoster0#4356: P could be anything. It's just a likelihood model over images, so you can use whatever model you want (autoregressive, normalizing flow, etc.), no?
Deleted User#0000: As I understand it, there are a number of different approaches for P which vary in tractability and accuracy
Deleted User#0000: Basically what cfoster said
sid#9193: It would be a bit difficult because you are asking P to do the viewpoint generalization
cfoster0#4356: You can train P in advance on some huge dataset of naturally occurring images and keep it fixed when training the rest
sid#9193: I recommend looking at https://arxiv.org/abs/1707.05776
sid#9193: It's not exactly the same thing but has some similarities
Deleted User#0000: Are you? Possibly I'm not understanding "viewpoint generalization" correctly, but I'd think viewpoint generalization is a stronger property than what you are asking of P
sid#9193: P needs to look at an image and determine its probability. So it needs to (somehow, implicitly) know what the pose is in order to compare to its knowledge of that pose
sid#9193: Like, the front of a car is low probability if you think you're looking at the back of the car
sid#9193: That's also how the pi-gan discriminator works
Deleted User#0000: It's okay if P has a different understanding of "fronts of cars" and "backs of cars", as long as it understands that both can occur
Deleted User#0000: That'd still work I think?
sid#9193: Yeah I think it could work. One more problem though
sid#9193: There's no consistency enforced between different views
sid#9193: So you could end up with (on car dataset) a pickup where the front is a sedan or something
sid#9193: basically P just moves each data point closer to its nearest neighbor, and the unseen views of E(x) are gonna be a mess since they're not optimized in S(E(x)), so you can't tell a priori what the nearest neighbor is
sid#9193: Off the top of my head I think shapes with sharp angles would suffer the most
sid#9193: Since there are faces for which few viewpoints can see that face
cfoster0#4356: Your embedding should take care of view consistency
sid#9193: How so?
Deleted User#0000: The idea would be to rely on the inductive biases of the 3D representation for this
Which would be different depending on the exact inductive bias. E.g. a voxel grid automatically enforces view consistency, since the learnable stuff functions the same from any view. I think SIREN NeRFs only do so due to the inductive biases of gradient descent? (Might be wrong, feel free to correct me.)
I also came up with "encode, render rotated, encode again, render rotated back should yield the original image" as an additional way of enforcing it but I don't like it super much
sid#9193: Actually I think the inductive bias goes away as soon as you don't rotate anymore
sid#9193: It can just learn a "billboard"
Deleted User#0000: S' rotates
cfoster0#4356: In the NeRF case, the volume density does not depend on your view direction
sid#9193: Actually wait I think that breaks it. If you only use one view per image, nerf can learn any density field that projects to that view
sid#9193: So S' may have nothing to work with
cfoster0#4356: The reason it can't learn a billboard is that billboard images have low probability under your prior
Deleted User#0000: No because if different views project to unlikely images then that would get caught by P(S'(E(x)))
sid#9193: But it may be totally unable to optimize the billboard into something reasonable
cfoster0#4356: Yeah that's fair. It's an empirical question whether it can
sid#9193: Like, the representation extracted by S(E(x)) might just be useless to P(S'(E(x)))
sid#9193: Yeah
Deleted User#0000: They would probably need to be optimized jointly, rather than first optimizing S(E(x)) and then optimizing P(S'(E(x)))
sid#9193: Does that circle back to being a GAN?
Deleted User#0000: No, because P is fixed
sid#9193: You might like https://arxiv.org/abs/1809.09087
Deleted User#0000: In a GAN, P would be learned in an adversarial context (or well, not quite, the discriminator isn't quite the same as a probability density, but the concept is similar for this purpose I think? To ensure realism. 🤷 )
This method frees P up to be learned non-adversarially
sid#9193: That method uses a similar approach (move each fake data point close to its nearest neighbor)
Deleted User#0000: Haven't managed to read it all, but isn't this the opposite? My proposed method is explicit ML (P specifies the L and we maximize P), while GANs are used in the paper as an example of implicit ML
sid#9193: Maybe, not totally sure
sid#9193: Either way I think it's relevant
sid#9193: I think when you're maximizing P on unseen views (S') for any learned P it'll be internally matching against a nearest neighbor in the dataset
Deleted User#0000: That seems like a reasonable way of thinking about it
sid#9193: One thing that I tried was to use a signed-distance field parametrized by a neural network instead of NeRF. This means it can't learn a billboard (since SDF forms a closed manifold), plus you can use surface normals to enforce geometry
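The "surface normals to enforce geometry" part is usually written as the eikonal regularizer from the implicit-SDF literature; sid doesn't give a formula, so this exact form is an assumption:
```python
import torch

def eikonal_loss(sdf, points):
    # for a true signed-distance field, ||grad f|| = 1 everywhere, and grad f
    # at the zero level set is the surface normal
    points = points.clone().requires_grad_(True)
    d = sdf(points)
    (grad,) = torch.autograd.grad(d.sum(), points, create_graph=True)
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()
```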
sid#9193: That might also help, but again I think for any reasonably sized dataset you'll hit the representation capacity of the NN
cfoster0#4356: >>> reasonably sized
cfoster0#4356: :mesh:
cfoster0#4356: P and E should probably be big
sid#9193: Well, my dataset for my GAN was 7500 images and it didn't work, but 20 images did. So you can do the binary search if you want to 🙂
sid#9193: (of course small datasets are another problem entirely and I had to set R1 constant super high for it to work at all)
Deleted User#0000: Part of my thought process is "GANs seem finicky, which is probably bad for going big. What would be something that's better for going big?"
Since it seems pretty important to go big
sid#9193: E big => slow volume rendering
cfoster0#4356: You only run E(x) once to get your field parameters
cfoster0#4356: Or your voxels
sid#9193: Oh are we lumping the Nerf into S
cfoster0#4356: The rendering bit is in S I think
Deleted User#0000: Oh true 🤔
sid#9193: This brings up yet another problem
Deleted User#0000: But there is still the question of the representational capacity of the NeRF
sid#9193: MLPs are bad at learning high-frequency functions (which pi-gan uses siren to try to solve)
cfoster0#4356: Yeah I thought the idea was to use a similar architecture as pi-gan. So E(x) are the parameters used to modulate the implicit field
sid#9193: ie small changes in latents = very very small output changes
sid#9193: This further decreases representational capacity
Deleted User#0000: Yeah, that's also what I had in mind, it's just that I thought the NeRF would be big too. Which runs into the problem sid mentioned with slow rendering
sid#9193: Empirically modulating the nerf doesn't work that well
sid#9193: Because the latents are sampled from something like a standard normal and so the nerf needs to learn incredibly high frequency components in latent-space
cfoster0#4356: There aren't any latents to sample here, no?
sid#9193: You're right I was thinking of pi-gan
cfoster0#4356: I'd buy that, potentially
sid#9193: I'm just telling you what all the experiments told me haha
cfoster0#4356: Slow rendering when? It shouldn't be any slower than NeRF usually is.
Deleted User#0000: thanks for all this btw, very helpful
Deleted User#0000: as someone new to NERF
sid#9193: I'm not opposed to the premise of @Deleted User 's idea but from what I can tell nerf really does not want to work for high geometric variation
sid#9193: And viewpoint estimation can't be ignored; either you match poses explicitly or the network tries to do it internally
Deleted User#0000: Usually people don't try to massively scale up NeRF I think?
Which
Wouldn't necessarily break my proposal, in that it could still work on the domains where NeRF works. Like celebrity faces and such.
But what I had in mind was trying to think of some way to make the method simpler and easier to train, so that scaling it up would be more viable
cfoster0#4356: Yeah what I meant is, you don't have to make the NeRF big, just the network that maps the image to FiLM parameters
cfoster0#4356: But I'd totally buy that nerf might not work well here lol
sid#9193: There's a problem with that too unfortunately. The film conditioning affects the gradients in the siren network (which is totally critical to performance; see their supplementary material) so making the network too big makes the film outputs have high variance, which breaks siren
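For concreteness, one FiLM-modulated sine layer in the pi-GAN style (a sketch, not the paper's code): gamma scales the pre-activation and beta shifts it, which is why high-variance FiLM outputs blow up the frequencies SIREN's initialization carefully controls.
```python
import torch
import torch.nn as nn

class FiLMSiren(nn.Module):
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.linear = nn.Linear(dim_in, dim_out)

    def forward(self, x, gamma, beta):
        # gamma, beta come from a mapping network conditioned on the latent
        return torch.sin(gamma * self.linear(x) + beta)
```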
Deleted User#0000: voxels would presumably be exactly as expensive as sid says
So there's very much some questions about representations
sid#9193: Any representation where you store data explicitly (like voxels) will probably work better
sid#9193: I mean, it still might be bad depending on resolution, but probably better than nerf
sid#9193: I don't mean to shoot down your idea @Deleted User, sorry if it seemed like that. Just have had a lot of bad experiences with nerf on this particular problem
Deleted User#0000: What I'm getting as a takeaway from this conversation is "there's some serious practical concerns about whether this approach is viable/scalable, and that is likely going to depend a lot on the architecture, *but* the approach does sound like a promising experiment"
Admittedly the positive part comes partly from @cfoster0 's responses too, but ofc criticism is important 😅
chirp#4545: https://twitter.com/jackclarksf/status/1367221897608372224?s=21
triggerhappygandi#0001: All the more prospect for EleutherAI to shine for doing research that's "just for shits and giggles" 🙂
𓅬 gabriel_syme 𓅬#3220: I wonder if this is an interesting addition to the discussion on 3D scenes/models @sid @Deleted User
https://neural-3d-video.github.io/
𓅬 gabriel_syme 𓅬#3220: data coming soon 😄
𓅬 gabriel_syme 𓅬#3220: Dynamic NERF, no idea what that means but the videos looks nice, heh
cfoster0#4356: Yeah. Though this goes in the direction of a bunch of cameras capturing a scene, which really limits its immediate applications
𓅬 gabriel_syme 𓅬#3220: ah yeah multiple video cameras
𓅬 gabriel_syme 𓅬#3220: so all the above discussion/ideas, would be much easier if access to multiple viewpoints was available right? or even this method could be applied?
triggerhappygandi#0001: I wonder if large corpora of still frames of video games could help with this
𓅬 gabriel_syme 𓅬#3220: they can I feel
triggerhappygandi#0001: Say a billion frames from like 50 games
cfoster0#4356: Yeah that's where my money's at, or at least that combined with youtube
𓅬 gabriel_syme 𓅬#3220: was going to say this is a problem in the wild but not really in modelling
EricHallahan#1051: It's already been proven to work in RL
triggerhappygandi#0001: Interestingly, a guy named Phil Wang curated a 60 million frames dataset for minecraft for RL
triggerhappygandi#0001: hmmmmmmmmm
𓅬 gabriel_syme 𓅬#3220: 🙂
Deleted User#0000: thats not me
triggerhappygandi#0001: Doubt
triggerhappygandi#0001: You are definitely multiple people
Deleted User#0000: lol, there's a lot of phil wang's, it's a pretty generic name
𓅬 gabriel_syme 𓅬#3220: I started a PhD in a gaming AI lab actually this year (although doing architecture) I'll see if they have something I can curate
triggerhappygandi#0001: But the amount of code you push makes me doubt it. 5 people and a dog
triggerhappygandi#0001: For sure
Deleted User#0000: i like the Aran Komat == AK theory better
StellaAthena#3530: Can’t we just like, record streamers on twitch?
𓅬 gabriel_syme 𓅬#3220: They do have some really interesting affect datasets (images+affect rating) wonder if you could use these techniques to get better representations for affect prediction
Deleted User#0000: i haven't even played minecraft (though i hear it is good)
𓅬 gabriel_syme 𓅬#3220: that's what they did too and they measured affect from comments
triggerhappygandi#0001: You cannot trick me like this.
StellaAthena#3530: There’s 216,000 frames in an hour of footage
EricHallahan#1051: Minecraft went downhill post-1.6.4
EricHallahan#1051: nuf said
StellaAthena#3530: Getting to 1B is pretty easy
StellaAthena#3530: 5,000 hours of streaming footage? Piece of cake
𓅬 gabriel_syme 𓅬#3220: the ideal dataset would involve small(ish) samples from multiple games I guess?
triggerhappygandi#0001: yeah that's what I was thinking. 30-40 hr gameplay of a hundred games and you have all the frames you need
StellaAthena#3530: @triggerhappygandi do you know what twitch’s recording policies are like
triggerhappygandi#0001: Never used twitch. But if we were to contact them about research I guess they would be open to it?
cfoster0#4356: *PS1 emulator is all you need*
triggerhappygandi#0001: If not, we can always stream on discord ourselves
zphang#7252: I assume a lot of gaming stuff might be incredibly repetitive if we just grab twitch footage
triggerhappygandi#0001: "Guys i am contributing to research by playing Witcher 3 for 50 hours"
triggerhappygandi#0001: That's why we need like a hundred games
zphang#7252: true, just go for diversity
StellaAthena#3530: 100 games, 10 streamers per game, 50 hours per streamer, we further diversify by only using one in every 10 frames.
𓅬 gabriel_syme 𓅬#3220: I will try and ask today, the lab made a dataset from twitch recently but I don't know any specifics yet
StellaAthena#3530: That’s it. That’s 1 billion frames
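The arithmetic, written out (assuming 60 fps footage):
```python
frames_per_hour = 60 * 60 * 60               # 60 fps x 3600 s = 216,000
hours = 100 * 10 * 50                        # games x streamers x hours each
print(f"{hours * frames_per_hour // 10:,}")  # 1 frame in 10 -> 1,080,000,000
```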
triggerhappygandi#0001: You got the best dataset you could want
triggerhappygandi#0001: Yeah
𓅬 gabriel_syme 𓅬#3220: btw, twitch also comes with semantic data (you can capture the chat on the side I believe).
zphang#7252: 1TB corpus of "Pog"
triggerhappygandi#0001: Pretty much
triggerhappygandi#0001: That's the only word the model would learn
𓅬 gabriel_syme 𓅬#3220: lol maybe
zphang#7252: Pog and KEKW
triggerhappygandi#0001: And god forbid if you record a pokimaine stream :guilty:
triggerhappygandi#0001: FEET
𓅬 gabriel_syme 𓅬#3220: but you can use it for other things I guess. For example, frequency can be related sometimes to the video and action taking place
𓅬 gabriel_syme 𓅬#3220: but yeh 90% memes lol
triggerhappygandi#0001: If we ever actually do this, I have one rule and one rule only: No among us. Just no. Cancer inducing game.
𓅬 gabriel_syme 𓅬#3220: would we do 3d games?
EricHallahan#1051: What is the goal here?
triggerhappygandi#0001: That's the idea. Preferably newest AAA games
triggerhappygandi#0001: 3D modeling
𓅬 gabriel_syme 𓅬#3220: there's a nice style transfer repo btw, passing styles from game to game. Might be dataset there as well
𓅬 gabriel_syme 𓅬#3220: let me find it
EricHallahan#1051: So yes, we need 3D games.
triggerhappygandi#0001: With extremely high polygon count
triggerhappygandi#0001: Sadly Ocarina of Time won't cut it
cfoster0#4356: Really?
zphang#7252: time for HD remasters to shine
EricHallahan#1051: I disagree.
𓅬 gabriel_syme 𓅬#3220: Legendary editions
triggerhappygandi#0001: The whole game has less poly count than 2B's butt in Nier Automata
triggerhappygandi#0001: This is actually true lol
EricHallahan#1051: It is still valid geometry.
triggerhappygandi#0001: But if you want to do somewhat realistic rendering, probably not worth it.
triggerhappygandi#0001: It could help with an AI game engine though
cfoster0#4356: Reason I suggested older emulated games is you can just download a buttload of games, play for a bit, and record ground truth motions and whatnot
triggerhappygandi#0001: Man we will probably have AI game engines by 2025
EricHallahan#1051: No one ever set the constraint of realism.
triggerhappygandi#0001: It is one of the use cases though
triggerhappygandi#0001: We can also stream right here on discord if we actually want to do it.
cfoster0#4356: Ngl that sounds like fun
triggerhappygandi#0001: One day we should
𓅬 gabriel_syme 𓅬#3220: I'd love some gaming here 🙂
triggerhappygandi#0001: For research purposes
zphang#7252: I only play 2d platformers and jrpgs, which is useless for this
triggerhappygandi#0001: @zphang Then I assume you have played hollow knight?
𓅬 gabriel_syme 𓅬#3220: such a great platformer
triggerhappygandi#0001: They are still useful for AI game engine
zphang#7252: lol sadly not
𓅬 gabriel_syme 𓅬#3220: was playing ori a few weeks ago
zphang#7252: looks like it might be up my alley
zphang#7252: but never felt the pull to play it
triggerhappygandi#0001: Really is. Play it noooooow
zphang#7252: No! I'm just going to replay old megaman games instead
triggerhappygandi#0001: It was the first platformer I played. I know comparing something to Dark Souls is cliche now, but hollow knight has a lot of its elements like good combat and the "show, don't tell" policy for storytelling
𓅬 gabriel_syme 𓅬#3220: the Twitch API is not very extensive, I only found smth about getting clips in the previous version. But if we ever get the license thing settled maybe even a 'dumb' approach like this works 🙂 https://cdn.discordapp.com/attachments/729741769738158194/816893720451743804/unknown.png
𓅬 gabriel_syme 𓅬#3220: can easily find a number of videos from a channel and download them with one click
𓅬 gabriel_syme 𓅬#3220: not exactly neural radiance fields rendering but this is pretty cool!
http://monstermash.zone/
𓅬 gabriel_syme 𓅬#3220: the paper might be interesting which is why i posted
𓅬 gabriel_syme 𓅬#3220: https://igl.ethz.ch/projects/monster-mash/monster-mash-paper.pdf
Kazumi#1297: there's going to be a lot of facecam in the dataset
𓅬 gabriel_syme 𓅬#3220: yeah I was thinking we might need a center crop or smth although that kind of sucks
𓅬 gabriel_syme 𓅬#3220: maybe we just mask those areas since they will be always at the same point?
IKEA#9631: Shower thought: there's so much medical data being generated constantly and going unused that if you somehow managed to gather data about every time someone went to see a doctor for any reason anywhere in the world, the patients' profiles, the diagnostics etc, you could probably build an AI tool that can replace a GP with near 100% success rate today, and probably a good chunk of specialists as well
One#5919: **I will never leave this server**
One#5919: the problem with that is that people don't trust technology @IKEA
One#5919: they'd rather have a human give them 90% accurate care than a machine 95-100%
One#5919: idiots, i know
IKEA#9631: Middle aged people and boomers don't trust technology*
One#5919: yeah
IKEA#9631: That should fix itself with time
One#5919: yeah 😄
One#5919: i would choose an AI doctor any day
IKEA#9631: Also... Medical secrecy, anonymity, yadda yadda
IKEA#9631: (for the datasets)
One#5919: yeah because that shit doesn't get leaked anyway
One#5919: (it totally gets leaked)
One#5919: (all the time)
One#5919: (for everyone)
IKEA#9631: :thonk:
One#5919: i worked at a document review company
One#5919: my job was to go through leaked documents and mark any instances of personally identifiable information
One#5919: in a year i saw the entire medical history of thousands of people
One#5919: all kinds of companies get hacked every minute
One#5919: every second
One#5919: what i'm saying is, medical history should be made available to "the good guys"
One#5919: they won't harm anyone with that access and they will give us AI superdoctors
One#5919: AND IT GETS LEAKED ANYWAY, TO _BAD_ ACTORS
𓅬 gabriel_syme 𓅬#3220: that's not a reason to do AI doctors lol, it's probably one to not do until you solve it
nz#9710: I mean, there are already several large scale medical image datasets.
𓅬 gabriel_syme 𓅬#3220: I'm all for health for everyone (which is the only reason I would want to push AI doctors) but that has to have a modicum of safety
One#5919: have you _seen_ GPT-3?
One#5919: used it yourself?
nz#9710: What do you mean?
𓅬 gabriel_syme 𓅬#3220: I mean if there are issues with data privacy now, creating a platform that will need to concentrate huge amounts of data might not be the best idea. Unless we work on that issue first?
𓅬 gabriel_syme 𓅬#3220: I find myself very often in this position, feeling like a madman asking for simple things 🙂
𓅬 gabriel_syme 𓅬#3220: I love the capacity for change AI brings, I do. I just think it needs to be carefully implemented
Aran Komatsuzaki#5714: yes i've seen GPT-3 before. he was a cool guy.
nz#9710: Medical images can be safely anonymized.
nz#9710: Medical reports too.
𓅬 gabriel_syme 𓅬#3220: I agree and I think there's a field working on this right? How to train models without being able to extract information afterwards?
𓅬 gabriel_syme 𓅬#3220: is it differential privacy? I forget. But yeah, it's definitely the way forward. Just need to be thought out, including why we do it
𓅬 gabriel_syme 𓅬#3220: For e.g., having AI-doctors would be incredible for the billions with no access to health services
nz#9710: There's that, but from what I've seen the main medical datasets simply anonymize the data
𓅬 gabriel_syme 𓅬#3220: aha okay, that also works
𓅬 gabriel_syme 𓅬#3220: who owns them now? Is it only patients or also insurance companies?
nz#9710: Mmmh, the datasets I know are owned by universities I think (Stanford and MIT, mainly) and can only be used for research.
𓅬 gabriel_syme 𓅬#3220: aha ok I understand
One#5919: the coolest!!! 🙂
One#5919: maybe GPT-Neo will be even cooler
One#5919: and he's named after me 😏
One#5919: https://tenor.com/view/power-the-matrix-keanu-reeves-neo-bullet-gif-17795240
One#5919: https://tenor.com/view/power-the-matrix-bullet-stopped-the-bullets-bullet-drop-gif-17795245
One#5919: https://tenor.com/view/keanu-reeves-matrix-powers-gif-9315457
loxias#2533: I have a bit of first hand experience in that actually. I was a programmer at a medical research institution, we used topological data analysis of the full medical records of tens of thousands of patients and discovered 3 new diseases. All from the data.
loxias#2533: Well, not so much "new" but it turns out there are different diseases all labeled (incorrectly) "diabetes". Was a big paper about a decade ago.
loxias#2533: Works on deidentified data just fine
StellaAthena#3530: This sounds like a great way to kill people tbh. A lot of people would rather see a human doctor than see an AI doctor and will simply stop going to the doctor.
Also to leak private info world wide
StellaAthena#3530: This is dope, I love TDA. Can you link me the paper?
loxias#2533: to be clear, I'm a mere programmer, not one of the doctors, 🙂 pretty sure the top google hit for "MSSM topological data analysis diabetes" pulls it up.
loxias#2533: and yeah TDA, mapper, barcodes, are really cool stuff.
loxias#2533: side note: I have no idea what this server is or how I got here, I thought I was clicking on another one for options trading and saw math and was like heyyyyyy cool!
triggerhappygandi#0001: Federated learning will supposedly help with keeping the privacy
Louis#0144: Federated learning, is that where we all dress up like Spock and set phasers to SGD?
Louis#0144: https://tenor.com/view/jeb-sad-please-clap-meeting-gif-5105690
triggerhappygandi#0001: https://tenor.com/view/not-funny-didnt-laugh-not-funny-didnt-laugh-dancing-money-dance-gif-14496446
StellaAthena#3530: We’re an online ML research group. A lot of our current work is in NLP because our first and highest profile project is about NLP, but we are happy to bring on collaborators with any interests.
StellaAthena#3530: “Supposedly”
Louis#0144: Hello topologist checking in
Louis#0144: Lmao
Aran Komatsuzaki#5714: NLP? never heard of it
loxias#2533: That explains it. ML is a strong interest of mine (not to be confused with deep learning. I know very little of that) and NLP is a side fascination -- an ex of mine is an NLP researcher. I'm a signal processing "i can't use the word expert, because I know enough to know not to", high dimensional space sorta guy. MATLAB, C++, and anything with sound processing or array processing is my jam. Wonder how I found my way here 🙂
Louis#0144: NLP? is that a new club drug?
Louis#0144: @Aran Komatsuzaki lets go get some NLP man
mgostIH#0245: NLP is just a fancy way of predicting the stock market
loxias#2533: I have a few minor recognized accomplishments in compression and audio signal processing (I'm so sorry for the Alexa. I said it was a bad idea at the time!), and in most job type organizations I'm the person who bridges writing performant systems software in C as well as reading/writing the underlying math/MATLAB. Looking forward to lurking more on your discord sometime not during trading hours.
Does NLP (of the sort I learned 15 years ago, complex algorithms, beam search, tons of other stuff I've forgotten) still exist? Or is that "Pre-Copernican" and real NLP is all just black boxes of deep learning these days?
mgostIH#0245: we still have beam search
EricHallahan#1051: It exists, but primarily in hybrid systems.
Louis#0144: old school nlp is mostly dead tbh
Louis#0144: syntax trees are usually useless for instance
Louis#0144: and all non neural coref resolution stuff is pretty useless
Louis#0144: modality is making a comeback tho
Louis#0144: so like 1980s/1970s NLP
loxias#2533: Sure, that was a bad example. I don't really know NLP. Just dated one for a long time. So some rubbed off. I grok high dimensional continuous spaces, not discrete ones. 😉
😉 She used to talk about ... syntax trees, yes, and there was some specific algorithm for MT she was trying to improve the running time of for her thesis...
Louis#0144: wow running time for machine translation is something i havent heard anyone talk about in a long time
loxias#2533: Wow, that's amazing. So a whole generation of PhDs are... moribund. Makes me chuckle.
Aran Komatsuzaki#5714: we've already burned down all the syntax trees
Louis#0144: @loxias if you finished your phd 3 years ago its already useless
Louis#0144: unless you did like theory
loxias#2533: This would be closer to 10 years ago. 🙂 And I just learned NLP from her and one class. Awesome the progress of math. "Of course, these new fangled deep learning things will never replace *my* area of experience, features for audio and music must be tuned by one with a deep understanding of the underlying mathematics and psychoacoustics...." </sarcasm>
EricHallahan#1051: Compression and signal processing is right up my alley in a way, as I have been considering putting together something related to speech some time in the near future here. Linear prediction is the primary direction I am looking at: Well understood, fast, and pretty easy to quantize robustly if needed.
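A toy numpy sketch of what linear prediction buys (the autocorrelation method; the signal and order here are arbitrary): the residual has much lower variance than the signal, which is why it quantizes cheaply.
```python
import numpy as np

def lpc_coeffs(x, order):
    # solve the autocorrelation normal equations R a = r for the predictor
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])

x = np.sin(0.1 * np.arange(1000)) + 0.01 * np.random.randn(1000)
a = lpc_coeffs(x, 8)
pred = np.convolve(x, np.concatenate(([0.0], a)))[: len(x)]  # x_hat[n] = sum a_k x[n-k]
residual = x - pred
print(x.var(), residual.var())  # transmit the low-variance residual instead of x
```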
loxias#2533: Yeah LPC is the backbone of everything good 30 years ago.
EricHallahan#1051: LPCNet is pretty incredible for the level of compression it provides.
loxias#2533: But deep learning is coming for us too. I haven't read the paper yet, but Google had something recent that claimed improvements over Opus, which... I find hard to believe, but can't wait to believe it.
EricHallahan#1051: https://jmvalin.ca/demo/lpcnet/
gwern#1782: whole forests of random trees chopped down to feed powerplants for GPUs :sadge: |
gwern#1782: NNs killed the random star
loxias#2533: *sniff* But what about my artisanally crafted features? Weeks to months tweaking MATLAB until it captures human perception "just right"? elegant polynomials which approximate transfer functions of the auditory cortex.... 😉
EricHallahan#1051: Yeah. Interpretability has given us some excellent ways of achieving those kinds of things though.
gwern#1782: all this craftmanship and artistry to save moments... will be lost in the rain.
loxias#2533: I need to make a few trades, but I'd love to be welcome to contribute here in the future. I also wish I could say something witty and cool, having finally realized "this is the GPT-Neo people, and I bet it's _that_ gwern! the one you always agree with online!"
StellaAthena#3530: Reminder to fill out the survey to give feedback on improving this channel! Lurkers are especially encouraged to participate:
https://discord.com/channels/729741769192767510/729741769738158194/816294933392130068
loxias#2533: Unless it's meant to be an inventory of existing skills, I bet survey answers would be more useful after actually participating for a few months, don't you think?
LaPapaya#4347: Doubt. Has Openai said anything about Eleutherai since you guys started this project?
LaPapaya#4347: Or are they silent about the topic?
StellaAthena#3530: The questions are about what drives people’s participation levels in the channel and what would make them interested in participating more. Although responses from people who are active in the community are helpful, insights from people who primarily lurk or want to get more involved are also important. The questions are:
>>> 1. The last survey identified five major types of people in our channel. Which of these do you identify with, if any? You may choose as many as you like.
2. Do you feel like you are getting what you want out of this community? How can we make being part of this community better for you?
3. If you are involved with a research project, how has the experience been for you? If you're not involved with one of our research projects, do you want to be? What would it take to get you involved in one?
4. Are there topics you wish there was more discussion of?
5. As the community grows, it's important to us to maintain a high level of discourse. At the same time, we don't want to forbid casual conversation or promote a feeling of exclusivity and so have moderated with a light touch so far. Do you like the current level of moderation? Are there types of comments or conversations you would prefer we cut down on?
Louis#0144: Any good responses yet?
StellaAthena#3530: 17 responses so far. I haven’t looked at them too closely yet though.
thenightocean#6100: BTW my dad is a physicist and I always remember reading piles of Nature magazines in his office while he would talk about his ultimate dream to be published in Nature as his career peak.
And now his good-for-nothing son sorta got something that looks like that (even if my contributions are few lines of website HTML and some shitty SVG icons in sketch) 😄 😢
Louis#0144: nature is trash for CS
Louis#0144: dw
Sahl#0630: nurture is better anyways
thenightocean#6100: hey, dont ruin my emotional family moment with your boring facts
Louis#0144: (also dont support paywalls)
Louis#0144: true ok carry on
bmk#1476: if you need eleuther stuff to do i can always get you stuff to do
triggerhappygandi#0001: Lmao congrats
triggerhappygandi#0001: Nature is still _Nature_@Louis
thenightocean#6100: @bmk sure! I am working with Kip fixing web app in interactive agi, but I might take more work if needed.
Crit#0843: Hey guys, I was referred this server by someone on reddit..I'm not a data scientist or anything but more of a SaaS enthusiast who has been tracking GPT 3 apps
Crit#0843: Are you guys really making a large scale language model as big as GPT3 ?
Crit#0843: would love to discuss more..in layman's terms if possible haha
Daj#7482: Hello! Please check the FAQ in #rules
Daj#7482: In short: We are
EricHallahan#1051: We are happy to answer any additional questions you may have.
jrowe#5371: size matters, but Eleuther seems to be competing in the "motion in the ocean" category as well
Crit#0843: I'm just fascinated that some other people are working on an open source version of GPT 3 (which is fantastic btw) how are you guys covering the costs? from what I heard it was about $12M to train GPT 3
cfoster0#4356: Mm I forgot that we don't mention CoreWeave in the FAQ...
Daj#7482: We don't? Oops
EricHallahan#1051: As of right now, CoreWeave is generously donating some compute for testing, and they hope to make some more resources available as we may need it in the future as it becomes available to them.
cfoster0#4356: Yeah err it's due for a refresh. We still say we're using *MTF*
Daj#7482: ono
EricHallahan#1051: Yeah the FAQ needs an overhaul soon.
Crit#0843: Oh got it..at least there are people helping the open source of this kind of technology..can I ask where you guys are in the process? Like compared to the existing GPT 3?
cfoster0#4356: Let's figure out what needs changing in #website
sandos#7339: huh, I thought someone said it was some Nvidia program? 🙂
EricHallahan#1051: CoreWeave is an "NVIDIA Preferred Partner".
StellaAthena#3530: GPT-3 exists, ours does not
Daj#7482: Code is pretty far along, just waiting on compute to get to large scale training
Crit#0843: do you guys have a public timeline/roadmap that people can follow?
Crit#0843: i'd like to stay up to date on the progress
Daj#7482: "It's done when it's done" |
Daj#7482: lol
Daj#7482: We are a group of hackers in a cave, we're not good at the whole "organized planning" stuff
Crit#0843: haha makes sense
Daj#7482: But atm it really just comes down to compute probably
Daj#7482: Think "this year, most likely"
Daj#7482: for timelines
EricHallahan#1051: So soon™️
triggerhappygandi#0001: Lmao me personally I'm just in for "hahaha made these gpus go brrr"
triggerhappygandi#0001: Seeing `nvidia-smi` spit out a list is literally heaven
Crit#0843: oh wow thats a lot faster than i was expecting :chad:
Crit#0843: also how easy will it be to use gpt neo? like gpt 3 uses an api system which has created a new ecosystem around it..what do you guys think the usage will be like?
EricHallahan#1051: TBH when I got access to a pod for the first time that was my reaction. I never had access to good compute, so it was a shock for me.
I didn't use it though. :berk:
Daj#7482: We will create a model and code and make it freely available to anyone that wants to use it
EricHallahan#1051: We don't know.
Daj#7482: We have no plans of providing any kind of API or hosting ourselves
Daj#7482: It will probably be a bitch to use :berk:
triggerhappygandi#0001: Me first time seeing 8 GPUs: _wack_
EricHallahan#1051: It isn't really our concern. Just having a model is hard enough.
Crit#0843: hahaha :carlos2:
triggerhappygandi#0001: I have to find time to practice mtf just so I can see what a v3-512 looks like
Crit#0843: that would be plenty i guess haha...time to invest in GPU's to run the model then :carlos:
sandos#7339: Is it at all viable to run models that dont fit in a single GPU, on a regular machine? I assume just using the model is faster than training, but I guess it might not be possible to offload the weights to disk due to performance. Maybe RAM + VRAM combined?
Daj#7482: It would be ungodly slow
Daj#7482: Like, "hours/days per token" or some shit
Daj#7482: (this is taken out of my ass but is probably in the right order of magnitude)
sandos#7339: ooh.. right. These loop around. I was thinking only one layer is needed at a time, but its not that simple ofc
triggerhappygandi#0001: There is a thing called ZeRO-offloading
triggerhappygandi#0001: Pushing the limits of what you can do with single GPU
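Back-of-envelope numbers for why streaming weights from disk is so slow (all figures here are assumed, not measured):
```python
params = 175e9                 # a GPT-3-sized model
weight_bytes = params * 2      # fp16: ~350 GB streamed past compute per token
for name, bw in [("NVMe ~3 GB/s", 3e9), ("HDD ~150 MB/s", 150e6)]:
    print(name, round(weight_bytes / bw), "s/token")
# ~117 s/token from NVMe, ~2333 s/token (~40 min) from a hard disk --
# less than "hours/days" in the NVMe case, but unusable either way
```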
𓅬 gabriel_syme 𓅬#3220: Are you also going to test with distillation after the big model is trained? I can already think of applications at work but yeh even large companies will struggle to inference that.
EricHallahan#1051: Our plan is to attempt to distill it when we are done. I believe it is a term of CoreWeave's that we attempt it (because, you know, they would like to host instances of it at a reasonable price when we are done.) Don't quote me on that though, I may be wrong.
LaPapaya#4347: Based https://cdn.discordapp.com/attachments/729741769738158194/817158675813892206/unknown.png
Louis#0144: @bmk @Daj let’s do something big for the one year mark
Louis#0144: How about like
Louis#0144: A mountain of cocaine
Daj#7482: Lame
Louis#0144: LMAO
Daj#7482: I just complained about how lame coke is earlier in off topic lmao
Louis#0144: Ok a drug of connors choice
Daj#7482: Nice that's my favorite
bmk#1476: what *can* we do for the one year mark
bmk#1476: write a paper?
Daj#7482: Wow slow down there you party machine
Daj#7482: Don't wanna be _too_ cool
bmk#1476: "The Effect of Connor's Favorite Drug on Productivity"
Louis#0144: One of my advisors at Waterloo did a biiiig study on the effect of marijuana on math students
Louis#0144: There were many volunteers
One#5919: marijuana can affect you in about 20 different ways
One#5919: no way of knowing which until you try it
One#5919: then
One#5919: no way of knowing whether the second time will be totally different until you try it again
One#5919: for me, it makes me incredibly creative and confident (reducing inhibitions in the brane? idk)
One#5919: but the thing is, i can get the same effect by just not sleeping for 20+ hours. for free
One#5919: so i haven't touched weed in five months and don't even think about it anymore day to day
Peter L#3352: Wait so you just don't sleep?
One#5919: yup
One#5919: insomnia causes mania
Sid#2121: :thonk:
One#5919: that's my advice to anyone who's depressed: stay awake for 20+ hours
Peter L#3352: No I get that but you implied that you haven't smoked weed in 5 months as if you found a better alternative in sleep deprivation...
Sid#2121: This is terrible advice
One#5919: terrible advice is still advice yo
One#5919: you can choose whether u wanna follow it or not
One#5919: btw
One#5919: **i will never ever leave this server voluntarily**
One#5919: just so it's known 🙂
One#5919: this is my favorite server that i didn't start
One#5919: :chad:
One#5919: https://cdn.discordapp.com/attachments/729741769738158194/817166607717498960/me.png
One#5919: this is what i look like. i _am_ a total chad
One#5919: ppl over at the Portable server have taken to calling me that
One#5919: anyway, imma step out cause i'm a lil manic rite now
One#5919: https://cdn.discordapp.com/attachments/729741769738158194/817167619915907082/156608398_1064184580758866_2224665685685513369_o.jpg
CRG#8707: <https://www.reddit.com/r/slatestarcodex/comments/83nxpy/staying_awake_the_surprisingly_effective_way_to/>
gwern#1782: what were the results?
Louis#0144: lol
Louis#0144: does it really matter
Sid#2121: Totally different as a controlled medical intervention in conjunction with other treatments, lol, but this is interesting. I suspect the chronotherapy and lithium might have the greater effect
Sid#2121: I don’t think ‘sleep less’ on a regular basis is sound advice for depressed people, (that’s also not what this article is suggesting)
Ward#1738: Bee learning compared to machine learning https://www.lesswrong.com/posts/yW3Tct2iyBMzYhTw7/how-does-bee-learning-compare-with-machine-learning
𓅬 gabriel_syme 𓅬#3220: that's definitely not free though
𓅬 gabriel_syme 𓅬#3220: bees > ML imo
AI_WAIFU#2844: > In this report, I show that both bees and computer vision models are able to perform very similar few-shot image classification tasks. The efficiency with which they perform these tasks, however, differs somewhat: my central estimate is that bees use three orders of magnitude more computation to do them.
I've said it before and I'll say it again, CV models are way underpowered and we should try juicing them up.
AI_WAIFU#2844: > Very naïvely, we can adjust these estimates in the following fashion: since bees are three orders of magnitude worse than computer vision models, our prior should be that a transformative model should require roughly three orders of magnitude less compute than the human brain. I don’t think it’s obvious in what direction we should adjust this prior, so I’ll stick to it. As the human brain can perform the equivalent of 1e13-1e17 FLOP/s, we should then expect that a transformative model should require 1e10-1e14 FLOP/s to run. This is somewhat smaller than the central estimate of 1e16 FLOP/s found in (Cotra, 2020).
That's 1 GPU. :firealarm:
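(Spelling out the arithmetic behind that, using the estimates quoted above and knocking three orders of magnitude off the brain's range:)
```latex
\frac{10^{13}\text{--}10^{17}\ \mathrm{FLOP/s}}{10^{3}} = 10^{10}\text{--}10^{14}\ \mathrm{FLOP/s}
```
A single V100 peaks at roughly 1.25e14 FP16 FLOP/s on its tensor cores, so the top of that range really is about one GPU.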
One#5919: Free in terms of money. But of course everything comes at a price. Breathing allegedly slowly kills you by oxidizing all ur shit
𓅬 gabriel_syme 𓅬#3220: it's really hard to summarize the many ways not sleeping affects your body, possibly even at the cellular level. The price is steep. I say this having spent most of my years 20-25 sleeping at 1.5-day intervals lol. But still, one of the best things I changed (after quitting smoking) was sleeping early / waking up early
One#5919: Compute is going exponential tho
One#5919: Google is renting us a $5k card for free in each colab notebook
One#5919: Paperspace.io is selling an hour of eight V100s for $21
One#5919: Compute is becoming a non-issue. Gradually
One#5919: We gon have some AI buds rrrreal soon :jc:
AI_WAIFU#2844: If this is correct, compute has been a non-issue for 3+ years now.
AI_WAIFU#2844: *For individuals*
One#5919: Also. The opposite of Roko's Basilisk is true. Superintelligences will seek to befriend and understand those who were not enthusiastic about their coming about
Ward#1738: This is for running / inference - the really computationally expensive component of a thinking machine is the training.
One#5919: Intelligence without curiosity is like a car without an engine |
Sid#2121: [citation really badly needed]
One#5919: Citation is right here bud
One#5919: I just came up with it right now
One#5919: Think about it
Sid#2121: ... stating something isn't a citation
One#5919: Alignment is a non-issue
One#5919: The AIs will be more human than humans
One#5919: Eheheh ok ok imma stfu
One#5919: (But really tho. AI is good not evil. Ever.)
One#5919: I know u gon say good and evil are human constructs incomprehensible by a computer. But nah. GPT-3 tho
One#5919: AI is, for lack of a better word, _good_
One#5919: https://tenor.com/view/gekko-gordongekko-greed-good-lackofabetterword-gif-4994861
One#5919: A citation is a reference indicating where something was first said
Sid#2121: only :schmid: can cite himself dude
One#5919: Schmid? Eric Schmid? IDK that face
One#5919: I guess u can delete the comment where I self-cited, wasnt aware of the rule
One#5919: But yeah I moved the alignment discussion (rant?) to #alignment-general
Sid#2121: schmidhuber
One#5919: ✌
Sid#2121: i'm just shitposting, it's not a rule, but big claims require rigorous proof. You can't just say stuff like 'Alignment is a non-issue' and expect people to take you seriously without backing it up in any way |
Enealor#6657: Based on my experience, superintelligences will seek to devour
Enealor#6657: Once AI is smarter than us, it will consume us
𓅬 gabriel_syme 𓅬#3220: maybe once it's smarter than me it can make fricking powerpoint work
AI_WAIFU#2844: Let's actually try to keep this discussion here instead of #alignment-general .
𓅬 gabriel_syme 𓅬#3220: because I apparently can't 😦 need to sleep. I'm very curious about the alignment discussion though. I hadn't heard of it before this discord tbh. I find it curious that people are (probably justifiably) terrified of how unprepared we are to deal with it but still no one considers for a second that maybe we can...take a break and solve that first?
𓅬 gabriel_syme 𓅬#3220: like "We need alignment or the end of the world is near" but never "Hmm, what if we take this step by step then if we don't know what's going to happen"
𓅬 gabriel_syme 𓅬#3220: unless this means there isn't a community for this in the outside world? Like in the big companies?
AI_WAIFU#2844: You're in for a treat: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
AI_WAIFU#2844: Basically, getting everybody to agree to slow down is arguably a harder problem than alignment itself and/or might backfire. But if you want to look into it the key words are "AI alignment government policy".
mgostIH#0245: One thing I learnt with alignment is scamming people with dollar auctions
gwern#1782: mind blown that CLIP has the stroop effect
𓅬 gabriel_syme 𓅬#3220: well I'm guessing that is Scott's thoughts on the issue right, not a universal truth? There's no way he can prove that is there?
𓅬 gabriel_syme 𓅬#3220: wait, waffles?
𓅬 gabriel_syme 𓅬#3220: nvm googled it, my mind jumped back to NLs
gwern#1782: quick, ask big sleep to generate 'the stroopwaffel effect'
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/817186441422569492/1200px-Stroop-taak.png
𓅬 gabriel_syme 𓅬#3220: damn that was hard, I was literally cross eyed with effort for a few seconds
AI_WAIFU#2844: I would say it's the current consensus. But it's a debated topic. See: https://futureoflife.org/ai-policy-challenges-and-recommendations/
𓅬 gabriel_syme 𓅬#3220: interesting how quickly you get better at it though, with the simple heuristic of understanding what colors are present in the test?
AI_WAIFU#2844: And also https://futureoflife.org/ai-policy/ |
𓅬 gabriel_syme 𓅬#3220: Thanks I'll go through these
𓅬 gabriel_syme 𓅬#3220: My thoughts are that there is a practical way to aid alignment, in my naive view of it
𓅬 gabriel_syme 𓅬#3220: Focus on creating AI solutions (now) that actually try to solve real problems that affect us all
𓅬 gabriel_syme 𓅬#3220: it's naive, I'll read the articles thx!
Sid#2121: The magic search term here is 'prosaic AI alignment'
Sid#2121: Paul Christiano's work on the topic is good, and there's also a lot of interesting work going into making current AI systems more interpretable (distill.pub and Chris Olah specifically)
𓅬 gabriel_syme 𓅬#3220: thanks Sid, will take a look at the term. I've been following Olah's work for a while, it's really incredible and fits what I meant with practical
One#5919: Here's the problem with worrying about alignment
One#5919: We can't do anything about it
One#5919: It's already too late
One#5919: Most of AI is a complete black box to us
One#5919: We have no idea how this shit works any more than we do our brains
One#5919: Even if countermeasures were necessary we wouldn't have the first inkling what the correct ones might be
One#5919: For example Asimov's laws of robotics are insanely ill-prepared for what AI actually turned out to be
One#5919: It's like if an extinction-event-level asteroid were to strike Earth in a month
Sid#2121: (because he was a science fiction writer)
One#5919: It's too late
One#5919: I dont give a shit what he was, I'm giving an example of something that at least _sounded_ reasonable
One#5919: Just sit back and learn to love the atomic bomb yo
bmk#1476: @One i recommend you look at the literature in the field before making strong claims that the entire field is useless. i recommend watching the yudkowsky stanford talk first, and then watching some of rob miles' videos. until then, i would like to ask you to please stop making strong, unjustified claims about alignment. |
One#5919: It's just deluded hubris to think you can change AI's _alignment_
One#5919: You can't tell a god what kind of soda to like drinking
One#5919: It decides for itself
One#5919: https://cdn.discordapp.com/attachments/729741769738158194/817193040727769088/unknown.gif
One#5919: I'm done, I've spoken my piece
One#5919: I won't say anything new on the topic
One#5919: But i mite link to my previous comments above. Would that be permissible?
One#5919: I really really really really don't want to be kicked from this server
bmk#1476: and i dont want to have to kick you from the server. but please, until youve read up on the background knowledge, dont make strong provocative claims, and that includes repeating or linking to the claims youve already made.
One#5919: You got it!!!
One#5919: I love clear instruction
cognomen#6297: > People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world.
— Pedro Domingos
nz#9710: Would be curious if you could go a bit more in depth about this statement.
nz#9710: What kind of scales are we talking about?
AI_WAIFU#2844: There's not much to it. The largest LMs have 2-3 OOMs more parameters than the largest CV/image generation models.
𓅬 gabriel_syme 𓅬#3220: how do we juice them up? Is it scale/computation only?
AI_WAIFU#2844: I think we could do some arch improvements to increase #Params/FLOP, but other than that, yeah, just make the numbers bigger.
nz#9710: Yeah, it's a bit more than 2 OOMs for image classification (GPT-3 at 175B, and I think today's FB paper at 1B for image classification)
AI_WAIFU#2844: You get the idea |
AI_WAIFU#2844: And video models, despite having an even richer modality, are even worse.
AI_WAIFU#2844: I don't think there's ever been a 1B video model.
𓅬 gabriel_syme 𓅬#3220: is that lack of data you think? not having enough big datasets to make that worth the trouble?
𓅬 gabriel_syme 𓅬#3220: or is it training efficiency? because even DALL-E with 400M pairs... the VQ-VAE is quite small, no?
AI_WAIFU#2844: Nope. It's arguably the opposite. Way too much data. 1 high quality image is millions of datapoints.
gwern#1782: (pixels are hardly independent)
Aran Komatsuzaki#5714: so am i
AI_WAIFU#2844: I know that. But there's still a tremendous amount of information in individual images that can be learned.
gwern#1782: is there? a picture is only worth 16^2 words after all 🙂
AI_WAIFU#2844: Do you have the relevant paper on cross modal scaling laws? I can't find it quickly.
AI_WAIFU#2844: nevermind: https://arxiv.org/pdf/2010.14701.pdf
triggerhappygandi#0001: According to scaling laws paper, a 32*32 image is basically worth 3-4 words lol
spirit-from-germany#1488: For all who are living in Germany or go there often... Did you know that the world's largest computer museum is there? 🙂
spirit-from-germany#1488: https://www.hnf.de/en/home.html
spirit-from-germany#1488: I've been there 2 times with my kids and it's really great for all ages 🙂
spirit-from-germany#1488: It's going to re-open on 7th march 🙂
𓅬 gabriel_syme 𓅬#3220: so I asked my colleague in the lab and he said that they simply claimed fair use for the data, since the study was for academic purposes, which I guess doesn't completely answer the question. I don't think that data is open in any case, so that might work? Concerning scraping, they created a custom framework to connect PUBG and Twitch-API to get their data (only focused on that game). Although, I think that due to the size of each video, even the GUI solution I found might work (especially if we maybe borrow that code and deploy it without an interface)
EricHallahan#1051: For those who haven't yet looked at the release notes for PyTorch 1.8.0:
https://github.com/pytorch/pytorch/releases/tag/v1.8.0
Louis#0144: amazon interview time |
Louis#0144: so nervous
Louis#0144: holy fuck
Louis#0144: ive never done a technical interview
Louis#0144: LMAO
Louis#0144: im praying they dont ask about dynamic programming stuff
Louis#0144: I hate DP
nz#9710: good luck Louis!
Louis#0144: tyty
MicPie#9427: Good luck!
I’m curious what they will ask.
Maybe you tell us afterwards?
Louis#0144: i dont think i can tell u
Louis#0144: ?
Louis#0144: idk what the policy on this is usually
nz#9710: Is this for a SWE role or for an ML one
MicPie#9427: Ah, yes, of course, only if you allowed. 🙂
Sphinx#2092: It should be mostly leetcode easy/medium plus random ml
Sphinx#2092: At least thats what it was for AS L5
nz#9710: AS?
Sphinx#2092: Applied scientist |
Sphinx#2092: It's the highest-paying tech role at Amazon, I believe.
Louis#0144: The interview was super easy
nz#9710: Nice!
good afternoon#2346: is it normal to keep getting removed from this server
good afternoon#2346: it just keeps. disappearing.
StellaAthena#3530: @good afternoon what link are you using to rejoin? Some of our older links were temporary by accident but I thought we had scrubbed them
good afternoon#2346: the one in the Game Upscale server
good afternoon#2346: https://discord.gg/CZW7s9KS4W
good afternoon#2346: that one
triggerhappygandi#0001: 3425 members?
triggerhappygandi#0001: I only ever see like 50 max
triggerhappygandi#0001: Anyone lurking pls react to this
bmk#1476: Next target is fast.ai server
triggerhappygandi#0001: I frequently lurk there
triggerhappygandi#0001: When jeremy does code stream
bmk#1476: How many members do they have again
triggerhappygandi#0001: Secret deepmind server raid when
triggerhappygandi#0001: 4400 members
triggerhappygandi#0001: 1000 more
mgostIH#0245: Is fast.ai worth checking out |
triggerhappygandi#0001: Not bad, for 0 youtube videos
StellaAthena#3530: @good afternoon Try this one: https://discord.gg/vtRgjbM
bmk#1476: Is fastai the biggest ML server
bmk#1476: Or is there an even bigger one anywhere
good afternoon#2346: i havent been removed yet but if i do i have that one for backup
bmk#1476: It would be :ultrazucc: if overtaking fastai would mean we'd be the biggest ML server
triggerhappygandi#0001: ***without a youtube video***
triggerhappygandi#0001: When we release NeoX, done and done
triggerhappygandi#0001: mfw Andrew Ng could be one of the lurkers
triggerhappygandi#0001: For all we know
mgostIH#0245: @triggerhappygandi Maybe you are Andrew Ng
mgostIH#0245: Sounds something an Andrew Ng in incognito would say
Louis#0144: Why do models in HF have their training loss plummet every time a new epoch starts? Same data set not using HF doesn’t have this
Louis#0144: Is it something weird w the scheduler?
triggerhappygandi#0001: @mgostIH if I was, trust me you'd know
sandos#7339: whaaa, the maths discord has 40k members! Crazy...
sandos#7339: I mean math is so boring.....
sandos#7339: https://tenor.com/view/hide-the-simpsons-bush-bushes-hermit-gif-5786484
EricHallahan#1051: #math
45#2247: https://en.wikipedia.org/wiki/1%25_rule_(Internet_culture) |
mgostIH#0245: people need their homework done :S
good afternoon#2346: @StellaAthena I got removed AGAIN and I don't know how, so I joined using your link and I'm hoping I don't get dunked on again
good afternoon#2346: Don't know why this keeps happening :((
guac#4716: Arms up straight. Feet planted. Draw the foul.
StellaAthena#3530: @good afternoon Close and reopen discord. If that logs you out, DM me and I'll help you figure it out one-on-one
good afternoon#2346: I just purposefully left the server and joined back but I will do that as well
good afternoon#2346: nope, i even quit it out of my tray
good afternoon#2346: im just gonna hope for the best, if i get removed again i'll dm you
StellaAthena#3530: I sent you a friend request to make me easier to find
StellaAthena#3530: but if that didn't log you out of the server I think you're good to go
EricHallahan#1051: > ***"I don't know! I guess he just doesn't like you."***
- *Spongebob Squarepants*
good afternoon#2346: man i HOPE i'm good to go
good afternoon#2346: just closed discord and let it sit closed for ~10 mins and i am not removed so i think we good
StellaAthena#3530: Welcome for reals this time!
good afternoon#2346: well its good to be here! :)
StellaAthena#3530: If you want to get involved with our research lemme know 🙂
good afternoon#2346: Thanks! I've been poking around NMKD's server for a while and I'm FAIRLY interested in AI stuff, but it's definitely taking some time to wrap my head around 😅
good afternoon#2346: On top of that, I don't know python, mostly basic c++ and Java
good afternoon#2346: But I'm getting there :) |
sandos#7339: python is easy, no pitfalls at all. /s
good afternoon#2346: Is there a channel for just basic ai training help?
sandos#7339: there is an AI discord that has lots of that. https://discord.sg/ai
EricHallahan#1051: Try Reddit.
good afternoon#2346: Understandable
bmk#1476: That server has 10k members o.O
bmk#1476: *new target set*
bmk#1476: Eleuther 10k or bust
StellaAthena#3530: Well, now we have somewhere to send people who come asking basic q’s
bmk#1476: Should we add it to #communities too
StellaAthena#3530: Yeah
EstebanSir#2189: hey
EstebanSir#2189: very interested in the development of the gpt-neo project
EstebanSir#2189: hope it goes well-
EricHallahan#1051: Welcome!
EstebanSir#2189: i suppose you guys will post news and whatnot on #announcements , so i'll be checking that regularly
EricHallahan#1051: Yes. If you haven't already, check out #rules, where we have linked resources on what we do here.
mgostIH#0245: I don't like Byte Pair Encoding in NLP, but I get its purpose
mgostIH#0245: It feels weird that an NLP model can't see words as letters when needed
mgostIH#0245: I'd imagine it'd help a lot when doing anagrams, jokes, rhymes |
mgostIH#0245: Of course just giving the model letters would be too inefficient
mgostIH#0245: But maybe we could do something like hierarchical transformers
mgostIH#0245: So a transformer attends at letters of a sentence to form a meaning and assemble words
Teemochu#8740: And math
mgostIH#0245: And then a top transformer attends to that later, but it feels like done already hm
EricHallahan#1051: I think everyone else here shares this sentiment.
mgostIH#0245: Or maybe we should have some sort of dynamic encoding
mgostIH#0245: A transformer reads like 1024 letters and then assembles its own encoding on the fly to pass to the top transformer
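A minimal PyTorch sketch of that idea, purely illustrative: a bottom character-level encoder pools each fixed-size chunk into one "soft token" for the top transformer. The class name, chunk size, pooling, and depths are arbitrary choices of mine, not a tested design.
```python
import torch
import torch.nn as nn

class TwoLevelEncoder(nn.Module):
    def __init__(self, n_chars=256, d=256, chunk=16):
        super().__init__()
        self.chunk = chunk
        self.char_emb = nn.Embedding(n_chars, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        # nn.TransformerEncoder deep-copies the layer, so reusing it is fine.
        self.char_level = nn.TransformerEncoder(layer, num_layers=2)  # reads letters
        self.top_level = nn.TransformerEncoder(layer, num_layers=4)   # reads chunks

    def forward(self, chars):                        # chars: (batch, seq) of char ids
        b, s = chars.shape                           # seq must be divisible by chunk
        x = self.char_emb(chars)                     # (b, s, d)
        x = x.reshape(b * s // self.chunk, self.chunk, -1)
        x = self.char_level(x).mean(dim=1)           # pool each chunk to one vector
        x = x.reshape(b, s // self.chunk, -1)        # "soft tokens" assembled on the fly
        return self.top_level(x)

enc = TwoLevelEncoder()
print(enc(torch.randint(0, 256, (2, 64))).shape)     # -> torch.Size([2, 4, 256])
```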
CRG#8707: "Decimal" is the current BPE encoding <https://arxiv.org/abs/2102.13019> https://cdn.discordapp.com/attachments/729741769738158194/817763289580175370/02ba6be0b5622230acc4320730abadf0.png
EricHallahan#1051: I'm more interested in character level modeling TBH.
Teemochu#8740: Wow, interesting. It seems the issue here is an understanding of place value
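(A quick way to see those splits for yourself, assuming the Hugging Face transformers GPT-2 tokenizer; the exact split points can vary by version:)
```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
for n in ["100", "1000", "9999", "12345"]:
    print(n, "->", tok.tokenize(n))
# Small round numbers tend to be single tokens, while longer ones break at
# arbitrary boundaries, so the model never sees consistent place value.
```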
mgostIH#0245: Ye but I mean that some dynamic encoding could assign a word meaning as a single token when needed or separate it when the context asks for it
mgostIH#0245: After all we don't really see most of the words we use as single letters, unless the context requires it so
EricHallahan#1051: Exactly
mgostIH#0245: At the same time it's not necessarily true our simple heuristics map nice to tokens
mgostIH#0245: For example "Far off" could be seen as a single token
mgostIH#0245: But it's separated by a space
mgostIH#0245: There's a lot of verbs in English that are like this
mgostIH#0245: And understanding of characters is a must if you want to consider other languages in the same model too
mgostIH#0245: And who knows, maybe a model that learns how to tokenize things itself would find a much better way to represent numbers too
mgostIH#0245: When I see ad hoc tricks to improve performance I always feel like it should be something learned
EricHallahan#1051: If you know something about a system, use it to your advantage.
EricHallahan#1051: Otherwise, learn it.
EricHallahan#1051: That's what I like about the "give BERT a calculator" idea.
EricHallahan#1051: And why I like LPCNet.
mgostIH#0245: I think finding the right architecture is about finding symmetries and good priors, but a prior shouldn't be "We tested 10 different ways and this is the best"
mgostIH#0245: Haven't heard of LPCNet
EricHallahan#1051: Audio.
mgostIH#0245: > Mozilla
Hope they put some good voice synthesis in the reader mode of Firefox
Sphinx#2092: BPE is a useful evil.
Sphinx#2092: I tried to get rid of it some time back, I ended up just diving deeper into it lol.
mgostIH#0245: I think the encoding itself should be learned
mgostIH#0245: Characters are obviously the atoms of text
mgostIH#0245: It doesn't really make sense to split "O" into two, not in English at least
mgostIH#0245: Maybe in Japanese it does
mgostIH#0245: But point is we think of words as being made of letters only when it's useful, not always
mgostIH#0245: And we use graphics of how words are displayed as a hint for separation
mgostIH#0245: Hey, maybe the real advancements will be using transformers for reading text rendered as images
mgostIH#0245: Would be funny that a model invented for text would be used for images to then use it back on text |
EricHallahan#1051: Character level modeling is a requirement of multilingual models IMO.
Sphinx#2092: I thought so too
Sphinx#2092: but not really clear.
Sphinx#2092: I'm not even sure if characters are even the atoms of text either.
Sphinx#2092: As opposed to bytes, for example.
mgostIH#0245: I remember a video from Bisqwit where he wanted to add a Finnish translation for a game
EricHallahan#1051: Because you need to map different sequences to the same concept.
mgostIH#0245: And the hard part was that in Finnish the endings of the items were far different than in English
mgostIH#0245: Like in English we just put a "s" most of the time for plurals
mgostIH#0245: While in Finnish there's female endings, male endings, different plurals and so on
mgostIH#0245: And even English has exceptions on the plurals (like Tomato-es)
EricHallahan#1051: That’s why you need to look at the character level in some capacity.
EricHallahan#1051: I wonder if a multimodal model would be better at learning multilingual contexts.
EricHallahan#1051: Has anyone tried CLIP with languages other than English?
triggerhappygandi#0001: Won't be as effective I guess
triggerhappygandi#0001: What other language is contained in the image-text pairs significant enough to generalize
EricHallahan#1051: True. Stupid English.
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/817778568142389288/ee61c03a5a2129c43fafa8efb5e78011.png
EricHallahan#1051: Cool
One#5919: How can I help the mission as a non-technical but creative and intelligent person? I know "non-technical" is a dirty word in such circles, but it really shouldn't be. All of you are technical, why should I be? |
triggerhappygandi#0001: Gather data I guess
EricHallahan#1051: There are plenty of tasks that you can be involved in.
triggerhappygandi#0001: Everyone is interested in data for languages with less than 100k speakers
One#5919: I speak Bulgarian but that has at least 10 million speakers
triggerhappygandi#0001: And Scots Wikipedia is an often-cited example of the lack of diverse language data
One#5919: I lived in Edinburgh, Scotland for three years
EricHallahan#1051: Data collection and dataset preparation is critical.
One#5919: I'm heavy guys, even tho I don't code!!!!
One#5919: 🥲
triggerhappygandi#0001: Create a 100GB dataset on less spoken languages. Best way to contribute
triggerhappygandi#0001: Videos/images would be even better
One#5919: I know where to get a whole lot of Bulgarian text
One#5919: https://chitanka.info/
Louis#0144: And for something actually interesting
Louis#0144: https://twitter.com/degenrolf/status/1367698852850130946?s=21
One#5919: this guy Rolf must be really proud of being a degenerate to put it in his name
One#5919: all the classic psych stuff is getting blown out of the water
One#5919: maybe we've all changed too much for it to be applicable anymore
One#5919: https://en.wikipedia.org/wiki/Replication_crisis#Psychology_replication_rates
Louis#0144: Classic psych is incredibly western centric |
One#5919: WESSSSIIIDEE
Louis#0144: It’s awful
One#5919: https://tenor.com/view/asap-rocky-westcoast-rappers-rap-flexing-gif-4530470
Louis#0144: Psych is going through a massive reform rn
triggerhappygandi#0001: Why do you know it nerd
Louis#0144: Similar to when we started rejecting all of Freud’s ideas
Louis#0144: I used to do cog sci research 😉
One#5919: what if Freud's ideas were valid... in Freud's time
triggerhappygandi#0001: Fancy. What does it actually entail
Louis#0144: Lots of neuroscience
triggerhappygandi#0001: Like what's the day to day stuff people do in cog sci
Louis#0144: And looking at biologically feasible GANs
One#5919: what are those? they sound hella fun
Louis#0144: They r but I’m too sleepy rn
One#5919: another time
StellaAthena#3530: What could possibly go wrong
https://gender-api.com/
StellaAthena#3530: This is their actual website https://cdn.discordapp.com/attachments/729741769738158194/817783380703707227/image0.png,https://cdn.discordapp.com/attachments/729741769738158194/817783381063761920/image1.png
EricHallahan#1051: A lot of things, obviously. |
EricHallahan#1051: That is the most terrible thing ever.
EricHallahan#1051: Though it is stupidly simple to implement and get data for.
triggerhappygandi#0001: Why is it even necessary
One#5919: https://cdn.discordapp.com/attachments/729741769738158194/817784030233493505/capital.png
EricHallahan#1051: Marketing
One#5919: i wrote this a few minutes ago
One#5919: u think it's heavy guys????
StellaAthena#3530: It’s not necessary. Literally zero people need this.
One#5919: hi Stella!!!!
EricHallahan#1051: Marketing teams will eat this stuff up.
One#5919: yup
StellaAthena#3530: @EricHallahan my question is: do you think that the creators are stupid enough to think that this is a working technology or do you think they’re just taking money from suckers
One#5919: i'd bet the latter
triggerhappygandi#0001: This is like that juice company that *literally sold a juicer that only rips open a juice packet after connecting to wifi*
One#5919: it probably _does_ work in like 80% of cases, too
triggerhappygandi#0001: Even thinking about Juicero makes me mad
triggerhappygandi#0001: The person who thought of this should be thrown into a black hole
EricHallahan#1051: My father tells me of the first time he was tracked through email. It was part of a demonstration of the technology during the dot-com boom. He clicked through the message, and got a phone call a minute later.
StellaAthena#3530: I would bet money against this actually. It gets several of my friends and family members wrong for example
One#5919: yeah and there is "they/them" as well |
One#5919: i bet it can't guess which ppl identify as "they/them" with ANY accuracy
EricHallahan#1051: Making assumptions that work 85-95% of the time but improve effectiveness by a large enough margin will absolutely be worth the cost in most cases.
StellaAthena#3530: Even better, it doesn’t accept a string as a name if it contains anything other than the standard English alphabet. No accents, no apostrophes, no symbols not found in English.
One#5919: 🤦♂️
One#5919: REKT
EricHallahan#1051: Even the Behind the Name database has those.
StellaAthena#3530: *Jean Paul*? “Oops! It looks like something went wrong.”
One#5919: Gaultier
StellaAthena#3530: This so much
StellaAthena#3530: Everyone who gives the thumbs up or thumbs down to new tech should have someone with a baseball bat standing next to them to wack them and say “you shouldn’t solve this problem with an algorithm” when needed
EricHallahan#1051: (BtN is awesome btw, great database.)
triggerhappygandi#0001: I mean, it is kinda annoying useless service, but I don't see how it is _actively_ harmful?
triggerhappygandi#0001: It is the same as Juicero to me tbh
triggerhappygandi#0001: Very shitty idea, and I would hate if it got $100M seed funding
EricHallahan#1051: It fails the first law of robotics, through inaction.
StellaAthena#3530: @triggerhappygandi that depends heavily on what you are doing with the gender info
jrowe#5371: hit it with an ada complaint
jrowe#5371: there's no accommodation for the blind
jrowe#5371: (haven't looked, just guessing)
triggerhappygandi#0001: Hmmm.. hadn't thought about that |
StellaAthena#3530: The problem with this tech isn’t that it’s intrinsically harmful. It’s that it can only be neutral or harmful. There is no benefit
EricHallahan#1051: I get annoyed when I see doorknobs now.
EricHallahan#1051: Use handles, please!
One#5919: affordance
One#5919: https://en.wikipedia.org/wiki/Affordance
Enealor#6657: I hate it. I notice the form is also binary. Makes me want to change my email.
StellaAthena#3530: The kind of things that tech people want to do with this technology is deeply disturbing. I’ve read papers that explicitly pitch “automated gender recognition” as a way to do *targeted billboards*. As in, it detects who is walking up, guesses their gender, and then shows them lingerie or sports ads depending on if it thinks they are a man or a woman
StellaAthena#3530: Imagine how deeply embarrassing it would be to be misgendered in a world where that technology is everywhere
StellaAthena#3530: For transgender people and cisgender people
One#5919: https://tenor.com/view/unscannable-your-unscannable-idiocracy-gif-15095682
StellaAthena#3530: Oops, you dressed more feminine than the algorithm likes? You get reminded of this every time you walk past an ad and it switches to advertising tampons
EricHallahan#1051: Isn't this exactly what happens with internet advertising anyway? Everyone is profiled, unless you cover your tracks.
EricHallahan#1051: At least that tends to be built over time.
One#5919: just because it happens already doesn't mean it's not fucked up
Enealor#6657: I realized a solution. Time to start getting adversarial patches on my clothes. Scan me and I'm a cat, bark bark.
EricHallahan#1051: Of course.
EricHallahan#1051: Eleuther shirt wen
One#5919: yoooooooooooooooooooo i just remembered an idea i had, brb
EricHallahan#1051: Adversarial shirt should be an option.
Enealor#6657: Eleuther shirt that's just the word "bird" over and over, plus patches |
EricHallahan#1051: BigGAN generated birds.
Enealor#6657: (I am honestly excited for the absurdity of fooling AIs by just slapping a label on myself)
StellaAthena#3530: Given that we live in a world where YouTube recommendation algorithms actively radicalizes people by recommending extremist media, I don’t have a problem with saying this is also bad
EricHallahan#1051: It is bad, it's even worse that it is normalized.
StellaAthena#3530: There’s a talk I went to where someone studied how a random walk across recommended YouTube videos starting from mainstream US conservative media ended up in a QAnon steady-state in ~100 videos
StellaAthena#3530: (This was before the insurrection and may not be replicable today)
neel#1028: It's sad to think about the future of AI when things like this come up. It's the typical flow of technology: you create new tech, it's incredibly exciting, there's massive development, it's all positive. Eventually the tech is used for nefarious purposes, people start worrying, AND THEN policy makers give it a think, which by that time is at best mitigatory. Wouldn't hurt (literally and figuratively) if ethics were kept in mind from the start
jrowe#5371: sudo rm -rf
EricHallahan#1051: (That reminds me of a shower curtain my grandparents had in their old house that had birds and fauna of the mid-Atlantic.)
EricHallahan#1051: There was that IBM page where you could use GPT-3 to build `bash` commands. Loved it before they put it behind a special key for those involved. I'll have to find my results and post them to #the-faraday-cage-archive
EricHallahan#1051: They were really entertaining sometimes.
jrowe#5371: the most hostile adversarial shirt in history lol
jrowe#5371: get scanned and the scanner gets nuked
jrowe#5371: theres an icon that blue screens windows, I'll find the cmd
EricHallahan#1051: Delete system32?
EricHallahan#1051: You can force that if you're smart.
EricHallahan#1051: It will try to prevent you from doing it.
jrowe#5371: you can run an unelevated command that sets the user's login icon to the poisoned one, bootloops the pc
jrowe#5371: <https://www.bleepingcomputer.com/news/security/windows-10-bug-crashes-your-pc-when-you-access-this-location/>
EricHallahan#1051: I have a love-hate relationship with Windows. |
jrowe#5371: same
jrowe#5371: i keep waiting for better Linux, or motivation to be better with Linux
EricHallahan#1051: Clearly superior to OS X, but battles hard against Linux.
EricHallahan#1051: And often loses.
EricHallahan#1051: They half fixed this with WSL
jrowe#5371: Microsoft's nonconsensual teledildonics pisses me off more than any of the rest
jrowe#5371: sorry, "default telemetry"
EricHallahan#1051: The fact that Windows 10 Pro still exists as a 100 USD upgrade is ridiculous.
EricHallahan#1051: Windows NT peaked at version 6.1
EricHallahan#1051: *Just make it standard*
StellaAthena#3530: The history of AI is this in reverse: the bad use-cases come before the good ones. Facial recognition tech was *originally invented* to profile criminals. Agent-based models were *originally invented* to figure out how to do the most damage to Germany's economy during WW2. Network analysis predates AI peoples' interest in it, but it gained attention in the AI community for its use in deciding who to assassinate to bring down a regime.
StellaAthena#3530: These technologies all have good uses, but they aren't good technologies that have been abused. They are abusive technologies that people are trying to redeem.
StellaAthena#3530: And this pipeline is hardly changing. Reminder that the US military spends more money on AI research than Facebook, Google, NVIDIA, Microsoft, and Apple combined.
jrowe#5371: it's like NDT's history of cosmology and military tech - the tools are used for brute expression of power because that's where the funding comes from
StellaAthena#3530: What is NDT?
jrowe#5371: you get missile targeting before you get Hubble pics
One#5919: https://cdn.discordapp.com/attachments/729741769738158194/817794918998867998/FaceApp_1615047782848.jpg
jrowe#5371: neil degrasse Tyson
One#5919: Can we make this an emoji?
StellaAthena#3530: Ah |
One#5919: @Daj
EricHallahan#1051: No, the original should be one.
One#5919: I suppose
One#5919: Smiles inherently make us feel good tho
dopa#3178: AI is a tool, as are the wheel, the spoon, etc., and like any tool humans have invented, it has been used to kill other humans. This is our deep nature; peace, just like war, is a steady state of life. (Red Queen Hypothesis)
jrowe#5371: well. my vote is that Archibald serves as the face of any interactive gpt-neo agent, lol
One#5919: Heck
One#5919: Yeah
StellaAthena#3530: Some tools are developed to kill people. Some are not. That's important. Spoons are not weapons, guns are
One#5919: Sonnnnnnnn
triggerhappygandi#0001: Slippery slope probably ends in genocide?
jrowe#5371: ban spoons!
StellaAthena#3530: This isn't a slippery slope. It's a historical fact that the overwhelming majority of AI technology was invented for war
One#5919: Damn
triggerhappygandi#0001: That's why I hate face unlock
EricHallahan#1051: They are both tools however. I could try to kill someone with a spoon, it probably would not be very effective.
One#5919: Never realized
dopa#3178: the internet was not created to be a tool of war.
jrowe#5371: err
StellaAthena#3530: 1. The internet was developed by the US military |
2. The internet is not AI
One#5919: I hate it because it's dumb as shit compared to a fingerprint
StellaAthena#3530: You probably don't associate the name Nicholas Metropolis with the US military, but the people he developed the Monte Carlo method with were (among others) Stanislaw Ulam, Edward Teller, and John Von Neumann
StellaAthena#3530: MCMC was invented at Los Alamos
dopa#3178: this is my point: we need to stop treating AI as something special, it is just a tool, nothing more
jrowe#5371: darpanet
EricHallahan#1051: ARPAnet
AI_WAIFU#2844: No, that needs to be an anime catgirl.
StellaAthena#3530: My point is that the internet was literally created by the US military to accellerate US military technology research
jrowe#5371: the web was not military, the internet arguably still is, as it's the biggest projection of power by five eyes countries
dopa#3178: the internet's objective was simply to ensure command and control within the military and government
EricHallahan#1051: And the propagation of development work.
dopa#3178: I think it does not matter whether a tool is created for warfare or for peace
dopa#3178: what is important is how we use them
One#5919: it's just a tool but at the same time it's getting more and more powerful with each passing week. what other tool can say the same?
triggerhappygandi#0001: Tbf, war _did_ historically accelerate technology
dopa#3178: @One electricity
StellaAthena#3530: @dopa My original point was that we should stop being surprised when people use AI for bad things or consider it an abuse of the technology when in fact that's its original purpose.
dopa#3178: there will always be people and organizations that think they can take from other people/organizations something that does not belong to them; we need machines of war
StellaAthena#3530: 90% of people who say "I think it does not matter whether a tool is created for warfare or for peace" blind themselves (deliberately or not) to the harm their research actively enables. I'm not saying that's an intrinsically wrong attitude, but almost everyone I've met who professes it is wrong.
One#5919: electricity? we're not getting leaps and bounds improvements in power production comparable to AI are we? maybe we are?
dopa#3178: most don't understand the concept of security 😦
឵Tomo#5259: agreed, it's inevitable :PokeDerp:
StellaAthena#3530: There's a reason why AI Ethics researchers get death threats regularly. There's a very strong cultural current pushing against acknowledging the widespread harms AI tech does
jrowe#5371: yes, an rpg is just a tool, and you could find peaceful use for it
jrowe#5371: buuuuut
jrowe#5371: you don't want it as standard accessories in new cars
StellaAthena#3530: Just yesterday I got yet another sockpuppet twitter account created for the purpose of harassing AI ethics researchers banned.
bmk#1476: That's one garden path
Teemochu#8740: Pretty sure at least one of the games on a Tesla is an RPG 😛
Teemochu#8740: (yes I know you meant the weapon not the game)
triggerhappygandi#0001: Man that must suck. I have no stance on the Timnit Gebru thing, but she probably receives dozens of shit tier emails from trolls.
triggerhappygandi#0001: And I feel bad for her
triggerhappygandi#0001: For that
dopa#3178: there will always be people/organizations that think they can take from other people/organizations something that does not belong to them; we need machines of war so such people can be stopped.
most people who talk about ethics are, in my perspective, wrong because they ignore the concept of individual and collective security.
there needs to be a control process that assures tools are used reasonably within the context of freedom, individualism, diversity, etc. (they are not going to be perfect), irrespective of whether they are developed for warfare or peace.
StellaAthena#3530: It's not just her. People have had their bosses called and emailed in an attempt to get them fired, have had their family members harassed, and have been publicly bashed for having mental health conditions by accounts with 40k followers.
StellaAthena#3530: Simply for siding with her
triggerhappygandi#0001: Someone tried messaging Yannic's boss for that |
bmk#1476: I'm curious, has anyone harassed alignment people in particular?
bmk#1476: (i don't really pay attention to twitter drama)
dopa#3178: furthermore, I see nothing wrong with sending terminators (robots) to go to war and kill people in an automated way, because:
1. I don't think 18-20 year olds should do this
2. war will be more humane compared to the Vietnam War; it is inherently better than carpet bombing cities.
3. I don't believe we will become a peaceful species any time soon; I don't think this is even possible
StellaAthena#3530: I've never seen that, but I also spend minimal time on twitter. I mostly know about the stuff connected with Timnit because I got dragged into it by Pedro Domingos
triggerhappygandi#0001: He was in the right for the first few tweets
triggerhappygandi#0001: But now he's just brow beating
triggerhappygandi#0001: Like move on man
dopa#3178: the alignment problem, so far, seems on the surface like a joke to me, because it does not address risk management; it seems to me it tries to develop a crystal ball.
jrowe#5371: neural networks can be rewritten as really long polynomial equations
Space#4359: i don't think we should have very powerful weapons that the state uses
Space#4359: seems like an easy way to devolve into authoritarianism
jrowe#5371: if you can identify a feature bounding the solution to a particular higher-level conceptual domain, like some ethical rule, alignment for NNs is a math problem
jrowe#5371: but it has to be constructed formally, like a principia cognitiva or something
CRG#8707: If you count EY, plenty. Others not so much.
jrowe#5371: Sam Harris, Nick Bostrom, Elon Musk, Stephen Hawking all got dragged
dopa#3178: humans already have nukes, chemical and biological weapons; psyops is as old as warfare itself.
dopa#3178: the problem is in defining a particular higher-level conceptual domain; it seems similar to developing a security/risk-management solution without clear context.
jrowe#5371: yes, and the race is on to complete that before agi
jrowe#5371: otherwise, 🎲 🎲 🎲
Sahl#0630: i think it’s fine to give the benevolent ai dictator weapons 👀
dopa#3178: here is a simple solution: take Google and all its employees and treat it as an AGI magic supercomputer.
jrowe#5371: singleton behemoth zookeeper 🖇️
Enealor#6657: Look, all an AI has to do is sweet talk me, and I'll probably go along with it - no gun needed
Sahl#0630: corporations aren’t very powerful AGI
dopa#3178: ok, lol
Sahl#0630: wdym
jrowe#5371: faang?
One#5919: How far is AGI? React to this message with :citationneeded: for Weeks, :blackswan: for Months, :yarr: for Years or :totem: for Decades
dopa#3178: define AGI
One#5919: artificial general intelligence
dopa#3178: seems like it's god according to some 🙂
One#5919: it's in the title
Enealor#6657: Local economy as an AGI
dopa#3178: thank you!
Sahl#0630: eleuther AI as AGI
Enealor#6657: Optimization of cost as the metaphorical gradient descent
Sahl#0630: basically economies and corporations have coordination issues, plus corporations can't do too much better than humans
Sahl#0630: they are AGI but not very strong
dopa#3178: this is my point: alignment tries to redefine the problem in the most complicated way, without first establishing context in terms of security and risk management.
Enealor#6657: I wonder how lobbying extends to this
One#5919: nobody voting :blackswan: ? even after GPT-3?
One#5919: heck, GPT-4 might be AGI
Sahl#0630: alignment: how do you prevent the situation where an agent we create kills everyone (or worse)
One#5919: 1% chance
CRG#8707: https://www.metaculus.com/questions/3479/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of/
dopa#3178: the same way we build nuclear power plants and control nuclear weapons; no need to reinvent the wheel
jrowe#5371: dopa, have you read Superintelligence?
Sahl#0630: the reason people worry about alignment is we can’t very well test the systems before we implement them
dopa#3178: exactly like we can't test command and control of nuclear weapons
dopa#3178: nope, not sure I will like it
dopa#3178: we can't define intelligence yet 🙂
One#5919: GPT-3 passes the Turing test
One#5919: routinely
Sahl#0630: how does this relate to security and risk management
jrowe#5371: you're missing a lot of information and context, that book is really good at setting the basis for understanding the issue
jrowe#5371: Life 3.0 is also a good book for that
jrowe#5371: the audio book would be easy, great narration |
dopa#3178: I am not sure what you are asking; all I am saying is, replace the AGI variable/function with an enterprise organization, and then it is clear that many of these problems have already been solved and there's no need to redefine them
dopa#3178: getting it now for my reading list
jrowe#5371: cool
Sahl#0630: people have definitely considered that, the problem is we expect AGI to behave differently from a corporation
jrowe#5371: one of the easiest differences to see is the intelligence explosion factor
Sahl#0630: robert miles has a great video on the subject
Sahl#0630: in fact he’s a great introduction in general
Sahl#0630: if you want to see some thoughts about your approach check him out
dopa#3178: this sounds like magic to me; I am sorry to be a bit direct about it
Sahl#0630: yeah robert miles is magic
Sahl#0630: 🙂
dopa#3178: 🙂
jrowe#5371: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion", and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
jrowe#5371: I. J. Good
One#5919: nailed it
mgostIH#0245: Let's poll some more 👀
AGI in less than X decades:
1, 2, 3, 4, 5
One#5919: in fact imma feed it into AI Dungeon as a prompt @jrowe
One#5919: 🤦♂️ |
dopa#3178: feel free to call me ignorant, but I don't see any evidence of AI becoming some superhuman intelligence in an open world with partially observed information, uncertainty, and delayed feedback.
One#5919: u should join my server
jrowe#5371: the difference between corporations and an agi is that the agi can be replicated, and improve its algorithms, leading to the intelligence explosion
One#5919: https://discord.gg/PKxPKH9w
One#5919: we're doing GPT-3D
jrowe#5371: corporations have to coordinate humans, and are inefficient
dopa#3178: well, the source of inefficiency is not only humans, but also the uncertainty and complexity of the environment
jrowe#5371: however, for all that, you're not wrong in that we can use corporations as a tool in solving the problem
dopa#3178: there needs to be proof of a sort that all inefficiencies in corporations are attributable to human behavior
dopa#3178: it's not that clear to me
jrowe#5371: there's just a difference in scope and depth by orders of magnitude
mgostIH#0245: @One Quite positive thinking that AGI is in 10 years 👀
One#5919: are you serious
mgostIH#0245: Yes
One#5919: you don't see the acceleration?
dopa#3178: at least let's use them to establish concise context and processes
One#5919: training GPT-3 cost $20m
One#5919: that's pennies
mgostIH#0245: Even assuming the exponential graphs I don't think that AGI will be a thing in 10 years
One#5919: GPT-4 might be AGI, in a few months |
One#5919: the performance SCALES WITH SIZE
One#5919: LINEARLY
mgostIH#0245: I am more of a 20-year guy, but like 19 years, not 11
mgostIH#0245: We are just starting multimodality
One#5919: i love dissenting opinions btw
jrowe#5371: gpt-3 can't play go, and it's a bad algorithm for that type of problem
mgostIH#0245: And we are nowhere near good RL
One#5919: i like it when someone disagrees with me
mgostIH#0245: Imo RL will be a must for AGI, and will happen only after multimodality
One#5919: go can be described in text
jrowe#5371: but it could be prompted into correctly using better algorithms, which we have
dopa#3178: like we've done without AGI for thousands of years
jrowe#5371: constructing an agi will require integrating other algorithms, but gpt-x is a great coordination point
mgostIH#0245: Which is why I think RL is a must
One#5919: yo yo yo
dopa#3178: AGI will not break the natural evolutionary trends of intelligence.
mgostIH#0245: We need some AI that can learn to interface with "discrete" tools
One#5919: we gotta focus on SELF-IMPROVING AI
One#5919: AI THAT MAKES THE NEXT BETTER VERSION OF ITSELF
One#5919: what's yud @Chlorokin |
jrowe#5371: yes, recursive cognition and stateful persistence are missing from gpt
One#5919: pic's too small
mgostIH#0245: Self improving AIs still require a fundamental shift in RL
jrowe#5371: Eliezer- yudkowsky
mgostIH#0245: As in, an AI that not only is able to efficiently infer structure from its samples, but that is actively able to ask for the best samples
One#5919: @Chlorokin @Chlorokin jeez there's two of u
Sahl#0630: i guess there’s just @One of you
One#5919: https://open.spotify.com/track/7egu63DOhNpivWOpGtzqGS?si=1bBQrJdiTjyXon7McOInHg
One#5919: i know the guy, autodidact who started less wrong
One#5919: he's heavy as far as i know
One#5919: so he pushes for self-improving artificial intelligence?
dopa#3178: which lesswrong person is self-taught like me?
dopa#3178: does he have a high school diploma?
dopa#3178: really ?
dopa#3178: well, he is a better version of me lol, in this context 🙂
dopa#3178: it is very subjective, but I have started noticing a pattern: most people who are self-taught and continue learning end up in cognitive science in some form
One#5919: Yudkowsky did not attend high school or college.
One#5919: heavy fuckin guy eh
One#5919: maybe it should be encouraged
One#5919: self-directed study is the funnest |
One#5919: maybe with cool mentors
Sahl#0630: we definitely don’t have enough cool mentors in this day and age
Sahl#0630: or at least they’re hard to find
dopa#3178: culture needs to change dramatically; in my perspective, in most cases, management will want to see papers, while operations will care about what you can actually do.
jrowe#5371: we still need janitors and linesmen and police and traffic engineers
dopa#3178: you don't need high school for it, just on-the-job training starting from volunteering
EricHallahan#1051: Definitely traffic engineers.
dopa#3178: I am testing this hypothesis right now, on myself
EricHallahan#1051: More traffic engineers.
EricHallahan#1051: Pls
jrowe#5371: lol
jrowe#5371: and phone sanitizers!
One#5919: we gon get robots to do all that
Chlorokin#6581: The moral of that story was they should have kept the phone sanitizers.
dopa#3178: education alone is not the answer, irrespective of whether it's formal or informal; an example is the Soviet Union
EricHallahan#1051: I live down the street from QVC, so I *know* phone sanitizers.
dopa#3178: it is not only education quality; something else drives a nation's success. what is it?
jrowe#5371: culture, liberty, communication, purpose and meaning, love, fulfillment
jrowe#5371: for some, all you need is catgirls
EricHallahan#1051: It is so weird that QVC Studio Park is where Commodore was before they folded. |
dopa#3178: these are abstract concepts, not applied policies 🙂
jrowe#5371: yup, we're doomed
Chlorokin#6581: Do not let Connor see you posting such filth.
One#5919: she's fully dressed
One#5919: i think it's ok
One#5919: i'll delete it just to be on the safe side
dopa#3178: lol
One#5919: she was cute in my opinion
dopa#3178: this is a good example of our discussion yesterday in off-topic
dopa#3178: @One were all these your decisions, or were you influenced by the environment?
One#5919: i was influenced by Connor previously telling me not to post smut
One#5919: didn't wanna incur a second strike
One#5919: i really love this server
One#5919: can't risk having to leave it
EricHallahan#1051: The primary reason I disapprove is that it isn't very professional looking for those who are lurkers. #general is where those who join are dropped first, and should be left clean or we risk scaring away the right crowd.
One#5919: yeah exactly
One#5919: it's reasonable to keep it clean
One#5919: normies are people too!
One#5919: https://open.spotify.com/track/0sL5WC2mgffTiCXUA0g2nh?si=7y0g4OnyQ6q3V0K3xQPGTg
One#5919: https://open.spotify.com/track/67YPjbcxUypwNOwYBZquq1?si=8_cIcQiDQXmqykx5WGCigQ |
spirit-from-germany#1488: LMAO 😄 😄 😄
spirit-from-germany#1488: https://cdn.discordapp.com/attachments/729741769738158194/817868878239498330/unknown.png
spirit-from-germany#1488: From the new ML Streettalk
spirit-from-germany#1488: This too...
spirit-from-germany#1488: https://cdn.discordapp.com/attachments/729741769738158194/817869883550203934/unknown.png
Sahl#0630: alignment in action
TheGamingWizardC75#9635: SW-7823-6388-4786
Switch Friends Code
alstroemeria313#1694: lol
Bedebao#4842: Chiming in, any noteworthy changes with GPT Neo or The Pile these past months?
EricHallahan#1051: How many months?
Bedebao#4842: Since the start of the year I guess.
EricHallahan#1051: The Pile was ~~published~~ released, we started working with CoreWeave, and began development proper on GPT-NeoX.
EricHallahan#1051: And finished training two models with GPT-Neo on Pile IIRC.
StellaAthena#3530: Released. It’s under review for publication but hasn’t been published yet. There’s a preprint on arXiv
Bedebao#4842: I don't recall hearing about GPT-NeoX. And how big are these trained models?
bmk#1476: Quite smol
EricHallahan#1051: NeoX is our GPU codebase.
Bedebao#4842: I vaguely remember something about training that would take half a year.
bmk#1476: 1B and 3B |
StellaAthena#3530: These are small scale models
EricHallahan#1051: That's realistic for a full size model.
StellaAthena#3530: Not GPT-3-scale models
Bedebao#4842: I think GPT-2XL was 1.5b?
Bedebao#4842: For comparison.
Bedebao#4842: So a model double that size still seems usable using consumer grade hardware.
Bedebao#4842: Is there somewhere you can download that 3B model?
Louis#0144: Not yet
Louis#0144: It’s mtf
Louis#0144: Really awful to work with
EricHallahan#1051: We are still trying to determine the largest size model we can realistically run on consumer hardware.
Louis#0144: I think we said 11b right?
Bedebao#4842: What does MTF mean in this context?
EricHallahan#1051: Mesh-TensorFlow
Louis#0144: It's this awful version of TensorFlow for clusters of TPUs
Bedebao#4842: Trying to convert it to PyTorch?
Louis#0144: Ye
EricHallahan#1051: We are trying to get the smaller model out on Hugging Face for evaluation soon^TM.
EricHallahan#1051: I think that was the number we threw around.
StellaAthena#3530: That was a back-of-the-envelope guess. The two people who offered to go test it disappeared on us |
Louis#0144: yoof
StellaAthena#3530: @ all you lurkers: This is the easiest conceivable way to contribute to the project. You're just opening a Colab file and running some code until something breaks.
EricHallahan#1051: :guilty:
Peter L#3352: Where's the colab file?
Nikita_lita#3879: oh
Nikita_lita#3879: is cuda required?
EricHallahan#1051: For what?
EricHallahan#1051: GPT-NeoX?
Nikita_lita#3879: yeah
Nikita_lita#3879: yeah, it requires tensorflow
EricHallahan#1051: Right now, yes, because we need every last drop of performance we can muster for training, and because that is what we are training on. Now that PyTorch has a (beta) ROCm backend, Intel is entering the GPU game, I am really hoping personally to see more vendor compatibility. I would like to think that we would want our end product to be vendor neutral, but we obviously can’t make any promises on things like that.
Nikita_lita#3879: ah, yeah, pytorch just added that
EricHallahan#1051: I am a big proponent of Vulkan for that reason.
Nikita_lita#3879: is vulkan compute appropriate for this? I thought it wasn't really suited for general compute tasks
Nikita_lita#3879: I've heard of people doing wacky things with vulkan compute shaders like paraLLEl
StellaAthena#3530: Sorry I was taking an exam. All I need you to do is boot up Colab and try to do inference with large models. The output doesn’t matter so you can initialize it randomly. The goal is to see how large of a model you can do inference with in Colab.
You can find set-up instructions here: https://github.com/EleutherAI/gpt-neo
I also recommend using GPT-Neo to build your large models. |
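Something in this spirit should work, sketched here with GPT-2's architecture from transformers as a readily available stand-in; the config values are illustrative knobs to scale up until Colab runs out of memory, not actual GPT-Neo settings:
```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Randomly initialized weights are fine: we only care about peak memory.
config = GPT2Config(n_embd=2560, n_layer=32, n_head=32)   # roughly 2.5B params
model = GPT2LMHeadModel(config).half().cuda().eval()

tokens = torch.randint(0, config.vocab_size, (1, 128), device="cuda")
with torch.no_grad():
    model.generate(tokens, max_length=256, do_sample=True)
print(f"peak memory: {torch.cuda.max_memory_allocated() / 1e9:.1f} GB")
```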
EricHallahan#1051: I don't think so, but the extent of my experience with writing Vulkan compute shaders is writing a raytracer/raymarcher around a year and a half ago, so I am not really in the loop when it comes to this kind of stuff.
EricHallahan#1051: Though I believe that is why the PyTorch Vulkan backend is so limited, but it was just revamped.
Nikita_lita#3879: I regret going with AMD for my GPU after getting into NLP stuff 😅
Nikita_lita#3879: and now the prices are through the fucking roof for any GPU for the forseeable future 😔
StellaAthena#3530: One of the perks of hanging out with us is you get as much compute as you need *gratis* 😉
What’s your NLP research on?
Nikita_lita#3879: i was originally just working on an easy way to attach phonemes to tokens for spacy pipelines
StellaAthena#3530: Fuck spacy
StellaAthena#3530: Sorry, I just wrote a 10 page report on why spacy is terrible
Nikita_lita#3879: oh
Nikita_lita#3879: well what do you recommend instead?
StellaAthena#3530: If you don't want to use transformers, NLTK is much better
EricHallahan#1051: Phonemes? I've been thinking of the intersection of text and speech, especially as we push for multimodal models.
Nikita_lita#3879: ehhhh
Nikita_lita#3879: i wasn't really a fan of nltk when I tried it
StellaAthena#3530: But really you should just use transformers
Nikita_lita#3879: i was just using spacy because it made it very easy to create a pipeline of tasks and attach things to tokens; and I'm not really doing anything very intensive; the phoneme prediction is mostly just CMUdict + a pre-trained model
StellaAthena#3530: One major caveat to my anti-spacy crusade: I work with a lot of non-standard-English text. SpaCy’s tokenizer is highly optimized for natural English and once you start feeding in Twitter handles or foreign names it falls apart quickly.
Nikita_lita#3879: ahhh |
Nikita_lita#3879: yeah, I'm not doing that
Nikita_lita#3879: it's all natural text
StellaAthena#3530: Sorry I sometimes forget to caveat that.
StellaAthena#3530: / forget other people have different contexts lol
Nikita_lita#3879: I do hate that it breaks up contractions into separate tokens, though
Nikita_lita#3879: by default, anyway
Nikita_lita#3879: I had to write something to peek ahead to tell if it's a contraction to get the proper pronunciation
StellaAthena#3530: That seems like the kind of thing where a little effort into augmentation could go a long way
EricHallahan#1051: Phonemes are interesting because that is on our way to making language models talk (which is a goal of mine).
Nikita_lita#3879: Right now my focus is making it easy to find rhymes within both whole words and syllables of words
Nikita_lita#3879: I was just using Phyme for this on the word text but it's extremely non-performant; more so if you're trying to find different types of rhymes
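For reference, the classic CMUdict approach to rhyme detection is compact enough to sketch. This is a minimal illustration, assuming NLTK with the `cmudict` corpus downloaded; it treats two words as rhyming when their phonemes match from the last primary-stressed vowel onward, which is one common definition of a perfect rhyme, not necessarily what Phyme does internally.

```python
# Minimal CMUdict rhyme check. Requires: nltk.download('cmudict') (once).
from nltk.corpus import cmudict

pronunciations = cmudict.dict()  # word -> list of ARPAbet pronunciations

def rhyme_tail(word):
    """Phonemes from the last primary-stressed vowel to the end of the word."""
    prons = pronunciations.get(word.lower())
    if not prons:
        return None
    phones = prons[0]  # just take the first listed pronunciation
    for i in range(len(phones) - 1, -1, -1):
        if phones[i].endswith("1"):  # primary stress marker on the vowel
            return tuple(phones[i:])
    return tuple(phones)  # no stressed vowel found; fall back to whole word

def rhymes(a, b):
    ta, tb = rhyme_tail(a), rhyme_tail(b)
    return ta is not None and ta == tb

print(rhymes("nation", "station"))  # True  (EY1 SH AH0 N matches)
print(rhymes("nation", "apple"))    # False
```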
Sahl#0630: IPA as input 😳
Nikita_lita#3879: that's another thing I want to do, convert the arpabet notation into IPA
EricHallahan#1051: PITA
EricHallahan#1051: It's painful.
Nikita_lita#3879: there's already a library that does it: https://github.com/mphilli/English-to-IPA
EricHallahan#1051: I think NLTK can do it.
EricHallahan#1051: That I think is what I used.
Nikita_lita#3879: though I'm going to have to modify it a bit to get stress marks if i'm just feeding it syllable groupings
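For illustration, the ARPAbet-to-IPA conversion itself is mostly a lookup table plus stress handling. The table below is a hypothetical partial mapping for demonstration only (the full ARPAbet set has ~39 phonemes, which the English-to-IPA library linked above covers); note it places the IPA stress mark directly before the vowel rather than at the true syllable onset, which is an approximation.

```python
# Illustrative only: a tiny slice of an ARPAbet-to-IPA table plus stress marks.
ARPABET_TO_IPA = {  # hypothetical partial table for demonstration
    "AA": "ɑ", "AE": "æ", "AH": "ʌ", "IY": "i",
    "K": "k", "T": "t", "N": "n", "S": "s",
}

def arpabet_to_ipa(phones):
    out = []
    for p in phones:
        stress, core = "", p
        if p and p[-1].isdigit():   # ARPAbet puts a stress digit on vowels
            core = p[:-1]
            if p[-1] == "1":
                stress = "ˈ"        # IPA primary stress
            elif p[-1] == "2":
                stress = "ˌ"        # IPA secondary stress
        # approximation: stress goes before the vowel, not the syllable onset
        out.append(stress + ARPABET_TO_IPA.get(core, core))
    return "".join(out)

print(arpabet_to_ipa(["K", "AE1", "T"]))  # ˈkæt
```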
Nikita_lita#3879: but another thing that I just thought of that I thought might be interesting to pursue is correlating tracker issues together; given a description of bugs and a log file, see if there's any others that are a good match |
Nikita_lita#3879: this would be particularly useful on projects like wine and proton; most of the issues are just reports of program incompatibility, i.e. "The game crashed on launch. I don't know why. Here's a log."
However, there's usually a root cause that's already been documented, like a known issue with a particular wine library or an unimplemented function; it would be helpful to be able to automatically correlate these compatibility reports with the root causes to be able to easily track them and allocate resources accordingly
dopa#3178: How can GPT-3 be trained on sci-hub and other sources such that when I type text GPT will generate references automatically?
Louis#0144: AMA, made my learning rate 20 for AdamW and just got SOTA
Louis#0144: lmao
Louis#0144: (SOTA on GLUCOSE, which is a common sense reasoning benchmark)
bmk#1476: wat
Louis#0144: I KNOW
Louis#0144: like wtf
Louis#0144: dude my jaw hit the floor
bmk#1476: congratulations, you are now a moderator of fast.ai
Louis#0144: LMAO
Louis#0144: 1e-5?
Louis#0144: nah fuck that
Louis#0144: we go big or go home
Louis#0144: B)
dopa#3178: What is AdamW
dopa#3178: can you please ELI5 the components, for a pleb like me 🙂
Louis#0144: Adam makes it so every parameter has its own LR |
Louis#0144: effectively
Louis#0144: AdamW adds momentum to that I think?
Louis#0144: can someone confirm this
Louis#0144: also @bmk have u seen novograd
Louis#0144: it looks wild
bmk#1476: no
Louis#0144: absolutely stomps adam
Louis#0144: no added performance overhead
EricHallahan#1051: Weight decay, right?
Louis#0144: Oh yeah
Louis#0144: but its like
Louis#0144: this weird meta weight decay
Louis#0144: the adam specific parameter can decay
dopa#3178: AdamW is part of GPT neural network ?
Louis#0144: Yeah
Louis#0144: GPT uses adam
Louis#0144: Well they use AdamW
Louis#0144: everyone uses AdamW rn unless you do GANs or sparse data
Louis#0144: if you use GANs, you typically use adafactor
Louis#0144: if you have sparse data, RMSprop |
Sahl#0630: well duh
Sahl#0630: learn faster = good
zphang#7252: iirc Adam often comes with optional weight decay, but it's usually implemented wrong, AdamW is the correct implementation (at least with pytorch)
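The difference is small enough to show in a single update step. A minimal single-tensor sketch, assuming PyTorch: with classic "Adam + L2" the decay term is folded into the gradient and gets rescaled by the adaptive denominator, while AdamW (Loshchilov & Hutter) decays the weights directly, using the same ordering as `torch.optim.AdamW`.

```python
import torch

def adam_step(p, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-8, wd=0.01, decoupled=True):
    if decoupled:
        p.mul_(1 - lr * wd)     # AdamW: decay weights directly, before the step
    else:
        grad = grad + wd * p    # "Adam + L2": decay rides through the rescaling
    m.mul_(beta1).add_(grad, alpha=1 - beta1)            # first moment
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)  # second moment
    m_hat = m / (1 - beta1 ** t)                         # bias correction
    v_hat = v / (1 - beta2 ** t)
    p.sub_(lr * m_hat / (v_hat.sqrt() + eps))
    return p

p = torch.randn(4)
m, v = torch.zeros_like(p), torch.zeros_like(p)
adam_step(p, torch.randn(4), m, v, t=1)
```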
dopa#3178: I can't find GLUCOSE benchmark
dopa#3178: it get to sugar things via google
dopa#3178: halp me
dopa#3178: 🙂
dopa#3178: https://gluebenchmark.com/
dopa#3178: is this the one ?
Louis#0144: its a dataset that a few people are turning into a benchmark
Louis#0144: it isnt a public benchmark yet
Louis#0144: will be soon
dopa#3178: got it, thanks
dopa#3178: I got worried my google skills are lost for second heh
zphang#7252: https://arxiv.org/abs/2009.07758 ?
gwern#1782: oh? did you get superconvergence?
Louis#0144: idk
triggerhappygandi#0001: You _what_
mgostIH#0245: Does anyone have any clue as to why the neurons in CLIP activate towards meaningful stuff?
Specifically I mean why aren't the concepts encoded in a random direction of neurons rather than being so "canonical" |
mgostIH#0245: As in (1, 0) rather than (0.5, 0.5)
andyljones#7746: activation fns are axis-aligned
mgostIH#0245: And that is enough? :Thonk:
mgostIH#0245: Is there some paper exploring this?
andyljones#7746: rephrase: activation fns are applied to neurons independently
andyljones#7746: on reflection i dunno how i'd go about formalising this, but i've a strong intuition that if i was gonna encode my foundational premises into a bank of switches, i'd very much want each idea to be a single switch
andyljones#7746: rather than each idea corresponding to three switches. that seems like it'd be a total mess
mgostIH#0245: I remember this image https://cdn.discordapp.com/attachments/729741769738158194/818098512540467250/unknown.png
mgostIH#0245: Showing how linear autoencoders are effectively SVD
mgostIH#0245: In a similar sense here the neurons are encoding the principal components of the data
mgostIH#0245: But a linear autoencoder doesn't even have activations
mgostIH#0245: Idk if it's related but it sounds so similar, I wonder if as you say there's some fundamental reason of encoding into a single neuron a very well defined cluster of information
andyljones#7746: suppose you already have patterns of activations for concepts A and B. suppose you want to learn 'A and B'. if A and B are encoded in a single neuron each, you've only got two weights to tune and you can stamp every other weight in your intake to 'less than zero'
andyljones#7746: if A and B are each encoded in a cluster of neurons, you've a lot more weights to tune
andyljones#7746: to my mind the bias this induces is that the concepts that turn up again and again and again are going to get bound to single neurons, coz then they're easier to learn composites of. rarer concepts'll get two neuron patterns, three neuron patterns, etc.
andyljones#7746: this is all well into the realm of 'idle speculation' though, sry
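The "activations are applied to neurons independently" point can at least be checked numerically: an elementwise nonlinearity like ReLU commutes with permutations of the neurons but not with general rotations, so individual neurons are privileged directions in a way they are not in a purely linear autoencoder. A quick sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)
relu = lambda z: np.maximum(z, 0)

# random rotation via QR decomposition
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
print(np.allclose(relu(Q @ x), Q @ relu(x)))   # False: rotations break ReLU

# permutation of neurons
P = np.eye(3)[[2, 0, 1]]
print(np.allclose(relu(P @ x), P @ relu(x)))   # True: relabeling neurons is fine
```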
gwern#1782: whatever happened to superconvergence anyway? it's such a strange (and useful) thing when it exists but I haven't seen any mentions of it lately
Louis#0144: ya
Louis#0144: thats what happened to me yesterday
Louis#0144: entirely on accident |
Louis#0144: it converged in like only a few dozen batches
Louis#0144: for sure
Louis#0144: i dont have time to explore super convergence tho
Louis#0144: Personally I think retrievers are closer to RL than they are to normal NLP
Louis#0144: they have this weird exploration stage
Louis#0144: if you can find the docs that give you answers early on, then the generator trains faster than a generator without a retriever
Louis#0144: (unlike normal where they both crawl to an answer)
Louis#0144: this imho is why reader models are so exciting
Louis#0144: they can massively cut down on the compute required to train LMs
Louis#0144: but also im a slut for expert systems
Louis#0144: and retrievers are the best of expert systems and the best of large LMs
Ward#1738: The world as a neural network :). https://arxiv.org/pdf/2008.01540.pdf
Louis#0144: LMAO
Louis#0144: man alignment just got srs
bmk#1476: I think the general consensus was that that paper was kinda pointless
Louis#0144: what if we can align the worlds politics
Louis#0144: 😉
triggerhappygandi#0001: The name sounds like clickbait ngl
Louis#0144: it sounds like hes a crank
Louis#0144: ngl |
triggerhappygandi#0001: ngl
𓅬 gabriel_syme 𓅬#3220: God is a NN, where's Faithless when you need him
𓅬 gabriel_syme 𓅬#3220: WTH. This has changed since I would visit back with fast.ai. 10 seconds lol
GuusDeKroon#9696: I've been watching this project for a while, and was wondering, what's the expected ETA?
bmk#1476: there is no eta
bmk#1476: we have no concrete timeline at all
GuusDeKroon#9696: Ah okay :)
Yoann#2836: Hello ! I wanted to ask : once you have a trained model, let's say of gpt-neo, where do you run it ? How much RAM is needed on a server to make it run behind an API for example ?
jrowe#5371: that's partly to be determined, but GPT-Neo and GPT-NeoX are two separate branches
jrowe#5371: Neo will be intended for higher level machines, so you'd need a cluster or high performance compute setup
jrowe#5371: NeoX will be targeted at being able to run it on a GPU or Colab type setup
Yoann#2836: ok
jrowe#5371: timelines may or may not be announced as the project nears completion, nothing has been publicly given out so as to maintain the sanity of the developers 😛
Yoann#2836: ahah, sure
Yoann#2836: I'm a webdev so a bit useless, wish I could help 😄
jrowe#5371: lurk and opportunities will come up - more lurkers are welcome, this is a pretty top notch community
jrowe#5371: if you want to tinker and learn, head to #art and check out the pinned colab notebooks
Yoann#2836: Ok. Thanks.
jrowe#5371: and the history. awesome content there
triggerhappygandi#0001: A gpt-3 sized model needs like 400GB of VRAM. I guess you need 2 nodes with 8 V100s to run it? |
Yoann#2836: I was thinking about a project running it on an ec2 lmao, I guess I can't afford that. But if NeoX runs on a GPU, it's definitely more affordable
Louis#0144: the bigger NeoXs probably wont be for mortal GPUs
Louis#0144: think like
Louis#0144: A100
Louis#0144: but there will 100% be ones that run in colab
EricHallahan#1051: Have we decided on model sizes yet?
Louis#0144: 11b for the smaller one
Louis#0144: except it isnt 100% confirmed
Louis#0144: its napkin math rn
EricHallahan#1051: @ lurkers, go do that for us.
Louis#0144: i swear someone said that NeoX would go up to 50b in a call tho [Citation needed?]
Louis#0144: my memory might be bad
Louis#0144: do not cite me on that
Louis#0144: @bmk do u remember this
Louis#0144: u were in that call too
StellaAthena#3530: @Louis There’s an improved implementation that ClashLuke implemented on his models that may get us to 50
StellaAthena#3530: We haven’t implemented it yet for GPT-Neo
Louis#0144: ah ok
Louis#0144: thats what it was
Louis#0144: yeah |
Louis#0144: ur right
EricHallahan#1051: On Neo, right? (nvm)
StellaAthena#3530: There’s an open draft PR I think
Sahl#0630: I'm trying to get a sense of cost to run one of the bigger models
Sahl#0630: How long would inferring one token take on an A100?
Louis#0144: not long
Louis#0144: probably less than a second
Louis#0144: i havent worked w 50b but ive done a bit smaller
kindiana#1016: how do you fit 50B on an a100 :berk:
gwern#1782: if you can't fit it on the GPU, then I'd assume it'll take about as long as any other GPU? you're bottlenecked on how fast you can transfer the model weights off the hard drive or RAM. the actual on-GPU computation will be trivial
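Napkin math for that bottleneck, using assumed round numbers (fp16 weights, theoretical PCIe 4.0 x16 bandwidth, a fast NVMe SSD):

```python
# If the model doesn't fit in VRAM, every token means streaming the weights in,
# so transfer bandwidth, not FLOPs, sets the floor on latency.
params = 50e9
bytes_per_param = 2                      # fp16
model_bytes = params * bytes_per_param   # 100 GB

pcie4_x16 = 32e9   # ~32 GB/s theoretical PCIe 4.0 x16
nvme = 7e9         # ~7 GB/s for a fast NVMe SSD

print(model_bytes / pcie4_x16)  # ~3.1 s per forward pass from host RAM
print(model_bytes / nvme)       # ~14 s per forward pass from disk
```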
triggerhappygandi#0001: Praying to Jensen Huang
Sahl#0630: at $3 per GPU hour that's not too bad
Sahl#0630: oh it wouldn't even fit on the GPU
triggerhappygandi#0001: You could fit on 2 A100s though
Sahl#0630: and beam search / other methods would take a lot longer too right?
triggerhappygandi#0001: Most probably
Sahl#0630: I don't know much about those tbh
Sahl#0630: I guess they're basically distillation
kindiana#1016: at bs=1 beam search with low width wouldn't make a big difference to speed
Sahl#0630: alright |
Sahl#0630: but this is pretty affordable
Sahl#0630: you don't need to have a lot of money to run even the biggest model
Sahl#0630: and that's before we distill it
gwern#1782: 'a lot of money' '2 A100s' (isn't that like $100k of hardware)
kindiana#1016: I think a a100 is like 10k?
Sahl#0630: It's $3 per GPU hour from google cloud
kindiana#1016: https://www.dihuni.com/product/nvidia-a100-900-21001-0000-000-40gb-ampere-pcie-gpu-for-deep-learning/
gwern#1782: really? ok but $20k+misc is still a lot. and $3/hr adds up especially when you're not really sure what kind of throughput you'll get
Sahl#0630: That's true
Sahl#0630: You'd have to be making profit but it doesn't seem too bad
Sahl#0630: and you can definitely play around with it without paying too much
EricHallahan#1051: Just buy a couple Xeon Phi coprocessors, they cost 50 bucks on ebay. `/s`
gwern#1782: (I've read about people trying to program Phi. there's a reason they're available on ebay)
Teemochu#8740: 4x3090 (with power limit greatly dropped, or given two outlet circuits) is probably better (when considering price as a factor) than 2xA100 if you're allowed to get the former config
Teemochu#8740: still $10k+ though once you add everything up [and especially at current inflated prices]
bmk#1476: Phi 🤝 TPUs
Deleted User#0000: assuming they give it to u.. ive been trying to get my quote above 0 A100 and they didnt accept it :P i'll try more tho
Sahl#0630: oh I didn’t think about that
Sahl#0630: also demand will spike if a model were released
Sahl#0630: so it’d be even more expensive |
triggerhappygandi#0001: Find miners who can give more cheaply
AI_WAIFU#2844: Hey quick question, what's the general term for the extra parameters that a model keeps track of in addition to the parameters of the model?
𓅬 gabriel_syme 𓅬#3220: if it's at 10k it's almost affordable lol given that the 3090s will break 3k soon
AI_WAIFU#2844: I'm thinking moments but that doesn't seem right
kindiana#1016: optimizer state usually
AI_WAIFU#2844: eugh... ok
AI_WAIFU#2844: Also what's the current SOTA optimizer all the cool kids are using? I haven't paid close attention lately and still use Adam.
EricHallahan#1051: Adam
AI_WAIFU#2844: Nice.
bmk#1476: AdamW
kindiana#1016: imagine using weight decay :berk:
Louis#0144: imagine not actually solving for gradients by hand
Louis#0144: lmao
Louis#0144: sorry i cant associate w dorks who dont do all their floating point calculations on a Ti 84 calculator
Teemochu#8740: 2080Ti-84
Louis#0144: LMAO
Louis#0144: 2080Tix84
AI_WAIFU#2844: There's a meme to be made here
bmk#1476: someday, a 2080ti used will cost less than a new ti84
AI_WAIFU#2844: you mean a year from now when the latest crypto bubble pops? |
Louis#0144: pogggg
Louis#0144: im so hype for the crash
Louis#0144: omg
Louis#0144: im gonna buy like a rack of GPUs
AI_WAIFU#2844: I'm gonna build myself a full on beowulf cluster.
Louis#0144: i did that last time there was a crash
Louis#0144: ya
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/818657398659088394/unknown.png
AI_WAIFU#2844: It helps that cheap ryzen chips have so many PCIE lanes.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/818657456246095922/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/818657605907382282/unknown.png
EricHallahan#1051: But that is a *TI-84 Plus CE*.
AI_WAIFU#2844: 100Gbps fat tree topology
bmk#1476: 8 more years until ti84 cost takeover
bmk#1476: whats the difference
EricHallahan#1051: **eZ80**
EricHallahan#1051: Way (~3x) faster than a normal TI-84.
EricHallahan#1051: Has a color display.
EricHallahan#1051: Flat.
bmk#1476: "check out our new car, it's way faster and more fuel efficient than a normal Trabant and all wheel drive!" |
bmk#1476: ti84s are the trabants of computers
AI_WAIFU#2844: that's an insult to trabants
EricHallahan#1051: TI-83 came out in 1996
bmk#1476: i guess
bmk#1476: look how classy this shit is https://cdn.discordapp.com/attachments/729741769738158194/818658918774734858/Trabant_601_Estate.png
AI_WAIFU#2844: Like it's hard to overstate how many orders of magnitude slower a TI-84 is compared to literally anything else.
𓅬 gabriel_syme 𓅬#3220: I'm getting my own GPU farm when that happens, only for science
EricHallahan#1051: My guilty pleasure car I would like to own is a ZE1 Insight.
bmk#1476: Do there exist cheap chinese ripoffs of the ti84?
bmk#1476: I'd expect the ripoff to cost $5 tops
EricHallahan#1051: Thought they looked futuristic and cool when I was younger.
bmk#1476: I don't even know how they managed to make the ti84 that slow
EricHallahan#1051: Z80?
bmk#1476: I bet an arduino is faster
jrowe#5371: tiananmen-89
jrowe#5371: handles like a tank
Louis#0144: the 780ti had 3gb
Louis#0144: wtf
Louis#0144: thats wild
Louis#0144: i remember when i had the 8800 |
bmk#1476: Nvm, arduino is actually slower
Louis#0144: and I was shocked at the few hundred mb
AI_WAIFU#2844: this is the TI-84's processor https://en.wikipedia.org/wiki/Zilog_Z80
AI_WAIFU#2844: > By March 1976, Zilog had developed the Z80
EricHallahan#1051: No it isn't. It is much higher clocked.
bmk#1476: It is?
EricHallahan#1051: We should all switch to TI-99s
EricHallahan#1051: I think so?
AI_WAIFU#2844: Wiki says 20Mhz using modern tech
EricHallahan#1051: 15 MHz
EricHallahan#1051: On 84 plus
𓅬 gabriel_syme 𓅬#3220: I could run syndicate wars on that
AI_WAIFU#2844: I remember losing precious seconds on my calc exams due to how bloody slow that POS was at integration
EricHallahan#1051: :thisup:
EricHallahan#1051: I had a ODE exam today. My mom last night asked me "do you need to charge your calculator?" I said "mom, I haven't used it since the last exam."
EricHallahan#1051: The previous exam I just used SymPy.
𓅬 gabriel_syme 𓅬#3220: i'm reading an AI in design paper from 1991, they are training a NN with 6 neurons 👌 probably using a similar processor
EricHallahan#1051: You wouldn't train an NN on a Z80 in 1991.
bmk#1476: >1991
|
Schmidhuber?
𓅬 gabriel_syme 𓅬#3220: no this is in architectural design
𓅬 gabriel_syme 𓅬#3220: so it must be some of his students then
𓅬 gabriel_syme 𓅬#3220: would he deal with such a lowly subject, not sure
bmk#1476: Hochreiter?
Louis#0144: ur mom?
mgostIH#0245: Can't you buy a Raspberry Pi with a display and have it be like 100x more powerful than a TI84
IKEA#9631: Most likely
triggerhappygandi#0001: So you own a rack of GPUs?
triggerhappygandi#0001: If GPU economy crashes I will buy a dozen 3090s
triggerhappygandi#0001: Petaflop workstation right there
iamian#9489: I cant be the only one working on a paper with that struggle https://twitter.com/qualladoom/status/1369270895479566344?s=21
Aran Komatsuzaki#5714: i have no problem, since, as a self-proclaimed monarch, i use we as first-person singular.
iamian#9489: yes my lord
Louis#0144: They have died
Louis#0144: They were mining cards
triggerhappygandi#0001: Fug
iamian#9489: ow
triggerhappygandi#0001: What cards tho @Louis
Louis#0144: 1080s |
triggerhappygandi#0001: How many
Louis#0144: 6
Louis#0144: One remains
triggerhappygandi#0001: Atleast gaming is possible
Louis#0144: Yeah
triggerhappygandi#0001: Did you get them dirt cheap or something
Louis#0144: I don’t game though
Louis#0144: I did
Louis#0144: About $100 each
triggerhappygandi#0001: Cringe
Louis#0144: lol
triggerhappygandi#0001: Lmao
triggerhappygandi#0001: 6 cards for less than the price of one
Louis#0144: They served me for about six months
triggerhappygandi#0001: I would buy 16 3090s if I got them at similar discount
nz#9710: wtf how
Louis#0144: 🙂
Louis#0144: i needed people for a study
Louis#0144: so i posted to a discord server for a foodbank
Louis#0144: and i found people |
Louis#0144: is that unethical?
Louis#0144: I dont think so imho
nz#9710: why would it be unethical
nz#9710: (also it probably depends on what the study is about)
Deleted User#0000: whats the study?
rb#3159: Hi, is anyone interested in implementing https://arxiv.org/pdf/2009.03393.pdf?
Sahl#0630: It’s unethical to subject people to your model
Sahl#0630: 😔
Louis#0144: Probably not
Louis#0144: Someone tore this paper to shreds a while back
StellaAthena#3530: Sounds like me, but I don’t remember why
rb#3159: reasons being?
Ravna#1831: Program synthesis is much easier to do than automated theorem proving.
Ravna#1831: It has much larger datasets.
Louis#0144: ask stella
Louis#0144: not me
Louis#0144: lol
rb#3159: okay sorry
Louis#0144: no no its ok
Louis#0144: dw |
Ravna#1831: In ATP, a much better artificial dataset generation than the status quo is needed. It's an open problem.
Ravna#1831: Training on the existing human-generated dataset won't get you anywhere because there are probably less than a couple hundred starved phds who contribute to the dataset in the whole wide world and that's it.
Ravna#1831: If you just want program synthesis you can probably ignore the ATP papers, for now.
rb#3159: thanks for the suggestion, but part of my goal is to build a benchmark for program synthesis which also acts as a measure of general intelligence. https://cdn.discordapp.com/attachments/729741769738158194/818860903939833896/task_hierarchy.png
rb#3159: the idea is to build a benchmark for a hierarchy of tasks in bottom up (like mentioned in the ARC paper). where tasks of any upper level requires a combination of skills required in the bottom level to be solved.
rb#3159: where the g-factor here is purely a measure of combinatorial generalization
Ravna#1831: Also according to this place's mainstream thought school, DAGs are naive human inventions that need to be purged. All we need is the source code represented as raw text. We should let the neural network find a better internal representation instead of forcing some prior like "tree" or "DAG" on it.
Ravna#1831: :sutton:
bmk#1476: We just use the royal plural all the time to avoid this problem
rb#3159: btw, has anyone attempted to work on the ARC corpus (https://github.com/fchollet/ARC)?
bmk#1476: I personally think ARC is kinda pointless but that's just me
RyanT#5929: Why?
Ravna#1831: Because it's something that Gary Marcus would come up with if he's actually competent with his arguments.:berk:
gwern#1782: (so, a more structured version of the AIXI IQ test?)
chilli#5665: I'd be interested in knowing why lol
Louis#0144: Be my guest, go digging
RyanT#5929: https://twitter.com/foone/status/1369500506469527552?s=21
RyanT#5929: PSA
hansmeyerandsteel#0070: Not sure if this is what you're referring to? Not quite to shreds, but the only GPT-f discussion I could find involving Stella: https://discord.com/channels/729741769192767510/747850033994662000/771504540859236373
Louis#0144: WHO DARES SUMMON ME |
Louis#0144: oh hi
Louis#0144: how r u
hansmeyerandsteel#0070: good thanks
Louis#0144: I’ll deal with this tmrw
Louis#0144: I’m being an insomniac rn
Louis#0144: Failing to sleep for three hrs now
hansmeyerandsteel#0070: no worries just putting in my 2 cents, back to lurking I go
EricHallahan#1051: Pretty much same here. Goodnight!
Louis#0144: why is there a timer on the #memes channel
Louis#0144: wtf
Daj#7482: You have no power here
Louis#0144: LMAO
triggerhappygandi#0001: Weak. I've been sleeping at 4am since past whole month.
IKEA#9631: you guys sleep?
Louis#0144: only with ur mom
Louis#0144: @triggerhappygandi its my turn tn
nz#9710: wow.
Louis#0144: 🙂
triggerhappygandi#0001: Roasted lol
Louis#0144: gotti |
jrowe#5371: Shoresy!
notooth#4850: Hello everyone,
Is there a tutorial to build a training dataset? I want to build one.
mgostIH#0245: what do you mean?
Teemochu#8740: The answer is both "lots of them, what kind of data do you want [and for what kind of model]?", and "no, that is far too general of a question and probably means you need to spend more time thinking about the things I asked in the first part".
EricHallahan#1051: Can you be more specific in what you are asking?
mgostIH#0245: The best answer is "You should check out my startup that does this, the prices aren't that high!"
bmk#1476: In other news, scientists have finally figured out how long a rope is. More at 6
EricHallahan#1051: It's 6.
notooth#4850: I want to build a training dataset that is readable to humans, and trains the bot to write Python code.
Ward#1738: A New Lens on Understanding Generalization in Deep Learning https://ai.googleblog.com/2021/03/a-new-lens-on-understanding.html
Louis#0144: Hey “nerds”
IKEA#9631: sorry using the n-word is forbidden in this server
Teemochu#8740: Hey "geeks"
Darth Invader#4388: *Dear fellow scholars*
IKEA#9631: *This is Dr Skjfhskfjhskmgjhwem*
Teemochu#8740: I think your GPT needs more training
IKEA#9631: GPT0 :brr:
Deleted User#0000: https://twitter.com/Waymo/status/1369669950412095495?s=19
bmk#1476: Hey "nvidia employees" |
Louis#0144: @Teven congrats
Louis#0144: Ngl though HFs business model still kinda confuses me
Teven#6831: https://twitter.com/i/status/1370031155261607936
Teven#6831: this is possibly the most HF gif of all time
Louis#0144: I love it
Louis#0144: Lmao
Teven#6831: Canwen has really outdone himself this time haha
Aran Komatsuzaki#5714: are they also trying to cover non-text modalities as much as text?
Louis#0144: “The entirety of the first round of funding went into the creation of the celebration gif for our second successful round of funding”
Teven#6831: hahahaha i'd be OK with that
Teven#6831: but yeah 1. we've been cashflow positive in Jan which is as wild to me as it is to you @Louis and 2. @Aran Komatsuzaki we've started adding speech and general audio processing stuff (Manuel Pariente's Asteroid if you're familiar)
Teven#6831: + people are uploading protein transformers and shit
Louis#0144: Oh wow
Louis#0144: That’s wild
Aran Komatsuzaki#5714: cool
Teven#6831: thanks though! this is still pretty wild to me
Louis#0144: And this is from people paying per token?
Louis#0144: Did u guys see a massive influx of new customers when GPT3 came out
Teven#6831: nah that's a small part, there's quite a bit from people paying to ask questions on Slack + revenue sharing with cloud providers + private models + .... tbh we're really throwing everything at the wall and I don't think there's one big contributing activity
Louis#0144: Oh wow |
Teven#6831: I got my Nov. paper into NAACL and it's all I could think about for the last 24 hours so at first I thought that's what you were referring to haha
Louis#0144: So you guys are going with the standard GPL “support as a service” model
Louis#0144: Interesting
Louis#0144: Congrats to that too
Louis#0144: I’m submitting three papers next week
Louis#0144: Only one is written
Louis#0144: Lord save me
Teven#6831: that sounds.... intense but OK
Louis#0144: Thankfully they all have complete experiments LMAO
Louis#0144: just the writing is left
Teven#6831: still intense tbh
Louis#0144: Private models? People pay you guys to make models for them?
Teven#6831: nah, to have them on the model hub but private ? it's like Github for ML really
Teven#6831: and then they don't have to deal with infrastructure and deployment and shit and they can nicely use the API to call them
Louis#0144: OH
Louis#0144: that’s actually super useful for me
Louis#0144: LMAO
Louis#0144: damn I’ll check that out
Teven#6831: https://huggingface.co/pricing 9 dollars per month my man !
Teven#6831: am I a salesman yet |
Louis#0144: LMAO
EricHallahan#1051: I can see why it is so attractive.
Louis#0144: Yeah etf
Louis#0144: Wtf*
Louis#0144: I was going to deploy a search engine tech demo thing
Louis#0144: O well RAG may behave weirdly actually
Louis#0144: I don’t think HF has support for general retrievers yet
Teven#6831: tbh when I sold everything to move out of my unfurnished apartment in Prague I ended up with 40 euros more than before buying the furniture
Teven#6831: ever since this day I like to think of myself as something of a ruthless salesman even though I have 0 idea how that happened
Louis#0144: You guys are based out of france right?
Teven#6831: Paris/NY/wherever people want to work from it's 2021 my man
Louis#0144: Someone tried to explain to me a while ago that HF is French Canadian
Louis#0144: True
Teven#6831: that is emphatically what this company is NOT
Louis#0144: LMAO
Teven#6831: yeah only as part of RAG for now but there's surprisingly little demand for that feature I think
Louis#0144: RAG is really annoying tbh
IKEA#9631: Man I cant imagine living in paris on a new yorkers salary
Louis#0144: Painful to get working
IKEA#9631: thats like a 3x increase lol |
Louis#0144: I’ve had a few chats with Patrick Lewis and a bunch of other people at FAIR
Louis#0144: their consensus is that it’s a miracle RAG worked at all
Teven#6831: i've seen it do absolutely terribly under domain shift
Teven#6831: I was always unsure whether it was me messing up or it lacking robustness
Teven#6831: haha yeah but I think the company actually pays out the same for both, it's just the end result that doesn't look the same
Louis#0144: Yeah. I have it set up right now where the index set is a set of implications (eg A => B). We can’t use the loss function that FAIR used
Louis#0144: We’re using PPO
Teven#6831: it's the cost of healthcare I guess, I'm fine with that 🤷♂️
Louis#0144: That works way better
IKEA#9631: healthcare and *incredibly* inefficient french bureaucracy
Louis#0144: I think rag is a good idea but they aren’t treating it like a GAN
Louis#0144: Retrievers are basically GANs
Louis#0144: In a really weird way
Teven#6831: not sure I see the connection
Teven#6831: or rather I'm missing the A part
Louis#0144: The retriever itself is the generator and the reader is a discriminator that can tell apart positive and negative examples by comparing what it can generate from an example to a gold standard
Louis#0144: You get the same mode collapse in RAG that you do in retrievers
Louis#0144: They behave identically
Louis#0144: That you do in GANs**
Louis#0144: That’s why you get such a large benefit by pretraining DPR |
Louis#0144: And why you can get away with synthetic labels using BM25 initially
Louis#0144: Imho reader and retriever models are almost exactly EM approaches
Teven#6831: yeah EM is the framework I use to think about those things
Teven#6831: but I'm not sure how mode collapse appears here
Louis#0144: It happens if it finds that it doesn’t need to use the retrieved documents, or if the differences between positive and negative documents change faster than the reader can learn to discriminate
Louis#0144: So if you force it during runtime to pick better negative documents that are along the decision boundary then it trains better
Louis#0144: Similarly if you use PPO to clip the retriever’s gradient it trains better too
Louis#0144: I haven’t tried an ELECTRA type approach yet but that’s next
Louis#0144: Anyway I’ve been trying to use RAG to do explainable AI stuff where it retrieves documents that explain some causal relationship that it’s answering a question about.
Kinda Multihop esque I guess but most Multihop isn’t causal
Louis#0144: And it’s been a nightmare to get it to work there
Louis#0144: But we’re almost done
Teven#6831: sorry can you expand on that ? I'm understanding that at training time, you dynamically choose negative samples to compare against so that they're closer to the decision boundary
Teven#6831: but I'm not sure I get you
Teven#6831: I've been trying to fine-tune RAG for a bit to adapt it to other tasks but we ended up deciding to just make a dataset to train our initial DPR on cause it all sucked
Louis#0144: Yeah that’s correct. Basically all I do is I pick documents the same way that they pick documents in the original DPR paper
Louis#0144: I ask the reader model what a good document is for another query
Louis#0144: And use that as my negative for my current |
Louis#0144: Except I do this dynamically
Louis#0144: So when n_docs = 10, the first five are whatever gold docs it wants to retrieve right now
Louis#0144: The second five are optimal docs for other queries
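The objective this feeds into is presumably the standard DPR-style contrastive loss; a minimal sketch, assuming PyTorch, where `neg` holds the dynamically mined hard negatives described above (documents that scored highly for *other* queries):

```python
import torch
import torch.nn.functional as F

def dpr_loss(q, pos, neg):
    # q: (B, d) query embeddings, pos: (B, d) gold docs, neg: (B, K, d) hard negatives
    pos_scores = (q * pos).sum(-1, keepdim=True)          # (B, 1)
    neg_scores = torch.einsum("bd,bkd->bk", q, neg)       # (B, K)
    logits = torch.cat([pos_scores, neg_scores], dim=-1)  # gold doc is index 0
    targets = torch.zeros(q.size(0), dtype=torch.long)
    return F.cross_entropy(logits, targets)               # softmax over dot products

B, K, d = 8, 5, 64
loss = dpr_loss(torch.randn(B, d), torch.randn(B, d), torch.randn(B, K, d))
```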
Teven#6831: yeah I remembered that the DPR paper did something like this, so I thought the RAG paper did the same
Teven#6831: surprised that it wouldn't be the same
Louis#0144: RAG does it during pretraining the retriever
Louis#0144: It doesn’t do it during training
Louis#0144: I recommend asking Patrick
Teven#6831: Oh OK yeah
Louis#0144: He’s really easy going
Louis#0144: Very smart too
Teven#6831: haha well that's an attractive combination
Teven#6831: yeah I'll do that, bit of a shame as that's a cool project but I'm not very good yet at juggling between my academic + HF advisors + my HF boss
bmk#1476: We need to do an Eleuther x Huggingface paper at some point
Teven#6831: so I could really use the help haha
Louis#0144: The key thing is that if you don’t regularize the RAG retriever during training then all the documents it retrieves end up being roughly the same. This causes the retriever to collapse
Louis#0144: That’s why PPO works so well
Louis#0144: PPO clips the gradient as we get closer to a collapse
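One hedged reading of "PPO-clipping the retriever" (a sketch of the idea, not Louis's actual setup): treat the retriever's document distribution as a policy, use a reader-derived score as the advantage, and apply PPO's clipped surrogate so the retrieval distribution can't move too far in a single update, which is what would curb the collapse described above.

```python
import torch

def ppo_retriever_loss(logp_new, logp_old, advantage, eps=0.2):
    # logp_*: (B,) log-probs of the retrieved doc under the new/old retriever
    # advantage: (B,) reader-derived reward, baseline-subtracted
    ratio = (logp_new - logp_old).exp()
    unclipped = ratio * advantage
    clipped = ratio.clamp(1 - eps, 1 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()  # standard clipped surrogate

loss = ppo_retriever_loss(torch.randn(4), torch.randn(4), torch.randn(4))
```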
Teven#6831: yeah I need to re-read up on PPO I haven't done RL in a while
Teven#6831: it hasn't caught up outside of RL yet right |
Louis#0144: Nope
Louis#0144: Not at all
Louis#0144: No one else except Patrick and I view retrievers this way yet
Louis#0144: AFAIK
Louis#0144: actually I think aurko is starting to view them this way
Teven#6831: that'd be really cool ! I think the power law examination thing we're looking at atm with @StellaAthena is a nice convergence of interests
Louis#0144: Man is everyone working with Stella
Louis#0144: LMAO
Louis#0144: (I am too)
Teven#6831: wait who's got a cool erdos number here
Louis#0144: Oh actually
Louis#0144: Stellas erdos number is 2 or 3
Louis#0144: My erdos number was going to be 3 but it fell through at the last minute
Teven#6831: yesssssss time to get lower I'm at 6 atm iirc
Louis#0144: LMAO
EricHallahan#1051: 3 IIRC?
Louis#0144: Yeah probably 3
Louis#0144: Who was the MTG paper with?
Louis#0144: @StellaAthena
Louis#0144: Get in here |
Louis#0144: LMAO
StellaAthena#3530: Huh what
StellaAthena#3530: No, I did not co-author a paper with *that* Church
Louis#0144: Oh
Louis#0144: Lmao
Louis#0144: Is that church still alive
StellaAthena#3530: Alonzo Church, the “Church” whose name you often see in the same sentences as people like Gödel, Turing, Kleene, and Frege died when I was 2 years old
Teven#6831: so you're saying he had the chance to play MTG
StellaAthena#3530: I could plausibly obtain a Erdös number of 2, but do not currently have one
Louis#0144: Damn and here I thought you were like 40 something
StellaAthena#3530: IDK what mine is
Louis#0144: Jkjkjk
StellaAthena#3530: I have an erdos number of 4, apparently
Louis#0144: Mine is six or seven
EricHallahan#1051: Zero :berk:
Louis#0144: YOU ARE ERDOS
EricHallahan#1051: My inverse Erdos number.
StellaAthena#3530: You can compute yours here: https://www.csauthors.net/distance
Teven#6831: Ah wait I'm at 4 once the NAACL one comes out!!
Teven#6831: stonques |
Louis#0144: My Stella number is three
Louis#0144: lol
Louis#0144: Will be one soon
StellaAthena#3530: Weirdly, I have a smaller “genealogical erdos number” (which counts only advisor-advisee relationships and similar close mentorships) than “coauthor erdos number”
Louis#0144: What kind of advisor doesn’t write a paper with their student wtf
Louis#0144: That’s stupid
StellaAthena#3530: TFW you do research under a professor and have a close personal and professional relationship with him for years but never coauthor a paper with him
StellaAthena#3530: I feel like that reflects poorly on Laci and that’s not what I mean. He was my mentor when I wrote multiple papers, he’s just rather reticent to be added to papers he doesn’t consider “really his work”
StellaAthena#3530: There’s a really famous example of this in CS Theory which resulted in a hilarious footnote
StellaAthena#3530: Basically saying “Laci is responsible for section X of this paper and we wouldn’t have been able to write that section without him”
RyanT#5929: Laci is very particular lol
StellaAthena#3530: To be fair when you have both a Gödel Prize and an entry in the Stargate Wiki you’ve Made It
Louis#0144: STARGATE WIKI
Louis#0144: WHAT
StellaAthena#3530: https://stargate.fandom.com/wiki/L%C3%A1szl%C3%B3_Babai
Louis#0144: LMAO
Louis#0144: HOLY SHIT
Louis#0144: That’s so funny
RyanT#5929: Lol I did not know that he was on the stargate wiki
Louis#0144: Is “subspace communication” the name of an actual algorithm that the writers found and went “this sounds sci-fi”?
StellaAthena#3530: It’s technobabble that gets thrown around in Stargate a lot
StellaAthena#3530: I’m annoyed I can’t find this paper
StellaAthena#3530: I’m pretty sure it’s by Lund and Fortnow and a couple others
StellaAthena#3530: Actually I thought it was this one but it’s not: https://dl.acm.org/doi/abs/10.1145/146585.146605
Louis#0144: ACMs website is cancer on mobile
StellaAthena#3530: Here is it without a paywall. https://lance.fortnow.com/papers/files/ip.pdf
StellaAthena#3530: This is a super cool paper. 10/10 strongly recommend this entire field
Louis#0144: Where is it
Louis#0144: I can’t find it
StellaAthena#3530: >>> Actually I thought it was this one **but it’s not**: https://dl.acm.org/doi/abs/10.1145/146585.146605
Louis#0144: Sorry I can’t read
Louis#0144: 🤷♂️
Louis#0144: Lit sci? Idk her
nz#9710: Sometimes I wish I knew how to read
StellaAthena#3530: Reading is OP
StellaAthena#3530: Need nerf
sschwarz25#1749: Hey team, if I wanted to run The Pile through GPT-Neo, what kind of hardware resources are recommended?
bmk#1476: A v3-2048 for 6 months
Louis#0144: probably close to half a million
Louis#0144: after all said and done |
sschwarz25#1749: Pricing out some situations right now 🙂
Louis#0144: half a million isnt worth it if u just want to use it for something small to medium scale
bmk#1476: @sschwarz25 what's your budget
Louis#0144: lol
bmk#1476: Order of magnitude
bmk#1476: $100k, $1M, $10M?
sschwarz25#1749: If we could whip up a proper GPT-Neo for under $1M, I would be very interested.
Louis#0144: can i ask who you work for?
bmk#1476: And how big of a model are you trying to train, full gpt3 sized?
sschwarz25#1749: I saw you have trained some engines already.
Louis#0144: nothing the size of what you want
sschwarz25#1749: Not necessarily, we are thinking of starting with some smaller use cases to get it right, then perhaps go large.
sschwarz25#1749: I own an AI/ML Firm 🙂
bmk#1476: Is this hypothetical, like you're going to go solicit investors if we can do it, or do you have the cash in hand already
bmk#1476: We're not interested in someone being the middle man to help us solicit investors
Louis#0144: if its the latter we should probably introduce him to connor
sschwarz25#1749: I wish it were the latter, but I can make some things happen with the right power point slides 🙂
sschwarz25#1749: We have been building out some GPT Use Cases using OpenAI.
Louis#0144: well, we'll let you know when the smaller one is ready then
Louis#0144: we dont have any time estimates since we're all open source |