bmk#1476: be the change you wish to see in the world
StellaAthena#3530: Typically “bigger is better” is cast as opposed to “work smart not hard”
voxs#0001: kek did google actually do that
StellaAthena#3530: Not really
StellaAthena#3530: They trained a fake model
StellaAthena#3530: If your 1T model doesn’t outperform a 100B model then it’s not a big deal
voxs#0001: why would they train a fake model
bmk#1476: lets not get into the weeds
bmk#1476: the tldr is we should still train a 1T (and that google's doesn't count)
bmk#1476: *1T or bust*
Big Fat Duck#0266: holy crap
Big Fat Duck#0266: there's all this extra setup for deepspeed
Big Fat Duck#0266: gonna be tough setting this up on that gifted cluster
bmk#1476: not that much more than mtf
voxs#0001: dang 1T sounds insane
voxs#0001: i joined this discord cuz i saw it on reddit and it sounded really cool
Singularity#9001: Can't wait to see what we'll have in 5 years
cfoster0#4356: *biased opinion incoming*
IMO this discord is one of the most interesting places to be a fly on the wall in AI at the moment
bmk#1476: equally biased but totally agree
bmk#1476: is it also one of the most interesting places to be *involved* though?
AI_WAIFU#2844: Probably, certainly much better than the average lab. I'd put it within a stone's throw of the larger industrial labs, if only because they probably have dedicated engineering teams and we have to do all of that ourselves.
gdawg16#0493: i know how to help !!!!!!
gdawg16#0493: i shall tell reddit.com/r/kubernetes to all come here
bmk#1476: hey, if we can get enough people we too can have a dedicated engineering team
axiom#3599: keeping up with this discord server is a full time job
cfoster0#4356: Lmao pls no
gdawg16#0493: ITS TOO LATE
gdawg16#0493: jk i haven't done anything
axiom#3599: you guys want some of my shitty poetry?
bmk#1476: *après ça, le déluge* (after this, the flood)
axiom#3599: do my people need me?
gdawg16#0493: where will you get kubernetes people then if not the great reddit.com
AI_WAIFU#2844: they'll hear about this place via word-of-mouth
bmk#1476: generally, there has been a negative correlation between quality and reddit origin, although there are certain outliers
axiom#3599: they will sense us on the wind
bmk#1476: the wind has been surprisingly effective
bmk#1476: for some reason, a ton of people know we exist
gdawg16#0493: cuz of reddit
gdawg16#0493: jk idk
sloth_.on._tabasco#9015: you guys are doing gods work
sloth_.on._tabasco#9015: <3
3dprint_the_world#6486: Sure, EY is great, but come on, that's a bit much
gwern#1782: you misunderstand: we aren't doing god's work, we're doing gods-work
3dprint_the_world#6486: ah right
sloth_.on._tabasco#9015: nah i just hate proprietary stuff with a passion
axiom#3599: i do need a hobby while hiring at MIRI is frozen
bmk#1476: there is no god up here, except the one we are building
axiom#3599: what features would you like god to have? i'll set up a strawpoll
bmk#1476: 1. does not turn the earth into paperclips
3dprint_the_world#6486: shower thought: God is an insect maximizer
bmk#1476: crab maximizer
axiom#3599: i'll fly it by the team, but Clippy the ai overmind seems really deadset on that one
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/800580649223454750/unknown.png
bmk#1476: :guilty:
jin.kee#9020: May the basilisk smile upon us.
bmk#1476: the basilisk is infohazards 101
bmk#1476: it only gets worse from here
Gabriel#0454: Is there any safe course or tutorial on infohazards?
Gabriel#0454: Given that I've already read most of the Sequences
Gabriel#0454: Also, one minute I think "These people need to collaborate with Gwern", next minute I see him here 😋
thenightocean#6100: There is Nick Bostrom's paper on that, I think.
thenightocean#6100: this should be the “company” motto or some sort of inspirational poster.
nz#9710: a thread about GPT-Neo is currently no. 1 on HN (won't link since last time you guys said it was better this way)
IKEA#9631: stonks
sepnax#5209: thats why im here now
sloth_.on._tabasco#9015: what's HN
nz#9710: Hacker news.
Aran Komatsuzaki#5714: Why is GPTNeo on HN rn? lol
Aran Komatsuzaki#5714: some people are asking for how to donate lol
Daj#7482: Oh neat we really are on HN
Daj#7482: We are, indeed, the hacker known as Eleuther
ale0sx#5274: im from HN too 🙂
Daj#7482: Welcome 👋
nz#9710: I guess someone saw the GPT-3 replication part from the website and decided it was worth a thread on its own
Aran Komatsuzaki#5714: makes sense. another possibility is that someone read the article by venturebeat.
triggerhappygandi#0001: It's a paid article
triggerhappygandi#0001: Who gives a paid article from the get go
AmazingTurtle#0001: yo guys im wondering.. is the #links channel the one i'm supposed to look at when i want to get into ML tech? i have some very basic knowledge already and I'm looking forward to getting a tighter grip on these things
Daj#7482: We're not really a beginner-focused Discord, so we don't have collected resources for beginners, #links is kinda a legacy grab bag of stuff
Daj#7482: I would recommend maybe looking into some of the servers in #communities , Yannic's server is quite beginner friendly
AmazingTurtle#0001: yeah i was about to mention that
AmazingTurtle#0001: thank you 😄
AmazingTurtle#0001: who's yannic and which discord server do you mean then? i don't see it in #communities
edit nvm i was blind
Daj#7482: Yannic's a great ML youtuber
l4rz#8278: downsized the gpt-neox model to GPT3-XL size (1.3B parameters). with ~500Mb of irc logs dataset it started to produce meaningful results after a couple of hours. 1700 iters sample https://cdn.discordapp.com/attachments/729741769738158194/800688594183389184/gpt-neox-sample-1700iter.txt
l4rz#8278: https://cdn.discordapp.com/attachments/729741769738158194/800688656146628608/Screen_Shot_2021-01-18_at_12.29.54_PM.png
Daj#7482: Neat! Is this on one GPU or multiple?
l4rz#8278: four v100s
Daj#7482: Nice
l4rz#8278: yesterday i tried to fit an 8.6B parameter model and it worked (tho it was impractically slow)
l4rz#8278: also @Sid dk whether you're aware or not, there's a russian team who used megatron + deepspeed to parallelize gpt training on 128 GPUs a couple months ago https://github.com/sberbank-ai/ru-gpts
triggerhappygandi#0001: No he isnt. Yannic is a pseudonym
triggerhappygandi#0001: He has been lying to us
triggerhappygandi#0001: He acted in _The Bourne Ultimatum_
triggerhappygandi#0001: And didn't get Matt Damon in vc once
triggerhappygandi#0001: smh
paws#3311: @l4rz i posted that message in #gpt-neox-devs, that is where the current experiments for replication are going on
l4rz#8278: ah shit i'm posting in the wrong channel
l4rz#8278: thx
paws#3311: its alright 🙂
parazyd#2104: Hi. Can someone contact [email protected] ? He would like to offer computing power or pay for it.
Daj#7482: DM me with more details if this offer is serious
triggerhappygandi#0001: We are turning into discount OAI
triggerhappygandi#0001: Just need a discount Azure now
Daj#7482: Me: Mom, I want OpenAI
Mom: We have OpenAI at home
OpenAI at home: EleutherAI
triggerhappygandi#0001: Me: :zucc:
triggerhappygandi#0001: "But mom where are 592 V100 instances for me?"
IKEA#9631: "I got 2 9600 GTs, take it or leave it":mesh:
Oju#1167: Hello. I am a CS undergrad; I recently submitted my first paper to IJCAI 2021. I want to put my group's work on arXiv and need an endorsement for the same. If someone is willing to spare some time to look at our work and help us out, we'll be very grateful. Thank you!
If this isn't the right place to ask, I apologize.
Igor Krawczuk#1653: can your supervisor not endorse you?
Oju#1167: It's an independent work by me and some of my friends
Oju#1167: okay, thanks!
andyljones#7746: fwiw, you should really reach out to whichever prof in your dept is closest to the topic of your paper. getting an ongoing relationship with a prof is way more valuable than the endorsement itself, and you're much more likely to get that kind of ongoing relationship from a prof at your school
Deleted User#0000: could literally call it that lol. Like SETI@Home if we crowdsource compute
Igor Krawczuk#1653: AGI@Home
bmk#1476: The gell mann amnesia from reading these comments is real
AI_WAIFU#2844: Hey at least it's not reddit comments
triggerhappygandi#0001: r*ddit
Fanforum#5501: Hello i would like to know where is the documentation to give compute power ?
I didn't find it on the website.
StellaAthena#3530: @Fanforum Hi there Fanforum! Right now we are not taking donations of computing power in general. Training GPT-3 is so expensive that it doesn't make sense to crowd-source the computing, and as a general rule we have enough TPU power for our other needs. If you're interested in getting involved we can always use more ML devs though 🙂
daerken#0607: Hi @StellaAthena, I wonder what you mean by "Training GPT-3 is so expensive that it doesn't make sense to crowd-source the computing". Why wouldn't it make sense? If 100k GPUs were available in the crowd-sourcing world, why would it be of no use? The only missing piece is the tech to use all these GPUs, right?
Daj#7482: Training NNs is extremely bandwidth bound, there is no current technique that works reliably across huge clusters outside of high speed interconnect datacenters
Fanforum#5501: Hmm ok, too bad.
pH#9867: Is there any idea how much memory will be needed to do inference on the gpt3-175b?
Daj#7482: Naively 350-700GB
pH#9867: ouch, thanks
Daj#7482: But that's likely to be off by a good factor
Daj#7482: and there are ways to optimize
daerken#0607: I understand; thus we need to find the missing piece of tech that would allow any device to participate in deep NN training. Thanks for the reply :).
pH#9867: Right, I think I read that FP16 is not being used now? I'm a bit of a noob, but wonder if/when the model is somehow available, I can actually use it on a beefy GPU at home.
Daj#7482: 350GB is the naive estimate for FP16 weights alone, not counting activations and the like
AI_WAIFU#2844: Like it might be theoretically possible to do it over the internet, but the nodes need to be big enough to fit the entire network + optimizer.
Daj#7482: There is some hope it can be distilled down and quantized to int8 perhaps for another order of magnitude shrink or so
Daj#7482: But it's unclear at this point how well that will work
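The 350–700 GB estimates above come straight from parameter count × bytes per weight. A minimal sketch (weights only; activations, KV caches, and optimizer state are ignored, so treat these as lower bounds):

```python
# Rough, weights-only memory estimates for a 175B-parameter model.
# Real inference needs more memory than this (activations, caches),
# so these numbers are lower bounds, not deployment requirements.

PARAMS = 175_000_000_000  # GPT-3 parameter count

def weight_memory_gb(params: int, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GB (10^9 bytes)."""
    return params * bytes_per_param / 1e9

fp32 = weight_memory_gb(PARAMS, 4)  # 700 GB
fp16 = weight_memory_gb(PARAMS, 2)  # 350 GB
int8 = weight_memory_gb(PARAMS, 1)  # 175 GB, if quantization holds up

print(f"fp32: {fp32:.0f} GB, fp16: {fp16:.0f} GB, int8: {int8:.0f} GB")
```

Each halving of bytes per weight halves only the weights-only footprint; whether model quality survives quantization or distillation is, as said above, a separate question.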
Daj#7482: I guess that could work but even then you have to average the gradients (and no one can just fit the whole thing at home lol)
AI_WAIFU#2844: I'm thinking it might work if you used one of the methods I brought up in #research
pH#9867: I remember GPT-2 converting to FP16 didn't make a difference in the inference results... I wish I would be able to help out, but anyway, good luck taking this one on!
Daj#7482: I'm not on the cutting edge of distributed SGD work, but I do expect some methods to eventually distill out (even so, currently it seems that with fast interconnect you can get close to linear scaling, so big players might not be incentivized to push this tech)
AI_WAIFU#2844: Data parallel computing with log(dimensions) communication bandwidth is nothing to sneeze at.
Daj#7482: Not at all, I didn't look at the methods you posted, but that sounds like a big gain
AI_WAIFU#2844: Yeah they got like a 40x bandwidth reduction for 100M models, at 100B that multiplier should be far larger.
Daj#7482: Is this something we can implement?
Daj#7482: Or are there some hidden gotchas?
AI_WAIFU#2844: There are probably hidden gotchas, I only read the paper. But my guess is that for our intents and purposes, microbatching is good enough and much easier to implement. It's not going to help if you're not saturating the bandwidth you already have.
joshlk#7357: Hi 👋 , I'm a ML Research Engineer and I have experience with NLP pipelines, Tensorflow, distributed computing and ML/NLP in general. I love the idea of the projects. Whats the best way for me to get involved?
StellaAthena#3530: @joshlk Welcome! I'll DM you
reconscope#7790: Hey everyone, I'm kinda new to ml, I have made some RL models in unity ML-agents and I was hoping to learn a bit about how the gpt stuff works.
reconscope#7790: Most of my skills are oriented around graphics.
StellaAthena#3530: Welcome! If you want to learn about LMs generally there's a pinned post by @bmk with some resources
reconscope#7790: Alright, thank you.
StellaAthena#3530: Personally I would recommend starting with learning about transformers:
Transformers 101: IMO this is the best intro to transformers: http://jalammar.github.io/illustrated-transformer/
Attention is All You Need: https://arxiv.org/abs/1706.03762
New blog posts on transformers and attention: https://www.reddit.com/r/MachineLearning/comments/kkgyag/d_how_transformers_work_in_deep_learning_and_nlp/
cfoster0#4356: Hey! 👋🏿 I'd also recommend checking out some of the Discords in #communities, which are great places to learn and get acclimated to ML
reconscope#7790: Thank you, also is there any type of math I should know before approaching this?
reconscope#7790: to be of use?
StellaAthena#3530: @reconscope The more linear algebra you know the better.
bmk#1476: https://discord.com/channels/729741769192767510/729741769738158194/736374402366832681
bmk#1476: Graphics people typically know a lot of linalg
StellaAthena#3530: Do they? I don't mean this as a slight @reconscope but in my experience they tend to overestimate their competency at anything that isn't doing computations in numpy
reconscope#7790: Thank you all for the resources.
reconscope#7790: I used a lot of trig for the stuff I did.
reconscope#7790: but I am no math pro
bmk#1476: I meant compared to me lol
triggerhappygandi#0001: How so
triggerhappygandi#0001: You _would_ have to have that whole chungus sit on .cuda()
Daj#7482: because I don't know the conversion from theoretical memory to actual inference memory
triggerhappygandi#0001: From the models I've worked with, both align pretty consistently. fp16 × 175B = 700GB
triggerhappygandi#0001: I am curious if this can be optimized, since it looks pretty much the minimum requirement to me.
Daj#7482: ¯\_(ツ)_/¯
triggerhappygandi#0001: Aaaa don't ¯\_(ツ)_/¯ me give me an answer.
Daj#7482: I don't know the answer lol
Daj#7482: I don't deploy things, I'm a researcher
triggerhappygandi#0001: I am wondering. GPT neo uses linear attention right?
Daj#7482: Nah
triggerhappygandi#0001: So what is it translating to in terms of irl memory reduction
Daj#7482: Global + Local/Sparse attention
mick#2835: I thought it was some "axial attention" thing (I haven't re-read that paper enough times to get clear on that yet.)
triggerhappygandi#0001: It doesn't?
bmk#1476: Ackschuyally
bmk#1476: It's 350GB
Daj#7482: That's implemented but it's more for images and stuff
triggerhappygandi#0001: @bmk forgive me for doing 16/8 = 4. Grug head not right today
triggerhappygandi#0001: Why do we not use reformer/Linformer kind of attention though?
bmk#1476: Linear attention GPTae delenda est!
bmk#1476: Linear attention is bad and evil
triggerhappygandi#0001: Why
Daj#7482: Didn't you hear? It's bad and evil
triggerhappygandi#0001: Why
Daj#7482: That makes it both bad and evil
nz#9710: All EleutherAI homies hate it
Daj#7482: haha
triggerhappygandi#0001: :chonk:
triggerhappygandi#0001: Why
triggerhappygandi#0001: Whyyyyyyy
Daj#7482: Actually there's just model quality loss and minuscule performance gain at these sizes
Daj#7482: It only makes sense for really really long sequence lengths
triggerhappygandi#0001: Man. I am underwhelmed. I thought it would be game changer
bmk#1476: Actually for gpt3-like models it's less efficient than regular attention and also worse
Daj#7482: Same, we went through all stages of grief
triggerhappygandi#0001: Why not just have much bigger seq_len then
triggerhappygandi#0001: It would make my wish of writing a book come true.
bmk#1476: because no
triggerhappygandi#0001: Please
triggerhappygandi#0001: I want my own Harry Potter
bmk#1476: this is an infohazard
triggerhappygandi#0001: Seq_len = 132072
Daj#7482: We have Harry Potter at home
Harry Potter at home: HPMOR
triggerhappygandi#0001: :zucc:
bmk#1476: If gpt can ever write hp quality novels, we're all going to die soon after
bmk#1476: If the fanfiction people ever find out that bigger models can do so, we will have 10T in a few months
bmk#1476: Therefore, the idea that a sufficiently big transformer can write high quality novels is an infohazard
triggerhappygandi#0001: Literotica gang rise up
bmk#1476: n o
triggerhappygandi#0001: Y E S
CRG#8707: TrXL caching (or any relative variant like the T5 bias) could do it. https://discordapp.com/channels/729741769192767510/729741769738158194/795312961190101003
mick#2835: bigger windows don't help, smarter architectures do
mick#2835: Some work found that 1024 len is already diminishing returns hard
Dromarion#3383: Writing that's good quality and writing that turns you on aren't necessarily the same thing. Coomers already make do with AI Dungeon, not that they would pass up better models though
triggerhappygandi#0001: Linformer says "ahcschually my best performance requires 65536 tokens" @mick
triggerhappygandi#0001: https://cdn.discordapp.com/attachments/729741769738158194/800793308703752216/unknown.jpeg
mick#2835: I think that falls into the "smarter architecture" thing if it works
triggerhappygandi#0001: It does.
triggerhappygandi#0001: Linear attention works god dammit
mick#2835: It's too hard. :'(
triggerhappygandi#0001: I guess we won't see it in images though
triggerhappygandi#0001: @StellaAthena should take a class someday to make us understand all the math in Linformer and performer.
triggerhappygandi#0001: I really didn't understand half of it
bmk#1476: Long context text transformer doesn't make sense, and neither does linear attention for text
bmk#1476: So you're not missing out
CRG#8707: I think performer did ok there https://discordapp.com/channels/729741769192767510/795089627089862656/797926442939449434
StellaAthena#3530: What class should I take?
triggerhappygandi#0001: To make us understand the math
triggerhappygandi#0001: In Linformer paper
cfoster0#4356: take -> teach?
triggerhappygandi#0001: Yeah
bmk#1476: Isn't linformer the one where they just multiply KV first?
bmk#1476: Or is that a different one
triggerhappygandi#0001: It is, iirc
bmk#1476: In any event it shouldn't matter because it's useless
triggerhappygandi#0001: I get reformer performer and Linformer mixed in my head
triggerhappygandi#0001: So what do for fanfiction? @bmk
bmk#1476: Don't
triggerhappygandi#0001: Do
LaPapaya#4347: Sup
LaPapaya#4347: Hey, I was thinking
Louis#0144: congrats
LaPapaya#4347: Probably the next thing openai will do is a new musenet version with gpt-3
Louis#0144: 👏
thenightocean#6100: Btw I updated the website with links to Github and discord on the home page and new FAQ page (copied the one from github). Should be easier for people coming there from Hacker news to navigate.
Deleted User#0000: I don't know you, but i'm so excited for gpt-neo project
LaPapaya#4347: Me too
Louis#0144: fuck the descartes meme caught me off guard
Louis#0144: LMAO
Deleted User#0000: Hello everyone! 👋 I am totally new here but I am curious where you guys raise funding for the compute resources.
CRG#8707: See #announcements
bmk#1476: Also see the info document
bmk#1476: It's in the channel description
brunex345#1653: Hi guys thanks for letting me be part of this fantastic movement
StellaAthena#3530: Welcome!
triggerhappygandi#0001: Teach us Performer @StellaAthena
StellaAthena#3530: What do you want to know?
triggerhappygandi#0001: I didn't get any of it
triggerhappygandi#0001: From what I understand it transforms the softmax attention into a space where it is just a dot product between two smaller matrices
StellaAthena#3530: Yeah that is roughly true
triggerhappygandi#0001: How does it happen
triggerhappygandi#0001: How do they know whatever they've done is working
triggerhappygandi#0001: Or that it is working as it should
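One way to see "softmax attention becomes a dot product between two smaller matrices": if you replace the softmax with a positive feature map phi, associativity lets you precompute phi(K)^T V. This toy sketch uses the simple elu(x)+1 feature map (from the linear-attention line of work, not Performer's random features, and not anything from the gpt-neo codebase):

```python
import numpy as np

def phi(x):
    # Simple positive feature map, elu(x) + 1; Performer instead uses
    # random features chosen to approximate the softmax kernel.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """O(n*d^2) attention: associativity lets us compute phi(K)^T V once."""
    Qf, Kf = phi(Q), phi(K)        # (n, d) feature-mapped queries/keys
    KV = Kf.T @ V                  # (d, d) summary, independent of n
    Z = Qf @ Kf.sum(axis=0)        # (n,) per-row normalizer
    return (Qf @ KV) / Z[:, None]

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = rng.normal(size=(3, n, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (8, 4)
```

The point is the shapes: phi(K)^T V is d×d regardless of sequence length n, so attention drops from O(n²·d) to O(n·d²). Performer's contribution is picking a phi whose dot product provably approximates the softmax kernel.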
StellaAthena#3530: Do you know how kernel methods work
bmk#1476: Is this the "kernel trick" thing
triggerhappygandi#0001: Yes
triggerhappygandi#0001: I don't @StellaAthena
triggerhappygandi#0001: 😅
StellaAthena#3530: Go learn that then
triggerhappygandi#0001: Could you give me a rough idea
triggerhappygandi#0001: I'm sure if I pull up a Wikipedia article I'd be just equally lost
StellaAthena#3530: Have you taken any analysis?
triggerhappygandi#0001: ... ok I will look at Anal 101
bmk#1476: :gameryes:
triggerhappygandi#0001: I'm on phone I can't see who is owoing
triggerhappygandi#0001: But I bet it's @bmk
bmk#1476: There is no evidence
triggerhappygandi#0001: There is in my heart
triggerhappygandi#0001: Btw, are kernel methods related to SVM? @StellaAthena
StellaAthena#3530: Yes
StellaAthena#3530: SVMs use them to not suck
bmk#1476: Are kernels a generalization of distances?
triggerhappygandi#0001: Good. That gives some anchor.
StellaAthena#3530: Kernels are inner products
bmk#1476: Are they more or less general than inner products?
triggerhappygandi#0001: Is convolution technically a kernel method too
bmk#1476: I'm guessing no?
StellaAthena#3530: No
StellaAthena#3530: neither
bmk#1476: The wikipedia definition is useless for intuition
triggerhappygandi#0001: If it uses a kernel, and does an element-wise product, how is it not a kernel method?
StellaAthena#3530: A kernel is any function $K(x, y)$ that can be written as $\langle\phi(x),\phi(y)\rangle_\mathcal{H}$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/800813234138775552/193204646687408129.png
bmk#1476: Ohhh
bmk#1476: Wait
triggerhappygandi#0001: So transforming into a different space is necessary
bmk#1476: Wait a sec
bmk#1476: That's.. basically the same thing as preapplying the transformation?
bmk#1476: I had always assumed it was a bit more .. *complicated*
bmk#1476: Why is it convenient to frame it as a kernel?
StellaAthena#3530: So you have $f:\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}$, and the output of that function can also be obtained by first lifting $x$ and $y$ to a hilbert space and then taking their dot product
bmk#1476: Rather than applying the transforms to the input and *then* doing the SVM, for example
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/800813834150739989/193204646687408129.png
bmk#1476: Why would you want to use this mental framing?
mick#2835: Wait do both instances of `phi` have to be the same map?
triggerhappygandi#0001: I see. So the Linformer paper literally only does this kernel trick, and nothing fancy on top of it.
StellaAthena#3530: Yes
StellaAthena#3530: @bmk because SVMs can only do linear equations
bmk#1476: No i know that
bmk#1476: What I'm asking is
bmk#1476: So to use a kernel you replace the inner product with a kernel right
StellaAthena#3530: No
StellaAthena#3530: you use an inner product to replace a non-linear function with a kernel
bmk#1476: Huh?
StellaAthena#3530: If you pretend that your original dataset was $(\phi(x), \phi(y))$ instead of $x,y$, then all of a sudden the data is linearly separable
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/800814368462995466/193204646687408129.png
mick#2835: Wait does `phi` have to be linear??
bmk#1476: Yes so this is why I'm asking
triggerhappygandi#0001: That's pretty much all the Performer does, right? @StellaAthena
bmk#1476: This is exactly equivalent to using phi to project your input before putting it into the SVM, right?
bmk#1476: Why not just.. think of it that way?
StellaAthena#3530: It is
StellaAthena#3530: that is how you think of it
Sphinx#2092: I think there's a certain part of the story missing.
Sphinx#2092: First of all, computing phi could be expensive
Sphinx#2092: if not literally impossible.
Sphinx#2092: Secondly, even if you could do it, you may not know what phi even is.
Sphinx#2092: The missing part of the story is really Mercer's theorem, which gives you some conditions on functions k such that they arise as inner products on some Hilbert space.
Sphinx#2092: This turns the game upside down, and allows you to simply use kernel functions without having to worry about what the underlying change of coordinates is.
StellaAthena#3530: Also the representer theorem and Riesz representation
StellaAthena#3530: (sorry about the half explainer, I'm highly distracted. Listen to sphinx)
bmk#1476: Ok wait so just so I'm on the same page
bmk#1476: In svms you're trying to optimize a thing with a $w^Tx$ term right?
TeXit#0796: **𝐛𝐦𝐤** https://cdn.discordapp.com/attachments/729741769738158194/800816110215626792/606987544235868219.png
bmk#1476: And the kernel replaces that with $K(w,x)$
TeXit#0796: **𝐛𝐦𝐤** https://cdn.discordapp.com/attachments/729741769738158194/800816197317820416/606987544235868219.png
bmk#1476: Which is equivalent to projecting w and x using phi but in some cases you can compute K easily but not phi
bmk#1476: Right?
Sphinx#2092: Yes. More to the point, you may know what K is but not phi.
triggerhappygandi#0001: @Sphinx but Performer somehow makes it work
bmk#1476: What is a practical example of that?
Sphinx#2092: Almost any kernel you can think of, I'm sure you don't know the phi, e.g. the Gaussian kernel.
Sphinx#2092: but you can also choose more exotic kernels e.g. Matern
bmk#1476: Gaussian kernel is using the gaussian pdf as phi?
bmk#1476: And so there's a simple shortcut to compute K?
paws#3311: Also the fact that the matrix multiplication is pretty costly, and using the kernel makes it computationally affordable? (The kernel trick is why SVMs are usable at all)
Sphinx#2092: No, you use the Gaussian kernel as the kernel.
Sphinx#2092: It's not necessarily matrix multiplication. It's more like phi could be mapping to an infinite-dimensional space, such as in the Gaussian case.
Sphinx#2092: which is, of course, impossible to do on a computer.
bmk#1476: Ok hold up, what *is* a gaussian kernel
Sphinx#2092: https://en.wikipedia.org/wiki/Radial_basis_function_kernel
bmk#1476: I thought I knew but apparently i dont
Sphinx#2092: I guess ML people call it the RBF kernel.
Sphinx#2092: Notice that when they write it as a sum, you can piece out what the phi is, namely it's some vector of "infinite dimension"
mgostIH#0245: Ye I think that what really matters is that it acts as a scalar product of a function of the terms
mgostIH#0245: Thus giving linear-like properties
mgostIH#0245: Like K(x, y) = <x, y> ^ 2
mgostIH#0245: It doesn't really matter that phi(x)=(x1x1, x1x2, x1x3, x2x1, x2x2, x2x3, x3x1, x3x2, x3x3) (for 3D vectors)
mgostIH#0245: You don't ever end up using phi
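mgostIH's example is easy to check numerically: for the polynomial kernel K(x, y) = ⟨x, y⟩², the explicit feature map phi is the flattened outer product, but the kernel side never builds it:

```python
import numpy as np

def poly2_kernel(x, y):
    # K(x, y) = <x, y>^2, computed without any feature map: O(d) work
    return np.dot(x, y) ** 2

def phi(x):
    # Explicit feature map for the same kernel: all pairwise products
    # x_i * x_j, i.e. the flattened outer product -- O(d^2) features
    return np.outer(x, x).ravel()

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 2.0])

# Both sides compute the same number; phi is never needed in practice.
assert np.isclose(poly2_kernel(x, y), np.dot(phi(x), phi(y)))
print(poly2_kernel(x, y))  # 20.25
```

The kernel evaluation is O(d) while the explicit features are O(d²); for the Gaussian/RBF kernel phi is infinite-dimensional, so the kernel side is the only one you *can* compute.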
mick#2835: Is $$K(x,y) = \frac{1}{1+\langle x\cdot y \rangle}$$ still a polynomial kernel?
TeXit#0796: **mick** https://cdn.discordapp.com/attachments/729741769738158194/800819505244536852/206886091494653953.png
StellaAthena#3530: No
mick#2835: If I call it "rational kernel" will everyone hate me?
StellaAthena#3530: yes
StellaAthena#3530: That's not a kernel at all
bmk#1476: ~~lesswrong kernel~~
mick#2835: Would you mind giving me some insight on why this falls outside of the scope of kernels while $K(x,y) = 1+\langle x\cdot y \rangle$ doesn't?
TeXit#0796: **mick** https://cdn.discordapp.com/attachments/729741769738158194/800820679221051433/206886091494653953.png
Sphinx#2092: It's not even defined for all x,y pairs...
bmk#1476: Phi here just tacks a 1 onto the end of each vector right?
StellaAthena#3530: There isn't a Hilbert space whose inner product computes it
mick#2835: So I'm guessing that screws up some useful machinery later?
mick#2835: I apologize for adding burden to a scarce resource here. I just really appreciate learning how to apply new techniques and kernel methods have sortof just stayed in a corner of my mind not ever popping out as useful, so I'm curious about the topic.
Sphinx#2092: You need to ensure your kernel function is defined for all values and is also a valid kernel.
Sphinx#2092: The latter condition is equivalent to it being positive-definite.
mick#2835: I think I need to see it break for not being PD to understand why.
Sphinx#2092: As I said before, your function is not even well-defined.
mick#2835: I use that function I posted earlier with unit length vectors so the divergence never happens in practice
mgostIH#0245: <x, y> can be -1
Sphinx#2092: Notice that if y = -x, and x has norm 1, your kernel will blow up.
mick#2835: I get that all, but I mean what analysis machinery breaks later because of the kernel not being PD
Sphinx#2092: Inner products are positive definite.
Sphinx#2092: since <x,x> = norm(x)^2
mick#2835: I'm not trying to be difficult. I need to know more than "This is a rule, just follow it." or else my brain rejects it.
mick#2835: Telling me over and over, "It must be PD" does nothing
mick#2835: I'm asking why
Sphinx#2092: It's part of the definition of the inner product.
Sphinx#2092: Otherwise, you won't get a norm when you compute the inner product with itself.
mgostIH#0245: @mick Anything building up on those properties often uses them, so I'd say "depends on the algorithm"
Sphinx#2092: So it doesn't correspond to any meaningful geometry in a traditional sense.
mgostIH#0245: You'd get negative distances for example
mgostIH#0245: What would that mean?
mgostIH#0245: Maybe some algorithms don't care because they just want to define "x is near y"
mgostIH#0245: So if you cheat and give them something that doesn't respect positive definiteness it might lead to some weird but interesting behaviour
mgostIH#0245: Some would completely fall because you may be optimising for a wrong (or unreachable) target and explode in magnitude
mick#2835: I guess I should write more carefully... imagine this:
$$K(x,y) = \frac{1}{1.0001 + \langle \frac{x}{\|x\|} \cdot \frac{y}{\|y\|} \rangle}$$
TeXit#0796: **mick** https://cdn.discordapp.com/attachments/729741769738158194/800825025665826836/206886091494653953.png
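A quick empirical test for "could this be a Hilbert-space inner product?" is to check that Gram matrices are positive semi-definite (Mercer's condition). In the sketch below, `micks_kernel` is just a label for the candidate posted above; it fails on an antipodal pair of unit vectors, while the RBF kernel passes here (and is PSD in general by Mercer):

```python
import numpy as np

def micks_kernel(x, y):
    # Candidate from the discussion: 1 / (1.0001 + cos angle(x, y))
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return 1.0 / (1.0001 + c)

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian / RBF kernel, a known-valid (PSD) kernel
    return np.exp(-gamma * np.dot(x - y, x - y))

def gram_min_eig(kernel, points):
    # Smallest eigenvalue of the Gram matrix over the sample points;
    # a negative value proves the function is not a valid kernel.
    G = np.array([[kernel(a, b) for b in points] for a in points])
    return np.linalg.eigvalsh(G).min()

pts = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]), np.array([0.0, 1.0])]
print(gram_min_eig(micks_kernel, pts))  # strongly negative: not a kernel
print(gram_min_eig(rbf_kernel, pts))    # >= 0 (up to float error)
```

The antipodal pair makes the off-diagonal entry 1/0.0001 = 10000 while the diagonal stays near 0.5, so the 2×2 sub-Gram already has a large negative eigenvalue; no choice of phi can produce that as an inner product.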
StellaAthena#3530: @mick The outermost thing in your equation should be $\rangle$ and $\langle$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/800825253189910649/193204646687408129.png
StellaAthena#3530: Those symbols mean “the inner product”
StellaAthena#3530: \langle x, y\rangle$ is the inner product of $x$ and $y$. If you’re trying to build interesting inner products, you should put the functions inside that expression, not outside it
mick#2835: I suspect there is a diverging miscommunication going on here.
TeXit#0796: **Stella Biderman**
Compile Error! Click the :errors: reaction for more information.
(You may edit your message to recompile.) https://cdn.discordapp.com/attachments/729741769738158194/800825561546227752/193204646687408129.png
mgostIH#0245: I think this breaks ever having <x, y> = 0
mgostIH#0245: So there's no orthogonality
Sphinx#2092: It's worse than that. There is no 0 element.
StellaAthena#3530: What does this mean? Inner products can absolutely take on the value of 0
mick#2835: I'm trying to connect it to how kernel methods can help ML because that function works fine in actual ML tasks and induces representations
mgostIH#0245: @StellaAthena I mean that this can't take the value 0
mgostIH#0245: But inner products in general can (and should)
mick#2835: If kernel methods immediately discard large patches of valid solutions it sounds like a dead end rabbit hole to even try to apply to ML
StellaAthena#3530: Ah
mick#2835: Even if it works for a hack or two, if it immediately is ruling out working solutions, why bother?
StellaAthena#3530: They don’t. They greatly *increase* what you can do
mick#2835: Maybe one day I'll understand how. Feel free to throw words my way about it any time.
StellaAthena#3530: Why do you think they decrease the space of solutions?
mick#2835: I'm not saying they decrease the solution space, just that they are searching a subset that looks small
StellaAthena#3530: Have you used linear regressions much before?
mick#2835: Absolutely. Before ML I worked in crypto on the theory side of things.
mick#2835: Stuff like finite fields, learning with errors, multivariate quadratic systems, etc is the point of view I'm coming from
StellaAthena#3530: Oh
StellaAthena#3530: Dope |
StellaAthena#3530: Good to know
mick#2835: I find ML and crypto hilariously "similar looking" at times lol
StellaAthena#3530: Y’know how often times we have data we would like to plot a linear regression for, but the data isn’t linear so we transform it?
StellaAthena#3530: Typically by logs or exponentials?
mick#2835: Yeah I follow
mick#2835: Okay I think I get what you're implying
mick#2835: The benefit is coming from applying geometry tools later?
bmk#1476: Can pls halp explain how homomorphic encryption works
StellaAthena#3530: It doesn’t. Next?
bmk#1476: Wat
mick#2835: Lol. "very slowly and sketchily" is how.
StellaAthena#3530: This is that on steroids. There are a lot of things in ML that require you to take the dot product of two vectors. However, they often don’t actually involve the *actual vectors themselves*. For example, in most optimization problems we have a constraint like $6 = \sum_i \omega_i\alpha_i$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/800827928799477800/193204646687408129.png
StellaAthena#3530: That equation is the same equation as $\omega\cdot\alpha = 6$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/800828120918655046/193204646687408129.png
mgostIH#0245: You can even do stuff like `min ||A * x - b||` to find the solution of a linear system, where `|| . ||` is some induced norm
mgostIH#0245: Idk if there are practical examples of this with kernel methods
mgostIH#0245: But basically minimising stuff can be done in order to find solutions to constraints and whatnot
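[Editor's note: the `min ||A * x - b||` idea above is ordinary least squares; a minimal NumPy sketch with made-up data, showing the orthogonal-projection view mentioned later in the thread:]

```python
import numpy as np

# Overdetermined system A x ~ b: minimise the Euclidean norm ||A x - b||.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# lstsq returns the x minimising ||A x - b||_2
x, *_ = np.linalg.lstsq(A, b, rcond=None)

# The residual A x - b is orthogonal to the column space of A,
# i.e. the solution is an orthogonal projection of b onto range(A).
residual = A @ x - b
print(np.allclose(A.T @ residual, 0.0))  # True
```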
StellaAthena#3530: Dot products are a particular type of inner product
StellaAthena#3530: If your equation doesn’t care about the values of $\omega$ and $\alpha$ and only cares about their dot product, you can choose them to be highly convenient things to suit your needs |
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/800828705046921246/193204646687408129.png
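[Editor's note: a concrete instance of "choose them to be highly convenient things": the polynomial kernel $(x\cdot y + 1)^2$ equals the dot product of explicit quadratic feature maps, without ever materialising them. The feature map below is the standard one for 2-D inputs:]

```python
import numpy as np

def kernel(x, y):
    # Polynomial kernel: computed directly in the input space.
    return (x @ y + 1.0) ** 2

def phi(x):
    # Explicit feature map for 2-D inputs; the kernel trick avoids building this.
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

print(kernel(x, y), phi(x) @ phi(y))  # both equal 4.0
```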
mgostIH#0245: Still hm, the question seemed fair, how do you prove that K(x, y) is a valid kernel?
StellaAthena#3530: This is why it is often described as a trick
mgostIH#0245: Without knowing the phi(x) ideally
bmk#1476: I saw a weird definition on wikipedia
StellaAthena#3530: Mercer’s Theorem
bmk#1476: The one I was complaining about being completely unintuitive
bmk#1476: I assume that's the one
mick#2835: Can I ask for a toy example? I really apologize for having poor communication in these details. I had multiple awful math teachers in a row and still haven't really had a highly available good one, so I've had to learn for myself and it's amazing how ambiguous people can be with math terms and notation while thinking they are being explicit.
mgostIH#0245: Ohhh, so a kernel has to be just a symmetric semi-positive definite function?
mgostIH#0245: @mick Orthogonal projections!
mgostIH#0245: They are used to minimise distances but can only work if you have an inner product
StellaAthena#3530: Let $K$ be a symmetric function from $\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}$. Then there exists some $\phi$, $\mathcal{H}$ such that $K(x, y) = \langle\phi(x),\phi(y)\rangle_\mathcal{H}$ if and only if
$$\int K(x,y) g(x)g(y)dxdy\geq 0\quad\forall g$$
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/800829604650680381/193204646687408129.png
mgostIH#0245: If you use a kernel you would be minimising across other spaces, not just the trivial one
mgostIH#0245: Cool! Didn't know they were **exactly** the same thing
StellaAthena#3530: This integral inequality is something you’ve probably heard of before. It means that $K$ is positive semi-definite
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/800829835311841330/193204646687408129.png |
mick#2835: Just to clarify, is this the "linear kernel" ?
mgostIH#0245: @mick no that's any kernel
StellaAthena#3530: This is a kernel
mgostIH#0245: It's a theorem that tells you what a kernel **is**
StellaAthena#3530: You can take this to be the definition of a kernel, even.
StellaAthena#3530: “A kernel is a symmetric, positive semi-definite function from $\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}$”
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/800830339349610496/193204646687408129.png
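[Editor's note: the positive semi-definite condition above can be sanity-checked numerically — a Gram matrix built from a valid kernel (RBF here, on arbitrary random points) should have no meaningfully negative eigenvalues:]

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))  # 50 arbitrary points in R^3

# RBF kernel K(x, y) = exp(-||x - y||^2), a classic PSD kernel.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists)

eigvals = np.linalg.eigvalsh(K)  # K is symmetric
print(eigvals.min() > -1e-8)     # True: no negative eigenvalues beyond float noise
```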
reconscope#7790: What is that R symbol?
StellaAthena#3530: The real numbers
mgostIH#0245: Wait what's g here
StellaAthena#3530: Any function (maybe any L^2 function?)
StellaAthena#3530: No, any function
StellaAthena#3530: It doesn’t even have to be integrable
mgostIH#0245: https://cdn.discordapp.com/attachments/729741769738158194/800830695740014612/unknown.png
mgostIH#0245: Here it says square-integrable
StellaAthena#3530: Ah
StellaAthena#3530: Yeah that’s what L^2 means
mgostIH#0245: Ye
mick#2835: How do these functions (the ones given as p.d. examples) play into this discussion? https://en.wikipedia.org/wiki/Positive-definite_kernel#Examples_of_p.d._kernels
StellaAthena#3530: Oh right because you’re conjugating |
andyljones#7746: honestly this is a topic better suited to concentrated study rather than casual chat
mgostIH#0245: Or #math :sip:
mick#2835: I need to know *what* to study!
reconscope#7790: @mick same here
andyljones#7746: a linear algebra textbook'd be a good place to start
mgostIH#0245: They are all different examples of kernels that must satisfy that property
StellaAthena#3530: If you really want to understand this stuff, linear algebra, real analysis, and functional analysis
mgostIH#0245: This is really basic functional analysis afaik, I haven't even started it
bmk#1476: It seems like there are multiple "dialects" of linear algebra
mgostIH#0245: Unless you want to prove these statements yourself
andyljones#7746: what'd you mean?
mgostIH#0245: But understanding this stuff is mostly linalg
mick#2835: I've been through LA textbooks up and down and it never did anything like this for me. I'm book retarded I guess.
mick#2835: I only learn by direct supervision and self play lol
bmk#1476: Spoken by ML people, physicists, graphics people, etc
bmk#1476: I mean it like an analogy mostly
andyljones#7746: oh, right, yeah you're right - it's a ridiculously useful tool, and different disciplines focus on slightly different subsets
StellaAthena#3530: I mean, it’s also literally true if you think about QM, “normal” LA, ML, and functional analysis
mgostIH#0245: The fourier transform is the diagonalization of convolution 🧠
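[Editor's note: that line can be checked in a few lines of NumPy — under the DFT, circular convolution becomes pointwise multiplication, i.e. the transform diagonalises the convolution operator:]

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=8)
b = rng.normal(size=8)

# Circular convolution computed directly from the definition...
direct = np.array([sum(a[j] * b[(i - j) % 8] for j in range(8))
                   for i in range(8)])

# ...and via the DFT: pointwise multiplication in frequency space.
via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

print(np.allclose(direct, via_fft))  # True
```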
StellaAthena#3530: All topics in math can be broken into three categories:
1. The study of linear functions
2. Topics that we can reduce to the study of linear functions
3. Topics that we do not yet understand
mgostIH#0245: This gives another perspective to people saying "BuT AI iS JuSt MaTrIx MuLtIpLy"
Sphinx#2092: Functional analysis is probably overkill (unless you really want to prove mercer's theorem) but I think this kind of content is commonplace for any decent intro to ML class.
Sphinx#2092: I believe Andrew Ng covers it in his Stanford class, which might be a useful resource.
Louis#0144: this but category theory
bmk#1476: But does anyone understand category theory
Sahl#0630: yeah category theory is just a morphism in the category of axiom sets
Sahl#0630: don’t @ me
andyljones#7746: offtopic's busy so: here, look at my last three weeks' work
i have been watching that dark purple run as if i were nine years old watching a computer defrag its hard-drive. i am *immensely* satisfied. https://cdn.discordapp.com/attachments/729741769738158194/800842719077072946/hTmPHyfFXCdAAAAABJRU5ErkJggg.png
Igor Krawczuk#1653: What are we seeing?
Igor Krawczuk#1653: I see elo, so I assume some AZ/RL thing self playing?
andyljones#7746: perf of my lil AZ implementation against perfect play on a small Hex board
andyljones#7746: important bit is: up good, down bad. -250 has been my nemesis, and today i have beaten it 🥳
Igor Krawczuk#1653: Nice 👍
Sid#2121: i do love watching lines go up more than other lines
Sid#2121: congrats |
Sid#2121: what made the difference?
andyljones#7746: there's a parameter in AZ that governs how much the search should pay attention to the policy network v. the value network. turns out i had the parameter off by, oh, a hundred-fold.
Sid#2121: lol, nice
andyljones#7746: kinda astonishing it learned at all with the setting it was at
(which is incidentally the setting used in all the literature, probs because they all had very different setups to me)
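[Editor's note: the parameter in question is presumably `c_puct` in AlphaZero's PUCT rule, which scales how much the prior policy steers the tree search relative to observed values. A hypothetical sketch of the per-child score, names illustrative:]

```python
import math

def puct_score(q, prior, child_visits, parent_visits, c_puct=2.0):
    """AlphaZero-style PUCT: value estimate plus a policy-weighted
    exploration bonus. A c_puct off by ~100x badly mis-weights the
    policy network against the value network, as described above."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + exploration
```

With `c_puct` 100x too small the search essentially ignores the policy prior; 100x too large and it ignores the value estimates.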
Sid#2121: so *negative* elo is your net's elo compared to perfect play?
andyljones#7746: y
Sid#2121: is it chess? or something simpler
bmk#1476: Exciting!
andyljones#7746: hex! has all the strategic possibilities of chess or go, but with a far simpler ruleset https://cdn.discordapp.com/attachments/729741769738158194/800844397968883752/hex.mp4
andyljones#7746: goal is to connect your two sides of the board. wonderful thing is there're no ties and no state other than the current board
https://en.wikipedia.org/wiki/Hex_(board_game)
Sid#2121: no idea what's going on here but it looks fun
Sid#2121: pretty sure i would absolutely suck at this game
bmk#1476: Oh, hex is really fun
andyljones#7746: it's probably stockholm syndrome talking, but yeah i am equally bad at go, chess and hex, and i definitely find hex the most compelling
bmk#1476: I'm also equally bad at all those games, what a coincidence
Sid#2121: i only tried go once and got immediately confused and ragequit |
Sid#2121: i'm sticking with my chess, haha
bmk#1476: This is the advantage of sucking at all the games
bmk#1476: I never feel bad about sucking at a game I've never played before because i suck equally much at everything and so i don't feel a home turf advantage anywhere
Louis#0144: .
Sid#2121: .
bmk#1476: .
Louis#0144: .
bmk#1476: .
Sid#2121: did you just delete my post? lmao
bmk#1476: .
reconscope#7790: you cant tie in hex?
3dprint_the_world#6486: yep, representation theory and all that
3dprint_the_world#6486: and I would amend that:
3. Topics we do not yet understand, but once we do, they will be reducible to linear functions.
bmk#1476: Every day, i feel worse about not grokking linear algebra
3dprint_the_world#6486: at least you know you don't grok it
3dprint_the_world#6486: nothing's worse than someone who pretends they know linalg and they don't know what a bilinear map is
AI_WAIFU#2844: I only started to grok it after intro QM
Louis#0144: Has there ever been a good IoT device
Louis#0144: Like ever |
Louis#0144: I can’t think of a single one
Louis#0144: Nothing that has improved my life at all
Louis#0144: Or would improve it
AI_WAIFU#2844: 3D printers. It's good to be able to monitor them remotely. They're fickle mindless children that are exceptionally prone to burn your house down.
Louis#0144: Oh
Louis#0144: That’s a good case
StellaAthena#3530: Roomba
Louis#0144: But that’s like borderline industry
Louis#0144: Don’t agree
bmk#1476: ~~i too enjoy monitoring children remotely~~
Louis#0144: All processing can be done locally on the roomba
Louis#0144: Most don’t use crazy advanced AI stuff
Louis#0144: There’s no reason it would access the cloud
StellaAthena#3530: Mine clearly has a SLAM algorithm built in at least
Louis#0144: But that can be done on the roomba
Louis#0144: They have arm chips
StellaAthena#3530: Yeah
StellaAthena#3530: Like genuine distributed across many small devices stuff?
Louis#0144: Yeah
Sahl#0630: Smart lights are nice |
StellaAthena#3530: Iron Dome is an example but isn’t what you’re looking for
Louis#0144: My smart light wants my location 24/7
Louis#0144: Even when I’m not home
Sahl#0630: Well it’s badly designed
Louis#0144: I threw it out a few minutes ago when I realized this
StellaAthena#3530: @Louis your mom bought you a stupid lamp
Louis#0144: I know...
Louis#0144: Fuckin Atlantic energy
Louis#0144: I looked up reviews of the company and like everyone says they’re a scam
Sahl#0630: Once Project CHIP comes out with the new standard I’ll get smart lights to act as cues to start routines
Sahl#0630: That’ll be very useful
AI_WAIFU#2844: Oh I've got a really good one.
Louis#0144: Even like philips hue though
Louis#0144: Why does it need to be connected to the internet
Louis#0144: It doesn’t need external access
StellaAthena#3530: My smart lights don’t care about my location, but turn on when I open the door, can be dimmed from my phone, and my grow lights are on a solar-synchronized schedule
Sahl#0630: Do plants want lights to be synchronized to the sun
AI_WAIFU#2844: This dystopian creation:
https://www.youtube.com/watch?v=nkcKaNqfykg
Sahl#0630: Or is that for comfort |
Louis#0144: Sure but that can all be done with LAN only right?
StellaAthena#3530: @Louis what can’t be done over LAN
Sahl#0630: Yes but it’s simpler to design for a mothership server
Louis#0144: That’s my issue
Louis#0144: IoT is a code name for lazy design
Sahl#0630: No, it’s just lazy design
Sahl#0630: Not implicit to IoT
Sahl#0630: Just lazy design is common
Sahl#0630: How many internet providers still use IPv4
Louis#0144: I would be totally ok with IoT if I didn’t have to log in and it never accessed anything outside of my local network
Sahl#0630: Well here we are
Sahl#0630: Companies shit
Sahl#0630: But there’s still value
Louis#0144: Here we are, I’m using an ancient TV and an old ass fridge because I refuse to upgrade
Sahl#0630: Fair
Louis#0144: I have a flat screen from before smart TVs
Louis#0144: When it dies I’m going to get a computer monitor
StellaAthena#3530: @Louis the purpose of IoT apps are to collect data to sell
Louis#0144: Yep
StellaAthena#3530: That’s why they *actually* exist |
Sahl#0630: There is DIY IoT that is LAN only
Sahl#0630: But you probably have to fiddle around with it
AI_WAIFU#2844: I'm looking forward to the future where I need to take apart my fridge and swap out the electronics so that it doesn't spy on me.
Sahl#0630: This is why you need policy
Sahl#0630: Otherwise companies go for the easiest solution
Sahl#0630: Or rather the profit maximizing solution
Louis#0144: Yeah tbh I want IoT to straight up be banned
Sahl#0630: I like IoT
Sahl#0630: I want coloured lights
AI_WAIFU#2844: Policy requires public buy in. That's getting harder to do by the day.
zphang#7252: apple stuff is not so bad
Louis#0144: Does Apple have smart lights
StellaAthena#3530: @bmk said policy was a waste of time so *\*shrug\**
Sahl#0630: No but they have a standard
Sahl#0630: HomeKit
AI_WAIFU#2844: Speaking of, does anyone know if there's any way to get a small linux box to accept phone calls and sms?
Sahl#0630: I wonder if Project CHIP will make things more secure
Sahl#0630: Or will lower security
Sahl#0630: overall
Sahl#0630: god I wish phone calls and sms stopped existing and we just used IP |
Louis#0144: Yeah
Louis#0144: +1
Louis#0144: No reason to have phones anymore
Sahl#0630: phones are still shit quality
Sahl#0630: and a phone plan costs so much
Sahl#0630: very little of it being data
AI_WAIFU#2844: I mean everywhere outside of america they did that. Everyone uses whatsapp, but that's not going super well rn.
Louis#0144: Real talk I honestly think touchscreens are literally the worst input method possible and I’m shocked how much they took off
AI_WAIFU#2844: The other problem with IP is that as it's currently implented, not everyone can accept inbound traffic.
Sahl#0630: touchscreens are pretty good
Sahl#0630: tbh
Louis#0144: The zero tactility tho....
Sahl#0630: remappable buttons
AI_WAIFU#2844: NAT fucks with everything.
Sahl#0630: NAT is shit
Sahl#0630: ipv6 good
Sahl#0630: bell dumb
Sahl#0630: amen
StellaAthena#3530: Touch screens are cool
AI_WAIFU#2844: Like the internet would be so much better if we all used IPv6 and individuals could own IP addresses. |
StellaAthena#3530: Legit, touch screens became the future when people started seeing movies like Minority Report and went Woaaaaaaah the future!
AI_WAIFU#2844: Also legally enforced net neutrality.
3dprint_the_world#6486: call me old-skool but I thought the old resistive touchscreens that were actually precise and gave you tactile feedback were great.
3dprint_the_world#6486: then Steve Jobs had to come along and dunk on styli and suddenly everyone wanted to be cool and dunk on styli too
3dprint_the_world#6486: what else do you expect from someone who thought the cure for cancer was to become fruitarian
3dprint_the_world#6486: it was probably the lack of protein talking
AI_WAIFU#2844: Were there ever any scratch resistant resistive touchscreens?
3dprint_the_world#6486: no, but if people actually continued with the tech I think we could have gotten them
AI_WAIFU#2844: I think that's what killed them.
AI_WAIFU#2844: I got my DS all scratched up and now I'm sad.
AI_WAIFU#2844: But my phone has lasted me 7 years.
3dprint_the_world#6486: you could be right.
3dprint_the_world#6486: still though, I don't think it would be that hard to make scratch-resistant ones
3dprint_the_world#6486: I mean, they can make flexible glass displays now
AI_WAIFU#2844: Maybe, but it's hard. IIRC flexible glass needs to not have any scratches whatsoever. So if you scratch it even once it it'll break.
AI_WAIFU#2844: Plus it's quite a bit stiffer.
3dprint_the_world#6486: anyway
3dprint_the_world#6486: I appreciated the precision and tactile feedback
3dprint_the_world#6486: one thing where touchscreens are still badly lacking is touch latency
3dprint_the_world#6486: I think there was a study showing that to actually feel as good as writing on paper, touchscreens need to have < 10 ms response time |
3dprint_the_world#6486: that's time from contact to showing up on screen
3dprint_the_world#6486: I think the best current ones are around 30-40 ms, which is very slow
Louis#0144: Yes!!!
chilli#5665: what? :thonk:
chilli#5665: those felt awful
chilli#5665: garbage precision
chilli#5665: massive delay
chilli#5665: am I thinking of something different
chilli#5665: there's a reason those came with styluses
3dprint_the_world#6486: depends on the device.
3dprint_the_world#6486: some devices had awful ones, others had amazing ones.
3dprint_the_world#6486: huge variation.
3dprint_the_world#6486: I used some of the Palm ones and they were quite good
3dprint_the_world#6486: a few pocket pcs had good ones, but I feel old for even mentioning pocket pcs
3dprint_the_world#6486: I think there were some HP laptops with good ones too.
mick#2835: you should feel old for knowing what a resistive touch screen feels like at all!
3dprint_the_world#6486: ok fine
3dprint_the_world#6486: 👴
bmk#1476: oh god i hated resistance touchscreens
bmk#1476: resistive? |
bmk#1476: idk
bmk#1476: maybe i had cheap ones but the fact that you can *feel* the give from the outer layer is annoying as hell
bmk#1476: the tactile feedback is annoying as hell imo
bmk#1476: which is ironic considering how much i like mechanical keyboards
bmk#1476: i was blown away when i first tried an iphone
bmk#1476: im still convinced that capacitive touchscreens are magic
StellaAthena#3530: How long is a reasonable amount of time to wait while your work computer tries to update pandas before you smash it in frustration
3dprint_the_world#6486: are we talking minutes or hours
mick#2835: pff those are simple! what's magic are the laser total internal reflection touch systems that just use one sensor lol.
bmk#1476: do i look like i know anything about electricity
bmk#1476: the sum total of my knowledge about capacitors is those videos on youtube where they charge em up and zap things
3dprint_the_world#6486: I mean, your avatar is literally an electrical grid
bmk#1476: electric grids are good for zapping things
bmk#1476: also look closer, it's a train station
StellaAthena#3530: five minutes so far
3dprint_the_world#6486: yeah smash it then
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/800942522930167858/3uI8vU62_400x400.png
mick#2835: implying it's reasonable to wait rather than smash first
bmk#1476: r/nocontext https://cdn.discordapp.com/attachments/729741769738158194/800942701037355028/unknown.png
3dprint_the_world#6486: you probably won't believe this but I actually built one of those |
StellaAthena#3530: r/nocontextneeded
StellaAthena#3530: I’ve done this
3dprint_the_world#6486: at a previous job I had access to an amazing huge lab with industrial manufacturing machines
3dprint_the_world#6486: everything from CNC machines to solder reflow ovens
bmk#1476: ~~at a previous job i had access to a single crappy laptop~~
3dprint_the_world#6486: at one workbench we had an oscilloscope worth $60k
bmk#1476: i dont even like laptops
mick#2835: I love this kind of hardware and hate having to use it 🤣
3dprint_the_world#6486: lol why
3dprint_the_world#6486: it was kind of mind-blowing to be able to probe things happening in under a nanosecond
mick#2835: because it means I had to pull my head out of the clouds and deal with the realities of the MOSFETs not being exactly identical to the models I use in the simulations lol
3dprint_the_world#6486: oh yeah
3dprint_the_world#6486: at sufficiently fast speeds/frequencies no simulation even remotely works
3dprint_the_world#6486: I don't think people truly appreciate all the magic that has to happen for a USB 3 cable to transfer data
mick#2835: lol. microwaves. microwaves everywhere.
3dprint_the_world#6486: I remember a talk by a USB 3 engineer who said "yeah initially we thought there's no way the laws of physics would allow 5 gbps over copper, but then we tried it and we actually did get some signal passing through, lol"
bmk#1476: lol imagine thinking about any level of abstraction lower than tensors in pytorch
3dprint_the_world#6486: "so then we just had to design this amazingly complicated signal recovery chip to take all the mashed up noise and reconstruct a binary signal out of it"
chilli#5665: lol imagine thinking about any level of abstraction lower than keras :berk:
mick#2835: keras gang |
StellaAthena#3530: Lol imagine thinking about any level of abstraction lower than functional analysis
mick#2835: wait did we just ascend or descend?
3dprint_the_world#6486: if you ascend far enough eventually you descend
3dprint_the_world#6486: you wrap around
mick#2835: I thought that was only on a riemann sphere 🤣
StellaAthena#3530: No, any space of positive curvature
StellaAthena#3530: Also finite fields
StellaAthena#3530: Okay but unironically my inability to fully formulate CNNs as operating on Hilbert spaces makes me feel deeply uncomfortable and hesitant to say I understand them
3dprint_the_world#6486: Anyway I got so hooked on that stuff that I set up my own little lab at home; I even have a small CNC machine
3dprint_the_world#6486: if any of you guys want to make physical robots let me know
StellaAthena#3530: @3dprint_the_world Definitely
StellaAthena#3530: I love building shit
AI_WAIFU#2844: I'll just order from china for 10$ like a normal person. Then I'll also get free soldering.
chilli#5665: imagining not building your own CNC machine :berk:
AI_WAIFU#2844: We had a PCB machine at an old lab I was at.
3dprint_the_world#6486: yeah but can you get custom aluminium parts for $10?
StellaAthena#3530: @3dprint_the_world Do you have the equipment I need to build a full-body X Ray machine
mick#2835: https://sweetiebot.net/
mick#2835: X-rays are mega easy to generate
StellaAthena#3530: IK |
AI_WAIFU#2844: Most painful thing I've ever had to work with.
3dprint_the_world#6486: what kind of x ray machine are we talking
3dprint_the_world#6486: are we talking chest x-ray, or CAT scanner
StellaAthena#3530: Chest x-ray
3dprint_the_world#6486: because one is almost trivial and the other is very very hard
3dprint_the_world#6486: oh yeah, easy then
bmk#1476: i want to get into cool physical stuff someday but unfortunately i dont own any cool-things-hardware
StellaAthena#3530: Yeah I just don’t have the material access anymore
bmk#1476: and also i dont know anything about physical things
StellaAthena#3530: I’ve actually built one of these before, in college
bmk#1476: i couldnt assemble a lego set
3dprint_the_world#6486: all you need for chest x-ray is an x-ray tube, which they have on e-bay for cheap, the rest of the control electronics and so on is trivial
bmk#1476: well, i guess computer hardware kinda counts
bmk#1476: but also not really
mick#2835: You can literally just wrap a guitar tube in foil and drop a few kV across it and get xrays lol
AI_WAIFU#2844: Do they still use x-ray transformers, or is it all solid state now?
3dprint_the_world#6486: I've actually tried this and couldn't get it to work
StellaAthena#3530: Oh huh
mick#2835: Upgrade to a radio tube then lol
3dprint_the_world#6486: I just think the beam is too diffuse in a guitar tube |
StellaAthena#3530: I didn’t know they were like $30
3dprint_the_world#6486: a higher power radio transmitter tube might work
AI_WAIFU#2844: Yeah, then you just need the x-ray sensitive film.
mick#2835: Oh don't expect to get anything very good. More like an entertaining radiation shotgun :P
Sahl#0630: Is that very safe
Sahl#0630: hmmmm
mick#2835: Not at all!
3dprint_the_world#6486: no it's very unsafe
mick#2835: ultra terrible idea for sure!
StellaAthena#3530: What I built in college was a booth where you walked inside and someone pressed a button and then you waited very still and eventually it produced an x-ray
Sahl#0630: Hello and welcome to cancer booth
3dprint_the_world#6486: it's actually amazing how easy it is to build something that could easily kill you or give you cancer
bmk#1476: how does one get into physical stuff under the constraint of not very much space
Sahl#0630: We are a new startup
AI_WAIFU#2844: Learn2Solder
3dprint_the_world#6486: my initial 'lab' was literally in a 2 m^2 closet in my apartment, lol
bmk#1476: and also no skills
3dprint_the_world#6486: don't need much space, depending on what you want to do
Sahl#0630: Cancer booth hopes to reduce population growth rate while also generating millions of x rays for machine learning projects!
mick#2835: Get an arduino and a breadboard. |
AI_WAIFU#2844: Just watch youtube videos
StellaAthena#3530: The most foolproof way to make something that’ll kill you is probably thermite?
AI_WAIFU#2844: I mean a knife and some determination works pretty well.
mick#2835: Pff I can hack together a nitrogen mask out of a ziploc bag and I bet it's a nicer death too
3dprint_the_world#6486: I would say probably a cascade voltage multiplier
bmk#1476: oh man ive broken so many that im afraid to ever touch one again
StellaAthena#3530: Sure, but I’m thinking more like a fun at home project that your kid could kill themselves with
Sahl#0630: The most foolproof way to kill someone is to create unaligned AI
Sahl#0630: This also works for future people
AI_WAIFU#2844: No because there's a chance you'll fail to die and now you have s-risk
StellaAthena#3530: (Assuming you leave magnesium lying around)
Sahl#0630: s risk and paradise risks are both unlikely
Sahl#0630: you’d probably get ambivalent AGI
AI_WAIFU#2844: Taking apart a power supply with unusually large caps.
3dprint_the_world#6486: re youtube videos: most channels on 'making things' are actually quite bad, they're more focused on entertainment and clicks than actually teaching useful skills and tips.
3dprint_the_world#6486: Applied Science is a really good channel
3dprint_the_world#6486: also Marco Reps
bmk#1476: ~~this is the worst beginner crafts channel ever, it's like the `rm -rf /` of crafts~~
mick#2835: You can get arduino chips for like $2 if you buy just the bare chip, don't be afraid to burn a few. Just, make sure to figure out *why* you burned it each time and you'll save a lot of money in the long run :P
3dprint_the_world#6486: Applied Science is by Ben Krasnow who's some kind of lead engineer or something at Google Life Sciences |
3dprint_the_world#6486: (or is it Alphabet Life Sciences. Anyhoo)
StellaAthena#3530: Relatedly, I knew someone who borrowed a car battery off a parked car in the street and then insisted on returning it
3dprint_the_world#6486: Marco Reps is a German dude. Need I say more.
3dprint_the_world#6486: come on, you all know you want to watch a youtube channel on engineering from a German dude
StellaAthena#3530: Who was the guy who did the glitter bomb for Amazon package thieves
AI_WAIFU#2844: Mark Rober?
StellaAthena#3530: Yeah
bmk#1476: ~~nominative determinism~~
3dprint_the_world#6486: Mark Rober
3dprint_the_world#6486: yeah he's great
StellaAthena#3530: His stuff is cool
AI_WAIFU#2844: kabbalah moment
3dprint_the_world#6486: WAIT
3dprint_the_world#6486: I just remembered the best one: Dan Gelbart
3dprint_the_world#6486: an Israeli engineer living in Canada
AI_WAIFU#2844: I also like Stuff Made Here
Sahl#0630: I watched night hawk in light some time ago
Sahl#0630: He seems good
3dprint_the_world#6486: sadly he hasn't made any videos in years, but my God, his videos are the best compressed into-the-vein hit of knowledge on how to make things
mick#2835: photonicinduction |
mick#2835: js
bmk#1476: i used to watch this many many years ago
3dprint_the_world#6486: 😂 😂
bmk#1476: this is what i was referring to when i said the whole capacitor thing
3dprint_the_world#6486: UNTIL IT POPS
mick#2835: it's always fun watching a true psychopath at work
AI_WAIFU#2844: Anyone else watch Nile Red/Blue?
3dprint_the_world#6486: but yeah, watch Dan Gelbart
chilli#5665: i agree
Louis#0144: Wait are we talking about bombs
3dprint_the_world#6486: I'm still unpacking Dan Gelbart's stuff after many years
Louis#0144: Wtf
bmk#1476: we should have a youtube watch party at some point
AI_WAIFU#2844: Yesnt
mick#2835: "youtube engineers" so yes
3dprint_the_world#6486: like every time I watch his videos I notice some new trick I never appreciated before
Louis#0144: Ah yes Eleuther discord the perfect place to plan domestic terrorism
Louis#0144: ;p
3dprint_the_world#6486: https://www.youtube.com/watch?v=xMP_AfiNlX4
bmk#1476: im never going to find the motivation to watch youtube videos about not-ML if theres no social pressure to do it |
chilli#5665: I think stuff made here's videos are entertaining
Louis#0144: I refuse to watch ML YouTube
Louis#0144: 1) it’s all incredibly low quality
chilli#5665: I don't really watch ML youtube
Louis#0144: 2) why do I want to work after hours....
bmk#1476: i mean stuff like the ben mann talk
chilli#5665: like, I sometimes watch videos
bmk#1476: and other bits of info into OA or whatever
Louis#0144: 2 minute papers is beyond trash
chilli#5665: I believe this entire discord is predicated on the idea of "working after hours"
chilli#5665: lol
Louis#0144: Like his presentations are consistently horrendous
Louis#0144: Very poorly written
Louis#0144: Very poorly researched
AI_WAIFU#2844: also there's Sam Zeloof, who basically has a DIY silicon chip fab.
3dprint_the_world#6486: yep
Sahl#0630: aw I like 2 minute papers
mick#2835: > 2 minute papers
Sahl#0630: I used to watch him all the time
AI_WAIFU#2844: https://www.youtube.com/watch?v=TrmqZ0hgAXk |
bmk#1476: ok
bmk#1476: so
Louis#0144: He doesn’t even fucking read the papers!!!!
Louis#0144: LMAO
bmk#1476: @Louis we get the point
Louis#0144: Ok
Sahl#0630: Yeah but then I read them after!
bmk#1476: i already clarified what i meant
Sahl#0630: He shows cool papers
bmk#1476: you dont need to go all ham on 2mp
3dprint_the_world#6486: yes, and his "what a time to be alive!" schtick isn't cute or funny
bmk#1476: pls
bmk#1476: let's make a playlist of videos worth watching
bmk#1476: and then let's have a yt watch party sometime
Louis#0144: Yeet
mick#2835: I think his channel comes across better if you think of him as a "science enthusiast" rather than a "paper reviewer" 🤣
Sahl#0630: and train models on it
chilli#5665: hmmm
AI_WAIFU#2844: Hey guys what do you think about that super popular ML guy, Siraj Raval?
bmk#1476: the JanNet people have already beat you to it |
chilli#5665: https://www.youtube.com/watch?v=ErfnhcEV1O8
chilli#5665: this is a good video
3dprint_the_world#6486: I only watched one of his videos and he seemed extremely intolerable to me
bmk#1476: love his "quantum logic doors" paper
chilli#5665: the videos this guy has made that aren't about tensorflow are good: https://www.youtube.com/c/AurelienGeron/videos
chilli#5665: lol
StellaAthena#3530: https://m.youtube.com/watch/Mh5LY4Mz15o
AI_WAIFU#2844: I was being sarcastic, he's been exposed as a complete fraud. But that was obvious from inspection
chilli#5665: lol I thought we were talking about ML videos
bmk#1476: clarification:
videos about, like, building stuff
StellaAthena#3530: Learning to build stuff from videos is the wrong approach to take
chilli#5665: https://www.youtube.com/watch?v=B3CsOx5U9Gs
Sahl#0630: robert miles would definitely be on the list multiple times
StellaAthena#3530: If you want to *see* cool stuff that’s different
bmk#1476: i mean idk someone recommended watching videos about building stuff a few pages up the log
3dprint_the_world#6486: https://www.youtube.com/watch?v=GuCdsyCWmt8
3dprint_the_world#6486: https://www.youtube.com/watch?v=7n1r5XfVkyk
zphang#7252: raise patreon funds to pay bill wurtz to do an eleuther promo video |
3dprint_the_world#6486: "keep your dick in a vice"
AI_WAIFU#2844: Ah yes, the French Canadian speaking what's left of the Chinook Jargon.
3dprint_the_world#6486: btw I don't make PCBs myself. That's dumb.
3dprint_the_world#6486: I mean, some people do and it gives them joy and I don't want to ruin anyone's fun.
3dprint_the_world#6486: But when you can get $5 pcbs from china there's no point
3dprint_the_world#6486: with double layers and solder masks and silkscreens, no less
mick#2835: ~~*grumble grumble* In my day boards only had 2 layers and when you routed a signal from point A to point B you expected the signal to be carried from point A to point B!~~ jk im not *that* old yet.
3dprint_the_world#6486: I use my CNC machine for other things
bmk#1476: How hard is it to design pcbs for things?
bmk#1476: It's always been a dream to get a pcb printed and assembled
3dprint_the_world#6486: not that hard actually. PCB design is remarkably accessible.
bmk#1476: But i don't know anything about pcbs
3dprint_the_world#6486: of course it depends on requirements.
AI_WAIFU#2844: Depends on how complicated your thing is
bmk#1476: I'm thinking like gluing together a cheap soc and some dram
AI_WAIFU#2844: But simple stuff can be done by complete noobs in 45mins with the right tutorial.
3dprint_the_world#6486: basic low-frequency stuff with 1 or 2 layers (e.g. audio or simple 8-bit computers)? easy.
AI_WAIFU#2844: And the right tools
3dprint_the_world#6486: high frequency RF/microwave stuff or modern logic boards (like graphics cards)? 10+ years of experience minimum.
mick#2835: Yeah under 1MHz you basically just play connect the dots and you're done lol |
3dprint_the_world#6486: 😀
mick#2835: Above that. Black magic.
3dprint_the_world#6486: ^
AI_WAIFU#2844: ^
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/800953039581085717/images_11.jpeg
3dprint_the_world#6486: lol, BGA
3dprint_the_world#6486: yep, that's leaning towards the black magic side
chilli#5665: what's a pcb
mick#2835: lol no. microwaves come out of the pins and they will fly right off the board and you'll need expensive gear to see why
chilli#5665: part connector board?
mick#2835: like the signals will not give a shit about your wire
mick#2835: buy one of those Raspberry Pi boards or something like that
AI_WAIFU#2844: Well into the black magic territory.
3dprint_the_world#6486: I'm still convinced pci-e traces on motherboards are some kind of elusive dark magic
bmk#1476: I've broken more rpis than I'm willing to admit
AI_WAIFU#2844: Printed Circuit Board
3dprint_the_world#6486: I still for the life of me can't figure out why they put meanders in such seemingly physics-defying places
3dprint_the_world#6486: *and it works*
AI_WAIFU#2844: I'm convinced my entire computer is magic.
bmk#1476: Ok so anything with a complicated cpu is off the table, got it |
bmk#1476: Er, hm
AI_WAIFU#2844: Not complicated, *fast*
bmk#1476: What *would* be a thing worth doing that wouldn't be literally impossible to do
AI_WAIFU#2844: slow 8051's are fine.
3dprint_the_world#6486: microcontroller stuff should be doable. like basic robotics.
bmk#1476: Is it common to print custom boards for microcontrollers?
3dprint_the_world#6486: yes
AI_WAIFU#2844: yup
3dprint_the_world#6486: I made a custom control board for my CNC
bmk#1476: I thought people always just used whatever those things are called where they preprint a board for you
bmk#1476: With the connectors
mick#2835: Even like a 20MHz MCU is not that hard to get working on a custom PCB
3dprint_the_world#6486: turned out both cheaper and more suited to my needs than if I bought an off-the-shelf one
AI_WAIFU#2844: China will take your design, solder the parts on, and ship it across the world for like 10$
mick#2835: emphasis on "take your design" 🤣
bmk#1476: What are those called where they basically just expose all the pins and you can just plug shit in
AI_WAIFU#2844: Breadboards
bmk#1476: The hardware equivalent of an API
3dprint_the_world#6486: @bmk lots of chips will actually come with reference PCB designs in the datasheet which you can basically just copy and modify for your needs
3dprint_the_world#6486: I've even designed 'difficult' microwave boards, by just working off the ref design |
bmk#1476: Huh
3dprint_the_world#6486: whereas designing them from scratch would have been... hard
AI_WAIFU#2844: Unless it's a specialty embedded chip, in that case getting schematics or any kind of documentation is damn near impossible without buckets of money.
AI_WAIFU#2844: So just stick to the common stuff
3dprint_the_world#6486: yes, common stuff all the way.
3dprint_the_world#6486: btw this is what all those chinese guys do too.
bmk#1476: unfortunately, years of only thinking about pc hardware has wired me to think that the only hardware that exists is pc hardware
3dprint_the_world#6486: they just mostly copy off the reference designs; no one is sitting there actually putting thought into it for something that's going to sell on ebay for $2/piece
bmk#1476: and after my dreams of sticking socs on custom boards were shattered, idk what id even need lol
AI_WAIFU#2844: It's actually crazy how obscure the lower levels of our tech stacks are.
3dprint_the_world#6486: @bmk some projects I've done: custom distortion pedals, a magnetic stirrer without any moving parts, a CNC controller
andyljones#7746: fwiw, there are very few one-off home projects where it makes sense to print a custom pcb rather than use a raspberry pi and the appropriate addons
bmk#1476: custom distortion what?
mick#2835: Distortion pedal is a great one
3dprint_the_world#6486: and also a few custom power supplies
andyljones#7746: computation is *cheap*
3dprint_the_world#6486: totally 100% disagree
bmk#1476: my dreams have been shattered
mick#2835: If you're a musician then definitely make some custom effect pedals.
bmk#1476: honestly i have a few rpis and i cant even think of a use for them |
andyljones#7746: overengineering nerds itt
3dprint_the_world#6486: like I literally have >20 projects I've done where building a custom pcb was far and away the best option
3dprint_the_world#6486: even taking into account cost
AI_WAIFU#2844: Like just remember, NVIDIA doesn't make their own chips. They just make the designs. All of our technological civilization is enabled by wizards in Taiwan.
3dprint_the_world#6486: this is true
bmk#1476: ~~so what's this "music" thing~~
3dprint_the_world#6486: it's scary how our entire civilization rests on a small number of factories in Taiwan
mick#2835: Raspberry Pi is a great choice if you need heavy compute but a lot of my around the house projects are actually analog power electronics
mick#2835: The only time I actually used a Pi in a home project is my media center TV lol
AI_WAIFU#2844: Doubly so when you consider the geopolitical position of Taiwan.
3dprint_the_world#6486: raspberry pis suck
3dprint_the_world#6486: I have one and haven't used it in years
bmk#1476: i have several just gathering dust
3dprint_the_world#6486: yeah same
bmk#1476: if anyone can think of a good use for them that would be nice
3dprint_the_world#6486: "will it blend?"
mick#2835: If you have a cable box then replace that with one.
bmk#1476: like, a tv thing?
bmk#1476: does anyone even *watch* tv anymore?
mick#2835: Yes, turns any cheap ass TV into one you can SSH into lol |
bmk#1476: (is it a bad thing that im struggling to think of things that i can do in the meatspace that do not involve my daily computer workflow?)
mick#2835: Okay maybe we're going about this wrong.
mick#2835: How about things that *don't* involve the computer. Lets eliminate those!
bmk#1476: maybe we should go back to the tensorflow mines
bmk#1476: wha-
mick#2835: These aren't mutually exclusive.
mick#2835: I mean, Roko's Basilisk considered, we should be working harder!
bmk#1476: i havent done anything productive in the past 30 seconds, i am therefore entering panic mode
3dprint_the_world#6486: @bmk stand up. Take a couple of steps back. Turn around.
There's actually a giant real-time hyperreal physical simulation around you.
3dprint_the_world#6486: it's quite cool
bmk#1476: >.> most of these objects arent even useful for writing code
3dprint_the_world#6486: yeah true
mick#2835: Okay so, TF is easy, it's Kubernetes that's fucking me up. I basically ragequit for a while earlier because I need to update a ton of packages and I just brought another physical box in instead with a fresh linux install.
bmk#1476: yeah ive spent too many hours trying to get kubernetes to work in the past days
3dprint_the_world#6486: oh is this for eegi?
bmk#1476: nah, for neox
bmk#1476: deepspeed
Gabriel#0454: Does anyone know why tensorflow is using tensorflow_estimator, which seems to have been last updated about 2 years ago?
PhoebusG#1798: Just discovered this project, and wow I love it... and the name, being Greek, freedoooom! I'm a ML n00b that's been trying to understand the stuff on the side for a few years. I've gotten the most use out of spaCy due to its simplicity and use cases so far.
3dprint_the_world#6486: welcome. spacy's cool, we use it at work.
3dprint_the_world#6486: it's not an ML library itself though
PhoebusG#1798: Out of curiosity, what's a clear definition of an ML library, maybe a list somewhere of those? Just for my info/ ML education, thanks!
PhoebusG#1798: I also tried using Spark's NLU but I need to setup a VM/env just to use it, requires older Python versions etc.
PhoebusG#1798: So, I didn't bother so far, too busy to cater to that for just a test.
Big Fat Duck#0266: pytorch or tensorflow
Big Fat Duck#0266: deep learning support
PhoebusG#1798: Cool, TY.
PhoebusG#1798: I haven't used either by themselves yet, I have a more applied track, learn by doing approach. And most of the time, I can't dedicate the time for starting from scratch. Also, most programs I've seen setting those up look basically like a configuration script to instantiate a model, how long to run, number of layers etc etc. I'm still watching from a theoretical distance - for now.
PhoebusG#1798: Unless I get a practical idea of, cases where making/adapting your own model makes sense.
K_NOV#7126: hello
triggerhappygandi#0001: Здравствуйте ("Hello")
Louis#0144: Wtf since when do ducks speak Russian
kappapeachie#2764: ello
Louis#0144: Hi
triggerhappygandi#0001: В России-матушке утки говорят по-русски ("In Mother Russia, ducks speak Russian")
triggerhappygandi#0001: Bless google translate
voxs#0001: lmao i love when colab gets pissed at me for using too much gpu
voxs#0001: so i switch to my alt account and colab cant do shit
triggerhappygandi#0001: :3berk: |
jbustter#5167: hi, ive been looking at graph neural nets, and i kind of wonder, can these types of graphs "create" new nodes and connections?
jbustter#5167: all the descriptions of the process seem to involve already existing nodes
guac#4716: @jbustter yeah check any generative graph net. e.g. https://arxiv.org/pdf/1803.03324.pdf
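The generative graph nets guac links to grow a graph one node at a time: at each step the model decides whether to add a node, then which existing nodes to connect it to. A minimal sketch of that sequential loop in plain Python, with the paper's learned decision modules replaced by coin flips (all probabilities here are placeholder values, not from the paper):

```python
import random

def generate_graph(max_nodes, add_node_prob=0.8, edge_prob=0.5, seed=0):
    """Toy sequential graph generation: repeatedly decide whether to
    add a node, then decide which existing nodes it connects to.
    In the real model both decisions come from learned networks;
    here they are random draws for illustration."""
    rng = random.Random(seed)
    edges = set()
    n = 1  # start with a single node; node ids are 0..n-1
    while n < max_nodes and rng.random() < add_node_prob:
        new = n  # id of the freshly created node
        n += 1
        for existing in range(new):
            if rng.random() < edge_prob:
                edges.add((existing, new))
    return n, sorted(edges)

n, edges = generate_graph(8)
print(n, edges)
```

So yes — the graph structure itself is an output of the model, not a fixed input, which is the distinction jbustter was asking about.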
haru#1367: lol
Kyler#9100: grrrrrr
chirp#4545: so i took another look at my notes from Sam Altman’s SSC thing
chirp#4545: for some reason i had the impression that he wasn’t a big believer in scale
chirp#4545: but now i don’t actually think that’s true
chirp#4545: he was a bit skeptical of pure-compute scaling, but i think he expressed confidence in scaling in general (scaling with “resources”)
chirp#4545: he even said that betting on scale is the most important thing he’s learned over his whole career
triggerhappygandi#0001: Of course he says that now
Imperishable_NEET#1969: Been reading about the extraordinary life story of Jim Simons, probably the closest real-life analogue to the guy in *Limitless* systemically beating the stock market in the face of the Efficient-Markets Hypothesis.
Imperishable_NEET#1969: Not through magic pills, but by a lifetime of mathematics studies and hiring a team of the leading mathematicians and computer scientists.
Imperishable_NEET#1969: Algorithms might rule big finance now beyond the hopes of any one person to beat it, but perhaps an AGI could beat the markets yet.
Daj#7482: Seems pretty clearly true to me
Daj#7482: Stock Market is basically a zero sum game, the difficulty is set by your most sophisticated oponent (+ noise)
Imperishable_NEET#1969: I always thought the guy in *Limitless* was actually channeling God or some kind of alien ASI through his brain, it's the only explanation. https://www.youtube.com/watch?v=ZppdNcMuRFU
StellaAthena#3530: This is only true if your baseline is “beating the market” not “net increasing wealth.” The total value of the stock market grows faster than the total number of people in the world (let alone the total number of investors)
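Stella's point is a compounding one: even modest growth-rate gaps dominate over decades. A toy illustration with assumed round-number rates (~7%/yr real market growth vs ~1%/yr population growth — both hypothetical inputs, not figures from this chat):

```python
# Compound a market-value index and a population index over 50 years
# to show per-capita market value still grows substantially.
market, people = 1.0, 1.0
for year in range(50):
    market *= 1.07  # assumed real market growth rate
    people *= 1.01  # assumed population growth rate
print(market / people)  # per-capita market value vs year 0
```

The ratio ends up well over 10x, which is why "net increasing wealth" is a much lower bar than "beating the market."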
Daj#7482: Yes that was an unstated assumption
triggerhappygandi#0001: Warren Buffet too |
triggerhappygandi#0001: In near term probably
triggerhappygandi#0001: But long term game isn't zero sum
triggerhappygandi#0001: If I have amzn stock since 2010 then I'm getting richer off of them working hard, rather than someone else losing an equivalent amount
Daj#7482: As Stella pointed out, I was implicitly talking about "beating the market", not just making money
StellaAthena#3530: I think it’s also worth stating that this is the wrong goal to optimize for in general.
Daj#7482: Yes it's the easier one to model
triggerhappygandi#0001: isn't "making money" easier to model?
triggerhappygandi#0001: Just get my $$ number to ascend
Daj#7482: Yea, but making a model of whether you're winning or not feels harder
Daj#7482: Eh maybe I have weird intuitions here
Daj#7482: Ignore me
triggerhappygandi#0001: Ok. Ignored, muted, blocked, unfriended and ghosted
triggerhappygandi#0001: :bigzucc:
Daj#7482: nice
triggerhappygandi#0001: Lol
triggerhappygandi#0001: In any case money won't matter when we kill hollywood with video-dalle
triggerhappygandi#0001: A Tarantino on every block
triggerhappygandi#0001: I watched Eliezer's intro video on alignment. It was a total philosophy class where you just think "what can be the worst possible scenario here?"
Imperishable_NEET#1969: Worst possible scenario is *I Have No Mouth, And I Must Scream*
Imperishable_NEET#1969: Or Roko's Basilisk |
triggerhappygandi#0001: Nice username
Daj#7482: I'd consider this top 50% best scenarios
Daj#7482: Hell is deep and endless
triggerhappygandi#0001: It is
triggerhappygandi#0001: And so is our creativity in imagining it
Daj#7482: AGI maximizing suffering is by definition the worst situation possible
Daj#7482: (assuming you accept some kind of non dual nature of consciousness and suffering)
triggerhappygandi#0001: What do these words mean, Kowalski
niplav#6179: I agree, but this is counterintuitive. Hedonium shockwave is not the best scenario, after all
triggerhappygandi#0001: Damn. Who would do this actively
Daj#7482: I consider whether this is true or not to be one of the most important open questions in ethics
Daj#7482: An aligned AGI that has an accidential negative sign in front of its reward function
triggerhappygandi#0001: aaaaaaa
Imperishable_NEET#1969: I guess you can go worse than that if some form of FTL travel, free energy, or creation of/travel to other universes turns out to be possible. Then a Hell singleton ASI could maximize dolorium infinitely.
andyljones#7746: if it's any help at all, the universe is very big and there *is* another light-cone out there with a civilization bent on self-propagation rather than suffering.
andyljones#7746: might take a billion years or so to show up here, but 🤷
Daj#7482: There's a good probability there is no other intelligence in our lightcone.
Daj#7482: I really hope we never find aliens
Daj#7482: Too high variance
triggerhappygandi#0001: We should |
andyljones#7746: in our *present* light cone, sure, entirely plausible. for that to be true in all our future lightcones, you have to make some arguments based on the expansion of space and you come out with some really small probabilities
triggerhappygandi#0001: Irl 40k
andyljones#7746: two ticks, sanderberg (ofc) wrote about this somewhere
Daj#7482: I guess I try not to think too much about things bordering on Infinite Ethics
Daj#7482: Just errors out the brain
Imperishable_NEET#1969: Jury's still out on the true nature of dark energy
triggerhappygandi#0001: It is
Daj#7482: Jury's out on _everything_
triggerhappygandi#0001: Idk how the scientists of yore managed to live with _not_ knowing everything. I feel sad thinking that I might die before dark energy/dark matter can be explained to a layman completely
triggerhappygandi#0001: What I do hope, is that it calms down someday and gravity comes out on top again
Imperishable_NEET#1969: Then big crunch?
triggerhappygandi#0001: Better than heat death imo
Daj#7482: something something modal fucking realism
triggerhappygandi#0001: Atleast we will all be together in the end
andyljones#7746: https://www.fhi.ox.ac.uk/wp-content/uploads/space-races-settling.pdf
(dh is 15bn lightyears)
(so to be left alone, you need less than one expansionist civ every 1e30 cubic lightyears)
|
(back of the envelope, should be about 1e20 stars in that volume) https://cdn.discordapp.com/attachments/729741769738158194/801466932615577600/unknown.png
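The numbers above can be sanity-checked in a couple of lines. Taking the quoted horizon distance dh ≈ 15 bn light-years as the radius, the sphere volume comes out around 1e31 cubic light-years, order-of-magnitude consistent with the "one expansionist civ every 1e30 cubic lightyears" figure:

```python
import math

d_h = 15e9  # horizon distance in light-years, as quoted above
volume = (4 / 3) * math.pi * d_h ** 3  # cubic light-years
print(f"{volume:.1e}")
```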
fristiloverke#4159: we're already way past the point of physics being able to be explained to the layman
triggerhappygandi#0001: I know something something warping spacetime
triggerhappygandi#0001: Thats a broadly simplified explanation of general relativity
fristiloverke#4159: there's a theory that you can slow down in the case of heat death in such a way that youll live asymptotically forever
fristiloverke#4159: i forgot the details
andyljones#7746: aestivation, that's also a sandberg one
triggerhappygandi#0001: Well if heat death occurs we will all be inside one giant black hole
CRG#8707: I think this requires a non-exponentially expanding universe
triggerhappygandi#0001: And to any outsider it takes you a long time to fall in
fristiloverke#4159: but if spacetime is an elestic sheet and the earth is a ball then why doesnt the earth roll towards the sun?
triggerhappygandi#0001: Well, technically it does
fristiloverke#4159: there was a guy who tried to disprove GR that way
triggerhappygandi#0001: Its not exactly like a waterbed
triggerhappygandi#0001: Centrifugal force is a thing too (it is fake yeah but its effect counters gravity)
CRG#8707: Did anyone say... scaling? <https://iopscience.iop.org/article/10.1086/308434/fulltext/40116.text.html> https://cdn.discordapp.com/attachments/729741769738158194/801468784786407464/e25dfcb3ed8ee39a0e104bbd07af0684.png
Imperishable_NEET#1969: Worrying about heat death is silly when we haven't even conquered death first.
Noori#4805: ^
triggerhappygandi#0001: We are inching closer though
bmk#1476: Heat death is the final boss |
bmk#1476: Death is the current bossfight
triggerhappygandi#0001: I want big crunch to be the final boss
bmk#1476: Let's win this one first
Daj#7482: Death is a sidequest
Daj#7482: Alignment is the only bossfight
Daj#7482: Once you get the epic weapon from the AGI boss you can one-shot everything else
Daj#7482: speedrun strats
triggerhappygandi#0001: _how_
triggerhappygandi#0001: It doesnt give you FTL
Daj#7482: If AGI doesn't give you FTL then nothing will
triggerhappygandi#0001: Depressed
triggerhappygandi#0001: :zucc:
triggerhappygandi#0001: Lets hope it does then
Daj#7482: The universe is overrated anyways
Daj#7482: What is "things"?
bmk#1476: What about the catgirl sidequest
Imperishable_NEET#1969: Probably the rest of the Stelliferous Era, at least
CRG#8707: Depends on proton decay.
Daj#7482: Newbie trap, easy way to Game Over early on
triggerhappygandi#0001: Take that back |
triggerhappygandi#0001: :angrysutton:
Daj#7482: ~2-3 days after AGI
Daj#7482: lol
bmk#1476: Humans staying relevant?
They were never relevant in the first place
Daj#7482: Mostly a joke
Daj#7482: Depends on how you define AGI
Daj#7482: But like, really, _really_ not long
triggerhappygandi#0001: Universe is the realest shit there is. It is SUPER relevant!
triggerhappygandi#0001: Even though it may be a hologram
Daj#7482: The idea that biological humans will exist as a meaningfully impactful force in the universe in e.g. 1000 years is _absurd_ to me
Daj#7482: Exponentials
Imperishable_NEET#1969: I guess the last cope, if we can't solve the last question, will be that any number of previously unobserved, maybe heretofore unobservable phenomena could be true. Maybe the multiverse is real and our dead, de Sitter universe will collide with another one on an infinite timescale
triggerhappygandi#0001: True. Gotta get cyberpunked
Daj#7482: Sure, maybe a few decades or whatever
Daj#7482: But not a century
Imperishable_NEET#1969: Or maybe Boltzmann Brains or Quantum Fluctuations will restart things
Daj#7482: seems reasonable
triggerhappygandi#0001: Calm down lol |
triggerhappygandi#0001: 3-4 years isnt even in the realm of aggressively fast
triggerhappygandi#0001: AGI isnt Go
Daj#7482: This is post super human AGI existing, and is basically just silly speculation
Daj#7482: Doesn't mean anything
triggerhappygandi#0001: Have you read _The Emperor's New Mind_?@Daj
Imperishable_NEET#1969: Every generation thinks they'll be the last, especially in the modern era.
andyljones#7746: wouldn't have said that ten years ago
Daj#7482: Isn't that one of those insufferable consciousness books?
triggerhappygandi#0001: It is
Imperishable_NEET#1969: Singularity stuff is more grounded than religion but still an eschatology nonetheless
Daj#7482: Yea no thanks lol
triggerhappygandi#0001: :zucc:
Daj#7482: It's an inside view, yea
Imperishable_NEET#1969: Of course, there is the Doomsday Argument
triggerhappygandi#0001: I mean, Go is hard, but I would always say so
triggerhappygandi#0001: Go is a pretty simple world
triggerhappygandi#0001: Compared to irl
Imperishable_NEET#1969: There's also the Simulation Hypothesis or Boltzmann Brain Hypothesis, which I shrug off for being unfalsifiable.
Imperishable_NEET#1969: This video is a clickbait title, his actual answer is that we're probably living in base reality, but the moment we create a simulation the odds flip in favor of us also being simulated. https://youtu.be/HA5YuwvJkpQ
triggerhappygandi#0001: I mean, what would it matter anyhow |
triggerhappygandi#0001: We could be part of a running program
triggerhappygandi#0001: But in-universe things don't change by that knowledge
bmk#1476: I've heard some argue that the *only* difference between religions/cults/cranks and sufficiently engaging secular organizations/non-cult groups/researchers is the inside view or object level
bmk#1476: Maybe that's a bit of an exaggeration, but i think that it's a large chunk of the difference
droper#8996: I wonder where the energy that makes the universe possible came from. I also wonder if we are just too limited to even ask the right questions.
droper#8996: I imagine prehistoric people asked similar questions in their own way.
3dprint_the_world#6486: Chimps: "How long do you think we'll stay relevant?"
3dprint_the_world#6486: Although tbf intelligence isn't everything. Cockroaches are still around and will probably outlast us.
3dprint_the_world#6486: The key is finding a niche and being really really good at it.
3dprint_the_world#6486: really? you expect civilisation to last longer than the ~360 million years cockroaches have lasted?
3dprint_the_world#6486: In the case of cockroaches, their niche is basically: eating the decomposing detritus left by the activities of larger organisms, e.g. us.
bmk#1476: Yeah and the only reason we haven't wiped them off the face of the earth is because we don't care about them. If cockroaches create problems, we don't have any second thoughts about obliterating large numbers of them
StellaAthena#3530: Do you think we could actually exterminate roaches?
StellaAthena#3530: That sounds very difficult and non-trivial IMO
bmk#1476: I think mosquitos are a perfect example: we kill them en masse merely because they're annoying, and now we're looking to completely obliterate certain species of them using gene drive stuff because malaria
bmk#1476: Using some kind of gene drive thing? Given a few years of focused r&d, probably
StellaAthena#3530: hmmm yeah. Wasn't thinking about gene things
Sphinx#2092: How did it go for everyone who submitted to NAACL?
bmk#1476: Basically my point is that sure, we might still exist past the point of no return, but the moment we become even a minor inconvenience to the AI we will be obliterated without mercy
bmk#1476: "the AI does not love you, nor does it hate you, but you are made of matter that it can use for something else" |
nz#9710: You submitted a paper right? How did it go for you?
Sphinx#2092: Lukewarm. 3, 3.5, 3.5 and 2.5, 3.5, 3.5. I can most likely bump the 2.5 up, since they just didn't read a section of the paper that explicitly addresses the problems they brought up, but who knows
Sphinx#2092: Rebuttals always feel like Chance Time from mario party.
Sphinx#2092: Did you submit?
nz#9710: No, ahah, I wish -- I'm currently dealing with midterm exams (I'm an undergrad) and hoping to work on my thesis in a couple weeks.
nz#9710: Well, hopefully you're able to bump up that 2.5! Good luck!
3dprint_the_world#6486: there's absolutely no possible scenario in which:
- roaches don't exist
- humans exist
3dprint_the_world#6486: I have never said anything with such confidence.
bmk#1476: But muh gene drive
Daj#7482: Give me 50 years of tech progress
bmk#1476: The only reason we haven't obliterated mosquitos yet is because of the coordination challenge of convincing people that we should
3dprint_the_world#6486: mosquitos are completely different
bmk#1476: How so
Sid#2121: less tasty
3dprint_the_world#6486: mosquitos fit a narrow niche
3dprint_the_world#6486: cockroaches are generalists
bmk#1476: How does that change things
3dprint_the_world#6486: https://en.wikipedia.org/wiki/Generalist_and_specialist_species |
bmk#1476: How does that change things
bmk#1476: Gene drive kills from the inside, not the outside
bmk#1476: It doesn't matter which environments it's adapted to if you're not trying to kill it by taking away its habitat
3dprint_the_world#6486: the concept of even using a gene drive to exterminate a species is dicey to begin with
3dprint_the_world#6486: let alone when the species is generalist
3dprint_the_world#6486: don't underestimate the ability of life to adapt 🙂
3dprint_the_world#6486: it's why, for example, as viruses mutate they become less deadly and more infectious
Daj#7482: This is just an artifact of our current tech level
Daj#7482: Human tech is moving faster than evolution
3dprint_the_world#6486: maybe!
bmk#1476: Life finds a way, except when it doesn't (because survivorship bias)
Daj#7482: Any line of power will be crossed that isn't forbidden by physics
3dprint_the_world#6486: but see: https://discord.com/channels/729741769192767510/729741769738158194/801524817908072458
at that point, we'd have likely used the same gene tech to improve ourselves beyond being human
3dprint_the_world#6486: so the point still stands
bmk#1476: This depends on your definition of human
3dprint_the_world#6486: like to simplify this even more: we could just upload ourselves to a computer in orbit and then nuke the entire surface of the planet into high-level nuclear waste
3dprint_the_world#6486: (although tbh I'm not even sure that that would get rid of cockroaches)
Daj#7482: I also think there is no _likely_ scenario where humans exist but cockroaches don't
Daj#7482: But "absolutely no possible scenario"? Not at all |
mick#2835: Even if you assume that it physically got rid of cockroaches then wouldn't we still remember them, and so they would still be a type of pest? Lol
3dprint_the_world#6486: I'd argue that outside of exceedingly implausible hypothetical scenarios, that's a valid statement.
3dprint_the_world#6486: people underestimate cockroaches and it's quite sad.
Daj#7482: "This is impossible, except for the scenarios where it is possible"
nz#9710: What about tardigrades tho
nz#9710: Are they cockroaches 2.0?
3dprint_the_world#6486: more like "This is impossible, except if you posit impossible circumstances to begin with"
3dprint_the_world#6486: If you assume 0=1, then sure, you can prove anything
Daj#7482: You use the word "impossible" waaaaaaay too frivilously lol
3dprint_the_world#6486: no I'm dead serious
Daj#7482: Must be the physicist in you
mick#2835: @3dprint_the_world people underestimate GPT and it's like a hundred times smarter than a cockroach 🤣
Daj#7482: "It's impossible to do X [under these extremely specific theories that are probably incomplete]"
Daj#7482: Just say high probability man
Daj#7482: Be a good bayesian
3dprint_the_world#6486: ok fine
3dprint_the_world#6486: 1e-100 probability
3dprint_the_world#6486: happy?
3dprint_the_world#6486: (yes I am that confident)
Daj#7482: Then I'd like to make a _lot_ of bets with you lol |
Daj#7482: I bet 1ct vs your 1e100$
3dprint_the_world#6486: I'll bet you ten grand. Right now.
Daj#7482: no no
Daj#7482: e100
3dprint_the_world#6486: I'll bet you everything I own.
Daj#7482: Cool deal
3dprint_the_world#6486: I'll suck your dick for eternity.
mick#2835: Is 1e-100 more or less probable than "impossible" let's be real impossible happens like 0.02% of the time 🤣
3dprint_the_world#6486: on top of it
3dprint_the_world#6486: I'll be your bitchslave
Daj#7482: Not sure how this affects EV :thonk:
3dprint_the_world#6486: like even assuming you had some tech that could hunt down every currently existing cockroach and kill them, they would just adapt, like they always do.
3dprint_the_world#6486: over 360 million years, they've had to deal with millions of predator species that tried to do exactly that.
Daj#7482: in 360 mio years, no species built nukes
Daj#7482: Lets see how evolution develops nuke resistance lmao
Daj#7482: Tech >>> Evolution
triggerhappygandi#0001: Exponential growth
3dprint_the_world#6486: they've also had to deal with millions/billions of fine-tuned genetic machines that tried to wipe them out (viruses)
3dprint_the_world#6486: they've dealt with everything
3dprint_the_world#6486: sure, but again, see https://discord.com/channels/729741769192767510/729741769738158194/801524817908072458 |
Daj#7482: You have an extremely limited concept of "everything" lol
3dprint_the_world#6486: as I said, we could plausibly upload ourselves on to a computer in orbit and then nuke the planet
3dprint_the_world#6486: that might work
Daj#7482: I'm about 1000:1 confident in that
3dprint_the_world#6486: (I mean, it still probably wouldn't)
Daj#7482: I'm trying to make a point of how _insane_ being 1e100:1 sure of _anything_ is
Daj#7482: I'm not 1e100:1 sure that _reality exists_
3dprint_the_world#6486: fully aware
3dprint_the_world#6486: me neither!
3dprint_the_world#6486: but I'm 1e100:1 sure about what I said
Daj#7482: So you're more sure of cockroaches than of the literal existence of cockroaches?
3dprint_the_world#6486: really
triggerhappygandi#0001: What the fuck is going on
triggerhappygandi#0001: :guilty:
3dprint_the_world#6486: :yes:
Daj#7482: You don't see the flaw in your ontology?
Daj#7482: Reality not existing is a superset of cockroaches not existing
3dprint_the_world#6486: I'm 1e100 sure of p(what I said | existence of reality)
bmk#1476: I'm not even 1e100:1 sure that cockroaches exist
Daj#7482: That's a _totally different statement_ |
3dprint_the_world#6486: or p(what I said | existence of cockroaches)
3dprint_the_world#6486: if that makes you feel better
3dprint_the_world#6486: ok fine.
triggerhappygandi#0001: You are very unsure then
3dprint_the_world#6486: I figured we'd all assume reality exists
triggerhappygandi#0001: I see those shits a lot of times
3dprint_the_world#6486: that that would be a common assumption for our literal debate
triggerhappygandi#0001: And they are very real
3dprint_the_world#6486: but if you want to question reality, sure
3dprint_the_world#6486: I'm on board
bmk#1476: How can you be 1e100:1 sure of *anything*?
Daj#7482: I wouldn't do this if you hadn't invoked _literal impossibility_
triggerhappygandi#0001: I can be sure of reality being real to me, with equal odds @bmk
3dprint_the_world#6486: ok, happy to be proven wrong
3dprint_the_world#6486: what am I missing
bmk#1476: "0 and 1 are not probabilities"
Daj#7482: Aligned ASI, for example
triggerhappygandi#0001: Yes.
Daj#7482: That also hates cockroaches
Daj#7482: But keeps humans as pets |
Daj#7482: I say there is more than 1e-100 chance of that
triggerhappygandi#0001: But 1e100:1 can be
Daj#7482: So my EV on the bet is positive
3dprint_the_world#6486: but still, there's a big jump from 'hates cockroaches' to 'is willing to go scorched earth, literally'
triggerhappygandi#0001: Are you _that_ sure of AI hating cockroaches?
triggerhappygandi#0001: Man
triggerhappygandi#0001: Wtf
triggerhappygandi#0001: Based on what?
triggerhappygandi#0001: Do we only have negative view on cockroaches exclusively on the internet
mick#2835: No everybody hates cockroaches they are the worst
3dprint_the_world#6486: really? I'm not too sure of this. Out of the space of all possible programs, is one that hates cockroaches to the point of scorched earth, but likes keeping humans as pets, likelier than 1e-100?
3dprint_the_world#6486: I think *you* may be underestimating *program space*
Daj#7482: You really don't grok how small 1:e100 is
Daj#7482: I think there is a more than 1:e100 chance that _reality does not exist_
triggerhappygandi#0001: I thought you said 1e100:1 lol
Daj#7482: AI hating cockroaches is _strictly more likely than reality not existing_
3dprint_the_world#6486: I think you're really underestimating the size of program space
Daj#7482: We don't random search program space
3dprint_the_world#6486: 1e-100 isn't really a very unlikely program
triggerhappygandi#0001: Okay, are you _that_ sure of unreality? |
Daj#7482: I think you're underestimating how small of a subset ASI can realistically arise from in our universe
Daj#7482: and _how tiny 1:1e100 is_
triggerhappygandi#0001: It's just 10^-100
3dprint_the_world#6486: this discussion:
- your overconfidence is your weakness.
- your faith in your math is yours.
triggerhappygandi#0001: There are 1e80 something atoms in the universe
bmk#1476: I think you're underestimating how ineffective "I think you're underestimating" is as a debate strategy
mick#2835: In cryptography 1e-77 is considered the gold standard for "so impossible that you're ridiculous for going any further" lol
Daj#7482: Eh you're right, this is silly, we just have different intuitions. I'll just continue to money pump you whenever I can lol
triggerhappygandi#0001: Yeah but they're not dealing with _universe not existing_ @mick
mick#2835: Lol
triggerhappygandi#0001: They're just doing encoding
3dprint_the_world#6486: I mean tbf it's not like either of us can ever cash out.
Daj#7482: Nah
3dprint_the_world#6486: so betting is meaningless here.
Daj#7482: It's just in good humor
3dprint_the_world#6486: yes
triggerhappygandi#0001: 1e-100 is still too high for universe being unreal
Daj#7482: and yeah I think there is a pretty high chance the universe doesn't exist |
Daj#7482: Simulation arguments are more than marginally compelling
triggerhappygandi#0001: Ah
triggerhappygandi#0001: See
Daj#7482: Nah I place it _way_ higher
triggerhappygandi#0001: To us inside, it doesn't matter
Daj#7482: I never said it mattered
triggerhappygandi#0001: It _could_ be a simulation
bmk#1476: We're talking about **B**ayesian™®© degrees of belief
bmk#1476: Or, at least i think we are
triggerhappygandi#0001: But any inside experiment can't prove otherwise
3dprint_the_world#6486: also I'll admit, in my initial statement, my definition of 'human' is pretty narrow: biological humans that e.g. eat regular food and reproduce normally
Daj#7482: That we know of
mick#2835: Actually it exactly does **matter** from "in here" 🤣
Daj#7482: Simulator could fuck with the simulation
Daj#7482: Or pull us out
Daj#7482: We could be post humans that deleted their own memory to relive a cool simulation of our ancestors lives
triggerhappygandi#0001: Yes. That we know of. But 1e-100 is too high
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/801531065473957938/20475d3e4cdce88d3fda61f05203529e.png
CRG#8707: https://www.lesswrong.com/posts/GrtbTAPfkJa4D6jjH/confidence-levels-inside-and-outside-an-argument
Daj#7482: a classic |
3dprint_the_world#6486: lol
triggerhappygandi#0001: What even is 1e100. That's just a googol
Daj#7482: I don't think the human brain is in any way reliable enough to get a 1:1e100 confidence on _anything_
Daj#7482: That's basically my argument
triggerhappygandi#0001: Hmm. I see.
triggerhappygandi#0001: We can't really comprehend big numbers
triggerhappygandi#0001: Even 1e30 is hard to wrap your head around
bmk#1476: ~~Sounds like a failure of frequentism. I think this boils down to LWers not being Bayesian enough~~
Daj#7482: I can't reliably tell the difference between e5 and e6
3dprint_the_world#6486: agreed, I'm just saying I can't conceive of any possible scenario, hence 1e-100. Now you might say: "Just because *you* can't conceive it doesn't mean...... " but you asked for *my* confidence, so it's valid.
triggerhappygandi#0001: :void:
3dprint_the_world#6486: someone else's confidence might be different
bmk#1476: Imagine 10 apples. Now imagine 30 smaller apples floating to the upper right
Daj#7482: eh I guess so, I guess I'm saying "your epistemology seems really broken if it spits out numbers like that, I would debug that"
3dprint_the_world#6486: well initially I said 0
triggerhappygandi#0001: Ok I will hire you at €1e6/yr
Daj#7482: Which is broken by definition
3dprint_the_world#6486: you guys then bugged me about it
Daj#7482: Just a suggestion
3dprint_the_world#6486: 1e-100 is just to make you happy |
Daj#7482: If you wanna continue using non-bayesian epistemology fine I guess lol
Daj#7482: the acausal gods will surely like exploiting your decision theory
Daj#7482: (this is a joke)
3dprint_the_world#6486: lol
Daj#7482: (hopefully)
triggerhappygandi#0001: This is not a joke
3dprint_the_world#6486: I guess I just don't know what number to assign to 'totally confident to the point that I will bet anything I can bet on it'
3dprint_the_world#6486: if that number is 1e-10 instead of 1e-100, then sure.
3dprint_the_world#6486: I don't care about the actual number.
mick#2835: 1-e where e is a constant made up for this situation
mick#2835: There now we have a number
Daj#7482: Yea this is the bug I had a hunch you might have. You have a big SCALE_NEGLECT_ERROR haha
Daj#7482: If you don't care, it's fine
3dprint_the_world#6486: no I understand the difference in scales
3dprint_the_world#6486: I totally understand the absurdity of talking in large or small probabilities
bmk#1476: I think the problem is that using 0 as a probability is a social signal for non-bayesian reasoning
3dprint_the_world#6486: exactly
bmk#1476: And so if you drink the Bayesian koolaid, it will cause you to doubt others' reasoning when they signal they aren't into bayesianism
3dprint_the_world#6486: what I don't understand is the need to assign non-zero probabilities to things that are obviously impossible. Like to give a concrete example, you can't solve an arbitrary quintic equation in terms of elementary functions. It's literally impossible - probability 0.
bmk#1476: An anti-shibboleth, if you will |
3dprint_the_world#6486: it makes no sense to assign nonzero probability to this.
3dprint_the_world#6486: *0 probabilities exist*
bmk#1476: For all intents and purposes you're correct
3dprint_the_world#6486: one isn't being smart by having a nonzero prior for this
triggerhappygandi#0001: Probability of getting a 1e100 on a single dice throw
bmk#1476: But it's a Bayesian shibboleth
3dprint_the_world#6486: yes.
triggerhappygandi#0001: It _could_ occur if the universe is a simulation
3dprint_the_world#6486: giving a nonzero probability for this is just a way to signal
3dprint_the_world#6486: "Hey I like Bayesianism too!"
triggerhappygandi#0001: The programmer could just fuck with us
triggerhappygandi#0001: For having this conversation
Daj#7482: I actually don't believe this, but to preempt the discussion: It's because I have a different definition of what math is than you
bmk#1476: https://www.lesswrong.com/posts/6FmqiAgS8h4EJm86s/how-to-convince-me-that-2-2-3 btw, this would be the canonical response
Daj#7482: bmk was faster
bmk#1476: But i don't think arguing about that actually gets anywhere
Daj#7482: Basically, I don't trust my brain to have reasoned correctly about math with probability 1
bmk#1476: It's more a shibboleth in practice than anything else
mick#2835: The discrepancy is simple: the probably is truly zero... *assuming that nobody made a mistake in the theoretical machinery leading up to that conclusion being derived*
Daj#7482: Actually, I think it's _super important_ but eh |
triggerhappygandi#0001: It does if we all had lsd rn
3dprint_the_world#6486: here's another take: The probability is zero unless you don't understand math.
Daj#7482: Not everything is a shibboleth
triggerhappygandi#0001: I bet it would make extremely sensational visualizations
Daj#7482: Some epistemologies have different properties than others
Daj#7482: Counter: you don't understand math with probability 1
bmk#1476: I meant the cause of the disagreement we spent the last half hour on
bmk#1476: I don't think there's any actual disagreement wrt the question at hand, which was the cockroach thing
Daj#7482: I think most people haven't read the sequences and haven't groked _why_ bayesianism is special
Daj#7482: It's not a red team blue team thing
triggerhappygandi#0001: Why even go that complex. The probability of you being Einstein given a coin toss is 0
Daj#7482: Bayes _means something_
Daj#7482: (with probability ~1e15:1)
3dprint_the_world#6486: I understand the math behind it as well as I understand anything and can even conceive of giving probability to things.
So p(X|Y)=0, where Y=my ability to even assign probabilities to beliefs
Daj#7482: lol
Daj#7482: Then you're using bayes wrong
3dprint_the_world#6486: which is all that really matters anyway
Daj#7482: Or there is an infinity in there
Daj#7482: You need _infinite bayesian evidence_ to reach probability 1 or 0 |
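(A minimal sketch of Daj's point here, in log-odds form; the function names are illustrative, not from the chat. Each Bayesian update adds a finite number of bits of evidence, so any finite chain of updates leaves the posterior strictly between 0 and 1.)

```python
import math

def update_log_odds(log_odds: float, likelihood_ratio: float) -> float:
    # Bayes' rule in log-odds space: each observation adds
    # log(P(E|H) / P(E|~H)), a finite amount of evidence.
    return log_odds + math.log(likelihood_ratio)

def to_probability(log_odds: float) -> float:
    return 1.0 / (1.0 + math.exp(-log_odds))

# Start at even odds and observe 5 pieces of strong (100:1) evidence.
log_odds = 0.0
for _ in range(5):
    log_odds = update_log_odds(log_odds, 100.0)

p = to_probability(log_odds)
# log_odds is finite (5 * ln 100), so p is very close to 1 but never
# exactly 1: probability 1 would need infinitely many such updates.
```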
mick#2835: Is quantum computation limited to considering only non-infinite superpositions?
Daj#7482: afaik yes but I'm not a physicist
Daj#7482: or complexity theorist
3dprint_the_world#6486: what's a "infinite superposition"?
mick#2835: I guess a qubit with a continuous value? Lol
triggerhappygandi#0001: Hmmmmmm.
Daj#7482: I vaguely recall something about that this would allow Hypercomputation
Daj#7482: but I might be wrong
Daj#7482: Definitely can't measure something infinitely precise irl
triggerhappygandi#0001: How can anything be _truly_ continuous physically, when universe itself is pixelated
mick#2835: Afaik max states "visited" is exponential in the number of qubits and yeah that Planck stuff
3dprint_the_world#6486: yeah basically you can't construct something like this and measure it. Heisenberg, for one.
triggerhappygandi#0001: Well there's a famous German who won Nobel to prove that so yes.
bmk#1476: Topologists hate him! Learn his one simple trick
bmk#1476: Er
bmk#1476: I mean analysts i guess
bmk#1476: But also topologists
triggerhappygandi#0001: I wish we knew all the secrets about these pixels in 2020
3dprint_the_world#6486: I'm more interested in why you're interested in continuously-valued qubits
mick#2835: well if we could hold a superposition of an infinite amount of states then perhaps it is possible for a human to generate an infinite amount of evidence for a position internally |
bmk#1476: Why do people study topology when nothing in reality is even continuous smh
3dprint_the_world#6486: unlikely
Daj#7482: Continuous functions are just useful approximations of discrete reality
bmk#1476: How the turntables
3dprint_the_world#6486: for one thing, I'm not sold on the idea that we do any quantum computation in our brains at all
Daj#7482: Note this would literally be dividing by zero lol
mick#2835: I put it as even more than unlikely, I'd like to say impossible, but I am extremely nitpicky about when I will break out the term impossible so I am instead considering this incredibly unlikely route lol
3dprint_the_world#6486: our brains are warm salty baths of water
3dprint_the_world#6486: not a very good environment for QC
Daj#7482: but _microtubules_
Daj#7482: lol
3dprint_the_world#6486: second, even in the very unlikely scenario we're doing some kind of QC in our heads, QC is inherently jittery, random, and noisy
3dprint_the_world#6486: even measuring a binary qubit is dicey
3dprint_the_world#6486: let alone a continuous one
Daj#7482: But 3dprint, evolution will surely figure it out!
Daj#7482: It's been optimizing for *checks notes* hundreds of millions of years!
mick#2835: Well so are people lol, actually that part there to me seems the easiest to reason away technically because you could just blame all of people's erratic weirdness on not getting the lucky samples from the quantum computation and having to try it again 🤣
Daj#7482: _Everett has entered the chat_
mick#2835: But I want to reiterate that I think the idea of considering an infinite amount of evidence that is all internally generated sounds completely asinine to me and I'm just playing devil's advocate against myself
Daj#7482: Quantum Immortality boys |
bmk#1476: Evolution doesn't explain how quantum computers spontaneously came into existence, checkmate atheists
3dprint_the_world#6486: ok, let's pretend your jocular attack is not in jest, and take it seriously 👅
CRG#8707: Does magnetoreception count?
Daj#7482: It's aboiut 50/50
3dprint_the_world#6486: the human brain has only been evolving for a few million years, not a hundred million
3dprint_the_world#6486: was *just* about to say that
3dprint_the_world#6486: some animals *do* use some weird quantum tricks in their brains
3dprint_the_world#6486: it's not QC
3dprint_the_world#6486: ofc
3dprint_the_world#6486: but it's still cool
Daj#7482: When will my microbes do Shor's Algorithm?
Daj#7482: That would be a funny project
Daj#7482: Try to evolve Shor's Algorithm
mick#2835: Have you tried?
mick#2835: Think really hard about a modular exponentiation of a long number until you can feel the cycle like a ring in your minds eye and then just spit out a prime!
Daj#7482: Evolve microbes to solve Gödel Problems :ultrazucc:
mick#2835: Do it!
3dprint_the_world#6486: here's my overall takeaway from this discussion: Bayesians need to understand life and evolution more.
Daj#7482: lol ok
3dprint_the_world#6486: but thanks for engaging anyway |
3dprint_the_world#6486: it was an interesting discussion
3dprint_the_world#6486: I enjoyed it
Daj#7482: Yea! Don't have to always agree on everything, it's interesting to engage super smart people with different views
Daj#7482: ~~Even if they're _wroooong!_~~
Daj#7482: jk lol
3dprint_the_world#6486: uh uh uh, wrong with probability < 1.0
3dprint_the_world#6486: 😉
Daj#7482: Hey if you get to divide by zero, so do I!
3dprint_the_world#6486: speaking of dividing by zero, back to my training run...
mgostIH#0245: this is me when my gf asks if I forgot something important
gwern#1782: bayesianism is just the evolutionary replicator equation on hypothesis-space, prove me wrong
Adam12341234#5266: Anyone mind joining voice chat? 🙂
bmk#1476: Why?
Musical_Pumpkin#4739: Hi all
Musical_Pumpkin#4739: Just dropped in
StellaAthena#3530: Welcome!
Musical_Pumpkin#4739: Thank you!
Musical_Pumpkin#4739: So you guys do AI stuff here? I'm doing an AI project right now, I've just put it aside because I was busy with work
StellaAthena#3530: Yes, we are an AI research collective. Many of our current projects are about language modeling, but we have a variety of interests and backgrounds.
Musical_Pumpkin#4739: Nice, that's dope |
bmk#1476: What is your project
Musical_Pumpkin#4739: My project is a red team oriented AI that can breach networks and operate alongside a red teamer while he works a joint
Musical_Pumpkin#4739: Once its finished, whenever that is, I'll release it as open source
mick#2835: lol
StellaAthena#3530: > while he works a joint
Is there another slang meaning of this term? Or are you making a hacker AI so you can smoke weed while it does all the work?
Musical_Pumpkin#4739: XD no it's supposed to watch my back
Musical_Pumpkin#4739: Sorry that was poor choice of words on my part
Musical_Pumpkin#4739: I've never done red teaming, cybersecurity is still pretty new to me
Musical_Pumpkin#4739: But red team is my focus
Musical_Pumpkin#4739: Perhaps I'll make a blue team version, or leave it to someone else after it's released
bmk#1476: How much do you know about AI?
Musical_Pumpkin#4739: A small portion, like I've said, work took the majority of my time last year
bmk#1476: https://discord.com/channels/729741769192767510/729741769738158194/736374402366832681
bmk#1476: Here are some resources about AI that you might find useful
Musical_Pumpkin#4739: Thank you 😊
Musical_Pumpkin#4739: I can provide a better explanation of my project *hopefully* once I'm at my computer
Musical_Pumpkin#4739: I'm on mobile right now
StellaAthena#3530: Hey guys! If you’ve been hanging out and think we are pretty cool but want to dip your toes into research with a more “data processing” type task than “recreate the world’s largest language model” type task let me know! We have some scaling laws research brewing that could use help on the data side.
3dprint_the_world#6486: was having a discussion in #math and was reminded of this https://en.wikipedia.org/wiki/Probability_measure |
notice the *square* brackets [0, 1]
chirp#4545: https://twitter.com/tlbtlbtlb/status/1298355376962670592
chirp#4545: i wonder if AI could end up the same way, and who would win/lose in that scenario
bmk#1476: ok so
bmk#1476: how much do you know
Golgi Apparatus#4074: I am a studying computer science, I have read up on the math behind neural networks and such
bmk#1476: about AI, math, software engineering
bmk#1476: ah
Golgi Apparatus#4074: but i dont know all the names and such
Golgi Apparatus#4074: practical application if you will
bmk#1476: https://discord.com/channels/729741769192767510/729741769738158194/736374402366832681 check this out
bmk#1476: you can skip the math ones
bmk#1476: but there's a few papers in there about things relevant to our projects
bmk#1476: actually that list needs some updating
bmk#1476: lemme compile an updated list
Golgi Apparatus#4074: Brilliant, I do love 3b1b
Golgi Apparatus#4074: I have watched his series on neural networks
Golgi Apparatus#4074: But im more interested in what you all are doing?
Golgi Apparatus#4074: Are there kits available online to start with neural networks and such?
bmk#1476: The Eleuther Reading List: here is a list of resources for everything from the basics to the stuff we're actively doing research on |
math (for completeness, feel free to skip if you already know this stuff):
https://www.youtube.com/playlist?list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr
https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab
https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
more specific stuff for things eleuther is working on, in no particular order:
http://jalammar.github.io/illustrated-transformer/
https://arxiv.org/abs/1706.03762
https://arxiv.org/abs/1811.02084
https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
https://arxiv.org/abs/2005.14165
https://arxiv.org/abs/1811.06965
https://arxiv.org/abs/2006.16668
https://arxiv.org/abs/2001.08361
https://arxiv.org/abs/2010.14701
https://arxiv.org/abs/2101.00027
https://arxiv.org/abs/2002.05645
https://www.deepspeed.ai/
http://ruishu.io/2018/03/14/vae/ |
https://learning-at-home.github.io/
https://arxiv.org/abs/1810.04805
https://arxiv.org/abs/2006.04768
https://arxiv.org/abs/1909.08593
bmk#1476: Pinned a message.
Golgi Apparatus#4074: Thanks a bunch man
Golgi Apparatus#4074: How much do you read into the Arxiv pdfs?
bmk#1476: eh just skim em
bmk#1476: be familiar with what it is
Zoomology#8499: read the abstracts, if I don't understand how they arrive at conclusion or would like to see the proof, then I read thru
Golgi Apparatus#4074: What Api is common for Deep learning on this server?
bmk#1476: we use a bunch of different stuff
Zoomology#8499: I do try to at least skim the whole thing
bmk#1476: a warning that there may be a steep learning curve and we unfortunately don't really have the time to help people get up the curve
bmk#1476: so youre mostly on your own
Golgi Apparatus#4074: I understand
Golgi Apparatus#4074: Have had to tackle many
bmk#1476: also it may be useful to know that my selection of papers is incredibly biased towards directions that either we're directly working on or have considered and then ruled out
Golgi Apparatus#4074: Whats the average age on this server?
Golgi Apparatus#4074: you all seem very experienced for a discord server |
bmk#1476: most of the people here are early career
bmk#1476: we've never done a survey so we dont know for sure
3dprint_the_world#6486: :smallbrain: : reading abstracts
🧠 : reading the conclusion
:bigbrain: : reading the appendix
bmk#1476: hey, pile has a *respectably interesting* appendix!
3dprint_the_world#6486: indeed
3dprint_the_world#6486: that was a non-ironic use of the bigbrain meme
3dprint_the_world#6486: if you want the meme played straight, here you go:
:smallbrain: : reading the abstract
🧠 : looking at the figures only
:bigbrain: : looking at the last author
bmk#1476: last author is unironically strong signal though
zphang#7252: :chad: : looking at commits
Zoomology#8499: Why izzat?
kindiana#1016: :guilty: I unironically just look at figures sometimes
bmk#1476: i mean, a lot of papers are well summed up by diagrams
zphang#7252: papers are designed around that
3dprint_the_world#6486: typically research team lead or PI. The paper will be part of their larger research agenda.
zphang#7252: like captions are recommended to be relatively self-contained for that reason |
3dprint_the_world#6486: so it will probably have very similar themes to their other papers
cfoster0#4356: *looks at formatting to see what lab it probably comes from*
bmk#1476: we need to adopt a standard formatting for all of our papers
3dprint_the_world#6486: ~~comic sans~~
Musical_Pumpkin#4739: Hey everyone, I am back
bmk#1476: honestly, i do kinda like the ACL + modified author block
zphang#7252: usually it's based on conference tho
bmk#1476: i meant for arxiv verseion
zphang#7252: switching between single-column and double-column formats for resubmissions is fun :^)
3dprint_the_world#6486: stop it stop it you're bringing back my phd ptsd
Golgi Apparatus#4074: ACL?
bmk#1476: anyways, i will lobby for the grid author block layout for any arxiv submission in the future
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/801634078637948958/unknown.png
zphang#7252: > conference template doesn't support `\citep`
bmk#1476: i really like how it turned out
3dprint_the_world#6486: oh f you man
3dprint_the_world#6486: you're just doing this to me deliberately now
zphang#7252: this is a bad author block https://cdn.discordapp.com/attachments/729741769738158194/801634359303995453/unknown.png
zphang#7252: makes it look like Pedro Rodrigue, Shi Feng is one person
bmk#1476: oh god i didnt even notice that at first |
bmk#1476: and now i cant unsee it
bmk#1476: anyways i will strongly lobby for: grid of names, single EleutherAI main affiliation after that, asterisks for any other institutions if we *absolutely must* include them for policy reasons
3dprint_the_world#6486: first name: Shi Feng
surname: Pedro Rodriguez
zphang#7252: it's a suffix, like Esquire
bmk#1476: Wallace, Eric and Rodrigues, Pedro, Feng, Shi and Yamada, Ikuya and Boyd-Graber, Jordan
StellaAthena#3530: It's rare that you *absolutely must* include multiple affiliations, but people often wish to. It's a good thing to do.
bmk#1476: well, i mean, it's not the hill i'd die on but i'd certainly put up a small fight to keep the affiliation as simple as possible
bmk#1476: unless it's *really* a collab
bmk#1476: in which case obviously the rules are different
StellaAthena#3530: Why?
bmk#1476: honestly i dont have any really good reason
3dprint_the_world#6486: I agree with @bmk actually
bmk#1476: it's mostly just 1. visually cleaner and 2. it emphasizes that this was an *eleuther* thing, done in free time, not as a collaboration with xyz other organization (unless it actually was, in which case i think the rules are different)
bmk#1476: but again, not a hill i'd die on
3dprint_the_world#6486: I think it's best to just pick one affiliation: either your uni or the collab org
bmk#1476: if it was a collaboration, i'd support putting the n major organizations in order of contribution next to each other, and use the superscript footnote numbering / asterisk/dagger to indicate affiliation of individual authors
bmk#1476: if it's a near 50/50 collaboration, i'd support some kind of wild layout with the authors partitioned down the middle of the page and both affiliations on their respective sides
3dprint_the_world#6486: ah no you lost me there
3dprint_the_world#6486: just having superscripts is ok |
3dprint_the_world#6486: no need to create divisions
bmk#1476: fair
bmk#1476: im just tossing ideas out honestly
bmk#1476: it's mostly bikeshedding
Isaac McHorse#2007: OH F*$K! OH HELL NO! OH HELL NO! STOP IT!
3dprint_the_world#6486: oh is it
3dprint_the_world#6486: is it bikeshedding
Isaac McHorse#2007: IT 'S ALL ABOUT WORK! WELL YOU 'RE NOT WORKING!
3dprint_the_world#6486: I'm just curious to know if it's bikeshedding
Isaac McHorse#2007: ?! IT'S YOU! YOU SHOULD GO GET A REAL WORK OUT!!! GET OVER THE DISTRACTION!
bmk#1476: anyways, on to actual work
bmk#1476: i'm spinning up a new scaling law project
bmk#1476: interested?
3dprint_the_world#6486: sure, how can I help
bmk#1476: let's move over to #scaling-laws
Zoomology#8499: Is it that rare tho? I feel like I see a lot of papers with both an academic affiliation and a corporate one
StellaAthena#3530: That doesn't contradict what I said. The authors are not typically *compelled* to list both affiliations
Zoomology#8499: Ah, what is the emoji for *missed the nuance*
AI_WAIFU#2844: god I've been reduced to using *colab*.
\*shivers\* |
erin#5432: :(
Merlin#6250: hi all ! just found out about your projects... sounds really interesting. Do you have a list of things you are looking people to help / contribute ?
triggerhappygandi#0001: Man. I feel a physical blow from this statement
triggerhappygandi#0001: :nooo:
Sid#2121: somewhat! this is semi-up-to-date https://github.com/EleutherAI/info/blob/main/jobs_board.md
Sid#2121: what do you have experience in?
kindiana#1016: that link's broken btw, I think this is the correct one https://github.com/EleutherAI/info/blob/main/jobs_board.md
Sid#2121: oh woops, thanks
yashwanth#8869: Hi I am a newbie here, sorry. what exactly are the use cases with this tech right now any saas ideas or so ? Thank you
Daj#7482: We don't really care or work too much on downstream applications, there are plenty of ideas floating around the Twittersphere
spirit-from-germany#1488: I am proud to announce my wonderful in-depth interview with @Daj 🙂
spirit-from-germany#1488: https://youtu.be/Qa-5zeZxQxg
spirit-from-germany#1488: https://youtu.be/9MZ6YH03RjE
spirit-from-germany#1488: @everyone
Daj#7482: Thanks for having me, was great fun!
Louis#0144: 😡
Louis#0144: Jkjk it didn’t tag me
Visarch of Apollo,#7152: I'm not really good at tracking what's going on from a technical standpoint. How many parameters does gpt-neo have now?
Daj#7482: Currently training on TPUs: 2ish Billion (Mostly for testing)
Biggest that has run for one step in TPUs: 100ish B |
GPU code doesn't work yet without bugs, goal is 175B+
sloth_.on._tabasco#9015: is the coral dev board a worthwhile investment to tinker around with?
StellaAthena#3530: A quick search makes it seem like coral is intended for edge devices and is much weaker than what you can get via Google Colab
gwern#1782: (I've heard about Coral any number of times. not so much about Coral *uses*, though.)
sloth_.on._tabasco#9015: ah it does seem like Google Colab is the way to go
sloth_.on._tabasco#9015: but it's a cool product nonetheless
StellaAthena#3530: For people new to ML, Google Colab is usually the way to go lol.
Louis#0144: I submitted papers yesterday and I have zero energy to leave bed
triggerhappygandi#0001: :b_Kek:
AI_WAIFU#2844: We got a couple steps at 200B a while ago.
triggerhappygandi#0001: How many steps would be "good enough" though
bmk#1476: Several million times more
triggerhappygandi#0001: 10 million steps, let's say?
jrowe#5371: ok, here's an idea that might be good, might be worthless
jrowe#5371: The ordinal sequence is the original, unordered sequence of weights in a neural network layer. The ordinal sequence of the weights of a neural network layer can be permuted such that for a given permutation of the list of all weights in a layer the weights are ordered highest to lowest.
jrowe#5371: The "cardinal" permutation is the particular sequence for which the weights are ordered highest to lowest. The curve described by the cardinal permutation can be approximated using a sigmoid curve with logistic interpolation.
jrowe#5371: An entire network ordered in this manner produces a 3d planar topology, the cardinal manifold. Approximations of different manifolds can serve as templates for initializing network weights.
jrowe#5371: Fixing the weights but iterating over different permutations enables the use of monte carlo sampling as a method of searching for parameters.
jrowe#5371: maybe different networks that solve similar problems have similar curves
jrowe#5371: and maybe theres a way of analyzing inputs such that you can know what curves or manifolds might be better , at least for initializing a network |
jrowe#5371: if this is describing something that's already well known or outright bad thinking, please let me know lol
HypnoPump17#9322: https://www.grid.ai/ is that something eleuther might consider?
HypnoPump17#9322: train models in their cloud
bmk#1476: whats the tldr?
HypnoPump17#9322: but i dont know the intricacies/agreements eleuther already has w/ different providers so i cant judge by myself
bmk#1476: whats the advantage of using them?
HypnoPump17#9322: might get a sponsorship maybe? so more power to train models (ie. alphafold)
StellaAthena#3530: Their website contains absolutely no information about anything
StellaAthena#3530: It's a website without a product
HypnoPump17#9322: hm okay we need more mature options i get it
StellaAthena#3530: It's not even that. We need options that exist. There is zero information about anything technical on the website. There is zero evidence that they serve any clients. There's a "sign up for our waitlist" button and nothing else. www.grid.ai doesn't exist in any meaningful sense.
bmk#1476: ~~"sign up to be notified about when the waitlist is ready for signup"~~
janus#0150: @StellaAthena I've been working on a big PR for the omnitrack project. Probably 80-90 hours in but maybe another 20 left before its ready for basic use as a todo list. Hows your progress on the project?
StellaAthena#3530: You what
bmk#1476: ?
janus#0150: https://github.com/EleutherAI/omnitrack
StellaAthena#3530: What progress do you expect me to have made? I haven't been working on this at all.
bmk#1476: that was a repo i made for a random project that i haven't had the time to implement yet
bmk#1476: how did you even find it
StellaAthena#3530: Please do push your code, even if it's not usable yet. I would love to see it |
janus#0150: Oh I assumed you made the repo Stella because you're a watcher. It's on the EAI github
janus#0150: I'm just kidding about working on it 😅 . I thought it was a repo spawned as a joke
StellaAthena#3530: Yes, I know what the repo is
StellaAthena#3530: I'm just confused about what the joke is
janus#0150: That I spent many hours working on a project people aren't actually interested in. It didn't land.
StellaAthena#3530: gotcha
StellaAthena#3530: What is the largest transformer whose weights are freely avaliable online
janus#0150: Are Turing NLG weights public? It was 17B I think.
janus#0150: Looks like no
cfoster0#4356: I believe there was a large Megatron available somewhere. Let me check
cfoster0#4356: https://github.com/pytorch/fairseq/blob/master/examples/megatron_11b/README.md
zphang#7252: T5 is also 11B
zphang#7252: oh mT5 is 13B
janus#0150: Yep, good call https://github.com/google-research/multilingual-t5
The XXL is 13B
janus#0150: and T5-11B is available https://github.com/google-research/text-to-text-transfer-transformer
bmk#1476: what's the largest *unidirectional* transformer freely available out there?
bmk#1476: T5 isnt useful if you want to generate stuff
kindiana#1016: that's still megatron 11b afaik
bmk#1476: is megatron out there? |
kindiana#1016: https://github.com/pytorch/fairseq/blob/master/examples/megatron_11b/README.md
kindiana#1016: from above lol
kindiana#1016: this is also a :thonk: https://cdn.discordapp.com/attachments/729741769738158194/801950334226268180/unknown.png
janus#0150: I think theres also people trying to make an open source gpt-3 sized model.
bmk#1476: who would that be? :berk:
janus#0150: I don't know much about it. I assume OpenAI? Their mission statement is making AI open.
bmk#1476: openai making an open source gpt3, 2021, colorized https://cdn.discordapp.com/attachments/729741769738158194/801952810521591819/BptVE1JIEAAA3dT.png
StellaAthena#3530: Those people are us
cfoster0#4356: (I think this is another example of a joke)
jrowe#5371: no, that mans twin is evil
jrowe#5371: he makes him take awkward selfies
jrowe#5371: all sorts of cruel pics just for insta cred.
jrowe#5371: man - think of how close we are to being able to animate random snark like this - instead of a still frame, we'll use dall-e like text/image/gif to video or 3d
jrowe#5371: computer, generate season 6 of Breaking Bad, with Brad Pitt as the bad guy (as himself,) and Jesse Pinkman starting over as an app developer in Silicon Valley.
Dromarion#3383: Everyone is going to have a variation of "X but with a good ending" as one of their first requests
jrowe#5371: Lord of the Rings, but Gandalf summons the eagles in the first scene
jrowe#5371: "plop", everyone lives happily ever after
3dprint_the_world#6486: breaking bad, but Walter White lives in Canada
triggerhappygandi#0001: Game of thrones, but George RR Martin isn't a lazy fat sloth
RobinYuen#3504: Hello guys, i think this channel would be pretty experienced in this. Lets say for research purposes i want to finetune a BERT as quickly as possible with a single A100, what other than DeepSpeed can i try out? |
RobinYuen#3504: Doesnt have to be practical, just wanna push the limit
ethan caballero#6044: Y'all should plug Eleuther in this thread:
https://twitter.com/boazbaraktcs/status/1352606703716544513
andyljones#7746: Does anyone know of a 'remote job manager' that's serverless and based on rsync and SSH? Just want to be able to queue commands to a handful of boxes and rsync some files back at the end of each one. Every solution I can find is waaay more complicated than what I need.
I've got my own duct-taped-together version, but I'm sure it has some terrible hidden flaws in it.
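(Roughly the kind of duct tape involved — run each queued command over SSH in order, then rsync the results back. Host names and paths here are placeholders, and the injectable `run` is only there so the logic can be exercised without a real box:)

```python
import subprocess

def ssh_cmd(host, command):
    # BatchMode avoids hanging on interactive password prompts
    return ["ssh", "-o", "BatchMode=yes", host, command]

def rsync_cmd(host, remote_dir, local_dir):
    # pull the remote results directory back over SSH
    return ["rsync", "-az", f"{host}:{remote_dir}/", local_dir]

def run_queue(host, jobs, remote_dir, local_dir, run=subprocess.run):
    """Run queued shell commands on `host` in order, then sync results
    back. No server-side state: the queue lives on the submitting box."""
    for job in jobs:
        run(ssh_cmd(host, job), check=True)
    run(rsync_cmd(host, remote_dir, local_dir), check=True)
```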
bmk#1476: Seconding this, I'd love to have this
jrowe#5371: winscp has rsync support - not sure if it's standard or not
jrowe#5371: ahh, didn't grok that on the first read. path of least resistance for me would probably involve a raspberry pi, cron job, and shared folders
andyljones#7746: welp no suggestions so i'm gonna dive in and ~~roll my own orchestration~~, woo. i've laid out a high level spec here:
https://github.com/andyljones/boardlaw/issues/12
this roughly what you had in mind, or something completely different?
bmk#1476: Minor nits: how about something like jsonnet instead of json for user-facing config; i think the hardware requirements dict bit seems out of scope, or if you do end up doing it it should be as flexible as possible (i.e having cpus, gpus, etc baked in makes it hard to handle stuff like TPUs, etc); also, having manual placement control would always be nice; stdin/out/err should be handled by the experiment itself and/or experiment organizer, not the orchestrator; monitoring beyond just figuring out when it exits so it can spin up new experiments also seems out of scope, just leave that to the experiment organizer; a nice to have: the ability to specify dependencies between experiments, and potentially copy artefacts from one experiment to its dependent experiments would be nice, though this *might* be out of scope (if it is out of scope, I'm going to develop a tool for this piece in particular)
andyljones#7746: jsonnet: have not heard of, will look into. it a step up from yaml?
hardware reqs: yeah, was gonna just have a dict that's passed with the submission, and one that's passed with the machine config. `{'gpu': 3, 'tpu': 1}` kinda thing.
monitoring + stdout: i am very fond of these for debugging purposes, but it's orthogonal to the rest so easy enough to ignore |
deps: this seems out of scope to me (different scopes 🙃), but should be able to make it easy to plug in
tyvm for the detailed feedback 💯
bmk#1476: Semantics of `{'tpu': 1}` would be user configurable right?
andyljones#7746: y
andyljones#7746: `{'magic': 7}` if you want
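(The matching side of that dict could be as dumb as this — key names entirely up to the user, which is what makes `{'magic': 7}` fine:)

```python
def fits(requirements, machine):
    """True if a machine's resource dict covers a job's requirements
    dict. Keys are arbitrary user-defined strings ('gpu', 'magic', ...)."""
    return all(machine.get(k, 0) >= v for k, v in requirements.items())

assert fits({"gpu": 3}, {"gpu": 4, "tpu": 1})
assert not fits({"magic": 7}, {"gpu": 4})
```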
bmk#1476: Because the problem is that a) tpu v2 vs v3 b) you don't know how big of a tpu pod you can create until you try c) two machines can't create more tpus than one machine, unless they're in different regions, and running code in different regions requires a change of the config d) it's harder to create a v3-256 than two v3-128s
andyljones#7746: ookay so there's a fairly complex alloc procedure that 'greedy' will muck up badly?
bmk#1476: Yes
andyljones#7746: 🤔
bmk#1476: With TPUs, the tldr is you ask Google for a v3-something and then google thinks for a bit and tells you if it actually decided to give you one
bmk#1476: And there's absolutely no way of knowing how big of a tpu you can create without actually creating one
andyljones#7746: tl;dr user configurable job -> machine mapping?
bmk#1476: Yeah basically
bmk#1476: Also you can't assume each machine has a known amount of every resource
bmk#1476: It doesn't make sense to say we have x TPUs because tomorrow that number might go up or down
bmk#1476: And TPUs aren't fungible either; a v3-256 is not two v3-128s, and v3-8s are a totally special species of TPU hardware
bmk#1476: When creating a pod, generally the procedure is to start with, say, 256, and keep downsizing until google lets me make a pod
bmk#1476: Then edit the config to work with that |
bmk#1476: Also there is no such thing as a v3-16
bmk#1476: They do not exist
bmk#1476: It's the only size between 8 and 2048 that does not exist
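(A sketch of that downsizing loop — `try_create` here stands in for whatever actually asks Google for a pod, e.g. a wrapper around `gcloud compute tpus create`, which isn't shown:)

```python
# v3 pod sizes: powers of two from 8 to 2048, minus the nonexistent 16
V3_SIZES = [8] + [2 ** k for k in range(5, 12)]   # 8, 32, 64, ..., 2048

def largest_pod(request_size, try_create):
    """Walk down from `request_size` and return the first size Google
    actually grants, or None if every request is refused."""
    candidates = sorted((s for s in V3_SIZES if s <= request_size), reverse=True)
    for size in candidates:
        if try_create(f"v3-{size}"):
            return size
    return None
```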
andyljones#7746: so most of that should be handle-able by rewriting the machines.json in between `manager` runs
andyljones#7746: that's the advantage of not having persistent state
bmk#1476: But i can't even know how many v3-256s i could create without actually creating one, and you only create one right before training because they preempt after 2 hours if idle
andyljones#7746: (back in 30 min, dinner)
mick#2835: this is basically what I'm working on right now, except just using the socket directly instead of rsync
bmk#1476: At some point it might just make sense to give up on trying to include TPUs in the abstraction and just go for gpus only and have a separate tool for TPUs
mick#2835: https://gist.github.com/umbra-scientia/fc3430a2c1b4d0e403dbc3312b1903f4
bmk#1476: Unfortunately we wouldn't really be able to make much use of it in that case
mick#2835: @bmk I isolated just the high level overview part of the abstraction, does we need a more elaborate thing for agreeing on configuration?
bmk#1476: Wait, what are you responding to
bmk#1476: I was talking entirely to andy in my previous messages
mick#2835: I know.
mick#2835: But I figured I should communicate that I already experimented with doing the SSH and file transfer orchestration
bmk#1476: Ah
mick#2835: You brought up the TPU fuckery and I figured I should ask if we need to make the protocol agree on more than just device batch size before forming a circuit
bmk#1476: ..batch size?
bmk#1476: Er |
mick#2835: That code I linked in the gist is reduced to only the "meat"
bmk#1476: I wasn't thinking about batch sizes at all
mick#2835: It's necessary because it's one of the dimensions we have to parallelize across
bmk#1476: Oh, this isn't about the experiment orchestration?
mick#2835: I just figured if we're doing a training orchestration tool anyways we could re-use it for experiments and get extra experience with using it in the process
mick#2835: I try to avoid having a bunch of different versions of the same tool that work differently but maybe we should just make a ton of duplicate code idk, I'm flexible.
bmk#1476: I'm confused because data parallelism is a totally different level of abstraction from experiment orchestration
mick#2835: Is device batch size the only parameter you have to consider when doing this search?
bmk#1476: Batch size has nothing to do with choosing a tpu size at all
bmk#1476: It just runs slower on a smaller tpu
mick#2835: iterate variables
bmk#1476: ?
mick#2835: I'll finish coding and ask you later when I'm a bit more human
bmk#1476: I'm not sure what level of abstraction you're on
mick#2835: Forgive me, so I'm juggling a couple levels of abstraction, maybe a few here
mick#2835: The most important issue is building a scalable docker file for distributed training
bmk#1476: 1. Is this for gpus or tpus or agnostic?
2. Does this thing manage at the abstraction level of multiple training runs, only a single training run, or both?
3. Does this thing handle all the types of parallelism we care about like data, model, and pipeline? Or is that out of scope
mick#2835: Well multiple answers. One of the involved projects leads to a sortof framework for directing machines via SSH and remotely induced file transfer so it seems to pertain to what you were talking about with andy |
mick#2835: But another issue is a consensus protocol, damn. someone needs a door physically opened please wait
mick#2835: To directly answer: There are multiple things.
mick#2835: Between the multiple things, "it" handles each of the types of parallelism.
mick#2835: I think we should share code for the SSH+fast file transfer thing
mick#2835: But the code I linked to you is on a different level of abstraction. It's meant for GPU but the way you describe pod selection sounds compatible.
mick#2835: So! I'm hoping you can provide insight on any modifications that might be necessary to make to the circuit agreement protocol support TPUs, or to provide a nice simple reason we can give (to future developers) for why it has to be different strategies
bmk#1476: I have no idea how tpus work under the hood
mick#2835: Yeah but you described the algorithm to obtain them from Google lol
bmk#1476: Are you talking about obtaining them or actually running stuff on them?
mick#2835: Yes, since as you stated, it has to be part of the training program.
mick#2835: As in, both
bmk#1476: O.o
bmk#1476: Right now, obtaining them and running stuff on them are distinct steps
mick#2835: Right, the gist I linked has both phases
mick#2835: Think of it as precision pseudocode lol
mick#2835: Basically it just rolls the dice and takes the min batch size across all nodes in the circuit
mick#2835: And a higher level abstraction deals with rejecting circuits that are too wasteful
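(So roughly this, with the waste threshold made up for illustration:)

```python
def circuit_batch(node_batches, min_utilization=0.75):
    """Agree on a per-node batch size by taking the min across the
    circuit; reject the circuit if too much capacity would sit idle."""
    agreed = min(node_batches)
    utilization = agreed * len(node_batches) / sum(node_batches)
    if utilization < min_utilization:
        return None   # too wasteful -- roll the dice on another circuit
    return agreed

assert circuit_batch([8, 8, 8]) == 8
assert circuit_batch([8, 8, 1]) is None   # one slow node drags everyone down
```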
spirit-from-germany#1488: Can anyone tell me why all the "This X Does Not Exist" projects I hear about use StyleGAN 2 and not VQ-VAE, NVAE, or something like that? The faces from VAEs seem to have fewer artifacts than StyleGAN's...
bmk#1476: Be the change you wish to see in the world
bmk#1476: Help us with our DALL-E impl
spirit-from-germany#1488: I would love to... But the problem is that I have 2 kids ( 4 and 8 ) who interrupt me approximately 3-5 times per minute whenever I try to write some code 😄 ... Anything that exceeds playing around with ready-made Colab notebooks is currently a little bit difficult, as long as kindergarten and school are in lockdown 😄
jrowe#5371: to work with keras or tensorflow on windows you need python 3.8
jrowe#5371: just in case anyone is meandering down that route, like me
chilli#5665: Lol
jrowe#5371: i made it as far as having jupyter notebook up and running
jrowe#5371: and then i have to rip it all out and start over ><
jrowe#5371: anyone know where to look to have keras use intel-tensorflow instead of vanilla tensorflow?
chilli#5665: Why?
chilli#5665: Also, why are you using keras at all?
chilli#5665: Except within TensorFlow
jrowe#5371: trying to get any sort of ML toolkit with visualizations running
jrowe#5371: anyway, I've got scikitlearn running
jrowe#5371: cutting my teeth on that, then backing into tensorflow and gpt-neo
jrowe#5371: err, easing into gpt-neo
axiom#3599: vtuber veibae is playing ai dungeon 2 on twitch, i thought the weebs among us would find it amusing
axiom#3599: i got on my horse in the cold night air and the rain to bring you this message
bmk#1476: link?
axiom#3599: https://www.twitch.tv/veibae
bmk#1476: vtubing is an infohazard
gwern#1782: but they are leading us into the glorious transhuman future where we are uploaded as cute anime girls, in sexual selection run amok |
bmk#1476: Yes, i too look forward to a world where the only remaining gender is "cute anime girl"
axiom#3599: idk gender is an important mode of self-expression
axiom#3599: We’ll probably have cute anime girl (male) and cute anime girl (female)
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/802380588044255232/select-all-images-with-girls-verify-report-a-problem-the-30204269.png
bmk#1476: (I'm not a true weeb because the only character i can actually recognize in this meme is ferris)
axiom#3599: astolfo is bottom right
StellaAthena#3530: Oh I just finished Stein's Gate
axiom#3599: naoto from persona 4 is middle right, and best waifu
axiom#3599: i’m guessing bottom left is from “wandering son”
bmk#1476: Assuming the prototypical girl in the very top right is asuna?
bmk#1476: No, that doesn't seem right
axiom#3599: oh yah
bmk#1476: Or is it?
axiom#3599: def asuna
bmk#1476: Ah ok
bmk#1476: Ok so that makes 2/10
bmk#1476: I have failed the weeb test
axiom#3599: umm under asuna, looks like the art style from revolutionary girl utena, but i havent watched it
gwern#1782: no, that's ouran
bmk#1476: *gwern has entered the chat* |