StellaAthena#3530: No
bmk#1476: the word scrum just triggers bad memories
StellaAthena#3530: Scrum is for scum
StellaAthena#3530: Remember that
StellaAthena#3530: 😛
AI_WAIFU#2844: lmao
bmk#1476: what systems are y'all's favorites
StellaAthena#3530: I have a list of free PM software on my laptop to go through
StellaAthena#3530: I also need to recruit deputy PMs if we are going to actually scale
AI_WAIFU#2844: The only ones I'm familiar with are "work on what needs to be done" and "shitty agile".
bmk#1476: i'd happily volunteer to be a PM for the more softwarey stuff
StellaAthena#3530: @Liminal_Warmth is interested too, though busy right now
bmk#1476: maybe this is the engineer in me speaking but i'm not exactly a fan of "external" PMs, if you know what i mean
Liminal_Warmth#8151: shitty agile works fine at small scale
Liminal_Warmth#8151: Trello is the easiest free tool I like for online collab
StellaAthena#3530: Heya
Liminal_Warmth#8151: I'm still battling a billion projects
bmk#1476: like, ideally all the PMs would be people who have already been involved on the floor, if you know what i mean
StellaAthena#3530: Nobody has suggested bringing in ~~AI illiterates~~ external people to do PM work
bmk#1476: i meant like people who have already worked with our particular stuff
bmk#1476: for instance, for any mesh tf related project, i'd hope the PM has experienced how utterly cursed mtf is
StellaAthena#3530: I already hate Trello because it thinks I use safari https://cdn.discordapp.com/attachments/729741769738158194/791881176192122890/image0.png
StellaAthena#3530: Also its mobile interface is garbage
StellaAthena#3530: If I can’t use it on my phone I won’t use it, tbh
StellaAthena#3530: And my bar is pretty low. I use Overleaf on my phone
Technobird22#2055: lol I thought there would have been a dedicated Trello app
asparagui#6391: trello works till you have more than a few people
bmk#1476: we have like 20 people, and we probably want about 5 projects in parallel at any one time, with many members working on more than one project
AI_WAIFU#2844: Yeah, I think most solutions charge money at that scale.
bmk#1476: hm
bmk#1476: *then we build our own*
asparagui#6391: you can self-host
AI_WAIFU#2844: Is Jira open source?
asparagui#6391: nope
AI_WAIFU#2844: Yeah
AI_WAIFU#2844: So we need to self-host an open source project management system.
bmk#1476: hmm
asparagui#6391: https://www.phacility.com/phabricator/
kindiana#1016: what about github?
bmk#1476: ^ i was gonna ask that too
kindiana#1016: seemed to work ok for TP
bmk#1476: we didn't really use it lol
StellaAthena#3530: Project management for the Pile was largely force of personality
bmk#1476: also i did a very bad job of delegating, lol
StellaAthena#3530: Both on the PM side and the working side
bmk#1476: i should have kept my hands off analysis after delegating to stella but then i ended up being too involved in analysis anyways
StellaAthena#3530: I would have gone with “Stella had to pry tasking away from me to give to other people” but sure. Your way works too 😛
bmk#1476: in my defence, most of the things i insist on doing myself are things that are difficult to parallelize
StellaAthena#3530: Anyways, we can have a ~~”bash leadership party”~~ constructive feedback meeting when we are done
StellaAthena#3530: Almost there 🙂
bmk#1476: i think a large reason why pile ended up.. lopsided was that we didn't really have a good delegation structure
bmk#1476: ideally we would have broken the paper up into chunks, assigned someone to each chunk, then broken those down recursively, assigned those out, and figured out which things blocked on which things very early on and took that into consideration
StellaAthena#3530: I’m not doing a post mortem of a project that isn’t finished yet at midnight.
bmk#1476: haha
bmk#1476: *it's not dead yet*™
StellaAthena#3530: Okay Jim
StellaAthena#3530: Oh that reminds me https://www.reddit.com/r/WitchesVsPatriarchy/comments/khj5qh/i_already_have_tickets/?utm_source=share&utm_medium=ios_app&utm_name=iossmf
AI_WAIFU#2844: it went private
StellaAthena#3530: What did
StellaAthena#3530: My Reddit link?
AI_WAIFU#2844: stella are you an approved submitter on r/WitchesVsPatriarchy
StellaAthena#3530: I don’t know, so probably not
AI_WAIFU#2844: Yeah, the link is private for me, I can't see it.
dopa#3178: anyone know why, when I run this https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/tensorboard_profiling_keras.ipynb#scrollTo=dFWOMyaHkUX5
the profile tab does not show up for me?
AI_WAIFU#2844: If you go to the right I think there's a dropdown and in the dropdown there may or may not be a profiling option
AI_WAIFU#2844: In tensorboard
dopa#3178: btw, this also works: notebook.display(port=6006, height=1000). it will take me days to understand how to profile things in colab 😦
3dprint_the_world#6486: do you guys really need a PM though? 😛 companies just hire PMs to make it look like they have a handle on things and to make sure people don't slack off 100% of the time, just 80% of the time
bmk#1476: Can't slack off if eleuther *is* the thing that people slack off to do
AI_WAIFU#2844: ^
StellaAthena#3530: **discord off
Technobird22#2055: haha yeah
cfoster0#4356: watch me
bmk#1476: *recursive slacking off!* https://cdn.discordapp.com/attachments/729741769738158194/791900809821552661/unknown.png
Technobird22#2055: *hmmm*
Technobird22#2055: slacking off doing eleuther to do eleuther to do eleuther...
Mischa#0599: pm software changed my life for managing my own academic projects and the endurance racing team I own. I can't see why something basic that let contributors here see visually what was in progress and who was doing what would hurt.
Mischa#0599: That's some of the worst words I've made in a long time, but you get the idea
Mischa#0599: and Merry Christmas! 🎄
Technobird22#2055: Merry Christmas!
3dprint_the_world#6486: sure, self-managing is great, I'm more talking about having a dedicated PM role
bmk#1476: we're not going to have *dedicated* PMs
bmk#1476: this is also why i'm adamant that all the PMs need to get their hands dirty
Mischa#0599: I took it to mean the software for people to collab with and not so much the singular role
bmk#1476: given the scale (or rather, lack thereof) of our projects, i'd almost be inclined to suggest that all PMs must make at least one major technical contribution to the project
Mischa#0599: I am the type of person who would love to have someone giving me orders as long as they are receptive to transparent feedback and input. Like I brought someone else on to my team because they're better at that than I am, but I am stronger in other important areas of the team
dopa#3178: @bmk you able to run colab with tpu profiler without bucket ?
bmk#1476: why the žižek react lol
bmk#1476: i don't think you need a bucket but idk tbh
triggerhappygandi#0001: You can. But it gives you just one tpu
triggerhappygandi#0001: Not worth it with 8gb memory tbh
dopa#3178: I just want to make sure everything runs and I understand it before connecting to a bucket, since it costs $$$ 🙂
dopa#3178: what do you specify in --logdir= ?
or there is some other work around ?
triggerhappygandi#0001: Create a folder named logs and pass that?
dopa#3178: error: Need to specify a logdir that resides in GCS!
dopa#3178: may be I missing what GCS means or how to specify proper path in colab ?
triggerhappygandi#0001: It means you need a storage bucket on Google cloud. I ran their reformer code on colab with a local txt file and it ran just fine.
dopa#3178: it seems I can use the TPU on colab for free but not the profiler; to profile I need to specify a dir, and when I try to use a local logs dir it does not work (error: Need to specify a logdir that resides in GCS!)
dopa#3178: I thought there was a hack somewhere to not use a bucket, or to somehow specify a local path within the python script to log TPU
dopa#3178: where did you specify the local log?
triggerhappygandi#0001: I didn't 😅
triggerhappygandi#0001: I just ran the reformer model on a different text than the colab tutorial posted in the trax repo.
triggerhappygandi#0001: Though gcp gives you 50gb storage at $1.19 @dopa
dopa#3178: yeah, it seems there no other way hehe, rip my 300 credits 🙂
dopa#3178: thank you!
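For reference, the "logdir that resides in GCS" requirement above comes from the TPU profiler; a minimal sketch of what a valid setup looks like (the bucket name is hypothetical, and `make_tensorboard_callback` is an illustrative helper, not part of any repo mentioned here):

```python
# TPU profiling logs must live in a GCS bucket, not on local disk.
LOG_DIR = "gs://my-bucket/tb-logs"  # hypothetical bucket

def is_gcs_logdir(logdir: str) -> bool:
    # Local paths trigger: "Need to specify a logdir that resides in GCS!"
    return logdir.startswith("gs://")

def make_tensorboard_callback(logdir: str):
    # Deferred import so the sketch is readable without TensorFlow installed.
    import tensorflow as tf
    # profile_batch selects which training batches to trace (TF 2.x Keras API).
    return tf.keras.callbacks.TensorBoard(log_dir=logdir, profile_batch="10,20")
```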
triggerhappygandi#0001: Technically those $300 are for getting familiar with GCP only. It's not like you can rent a cluster with the free credits. They only let you use the CPU machines with that.
dopa#3178: I did not know they are not allowing to use TPU with free credits 😦
triggerhappygandi#0001: Lol no. They used to but then they realised that they could instead earn more if they didn't.
triggerhappygandi#0001: Or maybe it exists to prevent abuse. I can make 10 accounts and steal away $3000 worth of compute.
StellaAthena#3530: I mean, we get hundreds of thousands of dollars worth of compute per month. They’re not exactly stingy
bmk#1476: We use up $300 of compute every hour or so lol
triggerhappygandi#0001: Praise our overlords at Google. No one else bothered to create colab.
bmk#1476: I don't remember the exact number and i don't feel like looking it up
triggerhappygandi#0001: I'm getting up to speed on TPU code. Soon hopefully I can join in on the fun too
StellaAthena#3530: That sounds low. That projects to only 223k/mo
triggerhappygandi#0001: > only
StellaAthena#3530: I thought your estimate was more like 300-400
bmk#1476: I think i later realized that estimate was wrong
bmk#1476: Honestly idek
StellaAthena#3530: I mean, they don’t have a public pricing model for the machines we use
StellaAthena#3530: So....
triggerhappygandi#0001: You are burning away a house worth of compute? :guilty:
bmk#1476: I remember making an error in one of my estimates because google actually gives cost in multiples of 8 tpu cores
bmk#1476: Multiple houses
StellaAthena#3530: Lol houses where I live cost over 500k USD
StellaAthena#3530: My parents live in a small house that’s forty years old and it’s worth almost 800k
triggerhappygandi#0001: :zucc:
triggerhappygandi#0001: Still. $300k/month. Is this all from Tensorflow grant?
StellaAthena#3530: It’s from TFRC
bmk#1476: The average here is like 400k CAD
triggerhappygandi#0001: Yeah that's what I meant.
StellaAthena#3530: Yeah
bmk#1476: So like 300k USD or whatever i don't like conversion rates
StellaAthena#3530: We have some other resources too, that aren’t figured into that calc
triggerhappygandi#0001: I didn't know they were pouring so much into it.
kindiana#1016: the marginal cost for google to run these TPUs are not that high
bmk#1476: I'm going to do the math rn to make sure this number is correct which it almost certainly isn't
kindiana#1016: they would be sitting idle otherwise
StellaAthena#3530: Right. The cost to them is basically the delta on cooling between “idle” and “in use”
kindiana#1016: I'm sure its not _that_ much power either because our MXU utilization is low :berk:
triggerhappygandi#0001: Then why does a TPU v2-512 cost like $3million/year commercially?
triggerhappygandi#0001: They must be earning a lot from it.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/791927435842551839/IMG_20201225_001645.jpg
triggerhappygandi#0001: If any company actually rents one.
bmk#1476: A v3-8 (NOT a single core) costs $2.64/hour
kindiana#1016: "idle compute whenever there is capacity" is much cheaper than "24/7 access to a dedicated TPU"
bmk#1476: We use about 512 cores consistently
triggerhappygandi#0001: I can see why you went with _not_ PyTorch now:smile:
bmk#1476: This number is already the preemptive price
bmk#1476: The on demand price is $8/hr
bmk#1476: 8.80 actually
kindiana#1016: 3M/yr is the on demand price I thought
StellaAthena#3530: Phew, if I’m getting 170/hour
bmk#1476: Ah, we only use like $120k a month peak usage
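The numbers above pencil out; a quick back-of-envelope check using the preemptible v3-8 price quoted earlier in the chat:

```python
# Rough TPU cost check (prices as quoted in the chat, not official figures)
PREEMPTIBLE_V3_8_PER_HR = 2.64   # USD/hr for a whole v3-8 slice (8 cores)
CORES = 512                      # consistent usage mentioned above
HOURS_PER_MONTH = 24 * 30

slices = CORES // 8              # 64 v3-8 slices
monthly_usd = slices * PREEMPTIBLE_V3_8_PER_HR * HOURS_PER_MONTH
# ~$121,651/month, in line with the ~$120k/month peak figure
```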
triggerhappygandi#0001: I will have to practice coding on TPU before I can contribute to gpt-neo
triggerhappygandi#0001: $8.8/hr is still not bad
bmk#1476: Just in time for us to switch to GPUs lol
bmk#1476: We're fracturing into two codebases
triggerhappygandi#0001: AWS rips you like $12/hr for 8 V100s
bmk#1476: A tpu and a gpu codebase
triggerhappygandi#0001: Why though
bmk#1476: Because we now have lots of GPUs
triggerhappygandi#0001: :guilty:
bmk#1476: The catch is that we won't have them forever and also there are some .. minor issues with getting them to work
StellaAthena#3530: I’m secretly fabulously wealthy and decided to buy us a server farm
triggerhappygandi#0001: Will you have the TPUs forever though?
triggerhappygandi#0001: Adopt me lol
bmk#1476: Using all that military cash, ofc
kindiana#1016: I think the TPUs are going to go away as soon as jax/pytorch on tpu alpha gets released lol
bmk#1476: oh no
triggerhappygandi#0001: Pain
StellaAthena#3530: JK. Someone who doesn’t like MSFT very much is excited to see them spend however many billions on an exclusive software that then gets open sourced
bmk#1476: Pytorch on TPUs still sucks horribly though
triggerhappygandi#0001: Ah Microsoft.
kindiana#1016: when TPUs become easier to use I think TFRC capacity is going to reallly go down
StellaAthena#3530: Why would jax make the TPUs go away?
StellaAthena#3530: Eh. They’re still a pain in the ass
bmk#1476: bc then using tpus will become easy
triggerhappygandi#0001: Because more people will use TPUs on demand
triggerhappygandi#0001: Yeah
bmk#1476: and then everyone will want to use em
bmk#1476: and that means fewer are idle for us scavengers
triggerhappygandi#0001: I'm sure Google is still pushing out an ungodly amount of TPU v4s soon
triggerhappygandi#0001: It won't be a problem in the near future
bmk#1476: v4s are kinda disappointing ngl
bmk#1476: they dont even have more vram
triggerhappygandi#0001: That indeed is disappointing.
triggerhappygandi#0001: But they outperformed an A100 cluster.
triggerhappygandi#0001: At least on image classification.
bmk#1476: you know what else outperforms an A100 cluster
bmk#1476: two A100 clusters
triggerhappygandi#0001: :berk:
triggerhappygandi#0001: If only we could scavenge that.
triggerhappygandi#0001: Doesn't PyTorch already support TPUs?
kindiana#1016: lol
kindiana#1016: yesn't
triggerhappygandi#0001: I figured.
bmk#1476: well yes but actually haha good fucking luck
triggerhappygandi#0001: They have a blogpost or something
StellaAthena#3530: Famous last words
Daj#7482: Merry Christmas everyone!
Technobird22#2055: Merry Christmas!
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/791941785046876190/68db942.jpg
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/791943804784476189/405.png
Daj#7482: Hope Santa brought you all what you wished for
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/791944457737601044/20101213.gif
triggerhappygandi#0001: He wont give me TPU v3-8192
triggerhappygandi#0001: Can you guys help me understand how this is linear attention?
```
def linear_attention(q, k, v):
batch_dim, seq_dim, head_dim, dim_out = (v.shape[0], v.shape[1], v.shape[2], v.shape[3])
q = mtf.rename_dimension(q, "features_per_head", "features_per_head_in")
k = mtf.rename_dimension(k, "features_per_head", "features_per_head_in")
dim_in = k.shape[-1]
    q = mtf.softmax(q, dim_in)   # normalize q over the feature dim
    k = mtf.softmax(k, seq_dim)  # normalize k over the sequence dim
    # contract k with v FIRST: a (dim_in, dim_out) context per head, never an n x n matrix
    context = mtf.einsum([k, v], output_shape=[batch_dim, head_dim, dim_in, dim_out])
    attn = mtf.einsum([q, context], output_shape=[batch_dim, seq_dim, head_dim, dim_out])
return attn
```
triggerhappygandi#0001: Looks like Q(K.V) to me
Technobird22#2055: 😦 https://cdn.discordapp.com/attachments/729741769738158194/791961334203809792/unknown.png
triggerhappygandi#0001: Lol abused colab too much. Let it cool down for a few hours now
triggerhappygandi#0001: Or use different google account
Technobird22#2055: lol how long will it take to cool down
Technobird22#2055: good idea
triggerhappygandi#0001: Couple of hours but you can always have a different account
triggerhappygandi#0001: The overlords steal our data, but they do be very generous
Technobird22#2055: yeah, I have a few accounts 😄
Technobird22#2055: can I just share the notebook, or should I make a new one?
triggerhappygandi#0001: Just log in with different account
triggerhappygandi#0001: You may have to upload this notebook in the other account's drive. Or just upload it locally
Technobird22#2055: can I just "share" it as a link to the other acc
Technobird22#2055: or will Google be 🤨
triggerhappygandi#0001: Download and upload
triggerhappygandi#0001: Easiest way to go about it imo
Technobird22#2055: oh ok
andyljones#7746: Q(K.V) *is* linear attention isn't it?
triggerhappygandi#0001: Oh I thought it meant linear as in O(N)
triggerhappygandi#0001: 😅
andyljones#7746: uh yup that's what i mean too - here's lucidrain's cute lil diagram https://cdn.discordapp.com/attachments/729741769738158194/791964593743855626/linear-attention.png
andyljones#7746: (dunno if it is actually lucidrains', but it's on their repo)
triggerhappygandi#0001: This is from Linformer paper iirc. Or maybe reformer. But it wasn't clear from the code itself how it is linear.
triggerhappygandi#0001: Now I remember. Thanks
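The same trick as the mtf snippet above, sketched in plain numpy (arrays standing in for mtf tensors): contracting K with V first keeps the cost O(n·d·d_v) instead of the O(n²·d) of standard attention.

```python
import numpy as np

def softmax(x, axis):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def linear_attention(q, k, v):
    # q, k: (n, d); v: (n, d_v)
    q = softmax(q, axis=-1)  # over features, as in the mtf code
    k = softmax(k, axis=0)   # over sequence positions
    context = k.T @ v        # (d, d_v) -- no n x n attention matrix is ever formed
    return q @ context       # (n, d_v)
```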
asparagui#6391: @triggerhappygandi you can use the free credits for gpu/tpu time
triggerhappygandi#0001: I just found out a few hours back lol
StellaAthena#3530: Also, you can use our TPUs for free
triggerhappygandi#0001: I'm looking at `gpt2.py` code in gpt-neo repo and I can't see anything that can be improved upon. How do you optimize it even more?
triggerhappygandi#0001: Mesh Tensorflow has a kinda steep learning curve too
Louis#0144: Merry Christmas u losers lol
triggerhappygandi#0001: I win big, no loser. I win so big everyone else's wins are small by comparison. So I win.
Deleted User#0000: merry xmas
Deleted User#0000: 🎄
Immunosuppressant#4238: Hello
Immunosuppressant#4238: I want to test the GPT3 reimplementation
bmk#1476: You're a few months early for that
bmk#1476: Would you like to help write code for our various projects instead
otf#1444: Are u guys making your own version
StellaAthena#3530: (Unless you mean test the training code)
otf#1444: Or just building upon gpt3
cfoster0#4356: Yes
cfoster0#4356: Also some of us have ambitions of going even bigger. But we'll see 🤷🏿♂️
cfoster0#4356: Also happy holidays y'all 🎉
Sahl#0630: What skill set is needed for the code? I don’t have experience in ML but I have python experience
chirp#4545: @Sahl bmk / stella are most up to speed, but i can give you a quick overview!
chirp#4545: basically, we have a dataset, but we need to train models and evaluate them
chirp#4545: for training, we have a codebase but it’s a bit hard to use and doesn’t work on GPUs. we’re currently rewriting in a different framework. this work is still in early stages and probably can’t use contributions at the moment
chirp#4545: for evaluation, we have some evaluation code but it can’t run at the needed scale
chirp#4545: i’m prototyping a new framework that will hopefully actually work
chirp#4545: i think the evaluation can use a lot of help from ppl like you (and me!) who know python but not ML
chirp#4545: there may also be stuff to do for the next version of the Pile (our big dataset), @bmk would know
Sahl#0630: I can do refactors if necessary, I can write maintainable code with type hints and the like
StellaAthena#3530: Hey! Welcome 🙂 There are various things that need help currently. Also, we are actually about to wrap up a major effort after which I’m going to refocus on ~~revamping our~~ creating an on-boarding system.
What can you tell me about your background / experience?
Louis#0144: He’s Canadian, what else do u need to know
Sahl#0630: This
Sahl#0630: I have python and linux experience
Sahl#0630: I’m at UW for CS and Laurier for business
StellaAthena#3530: Experience *doing what*
Sahl#0630: So I can write you guys a business model canvas 🙂
Sahl#0630: I did embedded python and some django
bmk#1476: we have almost no cash flow lol
bmk#1476: our entire balance sheet is a few thousand dollars to pay for cloud fees, and all of us work completely for free
Sahl#0630: hmm
Sahl#0630: how do similar projects get funding
Louis#0144: Donations
Louis#0144: Grants
bmk#1476: we have almost no funding and currently have no need for additional funding
Louis#0144: Etc
Sahl#0630: o that’s good nvm then
Sahl#0630: Anyways for the python I basically did http requests and writing to files, nothing complicated
Sahl#0630: But I think it was maintainable 🙂
StellaAthena#3530: @Sahl Have you ever done any of the following:
1. Read a paper describing an algorithm and then implemented it
2. Used numpy, pandas, PyTorch, keras, or similar (if you don’t know what this is, the answer is no)
3. Done statistical analysis of data, either by hand or via a computer
4. Done any data pipelining
StellaAthena#3530: It’s not a problem if you say “no” to all of these, so long as you don’t mind learning 🙂
Sahl#0630: 1. No
2. I’ve used numpy and pandas for personal stuff, otherwise no
3. I did it in a uni course but not in practice
4. No
Sahl#0630: But I’m a good learner :)
Sahl#0630: I’d probably be better placed writing APIs and glue code rather than on stuff that relies on statistics experience although I think I can do ok there
Sahl#0630: Although I’m not sure if I’ll be able to work on this soon and I want to see what tasks are free before I commit
Sahl#0630: I think my next coop has something in the contract where any work I do outside of work hours is still theirs but I can check
StellaAthena#3530: If you live in the US the answer is very likely:
1. Yes it says that
2. It’s lying
Sahl#0630: I’m in Ontario with employer there too
Sahl#0630: We probably have rights here too though but idk the law
StellaAthena#3530: Dunno about Canada, but in the US illegal clauses in contracts don’t invalidate the contract. They just invalidate the portion that clause covers. So people write ridiculously restrictive things to take advantage of the fact people don’t like challenging their employer
Sahl#0630: I think that’s here too
Sahl#0630: But idk if that clause would be illegal
StellaAthena#3530: TBH, the answer is largely “go learn some stuff and come back in a month.” Right now we don’t have a significant list of tasking that doesn’t require ML experience at all.
StellaAthena#3530: There will be more novice-friendly stuff soon-ish
Sahl#0630: a ok np
Sahl#0630: ofc if nothing comes up I’d love to test out your model 😉
bmk#1476: that may take several months
Sahl#0630: and by test out I mean mess around for fun
StellaAthena#3530: A GPT-3 quality model will take a while more.
Sahl#0630: Is v1 planned to be somewhere in between GPT-2 and 3?
StellaAthena#3530: We are currently building a model that is going to be as powerful, if not more, than GPT3
StellaAthena#3530: The Deep Learning Book is a free textbook that I’ve heard very good things about using it to self-study intro DL. You can find it here: https://www.deeplearningbook.org/. Working through this would be a good way to get started learning
Louis#0144: @bmk yo I can literally make a storygen demo for the model
bmk#1476: the what
bmk#1476: which model
Louis#0144: In a few months
Louis#0144: Like when it’s done and u wanna show it off
Louis#0144: I can make a story generation demo
bmk#1476: oh
StellaAthena#3530: Ah
StellaAthena#3530: Cool
triggerhappygandi#0001: Is every matrix multiplication in mtf done through einsum?
Immunosuppressant#4238: Bro I can literally bring this under the eyes of Jefferson bezos himself
bmk#1476: What for?
bmk#1476: Yes, even if you use matmul it actually uses einsum under the hood
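The matmul-as-einsum equivalence is easy to check in plain numpy (standing in here for mtf, whose einsum uses named dimensions rather than subscript letters):

```python
import numpy as np

a = np.random.rand(2, 3, 4)
b = np.random.rand(2, 4, 5)

# Batched matrix multiply expressed as an einsum contraction
out = np.einsum("bij,bjk->bik", a, b)
assert np.allclose(out, a @ b)  # identical to the native matmul
```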
Immunosuppressant#4238: For funding
StellaAthena#3530: Why would he do that
triggerhappygandi#0001: @Immunosuppressant you know him somehow?
Immunosuppressant#4238: I’ve done business with him indirectly
StellaAthena#3530: Congrats?
bmk#1476: unless you have a serious proposal, please don't waste our time
triggerhappygandi#0001: I thought this was all for shits and giggles
StellaAthena#3530: We get a lot of people saying stuff like that. Probably once a week, if not more. If you’re really actually serious and have something concrete, go ahead and give it a try and let us know what happens. But we are a bit tired of “leads” or “pitches” or “ideas” tbh.
dopa#3178: idea(s) + understanding != optimal course of actions ?
Sahl#0630: energy > idea(s) + understanding
Sahl#0630: that's why money important in a way
dopa#3178: to me money is important in the context of stability; without a stable environment it is very hard to learn
dopa#3178: but at same time non threatening environment make one weak
Sahl#0630: I mean money in the context of driving work
Sahl#0630: Like in a business
dopa#3178: I am not sure I agree with this; personally I am doing my current work not for money but for glory and discovery of truth
dopa#3178: but I am irrational in this context to a degree
Sahl#0630: The more money you have the more compute you can buy, the more things you have access to, and the more people you can pay to do work
dopa#3178: but it does not mean such work will be performant
Sahl#0630: sure but money is the force behind that
Sahl#0630: the direction is up to who has the money
dopa#3178: discipline and substantiated objective effort is force behind money and a bit of luck
6lackamethyst#4586: Capital > Money
Sahl#0630: true
spirit-from-germany#1488: https://youtu.be/_rfeCSQxDBg
dopa#3178: what about brain, I need one 🙂
spirit-from-germany#1488: lol
spirit-from-germany#1488: .... at some distant point in the future ... that could become feasible .... but I'm unsure whether we humans would still be around then .... 😉
3dprint_the_world#6486: I've banned Seeker from my yt feed
dopa#3178: why ?
3dprint_the_world#6486: their content quality is generally low
3dprint_the_world#6486: and they use clickbait tactics
3dprint_the_world#6486: they're essentially the same as buzzfeed or gizmodo
dopa#3178: makes sense
chirp#4545: So I talked to my roommate last night and he pointed out a great use case for an open source language model
chirp#4545: https://www.notion.so/ericyu3/User-researchers-spend-10-25-of-their-time-just-summarizing-interviews-c8c9f0629a274c7e97491224226e23fb
dopa#3178: zotero > notion
gwern#1782: they should try out T5
gwern#1782: I think it's already trained for summarization, and you can run it locally so no corporate approval stuff
Noa Nabeshima#0290: I think they're not really comparable, they're doing different things
Noa Nabeshima#0290: Zotero is for saving and organizing papers/academic sources and Notion is for.. notetaking/wiki or something like that? Don't have a lot of Notion experience
dopa#3178: you can have almost everything in zotero
dopa#3178: 🙂
dopa#3178: but yeah, probably notion is a bit different
3dprint_the_world#6486: looking at the code for gpt-neo, what's the best place to start?
3dprint_the_world#6486: gpt2.py?
StellaAthena#3530: @3dprint_the_world what do you want to do?
3dprint_the_world#6486: just look at it
3dprint_the_world#6486: I'm curious
3dprint_the_world#6486: learn what's been done, I suppose.
StellaAthena#3530: Yeah that’s a decent place to start then
triggerhappygandi#0001: That's what I'm doing too. Will this code be used for larger models as well?
StellaAthena#3530: Yes
StellaAthena#3530: We are also building a GPU version using DeepSpeed (GPT-Neox) but there isn't much code there yet so GPT-Neo is the place to start
chirp#4545: https://www.reddit.com/r/GPT3/comments/kkop1p/i_think_gpt3s_private_beta_distribution_strategy/gh3wcmt/?utm_source=reddit&utm_medium=web2x&context=3
chirp#4545: Someone on Reddit says that GPT-3 will be public in February
bmk#1476: where does this info come from?
StellaAthena#3530: Somehow my faith in government bureaucracy has fallen even further: https://twitter.com/BlancheMinerva/status/1343322911936045057?s=20
AI_WAIFU#2844: wow
bmk#1476: >modern email software packages
bmk#1476: Honestly, i have a hard time accepting that this isn't a meme
StellaAthena#3530: It's like they copy and pasted something written in 1999
3dprint_the_world#6486: it's not totally unlikely that the people writing this actually use netscape
3dprint_the_world#6486: at my old company, in 2018, we still used Windows CE and IE 6
3dprint_the_world#6486: it was medicine/medical technology
StellaAthena#3530: Netscape doesn't exist and hasn't existed for a decade. Netscape 4.x hasn't been "modern" for nearly 25 years. There are college students who have never seen Netscape Navigator.
Sahl#0630: I have never seen it
3dprint_the_world#6486: I've never seen it in use either.
3dprint_the_world#6486: back in the day I just used mosaic
triggerhappygandi#0001: Government is inept with Technology cliche
Daj#7482: @StellaAthena https://www.reuters.com/article/us-alphabet-google-research-focus-idUSKBN28X1CB
Daj#7482: I think both that a) Timnit is mostly wrong and probably very unpleasant to deal with in certain personal contexts and b) Google and all other billion dollar companies are absolutely full of shit and we shouldn't take anything they say at face value ever
Daj#7482: Including, unfortunately not just MS but also (to a lesser extent, but still non-trivially) OA
thenightocean#6100: I mean, I am old and I've never seen it either 😄 .
StellaAthena#3530: @Daj Ahhh
StellaAthena#3530: Cool AMA is just starting:
> My research examines how extremist groups leverage technology to create propaganda, recruit members to ideological causes, inspire acts of violence and impact democratic institutions. I am particularly interested on the nexus of technology and extremist ideologies, and I use a combination of data science and digital ethnography to research extremist groups in particular QAnon.
https://www.reddit.com/r/onguardforthee/comments/klvcu3/hello_my_name_is_marcandr%C3%A9_argentino_im_a_phd/
triggerhappygandi#0001: Why "unfortunately"? MS is more corporate than Google ever has been.
StellaAthena#3530: I think the “unfortunately” is in reference to OAI
triggerhappygandi#0001: Oh okay.
triggerhappygandi#0001: People have been painting OpenAI as secretive even before gpt-3
triggerhappygandi#0001: https://www.google.com/amp/s/www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/amp/
Mischa#0599: It took 7 months of waiting to get the beta. It would be hilarious if GPT-3 goes public so soon
triggerhappygandi#0001: I haven't gotten the access even now. Filled the form 3 times.
bmk#1476: but how will i feel superior to people if i'm no longer among the inner sanctum of people allowed to access the Too Dangerous to be Released AI???
triggerhappygandi#0001: By creating our own Too™ Dangerous™ To™ Be™ Released™ AI™
AI_WAIFU#2844: fuck I keep OOMing
AI_WAIFU#2844: why do tpu's have so little VRAM?
bmk#1476: 16GB per core isn't that little
bmk#1476: you sure nothing is leaking memory/allocating where it shouldnt be?
AI_WAIFU#2844: No, I was just copying your configs.
bmk#1476: o.OO
bmk#1476: which one?
bmk#1476: everything in configs/ should just work out of the box
AI_WAIFU#2844: Ok I modified them a bit.
bmk#1476: poast config
AI_WAIFU#2844: But the layout should be the same, I'm just messing with the width and depth
bmk#1476: poast
AI_WAIFU#2844: ```
{
  "n_head": 64,
  "n_vocab": 50257,
  "embed_dropout": 0,
  "lr": 0.0001,
  "lr_decay": "cosine",
  "warmup_steps": 3000,
  "beta1": 0.9,
  "beta2": 0.95,
  "epsilon": 1e-8,
  "ada_epsilon1": 1e-30,
  "ada_epsilon2": 1e-3,
  "opt_name": "adam",
  "weight_decay": 0.10,
  "train_batch_size": 128,
  "attn_dropout": 0,
  "train_steps": 143075,
  "eval_steps": 0,
  "predict_steps": 1,
  "res_dropout": 0,
  "eval_batch_size": 128,
  "predict_batch_size": 1,
  "iterations": 10,
  "n_embd": 8192,
  "datasets": [["SmallPileAblation_small_Pile_newinput", null, null, null]],
  "model_path": "gs://neo-models/gpt3_scaling_32_pile",
  "n_ctx": 2048,
  "n_layer": 40,
  "scale_by_depth": true,
  "scale_by_in": false,
  "attention_types": [[["global", "local"], 20]],
  "mesh_shape": "x:1,y:32",
  "layout": "batch:x,embd:y,memory_length:y",
  "activation_function": "gelu",
  "recompute_grad": true,
  "gradient_clipping": 1.0,
  "tokens_per_mb_per_replica": 2048,
  "precision": "bfloat16"
}
```
AI_WAIFU#2844: Ok it works if I cut the # layers in half
AI_WAIFU#2844: But that's just 16B on a v3-32
bmk#1476: what does the profiler say about your memory usage?
AI_WAIFU#2844: Haven't checked yet
bmk#1476: thatll prolly be helpful
AI_WAIFU#2844: http://eleutherai.bmk.sh:8003
AI_WAIFU#2844: ok looks like I forgot to install the profiler, BRB
kindiana#1016: 16B on v3-32 sounds pretty reasonable
AI_WAIFU#2844: Shouldn't you theoretically be able to do a bit better than that though? Or are the activations memory hogs?
kindiana#1016: that's 8GB per core for just parameters and optimizer state
AI_WAIFU#2844: right, so what's the other 8 doing?
bmk#1476: gradients, activations i presume
bmk#1476: again, look at profiler
bmk#1476: itll tell u
kindiana#1016: with your current config theres at least 2.7GB of activations per core
kindiana#1016: so you might be able to push layers up a bit more but not too much
AI_WAIFU#2844: As a rule of thumb, how many bytes per param? Including optimiser state.
kindiana#1016: ~16 bytes per param
AI_WAIFU#2844: is everything fp32? |
kindiana#1016: yes
AI_WAIFU#2844: and then 1 for params + 3 for optimiser state?
kindiana#1016: yeah so params, gradient and 2 optimizer states
AI_WAIFU#2844: ah
AI_WAIFU#2844: ok that makes sense
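kindiana's ~16 bytes/param rule of thumb explains the OOM directly. A quick sanity check against the config pasted above (a sketch — the parameter-count formula is approximate and ignores biases/layernorm weights, which are negligible at this scale):

```python
# Back-of-envelope memory check for the config pasted above.
n_layer, n_embd, n_vocab = 40, 8192, 50257

# ~12 * n_layer * n_embd^2 for the attention + MLP blocks, plus the embedding matrix
params = 12 * n_layer * n_embd**2 + n_vocab * n_embd

# fp32 everywhere: weights + gradient + 2 Adam moments = 4 tensors * 4 bytes each
bytes_per_param = 16
total_gb = params * bytes_per_param / 2**30

hbm_gb = 32 * 16  # a v3-32 slice: 32 cores * 16 GiB HBM each
print(f"{params/1e9:.1f}B params -> {total_gb:.0f} GiB of {hbm_gb} GiB HBM")
```

Parameters and optimizer state alone fill roughly 486 of the 512 GiB, leaving too little headroom for activations and attention buffers — consistent with the model only fitting once the layer count was halved.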
kindiana#1016: gradient can potentially be bf16 but the rest can't
AI_WAIFU#2844: So when running on TPU, the params get converted down to bf16?
kindiana#1016: not sure how its setup on gpt neo tbh
kindiana#1016: usually for bf16 you convert the weights to bf16 for the forward and backward pass, but you keep the fp32 weights around to update
AI_WAIFU#2844: Ok, that makes plenty of sense.
kindiana#1016: (also you have 2GB of temporary buffers needed for self attention activations with n_ctx 2048)
AI_WAIFU#2844: well fuck me
Ryn#4094: I'm still mad at myself for not applying earlier. For some reason I assumed I could always pick it up in the future, but then it exploded, and I see some people with access hardly using it. Meanwhile it could really crack open some interesting questions in my research.
Sphinx#2092: Like what?
bmk#1476: i am hardly using my access; if you let me know what you plan on using it for i might be able to let you use it
chirp#4545: @Ryn someone on Reddit said that GPT-3 will be public in February: https://discord.com/channels/729741769192767510/729741769738158194/792873560845844540
AI_WAIFU#2844: Ok I got 24.6B running on a v3-32
kindiana#1016: lol I think thats about the best you can get
AI_WAIFU#2844: Yeah, now I'm gonna try to use more TPUs
AI_WAIFU#2844: But first I'm gonna get familiar with the TF2 profiler |
kindiana#1016: are you using tf2?
AI_WAIFU#2844: yes
AI_WAIFU#2844: I got it running yesterday
kindiana#1016: :thonk:
kindiana#1016: hows the perf/compile times of tf1 vs tf2 for neo?
AI_WAIFU#2844: I didn't use tf1 enough to get a good feel for it. But if you want to try, `conda activate tf2neo` should do it.
AI_WAIFU#2844: The code worked out of the box
kindiana#1016: neat, I'm curious but not _that_ curious xP
AI_WAIFU#2844: Ok, tensorboard is giving me 2 numbers:
> FLOPS Utilization
> (higher is better, why two numbers? )
> Utilization of TPU Matrix Units: 17.7%
> Compared to Program's Optimal FLOPS: 0.0%
bmk#1476: No idea what that last one is
bmk#1476: Never seen it before
AI_WAIFU#2844: it looks like it's new in tf2
bmk#1476: Utilization of matrix units is the number that should be higher
bmk#1476: Like, over 50%
AI_WAIFU#2844: http://eleutherai.bmk.sh:8003/#profile
kindiana#1016: you need to profiler for longer, step 0 is not representative |
cfoster0#4356: Not to be *that guy* but the account that mentioned this is (1) pretty new and (2) active in r/conspiracy
chirp#4545: ah thanks, didn't check 😅
AI_WAIFU#2844: Still about the same http://eleutherai.bmk.sh:8003/#profile
AI_WAIFU#2844: actually wait
kindiana#1016: what command are you running to profile?
AI_WAIFU#2844: I just do it through tensorboard
kindiana#1016: I would profile for like 60 or 90 seconds
kindiana#1016: oh wait
AI_WAIFU#2844: Well that was 100
kindiana#1016: the step counters are not getting set properly
kindiana#1016: https://www.tensorflow.org/guide/profiler#profiling_custom_training_loops
kindiana#1016: you might need to do something like this
Ryn#4094: @Sphinx There's a theory in linguistics (called surprisal theory) that relates human reading times for words to language model (log) probabilities for those words. A recent paper came out using a trigram model (lol) to try and debunk the theory completely. I've found that gpt2 actually predicts reading times in this paper's data as you'd expect based on surprisal theory, which really challenges the paper.
I'd like to go further by showing that gpt3 does a better job at matching human reading times than gpt2 (as you'd expect based on some previous research showing better models better match humans).
It's possible (given gpt2's predictive power being fairly close) that gpt3 would even predict words as well as or better than cloze values (where probabilities are generated by having humans guess the next word in a sentence). If it could be close or match cloze values, that'd be a big deal.
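The surprisal numbers Ryn describes are just negative log probabilities from a language model. A minimal illustration — the context, words, and probabilities here are made up, standing in for a real model's softmax output, not actual GPT-2 numbers:

```python
import math

# Toy next-word distribution standing in for a language model's softmax
# output after some context (the words and probabilities are invented).
next_word_probs = {"mat": 0.6, "floor": 0.25, "moon": 0.15}

def surprisal_bits(p):
    """Surprisal in bits: -log2(p). Under surprisal theory, higher surprisal
    predicts longer human reading times for that word."""
    return -math.log2(p)

for word, p in next_word_probs.items():
    print(f"{word}: {surprisal_bits(p):.2f} bits")
```

Cloze probabilities slot into the same formula: replace the model's p with the fraction of humans who guessed that word, and compare how well each set of surprisals predicts reading times.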
kindiana#1016: hrm @AI_WAIFU it appears that neo uses tpuestimator, so it _should_ work lol, maybe its some interaction between TF2 and MTF
Sahl#0630: Huh I wonder if all language models converge to humanlike reading behaviour...
Sahl#0630: Or if there are different ways to interpret text and arrive at a humanlike interpretation
Ryn#4094: This is something I've been wanting to look into, but haven't had the time for. RNNs show some recency biases that I suspect transformers may not |
Sahl#0630: I guess human biases would be an indication that the model uses similar processes and heuristics
Ryn#4094: Some of these biases are human, and some don't match humans at all though. For instance, there are some ambiguous sentences where humans generally take into account the first phrase they saw in the sentence, but RNNs will take into account the second, more recent phrase.
Which makes sense when you consider the whole hidden state thing and their difficulty with long-distance dependencies.
Sahl#0630: that makes sense, maybe then they’d be similar to people with working memory issues
Ryn#4094: What's interesting here is that humans with lower working memory have sometimes been shown to prefer the further-away, first phrase!
Ryn#4094: And people think it's because of prosody
Sahl#0630: HUH
Ryn#4094: So the people who you'd think couldn't remember the earlier as well because of poor working memory end up chunking the sentence in weird ways
Sahl#0630: That’s fascinating
Ryn#4094: Human brains are just so weird!
Sahl#0630: Yes!!!
Sahl#0630: I want to understand them
Ryn#4094: Yeah! As different as human brains and ANNs are, I think there are a ton of ways that the understanding of one can benefit the other
gwern#1782: (didn't someone show that GPT-2 predicts garden path sentences like humans do?)
Ryn#4094: I definitely saw a paper showing that RNNs don't, depending on the type of garden path. I'll have to see if I can find one on gpt2 and garden paths
StellaAthena#3530: @Ryn I am a little late to the party but think these ideas are really interesting. I've had similar thoughts, and have been thinking about running some experiments after the papers I am currently working on get out. One thing I’ve been thinking about specifically is that the information content of language is roughly constant over time, causing speech speed and information per byte to be inversely proportional.
Ryn#4094: Yes, I think production of speech and surprisal is hugely important to look into
StellaAthena#3530: If you do the math, this says that the scaling laws (in particular, L(D) from Kaplan’s paper) should be a function of the mean information per byte of a language’s text.
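For reference, the data-scaling fit from Kaplan et al. (2020) that this would modulate has the form (constants as reported for English web text; the suggestion is that the fitted constants, particularly $D_c$, should shift with a language's mean information per byte):

```latex
L(D) = \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad \alpha_D \approx 0.095,\; D_c \approx 5.4 \times 10^{13}\ \text{tokens}
```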
StellaAthena#3530: (There are some caveats about syllables and scripts, but this is reasonable enough for comparing English and Spanish) |
Ryn#4094: @StellaAthena Another thing to consider is that these language models are almost totally trained on production data, while humans are mostly being compared on comprehension
StellaAthena#3530: Can you elaborate on what you mean by that?
Ryn#4094: Language models trained on wikipedia are doing a task where they're guessing the next word; humans are being compared to these models, but mostly only on reading a sentence and then answering a comprehension question
Sahl#0630: A little unrelated, but do all NNs take the same amount of time per character (or token)?
StellaAthena#3530: No.
Ryn#4094: So I think the field is ripe for looking into speech and these models, like you're saying!
Sahl#0630: What’s an example of one that doesn’t?
Sahl#0630: I’m wondering if there’s a way a NN could short circuit the bulk of itself if it’s confident early
Ryn#4094: @gwern That may've been a blog post I made that you're thinking of hahaha This was before I had gotten serious about psycholinguistics 😂
StellaAthena#3530: I’m a little confused by this.
AI_WAIFU#2844: There are papers that do exactly this
StellaAthena#3530: The main factors in NN run-time are:
1. Hardware
2. How lazy your SWEs are
3. The exact graphical structure of your neural network
None of these things seem analogous to speech.
AI_WAIFU#2844: I think the general consensus is that it's not worth the effort to implement unless you're building a realtime system
Sahl#0630: I mean for the same NN, between different tokens
Sahl#0630: I see
StellaAthena#3530: When properly controlling for covariates I see no reason to think this would happen. |
Sahl#0630: Because it should be the same number of multiplications and function applications right?
StellaAthena#3530: Yes
Sahl#0630: Do different layers of a NN typically accomplish different “things”?
Sahl#0630: Because otherwise you could build short circuiting into it relatively easily
Ryn#4094: sorry, what?
StellaAthena#3530: Yes, and there’s a wealth of research on this. Hundreds of papers on CNNs and GANs, probably tens on transformers. I would recommend hitting Google scholar if this interests you.
Sahl#0630: Alright thanks
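The "short circuiting" Sahl describes is the early-exit idea (BranchyNet-style): attach a small classifier head after intermediate layers and stop as soon as it is confident. A toy sketch with untrained, seeded-random layers — purely illustrative, not a real model:

```python
import math
import random

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def layer(x, seed):
    # Stand-in for a trained layer: a fixed (seeded) random linear map + tanh.
    rng = random.Random(seed)
    return [math.tanh(sum(rng.uniform(-1.0, 1.0) * xi for xi in x)) for _ in x]

def early_exit_forward(x, n_layers=8, threshold=0.9):
    """Run layers until an auxiliary head is confident, then short-circuit.
    Returns (class probabilities, number of layers actually executed)."""
    for i in range(n_layers):
        x = layer(x, seed=i)
        probs = softmax(x)  # auxiliary head: here just softmax of the activations
        if max(probs) >= threshold:
            return probs, i + 1  # confident early -> skip the remaining layers
    return softmax(x), n_layers
```

In a real system the auxiliary heads are trained jointly with the backbone and the threshold is tuned per deployment; as noted above, the bookkeeping usually only pays off in latency-sensitive settings.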
StellaAthena#3530: @Ryn I dunno what your background is or availability looks like, but if you’re interested in doing some research on talking a linguistic approach to studying language models there are several people here who would be interested in spinning up a project with you.
StellaAthena#3530: I don’t have time to be a lead researcher, but I would happily contribute.
Ryn#4094: @StellaAthena I would be glad to collaborate! The only real obstacle is that I'm a PhD student, so my time is quite limited and is at the whim of my advisors 😅
StellaAthena#3530: @Ryn what’s your PhD in?
StellaAthena#3530: And where?
Ryn#4094: Cognition and neural science at the University of Utah
Ryn#4094: Though my research background from my undergrad is in materials informatics
StellaAthena#3530: The good news is that EleutherAI has a fuckton of compute. We can’t quite get GPT-3 scale models (yet! Will do by the end of the year!), but we can easy run whatever experiments you like. If you were to write up an experimental plan, I bet we can find someone to implement and run it.
AI_WAIFU#2844: Stella the end of the year is in 3 days.
StellaAthena#3530: The end of **next** year.
Ryn#4094: Ooo, that would be very cool. I have some ideas that could potentially leverage the compute. For instance, it'd be interesting to train a transformer that would be comparable to some of the RNNs used in previous research to look at inter-architectural differences in parsing and surprisal. The idea being that they'd have a similar number of parameters (i.e. small as hell) and training data would be controlled for.
There is really a wealth of papers from when attention hadn't taken over yet that could be replicated with some mid-sized transformers. |
Ryn#4094: Some of this is colab-gpu levels of compute, though
Ryn#4094: So well below your capacity
StellaAthena#3530: What would you want to do with 1-10B transformers?
Ryn#4094: With anything above gpt2 scale (which iirc is 1.5 billion?), I'd want to see if their scale improves their fit to human data (which they almost definitely would). Further, I'd want to see if their scale could ever outperform cloze values.
Ryn#4094: Or at least match them
Ryn#4094: for predicting human reading times
StellaAthena#3530: If you can write up an experimental plan and commit to helping with the analysis of the results I can make that happen.
Ryn#4094: I would love to get my hands on anything bigger than gpt2 and collaborating would be awesome -- though there are a couple things to note:
- I'd have to discuss everything with my linguistics-focused advisor, and they're MIA for the holidays.
- The analysis is practically done, as the original paper that used a trigram is open-access, and I've already looked at gpt-2 surprisal, so it'd be as simple as plugging items into the model and getting a list of target-word probabilities.
StellaAthena#3530: Yeah, definitely chat with your advisor! Happy to help out however we can. We are a new lab, and would be thrilled to collaborate with more established researchers (and cool grad students, don’t worry 😉)
Ryn#4094: 😄 Sounds good!
Ryn#4094: Oh, one other thing: have you all trained any generative image transformers?
StellaAthena#3530: I think Lucidrains may have. I’ve been interested in doing so for another project but haven’t done so yet.
StellaAthena#3530: Have you?
AI_WAIFU#2844: Ok we're running 100B without any major hiccups (it's just slow as fuck), I think if we get lucky and I can spin up a TPU-256 we can theoretically start training at GPT-3 scale by the end of *this year*:
http://eleutherai.bmk.sh:8003/#profile
AI_WAIFU#2844: but it will take forever
bmk#1476: (for reference, we had a 100B working a long time ago but for some reason the efficiency was 2% back then whereas it's 12% now somehow. also 200B is still the real test, we haven't ever gotten it working yet)
Ryn#4094: I haven't, but I'm interested in whether powerful image models might connect to human behavior in ways similar to language models. |
StellaAthena#3530: Right, but we are going to be training GPT-3 scale on GPUs and it shouldn’t* be too hard to transfer a trained LM to TPU.
* famous last words
AI_WAIFU#2844: Hahahahahahahahahahahahha
bmk#1476: no see once we figure out how to train gpt3 on tpus there will be no reason to do gpt3 on gpus instead so we can go directly to ***1T or bust*** on gpus
AI_WAIFU#2844: How do you feel about a 1T ensemble?
bmk#1476: if it performs strictly better than gpt3 and we can't figure out a 1T dense model, then sure i guess
StellaAthena#3530: If you have ideas about making it work we would love to try it.
bmk#1476: I still really wanna do a legit 1T, no tricks
bmk#1476: I don't think it's impossible
AI_WAIFU#2844: I want to try the logit caching thing I did a while ago, but instead of using 117M models, we use 200B models.
bmk#1476: Maybe try it with 1B models first
AI_WAIFU#2844: Ok fine
AI_WAIFU#2844: How easy is it to save the output of a TPU computation?
bmk#1476: ¯\_(ツ)_/¯
triggerhappygandi#0001: How do you check this _while_ the program is running? I haven't used TPUs on GCP but when I do on colab, it would obviously run after the training step has finished, at which point the TPU utilization is 0.
bmk#1476: that's a colab problem lol
triggerhappygandi#0001: Oh okay
triggerhappygandi#0001: But even with the new terminal feature in colab, when I run the following code in a .py file on terminal
``` |
import os
from tensorflow.python.profiler import profiler_client
tpu_profile_service_address = os.environ['COLAB_TPU_ADDR'].replace('8470', '8466')
print(profiler_client.monitor(tpu_profile_service_address, 100, 2))
```
triggerhappygandi#0001: it says cant import profiler from tf
triggerhappygandi#0001: maybe still a colab problem
AI_WAIFU#2844: We do it from tensorboard, maybe you can do the same
triggerhappygandi#0001: Will do
chilli#5665: In my mind I've already mapped invalid-user to lucidrains
StellaAthena#3530: That’s how you know you’re an old hat here
chilli#5665: Haha I haven't been around that long
chilli#5665: I think lucidrains just makes up about half of all people joining this server
dopa#3178: I think you need to run `!pip3 install --upgrade "cloud-tpu-profiler>=2.3.0"` ?
gwern#1782: https://openai.com/blog/organizational-update/ "Today we’re announcing that Dario Amodei, VP of Research, is leaving OpenAI after nearly five years with the company. Dario has made tremendous contributions to our research in that time, collaborating with the team to build GPT-2 and GPT-3, and working with Ilya Sutskever as co-leader in setting the direction for our research.
Dario has always shared our goal of responsible AI. He and a handful of OpenAI colleagues are planning a new project, which they tell us will probably focus less on product development and more on research. We support their move and we’re grateful for the time we’ve spent working together." !
gwern#1782: speculation: Amodei is frustrated that OA is just another SaaS and is launching a scaling hypothesis startup
thenightocean#6100: so what could be next for him? 1) FAANG company, 2) his own startup 3) Mil-Industrial company? |
bmk#1476: hopefully not 3 lol
chilli#5665: Yeah that's what it sounds like
chilli#5665: "focuses less on product development and more on research"
gwern#1782: I guess we'll see. I still think FAANG are deadly conservative and unimaginative and would be a bad home for amodei. the question is, where do they get the funding?
chilli#5665: That seems like a pretty obvious dig at recent OAI GPT3 endeavors
gwern#1782: if they have pretty typical seed funding, they'll still struggle to match gpt-3, much less at least 10x it
chilli#5665: Considering that they have no other products to speak of
gwern#1782: I wonder if they've found some billionaires sufficiently alarmed by GPT-3
gwern#1782: I was beating the drums pretty heavily on that one, and I know my essay was widely read
cfoster0#4356: Wasn't Amodei more focused on AI Safety?
cfoster0#4356: Wouldn't be surprised if his new project is more like that and less like GPT
bmk#1476: what if we hypothetically, uh, reached out to amodei to see if he would be interested in hanging out around here in his spare time, now that he probably has a lot more of it than before
chilli#5665: Lol
3dprint_the_world#6486: tbh that may not be a totally insane idea.
gwern#1782: incidentally, another reason it's not FAANG is OA uses the word 'his *co-founders*'
gwern#1782: if he was just being hired by DM or GB or FAIR, that would make no sense. even if he was heading up some new group or sub-org, you don't usually refer to them as 'founders' or have 'co-founders'. that term only makes sense if he were creating either a startup or a new non-profit
3dprint_the_world#6486: if it was approached appropriately.
bmk#1476: ok guys let's start drafting an email to him
3dprint_the_world#6486: like not to frame it as "hey give us all your openai secrets" but "yeah this was started because we are frustrated with OAI too, seems like we have a lot in common, happy to promote whatever project you're excited about sharing"
3dprint_the_world#6486: or something like that, I dunno |
chilli#5665: Was this started because we're frustrated with OAI?
3dprint_the_world#6486: that was my impression? I could be wrong.
bmk#1476: it pretty much was
bmk#1476: we've pivoted away from it since but
bmk#1476: can't forget our roots
bmk#1476: (there is a pretty wide range of opinions on OAI here)
3dprint_the_world#6486: of course I wouldn't use the word frustrated. it's too negative.
3dprint_the_world#6486: but ygwim
chirp#4545: HN: https://news.ycombinator.com/item?id=25573427
3dprint_the_world#6486: @chirp scroll up 😉
chirp#4545: i saw it!
chirp#4545: posted the hn thread in case there’s discussion there
chilli#5665: There's no discussion there yet lol
chilli#5665: I guess I'll post on r/ml
cfoster0#4356: I wouldn't assume he's frustrated with OAI. Maybe we just ask if he'd want to chat about AI Safety with us in #alignment-general and whatbot, esp since we're looking to level up in that area
3dprint_the_world#6486: I see what you mean but what's the incentive for him to just hang out in some random discord server
bmk#1476: what's the incentive for *any* of us
cfoster0#4356: 🤷🏿♂️
triggerhappygandi#0001: Wtf
triggerhappygandi#0001: Amodei leaving too? |
triggerhappygandi#0001: Rewon Child left this year aswell
triggerhappygandi#0001: After the very deep vae paper
triggerhappygandi#0001: Kingma isn't in OpenAI either.
gwern#1782: didn't kingma leave a while ago?
triggerhappygandi#0001: He did
triggerhappygandi#0001: I'm just listing big names I know
triggerhappygandi#0001: Tim salimans left aswell
triggerhappygandi#0001: And I don't see Ilya's name at the front anymore
triggerhappygandi#0001: Maybe we will see a slight dip in the quality of their research?
triggerhappygandi#0001: Not to mention them being quasi-owned by Microsoft already has sparked a lot of "NotSoOpenAI" comments from a lot of people.
chilli#5665: We're degenerates
triggerhappygandi#0001: Idk about you guys, but I'm just lurking here getting familiar with mtf until I am able to abuse your compute.
3dprint_the_world#6486: I'm just here cause I thought this was a singles group
bmk#1476: i wonder how many lurkers there are around here who follow our discussions but never speak anyways
bmk#1476: ple react w 👀 if you're reading this but don't post often/at all
triggerhappygandi#0001: Well we have a guy named big dick Rick here.
triggerhappygandi#0001: Never seen any big dick Rick in the chat though
chirp#4545: so for a while i’ve felt like openai is vague about where they want to go, and what are their plans for alignment and commercialization. maybe they weren’t sure either, even among themselves
triggerhappygandi#0001: Definitely
bmk#1476: your daily reminder that sama doesnt believe in orthogonality |
gwern#1782: I guess the problem is they feel they have a great thing in gpt-3, no one else is interested in trying, and they have to turn a profit to justify further investment
triggerhappygandi#0001: They really weren't. Why else would they open a "capped profit" subsidiary to fund them? They obviously knew opening a research lab was a money drain. Why not have a "capped profit" subsidiary from the get go?
gwern#1782: "we need to show results in the short-term to maximize our impact in the long run. it's sad that the researchers are too narrow-minded and anti-commercial to understand this"
gwern#1782: "if we can hit a market and unlock a gusher of money, it'll be more valuable than any distractions in the interim"
gwern#1782: "there's nothing more powerful than a startup which has hit a market need - the entire world becomes your ally, and you are suddenly drinking rocket fuel"
triggerhappygandi#0001: I don't really have any problem with them. If they want to make money, good for them. But what I find stupid is them trying to come up with ideas to make money as they realise they need funding.
bmk#1476: the api clearly wasn't thought through very well as a commercial product
3dprint_the_world#6486: they never thought GPT-3 would be popular.
it became popular for reasons mostly outside of their doing and control
triggerhappygandi#0001: Deepmind doesn't seem very _closed_ per se. Atleast we know all the research they're doing. What they do with that in backend with Google is up to them
gwern#1782: hm? you wouldn't engineer an API if you didn't think it'd be popular
chirp#4545: @3dprint_the_world keep in mind they launched their API in february or something, before they published the GPT-3 paper
gwern#1782: and they knew it was hotshit from their pre-launch partnerships
gwern#1782: like ai dungeon's a/b testing
bmk#1476: I can imagine they had an emergency meeting: "quick, daddy msft is demanding to see returns on their investment!" "err, what can we sell?" "Idk let's wrap up gpt3 as an api"
3dprint_the_world#6486: I'm saying they never knew it would blow up like it did
triggerhappygandi#0001: Again, I don't see a problem with that. But they should've done that since the get go.
gwern#1782: they knew it would be more than popular enough to saturate their azure gpus
bmk#1476: "but how does that work as a commercial product? what's our business model" "idk it's cool so people will throw money at it. that's how it works right?"
zphang#7252: I don't know if that's true, they prepared a 40-page (before appendix) paper for it |
3dprint_the_world#6486: @bmk you'd be surprised.
gwern#1782: did they predict *how* popular it would be? probably not. but they didn't need to
triggerhappygandi#0001: It's a 175 billion parameter model
triggerhappygandi#0001: They need no marketing
gwern#1782: they obviously did, weren't you around in may when the paper dropped
gwern#1782: I was intensely frustrated how *little* marketing they did
bmk#1476: Me too tbh
3dprint_the_world#6486: same.
bmk#1476: It seemed very out of character for openai
chilli#5665: OAI wasn't doing a lot of marketing for GPT3?
zphang#7252: but gwern, you *were* their marketing
chilli#5665: :thonk:
bmk#1476: Especially considering how much they hyped GPT2
chilli#5665: I thought I saw it everywhere
gwern#1782: I continue to be baffled how OA decides what merits, say, a blog post and announcement and what just gets released silently overnight on arxiv
bmk#1476: Chilli everyone here lives in a bubble
chilli#5665: Or was that when their API came out
gwern#1782: hernandez on research progress? BLOG IT. G P T 3 1 7 5 B? fuck it, just dump a pdf on arxiv lol
triggerhappygandi#0001: Didn't they get ridiculed for trying to not open source gpt-2? It just seems to me that they didn't want to attract that kind of press again
bmk#1476: I remember when the paper first came out, the first thing I noticed was 175 billion fucking parameters. The second thing was holy fuck why is nobody else as excited as I am |
gwern#1782: you only saw it everywhere in *June and later*, once people got access to the API and had their minds blown and realized I was right all along
triggerhappygandi#0001: "it is too dangerous to release"
"Lmao we are Google we released a 7x bigger model"
"Kekekeke OpenAI"
3dprint_the_world#6486: tbh when they released the paper I mostly just thought "huh that's cool"; but it only started blowing up in my feed when that kid posted that javascript tweet
3dprint_the_world#6486: and that was when all my colleagues started asking me about it
chilli#5665: They didn't release their API at the same time as the model?
gwern#1782: yes. the JS tweet was definitely the key moment. but there were lots of other tweets or posts that would've kicked it off, and it was thoroughly inevitable once enough people got access
Dromarion#3383: I think a bit of the server pop are lay people like myself who are just following developments and waiting for the good/cheap models to use on consumer level like in AI Dungeon. There's not many places that answer my dumb questions as well as here.
triggerhappygandi#0001: They were being cautious after GPT-2
zphang#7252: fwiw my group works quite a bit on model eval, and the zero-shot results stirred quite a bit of interest but since no one could use it it just sort of became a piece of trivia/thing to mention in related work occasionally
3dprint_the_world#6486: yep same at my work
3dprint_the_world#6486: wdym
triggerhappygandi#0001: Was there a time difference between GPT-3 arxiv release and the API early access?
triggerhappygandi#0001: Also I'm pretty sure I only heard the mention of GPT-3 on June 2. Maybe that's why I don't remember them being coy
cfoster0#4356: Yeah
3dprint_the_world#6486: 11 June was when they publicly released the API.
cfoster0#4356: People were asking about them releasing weights in the Github issues before the api was released
bmk#1476: Yeah the Gpt3 paper was released way before the api
zphang#7252: "publicly" |
thenightocean#6100: I was similarly surprised by the way they try to publish it under the radar but I assume they had some reasons for it. (security, secret business plans)
thenightocean#6100: I constantly feel that there is something I am not aware of, that something is happening behind the scenes that is causing all of this weird behaviour from their side
triggerhappygandi#0001: They did it probably because of GPT-2 @thenightocean
bmk#1476: I published a blog post about Gpt3 the day after it came out lol
triggerhappygandi#0001: They tried to advertise it as dangerous and were ridiculed for it.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/793579489756971048/Screenshot_2020-12-29-13-40-42-740_com.android.chrome.png
thenightocean#6100: but why do they care about opinion of twitter trolls
triggerhappygandi#0001: Incidentally that's the first time I heard of OpenAI lol
triggerhappygandi#0001: @thenightocean they don't. But the people they care about thought it was funny too
cfoster0#4356: It's not the Twitter trolls. It's the nightly news they're worried about, I'm sure
triggerhappygandi#0001: Like, UC Berkeley's course on unsupervised learning had the first lecture where the instructors mentioned gpt-2 and the "dangers" of it. Needless to say the whole class giggled.
gwern#1782: after all, if it's just some graphs in a paper, it's not like it's *real*
thenightocean#6100: I mean they are doing groundbreaking stuff and should have FU money. Thats why I dont understand their weird PR strategy shenanigans.
zphang#7252: It's more that we can't really do anything about it
gwern#1782: if it doesn't change your future plans, I dunno what you're doing
thenightocean#6100: well, we should look at it from our perspective, him leaving might slow them down and we can catch up easier 😛
chilli#5665: If gpt-3 doesn't change your future plans?
bmk#1476: At least for me, GPT3 changed my plans a lot
chilli#5665: Really?
chilli#5665: How so |
chilli#5665: It definitely made me more interested in scaling
chilli#5665: But other than that didn't change my plans much
ethan caballero#6044: He/they will probably get Open_Philanthropy funding. Dario is tight with the heads of Open Philanthropy.
thenightocean#6100: well it reduced my AGI timelines significantly and made me worry less about retirement
chilli#5665: Interesting
thenightocean#6100: and I somehow ended at this weird place too... heh
bmk#1476: I mean, Eleuther wouldn't exist without gpt3
chilli#5665: What are people’s AGI timelines here
chilli#5665: I feel like if I truly believed AGI would happen in the next 20 years
chilli#5665: I’d change my plans
chilli#5665: I guess it also depends on what I think a post AGI world would look like
chirp#4545: i feel like i have no clue where AI is going to be in 5 years, and this year didn’t make things much more clear
chirp#4545: that’s why i pay attention to AI even though I’m not a practitioner
bmk#1476: I don't have any very thoroughly thought out numbers, but i think 20 years is probably 40%ish AGI likelihood
bmk#1476: This number is based on absolutely nothing i just pulled it out of my ass
chilli#5665: Also, will the singularity happen
chilli#5665: I feel like general sentiment seems to have moved away from singularity type scenarios?
chilli#5665: I’m not sure
chilli#5665: Thanks 🙂
3dprint_the_world#6486: you mean an 'intelligence explosion' type singularity? |
chilli#5665: Yeah
cfoster0#4356: I think we're maybe 1.5 ResNet-level breakthroughs away from "AGI" . So I'd put it at 25% chance by 2035
chilli#5665: I feel like that viewpoint was omnipresent a couple years ago
chilli#5665: But not anymore
3dprint_the_world#6486: I think the idea of the singularity as a single point in time was always ill-founded
AI_WAIFU#2844: Could be any day now really.
3dprint_the_world#6486: but as a gradual process, sure.
3dprint_the_world#6486: I mean one could argue we're already in the singularity.
chilli#5665: I’d say singularity in the sense that humans can no longer do any useful tasks that computers aren’t doing better
AI_WAIFU#2844: yeah, we've definitely crossed the event horizon.
bmk#1476: Depending on the definition of singularity, my answer ranges between "we're already in the singularity" and "that's physically impossible"
AI_WAIFU#2844: where's that 1 yud article from years ago
chilli#5665: Arguably there’s a gap between “computers are capable of doing this better” vs “computers already are doing this better “
bmk#1476: ~~do you have the slightest idea how little that narrows it down~~
chilli#5665: Well, I think it’s the gap between theory and practice
3dprint_the_world#6486: also Kurzweil gets a lot of flak but I find it interesting his prediction for the singularity was 2045-2050
3dprint_the_world#6486: which may very well happen
chilli#5665: For example, robotics could be a limitation on AGI's ability to affect the real world
3dprint_the_world#6486: (this was his prediction from 20 years ago)
AI_WAIFU#2844: Got it: https://www.yudkowsky.net/singularity/schools |
thenightocean#6100: But I first started to seriously consider it happening much earlier about a year ago, when Ilya mentioned in one presentation that it might be 5 years away. Which was kinda shocking given his authority
chilli#5665: Ah thanks looks relevant
dopa#3178: in what way do you see robotics being a limitation?
3dprint_the_world#6486: robotics has generally always been behind other fields of AI
chilli#5665: An AI could be smarter than humans, but it might not have the mechanical ability to manipulate real world things
chilli#5665: Sure you could argue that it’d accelerate the development of robotics
3dprint_the_world#6486: in the 90's, the SOTA in robotics was two-wheeled robots that had ultrasound sensors.
zphang#7252: not a problem... if they can manipulate humans
3dprint_the_world#6486: I'm serious, this was SOTA in research labs
3dprint_the_world#6486: basically pre-roomba stuff
dopa#3178: outside of power limitations and communication in degraded environments, I don't see any other limitations
3dprint_the_world#6486: do you know much about robotics
chilli#5665: But it’s not clear to me how quickly a “smarter than human” AI could actually create a good enough robot
gwern#1782: or how much they'd need robots to begin with
dopa#3178: in fact, the RoboCup goal of robots playing soccer against humans by 2050 seems very achievable
gwern#1782: robots are so tremendously anthropomorphic. who needs robotic monkeys when you have gray goo
cfoster0#4356: Seems like some folks think of intelligence as a universal solvent
3dprint_the_world#6486: here's why robotics always lags:
- it costs a lot of money to develop them, so random hackers rarely work in this area.
- obtaining training data is very hard and costly. |
chilli#5665: Sure, but how long until an AGI creates gray goo
dopa#3178: I am not sure in what context you're asking me, but what hardware limitations do you see in robotics, looking at the achievements of Boston Dynamics?
StellaAthena#3530: Also, physics is a bitch.
chilli#5665: I guess, also, if I really believed AGI would come within 20 years I’d think more about alignment
3dprint_the_world#6486: the answer is: "quite a lot", this is a wide-ranging topic and tbh I don't feel like explaining a huge amount of things that other people have already explained
3dprint_the_world#6486: but start with a google search
3dprint_the_world#6486: sorry I don't mean to be flippant
AI_WAIFU#2844: Yeah but as dopa posted in #off-topic
https://youtu.be/fn3KWM1kuAw
bmk#1476: You know what else costs a lot of money
45#2247: for reference: AFAIK, boston dynamics vids are scripted
bmk#1476: GPT3
bmk#1476: :guilty:
AI_WAIFU#2844: The biggest limitation to robotics is being able to control them. Strong AI basically fixes that.
gwern#1782: I don't know how you script that in any real sense
dopa#3178: this is why I was asking what hardware limitations there are outside cost, communication, and power
AI_WAIFU#2844: Mocap
3dprint_the_world#6486: to start with some idea of how far behind robotics is, go to your backyard (if you have one), look at some random insect on a leaf, and observe how it moves.
no robot we have is even close to that.
gwern#1782: (there's no way you can just precompute a list of timestamps and per-joint torques an hour in advance and run it) |
dopa#3178: is there an argument that in 20 years it won't be cheap to build DIY terminators?
3dprint_the_world#6486: I'm not predicting the future I'm just talking about current technology
gwern#1782: drones already are better terminators than terminators
AI_WAIFU#2844: Did you watch that video
dopa#3178: so current technology outside software is here right now
3dprint_the_world#6486: yes I'm familiar with pretty much all of boston robotics' (published) work
AI_WAIFU#2844: but did you watch the video
3dprint_the_world#6486: as I said, yes
3dprint_the_world#6486: I don't see how 'did you watch the video' is a persuasive argument, sorry
3dprint_the_world#6486: none of the stuff they show in the video is beyond stuff they've already demonstrated in the past
dopa#3178: do you know what the DARPA Squad X program is?
3dprint_the_world#6486: did you go into your backyard and watch an insect?
ethan caballero#6044: Do we have any info/guesses on the "handful of OpenAI colleagues" who are "co-founding" Dario's new thing? My guess is it would be Kaplan and most of OpenAI foresight team. I'm pretty sure Dario (an ex-physicist) was responsible for creating and recruiting OpenAI's foresight team which also mostly consists of ex-physicists.
gwern#1782: I'm not sure about kaplan. isn't he actually at johns hopkins?
thenightocean#6100: Now I want them to replicate that dancing scene from Ex Machina
45#2247: well, I mean there's a scenario of what the robot should be doing, and they know it can do the thing, it's not a live demo
AI_WAIFU#2844: I've done that before. Unfortunately I don't have a backyard anymore. But to me it seems like modern robots are amply well coordinated.
dopa#3178: @3dprint_the_world what's this hostility about? you just refuse to substantiate what robotics' hardware limitations are
gwern#1782: (the expense is a big one. not a lot of things you can use BD spot for at $75k+ a pup)
AI_WAIFU#2844: Like yeah, it's not 100million years of evolution good, but I think it's good enough. |
3dprint_the_world#6486: because these are well-understood questions and the huge limitations of current robots are things that even the most enthusiastic robotics researchers can talk to you for hours about
dopa#3178: hardware wise !? (outside cost)
3dprint_the_world#6486: @dopa read https://www.nature.com/articles/ncomms14494
ethan caballero#6044: Yes, but he's referred to Dario as "his good friend" before. Kaplan already has neural scaling law collab going with people at Google X right now.
3dprint_the_world#6486: that paper studies just one minor aspect of the locomotion of just one type of insect
AI_WAIFU#2844: Like from a hardware perspective, what limitations do you see? I see power density and that's about it.
3dprint_the_world#6486: and yet that is so far beyond what we can currently do in robotics that it's almost inconceivable
gwern#1782: he does? I hadn't heard about that. what are they doing?
3dprint_the_world#6486: tl;dr: flies dynamically adjust their gait for optimal locomotion speed using *only* feedback from foot pads.
3dprint_the_world#6486: this is way beyond anything boston dynamics has ever demoed
3dprint_the_world#6486: and this is just *one* aspect of fly locomotion. And walking isn't even their primary mode of transport!
AI_WAIFU#2844: Right, but that's a software limitation. Not a hardware limitation.
3dprint_the_world#6486: who said I'm talking about hardware limitations?
AI_WAIFU#2844: Oh, I thought we we're talking about robotics as a limiting factor to AGI.
dopa#3178: see your own reply lol
3dprint_the_world#6486: again, never said this.
45#2247: robotics as a limiting factor to grey goo?
3dprint_the_world#6486: > robotics has generally always been behind other fields of AI
that's what I said.
AI_WAIFU#2844: Oh wait that was @chilli |
chilli#5665: It was just an example
AI_WAIFU#2844: Ah
AI_WAIFU#2844: Ok then we're all in agreement.
chilli#5665: Like, I'm not certain how far the barrier is
chilli#5665: Like, let's say you have a 1000x better GPT
chilli#5665: That still has no agency of its own, no?
45#2247: you just plug some RL
dopa#3178: in what way are they behind? if not in hardware, then what?
dopa#3178: they are not insects, is that your argument?
AI_WAIFU#2844: Well yeah, you (probably) need more than just a language model to do AGI.
3dprint_the_world#6486: on that note, one thing I've been interested in lately is how much of the robotics stuff you can subsume into an LM
gwern#1782: I've facetiously pointed out that because of imitation learning, if you make a sufficiently good unsupervised model of agents, you can plug it into actuators and get agent behavior without ever formally making an RL agent
gwern#1782: policy can, and is, expressed in language after all
3dprint_the_world#6486: @gwern yes I've been thinking along similar lines.
gwern#1782: 'a sufficiently advanced language model of adolf hitler plugged into a giant gundam *is* Mecha-Hitler'
3dprint_the_world#6486: I'm curious about how much actual real-world physics/control is captured in language.
3dprint_the_world#6486: probably more than 'nothing' but also surely much less than 'all'
ethan caballero#6044: He mentioned it in one of his talks online. I can't find timestamp. He said he has a collab going with Yasaman Bahri & Ethan Dyer (both at google) to develop better theory (than his dimension of the data manifold theory) for why neural scaling laws exist.
dopa#3178: the problem is error handling between signals and sensor events; the issue comes down to having common sense about which actions to take when there are errors, and that requires some abstract representations
chilli#5665: Well someone still needs to allow it to actuate the robot |
3dprint_the_world#6486: AI Box problem.
3dprint_the_world#6486: or, well, AI Box not-a-problem
dopa#3178: for abstract representations, some try to use only neural fields
dopa#3178: I would not be surprised at all if all the tech behind Boston Dynamics is built on neural fields
3dprint_the_world#6486: (it isn't)
cfoster0#4356: Feel like once RL gets its breakthrough we'll be well on track for :firealarm:
gwern#1782: not a problem. 4chan will do it for the lulz
bismarck91#5255: RL is probably the key to AGI.
chilli#5665: Perhaps, but my point is that there are still roadblocks between "Sufficiently smart model" and "this AI has replaced all of our jobs". I'm not saying that the rest aren't comparatively easy compared to getting the "sufficiently smart model" in the first place, but there are some unknowns here
3dprint_the_world#6486: although once you reach a certain level of intelligence and influence, you can probably just get ordinary humans to do your bidding much easier than building robots
gwern#1782: I mostly make the joke to try to make people more imaginative about what unsupervised models can do
3dprint_the_world#6486: not that you couldn't also build robots just as easily
3dprint_the_world#6486: just saying it might not even be necessary
gwern#1782: 'unsupervised models are harmless, not agents, and never can be. they don't *want* anything. all they do is *predict*, *really well*. you just don't get it, weird online crank.' 'ok, so what if you ask them to predict what an agent would do?' '...'
45#2247: With imitation learning you learn a policy from human data, and then at test time when it's all frozen, you still need some kind of RL model to behave in the world, maybe outside of 4chan distribution
gwern#1782: (...something something superintelligent octopus versus stochastic parrot...)
zphang#7252: is that a worldcup reference
cfoster0#4356: It's a dual reference to two Bender papers
andyljones#7746: no, unfortunately it's a research reference
|
efb
Sphinx#2092: It's megashark vs giant octopus /s
gwern#1782: personally, I think I would rather fight 40 stochastic parrots than one superintelligent octopus the size of a horse
zphang#7252: oh cause https://www.cbsnews.com/news/world-cup-final-a-battle-of-octopus-vs-parakeet/
chirp#4545: Jack Clark also left: https://twitter.com/jackclarksf/status/1344041028261580800
chilli#5665: :O
bmk#1476: \:O
zphang#7252: huh
ethan caballero#6044: WOAH!
45#2247: something new = AGI speedrun?
zphang#7252: AGI speedrun commentary, more likely
45#2247: welcome back to my channel, today we're unboxing an AGI
45#2247: oh no
bmk#1476: AGI speedrun any% pb: 13.7 billion years
gwern#1782: hm. clark isn't really a scaling guy, more of a policy guy. maybe it's safety after all 😦
bmk#1476: What if christiano leaves
cfoster0#4356: bet
cognomen#6297: AGI TAS playaround (arbitrary code execution)
3dprint_the_world#6486: "mmmm, I love that new AGI smell"
bmk#1476: i love the smell of *deadly neurotoxin* |
3dprint_the_world#6486: it smells like... victory
45#2247: they did say "if a company is building AGI faster than us we'll help them make it safe"
ethan caballero#6044: Plot twist: Timnit is a cofounder at Dario's startup
bmk#1476: timnit is anti scaling afaict
bmk#1476: or at least not exactly the biggest fan
cfoster0#4356: my money is on a nonprofit research group pursuing some new approach to AI safety
bmk#1476: if that's the case we need to collab with them
bmk#1476: if that's not the case we should probably try to collab with them anyways
45#2247: dario + timnit + EleutherAI™️
spirit-from-germany#1488: https://youtu.be/fn3KWM1kuAw
spirit-from-germany#1488: 😄
bmk#1476: i don't think timnit would agree to helping with a scaling paper
gwern#1782: timnit being anti-scaling is one of the reasons for it. scaling wouldn't make timnit, bender etc so angry if they didn't fear it'd work
gwern#1782: if you really thought scaling wouldn't work, your reaction would just be 'lol 🍿 "
ethan caballero#6044: FYI, I'm joking about Timnit cofounding.
bmk#1476: while i'm undecided on many ethics issues that don't directly relate to alignment and i think bender has a point when it comes to representation (or lack thereof) in nlp of non-english languages, the scaling issue is something where i really strongly disagree and dominates my opinion of them
gwern#1782: (this is, incidentally, one reason I never argued much with bitcoin critics. because *my* reaction to them thinking it was doomed was just "lol 🍿 - being you is its own punishment")
bmk#1476: any ethics solution that doesn't also work with big models is kinda doomed imo
bmk#1476: and so any argument where the crux is "we can't do big models because they're more unethical in x direction, we should instead focus on small models and GOFAI" is extremely unconvincing imo
cfoster0#4356: Let's distinguish "scaling is an effective path to pursue" and "I like that scaling is an effective path to pursue" |
gwern#1782: but they want to have it both ways
gwern#1782: behind every "X won't work" lurks the real "I don't like X if it works"
bmk#1476: i don't *like* that scaling is an effective direction, but the universe has shown quite a track record of disregard for what i like
cfoster0#4356: That's clearly hyperbolic @gwern
gwern#1782: and yet, it's the lawyer-like way in which activists think
cfoster0#4356: Anyways
cfoster0#4356: Agreed
bmk#1476: so i might as well learn to update fast
gwern#1782: they operate under political/legalistic, not rational or academic rules. 'your honor, if my dog bit him he must've deserved it, she was just licking him affectionately, we weren't even on the same street that day, and anyway no one's proved I have a dog'
gwern#1782: '...face recognition can't work in principle, this recognition only works for white people, if it does work you don't have copyright, if copyright is transformative you didn't get informed consent...'
bmk#1476: I mean, i think it would be awesome if theory was the most promising path - I'd much rather spend all day doing math than messing with tensorflow
cfoster0#4356: I think "producing aligned AGI in a way that necessarily avoids wireheading" might not be possible. That doesn't mean *I want it to be impossible*. On the contrary
chirp#4545: but even when that's true, is it actually a problem? after all, people do need lawyers
cfoster0#4356: But yeah the fact scaling works is in some ways unfortunate. In any case, it's what we've got for now
gwern#1782: you need lawyers for negotiating monkey games. the last thing you want in trying to understand the universe and whether we're going to be paperclipped by AGI soon is people with lawyer attitudes barging in. this is too hard and important for them
ethan caballero#6044: https://www.facebook.com/jack.clark.ML/posts/10164730919705710
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/793613625147260978/unknown.png
gwern#1782: does that say anything his tweet doesn't?
ethan caballero#6044: "OpenAI was (and is) an incredible place and I learned a huge amount there, while also getting to work adjacent to big computers and think about the implications of aforementioned big computers, which to me is about the most fun one can have in the world if you're the sort of person that likes thinking about the intersection of computation and civilization. I was tremendously lucky that the org took a chance on me in its early days (thank you Greg Brockman, Ilya Sutskever, Sam Altman) and I hope I can repay it by using the skills I gained there to achieve more good things in the world. I'm very excited about the future."
gwern#1782: hm.... |
chirp#4545: "OpenAI: we're adjacent to big computers"
chirp#4545: big-computer-adjacent?
gwern#1782: well, he personally was adjacent. he didn't actually do any of the GPU programming
gwern#1782: policy guy, as I said
3dprint_the_world#6486: if you want access to the *really* big computers, you gotta convince people that doing AI work is directly related to nuclear weapons safety
bmk#1476: does the govt really have bigger computers than Our Glorious Benevolent Supreme Overlord Google?
3dprint_the_world#6486: believe it or not, my guess would be: yes.
3dprint_the_world#6486: and also the government's supercomputers have way better interconnect topology and speed for compute-intensive workloads
45#2247: oh yes, that kind of overlord that is totally not reading this discord history after having hacked into discord's computers in 2022 is bigger than any govt computer
3dprint_the_world#6486: Oak Ridge's Summit has 27648 V100's
ethan caballero#6044: Why is OpenAI publicly announcing Dario et al's exodus in a blog post? Lots of high profile leadership (Kingma, Karpathy, Goodfellow, Abbeel) has left OpenAI before and OpenAI didn't announce anything about it. It seems like OpenAI is preempting something.
AI_WAIFU#2844: This actually sounds a lot less impressive to me now than it used to. Google was fucking around with 8192 TPU pods IIRC
andyljones#7746: not leaving for a direct competitor, want to make it seem like an amicable parting
AI_WAIFU#2844: Cloud providers might not have the best interconnects, but I do think that they're at the top when it comes to raw compute.
3dprint_the_world#6486: but I'd argue that the interconnects make all the difference
3dprint_the_world#6486: nothing beats having all your hardware in literally the same room
bmk#1476: on the other hand, google has perfected the art of doing things without good interconnects or reliable hardware
triggerhappygandi#0001: Someone should start reworking all philosophy
45#2247: they did say something for elon musk no
triggerhappygandi#0001: No one is actually looking at the world 70 years from noe |
triggerhappygandi#0001: When humans either have to compete in the labor market with computers, or have flat-out lost in that regard.
triggerhappygandi#0001: What will everyone do when job isn't the most defining thing for people?
gwern#1782: it's the partnership thing, I suspect. if they are going for scaling and research, as it sounds like, who better positioned than OA to actually commercialize it?
triggerhappygandi#0001: _That_ is a worthwhile AI ethics problem to solve
gwern#1782: OA is the only entity who now has experience running models the size of gpt-3 or bigger for real world paying customers
Sphinx#2092: That we know of.
AI_WAIFU#2844: And the microsoft engineers
triggerhappygandi#0001: Yeah them too
gwern#1782: what's the biggest model anyone else runs commercially at substantial scale? T5? long way from there to gpt-3
triggerhappygandi#0001: The 2nd biggest language model wasn't even released as a paper
gwern#1782: (excluding MoE or embeddings)
3dprint_the_world#6486: basically they have 6 V100's per node, and 4600 nodes connected in fat-tree topology with 200Gb/s infiniband.
triggerhappygandi#0001: Let alone the code.
3dprint_the_world#6486: that's an insanely tightly connected system
Sphinx#2092: Why exclude MoE?
ethan caballero#6044: BERT is the highest ranked signal for Google Search now.
triggerhappygandi#0001: @3dprint_the_world who? OA?
3dprint_the_world#6486: no, Summit
chirp#4545: Source?
3dprint_the_world#6486: a government computer running nuclear weapon simulations |
triggerhappygandi#0001: Microsoft has Turing-NLG
45#2247: someone from this discord told me Google had replicated GPT-3 in a few months
gwern#1782: yes, but BERT is so small I assume someone's running a bigger
triggerhappygandi#0001: 17.5B
ethan caballero#6044: a googler told me
triggerhappygandi#0001: It is used in Bing
gwern#1782: really? I've never heard of turing-nlg being used in bing
triggerhappygandi#0001: It's being used _somewhere_
Sphinx#2092: Lol
triggerhappygandi#0001: I recall Bing from their blogpost
triggerhappygandi#0001: Might be wrong
bmk#1476: what who
triggerhappygandi#0001: It's not far-fetched though @bmk
triggerhappygandi#0001: Google having their own GPT-3
ethan caballero#6044: Timnit and Jeff Dean
3dprint_the_world#6486: it kinda is far-fetched though, especially on the 'why would they do that' front
bmk#1476: i didnt know jeff dean was in this discord
triggerhappygandi#0001: How so? @3dprint_the_world
AI_WAIFU#2844: LIke we know that they did 600B MoE.
3dprint_the_world#6486: why would they replicate GPT-3 |
triggerhappygandi#0001: They might use it in their autocomplete everywhere
triggerhappygandi#0001: I'm pretty sure they can find a lot many uses for it.
triggerhappygandi#0001: Like the MoE they trained on GShard probably runs all of Google translate now
3dprint_the_world#6486: nah, afaik google still uses LSTMs in their google translate decoder.
for google, there's lots of things way more important than loss/accuracy.
for example, how much a model costs them to run.
triggerhappygandi#0001: I don't believe that
Sphinx#2092: To be fair, the lstm decoder is better
zphang#7252: yea translate has some big boi models
AI_WAIFU#2844: Even if they didn't have an immediate use for it, there's no way they didn't say "lets have this on the shelf in case we need it"
triggerhappygandi#0001: A 600B model isn't something you just train for nothing
gwern#1782: https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/ https://blogs.bing.com/search-quality-insights/september-2020/Introducing-the-next-wave-of-AI-at-Scale-innovations-in-Bing these comments are a little bit sketchy. they use 'project turing' work *in* Bing, and they talk about 'turing-nlg' for autocomplete... but how on earth do you run turing-nlg so fast that you can do it realtime?
triggerhappygandi#0001: What else is it useful for after being trained on WMT?
triggerhappygandi#0001: @gwern idk. Internally maybe?
triggerhappygandi#0001: I have no idea
gwern#1782: seems more likely that they are using a much smaller model *in* the Turing-NLG family for autocomplete, or possibly a very heavily distilled Turing-NLG
triggerhappygandi#0001: But it's hard to believe such huge models aren't being used somewhere
triggerhappygandi#0001: What is the purpose after all? Microsoft didn't even release a paper for it.
triggerhappygandi#0001: I don't think microsoft is like me when I first saw Karpathy's char-rnn
Sphinx#2092: Probably to brag about deepspeed |
Sphinx#2092: And that line of work
Sphinx#2092: Same idea for megatron
gwern#1782: just part of their infrastructure work for deepspeed/azure, in addition to any smaller spinoffs like gpt-2-1.5b for intellicode
45#2247: actually he told me he got the info from Connor and the discord, so if it's not common knowledge among you guys then he must have misinterpreted something
triggerhappygandi#0001: Just to dab on people? I find it hard to believe
gwern#1782: MS is very interested in getting people to use cloud AI, of course. 'commoditize your complement'
3dprint_the_world#6486: @triggerhappygandi ah here's the blog post https://ai.googleblog.com/2020/06/recent-advances-in-google-translate.html
bmk#1476: it's not common knowledge
bmk#1476: we have a *suspicion* of it
triggerhappygandi#0001: Google created a 600B MoE on GShard. I refuse to believe they just wanted to flex GShard and nothing else, with their model
bmk#1476: but nobody knows for sure
3dprint_the_world#6486: > Following our work decoupling different aspects of model performance, we have replaced the original GNMT system, instead training models with a transformer encoder and an RNN decoder
45#2247: what's the credence
bmk#1476: we know that google has code that can train gpt3 and we know the efficiency numbers of that code
triggerhappygandi#0001: Damn. But still, it's hard to believe that a 600B model is just sitting idly somewhere
bmk#1476: but there's a world of difference between being able to train gpt3 and, yknow, *actually training gpt3*
bmk#1476: moe parameters are not real parameters
triggerhappygandi#0001: They did dab on everyone with GShard @bmk
triggerhappygandi#0001: Yeah I know
Sid#2121: Colin confirmed that google had trained 'models of a similar size to GPT3' way back at the beginning of this project |
Sid#2121: but he could have been referring to moe / gshard for all we know
triggerhappygandi#0001: But it means they can easily train huge models.
bmk#1476: trained != *trained to completion*
bmk#1476: we've "trained" a 100B model
triggerhappygandi#0001: And who knows maybe they're working on a T6: Trillion parameters text to text transformer
bmk#1476: this is like that whole thing that msft pulled with deepspeed or whatever
AI_WAIFU#2844: Yeah, and if the server didn't OOM that would be 200B
bmk#1476: we trained a 100B model months ago
Sid#2121: @AI_WAIFU had no idea someone was working on scaling up btw - you got 100B working @ 12%? what config did you use?
triggerhappygandi#0001: So do you believe they just shelved the MoE? Surely it must be doing _something_
triggerhappygandi#0001: It can translate 100 languages among each other.
AI_WAIFU#2844: I just made it *E X T R A T H I C C*
Sid#2121: not really much point in speculating i guess, but the performance gains would have to be *massive* to justify the inference costs for a model of that size
Sid#2121: can you... be more specific lmao
AI_WAIFU#2844: I would post the config, but the server is still kill
Sid#2121: ah yeah
triggerhappygandi#0001: This is just sad then. They burn all that energy just to dab on people?
Sid#2121: well, it was a good paper
Sid#2121: and they could be using it, who knows
AI_WAIFU#2844: Realistically it would be more to develop the in-house expertise |
AI_WAIFU#2844: Getting that big is an art
triggerhappygandi#0001: Isn't the GShard paper just about "how to better use your exaflop supercomputer"?
Sid#2121: https://tenor.com/view/try-not-to-laugh-straight-face-lol-%E5%BC%BA%E5%BF%8D-%E5%BF%8D%E4%BD%8F%E4%B8%8D%E7%AC%91-gif-5564467
gwern#1782: as the authors pointed out on twitter, the whole point of doing a moe is that the inference costs are actually very low
AI_WAIFU#2844: "use your exaflop supercomputer" is a non-trivial task
triggerhappygandi#0001: Yeah definitely
3dprint_the_world#6486: mine bitcoin
Sid#2121: sure, lower than your *regular 600B model*
bmk#1476: this is glorious https://cdn.discordapp.com/attachments/729741769738158194/793622599543422986/unknown.png
gwern#1782: as in, low enough that they think such models could still be economical
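For reference, the point about MoE inference being cheap can be shown in a few lines: all the parameters sit in the expert list, but with top-k routing only k experts are ever evaluated per input, so inference cost scales with k rather than with the total expert count. Everything below is an illustrative toy on a scalar input, not GShard's actual routing:

```python
import math

def moe_forward(x, experts, router, k=2):
    # Toy top-k mixture-of-experts layer on a scalar input.
    # All parameters live in `experts`, but only the k highest-scoring
    # experts run per input, so inference FLOPs scale with k rather
    # than with len(experts).
    scores = [w * x for w in router]                      # one router logit per expert
    top = sorted(range(len(experts)), key=scores.__getitem__)[-k:]
    z = [math.exp(scores[i]) for i in top]                # softmax over selected experts only
    norm = sum(z)
    out = sum((zi / norm) * experts[i](x) for zi, i in zip(z, top))
    return out, len(top)

# 16 "experts" (simple linear maps), but only 2 of them run per call.
experts = [lambda x, a=a: a * x for a in range(16)]
router = [0.1 * a for a in range(16)]
y, n_evaluated = moe_forward(3.0, experts, router, k=2)
print(n_evaluated)  # 2
```

A real MoE routes each token of a batch independently and balances load across experts, but the core cost argument is the same: a "600B-parameter" MoE only touches a small slice of those parameters per token.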
triggerhappygandi#0001: In any case, if I had access to such compute I would make large models just to gawk at them
bmk#1476: well, i mean, if only anyone here had access to such compute
triggerhappygandi#0001: Why not make an image gpt-3
3dprint_the_world#6486: I would train a gigantic SVM and write a paper on it.
3dprint_the_world#6486: Gotta keep your competitors on their toes...
triggerhappygandi#0001: @bmk mtf is a pain to understand. But yeah. I will _someday_ abuse all that compute!
bmk#1476: S4: Super Sized SVM System
45#2247: transformers are pretty good for cv atm
triggerhappygandi#0001: Yeah. They're now even taking away jobs from CNNs.
Sid#2121: i mean, it's really just tf with named dimensions and 100 extra steps |
triggerhappygandi#0001: Network unemployment is at an all time high due to transformer immigration!
bmk#1476: also we're kinda moving to gpus rn
zphang#7252: train n-gram LM on the pile
Sid#2121: we still have a pretty impressive amount of TPU power to use
bmk#1476: yeah but for 1T we need gpus
Sid#2121: we could definitely do an image GPT whilst training our actual GPT
triggerhappygandi#0001: Why
bmk#1476: Tbh I'm not interested in iGPT3
triggerhappygandi#0001: Now that's just bragging
triggerhappygandi#0001: Also, shame. @bmk
triggerhappygandi#0001: How can you not
bmk#1476: Best case scenario imo is we train GPT3 on mtf and 1T on gpus
bmk#1476: *1T or bust*
triggerhappygandi#0001: IGPT3 is a scientific necessity
Sid#2121: dude, one step at a time lmao. we're not going straight to 1T
triggerhappygandi#0001: How large was image-gpt?
Sid#2121: 7-10B iirc?
bmk#1476: No, of course not, but a few steps on gpt3 to convince us it works is good enough imo
bmk#1476: Don't need to train gpt3 to convergence on gpus
bmk#1476: We do that on TPUs instead |
triggerhappygandi#0001: Yeah we definitely need IGPT3
triggerhappygandi#0001: We need to show those peasant GANs who's the boss
bmk#1476: Good luck getting sparse attention working on TPUs lol
Sid#2121: GPT3 training time on the amount of TPUs we have is still infeasible, i don't know what you think's changed
Sid#2121: linear attention bruh
triggerhappygandi#0001: O(1/n) attention
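For anyone curious, the linear-attention trick Sid mentions fits in a few lines: replace softmax with a positive feature map phi and reassociate (Q K^T) V as Q (K^T V), so the n x n attention matrix is never materialized and cost is linear in sequence length. This is a pure-Python, non-causal toy sketch (feature map and shapes are illustrative; this is not the GPT-Neo code):

```python
import math

def linear_attention(Q, K, V):
    # Toy (non-causal) linear attention: the n x n matrix (Q K^T) is never
    # built; instead we accumulate K^T V once, giving O(n * d^2) total cost.
    phi = lambda v: [x + 1 if x > 0 else math.exp(x) for x in v]  # elu(x) + 1, always positive
    d, dv = len(K[0]), len(V[0])
    S = [[0.0] * dv for _ in range(d)]   # running sum of phi(k_j) v_j^T
    z = [0.0] * d                        # running sum of phi(k_j), for normalization
    for k, v in zip(K, V):
        fk = phi(k)
        for a in range(d):
            z[a] += fk[a]
            for b in range(dv):
                S[a][b] += fk[a] * v[b]
    out = []
    for q in Q:
        fq = phi(q)
        denom = sum(fq[a] * z[a] for a in range(d))
        out.append([sum(fq[a] * S[a][b] for a in range(d)) / denom
                    for b in range(dv)])
    return out

n = 4
Q = K = [[0.1 * i, -0.1 * i] for i in range(n)]
V = [[float(i), 1.0] for i in range(n)]
O = linear_attention(Q, K, V)
print(len(O))  # 4
```

Because the per-query weights sum to one, each output row is a convex mix of the value rows, exactly as in softmax attention, just with a different (kernelized) weighting.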
bmk#1476: I mean we just got efficiency on 100B up 5x
bmk#1476: That's a victory to me
Sid#2121: like i've been saying we can for... *checks watch*. ever
Sid#2121: still infeasible tho
triggerhappygandi#0001: How did that happen though?
bmk#1476: I didn't say it wasn't possible tho
Sid#2121: changing around the configs
triggerhappygandi#0001: What specifically though
bmk#1476: There's a difference between "it is possible to optimize it" and "it has been optimized"
Sid#2121: i could quote many times you said the codebase was irreparably inefficient and should be scrapped and rebuilt from the ground up lol
bmk#1476: I've never said it should be scrapped *because of the inefficiency*
Sid#2121: dunno, waiting on the server to be back up so @AI_WAIFU can post the config
triggerhappygandi#0001: From what I saw in the configs folder the big ones didn't seem to have anything that would make them that much more infeasible
triggerhappygandi#0001: Okay |
bmk#1476: It's always been because of how utterly cursed the code is
Sphinx#2092: Eh there should be a multilingual version
Sphinx#2092: Though I guess people are slowly inching in that direction with mT5
triggerhappygandi#0001: How about we also dabble in video generation
bmk#1476: I'm not convinced that languages I do not speak even exist /s
Sid#2121: @Lucas Nestler (ClashLuke) is doing this
triggerhappygandi#0001: Just completely destroy Hollywood
triggerhappygandi#0001: He is?
Sid#2121: yes, with code built on top of GPTNeo i think
bmk#1476: ~~I mean have you ever *seen* a french person irl~~
triggerhappygandi#0001: Video generation with transformers? What a mad lad
Sphinx#2092: More interested in how it handles low resource languages
bmk#1476: French people are only a part of the extended Ratatouille universe
triggerhappygandi#0001: French? You mean little Canadians.
triggerhappygandi#0001: There still is a lot we can do
triggerhappygandi#0001: Achieve 0.1bits/dim on images
triggerhappygandi#0001: Simulate the brain
bmk#1476: In all seriousness the hardest part is collecting the data in sufficiently high quality to make my inner perfectionist happy
AI_WAIFU#2844: I already said it. I made it *E X T R A T H I C C*
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/793625939098468462/thumb_amir-dirtbf-wouldnt-it-be-cool-if-french-people-were-47283283.png |
triggerhappygandi#0001: Kek upload it in the repo. @AI_WAIFU
Sid#2121: https://tenor.com/view/beeg-yoshi-thicc-chungus-glitch-gif-16417702
triggerhappygandi#0001: @bmk IGPT3. But where do we get a billion images
Sphinx#2092: Yeah but how much data do we actually need? Could we expect in a giant multilingual setup, we don't need so much high quality data?
triggerhappygandi#0001: Nay, a trillion images
Sphinx#2092: This seems to be the case for things like unsupervised MT
triggerhappygandi#0001: We need to convince the world that it's ethical and beneficial for humanity that we should be allowed to access data from every single Google drive
Sid#2121: the problem with large image datasets is the child porn. We had a discussion about it in #research a while back and tldr fuck downloading 1 billion images without knowing what's in there because you're probably accidentally downloading some child porn.
triggerhappygandi#0001: Holy shit
triggerhappygandi#0001: Yeah
triggerhappygandi#0001: But I thought they didn't exist on public internet?
Sid#2121: of course they do
triggerhappygandi#0001: :guilty:
Sid#2121: it's not like *every image* on the public internet is vetted
triggerhappygandi#0001: Wtf
gwern#1782: the standards of child porn also changes over time
bmk#1476: ~~What if we just pay a few dozen people a tenth of minimum wage to look through all of the data to make sure it's not cp problem solved~~
gwern#1782: I'm pretty sure if I posted some of my childhood pictures, I'd be banned off fb/reddit/twitter within hours...
Sid#2121: yes, it's also context dependent
triggerhappygandi#0001: Can we collect like 1 million most watched youtube videos and split them by frame and make a dataset out if it? It would be highly correlated but could work if split randomly |
Sid#2121: see all the controversy about Sally Mann's photos
gwern#1782: google's released youtube datasets like that
3dprint_the_world#6486: some famous podcaster (I forget who) basically did a bit on this, where they said their mothers had a huge stash of their cp in albums (or hard drives)
3dprint_the_world#6486: one mother even had a picture of their kid naked right in their lounge
3dprint_the_world#6486: she thought it was hilarious
triggerhappygandi#0001: This just makes me sad
triggerhappygandi#0001: :zucc:
cfoster0#4356: *finger hovering over an imaginary #off-topic button*
bmk#1476: no, off-topic is busy having an on topic discussion rn
cfoster0#4356: Lmao
Sid#2121: CHANGE PLACES
AI_WAIFU#2844: actually #off-topic is talking about alignment
bmk#1476: yes that's on topic for eleuther
triggerhappygandi#0001: Creating image dataset by splitting videos by frame seems _not_ totally broken to me though
AI_WAIFU#2844: I read off topic for some reason
Sid#2121: google already has their ultra secret JFT-300M
Sid#2121: but yeah, that could totally work
Sid#2121: maybe we should do it
Sid#2121: i'd be less worried about accidentally downloading cp on youtube
triggerhappygandi#0001: Too bad we can't access it. I bet it had a lot of images from Google Photos |
Sid#2121: you'd get a lot of repetition tho
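One cheap way to limit that repetition is to keep only sparsely sampled frames. A hypothetical sketch (function name and defaults are mine; actual frame extraction would use a tool like ffmpeg, e.g. `ffmpeg -i in.mp4 -vf fps=0.2 frame_%06d.jpg`):

```python
def sample_frame_indices(n_frames, fps, seconds_between=5.0, max_per_video=50):
    """Pick a sparse, evenly spaced subset of frame indices from one video.

    Keeping one frame every few seconds (instead of every frame) cuts the
    heavy temporal correlation between adjacent frames; the per-video cap
    keeps long videos from dominating the dataset.
    """
    stride = max(1, int(round(fps * seconds_between)))
    idx = list(range(0, n_frames, stride))
    if len(idx) > max_per_video:            # thin further to an even cap
        step = len(idx) / max_per_video
        idx = [idx[int(i * step)] for i in range(max_per_video)]
    return idx

# A 10-minute clip at 30 fps has 18000 frames; this keeps only 50.
print(len(sample_frame_indices(18000, 30.0)))  # 50
```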
bmk#1476: also a certain someone may or may not have a copy of all of instagram
Sid#2121: REDACTED
triggerhappygandi#0001: Lol
bmk#1476: we do not speak about it
triggerhappygandi#0001: :bigzucc:
triggerhappygandi#0001: He needs to know
triggerhappygandi#0001: How is this even possible
triggerhappygandi#0001: Where do you store all of instagram
bmk#1476: REDACTED
45#2247: in a zucksb
Sid#2121: there's actually a vice article about it @triggerhappygandi https://www.vice.com/en/article/8xxzjz/inside-the-insane-plan-to-build-an-unofficial-archive-of-all-of-instagram
triggerhappygandi#0001: Noooooooo I need knowledge
Sid#2121: https://www.reddit.com/r/DataHoarder/comments/5m36zr/distributed_archivingsnapshots_of_instagram/ and a reddit thread about the process
bmk#1476: we need him on our "Pile v2: 100 TB" paper lol
triggerhappygandi#0001: Inb4 this gets out of hand and someone has hoarded all of the internet
bmk#1476: lol imagine not already hoarding the entire internet
triggerhappygandi#0001: "iN cASe It dIeS"
triggerhappygandi#0001: Train one model on text, images and audio
triggerhappygandi#0001: With a v4-65536 |
JJungle#0074: https://slideslive.com/38946802/boston-dynamics
chilli#5665: :thonk:
StellaAthena#3530: > Not all catfishing is bad. I do it on a social network where there is a problem with men harassing young women. And when I do it, I always crop and warp the images so they won't show up in reverse image search.
>
> Also I've tried scraping a small sports site. 6 years ago. And they figured it out after 30+ wgets. I highly doubt IG is going to not notice someone scraping their site.
StellaAthena#3530: Well that’s a take
chilli#5665: :thonk:
chilli#5665: if you're spending your spare time catfishing on a social network for white knight reasons
chilli#5665: ...
bmk#1476: lmao what
chilli#5665: like, if you're a guy catfishing girls on tinder or something
chilli#5665: still weird
chilli#5665: but understandable
chilli#5665: this is just very weird
Marcus H#9589: You could also consider the effects on your psychology that practicing catfishing might create
Marcus H#9589: ¯\_(ツ)_/¯
gwern#1782: wow. all I can say is that if he really has talked himself into thinking he's doing good (rather than cloaking gratifying his sadism and machiavellianism under the guise of feminist virtue signaling), I'm glad he's been diverted into a time-suck like scraping instagram which at least might be useful
bmk#1476: No, this guy is *replying* to the Instagram scrape guy
bmk#1476: The ackshuyally-catfishing-is-good guy is arguing with the Instagram scraper guy because the Instagram scraper guy proposed using the scrape to fight catfishing
StellaAthena#3530: Yeah.... he went #NotAllCatfishers really hard there. |
asparagui#6391: tpu's have 496 gbps interconnects, fwiw
AI_WAIFU#2844: \> server is still kill
\> feels bad man
triggerhappygandi#0001: What is the catfish guy against here? Fighting catfishing shouldn't be a contested argument.
chilli#5665: catfishing is sometimes good
triggerhappygandi#0001: He has a hot take
thenightocean#6100: What is catfishing? 🥸
StellaAthena#3530: It’s when you create an identity, typically a social media profile, for the explicit purpose of trying to deceive people as to who are you. Often this involves using photos of somebody else and inventing stories and anecdotes about your life. It’s most commonly used as a way to scam people out of money or to misrepresent yourself on dating apps.
StellaAthena#3530: A common use is to sexually extort people after getting them to send you compromising photos
triggerhappygandi#0001: :zucc:
triggerhappygandi#0001: Why do people send nudes
triggerhappygandi#0001: I know being hyper vigilant about everything is paranoia, but damn don't set the bar so low with the trust
StellaAthena#3530: @triggerhappygandi By definition, you can only be betrayed by someone you trust.
triggerhappygandi#0001: Then never trust anyone enough to send nudes, I guess. Blackmail has increased alarmingly over the last decade because of that.
dopa#3178: it is possible to betray yourself, for example through procrastination, delusions, and dogmatic beliefs. 🙂
StellaAthena#3530: Do you think that’s actually viable?
StellaAthena#3530: On a more mechanistic level, a lot of people enjoy taking nudes
triggerhappygandi#0001: Idk. I am way too detached from most social media to understand the fun of it. How is this fun to anyone when you know it can be used against you?
Sphinx#2092: You could argue that about a lot of things.
dopa#3178: some people, simply, just don't care if it some how will be used against them |
StellaAthena#3530: This isn’t about social media. We’re talking about one-on-one interactions, sometimes but not always in person.
StellaAthena#3530: The list of “things that can be used against you one day” is *extremely* long.
dopa#3178: also it's just a worldview thing; for some, I can imagine sharing nudes is the same as taking any other picture.
realistically, there aren't many ways it can be used against you, unless your work has a policy against it
dopa#3178: for example, the NYT doxxed an OnlyFans girl who is an EMT; I am not sure if she got fired
dopa#3178: also, if someone dedicated wants to use material to compromise you, a honeypot is usually the likely course of action
dopa#3178: high-level espionage, at least what's publicly available, was mostly done because of outright money, threats to family members (the worst one), or honeypots, if I recall correctly
dopa#3178: 1. see if the subject will do it for money; if not,
2. use a honeypot; if not successful,
3. threaten by exploiting a family weakness to get a small favor; then
4. use the small favor as compromising material to further control and exploit the individual/organization
StellaAthena#3530: @dopa Yup. 90% of the US security clearance process is
1. What would you do for a million dollars?
2. How blackmailable are you?
3. Do you have close friends or family living in countries where they could be kidnapped in an attempt to coerce compliance?
StellaAthena#3530: The number 1 no-no is lying to the investigator. They don’t care how much drugs you did in college or what fetishes you have. They care if exposing those things would cause you extreme distress. And the fact that you’re trying to conceal it from the investigator is highly correlated with that.
StellaAthena#3530: Lots of people get disqualified for lying to cover things up that aren’t disqualifying, which is hilarious and a little sad.
dopa#3178: it is sad that some people just have a need to lie 😦
dopa#3178: the thought that family member can be threatened is the scariest one does not matter which country
triggerhappygandi#0001: @StellaAthena yeah but I'm talking specifically about sending nudes. |
triggerhappygandi#0001: I know people who have been fucked because of it.
chilli#5665: How so
triggerhappygandi#0001: By being blackmailed because they sent nudes to an overall piece of shit person
chilli#5665: Are they guys or girls?
triggerhappygandi#0001: Girls exclusively
dopa#3178: what was purpose of blackmail ?
chilli#5665: Ah I could see it
triggerhappygandi#0001: Hmm. 2 of my friends were being harassed by an ex
triggerhappygandi#0001: Another didn't tell me. That's my sample size.
StellaAthena#3530: Here’s a positive example, since you’re biased towards negative stories: I have sent nudes to probably a dozen-ish people without any issues.
dopa#3178: also blackmail in 80% of cases is fake, it is a bluff
triggerhappygandi#0001: I mean, in this case, negatives far outweigh the positives don't they?
StellaAthena#3530: I’ve also sat for nude charcoal drawings and been recorded having sex
dopa#3178: what is the negative, someone will find a naked picture on Google?
StellaAthena#3530: I’m not saying otherwise. I’m saying that unless you’ve received them you have no way of knowing how many people have positive stories about sending nudes.
StellaAthena#3530: In such situations I think it’s an epistemic good to provide contrary data points
triggerhappygandi#0001: Probably. If it's now a thing, it must be because most people don't face any issues because of them. I'm not saying otherwise either. Just had some bad experiences because of it.
chilli#5665: You could say something similar about a lot of things, like drugs
triggerhappygandi#0001: I would but it's now a sensitive topic to people.
dopa#3178: I still don't understand what the issue 😦 |
dopa#3178: for drugs you can for sure get fired, and for the right reason too; same for drinking
dopa#3178: showing up hungover at my old work would result in a higher chance of getting fired on the spot
StellaAthena#3530: Sending nudes isn’t “now” a thing. It’s something I have personally done for over a decade and has been widespread for much longer
triggerhappygandi#0001: Let's say someone you know finds your nudes. The effects could vary from being mildly shocked to drastic, life-changing measures like having to withdraw from the place where you met the person, such as school. @dopa
Dromarion#3383: They dole out the death penalty in Asia for moving drugs, which a lot of travelers get tricked into doing.
dopa#3178: you can get kick out from university for nude photos ?
StellaAthena#3530: Some places, yes.
triggerhappygandi#0001: Your parents could pull you out. Based on how conservative your household is
triggerhappygandi#0001: And yeah you can be
dopa#3178: that just sucks
triggerhappygandi#0001: I know
dopa#3178: this is my long-time principle: if you don't want information to be released publicly, don't put it in any digital or written form with anyone
Dromarion#3383: I think even using drugs are dealt with draconian measures in Asia, like you can be detained in rehab and put on monitoring. It's because of the Opium wars.
triggerhappygandi#0001: Yeah. It's not worth the fun.
triggerhappygandi#0001: @StellaAthena you seem to have a wild life
dopa#3178: the older I get the more I dislike drugs, from personal experience and observations, alcohol is also drugs as legally prescribed pills and illegal drugs
dopa#3178: issue for me is that alcohol worst since it is social accepted, but you don't get dependent on it as easily before it will start effecting your life
dopa#3178: with other drugs, you get a boost in performance or fun, but realize too late that it affected your life in a substantially negative way
dopa#3178: it's not about physical addiction as much as it is about changes in behavior
triggerhappygandi#0001: Same with smoking. |
dopa#3178: I am smoker, I know it, I understand it, but still love smoking
triggerhappygandi#0001: I suggest you stop then
dopa#3178: in fairness if I am with people who don't smoke I don't smoke also
dopa#3178: this is a very interesting problem to me, because I understand it, but can't connect that understanding to action
triggerhappygandi#0001: Your lungs are subjected to more radiation than an astronaut unsheltered by the atmosphere, or even the basement in Chernobyl, due to the C-14
triggerhappygandi#0001: Have you tried nicotine?
Sahl#0630: If you make a plan but there’s no way to put it into action then it’s not a plan
dopa#3178: but I can't find literature that covers the understanding-to-decision gap
dopa#3178: I honestly just like smoking
Sahl#0630: Why do you like it? Social reasons?
dopa#3178: I quit multiple times
dopa#3178: it's something about taking a break and going outside or to the kitchen and having a smoke
triggerhappygandi#0001: Well if it's accelerating your mortality then it's not worth all the fun in the world.
dopa#3178: I can't put into words
dopa#3178: just habit that I enjoy
Sahl#0630: Sure
dopa#3178: don't get me started on mortality
dopa#3178: 🙂
Sahl#0630: Honestly if living longer isn’t as important to you maybe smoking is optimal
Sahl#0630: But maybe you could replace smoking with a better habit that lets you take a break too |
Sphinx#2092: There are more toxic habits, like playing dota 2.
triggerhappygandi#0001: League of Legends @Sphinx
triggerhappygandi#0001: If you want to die faster.
dopa#3178: on one hand it would be sad if my brain is still functional but I have to die early, but on the other hand, if my brain deteriorates to a degree where I can't do work as I am doing it now, I think I'd accept death
triggerhappygandi#0001: I can't believe they made an AI to play dota 2. Combined with GPT-3, it will be genuinely the worst entity to have ever been birthed by a computer.
dopa#3178: funny thing is I faced a near-death moment, was very close to being killed, and I was not scared or anything like that, I just got so pissed off that evil people would still be alive; if I had died that day, that would have been my last thought
Sahl#0630: I think most old people don’t have a neurodegenerative disease
Sahl#0630: If you die you can no longer take action against evil
dopa#3178: some of my family members are getting very old, and it is scary how the brain deteriorates, and it will happen to all of us
Sahl#0630: Some people die lucid
Sahl#0630: Besides you should be making that choice later
Sahl#0630: Once you know
Sahl#0630: You’re stealing life from your future self
Dromarion#3383: My main vice is gambling and speaking from experience it's not worth it lol.
dopa#3178: it's not certain that smoking will affect me in such a way
Sahl#0630: It is very likely
Sahl#0630: Smoking scores low on your value function
dopa#3178: that is so true
triggerhappygandi#0001: @Dromarion did you lose like a house or something?
dopa#3178: I think I will not quit but dramatically slow down smoking |
triggerhappygandi#0001: You should
Sahl#0630: What has helped me personally is replacing the habit
dopa#3178: the crazy part is that in my 20's I had better stamina than non-smokers
dopa#3178: while I was smoking ~pack a day
dopa#3178: then computers made me weak 🙂
Sahl#0630: So find something else you can do for a break (tea or something)
Dromarion#3383: My highlight was being up 3k shorting Disney in the span of an hour in March. It all goes away in the end. Markets only go up.
dopa#3178: yep, I am growing avocado tree from a seed might help me slow down smoking, I will take a break and talk to a tree instead
Sahl#0630: pog
Dromarion#3383: I don't know how Tesla investors do it. They either don't actually look at the stock or they age twice as fast.
triggerhappygandi#0001: Gambling on stocks
triggerhappygandi#0001: I see you hate money
dopa#3178: a casino is a better place to gamble; at least you get free drinks 🙂
triggerhappygandi#0001: Also, isn't there a cap on how much money you can earn on short selling?
triggerhappygandi#0001: At least with the normal game, there is no cap. If the company goes up 1000x your money scales.
triggerhappygandi#0001: But with short selling the best you can do is get away with all the money you made on shorting, and that's when the company goes to 0.
triggerhappygandi#0001: I don't think they tend to do that.
dopa#3178: by short selling a stock, you can make at most the position price
triggerhappygandi#0001: Yeah
triggerhappygandi#0001: By playing normal you can earn without a cap |
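The asymmetry being described can be made concrete with toy payoff functions (illustrative only; this ignores borrow fees and margin):

```python
def long_pnl(entry, price, shares=1.0):
    """P&L of buying at `entry`: unbounded upside, loss capped at entry."""
    return (price - entry) * shares

def short_pnl(entry, price, shares=1.0):
    """P&L of shorting at `entry`: profit capped at the entry price
    (the price can only fall to 0), while losses grow without bound
    as the price rises."""
    return (entry - price) * shares

# Best case for the short: the stock goes to zero.
print(short_pnl(100, 0))    # 100
# A 10x move rewards the long far beyond what any short could earn.
print(long_pnl(100, 1000))  # 900
```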
dopa#3178: Sharpe ratio
dopa#3178: you just try to maximize profits by doing more trades, at least in theory or belief
dopa#3178: my advice: if you're getting into active trading, know from whom or what you're making money; if you don't know, go to a casino or open a savings account.
Dromarion#3383: I don't know, if you thought I hated money before, my position was actually in put options. In any case it's not like any companies were going up in March.
Sahl#0630: Don’t you maximize profits by investing in ETFs since they have no unsystematic risk
dopa#3178: they have average returns, which are generally lower than some stocks
dopa#3178: but yeah, just buying and holding SPDR ETFs is the way to go; then you play the game of guessing which market sectors will generate higher returns over the years
Sahl#0630: There is some way to adjust the risk and be compensated for it
Sahl#0630: I don’t remember it though
triggerhappygandi#0001: Hahahahaha yeah it was a very _safe_ gamble
triggerhappygandi#0001: But still a gamble nonetheless
Sahl#0630: Otherwise though if your portfolio is only a few stocks you’re taking on unsystematic risk and not being compensated
triggerhappygandi#0001: Call me old fashioned but I think Warren Buffett does it right. Just buy a stock and keep it for 5-6 years.
triggerhappygandi#0001: Provided you don't buy shit in the first place.
dopa#3178: you can also buy a preferred-stock ETF with a higher return than any savings account
Sahl#0630: Warren Buffett tells people to do index
triggerhappygandi#0001: If you had Google stock in 2015 you can't possibly be regretting it.
dopa#3178: it's right now at 5% annually in dividends
triggerhappygandi#0001: To beginners @Sahl
triggerhappygandi#0001: Index funds do better than many hotshots with fancy MBAs
Sahl#0630: Yes
triggerhappygandi#0001: Which then makes them come up with "efficient market hypothesis"
dopa#3178: options-wise, I think there is an opportunity to write a pricing engine at scale, but I am not looking to get into this again, yet
triggerhappygandi#0001: Like yeah when everyone is blinded to the long term of course one person can't outsmart everyone else in 3 months.
dopa#3178: it is like a drug, you can't do anything else, your lifestyle forms around trading; I did not like that
triggerhappygandi#0001: If your portfolio was simply Microsoft, Amazon, Apple and Google, you have probably beaten the shit out of index in last 5 years.
Sahl#0630: Yeah but it still might have been the wrong decision
Sahl#0630: You can always beat the index in hindsight
dopa#3178: looking back, it is always easy to say such things 🙂
triggerhappygandi#0001: How so
triggerhappygandi#0001: Oh yeah
triggerhappygandi#0001: Index is a very safe bet.
Sahl#0630: Because if your evidence wasn’t good enough you shouldn’t have invested
Sahl#0630: Even though it turned out well
triggerhappygandi#0001: Not at all. You can still buy Amazon/Microsoft and not regret it 5 years later.
Sahl#0630: Because in most outcomes it didn’t
Dromarion#3383: I definitely had to get up way earlier when I was trading.
dopa#3178: so AMZN and MSFT are risk free money ? 🙂
triggerhappygandi#0001: I would say so
triggerhappygandi#0001: In the _long term_ |
Sahl#0630: hmm I wonder what you’re being compensated for
dopa#3178: please don't say risk-free; your money in a bank account is not safe, let alone returns on the stock market
triggerhappygandi#0001: There is no way Amazon is going to be worse off 5 years later than it is today
Sahl#0630: legislature in US?
triggerhappygandi#0001: It could adversely affect them, but Jeff Bezos isn't a hollow sales person leading someone else's company
dopa#3178: you know US credit is not AAA but AA, right?
triggerhappygandi#0001: I would still bet on him to get a workaround
Sahl#0630: you’re taking on unsystematic risk though
Sahl#0630: that’s risk you don’t get compensated for in theory
Dromarion#3383: Tesla stock is like a mental roller coaster
triggerhappygandi#0001: I read a book on Warren Buffett just for fun because Audible suggested it to me. He says that going so deep into the numbers is unnecessary. If you think the management is adequate and motivated to perform well, it's always good to invest in them @Sahl
triggerhappygandi#0001: I don't understand any of the finance terms, but if I was American I would still buy Google nonetheless
triggerhappygandi#0001: Or Nvidia
triggerhappygandi#0001: Or AMD.
triggerhappygandi#0001: Actually AMD and Nvidia are probably the best bets I can think of right now
Sahl#0630: That’s true. However, investing in negatively correlated securities allows you to have high return and low risk!
Sahl#0630: Where each alone would be high return high risk
triggerhappygandi#0001: How so
triggerhappygandi#0001: Oh I understand
triggerhappygandi#0001: You invest in opposite companies? |
Sahl#0630: https://en.m.wikipedia.org/wiki/Modern_portfolio_theory
Sahl#0630: This model
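The two-asset case of that model is small enough to check by hand: with correlation below 1, portfolio volatility drops below the weighted average of the individual volatilities (the numbers here are made up for illustration):

```python
import math

def portfolio_vol(w1, sigma1, sigma2, rho):
    """Volatility of a two-asset portfolio with weights (w1, 1 - w1),
    per the standard Markowitz two-asset variance formula."""
    w2 = 1.0 - w1
    var = (w1 * sigma1) ** 2 + (w2 * sigma2) ** 2 \
        + 2 * w1 * w2 * sigma1 * sigma2 * rho
    return math.sqrt(var)

# Two assets, each with 30% volatility, correlation -0.5:
# a 50/50 mix has half the risk of either asset alone.
print(round(portfolio_vol(0.5, 0.30, 0.30, -0.5), 3))  # 0.15
```

With perfect correlation (rho = 1) no risk disappears; the more negative the correlation, the more the idiosyncratic risk cancels.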
dopa#3178: https://www.thestreet.com/investing/stocks/a-short-history-of-aol-from-youve-got-mail-to-verizon-13148737
triggerhappygandi#0001: I doubt I'll understand any of it 😅
Sahl#0630: And the end prediction of this theory is that every investor holds the same portfolio with weights adjusted to risk
Sahl#0630: So you buy a combination of index and T-bills
dopa#3178: this math looks so simple compared to ML lol
Sahl#0630: Otherwise you are not being compensated for risk
Dromarion#3383: All the AMD investors I know simp for the CEO
Sahl#0630: Super simple math yeah
triggerhappygandi#0001: All I know is that people in Stanford teach you that you can't beat the market, while Warren Buffett does so with casual common sense.
triggerhappygandi#0001: Lmao
Sahl#0630: Keep in mind this theory makes assumptions like efficient market hypothesis
Sahl#0630: But if that’s close to true the theory should be more and more predictive
triggerhappygandi#0001: AMD is relatively cheap. Only $80 or close as of now. I have zero doubts that it could get >500% in 5 years.
dopa#3178: > zero doubts
while I have some doubts that I might not even be alive lol
Sahl#0630: Unsystematic risk is risk you can get rid of just by diversifying so if you buy only a few securities you take on extra risk
dopa#3178: only zero doubts is death
dopa#3178: and entropy |
dopa#3178: 🙂
Sahl#0630: na I’ll be eternal in a reversible computer
Sahl#0630: Simulating the same 10000 years back and forth
triggerhappygandi#0001: Well... Okay if you're going to be like _that_, then yeah. I have 3% doubt
triggerhappygandi#0001: But you get my point
triggerhappygandi#0001: In 2025 AMD stock would be hovering around $400 easy
dopa#3178: just look at historical prices over 100 years
Sahl#0630: What if NVIDIA comes out with more and more machine learning optimizations and AMD dies out?
Sahl#0630: plausible outcome
triggerhappygandi#0001: But why? Like I understand there might be some mathematical equation, but math doesn't determine future supply/demand or innovation
dopa#3178: econometrics do, at least they model it
triggerhappygandi#0001: They're already working on their version of CUDA @Sahl
Sahl#0630: It’s like playing both sides in a war
Sahl#0630: At least one will win and prosper
triggerhappygandi#0001: Yeah
triggerhappygandi#0001: And I'm saying both companies will be much bigger than they are today
triggerhappygandi#0001: It's a non zero sum game
dopa#3178: math determines everything; the fact that money is numbers is major evidence for it
Sahl#0630: I don’t know exactly how predictive the theory is
triggerhappygandi#0001: It doesn't predict future though |
Sahl#0630: But it probably is to some extent
dopa#3178: it doesn't, but you do?
Dromarion#3383: I remember now that one of my friends risked his life over the Disney trade. He's a literal Florida man who went to Disney world during a pandemic to interview employees on how it was affecting their business.
triggerhappygandi#0001: No. But I can see that people aren't just going to stop creating better computers @dopa
Sahl#0630: Wouldn’t that be insider trading if the results of the interview weren’t public
triggerhappygandi#0001: And these two are the most innovative companies in the hardware market right now
dopa#3178: but people might stop buying better computers or slow down
triggerhappygandi#0001: Why
dopa#3178: economics ?
dopa#3178: less income
triggerhappygandi#0001: They haven't in 20 years
dopa#3178: stagflation (more correct)
dopa#3178: etc ...
triggerhappygandi#0001: Very unlikely scenarios
Sahl#0630: The main reason I don’t pick stocks is I don’t think I’m smarter than most other investors
Sahl#0630: If I’m not then I won’t gain out of it more than index
triggerhappygandi#0001: How does Warren Buffett do it then according to you?
Dromarion#3383: Well he didn't work at Disney, he just went to their parks.
Sahl#0630: Because he’s smarter than most investors?
dopa#3178: he might be just lucky |
Sahl#0630: True
triggerhappygandi#0001: His strategies seem to me very sensible though
Sahl#0630: Yes but his intuition could be better than yours
triggerhappygandi#0001: It is
Sahl#0630: Or you could be better who knows
Sahl#0630: But likely not
triggerhappygandi#0001: But I'm saying that it's not hard to understand how he did what he did
chilli#5665: Fundamentally, AMD is not a software company
chilli#5665: That's always been their big problem
dopa#3178: in fairness, Buffett puts lots of work into researching companies
triggerhappygandi#0001: Yeah@dopa
dopa#3178: if you do the same, like 10 hours a day
dopa#3178: then you make money
dopa#3178: but if you think you can just buy stocks without doing lots of work, just because of beliefs
dopa#3178: then you are gambling
triggerhappygandi#0001: Who knows? You think Nvidia is a better software company? Just asking
dopa#3178: you have to treat it as such
chilli#5665: What? Yes, Nvidia is obviously the better software company
Sahl#0630: Another investing tip: always assume AGI won’t exist in the future 🙂
chilli#5665: Are you asking who knows this? |
triggerhappygandi#0001: Yeah
Sahl#0630: smart AGI that is
triggerhappygandi#0001: I think their version of cuda could work too
chilli#5665: I think most people who have worked with both could tell you this
dopa#3178: this is a good point; like, when investing you have to think in reverse
dopa#3178: not how much money you can make
triggerhappygandi#0001: They managed to catch up to Nvidia in terms of consumer GPU performance
dopa#3178: but how much you will lose
triggerhappygandi#0001: And they're beating the shit out of Intel rn
chilli#5665: But you could always just look at performance of cudnn vs rocm
chilli#5665: Or how nvidia has been pushing heavily on things like apex, cudagraph, etc.
dopa#3178: AMD and Nvidia is interesting battle
triggerhappygandi#0001: Okay yeah Nvidia is better then. But that doesn't translate to AMD never improving. Rocm _could_ take off
chilli#5665: Or any of Nvidia's more commercial software offerings like DLSS, autonomous driving, or their recent videoconferencing stuff
dopa#3178: because AMD tends to be more opensource where NVIDIA is not
triggerhappygandi#0001: @chilli alright alright I get the point
dopa#3178: AMD GPUs are getting on par with NVIDIA to some extent
chilli#5665: Perhaps, but AMD has been lagging behind for nearly a decade now
triggerhappygandi#0001: I hadn't thought of Apex and dlss lol
chilli#5665: It's partially just how they view themselves as companies |
chilli#5665: AMD views themselves as a hardware company
chilli#5665: Nvidia calls themselves a "solutions" company
dopa#3178: with open source drivers
triggerhappygandi#0001: Yeah. And Nvidia now sees themselves as an AI visionary
chilli#5665: They have open source drivers because they're so far behind nvidia lol
triggerhappygandi#0001: Jensen Huang is all in on AI
dopa#3178: NVIDIA reminds me of MATLAB for some reason
triggerhappygandi#0001: Why
dopa#3178: and then along comes Python
dopa#3178: MATLAB is useful as a service more than anything else, that's how I see it
dopa#3178: but Python is gaining more and more popularity
dopa#3178: the question is: can a solutions/services company compete with open source communities?
dopa#3178: there's a fair possibility that Python will take more and more of MATLAB's user base in the future
dopa#3178: I don't think it will go away completely, but its growth is over unless they adapt
dopa#3178: seems like the same is happening with AMD and NVIDIA
dopa#3178: hardware + NVIDIA closed-source, ready-made solutions versus hardware + AMD open-source, GitHub solutions 🙂
dopa#3178: maybe MS sees this the same way, and that's the reason why MS did a 180 in relation to open source
dopa#3178: it's also possible NVIDIA might be planning a strategic open source move and will only respond after AMD becomes a sufficient threat
Dromarion#3383: Is talking to customer-facing employees really insider trading? Like if I go to a McDonald's and the cashier tells me their place is doing well and it compels me to buy McDonald's stock, am I insider trading 🤔
bmk#1476: No, but also when the shoeshine boy gives you stock tips... |
triggerhappygandi#0001: It's literally zero inside information lol
triggerhappygandi#0001: You think he's in any loop?
triggerhappygandi#0001: Like even if you face a lawsuit for it, the judge will laugh at it as well
Dromarion#3383: I guess it's more comparable to due diligence then. Still, I was pretty freaked out over my friend actually going to Disney World just to do research; a week later there was a report of a guy who went there and later died of Covid
dopa#3178: I knew a person who would call companies and ask them questions or request additional information
triggerhappygandi#0001: Anyone here interested in chess? Something very very dramatic just happened today.
triggerhappygandi#0001: Magnus Carlsen, arguably the best player in history, got straight up checkmated over the board, after his opponent had blundered a piece.
gwern#1782: welp. even homer nods
StellaAthena#3530: @triggerhappygandi Wait what!? By who?
StellaAthena#3530: (For some context, over-the-board checkmates are rare and Magnus is by far the best chess player in the world. In October of this year he finally ended his *two year undefeated streak* in classical chess)
dopa#3178: what is the argument that Magnus is not the best player in the world?
triggerhappygandi#0001: His rating
triggerhappygandi#0001: 2881 is the highest ever recorded.
triggerhappygandi#0001: @StellaAthena a Russian GM Daniil Dubov
dopa#3178: hmm, rating doesn't compute for me as a measure
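triggerhappygandi#0001: The rating is Elo: after every game it moves based on your actual vs. expected score. Rough sketch (the K-factor here is illustrative, not FIDE's exact rules):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Elo model: expected score for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, score_a: float, k: float = 20) -> float:
    """New rating for A after one game (score_a: 1 = win, 0.5 = draw, 0 = loss)."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

# A 2881 player who merely draws a 2700 player *loses* points,
# since his expected score was well above 0.5
print(update(2881, 2700, 0.5))
```

triggerhappygandi#0001: So holding 2881 means consistently beating near-peers, which is why the rating counts as a measure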
triggerhappygandi#0001: How about him going years without losing?
triggerhappygandi#0001: As Stella said, he recently ended his 2 years unbeaten streak in classical format.
dopa#3178: years without losing in chess == best player in the world
dopa#3178: sorry I just realized you said in history, not in world like current world
triggerhappygandi#0001: Well, he's the youngest ever world champion, hasn't lost his title ever since, and is the #1 rated player in all 3 formats. |
dopa#3178: I am not sure if he best player in history
StellaAthena#3530: Carleson is one of the four best players in history, for sure. The argument is largely about how you measure "best"
dopa#3178: probably that would be Kasparov ? 🙂
triggerhappygandi#0001: He is probably the best human chess player to ever live. Now I know it's unfair to do a historical rating but still. Him being checkmated is very, very rare.
triggerhappygandi#0001: He was 13 when he drew a match against Kasparov, and beat Karpov in a match at 12 @dopa
dopa#3178: god
triggerhappygandi#0001: Not a game but a match
dopa#3178: I did not know that much about him
triggerhappygandi#0001: Yeah he is pretty hardcore
triggerhappygandi#0001: His endgame skills are probably better than Fischer (can't compare them sadly) and he can calculate middlegame like stockfish lite
dopa#3178: I wish we could see his brain scan compared to other humans 🙂
StellaAthena#3530: Carlsen plays the best chess of anyone ever
Kasparov was the most dominant player over a long period of time
Fisher was the most dominant player over a short period of time
Wildcard slot for your personal fav who doesn't really belong in the conversation
triggerhappygandi#0001: For context, when he was 4, he knew the area and population of every single municipality in Norway @dopa
triggerhappygandi#0001: Lol @StellaAthena
triggerhappygandi#0001: Mikhail Tal
triggerhappygandi#0001: Obligatory
dopa#3178: by the age of 4 I took a TV and a table apart and made multiple holes in the wall
triggerhappygandi#0001: But yeah. There probably is no one more competitive than these three. They can grind out a game for 6 hours til the kings are bare naked and then agree to a draw. And even out of these three, magnus squeezes the most water out of a stone.
dopa#3178: I was also electrocuted; blame the electric shock for my writing skills lol
triggerhappygandi#0001: There was a statistical analysis showing that Magnus won positions where 70% of grandmasters would just agree to a draw.
dopa#3178: I can't play longer than 40 min, I just stop thinking; cognitive fatigue is real
triggerhappygandi#0001: It is.
bmk#1476: by the age of 4, i had irreparably broken a computer
bmk#1476: this started a continuing trend
triggerhappygandi#0001: By 4 I ate dirt.
dopa#3178: I'm so thankful that my parents let me take anything apart
bmk#1476: i continue to break computers to this day
dopa#3178: now I just create problems in computers lol
dopa#3178: is Magnus's memory the main factor, or is there something else?
triggerhappygandi#0001: Probably.
triggerhappygandi#0001: His memory is near perfect
StellaAthena#3530: There's a lot of factors. I stopped playing competitive chess because I was maxing out how long I could focus on one thing.
triggerhappygandi#0001: You've played competitive? Damn.
triggerhappygandi#0001: Had a FIDE rating? @StellaAthena
dopa#3178: I am practicing chess to play Stella one day 🙂
StellaAthena#3530: Memory is important, but memorizing the first 10 book moves of every opening and every 10 piece end-game won't make you the best player in the world.
triggerhappygandi#0001: True. But even among his peers magnus has an unreal memory |
zphang#7252: > Carlsen also plays speed chess online under many screen names. He is best known for trollish openings and playing strong despite banter and gags.
dopa#3178: this is the reason why I asked: what are the driving functions; attention, memory, causal inference, what else?
triggerhappygandi#0001: He can tell the names of players, the year, the location, and even the tournament where a position occurred, just by looking at the board position.
StellaAthena#3530: Memory, focus, mental discipline, and methodicalness are the most important over the board traits.
triggerhappygandi#0001: A far better memory than anyone in your weight class sure does help. Not to mention he's at least 1 standard deviation above genius IQ.
bmk#1476: i wonder how much of strong long-term memory is nature vs nurture
triggerhappygandi#0001: What I wonder is how to get even 1/4 of this in me lol
dopa#3178: probably both
bmk#1476: sure, but how much of each
dopa#3178: how do you even measure this ?
dopa#3178: I mean you would need to reverse engineer neural genetics, to measure it
StellaAthena#3530: These traits are less important than they might seem though, IMO. Chess is about practice and prep, not sheer brilliance.
triggerhappygandi#0001: But sheer brilliance acts as a cheat code.
triggerhappygandi#0001: But yeah
triggerhappygandi#0001: That's why Russians dominated chess for almost all of FIDE history
StellaAthena#3530: This is a very common misunderstanding. Getting good at chess takes a very long time and a huge amount of work. You'll do better as someone who is very studious and willing to practice 12 hours a day than merely being brilliant over the board.
triggerhappygandi#0001: Indeed. Russians even have dedicated schools for that
dopa#3178: also it can still be both, like the weight on nature could be lower and the weight on nurture higher, or vice versa
interesting whether there is a natural genetic limit.
triggerhappygandi#0001: And they're not your average summer camps. |
bmk#1476: this might be a very hot take but it's kind of a waste to funnel really smart people into full-time chess, go, etc
StellaAthena#3530: Many of the top players in the world right now were 2400 to 2500 ELO when they were 13.
bmk#1476: especially early age dedicated schools
triggerhappygandi#0001: Einstein lamented that Emanuel Lasker didn't study math full time lol @bmk
StellaAthena#3530: It took carlsen seven years to go from 2450 to 2800
bmk#1476: to be clear, this isn't a "lol we should ban chess" take
dopa#3178: why not if someone super smart wants to play only chess who cares
StellaAthena#3530: @bmk I agree actually. I'm glad that as a child my mother didn't force me to study chess all the time. A lot of people will make it to their first junior nationals or whatever and their parents will decide that They Were Destined For This and make it their world.
dopa#3178: I don't think everyone's mission should be to contribute to science/society in life
triggerhappygandi#0001: Indeed. I always imagine what these 2800 ELO players could do in math/physics @StellaAthena
triggerhappygandi#0001: @dopa probably, yeah. But you have to wonder what they can achieve in academia
bmk#1476: especially since math/physics isn't nearly as "prestigious" in a sense at that age
dopa#3178: I think there should be factor of motivation, will to contribute
dopa#3178: they might not enjoy doing science at all
bmk#1476: like nobody is making special math schools and hunting for kids with an iq above [number] to bring to their Ultra Super Math School
zphang#7252: imagine, they could be fine-tuning BERTs and hill-climbing on SOTAs instead
dopa#3178: this is true we need more STEM 100%
bmk#1476: *more grad students for the grad student descent god*
gwern#1782: not in the US, anyway. there are occasional institutions overseas, like I think Kolmogorov had a math one in russia akin to the chess schools
bmk#1476: ah |
dopa#3178: it also saddens me that there are probably 10-year-olds who have no chance of capitalizing on their talents/predispositions
triggerhappygandi#0001: It's a shame though
triggerhappygandi#0001: We don't even know how much potential is never realised simply because we never found intelligent people
dopa#3178: so talent will be wasted to a large extent 😦
triggerhappygandi#0001: Ramanujan is a good example. Hardy found him just by chance.
triggerhappygandi#0001: And without formal education he was basically SOTA on math
StellaAthena#3530: Frankly, I don't think that people should be funneled into anything at such a young age.
triggerhappygandi#0001: Like how
triggerhappygandi#0001: Hey, if they are funnelled into chess, why not find some for math too
triggerhappygandi#0001: Also not encouraging child abuse or anything
dopa#3178: but they should have all opportunities if there is interest
triggerhappygandi#0001: But what if we aren't even finding all the kids who are actively interested in math
triggerhappygandi#0001: And have an insane IQ
dopa#3178: kids need guidance by example
dopa#3178: there is no other way, in any reasonable context
StellaAthena#3530: In the US, we are destroying people's interest in math
triggerhappygandi#0001: How?
StellaAthena#3530: Terrible high school education
triggerhappygandi#0001: I thought the US was, despite all its shortcomings, at least the largest talent magnet
dopa#3178: in higher education yes, otherwise it is just depressing |
StellaAthena#3530: @triggerhappygandi At the undergraduate and graduate school level yes
StellaAthena#3530: But for people who are under 18 the education system is extremely variable and often abysmal
dopa#3178: and some argue education is not a right 😦
triggerhappygandi#0001: I see. Are there not more schools like Johns Hopkins' school for the gifted?
triggerhappygandi#0001: I know for sure Johns Hopkins has a special school. I probably got the name wrong
StellaAthena#3530: @triggerhappygandi the vast vast majority of students don’t go to such schools, and there isn’t great evidence that such schools make you better
Varon#6292: I've heard the gap between high school and post-secondary education in the US is really bad. The rate of getting into a proper post-secondary institution is really low
triggerhappygandi#0001: I see. Then why are such "gifted" schools still running
triggerhappygandi#0001: If they aren't producing the smartest kids
Varon#6292: International students keep alot of universities running
StellaAthena#3530: All sorts of reasons. One is that, even if they don’t produce the smartest kids, they do fail students less. In many areas getting into a gifted program is what allows you to get a *not awful* education
StellaAthena#3530: One of the big problems with US youth education is inequality. Most school districts receive a significant portion of their funding from *local property taxes*
StellaAthena#3530: In a poorer area, the “gifted school” might be no better than a good school in a wealthy area. But it’s clearly far superior to the other schools that the poor student has access to
Varon#6292: With US school privatization there's no fixing the inequality; all schooling should be public but it won't ever happen.
Dromarion#3383: Gifted programs probably look good on an application but who knows how much that really improves your chances
StellaAthena#3530: Looking at a list of the top high schools in the US, all of the top 30 are either in cities (where there is likely a higher density of rich people sending their kids there) or in counties that have abnormally high median incomes
triggerhappygandi#0001: I think US would fare infinitely better if the government stopped being so money minded all the time.
gwern#1782: oh, they don't. magnet schools don't cause a thing
gwern#1782: you can look at the regression discontinuities
Varon#6292: That's like saying the mortality rate is lower in places with higher incomes. Capitalism fixes everything, for a price
gwern#1782: (they might be good for quality of kids' lives, but if you're looking at college or adult income, the effect is pretty much null)
StellaAthena#3530: ^^
triggerhappygandi#0001: But shitty high schools are a problem in most countries imo
dopa#3178: all US education is broken in my opinion; money should not equal a degree, and students have huge loans instead of buying a house
StellaAthena#3530: There’s a big difference between “magnet schools don’t make good students into top students” and “shitty schools don’t fail students.”
gwern#1782: buying houses isn't a great thing either
dopa#3178: it's for-profit education, where everything is set up to get money from you.
dopa#3178: you can't even declare bankruptcy on student loans
Varon#6292: Money equals everything in the US not just education, everything someone is gets valued at what they are worth
gwern#1782: if only
triggerhappygandi#0001: What country would you give as an example to the contrary? I guess South Korea? There was a study that compared the level of education expected to win the IMO (the Olympiad) with the percentage of it covered in each country's syllabus. IIRC South Korea covered the most of the syllabus.
gwern#1782: if money really did rule everything in the USA, maybe it would be half as well run as singapore
bmk#1476: that is a very bad measure of education quality imo
dopa#3178: @Varon as anywhere else; my point is it's taken to extreme levels
triggerhappygandi#0001: Lol. Then what _does_ rule it? @gwern
Varon#6292: Look at Finlands education model
dopa#3178: USSR education and health system 🙂
triggerhappygandi#0001: :berk:
StellaAthena#3530: It’s not even good as a measure of math education
StellaAthena#3530: The USSR had a decent education system... if you weren’t Jewish and had Russian grandparents. |
gwern#1782: a vast mish-mash of competing interests, secular religions, and decay. case in point: coronavirus. nothing about the US response to coronavirus maximizes anyone's profits or money
dopa#3178: I meant the USSR system with American values, not without them
triggerhappygandi#0001: As an outsider perspective, I think US government thinks in very transactional terms.
Dromarion#3383: Doesn't the availability of student loans contribute to the goofy price inflation at universities? Like if tuitions double in a decade does it *really* mean that these courses are twice as good now?
triggerhappygandi#0001: Probably a lot of countries do
dopa#3178: it's loan-based, that's what's worst
dopa#3178: so people owe money for 10 years or so, I guess
dopa#3178: and it's a lot of money
dopa#3178: really a house's worth of money
triggerhappygandi#0001: And the fact that they would give loans to _anyone_, for however frivolous a course they do. All they care about is money
bmk#1476: the US would be so much better if everything was driven by money, honestly
bmk#1476: as it is right now, it's nowhere near the optimum
Varon#6292: The tuition cost has scaled astronomically seeing as the minimum wage didn't scale with it
bmk#1476: the inefficiency is astounding
gwern#1782: 'remember, they can't both be right, but they can both be wrong'
triggerhappygandi#0001: Sums it up
dopa#3178: it is really bad, it is as extreme as it can get probably
triggerhappygandi#0001: I remember that graph which says "what happened in 1971"
triggerhappygandi#0001: Wealth started stagnating around then
bmk#1476: also what about cost disease |
bmk#1476: https://slatestarcodex.com/2017/02/09/considerations-on-cost-disease/
bmk#1476: everything costs more and is worse
bmk#1476: redistributive policies are a bandaid over the big problem of "why the heck does it *cost so much*"
Varon#6292: The gap widens every year
Dromarion#3383: I remember textbooks alone resembling a racket where you absolutely had to spend $150 on the latest book which will be worthless to you in three months because they update the questions every edition.
bmk#1476: *libgen noises*
triggerhappygandi#0001: Well, since people at least our age seem to realise there's something fucked up, I have hope that future US politicians will get their shit together.
Varon#6292: The wake-up call is that there are more poor than rich. Why serve the rich if they don't serve you?
The rich poison the water, don't pay taxes properly, and get an easier ride
It's time to eat the rich, and restructure
triggerhappygandi#0001: Never knew the USSR was particularly anti-Semitic. I mean, most countries back then were, but I didn't know they discriminated against Jews particularly. How does that relate to communism in any way?
dopa#3178: this is how I almost went to jail, printing books 10x cheaper for years
triggerhappygandi#0001: How is this an offense worth sending someone to prison
dopa#3178: I hate that education is for profit, simply because the nation depends on an educated population
Varon#6292: Copyright infringement
triggerhappygandi#0001: Wtf
triggerhappygandi#0001: It's not like he's selling them
triggerhappygandi#0001: I guess I'm a federal criminal then
dopa#3178: well, I was not sent to prison; I was hired by the government back then
dopa#3178: I was selling them 🙂
triggerhappygandi#0001: Lol
StellaAthena#3530: Extremely so. Jews were soft-banned from studying entire fields, including pure math. In the book *Love and Math* the author talks a lot about how he had to sneak into math lectures and pretended to be doing an applied math thesis while really researching pure math.
triggerhappygandi#0001: Damn. Did a whole other degree _in pretend_
StellaAthena#3530: They also designed deceptively hard problems for Jews at oral exams to disqualify them. There's a paper on this phenomenon here: https://scholarworks.umt.edu/cgi/viewcontent.cgi?article=1320&context=tme
dopa#3178: I really hate nationalism so much
triggerhappygandi#0001: Any specific reason for their hatred? @StellaAthena
dopa#3178: fear of some sort
dopa#3178: I never really understood why Jews are hated so much
StellaAthena#3530: What do you want me to say exactly? “The Jews were evil and cursed the crops”?
dopa#3178: to me, the best people to do business with 🙂
triggerhappygandi#0001: Nono. I know that generic scapegoat excuse, but I thought USSR being a communist country would focus more on class or something @StellaAthena
Dromarion#3383: I think a lot of minorities in the USSR were repressed if their identity seemed distinct enough to present a threat of separatism, like the Cossacks and people of Central Asia.
StellaAthena#3530: The USSR heavily persecuted certain racial minorities, labeling them as “enemies of the people” and sometimes going as far as to deport entire races of people under the guise of treason
triggerhappygandi#0001: Plain racism then. Hypocritical of them to discriminate against workers
dopa#3178: nationalism
dopa#3178: it is all rooted in nationalism
bmk#1476: The ussr was hypocritical in a lot more ways though
triggerhappygandi#0001: "X country is for Y people only"
dopa#3178: yep
gwern#1782: russians were always very anti-semitic. and the communists did a ridiculous amount of horrible shit (I was reading about one atrocity where they just deported thousands of random people to a random island to 'colonize it' and no one realized this was a bad idea until enough reports of rape and murder and starvation surfaced to shame the officials involved). so there's not necessarily anything communist-specific to explain. but another problem is that overachieving minority groups are always acutely embarrassing to blank-slate egalitarian revolutions. china had the same problem - they took away all of the money and assets of the former elites, literally leaving them sitting half-naked on bare floors, but the 'black' kids kept coming out on top over 'red' offspring during examinations...
dopa#3178: communism made my great-grandparents nearly homeless
StellaAthena#3530: Entire communities of Chechens, Tatars, and Koreans were deported to the middle of nowhere
triggerhappygandi#0001: Siberia?
StellaAthena#3530: Often times, but not exclusively
StellaAthena#3530: There’s a Wikipedia article on this actually: https://wikipedia.org/wiki/Forced_settlements_in_the_Soviet_Union
dopa#3178: I used to build these as a kid
triggerhappygandi#0001: You what
StellaAthena#3530: Oh and the Kulaks
triggerhappygandi#0001: You mean gulags?
dopa#3178: @triggerhappygandi at the summer house I used to build shit-shacks like this for fun when I was like 10
gwern#1782: the one I was mentioning was not really a gulag in the usual sense but would count as a forced settlement, I guess: https://en.wikipedia.org/wiki/Nazino_tragedy
StellaAthena#3530: No, kulaks were a class of wealthier peasants. There was an official policy of “Dekulakization”
triggerhappygandi#0001: Oh okay
dopa#3178: so if you're rich you're a kulak and it's game over, specifically if you were a farmer
StellaAthena#3530: Mass arrests, mass murder, deportation, deliberate starvation.
triggerhappygandi#0001: Pre WW2 entire world seems very unsavory
dopa#3178: brutal
triggerhappygandi#0001: Yes
dopa#3178: like real horror; I don't enjoy reading history past WWII, it makes me depressed
StellaAthena#3530: > After the dissolution of the Soviet Union the researchers gained access to the archives of the NKVD. Data on 1 January 1953 show 2,753,356 "deported and special settlers". Dmitri Volkogonov, in his book about Stalin, quoted an MVD document that reports 2,572,829 on 1 January 1950 |
triggerhappygandi#0001: Makes me thankful for democracy. I remember first reading about fundamental rights when I was like 12 and I thought "hmm how is that anything special" like an entitled idiot.
StellaAthena#3530: The “special settlers” were often what @gwern was talking about. You were officially settling an island or colonizing the wilderness, but really it was a deportation camp.
triggerhappygandi#0001: I wonder what made Stalin so inhumane. Was he a sadist who enjoyed seeing others suffer?
dopa#3178: he was criminal
dopa#3178: like mafia criminal
triggerhappygandi#0001: There's a story about how he had people in his census bureau killed because they reported fewer Russian people than he expected to hear.
dopa#3178: so unless he knew you personally, he did not care about you at all
StellaAthena#3530: Speaking of “like mafia criminal” I just heard about Trump’s pardons last week 😦
triggerhappygandi#0001: Legit killed/gulag-ed people for doing their job
dopa#3178: as a person who has met mafia people, Trump is not mafia, just a person the mafia uses
triggerhappygandi#0001: So that's how he justified it.
bmk#1476: Oh no what did he do now
dopa#3178: he doesn't have to justify it, it's just how things are; he does view people as chickens
triggerhappygandi#0001: Probably _didnt_ pardon Snowden
dopa#3178: like you don't think about the poor chicken that you are eating
dopa#3178: I am not a fan of Snowden btw
triggerhappygandi#0001: Idk what people's views on him are, but he did uncover very important info
dopa#3178: yeah, by leaking it to the world instead of resigning and reporting it
dopa#3178: there are channels for this
dopa#3178: he did not even try
dopa#3178: he is a traitor.
bmk#1476: I'm definitely thankful for the information he revealed, but i don't know if there were counterfactually better outcomes
triggerhappygandi#0001: Yeah, same. At least we now know the US government spied on its own people
bmk#1476: Maybe it would have been better if he went with the official channels, maybe it wouldn't, i don't know
triggerhappygandi#0001: And Angela Merkel too?
dopa#3178: compare Snowden to the Pentagon Papers (Vietnam War)
StellaAthena#3530: IDK how I feel about him. I will say I like Chelsea (sp?) Manning a lot more.
triggerhappygandi#0001: Regardless, he deserves to come back to his home
dopa#3178: he works for the Russians now
dopa#3178: there is no way Russia is not using him
dopa#3178: like, what leverage does he have?
dopa#3178: lol
triggerhappygandi#0001: I guess
dopa#3178: Russia throws its own doctors out of windows; you're telling me Snowden is there because Russia is nice? 🙂
StellaAthena#3530: @bmk most notably, Trump pardoned a large number of people who committed crimes that he personally benefited from
bmk#1476: Yikes
triggerhappygandi#0001: I think he will flee the country in January
bmk#1476: Lol trump going to Russia would be funny and also completely expectable
dopa#3178: that would be epic
triggerhappygandi#0001: I would say Shanghai |
dopa#3178: and so much demoralizing
triggerhappygandi#0001: It would be funnier
bmk#1476: I don't know what the national security implications would be though
StellaAthena#3530: Including former business associates Roger Stone and Paul Manafort, and his son-in-law's father
triggerhappygandi#0001: Does he care?
bmk#1476: I do
dopa#3178: he is manipulated easily
StellaAthena#3530: Oh fun
triggerhappygandi#0001: Yeah but you're not fleeing a country@bmk
StellaAthena#3530: He also pardoned people convicted of war crimes in Iraq
triggerhappygandi#0001: He will definitely flee
triggerhappygandi#0001: Probably has worked it out by now
dopa#3178: he pissed off so many people, I can't even start
triggerhappygandi#0001: That's why
bmk#1476: No i mean i care that it will have implications for me
triggerhappygandi#0001: How so
dopa#3178: there are so many lawsuits waiting for him
dopa#3178: it will be endless
bmk#1476: Trump leaking national secrets to Russia is probably not a good thing for American citizens
StellaAthena#3530: The list is: |
- 4 people convicted in the Mueller investigation
- 4 war criminals
- two former Republican congressmen
- two other Republican political operators
- his son-in-law’s father
- two people I am yet to identify
dopa#3178: I think it's underestimated how big the cyberattack on America was
dopa#3178: it's like a nuclear bomb was dropped, just in cyberspace
StellaAthena#3530: In the past he’s pardoned other politicians, including the man who tried to sell Obama’s senate seat
dopa#3178: that's how Trump thinks
dopa#3178: there is no way to make money honestly; the only way is to cheat
dopa#3178: and he thinks everyone else cheats too
dopa#3178: this is his world view
dopa#3178: it's me, my family, and cheat everyone else
dopa#3178: from this perspective all his actions kind of make sense; he's just not adept at smart cheating, thankfully
dopa#3178: it is a legitimate survival strategy
dopa#3178: but there is no concept of honor, glory, or ethics; it's just me, family, and cheat everyone else
triggerhappygandi#0001: How come it all just didn't amount to anything?
triggerhappygandi#0001: The Russian cyber attack
dopa#3178: we just don't see it
dopa#3178: why would you show what you know
dopa#3178: you wouldn't think out loud in chess
triggerhappygandi#0001: Hmmmmmm
dopa#3178: even if no information was stolen
dopa#3178: just the scale of the attack, and how long it took before it was discovered, is insane, as far as I understand
dopa#3178: what's scary is that Kaspersky was kicked out of the US for somewhat similar reasons, and then this attack happens
dopa#3178: out of everything, it is just demoralizing
dopa#3178: I really think this is Russia's intent: not to do anything else but demoralize the US
dopa#3178: it's worse than any attack
dopa#3178: this was a big factor in why the USSR collapsed too
dopa#3178: when people just lose belief in the system, it's game over
StellaAthena#3530: If anyone wants to read our paper on the Pile and give feedback we would hugely appreciate it! Here's a link to read it: https://www.overleaf.com/read/wgmnfqvzckjz
dopa#3178: where is paper link ?
StellaAthena#3530: I edited it into my comment.
dopa#3178: not sure if it's just the page rendering in the browser, but on some pages there is unusual space between paragraphs
StellaAthena#3530: Is this better https://cdn.discordapp.com/attachments/729741769738158194/793987207324368916/EleutherAI-3.pdf
dopa#3178: nope, same thing
dopa#3178: I am not an expert in this but it seems very off
dopa#3178: pages 9 and 10
dopa#3178: looking at the latex, it seems like there is an extra enter
zphang#7252: oh that's just weird typesetting I think
StellaAthena#3530: Ah yes, I see what you mean
zphang#7252: we can do some vspace shenanigans
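zphang#7252: e.g. something like this (just a sketch, lengths and row contents illustrative; would need eyeballing against the real table):

```latex
% tighten vertical space around one float without touching global layout
\begin{table}[h]
  \vspace{-0.5em}   % pull the table up toward the preceding paragraph
  \centering
  \small            % one step below \normalsize; \footnotesize if desperate
  \begin{tabular}{lr}
    Component & GiB \\
    \hline
    Pile-CC   & 227.12 \\
  \end{tabular}
  \vspace{-0.5em}   % reduce the gap before the following paragraph
\end{table}
```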
dopa#3178: otherwise, reading at least the first 6 pages
dopa#3178: it's very well written, in a direct no-BS manner, and I like it
StellaAthena#3530: There's no reason to mess with it right now, as it might look totally different when it's done. Manually fiddling with adaptive spacing is the last thing you should do.
dopa#3178: no red flags so far 🙂
StellaAthena#3530: Awesome!
StellaAthena#3530: We are planning on thanking all reviewers by name. Would you like to be included, and if so what's your full name?
dopa#3178: on page 10
dopa#3178: sentence ends abruptly:
dopa#3178: in particular, the sets with a similar methodologyto the GPT-3 training data (Pile-CC, OpenWeb-Text2, Books3, and Wikipedia) are in the upperhalf of
StellaAthena#3530: Noted
dopa#3178: it is not critical, but table 4 is in a smaller font (me being picky here)
StellaAthena#3530: That's so that it'll fit
dopa#3178: would it be worse if it were rotated 90 degrees?
zphang#7252: I think we can convert it to two rows
bmk#1476: I think we should keep it as is
zphang#7252: it is unreasonably small, I think
StellaAthena#3530: @zphang Yeah, that's a good thing to try |