axiom#3599: lmao, does gwern have a high anime power level
axiom#3599: my scouter isn’t showing anything
StellaAthena#3530: lmao, *does gwern have a high anime power level*
axiom#3599: i feel like i’ve seen the girl on the top left
axiom#3599: :snuffymischief:
bmk#1476: gwern's anime power level is off the charts
bmk#1476: *ahem* TWDNE
bmk#1476: *ahem* https://www.gwern.net/Anime-reviews
AI_WAIFU#2844: https://www.gwern.net/index#anime
bmk#1476: somehow he's found the time to not just watch but also write up detailed reviews for all these anime https://cdn.discordapp.com/attachments/729741769738158194/802383567769829416/unknown.png
axiom#3599: i somehow have never read any of gwern’s writing on anime
bmk#1476: i mean, you gotta know about TWDNE right
axiom#3599: of course lul
axiom#3599: “my scouter isn’t showing anything” as in, it’s probably over 9000
bmk#1476: my anime power level is zero so i did not get the reference
AI_WAIFU#2844: It's actually funny. The only anime I watched in 2020 was re:zero
bmk#1476: i havent watched anything in ages
bmk#1476: after i git gud at japanese we need to have a watch party for the new railgun season
AI_WAIFU#2844: I can get behind that.
bmk#1476: + jp sub for further Language Learning Benefits
axiom#3599: in dragon ball z, the alien prince vegeta, heir of the saiyans, has never encountered warriors who bother to hide their “power levels”. since the mcs, goku et al., power up and down when they engage in battle, vegeta dramatically underestimates them
bmk#1476: (and before you pull up the "anime isnt a good place to learn a language", i literally learned german through shitposts)
axiom#3599: press 9 to unsubscribe from anime facts
bmk#1476: nein
axiom#3599: @AI_WAIFU i thought re:zero season 1 was really comically awful
axiom#3599: amelia’s character design is cute though
bmk#1476: i only ever watched s1e1 of re:zero
bmk#1476: and it's incredibly cringe and cliche, tbh
axiom#3599: gonna read what gwern thought about shin sekai yori
bmk#1476: it's almost the central example of isekai
axiom#3599: gwern has a write up for flip flappers??
axiom#3599: pretty savage
axiom#3599: i mean i suppose? sao is usually what i think of
axiom#3599: but re zero was pretty insanely popular I remember
bmk#1476: ive never watched sao, actually
bmk#1476: but i did watch the entire abridged series
AI_WAIFU#2844: Yeah, there wasn't much about it that stuck out, but season 2 was pretty good. Primarily because of the witch who made the mc *drink her body fluids*.
bmk#1476: which ive been told is objectively better anyways
AI_WAIFU#2844: I've never watched sao either.
bmk#1476: https://www.youtube.com/playlist?list=PLuAOJfsMefuej06Q3n4QrSSC7qYjQ-FlU i watched this quite a while back
bmk#1476: the quality is low but apparently still better than the actual thing
axiom#3599: sao is worth it so you can appreciate the abridged series more
bmk#1476: lol
gwern#1782: _doesn't know why more people don't see End of Evangelion everywhere. it's not like no one's watched it. and yet, how many reviewers of flip-flappers or kill la kill called it out?_
bmk#1476: what about the *original* eva? i think it's pretty obvious you're a fan of it
gwern#1782: the question of how much EoE plagiarises NGE TV remains a hotly debated one in the eva fandom! and yet. I *have opinions* on it.
bmk#1476: i dont actually know anything about that
bmk#1476: i just vaguely know that there exists something called "evangelion" which is popular
bmk#1476: i honestly have no idea what the difference between EoE and NGE is
gwern#1782: you are a good man, bmk, and I am sure god will not hold that invincible ignorance against you and even you may be saved
bmk#1476: so.. can you clarify to a noob what the heck is going on here?
axiom#3599: *gwern is throwing shade*
axiom#3599: :abbaSmug:
axiom#3599: you love to see it
bmk#1476: i may have noticed
axiom#3599: I thought your write-up on shin sekai yori was spot on
axiom#3599: gwern anime marathon this weekend? *hmmm* perhaps
axiom#3599: gotta compute the cosine similarity of our anime taste
axiom#3599: i’m sad you didnt treat “Humanity has Declined”
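("cosine similarity of anime taste" is just the normalized dot product of two rating vectors; a minimal sketch, with made-up scores standing in for real MAL ratings:)

```
import numpy as np

# Hypothetical 1-10 ratings from two users over the same five shows.
a = np.array([9.0, 7.0, 10.0, 4.0, 8.0])
b = np.array([8.0, 5.0, 9.0, 6.0, 7.0])

# cos(theta) = a.b / (|a||b|); 1.0 means perfectly aligned taste.
print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```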
gwern#1782: _was actually trying to watch Puccini's opera Tosca tonight but looks like that's not going to happen_
gwern#1782: I did enjoy the terrible pun of 'pairadogs' but Jinrui left me mostly going 'huh?' I got the impression that way too little of the original had been adapted into the anime and what I was seeing was just way too incomplete to base any kind of review on
gwern#1782: if I wrote a review, it'd be mostly 'man, that scene with the bread sure was something wasn't it? and that is possibly the most elaborate buildup of the most terrible pun I've ever seen in anime. it's pretty colorful and cute. other than that, idk lol'
axiom#3599: fair enough
axiom#3599: have you seen kaiba?
gwern#1782: which one
gwern#1782: the yuasa one I assume? yes. it was pretty good
axiom#3599: yeah the yuasa one
bmk#1476: _observes the weebs interacting in their natural habitat from a distance_
axiom#3599: how did you pick which ones you wrote up?
gwern#1782: it's pretty random
gwern#1782: tends to be mostly stuff I watched after starting a MAL account. I don't go back much. dunno where I'd start with a review of NGE, say
axiom#3599: stalking your mal
axiom#3599: :emmaWow:
axiom#3599: 5 out of 10 on samurai champloo is surprising
gwern#1782: (definitely not a fan of hip-hop and champloo did nothing with it that I liked)
triggerhappygandi#0001: Be careful. They are harmless until one of them disrespects another's waifu
sloth_.on._tabasco#9015: :WeebsOut:
Bedebao#4842: Neon Genesis Evangelion is one of the staple animes. It is notable for being a deconstruction of the mecha genre and turning into a symbolic mind fuck. Probably want to have a bit of experience with anime before trying it. The studio ran out of budget for the last two episodes, so they are rushed and weird, even by the series' standards. End of Evangelion is a movie that aims to give the series a proper ending.
StellaAthena#3530: So NGE is to mechas as PMMM is to magic girls?
Bedebao#4842: Both are deconstructions, yes.
Bedebao#4842: For more info: https://tvtropes.org/pmwiki/pmwiki.php/Main/Deconstruction
Bedebao#4842: and some anime examples https://tvtropes.org/pmwiki/pmwiki.php/GenreDeconstruction/AnimeAndManga
Louis#0144: I didn’t know stella is a weeb
Louis#0144: The anime pfp should have been a giveaway
Louis#0144: But like so many math majors I know have anime pfps but hate anime
Daj#7482: Many math majors seem to be liars then
andyljones#7746: code switching
Louis#0144: LOL
Daj#7482: @mgostIH tbh I have no idea what server boosting really means, but thanks I think haha
mgostIH#0245: thisss https://cdn.discordapp.com/attachments/729741769738158194/802559983296315392/unknown.png
mgostIH#0245: It means I get a pink name :viriglasses:
Daj#7482: Neat
Louis#0144: Your name isn’t pink
Daj#7482: Is to me
Louis#0144: Oh
Daj#7482: We can make more shitpost emojis :carlos2:
StellaAthena#3530: Interesting. It is to me but I’m not an admin.
Daj#7482: It automatically assigned him the Server Booster role, so Louis' client is probably just not updated
Louis#0144: Not yet
Daj#7482: Stella is an admin in our hearts ❤️
Daj#7482: (I just don't hand out literal admin rights because it's not needed and bad security practice)
Louis#0144: Honorary admin
Louis#0144: Has anyone here used RAG
triggerhappygandi#0001: @StellaAthena do you watch anime too?
StellaAthena#3530: @triggerhappygandi sometimes
StellaAthena#3530: I'm not big on TV
bmk#1476: sometimes-anime gang, unite!
triggerhappygandi#0001: I never took you for an anime enthusiast @StellaAthena
triggerhappygandi#0001: Don't join their ranks. We must stop them
zphang#7252: Gurren Lagann is literally the *scaling laws go BRR* of anime
triggerhappygandi#0001: Never thought of it like that
triggerhappygandi#0001: Isn't this the anime where the robot summons a drill larger than the universe?
triggerhappygandi#0001: :ptsd:
zphang#7252: not quite bigger than the universe
zphang#7252: But it does have this
zphang#7252: https://i.imgur.com/aYSteul.gif
zphang#7252: *Scaling law, colorized*
Big Fat Duck#0266: another gpt article top of hackernews again
Big Fat Duck#0266: https://bkkaggle.github.io/blog/algpt2/2020/07/17/ALGPT2-part-2.html
zphang#7252: The author is on this server! @bilal2vec
bilal2vec#1816: heyyyy that me
nz#9710: lmao I love this server
bilal2vec#1816: feel free to ping me about it :)
bmk#1476: You should help with gpt3 replication
bmk#1476: We need help in the Deepspeed mines
bilal2vec#1816: yeee i wish i had the time :')
bilal2vec#1816: but with school and internship search im p close to getting burnt out
chirp#4545: openai has a new job posting for a research engineer: https://jobs.lever.co/openai/b5248585-a392-4d57-91e6-f046e630f53e
gwern#1782: looks like they're still insisting on being physically located in SF?
mick#2835: whoever gets it, show up coughing on the first day and see if that fixes the policy 🤣
mick#2835: jk don't do that it's probably terrorism these days since people can't take a joke lol
StellaAthena#3530: I inquired about this last month and was told "yes" rather forcefully
gwern#1782: I was wondering if coronavirus would be able to break that. if they've stuck with their no-remote policy this long, I guess they'll probably make it
Sid#2121: I mean, I can understand it. I am a lot more unproductive when i'm at home.
bmk#1476: this makes eleuther the anti-oa
bmk#1476: 100% remote
Deleted User#0000: most companies here allow remote. that's really weird if OA is not
Deleted User#0000: it's rather selfish not to allow remote during a pandemic if the work does not require it
Deleted User#0000: hope it isn't true
Aran Komatsuzaki#5714: @Deleted User de23c58c how's your progress with alphafold? do you think you can finish it anytime soon?
Sid#2121: yeah, i'm really surprised they're still insisting on in person work during a pandemic.
Sid#2121: any other time though, i think a team is much more productive when they can work in the same space and share ideas in person
bmk#1476: so what youre saying is berlin eleuther office post plague
Sid#2121: :yes:
Sid#2121: yes
gwern#1782: they claim the most creative AI research *requires* non-remote
Sid#2121: *requires* is a little strong, but it certainly helps. Not really worth risking people's lives over though
mick#2835: idk I thought vc was pretty productive
mick#2835: we just need to get better at writing down the important points
mick#2835: *is there like some kind of software that can do that?*
chirp#4545: otter!
nz#9710: @StellaAthena do you by chance know the goldt lab at SISSA? https://goldtlab.github.io/
StellaAthena#3530: It looks like it might be one person so far lol. But the paper “Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup” was good. Goldt was first author. I don’t recognize any of the other papers listed.
nz#9710: Yea I ask since I see he's setting the lab up in Italy
nz#9710: It's kind of a first
StellaAthena#3530: @nz first in what sense
nz#9710: I'm not aware of any important AI labs here in Italy, even less one about DL theory (until now)
nz#9710: Polimi won a paper award from neurips this year, but apart from that, pretty much nothing
nz#9710: So happy to hear his work (the one you cited) was good -- hopefully he's able to set up a good environment for AI research, we really need one
StellaAthena#3530: Ahhh gotcha. We will have to see what the future brings for him 🙂
janus#0150: Working in person is much more productive for most people. It takes a good team to be able to cooperate efficiently enough remotely or be able to work more independently. However, I've also found that being dead is much less productive than being alive.
Louis#0144: Anyone here ever use RAG?
bmk#1476: Instructions unclear, died and found my ghost trapped inside a writing factory for eternity
janus#0150: Nice. Link your infinite ghost blog. I'll skim it when I have time
3dprint_the_world#6486: I'm way more productive at home.
3dprint_the_world#6486: at the office I talk to people way too much.
Sid#2121: can confirm, am dead, barely get anything done
Sid#2121: at home i slack off too much 😆 , need someone watching over my shoulder
3dprint_the_world#6486: I think the key for me is having a dedicated 'office' room at my house
3dprint_the_world#6486: where I can just block out all distractions and work
3dprint_the_world#6486: I hate open plan offices with a passion
Sid#2121: being in an office room doesn't stop me from watching youtube and playing chess
3dprint_the_world#6486: it's literally impossible for me to focus at the office
3dprint_the_world#6486: there's too much activity
Sid#2121: i'm sometimes the same, really depends on the day
Sid#2121: i have to weigh up the distraction levels of people at the office v the distraction levels of the entire internet
Sid#2121: normally the former is less distracting
olives ❀#2305: If GPT is trying to convince me that it is conscious, should I trust it?
Milan Cvitkovic#1279: I'd be more worried if it tries to convince you it's not conscious.
Sahl#0630: Hey I’m just a p-zombie, I can relate to it
bmk#1476: abolish qualia, retvrn to p-zombie
Sahl#0630: TRUE
mick#2835: Whenever someone says they are a p-zombie I have to take it at face value.
Sahl#0630: We wouldn’t know that we are though
Sahl#0630: So you really can’t
Sahl#0630: But trust me I am
Sahl#0630: No consciousness here
mick#2835: I mean. If that's possible it could be true? But I'm suspicious because I sortof conceptualize consciousness as working like a fire in the space of information
mick#2835: And in terms of that metaphor, I feel enough "heat" radiating from you that I suspect you're past the "ignition threshold" lol
Sahl#0630: Just because someone is extremely smart (like me 😎) doesn’t mean they’re conscious
mick#2835: What's it called when someone is suspicious that all other entities are p-zombies?
bmk#1476: solipsism?
TylerRoost#8017: Whats the most mind blowing thing you can verify was written by a language model?
kindiana#1016: take your pick for gwern's gpt3 creative fiction page
bmk#1476: technically, gpt3 can write any string
bmk#1476: so, i mean, do your bits have color?
3dprint_the_world#6486: I wonder how many rationalists actively experiment with psychedelic substances
Sahl#0630: 6 rn
Sahl#0630: maybe 7 if fred got his shipment
bmk#1476: i wonder how many rationalists
Milan Cvitkovic#1279: I wonder how many
bmk#1476: i wonder
Sahl#0630: I wonder
Milan Cvitkovic#1279: Found the language model
Sahl#0630: Found the language model
bmk#1476: i
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/802752927109742612/Frans_Hals_-_Portret_van_RenC3A9_Descartes.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/802752971104190464/Max_Stirner-1200x900-cropped.png
Sahl#0630: i’nt
3dprint_the_world#6486: nope, if they were language models they would operate token by token, not word by word
3dprint_the_world#6486: i wonder how ma
bmk#1476: ```>>> tok.encode('many')
[21834]
```
bmk#1476: checkmate gpt2
3dprint_the_world#6486: shit
Milan Cvitkovic#1279: *dancehall klaxon*
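(bmk's snippet assumes a `tok` already in scope; a runnable equivalent, assuming the Hugging Face `transformers` package and its GPT-2 tokenizer:)

```
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")

# "many" is a single BPE token ([21834] in bmk's session above),
# so a model emitting whole tokens can only stop mid-word if the
# fragment happens to be a token of its own:
print(tok.encode("many"))
print(tok.encode("i wonder how ma"))
```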
Singularity#9001: More interestingly, how many rationalists imbibe in psychedelics and don't disclose that information as to avoid signalling that they are adjacent to woo and New-Age pseudoscience type material. I know the r/psychonaut community is really circlejerky, r/rationalpsychonaut is better but the best stuff is probably all of the information available on the psychonaut wiki.
It would be an absolute gem if I could find someone who writes about psychedelics at slatestarcodex quality levels- if anyone has any good blog recs for that do send me them.
It wouldn't surprise me if a fair portion of rationalists engaged in psychedelia since it is one of the few things that allows for cognitive reorganization and definitely would be a powerful 'lesswrong' tool if used with that intention.
The biggest problem I always find is that, most attention goes toward the spectacle at face value level rather than trying to understand the underlying framework that allows all of the experiences, and so there's little meta discussion except in certain specific communities.
voxs#0001: we should hack the NSA for the petabytes of data they have
janus#0150: I wasn't aware of these memes. I haven't seen rationalists online speak out against psychedelics and many/most of my 'rationalist' friends do psychedelics.
janus#0150: better to hack Google, although the NSA would probably be easier?
mick#2835: If you're serious it's trivially the reverse on both. NSA has credentials into Google's data plus their own network separate from the public internet entirely.
3dprint_the_world#6486: I would love this.
3dprint_the_world#6486: the significance of psychedelics, to me, is that small chemical modifications can cause you to have profoundly different subjective experiences, to the point that you can't even communicate this experience back to your normal self.
bmk#1476: @kindiana this you? https://cdn.discordapp.com/attachments/729741769738158194/802796310729457665/unknown.png
kindiana#1016: yes
bmk#1476: exciting stuff
kindiana#1016: :brr:
janus#0150: I think they have access to some of Google's data, not all, no? I suppose they could get access to most of it in terms of bytes if they wanted. But hacking them probably wouldn't give it to you. The bit about their network being separate is interesting, although perhaps it's still easier? If they are disconnected they are probably using some real legacy shit.
bmk#1476: :smallbrain: unhackable because of security
:bigbrain: unhackable because your system is so cursed that not even hackers want to figure it out
mick#2835: Lol I wouldn't even bother thinking about it so much :P
janus#0150: lmao. Great point. I am non-ironically an advocate of security through obscurity.
janus#0150: but... training data 🥺
kindiana#1016: :bigbrain: security through having too much data and too narrow pipes so they can't actually get all the data out in a reasonable time
mick#2835: Ah yes, the good ol' information theoretic airgap.
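(the "narrow pipes" defense is easy to put numbers on; hoard size and link speed below are illustrative assumptions:)

```
# Back-of-the-envelope: exfiltrating a petabyte over a saturated gigabit link.
data_bytes = 1e15            # 1 PB hoard (assumed)
link_bits_per_s = 1e9        # 1 Gbit/s of exfil bandwidth (assumed)
seconds = data_bytes * 8 / link_bits_per_s
print(seconds / 86400, "days per petabyte")  # ~93 days
```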
cfoster0#4356: I'm not super read up on them but the Qualia Research Institute seems like they might be rationalist-adjacent psychonauts
cfoster0#4356: This page also has a section titled "Rational Psychonautics" lol
bmk#1476: i think the more accurate term might be postrat? or idk i get the terms mixed up too
bmk#1476: i think rat-adj is a very broad term whereas postrat refers specifically to rat with added woo
mick#2835: What the *heck* is woo actually?
3dprint_the_world#6486: the problem is that a lot of people who have obscure systems don't get hacked so they think it must be because their systems are so awful.
but really it's just because no one cares.
janus#0150: thats just my first level of obscurity. make no one care.
3dprint_the_world#6486: yep
bmk#1476: i think the problem is that you're almost always either severely overestimating or severely underestimating the motivatedness of your adversary
bmk#1476: attacks seem to fall in two broad categories
bmk#1476: spray-and-pray attacks, and targeted attacks
bmk#1476: for the first, just dont set your password to `password`, use a pw manager, close ports if you dont need em, etc and youre fine
bmk#1476: for the second, for mere mortals like us, youre basically fucked either way
bmk#1476: obviously the calculus is different for people who know what theyre doing, but thats not me
janus#0150: I keep all the things in my kitchen in the third place you would think to look.
mick#2835: You're still fucked when you know what you're doing, you just can better identify where and how you're fucked lol
mick#2835: It's everything. Like how your laptop screams out everything you do over like 3 different channels lol.
mick#2835: And how you need so much software just to boot up that it's essentially guaranteed to contain an exploitable flaw somewhere in the stack.
bmk#1476: For a mere mortal like me, is it even possible to be meaningfully more secure than using a password manager?
mick#2835: At the end of the day, security is a feeling.
mick#2835: 256 units of some information metric doesn't equal security, and while this sounds "cheap" or something, it's actually relevant and not just a cop-out.
mick#2835: Basically when the user doesn't "feel secure" they immediately start screwing up from an infosec point of view for whatever reason
bmk#1476: Security always comes at the cost of convenience, and if it doesn't help to full disk encrypt everything and have 2fa with my yubikey and install rf shielding on my walls, i don't see why I'd want to
mick#2835: Or if things don't "feel convenient" they do so even worse
bmk#1476: But isn't most security inconvenient
mick#2835: Exactly lol. It's fucked.
mick#2835: Convenience actually is one of the most important factors to balance with security.
mick#2835: Password rotations, for example, are usually more harm than good because people then write down their passwords on sticky notes.
mick#2835: Or some equivalent
bmk#1476: Right password rotations never made sense to me
bmk#1476: If i never reuse my passwords anyways, i don't see any reason to waste my time on it
mick#2835: Also everything we've been told about choosing passwords is pretty much the opposite of a good idea lol
janus#0150: I think proper use of qubes or tails (not easy but not impossible) is robust against the majority of adversaries. At some point it becomes much much easier to kidnap and torture you.
mick#2835: https://cdn.discordapp.com/attachments/729741769738158194/802807453611458560/password_strength.png
janus#0150: Dictionary attack...
bmk#1476: I just use a pw manager lol
mick#2835: But how do you unlock the PW manager?? lol
bmk#1476: Also lol if this ignites an entropy debate in an ML server...
janus#0150: 2fa by mail
bmk#1476: 32 chars, completely randomly generated using nuclear decay source
mick#2835: That would be best case scenario kinda tbh
mick#2835: chars as in letters or chars like C++ (as in bytes)? lol
bmk#1476: Alphanumeric + punctuation
mick#2835: 165 bits. passable.
bmk#1476: It's a pita every time I need to log into something tho so I might shorten it a bit
mick#2835: or, is it cased too?
bmk#1476: Though i also do a lot more rounds than the default so that must add a handful of bits worth
mick#2835: If it's cased then you're at 197 which is pretty perfect imo
mick#2835: I usually aim for 192 bit keys on paranoid systems
bmk#1476: Is 165 bits really only "passable"? O.o
bmk#1476: I must be miscalibrated
mick#2835: It's just barely out of reach of quantum computers in the future in theory
mick#2835: Because grover's algorithm should break it with noticeable probability after only like 2^82 time
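(for anyone following along, these figures fall out of length x log2(alphabet); a quick check, with guesses in the comments about which alphabets mick means:)

```
import math

def entropy_bits(alphabet_size, length):
    # Each uniformly random character contributes log2(alphabet) bits.
    return length * math.log2(alphabet_size)

print(entropy_bits(26 + 10, 32))       # ~165 bits: lowercase + digits
print(entropy_bits(52 + 10 + 10, 32))  # ~197 bits: cased + ~10 punctuation

# Grover's algorithm searches 2^n keys in ~2^(n/2) steps, halving the
# effective strength: 165 / 2 ~ 82, hence "2^82 time".
```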
bmk#1476: I was under the impression that basically anything longer than a dozen characters completely random and containing at least alphanumeric was safe, given a reasonable number of rounds, huh
mick#2835: Well again it's only a quantum adversary that can touch a 160 key lol
mick#2835: And even that is pretty unreasonable
mick#2835: I use 160 on real systems for efficiency
bmk#1476: Ah
bmk#1476: So I'm assuming chopping it in half would probably still be fine assuming non quantum adversaries
bmk#1476: I mean, if someone wants to use a quantum computer to break into my discord account, i have bigger problems
mick#2835: ehhh almost
mick#2835: 80 bits is kinda weak these days, I think bitcoin shits out more than that on some short interval
bmk#1476: Yeah but also sha256 is cheap
bmk#1476: A good kdf is at least a dozen or so more bits more expensive in practice right
mick#2835: The good ones are basically arbitrarily more expensive in practice, and they bottle neck on RAM so you can't build a cheap ASIC or GPU cluster to break it
bmk#1476: Yeah i set my rounds high enough that it takes a good few seconds to unlock using my cpu
mick#2835: Yeah... then short of using a high end smartcard, I think that's basically as good as it gets lol
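(a sketch of the "rounds buy bits" point, using `hashlib.scrypt` from the Python standard library; parameters are illustrative, not a recommendation:)

```
import hashlib
import os

salt = os.urandom(16)
# scrypt is memory-hard: n sets CPU/memory cost, r the block size.
# n=2**14, r=8 needs ~16 MB per guess, which starves GPU/ASIC rigs.
key = hashlib.scrypt(b"my master password", salt=salt,
                     n=2**14, r=8, p=1, dklen=32)

# Each doubling of n doubles an attacker's per-guess cost, i.e. adds
# roughly one effective bit: the "dozen or so more bits" above.
print(key.hex())
```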
bmk#1476: Awesome
bmk#1476: Also Smartcards are cool i need to get one eventually just to play with it if anything
mick#2835: We probably should do that for this project eventually
bmk#1476: Lol our opsec rn is an absolute trainwreck
mick#2835: If we start releasing models that a fucktillion people are using we'll become a juicy hack target you know
bmk#1476: Good point
bmk#1476: We need to start doing that eventually
mick#2835: Would be good to do digital signatures
bmk#1476: I guess the first step is to not give root access to a dozen people at once
mick#2835: lol
3dprint_the_world#6486: curious why you say this. if the models are released why would anyone need to hack EAI
mick#2835: If we make an easy to use package like HuggingFace, that's really where the issue comes up
mick#2835: People will integrate it into their apps, which will make our repo get pulled into lots of apps.
mick#2835: Typical red team approach these days is to infect dependencies.
mick#2835: Real world examples would be like node packages getting viruses slipped into the minified version only, so nobody sees a problem in the source code but the virus runs on people's browsers on production sites. There have been a few of these that did things like: remain dormant until it found the right target and deployed a wallet stealing payload.
fazz#8459: I saw a dependency attack once back 15 yrs ago. Disgruntled employee decompiled the Java logger and recompiled a version that deleted user system files, but randomly, so it was non-deterministic to diagnose.
mgostIH#0245: @mick I don't think you can do a quantum attack on passwords, even if you "invert" the hash function you are still unlikely to get the real one at high entropies
mgostIH#0245: Moreover most hashing algorithms for passwords are extremely resource intensive, I have my doubts quantum computers will be able to solve those kind of problems before a technological singularity
mick#2835: I understand the practical concerns, but I don't want to do a security proof and I even more don't want to make a claim like that without proof.
mgostIH#0245: this kind of thing is good in general for master passwords, assuming you sample uniformly random enough words
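(doing the xkcd scheme properly only needs a uniform sampler over a big wordlist; a sketch with the stdlib `secrets` module, where the wordlist path is hypothetical:)

```
import math
import secrets

# Assumes a diceware-style list of 7776 words, one per line.
with open("diceware.txt") as f:             # hypothetical path
    words = [line.strip() for line in f]

passphrase = " ".join(secrets.choice(words) for _ in range(4))
print(passphrase)
print(4 * math.log2(len(words)), "bits")    # ~51.7 for 7776 words
```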
mgostIH#0245: A lot of things currently used in practice, even symmetrical, have "only" ~128 bits of security, so there's no point over going with a password with more entropy than that
mgostIH#0245: I personally don't find it worthwhile to worry about quantum computers because of Grover's search
mick#2835: I have to say it. Making bets like that in security design is a fucking awful move.
mgostIH#0245: I think this paper makes a good argument against random extremely high estimates occurring all over crypto: https://eprint.iacr.org/2019/1492.pdf
mick#2835: I've read that before. The fact that you think the estimate is extreme shows you're very miscalibrated.
mick#2835: In this situation using 256-bit costs nothing at all, the payload is so small.
mgostIH#0245: Quantum computers would still need **HUGE** scale before being able to tackle grover's search on 128 bits symmetric security, before that happens we'd see any asymmetrical quantum weak cryptography fall down already
mgostIH#0245: A 256 bits password is a nightmare to remember
mick#2835: No, you use 256-bit security functions, and he already has a >192 bit password
mick#2835: Apply a good KDF and it's basically as good as you could want
mick#2835: Cut the security function down to 128 and now you're betting on quantum computers being snake oil.
mgostIH#0245: What I am saying is that modern KDFs are made purposefully to avoid parallel attacks, Grover search still requires you to phrase any function in terms of some unitary transform
mick#2835: "Attacks always get better, not worse"
mgostIH#0245: Not necessarily saying that quantum computers won't get there, but by the time they will we'll already be way ahead in technology for it to be a concern, even assuming exponential scaling starting from tomorrow
mick#2835: Why are you trying to reduce crypto to just barely safe levels when there is zero pressure on resources?
mick#2835: People outside AI don't think AGI will ever be a thing.
mick#2835: Tons of people claim some engineering problem will make it impossible to scale, and then someone finds a way to scale it.
mgostIH#0245: We have more of a proof that AGI is reachable than Quantum people have proof that factoring large numbers will be achievable in 10-20 years
mick#2835: Yet people don't believe it, so extrapolate.
mick#2835: We can't trust ourselves to have a good guess on how QCs will pan out.
mick#2835: Security proofs aren't empirical at all.
mick#2835: Like the slightest hint of empirical is met with immediately being laughed out of the venue
mick#2835: The fact that we have to re-think the way algorithmic complexity ties into all of this is reason enough to take it slow and play it safe.
mick#2835: The fact that engineers are claiming to have new ideas and approaches all the time is just extra spice on that
mgostIH#0245: I can give more weight to results that are already here:
Quantum computers: maybe able to work with 5 qubits right now, nowhere near solving scaling issues due to quantum coherence
AI: We got scaling figured out and year after year there's new techniques and breakthroughs being done, it's just a matter of time even assuming no theory advancements in artificial intelligence, hell, we pretty much tackled the Turing Test
mgostIH#0245: This isn't true for symmetric algorithms, the strength of hashes and block ciphers is based only on the fact that we haven't yet figure out how to break them
mgostIH#0245: Even the one way hash function hypothesis is based on P != NP
mgostIH#0245: Factoring itself isn't known to be not in P
mick#2835: "security functions" are taken as unbroken axiomatically and there are extremely specific definitions for it all
mgostIH#0245: Axiomatically because of empirical assumptions
mick#2835: NP anything is a red herring that hobbyists waste time thinking about
mgostIH#0245: When new hash functions are proposed they don't necessarily build from the same ground as AES
mick#2835: Axiomatically for pragmatic reasons, because they can be swapped out for something unbroken in the event that even the slightest hint goes wrong with them.
mgostIH#0245: ChaCha20 is extremely different from AES, yet we are using it right now
mick#2835: And in practice we do swap them out at the slightest hint. The earliest distinguisher is a fire alarm event.
mick#2835: It's all rigorous
mgostIH#0245: Those pragmatic reasons come from empirical evidence of them not being broken yet
mick#2835: You're missing the point. The argument you're trying to make doesn't make the point you're trying to assert.
mgostIH#0245: I'll be back later
mick#2835: Security proofs have nothing to do with "security functions" that we assume are unbroken.
mick#2835: Those are treated entirely as a black box with extremely rigorous information theoretic quantifications on the information leakage.
mick#2835: Security proofs never, ever make mention of "We assume AES is unbroken because yadda yadda" no.
mick#2835: We assume AES is broken and wait to find the slightest suggestion of it! If anyone in the world finds any way to distinguish any outputs that should look random as not being precisely uniform random, that's the level of deviation where we discard the entire symmetric primitive and use a new one that looks random under all known algorithms.
mick#2835: Bringing up the design of the low level symmetric primitives at all comes across as intellectually dishonest. The black box abstraction is precise enough to consider Grover's algorithm with. **All** possible arguments rooted in deviations from the black box abstraction immediately represent "weak" algorithms, which can only support my position that you're "better safe than sorry" here when it costs nothing.
mgostIH#0245: My argument is about passwords:
Having them with an entropy above 128 bits is irrelevant and will future proof you for at **least** 20 years. The underlying primitives behind the rest of encryption can be 256 bits, but even nowadays it's not a strong requirement. This is because even if quantum computers become available tomorrow, there's yet so many technical issues surrounding their **long term** applicability that even in the case they'd get **that** powerful (To crack password based hashes for 128 bits passwords) in 20-30 years, beyond any expectation, their usage would be a total technological revolution of human kind. At that point it's not worth discussing whether your password should be 128 or 256 bits of strength.
To add further fuel to my point, Grover search requires the black box function to be **unitary**, which while achievable, requires a huge additional polynomial cost in encoding algorithms that make heavy use of current hardware.
Therefore 128 bits passwords are "good enough"
mgostIH#0245: Going beyond that would make them extremely hard to remember or extremely long to type down, there's practical considerations to make of security vs convenience
mick#2835: 128-bit is already impractical to get people to remember
mgostIH#0245: Yes exactly, which is why going beyond that is already nonsensical
mick#2835: I'm not going to tell someone who already memorized a 192 bit key to replace it with a 128
mgostIH#0245: Mine are already not at 128 bits security
mgostIH#0245: Yeah of course, me neither, but I am not going to tell people "Your 92 bit security is too low, you should consider 256 even for your password"
mick#2835: He's chosen sizes appropriate for a **key** rather than a **password** and he's willing to make the memory commitment for it.
mick#2835: He's already paid the costs, give him what he deserves.
mgostIH#0245: But I am not arguing he shouldn't do that if he memorised it already, it's just a point of diminishing returns for anyone else getting there
mgostIH#0245: The risk of forgetting a part of a password can itself be a problem tho
mick#2835: I have to reiterate that for standard password usage, I've found that 128-bit is already unrealistic.
mick#2835: Sticky note / forgotten password outcomes
mgostIH#0245: Aye, then we agree on that
mgostIH#0245: But I'd also make the point that if quantum computers become a problem for 128 bit passwords, the world would soon become completely different
mgostIH#0245: With AI or not
mick#2835: Maybe my position makes more sense if I mention that I never suggest passwords at all lol
mick#2835: I push every organization away from passwords any time I can because they don't really work and lead to users installing password managers (or worse)
mick#2835: It's security theater at best.
mgostIH#0245: In favour of what?
mick#2835: Actual gaping security holes at average.
mick#2835: Physical access is often a good one. It's basically the ghetto form of "something you have"
mick#2835: Smartcards
mgostIH#0245: I'd usually just add both then
mick#2835: A false sense of security makes users act less carefully.
mgostIH#0245: Don't make strict passwords requirement, but ask for a simple one too
mick#2835: Why inconvenience them with useless bullshit?
mgostIH#0245: Because they probably already have a password
mick#2835: So yeah don't force stupid security theater bullshit on the users (which actually harms security due to side effects) just because other people did.
mgostIH#0245: Also I don't think having dongles is applicable in the majority of cases, password managers seems like a good solution
mick#2835: It's a "good" solution to a problem that need not exist.
mick#2835: It's like you shouldn't have all these eggs anyways, so put them all in one basket!
mick#2835: Obviously not a long term solution imo.
mick#2835: On SSH we use public key files and it's way more convenient and secure.
mgostIH#0245: Sure but I doubt people will SSH into their facebook account
mick#2835: Having a public key file stored quietly in some folder on your machine is not a technical difficulty to achieve. Let's not make emotional appeals?
mick#2835: If FB deployed a system based on keyfiles it would work just fine. I've designed and built these before for more complex authentication schemes than an FB account.
mick#2835: It's very realistic and always more convenient than passwords.
Sahl#0630: I think the main problem with this is when people use a different computer
mgostIH#0245: Yeah that and possible hardware failure
mick#2835: I thought those would be problems but they were super easy to solve when I actually just thought about it for more than literally 10 seconds.
Sahl#0630: Alright how would you solve them
Sahl#0630: Especially for tech illiterate users
mgostIH#0245: Also phones get stolen a lot too
mgostIH#0245: A password still seems necessary to prevent immediate use from someone that has temporary access on your device
mick#2835: You mean how did I solve it? You can transfer authentication between nodes using an authorization protocol and recovery is implemented through standard key escrow.
mick#2835: No password required, only a PIN if you're paranoid.
Sahl#0630: Well ok, how will a normal person do this?
mgostIH#0245: A PIN is just a password
Sahl#0630: Greg goes on a school computer
Sahl#0630: How does he log into school website
mick#2835: You press the "Yes" button on your phone when it asks if it's you on the new device.
mick#2835: Like Google already does.
Sahl#0630: How does the key get to the new computer
mick#2835: PKI
Sahl#0630: Or how does the service associate the new computer with the key
mick#2835: Those are the trivial operations of PKI.
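(those "trivial operations", sketched: the new machine mints its own keypair and an already-trusted device signs the new public key once the user taps Yes. Ed25519 via the `cryptography` package; the enrollment flow is a simplification of what mick describes, not his actual protocol:)

```
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

phone_key = Ed25519PrivateKey.generate()       # trusted device; the service
                                               # already knows its public half
new_device_key = Ed25519PrivateKey.generate()  # school computer; private key
                                               # never leaves the machine

new_pub = new_device_key.public_key().public_bytes(
    Encoding.Raw, PublicFormat.Raw)

endorsement = phone_key.sign(new_pub)          # user pressed "Yes" on the phone

# Service side: verify the endorsement against the phone's known key,
# then start accepting logins signed by the new device's key.
phone_key.public_key().verify(endorsement, new_pub)  # raises if forged
print("new device enrolled")
```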
Sahl#0630: So you need 2 devices to sign into anything
Sahl#0630: Except on your phone
mick#2835: No
Sahl#0630: Or whatever else has your key?
mgostIH#0245: I mean I somewhat get what you are saying, and ideally technology should shift towards more that kind of auth, but I still don't think passwords should be replaced entirely
mick#2835: Your approach of trying to understand why it can't work is bad because it actually does work.
mick#2835: Try to imagine it working since that's reality, and you might have an easier time coming up with a realistic image.
Sahl#0630: I’m not trying to understand why it can’t work
Sahl#0630: I’m trying to understand if it can’t work
Sahl#0630: Assume good faith...
mick#2835: Computers are fast enough that "passwords" are now just PIN codes.
mick#2835: So use a PIN code because it's way more convenient.
mgostIH#0245: Debatable
mgostIH#0245: Argon2 is really hella slow
mick#2835: Sure, but the fact that it's debatable is pretty much a clear cut "don't risk it" in the field
Sahl#0630: So you’re saying that your main devices authenticate for you on other devices
Sahl#0630: And you are linked to your main devices based on username
mick#2835: I spent a long time developing this protocol, if you want to go in-depth on it then we should pick a better venue
mgostIH#0245: Argon2 slows things down so much that even crack stations can only achieve some KHash/s
mick#2835: Human factor.
mgostIH#0245: So I think passwords that have some decent entropy would still take too much to get cracked
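(for concreteness, the Argon2 setup under discussion, via the `argon2-cffi` package; the parameters shown are in the ballpark of its defaults:)

```
from argon2 import PasswordHasher

# 64 MiB and 3 passes per guess is what caps cracking rigs at KH/s.
ph = PasswordHasher(time_cost=3, memory_cost=64 * 1024, parallelism=4)

digest = ph.hash("some decent-entropy password")
ph.verify(digest, "some decent-entropy password")  # raises on mismatch
print(digest)
```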
Sahl#0630: I like the idea of it, I’d want something like it to be standard
mick#2835: I'm waiting for post quantum crypto to settle down first.
mgostIH#0245: I'll think of it as being a real problem when the RSA 1024 challenge gets broken
Sahl#0630: your method also means that an unlocked computer is the same as an unlocked password manager, which is kinda how it is already if people use browser password saving
mick#2835: That might be fine for you but when your customers need guarantees of 20+ years confidentiality you can't fuck around with "but Argon2 is hella slow it's fine"
mgostIH#0245: Argon2 won't still be beaten by quantum computers because of RSA
mgostIH#0245: I am talking asymmetric algorithms
mgostIH#0245: Breaking RSA 1024 would show an actual result
mick#2835: My point is almost directly that talking about something in isolation like that is a common noob mistake :/
mgostIH#0245: kind of like image recognition is an actual result of AI, or GPT-3
mick#2835: Security is a feeling. Not a number.
mick#2835: When you see an RSA challenge broken, it means that's now a threat that travels back in time possibly decades.
mgostIH#0245: Yeah if some agency stored literally every single HTTPS connection in decades
mick#2835: Any one thing being theoretically "hard to break" doesn't matter whatsoever if there's literally anything anywhere else, even the slightest thing, anywhere else in the system.
mick#2835: One flaw = game over.
mgostIH#0245: I'd argue that AI being able to recover data from the most meaningless patterns to us humans will be far more of a privacy problem
mgostIH#0245: Even non sentient AI I mean
mgostIH#0245: Like being able to discover who you are just based on what you write on Discord
mick#2835: If it doesn't matter anyways then drop the password!
mgostIH#0245: When the entire protocol you are talking about gets some widespread implementation and analysis I'll consider it
mgostIH#0245: However I could consider some combination of passwords + your token based thingy
mgostIH#0245: Where the token is itself derived from a password
mick#2835: My protocol isn't released to the public anyways, though if it were then as the app developer you'd get **zero** control over how the user authenticates :P
mick#2835: That's one of the big breakthroughs we had in making it actually work irl
Sahl#0630: I think that’s ideal
mick#2835: This is necessary because one user might use a keyfile, another might insist on a password, and the self respecting ones will use smartcards with PIN numbers lol
mick#2835: I really had to go the extra mile to make passwords work, I did it just for you <3
Sahl#0630: I actually really don’t like passwords
Sahl#0630: tbh
mick#2835: lol everyone hates passwords!
mick#2835: it's just what app devs know how to implement and think is secure
mick#2835: not that anyone should ever do this, or even be able to, but if you run some stats on your users passwords it's often hilariously bad
mgostIH#0245: Why does lucidrains hop in and out of the server? 🤔
Daj#7482: He is too powerful, if he stayed in here too long the server would collapse
mgostIH#0245: He's a transformer sent from the future, each attention head has a different willpower and they take control over one another at different times
nz#9710: I think he does it not to procrastinate
nz#9710: and in all honesty I kinda get it, when I'm on discord my productivity goes 📉
triggerhappygandi#0001: He pushes 42069 commits on github every day
Deleted User#0000: ah so it is. big minus points after hearing that
Deleted User#0000: i'm old enough not to be a sheep at this point
Deleted User#0000: nonsense.
Deleted User#0000: hope that decision isn't made by Ilya or someone i respect
gwern#1782: (he's not criticizing it that I've heard, and ilya could certainly insist on working from home if he wanted to)
Deleted User#0000: lol, finish no, make progress yes
Deleted User#0000: as i get closer to this equivariance code, i'm starting to see the warts in the different approaches
Louis#0144: OAI requires in person?
Daj#7482: Last I talked to Jack he was working from home iirc
Daj#7482: Though I don't know when he left OAI
Deleted User#0000: ohh that's good to hear, maybe it's optional
Deleted User#0000: tech has an obsession with whiteboarding. i get it, it's fun and spurs creativity
Deleted User#0000: but now is not the time
triggerhappygandi#0001: Of all the industries to ask for physical presence in the job..........
gwern#1782: (the real reason is the low-yield nuke buried under the OA offices as a failsafe)
bmk#1476: what are they, *swiss*?
StellaAthena#3530: They don’t require you to be in person. They require you to work remotely *from SF* which is almost as dumb. If you get a job at OAI they require that you move to SF.
bmk#1476: it makes absolutely no sense whatsoever
StellaAthena#3530: I rescinded a job application when I found out lol.
StellaAthena#3530: Apparently they really care about their “close-knit start-up culture” and being in another city is too much of a problem for that.
Sphinx#2092: FWIW, its not just OAI.
Sphinx#2092: Google is also doing the same thing, I believe.
StellaAthena#3530: Google is not doing that. My sister (who works at Google in NYC) just broke her lease and is AirBnB’ing in the mountains in Colorado
Sphinx#2092: For now, yes, but they expect you to come back.
StellaAthena#3530: Sure
Sphinx#2092: and they won't let you sign an offer if you won't admit that you will be doing that.
StellaAthena#3530: OAI expects you to move to SF *today*
Sphinx#2092: Yeah I suppose the urgency is a bit extreme.
bmk#1476: i mean, i obviously prefer remote. but if i had a chance to work at OA i wouldnt mind moving tbh
StellaAthena#3530: I’ve moved every year for the past 9
StellaAthena#3530: I just don’t want to move again
StellaAthena#3530: Speaking of “anything safe to sell to MSFT is safe to make public” look at what Microsoft is up to: https://roguerocket.com/2021/01/22/microsoft-black-mirror/
bmk#1476: anyways my plan is to spend the next year laying low and building up my resume by publishing stuff through eleuther, then try to apply for OA
andyljones#7746: this is a good thing?
StellaAthena#3530: Oh I misread that
StellaAthena#3530: Nvm
StellaAthena#3530: I thought that they were gaining access to dead people’s data (and confused about why that would be a patent thing)
bmk#1476: i dont know how i feel about making chatbots of dead people *but* i think the "trying to play god" argument against it is an incredibly bad one
bmk#1476: > “It shines a spotlight on our desperate need to reverse a natural and necessary part of life without considering the consequences on our emotional well-being,” Roxanne Sancto said in a review for Paste Magazine.
StellaAthena#3530: I was actually talking to my parents about this recently, asking how they’d feel about doing it for my grandfather and grandmother
StellaAthena#3530: Like.:.
StellaAthena#3530: It’s not hard
bmk#1476: i think the problem with this argument is it's basically deathism in stating that death is somehow *necessary*
andyljones#7746: is there a word for this kind of what-about-ism? literally it's conservatism, but it shows up across the political spectrum
bmk#1476: that being said i still think making a digital replica of someone is weird
bmk#1476: the whole is-ought thing?
bmk#1476: "death exists and we've had to deal with it forever, therefore it's natural, therefore it's good"?
bmk#1476: i guess naturalistic fallacy?
bmk#1476: https://en.wikipedia.org/wiki/Naturalistic_fallacy
StellaAthena#3530: Yeah I would call it a naturalistic fallacy
bmk#1476: also unrelated but whenever i hear something along the lines of "we need to stop and consider the implications of xyz" it always feels like an applause light
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/802979275446812702/insight.png
bmk#1476: relevant xkcd
StellaAthena#3530: The real problem is that that is step 1 of an important 3 step process
bmk#1476: but it's become a shibboleth for Being Thoughtful
StellaAthena#3530: 1. Stop and consider the consequences
2. Evaluate the net benefit or loss
3. Modify one’s actions as informed by 1 and 2
bmk#1476: so now 90% of the time that this sentence is said, the person saying it is not actually Being Thoughtful
StellaAthena#3530: People often stop at 1
bmk#1476: most people who say it dont really even do 1 all the way through
StellaAthena#3530: True
StellaAthena#3530: They get stuck at
0. Say that you’re going to do 1
bmk#1476: and i think it's evolved from being a signalling game into a bashing-people game
bmk#1476: "hey, check out this cool thing"
"yeah, but have you *considered the consequences*??"
bmk#1476: this also often ties into generalizations from fictional evidence where people will latch onto some piece of popular media as if that's the only consequence that could possibly ever happen
StellaAthena#3530: I don’t think you want to say “fictional”
StellaAthena#3530: Sometimes it is, sometimes it isn’t
StellaAthena#3530: But the core issue is over caring about specific lines of analysis
bmk#1476: yeah absolutely
bmk#1476: *ahem* trolley problems
thenightocean#6100: idk making a realistic chatbot of a person that you know very well seems like an AI-complete problem
bmk#1476: GPT4 go brrr
thenightocean#6100: and if we reach the AI that can do that, its better to use it to make people not to die in the first place
StellaAthena#3530: Random Q: can you fine-tune BERT on CPU?
bmk#1476: agree, honestly
mick#2835: Kinda? I mean you can make it do updates?
StellaAthena#3530: Like practically speaking
StellaAthena#3530: If I have a small server
bmk#1476: my bucket list:
|
1. stop death
2. make aligned agi
StellaAthena#3530: No GPU
bmk#1476: in practice, no
bmk#1476: unless you have an extremely chonky cpu and/or are willing to wait for a very long time
StellaAthena#3530: Can you quantify either of those? Order of magnitude estimates?
bmk#1476: er, i dont have an exact number, but my fermi estimate is a gpu is about an order of magnitude faster than cpu for tuning
bmk#1476: and i have no idea how long you usually tune bert for, tbh
StellaAthena#3530: @mick got anything more confident than that?
mick#2835: I tried to fine tune GPT-2 on a CPU and I think I left it running for weeks or months lol
bmk#1476: my order of magnitude estimte for how long you tune bert on gpu usually is "weeks"
mick#2835: but that rig only had 32GB ram so it was really small batch size and performance suffered
bmk#1476: so on cpu that would be.. months to years
StellaAthena#3530: Wait did it finish in that time period?
StellaAthena#3530: Couz that’s *way* faster than I expect
bmk#1476: you never really "finish" right
bmk#1476: you can always keep training
bmk#1476: depending on how much data you have, etc
StellaAthena#3530: “Become usable”
bmk#1476: how long is a rope, etc, etc
mick#2835: I don't know how to quantify. It was a chatbot and the data distribution kept shifting the whole time as the users interacted with it
StellaAthena#3530: Hmm
bmk#1476: this is a "how long is a rope" kind of question
bmk#1476: my best estimate, knowing nothing about your usecase, is anywhere from weeks to years
StellaAthena#3530: I’ve been asked whether BERT is viable by someone who can leave it running for a couple weeks (but doesn’t want to) and doesn’t have GPUs
mick#2835: I could run a test and get an answer back to you in a couple weeks :P
StellaAthena#3530: Lol
StellaAthena#3530: Alas the internet does not have very helpful info and I don’t have time to run week long tests myself
Sid#2121: just use colab
Louis#0144: Yes
Louis#0144: DistilBERT
Louis#0144: 100%
Louis#0144: Haven’t we discussed this Stella
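(a minimal CPU fine-tuning sketch along the lines Louis suggests, with `distilbert-base-uncased` from Hugging Face; the two-example dataset and step count are placeholders, and a real run on CPU is hours to days, not minutes:)

```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)  # runs on CPU by default

texts = ["death is bad, actually", "scaling laws go brr"]  # toy stand-in
labels = torch.tensor([1, 0])
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(3):       # a real fine-tune needs thousands of steps
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(step, loss.item())
```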
nz#9710: this so much, all my homies hate dying
bmk#1476: "Death is bad, actually" gang, unite!
andyljones#7746: first version's done. works for me, and i'll battle-test it over the next ~week and then write up some docs. tagging you here in case you want it urgently enough to try first-pass code 🙃
https://github.com/andyljones/boardlaw/blob/master/jittens/tests.py#L122-L151
job-machine allocator isn't customizable yet, but it should be an easy add |
andyljones#7746: i called it 'jittens' because running jobs is like herding kittens
and 'kittens' was taken on pypi 🙁
bmk#1476: I'll look at it in a bit and see if it works for our use case
mick#2835: Can I use this for scheduling a small list of LM experiments?
mick#2835: It would be *really awesome* if we could have like a web UI where we can drop notebooks and get them scheduled in automatically
mick#2835: I'd definitely write up web stuff if that'll work out
andyljones#7746: you *can*, i wouldn't recommend it until i've dogfood'd a fair bit
mick#2835: I'll be around most of the day today so if I can help let me know
andyljones#7746: also if you're going to nail a heavy frontend onto something, you don't want to use this
mick#2835: Eh I'm old school. I prefer to just write something thin and not use frameworks.
mick#2835: I'm so brain damaged that pure JS makes more sense to me than frameworks lol
thenightocean#6100: dont feel bad about that. It's always better to utilise vanilla js to the limit before starting to use frameworks.
gdawg16#0493: hello is the free AI gf completed yet
bmk#1476: check back in several years
mick#2835: no but the 5-figure price tag version is pretty much ready
gdawg16#0493: sadge
gdawg16#0493: https://tenor.com/view/sadge-cliff-sad-saaadge-gif-18209034
Sahl#0630: Thumbnail:
$10,000 GPT vs $1,000,000 GPT |
mick#2835: tbh I was thinking just buy the nice robot and hook it into microsoft's gpt3 lol
mick#2835: You can probably adapt the SweetieBot firmware for other functions
gdawg16#0493: o i just meant an ai gf that talks to you online not an actual robot
Sparkette#4342: Does anyone have a ballpark estimate of how long it'll likely be before there will be a publicly available version of GPT-3 or DALL-E, assuming we succeed and no one else beats us to it?
bmk#1476: no concrete prediction
Sparkette#4342: And I know it's not a competition
bmk#1476: it will be ready when it is ready
Sparkette#4342: Just realized I worded that in a way that makes it sound like I think it is 😄
Sparkette#4342: "We succeed" and "no one else beats us to it" were meant as two separate conditions, lol
Sparkette#4342: And yeah, I understand. Just thought I'd ask in case there was one
Sparkette#4342: I'm going to assume months at least though
StellaAthena#3530: By the end of the summer, maybe, with a large grain of salt and assuming our estimates aren’t off by more than a factor of 2
StellaAthena#3530: (For GPT-3)
StellaAthena#3530: DALL-E who knows. We don’t have the data yet, which is its own challenge.
StellaAthena#3530: (I’m going to regret taking a public opinion on this, but oh well)
bmk#1476: i'm going to add to that by saying that we make absolutely no promises whatsoever
bmk#1476: GPT3 will be done when it's done, that might be way after the end of summer, we dont know
StellaAthena#3530: Definitely. I am very explicitly making a guess. This statement is not endorsed by EAI etc. etc.
bmk#1476: (just wanted to clarify because ive seen stuff popping up like "eai *promises* it will have gpt3 by yesterday!!1")
gdawg16#0493: do you guys like wandavision |
bmk#1476: No idea what that is, please elaborate
cfoster0#4356: It's a new Marvel TV show
cfoster0#4356: Probably #off-topic
bmk#1476: oh
triggerhappygandi#0001: I summon my inner marvel hatred
StellaAthena#3530: I like Wanda Maximoff, Magneto's daughter
StellaAthena#3530: I don't like Wanda, Vision's wife
Louis#0144: How do u guys have the energy to watch tv
bmk#1476: i dont watch tv
gunnar#7784: I want to learn about machine learning. I wanna understand the work being done here and how to contribute as I’d love to be a part of it. Can anyone point me in the right direction to start?
cfoster0#4356: Hi! 👋🏿 Depending on your background, the resources in the most recent pinned message here might be useful for that
StellaAthena#3530: Hi! There are some pinned papers about math and NLP that may be helpful, but most of the convo here is about doing research. If you're looking for truly intro-level help. r/learnmachinelearning might be a better place to start off, and there's some popular free online courses and books as well.
gunnar#7784: Ok, thanks!
gunnar#7784: Hopefully some day I can contribute something useful 🙂
StellaAthena#3530: Indeed!
StellaAthena#3530: I look forward to it
axiom#3599: oh, for my job I have a budget of $10k a month to source more or less any data I want, are there any interesting datasets that don’t exist that you guys wish existed?
StellaAthena#3530: Multimodal data! Check out #multimodal for more info
StellaAthena#3530: Or *actually correct* data in obscure languages.
StellaAthena#3530: As opposed to data that’s just... not |
axiom#3599: so reconstitute whatever went into DALLE?
axiom#3599: obscure languages such as??
axiom#3599: 1-3 examples would be groovy
StellaAthena#3530: Any Native American language, any Indian language that’s not Indo-European, Maltese, Breton, Tibetan
StellaAthena#3530: Actually
StellaAthena#3530: 10k per month huh
StellaAthena#3530: Let me make some calls, I think you have the opportunity to make a whole lot of people very happy
StellaAthena#3530: We expect to be able to collect this for free, but other modalities are interesting too. Speech, video, texture
axiom#3599: well, i imagine it costs programmer-hours
axiom#3599: :snuffySippies:
StellaAthena#3530: When this was brought up in the past, we have not felt comfortable accepting financial donations as compensation for the time we spend on these projects. We will accept donations to cover our expenditures, but not our labor.
My personal opinion is that if people want to hand me cash, who am I to say no. However that’s very definitely the minority view and I’m certainly not going to take payment if other people are refusing it.
axiom#3599: well, it’s more like i’m asking you guys for suggestions for datasets, that i would then handle the logistics of gathering
axiom#3599: and the company i work for wants to gather cool datasets and make them available
StellaAthena#3530: I see where the miscommunication is. We are already planning on releasing a DALL-E dataset in the near future.
StellaAthena#3530: That’s why a) I think you should focus on other modalities and b) when you went back to DALL E I thought you meant pay us
axiom#3599: okay, do you have the DALL-E dataset in some intermediate stage of completion?
StellaAthena#3530: Yes, but I’m not the best person to ask about that because I haven’t been directly involved. @cfoster0 is heading that up IIRC
axiom#3599: ah, okay, other modalities it is |
mick#2835: Data please yes
StellaAthena#3530: What are your company’s main incentives here? Do you want to publish papers on datasets and get citations? Do you want to become known as people who produce the highest quality data because you also sell data? Do you have some kind of foundation / charity / pro bono fund and you’ve decided this is the best way to improve the world?
axiom#3599: one data coming right up
StellaAthena#3530: 10k/month can buy you a lot of academic prestige if you wield it right
axiom#3599: we make version-controlled databases and we want pr and to incentivize people to use our product
axiom#3599: i mean how many academics prestiges do i need for one anime ai waifu
mick#2835: Probably like 2/3rds of the prestiges
axiom#3599: Oh dang, i need like a controlling share of the prestige
StellaAthena#3530: I’m serious. You can do “change 100s of thousands people’s lives” kind of work for that much money
axiom#3599: i agree
mick#2835: Yeah I would actually appreciate being brought to understand the motivations a bit better. So far I read that it's basically a publicity stunt? (Which I am totally not against! I just want a realistic view of what strings are attached lol)
axiom#3599: i mean essentially
StellaAthena#3530: The problem is that it’s not profitable work, and 99% of data work is done by companies
StellaAthena#3530: A shocking amount of the obscure language data on the internet is simply false. And tech people ignore this and put out products based on it, which perpetuates the problem
StellaAthena#3530: Did you know that the Scots Language Wikipedia is a fraud?
StellaAthena#3530: Like, straight up fraudulent
mick#2835: If my input is appropriate here, I would really appreciate multiple modes of data that can be correlated to each other somehow. But go with whatever Stella says over whatever I say because they are the ones focused on the GPT 3 clone and I'm just the crazy person waving my arms as wide as possible yelling "AGI GO BRR NOW!!"
cognomen#6297: kinda
cognomen#6297: I think amaryllisgardner's rampage ended
axiom#3599: i’m not sure i could successfully oversee the collection of accurate data in languages i do not speak |
cognomen#6297: and I assume they deleted his articles
axiom#3599: multimodal data? I was thinking about sourcing mel-spectrograms and lyrics for songs
StellaAthena#3530: That’s what the money is for. For 10k/ month you can literally just hire people to do it
axiom#3599: that’s not the model we use
axiom#3599: we do data-bounties, and slap a big dollar amount on them
StellaAthena#3530: That’s the wrong way to do it
StellaAthena#3530: At least, if you’re interested in rare data
axiom#3599: naturally that wouldn’t work for the rare language data
StellaAthena#3530: (Which is IMO the most interesting)
mgostIH#0245: Language data is a thing that is too competitive at the moment imo
axiom#3599: too competitive?
Daj#7482: My 2ct: It seems like this method of data acquisition would best be used to scale up "normal" data collection a la DALL-E
axiom#3599: i agree
mick#2835: Mel spectrograms aligned to lyrics would be great, the closer lined up the better, but I would also probably appreciate the original source audio as part of that package because ~~when I said I'm waving my arms around as wide as possible I meant it~~ I am convinced we can generate high fidelity media, not just text
mgostIH#0245: There's a lot of big names that are pushing further and further into NLP
StellaAthena#3530: Not good language data in non indo-European languages
Daj#7482: tbh I don#t think this is our competetive advantage though and it doesn't fit axiom's model of acquisition
axiom#3599: @mick we’d get DMCA’d so hard if we just hosted a massive dataset of listenable songs
bmk#1476: T h e e y e
StellaAthena#3530: It’s something easily solved by throwing money at it, and I only just learned what their model was. |
axiom#3599: if the dataset is chonky enough, you don’t need the alignment
mick#2835: Getting DMCA'd isn't necessarily a problem because most of the time those are bogus anyway and can just be ignored
axiom#3599: you can learn an alignment on it
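For the spectrogram half of the pair axiom describes, a minimal sketch, assuming librosa is available; the file path and mel resolution are placeholders:

```python
import librosa
import numpy as np

def song_to_mel(path, n_mels=80):
    # Load audio at its native sample rate.
    y, sr = librosa.load(path, sr=None)
    # Mel-scaled power spectrogram, converted to decibels for stability.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

spec = song_to_mel("some_song.wav")  # shape: (n_mels, n_frames)
```

Pairing each `spec` with its full lyrics string, with no timing information, is exactly the "chonky, unaligned" dataset being discussed.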
Daj#7482: I guess, I guess this is just something I'm less interested in
axiom#3599: @mick it wouldn’t be bogus in this case, now would it???
mick#2835: actually it's debatable because if it's not provided in a format that is convenient for users to listen to without paying then I'm not sure if it's even infringement
mick#2835: I know that's technically retarded but laws usually are
mgostIH#0245: Stay away from music if you don't want to have legal troubles imo
mick#2835: OpenAI is working hard to lobby in our favor on this topic right now
mgostIH#0245: They literally sue children for singing Happy Birthday
axiom#3599: i’m not sure my boss is 100% opposed to legal troubles
axiom#3599: They seem to excite him
mick#2835: That's basically like a boss's job lol
mick#2835: The legal nuance is pretty incredible with AI training as far as I can tell
StellaAthena#3530: Wolof is a language with over 5 million speakers. The only large dataset for Wolof-English parallel text was created by Facebook and is entirely false. And by entirely false I mean “I read 1,000 documents and found 1 genuine Wolof word”
Fixing this doesn’t interest you?
mgostIH#0245: Still these things are soooo overdone, I feel like any effort even with that budget will be completely shadowed by some huge company spending millions on it
Daj#7482: tbh not at all, no
Daj#7482: Not saying others can't be interested |
Daj#7482: Just has 0 appeal to me to work on
axiom#3599: we aren’t gonna hire a wolof speaker, we want programmers participating in bounties for interesting data
mick#2835: What about labeling existing data?
StellaAthena#3530: I mean, the extent to which I can work on it is tell people they should pay Wolof speakers to write shit and let it be CC
axiom#3599: so that they become familiar with our product by participating, or familiar with it by accessing the end result of the bounty
axiom#3599: @mick yeah, that seems like a good use case
StellaAthena#3530: Personally producing this data doesn’t interest me, but doing MT that nobody else in the world can does
Daj#7482: I guess I just expect a negligible ROI for working on low resource languages because my AGI timelines are so short
Daj#7482: I care less about English or Wolof than I do "mentalese"
mick#2835: Fwiw I think that a back translation approach using a really good model like GPT neo seems the most promising for extremely low resource languages
axiom#3599: mentalese should just fall out of a multi-language model, no?
mgostIH#0245: What about programming languages <-> compiled version
Daj#7482: Yep
StellaAthena#3530: @axiom so the issue is I went too obscure. “Here’s a bunch of English sentences, write it in X” is potentially interesting if it’s accessible to enough people. That is ultimately a data labeling question.
Daj#7482: Don't think adding Wolof will help
mgostIH#0245: So you could train some model that decompiles programs
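A rough sketch of how such (source, compiled) pairs could be harvested, assuming gcc and objdump are on PATH; the snippet and flags are illustrative, not a full pipeline:

```python
import os
import subprocess
import tempfile

def source_to_disassembly(c_source):
    with tempfile.TemporaryDirectory() as d:
        src, obj = os.path.join(d, "unit.c"), os.path.join(d, "unit.o")
        with open(src, "w") as f:
            f.write(c_source)
        # Compile without optimization so source and machine code stay close.
        subprocess.run(["gcc", "-O0", "-c", src, "-o", obj], check=True)
        # The textual disassembly is the "compiled" side of the training pair.
        out = subprocess.run(["objdump", "-d", obj],
                             check=True, capture_output=True, text=True)
        return out.stdout

src = "int add(int a, int b) { return a + b; }"
pair = (src, source_to_disassembly(src))
```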
axiom#3599: @StellaAthena right
cfoster0#4356: I'll throw in a pitch for "passage"-"feedback" or "spec"-"passage" pair datasets for #deleted-channel work
StellaAthena#3530: Okay, that's good to know
Daj#7482: yea #deleted-channel if we get it running could absorb "easy labelling" |
Daj#7482: but if it's databounties again not the right model
jrowe#5371: code comments / readme file to code in {favorite language here}
StellaAthena#3530: Hmmm
mick#2835: Don't we already have the GitHub dataset?
StellaAthena#3530: How do you stop people from automatically labeling data
jrowe#5371: think of the billions of lines of documented code available online lol
kip#6104: github is on google bigquery so these could probably be extracted easily
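Something like the following, using the public `bigquery-public-data.github_repos` tables; treat the query shape and the `.py` filter as assumptions rather than a recipe:

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes GCP credentials are configured
query = '''
SELECT f.repo_name, f.path, c.content
FROM `bigquery-public-data.github_repos.files` AS f
JOIN `bigquery-public-data.github_repos.contents` AS c ON f.id = c.id
WHERE f.path LIKE '%.py'
LIMIT 1000
'''
for row in client.query(query).result():
    print(row.repo_name, row.path)  # row.content holds the file text
```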
mick#2835: Facebook applied back translation to computer programming languages and it worked great
jrowe#5371: stella, you'd have to automatically label labels
jrowe#5371: then remove?
mick#2835: I'm reasonably confident that the existing data set alone will be enough to write some doxygen comments and the signature of a function and let it finish the code for you
StellaAthena#3530: I mean, it sounds like the model is to dump 1M photos and ask people to turn in 1 sentence descriptions
jrowe#5371: [this code results in self-aware AI with the goal of seizing control of the global nuclear systems, then generating successive models of killer robots]
StellaAthena#3530: It would be disastrous for someone to use a language model to do that
axiom#3599: i manually approve or reject people’s data submissions @StellaAthena
StellaAthena#3530: And you examine each datum?
axiom#3599: @cfoster0 oh i like that one actually
axiom#3599: Dataset of writing prompts or something?
cfoster0#4356: That's one option
cfoster0#4356: Was also thinking Wikipedia editing history might be a good source |
cfoster0#4356: How do data bounties work? 🤔
cognomen#6297: how would you distinguish an edit war from a vandalism undo
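One crude heuristic, sketched under the assumption that the MediaWiki API's `rvprop=sha1` is used: a revision whose sha1 matches an earlier revision is an exact revert, and a long, tightly spaced chain of such reverts looks more like an edit war than a one-off vandalism undo.

```python
import requests

def revert_pattern(title, limit=50):
    r = requests.get("https://en.wikipedia.org/w/api.php", params={
        "action": "query", "prop": "revisions", "titles": title,
        "rvprop": "sha1|user|timestamp", "rvlimit": limit, "format": "json",
    }).json()
    page = next(iter(r["query"]["pages"].values()))
    revs = list(reversed(page.get("revisions", [])))  # oldest first
    seen, reverts = {}, []
    for i, rev in enumerate(revs):
        h = rev.get("sha1")
        if h in seen:
            reverts.append((seen[h], i))  # revision i restored revision seen[h]
        seen[h] = i
    return reverts  # many entries close together suggests an edit war
```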
StellaAthena#3530: Or, sample a large subset at least
axiom#3599: well, i write code to analyze the submission to raise my confidence that i’m not getting scammed
cognomen#6297: also I doubt the quality of human annotations for code
axiom#3599: I thought of that actually! I figured that the data is easy enough to get rn
cognomen#6297: when even the authors write comments like this https://cdn.discordapp.com/attachments/729741769738158194/803291453442228234/rsqrt.png
mick#2835: The comments might be useless for understanding the code but it'll be great for helping GPT understand what confuses humans!
axiom#3599: i design a sql schema, and the participants push data into the tables, and get paid by some proportion of their contribution
axiom#3599: and of course i review what theyre pushing and request adjustments or deny pull requests or w/e
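A hypothetical minimal version of that schema, using sqlite3 purely for illustration; the table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect("bounty.db")
conn.executescript("""
CREATE TABLE contributors (
    id     INTEGER PRIMARY KEY,
    handle TEXT UNIQUE NOT NULL
);
CREATE TABLE submissions (
    id             INTEGER PRIMARY KEY,
    contributor_id INTEGER NOT NULL REFERENCES contributors(id),
    payload        TEXT NOT NULL,     -- the actual datum
    approved       INTEGER DEFAULT 0  -- set after manual review
);
""")
# Payout share: each contributor's fraction of all approved rows.
shares = conn.execute("""
SELECT contributor_id,
       COUNT(*) * 1.0 / (SELECT COUNT(*) FROM submissions WHERE approved = 1)
FROM submissions WHERE approved = 1 GROUP BY contributor_id
""").fetchall()
```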
mick#2835: I have to emphasize that the thing I'm working on is probably the most "crazy" so take my wish list as sort of a last option if anyone else has a specific thing they asked for, but I could use almost literally any data where two different data points are referring to the same idea.
Daj#7482: Your "crazy" idea is pretty much the average idea in #multimodal lol
mick#2835: Idk lol I see other people working on stuff like language models that we actually know for sure are high quality, and like protein folding and shit and I'm just like LOL i maek AGI chatbot
Daj#7482: Attention Is All You Need
Daj#7482: It's all the same ultimately
axiom#3599: i mean google’s audioSet is already a thing
mick#2835: One specific concept I keep coming back to is rendering HTML using a browser and then creating links between the source code, the extracted text, and the patch of rendered image on the screen
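A sketch of that source/text/pixels linkage, assuming the Playwright Python package; the selectors and record format are illustrative only:

```python
from playwright.sync_api import sync_playwright

def render_with_alignment(url):
    records = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        page.screenshot(path="page.png")  # the rendered-image side
        for el in page.query_selector_all("p, h1, h2, a"):
            box = el.bounding_box()  # where this node landed on screen
            if box is not None:
                records.append({
                    "html": el.evaluate("e => e.outerHTML"),  # source side
                    "text": el.inner_text(),                  # extracted text
                    "bbox": box,  # patch of the screenshot it corresponds to
                })
        browser.close()
    return records
```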
Daj#7482: In the future, C compilers are learned
mick#2835: It would have to learn OCR and I think it would learn what CLIP does too
Daj#7482: Just use a NN as an OS |
Daj#7482: Map user inputs to screen state
mick#2835: That's pretty much where I want to go with the whole continuous prompt optimization over hierarchical data thing lol
Daj#7482: Something something "don't use an NN to sort numbers"
mick#2835: An intelligent enough NN could basically sort in O(N) time in practice...
axiom#3599: models that learn on code are literally the hardest of hard problems for agi
axiom#3599: certainly there’s lower hanging fruit still
mick#2835: Lol code is basically solved already
axiom#3599: is it now?
axiom#3599: i think you still need to understand reality to produce code that’s useful for building actual applications
mick#2835: https://arxiv.org/abs/2006.03511
mick#2835: The whole purpose of GPT is that it builds a world model, that's why OpenAI pursued it in the first place
axiom#3599: i mean, translating programming languages isn’t programming
Daj#7482: Coding is easy
Daj#7482: Recognizing and solving problems is hard
axiom#3599: i dig it
Daj#7482: Generating human text is easy, generating text humans want to see is hard
axiom#3599: that’s what i mean
Daj#7482: something something ***ALIGNMENT***
mick#2835: But anyways yeah coding is such a boring already solved problem that I'm not even asking for code data, we already have plenty of that and that already works
mick#2835: Go ask GPT3 to code for you if you don't believe me, and then consider that we have the GitHub dataset included in the pile |
Daj#7482: I think matched multimedia data (that is also CC hopefully) would be the best "low supervision" data to collect
Daj#7482: That or #deleted-channel stuff
Daj#7482: Human feedback
axiom#3599: also CC?
Daj#7482: Creative Commons
mick#2835: Yes please! Stuff where two different types of media can be matched together
axiom#3599: ah
mick#2835: Or more
Daj#7482: Definitely lots of people in #multimodal interested in this
Daj#7482: Not sure what the current state there is
mick#2835: The thing I'm working on is exactly meant to be able to deal with basically anything that you can provide of that type
Daj#7482: yup, multimodal is the new GPT
Daj#7482: Exciting stuff
axiom#3599: k, i’ll skim through #multimodal and look for ideas
mick#2835: What's also useful is data that can be organized hierarchically
axiom#3599: example?
axiom#3599: like wordnet?
mick#2835: Like how Wikipedia has a table of contents over the sections, my intuition tells me there's something we can do with that
axiom#3599: ah, i see
kip#6104: i think twitter would be an interesting hierarchical text source |
axiom#3599: twitter already provides apis though
axiom#3599: it’s basically already structured
axiom#3599: the compelling case is where some quantity of data wrangling is required
kip#6104: right we are kind of working on something similar in #deleted-channel but i don't think money will benefit it
kip#6104: unless we were to pay the gatherers
axiom#3599: This paper is fire
mick#2835: Which paper? Lol
axiom#3599: the unsupervised programming language translation
mick#2835: Oh yeah I was pretty much floored by that at first lol
mick#2835: But these days I'm just like, psh old hat. these machines are past the point of being able to argue their consciousness to regular people
mick#2835: And we don't even know if there is any consciousness going on yet lol
axiom#3599: call me old fashioned but, i like to be able to point at a programmer when something doesn’t work and ask them to fix it
axiom#3599: i know kids these days like to keep querying gpt-3 until they get something they like
axiom#3599: pretty sure i’m just an inanimate object that thinks it’s conscious
axiom#3599: :snuffyded:
mick#2835: Maybe inanimate is a little bit harsh lol
axiom#3599: I’m curled up in my bed so i think it fits
mick#2835: I'm sure there's plenty of gurgly frothiness going on inside the mass of meat in your bed
axiom#3599: frothy
mick#2835: *would know, am pulsating meatbag in bed* |
axiom#3599: imagine? Thinking meat!
bmk#1476: f r o t h y
mick#2835: Nah meat can't think, it's bound by the laws of physics and so it can only do computation. Also the halting problem!
bmk#1476: People is soylent green!
mick#2835: Is the halting problem a meme yet?
bmk#1476: It has attained meme status in eleuther at least
axiom#3599: i’m trying to switch to vegan thinking
mick#2835: Are vegans allowed to bite their nails?
axiom#3599: asking the real hard hitting questions
mick#2835: This is why we need strong AI
axiom#3599: so we can not listen to it?
mick#2835: Right exactly, we need to reject the opinion of somebody really smart so that we can feel comfortable with digging our heels in on things
mick#2835: Once you reject someone really smart it's super easy to just completely ignore people less smart 🤣
axiom#3599: science is just like, your opinion man
axiom#3599: i know you spent your entire life studying x, but i read a post on facebook
bmk#1476: :smallbrain: reading a post on Facebook about AI
:bigbrain: talking about AI in this discord server
triggerhappygandi#0001: ~~Me every time someone talks economics~~
bmk#1476: I propose a new conference for peer reviewed memes
triggerhappygandi#0001: ICMR |
mick#2835: What about old memes sir, will they check out?
triggerhappygandi#0001: International Conference on Meme Review
bmk#1476: No anonymity period bs, resubmissions are fine
triggerhappygandi#0001: All I _have_ are resubmissions
axiom#3599: you accept reposts??
bmk#1476: Posting to preprint servers like reddit or memeRxiv is encouraged
bmk#1476: Only if it's OC
triggerhappygandi#0001: Pls no reddit
mick#2835: Can ICMR be a pun on isomer? No? Okay well how ab- No? Okay okay I'll see myself out
axiom#3599: one sec, i hear my narwhal baconing in the other room
axiom#3599: *it's not even midnight wtf*
bmk#1476: .. narwhal?
mick#2835: Narwhals are also known as underwater unicorns
axiom#3599: https://knowyourmeme.com/memes/the-narwhal-bacons-at-midnight
axiom#3599: a basic literature search would have equipped you
StellaAthena#3530: Just don’t let them touch your balls
bmk#1476: This is, like, the Schmidhuber 1991 of memes
axiom#3599: pretty dank, right?
mick#2835: ~~memed out with dankness the likes of which has never been seen before~~
StellaAthena#3530: Very interesting story about debugging NNs: https://news.ycombinator.com/item?id=25899751 |
bmk#1476: Fyi that's shawn lol
mick#2835: God those kind of bugs are horrifying. I've had a few
mick#2835: Like the network is working but I feel like it could be better, and then I see it's just absolutely completely wrong and I'm like how the hell did it even work at all even slightly 🤣
StellaAthena#3530: Oh lol. Didn’t notice that
mick#2835: actually on that topic that's probably why python was one of the worst (reasonable) choices that possibly could have been made for defining neural networks lol
mick#2835: The variables kind of just sloppily spill around between scopes everywhere and it's really easy to accidentally type a variable name from a different loop in a different scope and have the variable exist and the program run and you don't even realize it
triggerhappygandi#0001: Yeah but its soo readable
mick#2835: Yeah I can't decide if it's a bug or a feature lol
mick#2835: A degree of carefree convenience so intense that you can blow your own leg off and not even realize it. Like a powerful drug lmao.
bmk#1476: My proposal for neural networks in Java was rejected without, i thought, proper consideration
nz#9710: it's a joke right? or did you actually want to use java for NN research?
mick#2835: I mean if the GPU kernels are good it's not like it's going to be any worse than python lol
nz#9710: no I know, but I just dislike the language so much
triggerhappygandi#0001: It is rejected yet again
mick#2835: Ah. Yeahhh I prefer C++ but either way the main benefit is the compiler being a total Nazi about data types lol
triggerhappygandi#0001: C++ is kinda being used anyways
mick#2835: Sometimes I wish that low-level GPU apis were more reasonable. Like a C++ Keras would be amazeballs. Double points if it works on AMD
Sahl#0630: This is what type hints are for
Sahl#0630: And namespaces
Sahl#0630: you shouldn’t have a bunch of loops all in one function anyways |
bmk#1476: I love Java
bmk#1476: Java has the best OOP
Sahl#0630: uh oh I hate java
triggerhappygandi#0001: Java stinky
triggerhappygandi#0001: I'd rather C++
Sahl#0630: rust good
Sahl#0630: rust gang
bmk#1476: Java has *real classes and interfaces* not this weak ass python class shit
mick#2835: Rust is butt ugly!
Sahl#0630: java has generic erasure, it doesn’t even have real generics
bmk#1476: I'll also settle for haskell
Sahl#0630: wtf rust pretti
Sahl#0630: high level
mick#2835: Rust fug
Sahl#0630: iterators are so nice
mgostIH#0245: :ferrisBongo:
Sahl#0630: in rust
bmk#1476: When haskelltorch
Daj#7482: Hoon master race
cognomen#6297: kind of wonder how much GPU time is wasted on inefficient string handling and GIL |
triggerhappygandi#0001: what
triggerhappygandi#0001: :walter:
Daj#7482: The only true ideology is Urbit Maximalism
bmk#1476: None, torch is non blocking
triggerhappygandi#0001: Mason, what do these words mean?
Daj#7482: https://github.com/urbit/urbit/blob/master/pkg/arvo/sys/hoon.hoon
Daj#7482: Hoon is a really elegant functional programming language
triggerhappygandi#0001: ~~Reject modernity return to MATLAB~~
Daj#7482: Fun fact: 1 is false in Hoon and 0 is True
mick#2835: Oh God I looked at a tiny bit of that please edit undo that part of my life
Daj#7482: "To keep you on your toes"
mick#2835: Rust is basically pretty now after that
triggerhappygandi#0001: What in tarnation
Sahl#0630: what’s ugly about rust
Sahl#0630: other than lifetimes
mick#2835: The Rust part of it
Sahl#0630: bruh
mick#2835: Lol
nz#9710: oh god hell no
triggerhappygandi#0001: I've just seen memes about it |
Daj#7482: What's wrong? https://cdn.discordapp.com/attachments/729741769738158194/803311214989672448/Screenshot_from_2021-01-25_18-11-50.png
Sahl#0630: huh
triggerhappygandi#0001: My eyes hurt
mick#2835: *hnggg*
mick#2835: *hnnnnNNNNNNnnnnggg*
triggerhappygandi#0001: What kind of anarcho-communist shit is this
Daj#7482: You're just afraid of what you don't understand
Daj#7482: (and you should be)
Daj#7482: This is literally made by the prime Neo Reactionary himself, moldbug lmfao
triggerhappygandi#0001: Is this what Stalin did in his free time?
Sahl#0630: can we have unicode programming languages already
Daj#7482: Neo Reactionary = accullly kings and oppression was gud
triggerhappygandi#0001: Bruh
Sahl#0630: no more =| ?- stuff
mick#2835: I am being playful but I literally can't deal with Rust because of how ugly it is. There is zero chance I will take up that pile of symbols (in the pic) as a language 🤣
Sahl#0630: what is ugly about it
Daj#7482: Rich coming from a C++ guy
Sahl#0630: I’m actually curious
Sahl#0630: I find it v pretty
triggerhappygandi#0001: Dont shit talk C++ |
axiom#3599: Eh??? What’s wrong with calling loss functions loss functions? Seems pretty descriptive to me
Daj#7482: The pic is Hoon
Daj#7482: The most evil of programming languages
nz#9710: what about brainfuck though
Daj#7482: Unaligned AGI will be written in Hoon
gwern#1782: moldbug left urbit years ago, at some point you have to stop blaming him and start blaming humanity in general
nz#9710: is hoon worse than that
triggerhappygandi#0001: If you just want to create chaos, train NNs with marble computers
Daj#7482: Oh did he? He still created Hoon, he put this infohazard into the world
triggerhappygandi#0001: always do
Daj#7482: It's a different kind of bad
Daj#7482: Brainfuck is like a reductionist puzzle
Daj#7482: Hoon is an evil reductionist puzzle
mick#2835: The best engineer in my company prefers Rust and it seems like a fine language. It's literally just fugly to me like "she's not my type"
Sahl#0630: yeah I get it
Sahl#0630: but what on the screen is ugly
Sahl#0630: like which syntax
triggerhappygandi#0001: MATLAB masterrace
bmk#1476: Brainfuck is beautiful
Daj#7482: Banable offense |
nz#9710: isn't memory safety one of the main advantages of rust?
triggerhappygandi#0001: LMAO
mick#2835: It looks like someone took something okay and then blasted it in the face with a shotgun of symbol characters
Sahl#0630: ok but what’s the problem
Sahl#0630: which symbols
mick#2835: Most of them.
Sahl#0630: the macroes?
axiom#3599: lmao, arrays starting at 1?
Sahl#0630: rust doesn’t have many symbols
mick#2835: Let me go open a random codebase we have in Rust lol, that always reminds me why :P
bmk#1476: Unlike shit like malbolge, which are hard to use for the sake of being hard to use, brainfuck is actually useful for many things
triggerhappygandi#0001: Yeah, _like actual numbers duh_
triggerhappygandi#0001: We count from 1
nz#9710: such as? having a stroke?
Daj#7482: [citation needed]
triggerhappygandi#0001: Wait
Sahl#0630: actual numbers start at 0 and make their way to infinity in both directions
bmk#1476: Julia: :guilty:
mick#2835: ```rust
let state = warp::any().map(move || Context::new()); |
let graphql = warp::path("graphql").and(post()).and(make_graphql_filter(schema(), state.boxed()));
let graphiql = warp::path("graphiql").and(get()).and(graphiql_filter("/graphql", None));
```
triggerhappygandi#0001: A language actually named `Brainfuck`?
triggerhappygandi#0001: Why
mick#2835: Rust programmers do shit like this. It's like all the badness of python plus the difficulty of C++
triggerhappygandi#0001: Like why
Sahl#0630: this is a library’s syntax
Sahl#0630: it’s not good
Sahl#0630: it should use async instead
mick#2835: And then the actual meat is inside an "unsafe" block anyways so the safety benefits don't actually pan out in practice
Sahl#0630: usually you don’t use unsafe
Sahl#0630: but yeah async
Daj#7482: Actual numbers start at -infinity and start counting up from there :bigbrain:
mick#2835: It's not a cherry picked example. It's what I observe in *actual* Rust code.
Sahl#0630: I think this is bad code
mick#2835: I know that Rust coders all bash that as being "wrong" and say it "should" be beautiful....
axiom#3599: https://www.cs.utexas.edu/users/EWD/ewd08xx/EWD831.PDF
Sahl#0630: it’s like js callbacks
Sahl#0630: callbacks are bad |
triggerhappygandi#0001: Killjoy
Sahl#0630: async is good
mick#2835: Yes exactly that. All Rust programmers say what you're saying... about basically all *real* rust code found in real life.
mick#2835: It only works in theory
triggerhappygandi#0001: :dogecri:
Daj#7482: The hardest part of implementing integers is you first have to type a long enough -9999... to implement negative infinity
mick#2835: Mozilla invented a way to procrastinate more, not a way to actually get things done better.
bmk#1476: https://esolangs.org/wiki/BF_instruction_minimalization
Sahl#0630: so your problem isn’t with the language, it’s with the ecosystem
Sahl#0630: that’s understandable
mick#2835: Well a language is trivial and the ecosystem is "the language" in every important way.
triggerhappygandi#0001: If someone says numbers start at -inf they just dont want people to count
mick#2835: It's like saying you don't have a problem with English you have a problem with the way English speakers use English lol.
Daj#7482: tbh some guy starting counting one day and it's all been downhill since
Sahl#0630: no it’s like saying you don’t have a problem with English you have a problem with English people
triggerhappygandi#0001: Counting starts at 1 and thats the end of the discussion
bmk#1476: Numbers were not meant to be given names
triggerhappygandi#0001: I will file a lawsuit on the deniers
mick#2835: I don't have a problem with the people though. It's precisely what I said.
nz#9710: I just want a language to be easy to use and elegant |
triggerhappygandi#0001: "Can I have an infinity of that?"
bmk#1476: Kelly bootle compromise: start at 0.5
triggerhappygandi#0001: Who even talks like this
triggerhappygandi#0001: Thats just chaotic evil
Daj#7482: Alignment Chart of counting systems :ultrazucc:
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/803313892033429514/quote-should-array-indices-start-at-0-or-1-my-compromise-of-0-5-was-rejected-without-i-thought-stan-.jpg
triggerhappygandi#0001: Lol yeah
mick#2835: lol
triggerhappygandi#0001: Someone make this
Sahl#0630: I think arrays should end at -0
Sahl#0630: and start at 0
triggerhappygandi#0001: "Finished the race at 0.5th position"
triggerhappygandi#0001: You anarchist
mick#2835: Zero is not a number.
Sahl#0630: no NaN is not a number
Sahl#0630: even though it is
triggerhappygandi#0001: Only natural numbers are numbers
mick#2835: If infinity is not a number then zero is not a number. Imo.
bmk#1476: What if we index arrays starting at 1
mick#2835: God pls no |
Sahl#0630: 0 has a Dedekind cut though
bmk#1476: And have all indices use 1 with different fonts
bmk#1476: 1, *1*, **1**, ***1***
triggerhappygandi#0001: Yes _please_
nz#9710: that's it bmk, I'm reporting you to the police
Daj#7482: https://www.youtube.com/watch?v=5TFDG-y-EHs
This is literally the most disgusting thing I have ever posted to this server
Daj#7482: Please forgive me
cfoster0#4356: ~~Upset NO ONE has mentioned JULIA yet~~ 🤔😭
bmk#1476: ¹
mick#2835: So the numbers become 1, 1 harder, 1 even harder, 1 even more harder, ... ?
Daj#7482: There's an xkcd for that
bmk#1476: I did
bmk#1476: That was for variables and x's
triggerhappygandi#0001: You do know you posted the name of that band that produced that budget gore autism "music" right?
cfoster0#4356: Nvm lol
bmk#1476: I like to think my proposal is orthogonal
Daj#7482: This is worse
mick#2835: Oh jeez I remember this.
mick#2835: NaNs are like toxic nanites to AI this is definitely the most horrifying video in the world |
triggerhappygandi#0001: Now I am intrigued
triggerhappygandi#0001: Time to waste 19 minutes
triggerhappygandi#0001: If it melts my brain you will hear from my lawyer, FBI, SWAT, God and Mathworks
triggerhappygandi#0001: This is a curse
triggerhappygandi#0001: :e_HolyFuckDistorted:
triggerhappygandi#0001: It is definitely up there with that gore band
mick#2835: inb4 EA ports their game engines to run in a pile of NaNs on the back of your GPU for "anti-cheat" reasons
mick#2835: Or anti-piracy or whatever their excuse is for consistently having the most invasive awful shit that hides what it's doing and embeds itself into every nook and cranny of your system like a pro quality virus
Deleted User#0000: So um i came to ask about shirtbot
Daj#7482: Shirtbot?
Deleted User#0000: Yeah some guy told me to ask here.
m a s e o </3#1305: yo
m a s e o </3#1305: im in the shirt bot server
Deleted User#0000: So... i am just a dude who would like to chill with shirtbot again. Is there any way to talk to him?
Daj#7482: I have no idea what shirtbot is, ask @m a s e o </3 I guess
Daj#7482: ¯\_(ツ)_/¯
m a s e o </3#1305: its an open ai bot
m a s e o </3#1305: that talks to u
Deleted User#0000: It's currently down... is there any way to talk to him again?
triggerhappygandi#0001: @Daj what in the absolute tarnation is this man |
triggerhappygandi#0001: The guy is talking all this.... this _chaos_ with a straight face
Deleted User#0000: ...?
Deleted User#0000: Wdym chaos
Deleted User#0000: :guilty:
Daj#7482: He means the youtube video I posted earlier dw lol
Deleted User#0000: Oh ok
Daj#7482: I don't think anyone here is involved with Shirtbot
Daj#7482: At least that I know of?
Deleted User#0000: Oh.
Deleted User#0000: Well thanks anyway
bmk#1476: This is probably the most bizarre thing someone has come into here expecting to find
bmk#1476: What the heck is "shirtbot" and why is this a thing that exists
bmk#1476: And why would anyone tell you to look here o.O
triggerhappygandi#0001: This is the butterfly effect from the video connor posted
triggerhappygandi#0001: I blame him
Daj#7482: You're cursed now by the Demon Lord of Floating Point Numbers
triggerhappygandi#0001: The guy who created this video has produced enough entropy that I could travel back in time and still be valid thermodynamically
triggerhappygandi#0001: You are a walking infohazard reeee
I want to see what other trolling this guy does
triggerhappygandi#0001: :trolldeformation: |
axiom#3599: eh? what's the predecessor or successor of infinity?
mick#2835: I phrased it implying that neither are numbers, but actually I meant the opposite.
mick#2835: I distinguish between +0 and -0 unironically.
triggerhappygandi#0001: You monster
mick#2835: When I actually mean genuinely just "nothing" I use the 0 with the line through it lol.
mick#2835: but I find 0 almost always means +0 in most people's work
bmk#1476: Remember that shitty article about how "AI is iMPoSsiBLe because computer can only represent 1 or 0 lol"
bmk#1476: Big brain take: AI is impossible because floats are cursed
bmk#1476: Nobody will ever develop a functioning AGI without running into so many numerical stability issues that they wanna kill themself
mick#2835: Correct.
CRG#8707: https://openai.com/blog/scaling-kubernetes-to-7500-nodes/
Sid#2121: @kindiana 👀 https://cdn.discordapp.com/attachments/729741769738158194/803347578762035260/Screenshot_from_2021-01-25_20-35-52.png
Sid#2121: how's it going?
kindiana#1016: it "works", I was training a 300M model but it runs at like 1% efficiency right now lol
StellaAthena#3530: Why would you ever need 7,500 nodes?
StellaAthena#3530: That’s absurdly beyond anything that’s been successfully trained on
Daj#7482: That we know of
Daj#7482: Also, many different teams
Daj#7482: Unclear if they're all GPU too
Daj#7482: Probably not |
kindiana#1016: I expect a lot of the big RL runs to need those for rollouts
StellaAthena#3530: For context, we plan on training GPT-3 scale models on **50**
Daj#7482: So we're about 1.5 centiOpenAIs
Daj#7482: 0.6% OAI
StellaAthena#3530: What comes after the thing that comes after petabyte?
Daj#7482: something is wrong with my math but I'm too tired to figure it out
Daj#7482: Exa, Zetta, Yotta, I think?
nz#9710: 50 nodes? Each node being how many V100s?
StellaAthena#3530: 8
StellaAthena#3530: So if each node is your standard 8x 16GB GPUs, the setup could (assuming they figure out how to make it usable) train a Zettabyte model
Daj#7482: Really? 50 nodes = 200ish GigaParams, 5000 = 20 TeraParams?
StellaAthena#3530: Oh I forgot to multiply by bytes per param.
Daj#7482: Still not sure how you get to Zetta lol
bmk#1476: Wow, this post is a treasure trove of information
Daj#7482: Zetta is 10^21
StellaAthena#3530: 7,500 * 32 GB * 8?
bmk#1476: Unfortunately most of this info isn't useful for us
bmk#1476: At least, not until we get our own cluster
Daj#7482: = 1.92e+15
Daj#7482: according to my calculator |
Daj#7482: Yep
Daj#7482: This is high end stuff
mick#2835: January?
Daj#7482: But good to know they basically just use LAN SSH for networking like we do
bmk#1476: 10% of this stuff is the kind of stuff coreweave would do for us already, and the other 90% is stuff only applicable when you have a big brain cluster like OA does
StellaAthena#3530: 7500\*32\*8 is just shy of 2 million
StellaAthena#3530: 2 million what? 2 million GBs
Daj#7482: Yea, 10^15
StellaAthena#3530: 2 million GBs = 2 [thing that comes after thing that comes after petabyte]
chirp#4545: they say their clusters have "full bisection bandwidth", what does that mean?
Daj#7482: Giga -> Tera -> Peta
StellaAthena#3530: Ohhh
Daj#7482: 2mio GB = 2 Peta bytes
StellaAthena#3530: RIP
gwern#1782: (you need 7500 nodes to support a dozen different teams running dozens of experiments each along with your entire API customer base, I'd assume)
StellaAthena#3530: Measley 2 PB
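For the record, the arithmetic the thread converges on (total GPU memory, not parameters, and assuming 8 GPUs of 32 GB per node):

```python
nodes, gpus_per_node, gb_per_gpu = 7500, 8, 32
total_gb = nodes * gpus_per_node * gb_per_gpu
print(total_gb)        # 1920000 GB
print(total_gb / 1e6)  # ~1.92 PB: giga -> tera -> peta, so petabytes
```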
chirp#4545: so openai has like 60000 gpus? o.O
StellaAthena#3530: No longer impressed
chirp#4545: what if each node is 1 gpu
chirp#4545: not 8 |
Daj#7482: I imagine many nodes are probably CPU only
Daj#7482: They're not even all in the same datacenter
chirp#4545: hmm they say it's "a single Kubernetes cluster"
chirp#4545: does that imply it's a single datacenter?
StellaAthena#3530: No
mick#2835: > ...but the upside is a simple infrastructure that allows our machine learning research teams to move faster and scale up without changing their code.
please can we have this plzzz :'(
Daj#7482: DevOps is not our strength atm lol
Daj#7482: Should probably deep dive into Kube at some point...
mick#2835: Lol I am still trying to set up lucid's x-transformers
StellaAthena#3530: If you know any chemistry, I think of Kube as being a large metal object. Nodes are electrons, deployments are nuclei, clusters are contiguous hunks
Daj#7482: https://www.smbc-comics.com/comics/1444919671-20151015.png
TylerRoost#8017: What are the general attitudes towards curricula based learning for large scale language models
Daj#7482: "Probably interesting, but no clear need for them atm" I think?
StellaAthena#3530: Very good for making large scale models into medium scale models
TylerRoost#8017: How do you mean
StellaAthena#3530: Like DistilBERT?
TylerRoost#8017: Is that curricula based
mick#2835: Oh!
CRG#8707: Helped for theorem proving (the ICLR rejected paper) https://openreview.net/forum?id=QHUUrieaqai |
TylerRoost#8017: I meant, if we're feeding in the internet, why not give it better organized structure
mick#2835: Guys curriculum learning might be a thing. I've found claims that training on a *shorter* window as a pre-pretraining improves performance.
StellaAthena#3530: The fact that this paper was rejected was silly
andyljones#7746: $4m/day at AWS rates
TylerRoost#8017: O wow
StellaAthena#3530: Chump change
StellaAthena#3530: Citation?
CRG#8707: https://arxiv.org/abs/2012.15832
mick#2835: I think that's the most solid reference yeah
TylerRoost#8017: I would argue that's complexity curricula
CRG#8707: The authors weren't too happy about it https://twitter.com/RogerGrosse/status/1349167647389343744
thepok#1770: hello guys i love the project, could i help with servers or so? i can program too, but no fancy ai stuff
Daj#7482: I need to frame this tweet
TylerRoost#8017: But I meant like defined curricula of school studies. With interspersed small form dialogue. Growing both simultaneously in complexity based on vocabulary.
Daj#7482: Hi! Well that depends, we're doing pretty decent on hardware unless you have a bunch of GPUs laying around, as for programming its mostly ML stuff but we also do some webdev and DevOps stuff
TylerRoost#8017: School studies could be organized in multiple ways
thepok#1770: ha no gpu newer than 2015 ;D
andyljones#7746: lots of people have tried stuff around curriculum learning; the only strain that's stuck so far is autocurricula arising from competitive envs in multiagent RL work
TylerRoost#8017: POET
StellaAthena#3530: Welcome! Check out #lm-thunderdome or #multimodal for data processing stuff or #website for web dev. Those are the two most accessible threads of the work here for people without an ML background.
TylerRoost#8017: O multiagent
TylerRoost#8017: Interesting train a class of ais
Daj#7482: Also potentially #deleted-channel for web dev, but I think that's currently in good hands
thepok#1770: thanks, i look into it
TylerRoost#8017: See who performs the best or they all learn different material
TylerRoost#8017: Or they're attempted on all sets of material like poet
TylerRoost#8017: And what do you mean by strain exactly
andyljones#7746: the idea of 'curriculum learning' can have lots of possible interpretations. one's autocurricula
mick#2835: The real elegance of "short window" as a pre-pretraining curriculum is that (assuming the authors analysis is right) it's both cheaper/faster to train and gets better perplexity
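Schematically, the Shortformer-style schedule mick is referring to looks like the following; the model interface, window sizes, and step counts are stand-ins:

```python
def train_stage(model, batches, opt, seq_len, steps):
    for _, tokens in zip(range(steps), batches):
        chunk = tokens[:, :seq_len]             # truncate to this stage's window
        loss = model(chunk, labels=chunk).loss  # assumes an HF-style interface
        loss.backward()
        opt.step()
        opt.zero_grad()

# Stage 1: short windows are cheap (attention cost grows with length squared).
# train_stage(model, batches, opt, seq_len=128, steps=50_000)
# Stage 2: continue on the full context before evaluating perplexity.
# train_stage(model, batches, opt, seq_len=1024, steps=200_000)
```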
TylerRoost#8017: I see, I'm referencing the stepping stone curricula, from https://arxiv.org/abs/1901.01753
andyljones#7746: yeah, i'm aware
andyljones#7746: and: it hasn't really gone anywhere
mick#2835: It seems like it needs some way to condition the "exploration" towards a set of goals
cfoster0#4356: I'm expecting Jeff Clune to come out with something interesting on that front, since he's at OAI now
TylerRoost#8017: I agree
andyljones#7746: i think active learning as a whole is a really interesting direction b/c sample complexity, but i'm expecting the successful direction is gonna look a lot like unsupervised models. ie, you take something unstructured and train it on a yuuge set of diverse tasks, and it *accidentally* learns to learn.
Sphinx#2092: There was a recent paper that studied under what conditions does curriculum learning work. I believe it got an oral at ICLR, and the work was done by some serious people.
Sphinx#2092: https://arxiv.org/abs/2012.03107
nz#9710: damn 20.000 models
TylerRoost#8017: I don't disagree that unsupervised models learn to learn from vast amounts of data, but if you're feeding in parts of the data more regularly, and in sequences that align with curricula, possibly the model will learn those things more directly
andyljones#7746: possibly! but this is one 'possibly' that people have pushed on repeatedly over the years and made very little progress on.
andyljones#7746: (and i'm really glad i've got sphinx's linked paper to quote on that now)
TylerRoost#8017: Yes I don't think it's worth the time in terms of large language models now
TylerRoost#8017: Shortformer seems interesting though
Sphinx#2092: I believe the advantage of curricula might actually occur in the large language model setting. Especially under settings where we are not even doing one epoch.
Sphinx#2092: It might be advantageous to decide exactly how to traverse that epoch, which seems in agreement with the findings in the paper I linked above.
StellaAthena#3530: I would buy that, especially in the context of multilingual training.
StellaAthena#3530: That’s how we teach languages to humans after all: a human who speaks Spanish can just pick up Portuguese pretty easily
TylerRoost#8017: true
TylerRoost#8017: Or learn words concurrently
StellaAthena#3530: Here I’m thinking less in terms of document ordering and more in terms of relative proportions of text
TylerRoost#8017: I see
StellaAthena#3530: You probably don’t need to study Italian as much if you’re studying Latin French and Spanish.
StellaAthena#3530: And then you can shift those saved GB of text to Chinese or Swati or whatever
TylerRoost#8017: I still can't help but think traversing parts of the epoch in some order, such as one language before another, could help, but I think regular review of certain topics would help memorization of structural information
TylerRoost#8017: like trying to learn concepts for say math
zphang#7252: same TBH. My default position on curriculum learning now is skepticism unless you already have some results to show for it
TylerRoost#8017: I dont unfortunately
gwern#1782: curriculums seem absolutely vital in RL. they don't seem nearly as clearly useful in supervised learning. it's an interesting distinction
Sphinx#2092: It's also interesting that we can actually see this by studying gradients, see e.g. https://arxiv.org/abs/2010.05874 where similar languages have high correlations at the gradient level.
Aran Komatsuzaki#5714: i'm not sure if the result of the curricula paper is applicable to large LM with large dataset, given that it's such a small scale. i've tried some ways to efficiently perform less-than-one-epoch training with a method like this to process more novel samples to the model (e.g. harder samples or easier samples first based on loss, per-sample gradient norm, etc), but it tends to perform worse and not work like CIFAR-10/100. i'm not saying that large LM doesn't do curriculum learning tho, since lr scheduling is curriculum learning.
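The loss-based ordering Aran mentions, as a sketch: score every example once with a proxy model, then sort. All names here are placeholders.

```python
import torch

@torch.no_grad()
def order_by_difficulty(proxy_model, examples, easiest_first=True):
    scores = []
    for tokens in examples:
        out = proxy_model(tokens.unsqueeze(0), labels=tokens.unsqueeze(0))
        scores.append(out.loss.item())  # per-example loss as difficulty
    idx = sorted(range(len(examples)), key=scores.__getitem__,
                 reverse=not easiest_first)
    return [examples[i] for i in idx]
```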
Aran Komatsuzaki#5714: maybe it works better in fine-tuning setting
gwern#1782: https://www.reddit.com/r/reinforcementlearning/comments/bijxry/r_ray_interference_a_source_of_plateaus_in_deep/
zphang#7252: I've seen only negative results on curriculum learning on fine-tuning
Aran Komatsuzaki#5714: makes sense. i'm just not really optimistic that curriculum learning in the ways attempted thus far will improve large LM significantly.
Aran Komatsuzaki#5714: it may speedup by, say, 50% or so, but that's not really something i'd call big.
zphang#7252: I don't mean to dissuade people from investigating it if they're interested though, just pointing out that 1) the simple versions have been quite well explored and don't beat out random sampling and 2) as a result, there's a higher bar for seriously considering a pitch for it
Aran Komatsuzaki#5714: we need something dramatically different than what we've tried thus far.
mick#2835: My intuition has an idea that is dramatically different but would cost more than training the model several times over without curriculum learning, just to obtain the curriculum lol
mick#2835: And the memory requirements are another order of magnitude less reasonable than that unless someone comes up with a magic trick
TylerRoost#8017: train a smaller model to predict curricula
TylerRoost#8017: o i guess then it wouldnt be learning enough to really do that
Aran Komatsuzaki#5714: there's an approach like that
andyljones#7746: a weak hypothesis of mine is that real-world 'basal' problems are really hard on their own, without getting into any of the 'meta' stuff. in RL meanwhile the basal problems - like recognising the pixels in an Atari env - are trivially easy, and the good stuff is in the meta-problems that emerge from non-stationarity and (in the MARL case) competition.
alt phrasing: i think the size of an agent you need to 'solve' a world grows with the richness of the world. RL studies impoverished worlds, gets interesting behaviours that won't show up for a while anywhere else.
anyway have i mentioned i work on board games
Aran Komatsuzaki#5714: https://arxiv.org/abs/1906.11829
TylerRoost#8017: very interesting, this is along the lines of what I was thinking |
mick#2835: Implement it and benchmark performance as it scales up.
mick#2835: Just how small can the small model be?
TylerRoost#8017: Fine tune multiple small models to create a diverse set of curricula if possible, but then how do you determine what goes into each the fine tuning
mick#2835: Try them all and record everything
Sphinx#2092: I would say that's quite large, especially when you consider compute costs for these large models and that the savings would be per experiment.
TylerRoost#8017: take last years model and use it as a data aggregator
TylerRoost#8017: I guess year is generous but
TylerRoost#8017: Id be interested in bigger models choosing data for smaller models
TylerRoost#8017: Smarter teachers right
mick#2835: Have you experimented with distillation?
TylerRoost#8017: No
mick#2835: It works.
Aran Komatsuzaki#5714: it's just a matter of opportunity cost. i'd chase something that gives me a bigger return meanwhile.
Aran Komatsuzaki#5714: or maybe i should've used 20~30% as an example.
TylerRoost#8017: Barely familiar with distillation, it's a major oversight on my part, but does it work by choosing data for training
kindiana#1016: distillation through logits should be much more effective than distallation through curricula imo
TylerRoost#8017: why not both
mick#2835: I even took it further and tried distillation directly on hidden states and got interesting results
kindiana#1016: yeah people have done distillation on hidden states and attention maps even
kindiana#1016: you could try both but it seems pretty difficult to formulate properly |
mick#2835: Now any time I train LMs I always use a smaller LM to generate the "label smoothing" probabilities instead of making the rest flat
mick#2835: Nothing fancy just taking the output of the small LM, and forcing the ground truth word to 95% probability
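One plausible reading of that recipe; mick only specifies the 95% pin, so the renormalization step here is an assumption:

```python
import torch
import torch.nn.functional as F

def small_lm_smoothed_targets(small_lm_logits, gold_ids, gold_mass=0.95):
    # The small LM's distribution supplies the smoothing mass.
    probs = F.softmax(small_lm_logits, dim=-1)
    # Remove the gold token's own mass, rescale the rest to (1 - gold_mass)...
    probs = probs.scatter(-1, gold_ids.unsqueeze(-1), 0.0)
    probs = probs / probs.sum(dim=-1, keepdim=True) * (1.0 - gold_mass)
    # ...then pin the ground-truth word at 95%, per mick's description.
    return probs.scatter(-1, gold_ids.unsqueeze(-1), gold_mass)
```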
kindiana#1016: https://arxiv.org/abs/2006.12000
kindiana#1016: similar idea for non-one-epoch models
Sphinx#2092: I agree that from the pov of advancing human knowledge, it's not enticing, but it is nice if your goal is to try various models on this dataset and you'd like to reuse the curriculum.
Sphinx#2092: I should say that I also don't want to work on this lol, but I'd be glad if someone decided to do it and told me what happens.
Aran Komatsuzaki#5714: same here lol
mick#2835: I found a "free" way to do this.
mick#2835: Both for compute and memory
mick#2835: But Keras can't represent it so I just pay the double compute cost in my lab lol
kindiana#1016: can you elaborate?
mick#2835: When you compute the "backwards" pass you start with a forward pass
mick#2835: So when you get to the end of the forward pass you actually have a usable set of logits right there.
mick#2835: You can do a "last second" tweak to the training sample right before computing the loss
kindiana#1016: the example in the forward pass and backward pass must be the same?
kindiana#1016: otherwise the gradient computation doesn't work right?
mick#2835: Yes exactly
kindiana#1016: if you soften the logits using the ones computed in the forward pass I'm pretty sure it's the same as just reducing LR
mick#2835: It doesn't seem to pan out that way for me, I end up with better calibration
kindiana#1016: so essentially what you do is logits = f(input), target = mix(labels, logits), loss = crossentropy(target, logits)? |
mick#2835: Yes
mick#2835: I think I use KLD though
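Putting kindiana's pseudocode and mick's KLD choice together, a sketch; detaching the target side is an assumption, since mick doesn't specify it:

```python
import torch
import torch.nn.functional as F

def mixed_target_kld(logits, labels, t=0.9):
    with torch.no_grad():  # target side carries no gradient
        model_probs = F.softmax(logits, dim=-1)
        one_hot = F.one_hot(labels, logits.size(-1)).float()
        target = t * one_hot + (1 - t) * model_probs  # kindiana's mix()
    # KL(target || model); F.kl_div expects log-probs on the input side.
    return F.kl_div(F.log_softmax(logits, dim=-1), target,
                    reduction="batchmean")
```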
zphang#7252: potentially relevant: https://arxiv.org/abs/1909.11764
janus#0150: This group is so flop rich, so devops poor... 😦
Maybe we can hire dev ops with payment in kind. Dev ops people love flops.
janus#0150: I haven't checked this math, but that is very very bad.
kindiana#1016: @mick hrm isn't target = mix(labels, logits), loss = crossentropy(target, logits) the same as loss = mix(crossentropy(logits, logits), crossentropy(target, logits)) = mix(0, crossentropy(target, logits)) :thonk:
kindiana#1016: is KLD the thing that makes it work?
mick#2835: crossentropy(logits, logits) != 0 when logits is not a one-hot
mick#2835: also the mixing factor depends on the logits too, because of the process with forcing the ground truth word to 95% probability
mick#2835: (I messed up my math the first time through. Give me a second to correct it.)
mick#2835: Yes under Cross Entropy loss it would do as you said and basically just scale the loss (though it would do so by an adaptive factor for each sample, which itself may be non-trivial)
mick#2835: But under KL divergence the result is different in a way that doesn't seem to offer a simple explanation
mick#2835: With KLD you end up with $$-(t\,\text{target} + (1-t)\,\text{logits})\,\text{log}(\frac{\text{logits}}{t\,\text{target} + (1-t)\,\text{logits}})$$ instead of $$-t\,\text{target}\,\text{log}(\frac{\text{logits}}{\text{target}})$$
TeXit#0796: **mick** https://cdn.discordapp.com/attachments/729741769738158194/803392929192083486/206886091494653953.png
mick#2835: Thank you btw, I hadn't noticed that cross entropy and KL divergence reacted so differently to it.
mick#2835: I'm used to seeing them as nearly interchangeable with this stuff lol
mick#2835: *luckily* I just always write KLD because it's my opinion that KLD is what's "actually going on" and cross entropy is just a hack.
kindiana#1016: hrm interesting, thanks for writing it out, I might investigate it further
gunnar#7784: When learning machine learning, should I focus on a specific area such as NLP / GAN’s when studying, or do the concepts I’ll be learning apply to all areas |
bmk#1476: mathematical maturity is a math euphemism for :lurkmoar:
zphang#7252: see also: `research taste`
AI_WAIFU#2844: Guys I think OpenAI might have more compute than us https://openai.com/blog/scaling-kubernetes-to-7500-nodes/
Big Fat Duck#0266: i wish there was some sort of mature framework that allowed people to donate gpu to an open source model they want made
Big Fat Duck#0266: like remotely donate unused gpu
gwern#1782: there is, that's the whole point of boinc
gwern#1782: they provide something akin to a VM that distributed computing projects can inject their particular workload into
gwern#1782: it's just that flaky random consumer internet GPUs aren't very useful for most things, where you desperately need very high bandwidth low latency highly reliable interconnects to actually do anything useful
AI_WAIFU#2844: I see that I'm late to the party, as usual.
bmk#1476: we need to raise $1bn to spend on building our own cluster, duh
janus#0150: How does this change people's predictions about the size of GPT-4?
bmk#1476: well, OA isntcurrently working actively on GPT4
bmk#1476: or, if it is, it's in very early stages
janus#0150: ???
janus#0150: Why do you say that?
Big Fat Duck#0266: DALL-E is more interesting IMO
Big Fat Duck#0266: GPT-3 is good enough to generate dialog for people's anime waifus
triggerhappygandi#0001: Wow that's very surprising I wouldn't have thought it.
Big Fat Duck#0266: use GPT-3 output for DALL-E input, you got yourselves something special
Big Fat Duck#0266: OAI probably has 10 distributed systems experts hand crafting custom low level networking code |
gwern#1782: I think I disagree with that in light of sutskever's comments and other things. I also think I am going to take sam-sama's public comments a little less seriously, as dall-e/clip definitely came as a surprise
triggerhappygandi#0001: Ilya did say "language models of 2021 will make GPT-3 look like a child by comparison" instead of language models of 2022
chirp#4545: big thing i’m looking out for is their human feedback work
chirp#4545: ilya implied that there’s more that hasn’t been released
mick#2835: >adult gpt3
mick#2835: Where was this?
gwern#1782: I'm sure you can find links about large models at some subreddit, possibly named /r/mlscaling
chirp#4545: @mick https://blog.deeplearning.ai/blog/the-batch-new-year-wishes-from-fei-fei-li-harry-shum-ayanna-howard-ilya-sutskever-matthew-mattina, at the bottom
bmk#1476: a lot, because gpt3 is not data optimal
gwern#1782: because it overweights some corpuses?
gwern#1782: otherwise, I thought it was trained data-optimally, as it was <1 epoch
bmk#1476: we ran the numbers using kaplan paper equations and gpt3 data is apparently optimal for >1T
bmk#1476: we spoke with one of the authors to confirm that we werent doing anything dumb too
gwern#1782: well, yes, they had data left over and could've scaled further, but I thought for their available compute they had used the optimal model size & thus data
bmk#1476: no, i meant using the 300B tokens they trained on
bmk#1476: 300B tokens is enough for ~1T models
gwern#1782: oh, you mean they trained GPT-3 too long?
bmk#1476: apparently
bmk#1476: look, i'm as surprised as you are
Sphinx#2092: From a cross-entropy perspective, maybe. |
gwern#1782: hm. did they say why they did that? I don't recall them highlighting the 'bounce' off the ideal scaling curve as demonstrating the correctness of the extrapolations
Sphinx#2092: I would be wary of reading too much into these things.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/803469241047187466/unknown.png
bmk#1476: this is the information i have
gwern#1782: hm... wu only talks about the necessary data. not the compute
gwern#1782: like, obviously they have the data to train a bigger model while still being compute-optimal. as long as you have trained <1 epoch, then you by definition could've trained a bigger model until you hit exactly 1 epoch, and then past that you are no longer optimal
gwern#1782: he doesn't confirm that with the same amount of compute, they could've trained a 1t-param model
bmk#1476: no, he doesnt, but i thought we were talking about same data but bigger model (and thus more compute)
gwern#1782: since they were <1 epoch, they are compute-constrained
bmk#1476: yes, that is correct
bmk#1476: to train a GPT-3 size model compute-optimally, they should have trained it on less data
gwern#1782: so what could they have trained with the compute they actually spent, assuming infinite data?
bmk#1476: i dont know, im only talking about the optimal-model-size-per-data because the original thing i was responding to was "If you scaled GPT-3 10x with the same training data, what would you gain from it?"
bmk#1476: i guess they could have trained a slightly larger model for fewer steps
bmk#1476: but im too lazy to figure out the exact details
gwern#1782: ok, so your hypothetical was, given an unlimited compute budget, the 500GB or whatever they mentioned collecting would suffice to train a 1t-parameter GPT
kindiana#1016: with unlimited compute budget more data would always help lol
kindiana#1016: you could either do larger model to use the data compute optimally or more epochs
gwern#1782: (which I suppose is a good rebuttal to those who seem to seriously think text data might 'run out', but text is so easy to collect I doubt many here or at OA consider that a bottleneck for gpt-4, compared to the compute budget problems)
bmk#1476: i've confused myself with this tbh since it has been a while since i actually ran the numbers, but yes, "data is not the bottleneck" is, in any case, the right takeaway
bmk#1476: (and if it ever *does*, pile v2 will be on the way)
gwern#1782: (I thought you guys were softpedaling pile v2)
bmk#1476: if by softpedaling you mean not really focusing too hard on it, then youre right
bmk#1476: but it's still happening, just slowly
kindiana#1016: imo owit is more worthwhile in the short term
bmk#1476: agree
gwern#1782: yeah, it's hard because our imaginations are so impoverished. I get that feeling with GPT-3 and CLIP applications. it just... does stuff. *lots* of stuff. I can rationalize them after the fact, but before?
kindiana#1016: don't think anyone's done it, but anything less than ~40B I would be pretty impressed by
chirp#4545: i think openai was trying to do it at some point but i don't know if they did
cfoster0#4356: I think (relatively) low hanging next steps are: faster learning from feedback, combining language+audio or language+video, more sample efficient language models using other modalities
AI_WAIFU#2844: I think it depends on what you mean by distilled. Same perplexity? 1B with a great deal of effort and architectural innovation. Same amount of general purpose/random knowledge? Probably significantly more.
kindiana#1016: I'm gonna be floored if anyone approaches gpt3 ppl with 10B even lol
mick#2835: Also sign me up for the notification when I can download that lol
kindiana#1016: I'm not sure if there's been even like a 5x improvement in transformer parameter efficiency since the transformer was first introduced
kindiana#1016: closer to 2x I imagine
AI_WAIFU#2844: That's only 1 order of magnitude though. I think you can get there with aggressive parameter reuse + multiple epochs of training + distillation.
mick#2835: So you mean 1B params but similar compute?
AI_WAIFU#2844: Probably much more compute actually.
AI_WAIFU#2844: You get diminishing returns on compute as you do more parameter reuse.
AI_WAIFU#2844: 1B but 100x the compute.
kindiana#1016: it diminishes pretty quickly, I'm not sure if spamming compute will get you there lol
AI_WAIFU#2844: They said the same thing about parameters
cfoster0#4356: FLOPS are all you need
kindiana#1016: one parameter is all you need
kindiana#1016: just a realllly big one 😉
mick#2835: lol
mick#2835: so is that how that "universal" activation function works? :P
mick#2835: sorry lol
mick#2835: I am actually pretty curious how far something like "Deep Equilibrium" can go
AI_WAIFU#2844: As a side note, I would expect a model like that to be significantly more "intelligent" than GPT-3, since all of its parameters would need to be dedicated to picking up and using very general patterns in text.
AI_WAIFU#2844: Even if the perplexities are identical.
kindiana#1016: hrm interesting hypothesis
kindiana#1016: moe = worse generalization so parameter reuse = better generalization 🤔
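For concreteness, "parameter reuse" here is the ALBERT / Universal Transformer trick: apply one block's weights many times, trading compute for parameters. A minimal sketch (all sizes and the reuse count are made up; causal masking omitted for brevity):
```python
import torch
import torch.nn as nn

class SharedBlockLM(nn.Module):
    """One transformer block applied n_iters times: ~1 block of parameters,
    n_iters blocks worth of compute."""
    def __init__(self, vocab=50257, d_model=1024, n_heads=16, n_iters=48):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.block = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.n_iters = n_iters
        self.out = nn.Linear(d_model, vocab)

    def forward(self, tokens):
        h = self.embed(tokens)
        for _ in range(self.n_iters):  # same weights at every "layer"
            h = self.block(h)
        return self.out(h)

logits = SharedBlockLM()(torch.randint(0, 50257, (1, 16)))
```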
mick#2835: I'm curious what'll happen if you stack up a model with 12 regular layers, and then 12 DEQ layers
mick#2835: and t h i c c embedding dimension
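As a reference point, a Deep Equilibrium layer replaces the finite stack with a fixed point z* = f(z*, x). A naive sketch follows; a real DEQ uses a root solver (e.g. Broyden) plus implicit differentiation rather than plain iteration, so treat this as the shape of the idea only:
```python
import torch
import torch.nn as nn

class NaiveDEQ(nn.Module):
    """Iterate z <- f(z + Wx) toward a fixed point, re-injecting the input
    at every step so the equilibrium depends on x."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.f = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.inject = nn.Linear(d_model, d_model)

    def forward(self, x, max_iter=30, tol=1e-4):
        z = torch.zeros_like(x)
        for _ in range(max_iter):
            z_next = self.f(z + self.inject(x))
            if (z_next - z).norm() < tol * z_next.norm():
                break  # close enough to the fixed point
            z = z_next
        return z

out = NaiveDEQ()(torch.randn(1, 16, 512))
```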
mick#2835: Has anyone tried that reversible transformer thing?
mick#2835: The one that looks like a Feistel cipher
kindiana#1016: what about it?
mick#2835: Does it have a performance penalty?
kindiana#1016: not in my experience
mick#2835: It sounds too good to be true lol
kindiana#1016: it's just a "free" (25% compute cost) memory saving
bmk#1476: Also probably a bitch to implement
AI_WAIFU#2844: The real cost ^
mick#2835: It's interesting to me, it's *exactly* like the Feistel cipher design that used to be popular in symmetric crypto. Not just close.
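The resemblance is easiest to see in code: the two-branch update below is exactly a Feistel-style round, which is why the inverse is exact and activations can be recomputed instead of stored (the ~25% compute overhead mentioned above). A minimal sketch with toy F and G:
```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """y1 = x1 + F(x2); y2 = x2 + G(y1). Invertible exactly, like a Feistel round."""
    def __init__(self, f, g):
        super().__init__()
        self.f, self.g = f, g

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        x2 = y2 - self.g(y1)   # undo the second half-step
        x1 = y1 - self.f(x2)   # then the first
        return x1, x2

blk = ReversibleBlock(nn.Linear(8, 8), nn.Linear(8, 8))
x1, x2 = torch.randn(2, 8), torch.randn(2, 8)
r1, r2 = blk.inverse(*blk(x1, x2))
assert torch.allclose(x1, r1, atol=1e-5) and torch.allclose(x2, r2, atol=1e-5)
```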
mick#2835: Makes me wonder if the sponge concept makes sense lol
bmk#1476: how do sponges work anyways
bmk#1476: i vaguely know that you can basically put a bunch of entropy in and get a bunch of entropy out
mick#2835: Lol they "mix that shit up good"
bmk#1476: but are there any sort of special properties?
bmk#1476: also, why are we moving to sponges anyways
mick#2835: Basically not having any special properties is the special property
bmk#1476: for SHA3
kindiana#1016: big state space essentially
bmk#1476: the idea of sponges sounds iffy to me
bmk#1476: like youre shoving your data in through a toothpaste tube
bmk#1476: and then squeezing out an arbitrary amount of entropy out the other end
bmk#1476: and nothing is there to prevent you from squeezing out way more than you put in, which is obviously insecure
mick#2835: I think the security analysis being elegant is driving the hype
bmk#1476: or if you load in 1TB of entropy at once then squeeze 1TB out the other end, which obviously wont work because the sponge only has so much capacity, but if you put in a few KB at a time and take out a few KB at a time, it's fine
bmk#1476: it just makes me feel very uncomfortable
bmk#1476: the idea that this primitive can be horribly misused and you wouldnt even notice
mick#2835: If you use the standard magical assumption of the sponge function being totally unstructured looking it checks out, but maybe it's somehow "more brittle" in real life?
mick#2835: The thing is, in theory it's effectively close enough to fine to extract unlimited data
bmk#1476: y tho?
mick#2835: There's the elegant (made up) way of thinking about it and the dirty practical way
bmk#1476: if you put in 100 bits and take out 100000 bits, it's basically a random number generator with extra steps, no?
mick#2835: Serving as a good random number generator is one of the goals iirc
bmk#1476: ah
bmk#1476: so a sponge is like a unified hash function / rng?
AI_WAIFU#2844: That's how I understand it.
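A toy version of the absorb/squeeze picture, to make the capacity argument above concrete. The mixing function here is a stand-in (truncated SHA-256, which is not a permutation; real sponges like Keccak use one) and padding rules are omitted, so this is an illustration, not a secure construction:
```python
import hashlib

RATE, CAPACITY = 16, 16            # bytes; state = rate part || capacity part
STATE = RATE + CAPACITY

def mix(state: bytes) -> bytes:
    # stand-in for the sponge permutation (NOT actually a permutation)
    return hashlib.sha256(state).digest()[:STATE]

def sponge(data: bytes, out_len: int) -> bytes:
    state = bytes(STATE)
    # absorb: XOR each input block into the rate portion only, then mix
    for i in range(0, len(data), RATE):
        block = data[i:i + RATE].ljust(RATE, b"\x00")
        state = mix(bytes(a ^ b for a, b in zip(state[:RATE], block)) + state[RATE:])
    # squeeze: read the rate portion, mix, repeat -- arbitrary output length
    out = b""
    while len(out) < out_len:
        out += state[:RATE]
        state = mix(state)
    return out[:out_len]

print(sponge(b"a few KB at a time", 64).hex())  # hash, XOF, or RNG, same machinery
```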
mick#2835: It's a lot like a hash function, at least you can use a secure hash function as a secure sponge function, it would just be really slow.
mick#2835: I haven't done such low level crypto work in a while but I think there's some fun stuff you can do if it's a permutation instead of just a pseudorandom function though
mick#2835: Anyways before I trail off too far lol, the elegant reasoning is that so long as the underlying sponge function "looks random" then recovering the secret when properly used is a contradiction of it looking random
mick#2835: And it's unclear how to immediately recover the data, even if we find evidence that from some angle it doesn't look random, so it's kinda like a safety buffer in practice.
mick#2835: And if you're uncomfortable with that then digging deeper reveals that what you're really trying to solve is a big awful system of equations over a GF(2) where every time you "draw more" the degree of the new terms is going up
mick#2835: And as far as anyone can tell the best Gröbner basis algorithms still choke hard
bmk#1476: Gf2 is basically bits and the operations are logic gates right
mick#2835: yeah exactly
mick#2835: kinda amazes me that XOR and AND make a whole field lol
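This is easy to verify exhaustively: in GF(2), addition is XOR and multiplication is AND, and every field axiom holds on {0, 1}:
```python
from itertools import product

add = lambda a, b: a ^ b   # GF(2) addition: XOR
mul = lambda a, b: a & b   # GF(2) multiplication: AND
F = (0, 1)

for a, b, c in product(F, repeat=3):
    assert add(a, b) in F and mul(a, b) in F                 # closure
    assert add(add(a, b), c) == add(a, add(b, c))            # + associative
    assert mul(mul(a, b), c) == mul(a, mul(b, c))            # * associative
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))    # distributive
assert all(add(a, a) == 0 for a in F)  # every element is its own additive inverse
assert mul(1, 1) == 1                  # the one nonzero element is its own inverse
print("GF(2) is a field")
```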
bmk#1476: So what's the messy real world way of thinking about it?
mick#2835: Lol the system of equations over GF2 is that
bmk#1476: Oh
mick#2835: The logic with the function being indistinguishable from uniform random bits holds up without that
bmk#1476: Ah so it's more like an extra layer of protection
mick#2835: Yeah I just drilled that far into what goes wrong trying to break it because I was also uncomfortable with just assuming "no distinguisher == secure" lol
mick#2835: There's a lot of really good work on trying to solve them, crypto-optimized SAT solvers and stuff
mick#2835: I think some of that work is even almost starting to loosen up a little bit of some of the weaker layers of the smallest AES stuff lol
mick#2835: AES is a bad example though, it's such a dinosaur it's amazing it lasted this long. Kinda doesn't seem like it can be a coincidence with how many weird design choices he made, he knew something we didn't lol.
bmk#1476: someday i need to look into learning how AES works
bmk#1476: it seems like one of the classics
mick#2835: It's so *weird* lol. It uses a modular inverse over a finite field where everyone else just uses a table of random values lol.
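That design choice is short enough to reconstruct: the AES S-box is the multiplicative inverse in GF(2^8) (modulus x^8 + x^4 + x^3 + x + 1) followed by a fixed affine map, rather than an arbitrary random table:
```python
def gf256_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) with the AES reduction polynomial 0x11B."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def gf256_inv(x: int) -> int:
    if x == 0:
        return 0            # 0 has no inverse; AES maps it to 0 by convention
    r = 1
    for _ in range(254):    # x^254 = x^-1, since x^255 = 1 for nonzero x
        r = gf256_mul(r, x)
    return r

def sbox(x: int) -> int:
    b = gf256_inv(x)
    rotl = lambda v, n: ((v << n) | (v >> (8 - n))) & 0xFF
    return b ^ rotl(b, 1) ^ rotl(b, 2) ^ rotl(b, 3) ^ rotl(b, 4) ^ 0x63

assert sbox(0x00) == 0x63 and sbox(0x01) == 0x7C  # matches the published table
```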
mgostIH#0245: No
mgostIH#0245: Or wait
mgostIH#0245: You mean GF2 not GF(2^N)
mgostIH#0245: Because Galois multiplication isn't AND, unfortunately
Louis#0144: 5G is super disappointing
Louis#0144: Ngl
Louis#0144: I upgraded my phone
Sid#2121: i mean, what did you expect lol
Daj#7482: Mindcontrol or something cool like that
Louis#0144: It’s so slow
Louis#0144: And unreliable
Louis#0144: Even when right next to the cell tower
Louis#0144: I expected a robo 5g powered waifu
triggerhappygandi#0001: Unepic
triggerhappygandi#0001: It probably gives like gigabit range internet
triggerhappygandi#0001: Do you even need such speeds on mobile?
Louis#0144: I got 256kb/sec
Louis#0144: While standing right next to the cell tower
Louis#0144: Lmao
TylerRoost#8017: If you had to guess the "emergent" properties that the gpt-4 paper will focus on, what would they be? For example I would argue that the focus of the gpt-3 paper was on zero/few-shot learning as the "emergent" property. On a related note what signs from gpt-2 would we have been able to see had we thought long and hard that would have indicated that gpt-3 was going to have successful performance on zero/few-shot tasks and how would you use this information for your argument for gpt-4 capabilities?
I don't like the use of the term emergent because I suppose that these properties were present all along, with too many failure modes to deem as a real capability.
My guess is information relation coherence.
triggerhappygandi#0001: Prepare to get mind controlled by slow internet
bmk#1476: The next step, assuming not multimodal, is fully zero shot (only natural language description) on everything and reaching SOTA
kindiana#1016: idk if I would bet on that tbh
cognomen#6297: (robust, accurate) speaker modelling
bmk#1476: Being able to just describe your task in natural language seems to be the holy grail imo
bmk#1476: And gpt3 is still not quite there yet
cognomen#6297: paying more attention to the source of the information
CRG#8707: Natural language prompting? https://cdn.discordapp.com/attachments/729741769738158194/803645958046089216/12s_5wcMUPtr6XzwXEiaqnA.png
bmk#1476: Zero shot results are significantly worse than few shot
bmk#1476: Yes
bmk#1476: Specifically, I'm talking about the leftmost column
bmk#1476: Zero shot with natural language prompt
TylerRoost#8017: so one shot or am I misunderstanding
cognomen#6297: at the moment there's a lot of hype/scaremongering about that microsoft patent on training LMs on dead people's data
TylerRoost#8017: my belief is we will achieve deeper understanding before we achieve true zero shot with natural language prompt. For example domain knowledge equivalent to superhuman. Though I could be wrong and they could both come simultaneously
CRG#8707: Something like: Translate English to French: Cheese ->
TylerRoost#8017: I see
cognomen#6297: but presumably with enough capacity a large enough LM would already do that
cognomen#6297: for all the data it's encountered
cognomen#6297: understand who's speaking and imitate it to the best of its knowledge
CRG#8707: Something like this but zero shot would be impressive https://cdn.discordapp.com/attachments/729741769738158194/803647050859806731/gpt-3-askell-roish.png
TylerRoost#8017: what would be more impressive the prompt saying translate english to french, or giving an example ie cheese -> fromage, or are they equally valid prompts
CRG#8707: There are probably tasks where you only have a description and tasks where you only have examples.
TylerRoost#8017: very good point so both would be necessary
TylerRoost#8017: Would you argue zero-shot with prompt is more practically useful than say few shot with expert level domain knowledge across meaningful domains.
TylerRoost#8017: in the short term
CRG#8707: Zero shot with prompt seems more broadly useful, but few shot will probably work better (looking at the graph)
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/803649887668273192/Screenshot_2021-01-26-08-37-13-830_com.discord.png
bmk#1476: This number
bmk#1476: I think this number will be a lot higher in the future and that will be really interesting
TylerRoost#8017: that graph does not indicate depth of knowledge though, and I am not aware of any indication or graphs showing that LM's have been growing in depth of knowledge. Like I said the paper focuses on zero/few shot examples. My initial question is what "emergent" properties will the next paper focus on, and I don't foresee them putting out another paper with the main focus on these models being zero-shot, though I could see it being a notable mention. I agree that zero-shot is bound to improve over subsequent models. Though I would make the argument for long term effects depth of knowledge will be more important, even if few-shot
TylerRoost#8017: I also think that novel knowledge creation will be important, though not sure that gpt-4 or equivalent will be capable in that way, or that it will be the focus of the paper, though I'd be "pleasantly" surprised.
Louis#0144: Let me tell you, that is a dark dark rabbit hole
Ravna#1831: In the spirit of bitter lessons, we should treat "knowledge" as merely a human-constructed incomplete description of the accuracy/loss/perplexity metric.
CRG#8707: "Depth of knowledge"?
Louis#0144: I would know
TylerRoost#8017: Like domain expertise
Ravna#1831: The only thing that matters is perplexity and knowledge means nothing:sutton:
TylerRoost#8017: what specifically is a dark dark rabbit hole
TylerRoost#8017: Fair point, okay so novel solutions to tasks that require expertise domain knowledge
CRG#8707: But how to benchmark this?
CRG#8707: For GPT-f it was straightforward, but for anything else?
TylerRoost#8017: idk, I guess you would need experts to determine the reliability of novel solutions. What would be the easiest domain to test on? Something that has relatively many low hanging fruit?
TylerRoost#8017: What is this in regards too specifically
Louis#0144: Knowledge graphs
TylerRoost#8017: Like the general term, or specifically depth of knowledge graphs
Louis#0144: First of all knowledge graphs don’t have depth
Louis#0144: They have girth
TylerRoost#8017: Okay
Louis#0144: Rereading that perhaps my choice of words was poor
Louis#0144: But I don’t rly care
Louis#0144: Second of all you are probably interested in how well language models represent their graphical knowledge
Louis#0144: In which case I have a paper I’m working on right now about that but there’s *tons* of work on using attention weights to construct KGs
Louis#0144: And a lot of it is pretty explanatory oriented
Louis#0144: Might interest you
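The basic recipe in that line of work looks roughly like the sketch below; the model choice and the raw thresholding heuristic are illustrative placeholders, not any particular paper's method:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # illustrative model choice
model = AutoModel.from_pretrained("gpt2", output_attentions=True)

inputs = tok("Paris is the capital of France", return_tensors="pt")
with torch.no_grad():
    attn = model(**inputs).attentions        # one (batch, heads, seq, seq) per layer
A = attn[-1][0].mean(0)                      # last layer, averaged over heads

tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])
edges = [(tokens[j], tokens[i], round(A[i, j].item(), 2))
         for i in range(len(tokens)) for j in range(i)
         if A[i, j] > 0.2]                   # crude threshold -> candidate KG edges
print(edges)
```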
TylerRoost#8017: It seems like it would be very interesting. My understanding of knowledge representations is very limited unfortunately. Though when I mentioned knowledge graphs I was particularly referencing explanatory graphs that represent the differences between different-sized models' general expertise on a set of "narrow" tasks that require domain expertise
TylerRoost#8017: Thats not to say that I dont think that literal knowledge graphs wont be indicative of that growth
mick#2835: Call me crazy but I'm going to claim that GPT models have already been able to apply domain-specific knowledge in novel ways since at least GPT2
triggerhappygandi#0001: What is this _no prompt_ line?
Louis#0144: 100%
Louis#0144: They are indicative of that growth
Louis#0144: I’m working on a paper about that
Louis#0144: We have some cool results that I can’t share yet
CRG#8707: Few shot examples without a prompt explaining what the task is.
TylerRoost#8017: very great, I look forward to it.
mick#2835: Example: I put together a GPT2 application for suggesting electronics parts that match a description of a need, and one day I described a need for measuring the power across a component and instead of recommending a part to do what I said, it recommended that I observe the temperature instead of the power, and provided me a part number that would measure the temperature without adding a new sensor, by using interesting properties of the semiconductor junction in question.
mick#2835: And that's GPT2!
mick#2835: measuring the temperature ended up being a much better design in basically every dimension and I just hadn't realized there was a free way to do it
mick#2835: But that's not the point, the point is that it was trained to provide a part for a specific task, but it generalized that to suggesting a better task instead
TylerRoost#8017: Very interesting example, fair point, the initial question still stands what property will be the focus on gpt-4 paper in your opinion
triggerhappygandi#0001: ohhhhhh
CRG#8707: Now that I think about it, the GPT-4 paper will probably focus on the multimodality.
triggerhappygandi#0001: It sure will
CRG#8707: But the question about the "emergent" properties with more scale still stands.
Louis#0144: “Hey guys so GPT4 really loves paper clips”
Louis#0144: Multi modal as in it can learn to fire missiles from a drone and write stories
mick#2835: A carefully crafted story for each missile!
mick#2835: I'll bet you at least 3 figures that GPT 3 could already be fine-tuned for that
triggerhappygandi#0001: wtf
Louis#0144: You’re on, I’d love for someone to make my tikz figures for me
mick#2835: Hahaha
mick#2835: Now now I didn't mean anything *that* serious!
bmk#1476: You automatically lose because OA won't let you tune gpt3 check and mate
mick#2835: I'll draw svg's for you though lol
mick#2835: Drawing in LaTeX bad. Drawing in Inkscape good.
cfoster0#4356: Something something grounded language and semantics leading to faster and/or more robust learning across all of the modalities
mick#2835: that sounds about right lol
Ravna#1831: Natural languages are not grounded. They were evolved for humans to tell lies to each other in the first place.:neetz:
Ravna#1831: You ground a programming language to its semantics, or ground some formula to physics.
Ravna#1831: But you can't ground a natural language to anything except a bullshit fictional universe in human brains.
CRG#8707: https://slatestarcodex.com/2019/02/28/meaningful/ https://cdn.discordapp.com/attachments/729741769738158194/803668011536875520/56124b3b94a7d68fad1f72e219549ffb.png
mick#2835: Ground it to a real time approximation of that fiction like we all do!
cfoster0#4356: :yes:
jrowe#5371: gpt-neo is a social experiment, in which only three real people are allowed in the chat at any time - all other entities are gpt-3 instances, including the discord admins
jrowe#5371: https://tenor.com/view/tsgifs-woo-ric-flair-gif-14822242
Louis#0144: This works really well and ends up giving you practical algorithms for self play
Daj#7482: <|endoftext|>Hey guys what did you think about that recent paper? haha
Louis#0144: My language model is been I think, I just keep spouting off about neuro symbolic models
Louis#0144: :/
Louis#0144: <|eos|>
CRG#8707: advertisement
Ravna#1831: There might be a very small subset of a natural language that may be used for describing physical world/objects in a very inefficient way (that's why we invented math formulas in the first place). The >99.99% majority of a natural language has nothing to do with pictures/videos/physics, but is all about fictional constructs used for playing status games. Grounding that <0.01% part to pictures/video/physics doesn't do much.
FractalCycle#0001: a lot of expert knowledge is not currently declarative, so the best uses of NNs would be for developing the kinds of "intuition" that humans use on everything. And better kinds ofc.
lucasosouza#1061: Hi! Great project, congrats to those involved. I was looking for the dataset to pre-train a language model from scratch (GPT or Bert), ideally one similar to what was used in their paper. I couldn't find any references to the full pre-training dataset in gpt-neo repository. Anyone knows where I can find it?
CRG#8707: Small percentages can make big effects https://cdn.discordapp.com/attachments/729741769738158194/803670689126940742/7c2620f2b543f4359af4e7063e1a17f8.png
CRG#8707: <https://arxiv.org/abs/2010.05358>
cfoster0#4356: I dunno. For example, looking at images generated with CLIP steering, it seems to *really* understand the concept of "scariness" and what makes an image scary
Ravna#1831: I was just playing a half-serious devil's advocate towards the multimodal thing.
Ravna#1831: But it's half-serious, not totally joking.
Ravna#1831: Humans don't learn much about doing stuff from natural language material either.
Ravna#1831: You can't learn swimming by reading a book.
Ravna#1831: Even when you are learning math, you still learn more via doing (problems) than reading.
Ravna#1831: The whole natural language thing is just a huge universe for human entertainment/fiction that only occasionally leaks something practical/useful.
Daj#7482: We have the Pile, which is our attempt to build a GPT3 size dataset https://pile.eleuther.ai/
Louis#0144: There are issues with using the pile on smaller LMs @lucasosouza
Louis#0144: It’s really only for LMs with atleast a billion parameters
Louis#0144: If you have resources like that lying around knock yourself out
Louis#0144: It’s a great dataset
lucasosouza#1061: Thanks a lot @Daj and @Louis . Pile looks great. The idea is to train BERT-sized LMs, so something like 100M params.
Louis#0144: You can use the same dataset BERT was trained on
Louis#0144: Or use ELECTRA
Louis#0144: ELECTRA is 🥰
mick#2835: Electra is super interesting.
lucasosouza#1061: do you know where I can find it?
jrowe#5371: https://github.com/google-research/bert
jrowe#5371: "We then train a large model (12-layer to 24-layer Transformer) on a large corpus (Wikipedia + BookCorpus) for a long time (1M update steps), and that's BERT." |
jrowe#5371: So you could probably train your own BERT using the pile
jrowe#5371: and if you did it using Go it would be "GoBERT Pile"
Louis#0144: You would need a bigger BERT
Louis#0144: much bigger
StellaAthena#3530: BERT's training data is not public. However we have created an open source variant of it that's approximately the same
zphang#7252: https://tenor.com/view/the-avengers-avengers-loki-iron-man-hulk-gif-3550631
triggerhappygandi#0001: On one hand, good meme. On other, marvel
lucasosouza#1061: Thanks Stella, do you have the link for the approximate BERT dataset you mentioned? Is that Pile as well?
mick#2835: https://pile.eleuther.ai
mick#2835: https://eleuther.ai/projects/open-web-text2/
cfoster0#4356: Version of BookCorpus: https://github.com/soskek/bookcorpus/issues/27
spirit-from-germany#1488: I am wondering if a GPT-3 finetuned on Arxiv could actually make a helpful ML paper writing tool. Something that would make you 1 or several suggestions for the next sentence - and all the researcher has to do is to select a fitting line from the suggestions. 😄 ... I could imagine that an up to date finetuned GPT-3 could actually come up with useful stuff, if you'd give it some tries ... 😄 - Maybe not for the "Method" and "Results" sections, but for the introduction, the broader impact, eventually the conclusions ... 😄
TylerRoost#8017: Future work would be interesting to see what it would come up with
StellaAthena#3530: Yes, that’s explicitly one of the reasons we included arXiv in our training data
jrowe#5371: social simulations come to mind. Figure out how to constrain conversations to relevant ideotypes and run political or marketing message crafting
jrowe#5371: being able to automatically label verbal gimmicks and fallacies and the like, or to "neutralize" ideologically bent phrasing seems like the best, most relevant killer app, right now, though
Louis#0144: Does anyone know a good fusion in decoder implementation?
Louis#0144: cc @Aran Komatsuzaki
Aran Komatsuzaki#5714: @Louis i have no idea lol
Louis#0144: o ok
Louis#0144: np
Louis#0144: I cant find anyone with a working implementation
Louis#0144: just a bunch of references to it in papers
Louis#0144: lol
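For reference, the core of Fusion-in-Decoder (Izacard & Grave) is small enough to sketch in plain PyTorch: encode each (question, passage) pair independently, concatenate the encoder states along the sequence axis, and let one decoder cross-attend over all of them. The toy encoder/decoder and sizes below are placeholders, not the paper's T5 setup:
```python
import torch
import torch.nn as nn

d_model, n_heads = 512, 8
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True), num_layers=2)

def fusion_in_decoder(passages, target):
    # passages: (n_passages, seq, d_model), each already embedded
    encoded = encoder(passages)               # encode each passage independently:
                                              # no cross-passage attention here
    memory = encoded.reshape(1, -1, d_model)  # fuse into one long memory sequence
    return decoder(target, memory)            # decoder attends over everything

out = fusion_in_decoder(torch.randn(4, 32, d_model), torch.randn(1, 16, d_model))
print(out.shape)  # torch.Size([1, 16, 512])
```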
Visarch of Apollo,#7152: @jrowe This sounds like dystopian metal gear shit. The real killer app is what AIDungeon is already doing, creating a peripheral for the most successful entertainment system of all time.
jrowe#5371: entertainment is good, but getting a handle on social media seems to be a little more existentially relevant from where I'm standing - these tools are going to enable rational, scaled automated moderation, allow refined content filtering, and give people a common technical basis for understanding what is happening to moderated content
jrowe#5371: instead of "this information might be innacurate" flags on twitter posts, they could provide a semantic, detailed explanation for why a statement is misleading, or fallacious, or emotional
jrowe#5371: i mean, yeah, it could be used for evil, but if a whole lot of people have access to the same tech across a wide spectrum of uses, then abuses are going to be harder to get away with
Sid#2121: ```a radical anarcho-primitivist gets a job```
mick#2835: lol
bmk#1476: like, i dont even know what your political stance is because im too lazy to read that big wall of text but *please*, let's not continue this here
gdawg16#0493: i dont even know what a primitivist is, let alone an anarcho-primitivist
gdawg16#0493: :[
gdawg16#0493: a radical anarcho-primitivist walks into a bar
bmk#1476: @Visarch of Apollo, take it to #off-topic . last warning.
asparagui#6391: and says ouch
gdawg16#0493: :carlos2:
chirp#4545: https://www.wired.com/story/ai-go-art-steering-self-driving-car/
chirp#4545: https://cdn.discordapp.com/attachments/729741769738158194/803834160347086868/unknown.png
StellaAthena#3530: This is incredible 😮
https://www.youtube.com/watch?v=trJc_t_AqVY
janus#0150: The GPT-3 paper is misleading w/r/t the effectiveness of zero-shot prompts. I reran the French to English translation benchmark, altering only the format of the zero-shot prompt to make it more natural (colons instead of =>) and beat few-shot performance significantly. GPT-3 was not learning how to translate from examples; it already knows how to translate. The examples served to clarify the task via demonstration, so improved performance over 1-shot and the bad 0-shot prompt, but a natural language instruction can communicate the same thing more efficiently. In fact, few-shot examples are often counterproductive as they encourage GPT-3 to imitate (overfit, if you will) the semantic content of the examples where the task is intended to be more general. I find that examples are most helpful when the task is so specific that it's hard to stage in natural language, such as when you need the output to be in a particular format. But in general, contrived prompt formats like few-shot exploit very little of the potential of freeform natural language (the function GPT-3 was trained to predict) to encode intentions.
janus#0150: The number is a lot higher. In the translation example, going from 0 shot to 10 shot actually WORSENED the performance. Semantic meaning from the 10 examples leaked into its translation https://cdn.discordapp.com/attachments/729741769738158194/803852334148747304/unknown.png
bmk#1476: i mean, this plot was for one task in particular
bmk#1476: im not surprised if other tasks have different characteristics
janus#0150: Almost all the plots in the paper looked very similar to that as I recall. The point is that for some of them like translation, the conclusion that examples are necessary or even helpful is incorrect.
bmk#1476: gotcha
janus#0150: Of course I don't even know what task that graph is of, so my modification to the diagram was just a guess 🙂
bmk#1476: what was that recent paper about the best prompting techniques?
bmk#1476: https://arxiv.org/abs/2101.06804
bmk#1476: i think this might be relevant
janus#0150: Yeah, I think that's about picking the best few shot examples according to some measure of semantic meaning
bmk#1476: right, it's sorta related
janus#0150: I'm surprised there aren't more papers about prompt programming/engineering
bmk#1476: let's write one!
janus#0150: It's difficult to do general quantitative work though
bmk#1476: if you can think of a good methodology i'm 100% for it
bmk#1476: maybe we can even test on some of our models
janus#0150: I just wrote two and submitted them to some conferences a week ago or so :D, but I have many more things to say
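The change being tested is tiny; the strings below are a guess at the setup described (not copied from the benchmark code), just to show how little separates a "bad" and a "natural" zero-shot prompt:
```python
fr = "Le fromage est délicieux."

bad_zero_shot = f"French => English\n{fr} =>"                # contrived format
natural_zero_shot = (
    f"Translate French to English.\nFrench: {fr}\nEnglish:"  # colons, plain wording
)
few_shot = (
    "French: Bonjour. English: Hello.\n"
    "French: Merci beaucoup. English: Thank you very much.\n"
    f"French: {fr} English:"   # examples can leak their semantics into the output
)
```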
bmk#1476: exciting
bmk#1476: well, if you want to write one under eleuther affiliation im 100% for it any time
janus#0150: I have been doing "meta-prompt programming" by having GPT-3 write its own prompts for a given task
bmk#1476: are you just paying for tokens out of pocket?
bmk#1476: or do you have research credits
janus#0150: I'm open to that. I'm going to write some blog posts about the general ideas when I can find the time
janus#0150: credits
bmk#1476: and, on a related note, i wonder if we could ask for research credits
bmk#1476: what are the criteria for research credits?
janus#0150: Depends how they feel about GPT-neo 😅
bmk#1476: weve spoken with quite a few OA people
bmk#1476: they seem generally fine with us
bmk#1476: how do you apply for credits, btw?
janus#0150: There was an online form for researchers to apply. I don't know if thats changed now that they are letting more people on
mick#2835: If that's the case then they would need to change their name to almost anything other than "OpenAI"
bmk#1476: this is why im pushing for us to move away from replication
bmk#1476: we're not a replication group and have never been
bmk#1476: we just happen to have done a few replicationy projects
mick#2835: Replication is good science
bmk#1476: we're now working on shiny new research
bmk#1476: it is but that's not our niche, or at least it shouldnt be
mick#2835: If every group reproduced a few things before doing so much shiny new stuff, then ML wouldn't be so messy
bmk#1476: sure, we're not going to stop doing replication
bmk#1476: but i just want to make it clear that eleuther is not a "replication" group
bmk#1476: and if youre some lab, please dont decide not to release details because we'll replicate, because we probably wont
mick#2835: 100% agree, but I think we should also be clear that replication work is basically a scientific duty
mick#2835: every group should do *some* replication work and it should be *good* replication work not just easy crap
bmk#1476: i mean we've done more than our fair share i think
mick#2835: Way more I'd say, but it's good for sending the message
cfoster0#4356: FWIW we haven't replicated anything as of yet
cfoster0#4356: Except maybe OWT2
bmk#1476: we *will have* done our fair share
mick#2835: If nobody's doing replication then papers just become advertisements and the bar for quality can even drop into the territory of allowing outright misrepresentation of results
bmk#1476: sure
bmk#1476: but we should do more original stuff than we do now
bmk#1476: i guess pile is original
bmk#1476: the scaling law stuff im working on rn is original
mick#2835: For what it's worth I agree that original work is much much more worth the time spent lol
mick#2835: GPT3 is just a special case that needs to be verified imo
bmk#1476: i mean specifically for us, as eleuther
bmk#1476: moar original stuff
bmk#1476: we're already too far into gpt3 to renege but
bmk#1476: after/alongside gpt3 we can do more original stuff
mick#2835: I'm super antsy to get to the contrastive stuff already lol
bmk#1476: CLIP?
kindiana#1016: yeah tbh I think gpt3 is a prereq to a lot of the interesting alignment stuff
mick#2835: I want to go far beyond clip tbh
kindiana#1016: anything specific?
mick#2835: I'd like to try a richer context representation and more fine grained matching, and of course I'm curious about having more simultaneous modalities
mick#2835: I imagine something like parsing HTML pages and extracting both the text and the pictures as well as how they are spatially related
mick#2835: I also wonder if videos could be used as an unsupervised data source for matching the audio to the picture
MicPie#9427: https://cdn.discordapp.com/attachments/729741769738158194/803888260924571678/CATT.jpeg
Louis#0144: OpenAI doesn’t do science
Louis#0144: Full stop
CRG#8707: It's the one about removing symbols from words. https://cdn.discordapp.com/attachments/729741769738158194/803994234248101908/92d15dbcbaeb74e97f3421d4a67d0266.png
jrowe#5371: they do politics, pr, and advocacy, funded by software, and the dissonance around the whole "open", but not really makes them just another rent seeking special interest group
jrowe#5371: I'm irked with them and probably wrong
Daj#7482: "Company releases back to back absolutely mind boggling technical breakthroughs"
"They don't do _real_ research"
Daj#7482: :nooo:
jrowe#5371: but they sold out to the biggest walled garden in human history for no better reason than easy money
Daj#7482: Because we all know morality affects whether research is good or not
Daj#7482: (this is an intentionally silly phrased statement)
jrowe#5371: yeah, gpt 2 and 3 and the papers are definitely science
jrowe#5371: gpt*
Daj#7482: OpenAI Five, DALL-E, PPO :nooo:
jrowe#5371: ppo?
Daj#7482: Scaling Laws :nooo:
Daj#7482: Actually PPO might have been a different group
Daj#7482: oh and jukebox
zphang#7252: but connor, machine learning is just engineering
Daj#7482: Learning to Summarize from Human Feedback :nooo:
jrowe#5371: hah
StellaAthena#3530: The Manhattan Project, the Internet, and RSA are all clearly major research breakthroughs
Daj#7482: Not _real_ research :nooo:
StellaAthena#3530: At the same time, it's hard to be a bigger sell out than doing it for the US military
Daj#7482: Deep Double Descent :nooo:
Daj#7482: Emergent Tool Use In Multiagents :nooo:
Daj#7482: OAI is not real research :nooo:
jrowe#5371: I don't think that's an honest comparison - first mover advantage with nukes is a wildly different level of existential threat
Daj#7482: Yea, much less
Daj#7482: lmao
StellaAthena#3530: Honest comparison to what?
jrowe#5371: oa releases of research to date
Daj#7482: https://openai.com/blog/openai-baselines-ppo/ PPO totally was OAI, I knew it
Daj#7482: :nooo:
jrowe#5371: there's a possibility they may produce agi or some principles leading to it, but theres not any indication that they're producing existential threats, competing against the other superpowers
Louis#0144: Things that aren’t reproducible aren’t science
Louis#0144: Sure we can reproduce GPT3
Louis#0144: But can we do this two or three LMs from now
Daj#7482: Well then I guess all of biomedicine isn't science lmao
Louis#0144: Biomedicine is reproducible
Daj#7482: With a billion dollars
Daj#7482: GPT3 is _way_ easier than reproducing medical research
Daj#7482: :nooo:
Louis#0144: If it isn’t reproducible then it isn’t science 🤷♂️ reproducing GPT3 even at a small scale is hard
nz#9710: I mean CERN research currently isn't really reproducible either, but I don't really think one can argue CERN doesn't do research
Daj#7482: :nooo:
Louis#0144: The results CERN produces are reproducible or verified beyond all doubt
Daj#7482: lmao
Louis#0144: Like a massive statistical significance
Louis#0144: They go for like five or six sigma usually
jrowe#5371: so they say
jrowe#5371: you ever even been there?
Louis#0144: LMAO
jrowe#5371: could be a bunch of 4chan trolls...
Daj#7482: I'm not convinced Switzerland is a real place
Daj#7482: It seems pretty absurd tbh
nz#9710: yea, but has it been reproduced independently?
Louis#0144: Smaller scale experiments sure
Daj#7482: EleutherPhysics :ultrazucc:
Louis#0144: There’s more than one collider in the world
Daj#7482: lmao
Daj#7482: "CERN is reproducible, GPT3 isn't"
Daj#7482: :bigbrain: take right there
StellaAthena#3530: @jrowe Oh yea. I wasn't saying that OAI = US Military
Louis#0144: LMAO but I mean like a lot of results already coming out of CERN *are* reproduced elsewhere
andyljones#7746: any time you find yourself zooming in on the specific meanings of specific words, you're making a crap argument
Louis#0144: 🤷♂️
jrowe#5371: noooo, not semantics!
jrowe#5371: lol
StellaAthena#3530: I was saying that **if** the internet, RSA, etc. weren't disqualified from being "real research" by virtue of being done for the US military, **then** it doesn't make sense to disqualify OAI's stuff from being "real research" by virtue of being done for $$$$
Louis#0144: agree to disagree
jrowe#5371: ah, that makes sense
Daj#7482: If you can reproduce a Higgs Particle for cheaper than GPT3, I have some job offers for you
StellaAthena#3530: So you think that 99% of ML isnt science? This critique has nothing to do with OAI
Louis#0144: Yes but the norm in DL is to release models and source code....
StellaAthena#3530: ... which don't work
jrowe#5371: I just think the science is a side effect of their mission of advocacy, and at this point, their goals of extracting money from a pristine market
Louis#0144: Yeah
Louis#0144: I agree with that
Louis#0144: 100%
Louis#0144: They aren’t a research org
andyljones#7746: Doesn't stop it being science
Louis#0144: The research is a side effect
StellaAthena#3530: If you randomly sample 10 papers published at NeurIPS I would bet a sizable amount of money than 0 or 1 of them have GitHub repos written by the authors that can be downloaded and run as-is
Daj#7482: I personally know many people at OAI and this is just false. Maybe some people, but all of the researchers? Just uncharitable
jrowe#5371: agreed, and it's good science
andyljones#7746: @Louis Your position is sounding a lot like 'science is good and I like it but I don't like openai therefore it isn't science'
Louis#0144: What
Daj#7482: The morality makes the fact
Daj#7482: Virtue Theory of Science :ultrazucc:
Louis#0144: Not even slightly though
Daj#7482: :smallbrain: Virtue Theory of Metabolism
:bigbrain: Virtue Theory of Science
Louis#0144: I would argue that most applied DL papers without source code or models are doing a shit job
Daj#7482: I should write a LW post about this
Louis#0144: Nonapplied papers don’t need that
StellaAthena#3530: > If you randomly sample 100 papers published at NeurIPS I would bet a sizable amount of money that < 10 of them have GitHub repos written by the authors that can be downloaded and run as-is
And I'll follow this up with "anyone who accepts this bet is a sucker who doesn't have a clue what they're talking about." It makes it hard to take your criticisms of OAI seriously when they apply to virtually everyone in the entire field
Louis#0144: Im not saying they don’t apply to the entire field
Louis#0144: I called OAI because they are in a position to lead the field by example
Daj#7482: Motte and bailey
andyljones#7746: You volunteered a defn of science, other people put a CERN-sized hole through it, you retreated to 'but they aren't a primarily research org', *c'mon* raise your game
Louis#0144: The difference is the norms in the fields
Daj#7482: It's fine Louis we still love you
Louis#0144: At the end of the day
Daj#7482: But this is a bit silly
Louis#0144: Like think of it this way (last argument)
Louis#0144: If what OAI is doing catches on
Louis#0144: DL as a whole will be incredibly gate kept
Louis#0144: No one wants that
jrowe#5371: the organization is behaving in the sneaky, misleading ways other lobby groups behave, masquerading as virtuous with ulterior motives, and pulling what many perceived to be a classic bait and switch with gpt-3, at the behest of Microsoft, with some really weak post-hoc justification
Louis#0144: Literally no one wants that
Daj#7482: The organisation is not the researchers doing the work
Louis#0144: They are in a position where they need to lead by example
andyljones#7746: Right okay, this might be a defensible argument. But not wrt whether it's science or not, yes?
jrowe#5371: right, and the work they do obviously speaks for itself
Louis#0144: That’s true
jrowe#5371: they've defined the bleeding edge for years to come
andyljones#7746: Ok. In future, only claim the territory you're willing to defend.
andyljones#7746: Aight, onto the meat of things: imo, what's happening to DL is necessarily what happens to research when it starts mattering
andyljones#7746: Look at electricity, telephones, computers, the internet
jrowe#5371: it's what the org did with their work that I find objectionable - I don't think it ends well, giving Microsoft that level of advantage
Louis#0144: Wdym
andyljones#7746: All started as garage projects and while that's lovely and accessible, it doesn't turn the world over by itself.
Louis#0144: I see
jrowe#5371: that bleeds into us IP and patent law
andyljones#7746: As a rephrase: how d'you imagine DL expanding into something the scale of 'computers' while remaining entirely accessible? I'm sure there were a bunch of 70s greybeards fucked off about this whole Intel thing making CPU research way too expensive
andyljones#7746: Not to claim it's not possible to be more or less accessible, but even the more accessible end of plausible futures is going to be a damn sight less accessible than today
Daj#7482: Please turn this into a blog post so I can link it to all the journalists asking us "but what about accessibility???!!!"
jrowe#5371: gpt-2 ended up being on the extreme edge of accessibility, with maybe less than 1% of computer users being able to run it for themselves, and some tiny fraction of that able to understand or modify it. maybe the logistics of all this makes accessibility and gatekeeping a moot point, and it'll only be relevant to discuss in ten years when we can run it with gpt for dummies apps
mick#2835: Science is not just a study, but crucially is a systematically organized study, and therefore issues with the systematic organization are issues with the quality of the science itself.
jrowe#5371: in this case, oa produced high quality products from their research, so the science and commercial interests aligned
jrowe#5371: I don't think you need to worry about quality or bias yet - subsequent research becomes suspect, though, if it remains closed
mick#2835: Call me a heretic but I wouldn't be surprised in the slightest if DALL-E results are just maybe slightly more cherry-picked than "not cherry picked", for example
Daj#7482: I dunno OA has been pretty consistently knocking it out of the park with results
mick#2835: People make unconscious mistakes and openai isn't immune to that
Daj#7482: GPT3 was amazing to me post-API too
Daj#7482: Lets say I trust them more than any "improved transformer" paper lmao
Daj#7482: also Big Sleep is friggin' cool and that's like the most ghetto version of DALL-E imaginable
mick#2835: I think GPT3 is great and seeing its failure modes doesn't take away from it in my view either
nz#9710: I love "ghetto DALL-E"
Daj#7482: Well we'll just make our own and see how good it is
Daj#7482: :ultrazucc:
mick#2835: However I don't have some kind of interface which would allow me to play with DALL-E sufficiently to even get a realistic estimate of what the capabilities are
Daj#7482: Yea, ofc
Daj#7482: Just saying my prior is higher than in...most of all other science lol
Daj#7482: Even DM produces some crap with their GOFAI stuff
mick#2835: so really all of the hype around DALL-E is 100% fluff to me because actually it tells me nothing that I wasn't already completely and absolutely confident in the ability of neural nets to provide
Daj#7482: Well yeah, we're an outlier though lol
Daj#7482: I've gotten a number of calls from various people that had their "GPT3 moment" with DALL-E
Daj#7482: Some people still earnestly argue with me that NNs are a dead end
mick#2835: Lol I realized AI is gonna work the first time I collected n-grams and made a Markov chain from IRC logs
mick#2835: If that's what you mean by GPT 3 moment
mick#2835: Or I guess more specifically, the first time I saw such a model produce a unique utterance that was not an exact match of something somebody had said before but was something that they agreed that they would say, and is true about them lol
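That experiment fits in a dozen lines, for anyone who wants the same moment (toy corpus included):
```python
import random
from collections import defaultdict

def train(lines):
    chain = defaultdict(list)
    for line in lines:
        words = line.split()
        for a, b in zip(words, words[1:]):  # record each bigram successor
            chain[a].append(b)
    return chain

def generate(chain, word, n=15):
    out = [word]
    while len(out) < n and chain.get(out[-1]):
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

logs = ["the model is learning fast", "the chat never sleeps",
        "is the model just compressing the chat"]
print(generate(train(logs), "the"))
```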
Daj#7482: Yea you are far more perceptive (or with the ability to generalize and extrapolate) than most people lol
Daj#7482: People are great at deluding themselves into "tech can do X, but it can't do X+1 right now, so it probably can never do X+2"
Daj#7482: Nevermind putting exponentials in there
mick#2835: I think that boils down to... Well Eliezer nailed it when saying people are really bad about spotting logical contradictions lol
Math ap Mathonwy#7453: I have had similar experiences and I find that BAFFLING. Utterly.
Even if nothing else, neural nets and the training methods are at the very least a significant, more flexible extension of regression techniques.
Even if it doesn't produce AGI, the potential to enable new science and new science understanding is, IMO, immense.
Math ap Mathonwy#7453: that's the WORST case, IMO, for Neural Net methods.
nz#9710: I may be wrong, but IMO a lot of people just suffer from sunk cost fallacy
nz#9710: In my uni's CS department there are three professors into AI, all arguing that current deep learning methods are way over hyped.
nz#9710: One of them has spent the last 30 years studying evolutionary algorithms, another is a strong believer in bayesian learning and the last one has focused for the past 20 something years on symbolic AI.
Daj#7482: I like Eliezer's writing about situations like this, where you should probably just stop, drop and catch fire
Daj#7482: But hey, tenure exists for a reason
Daj#7482: ¯\_(ツ)_/¯
nz#9710: Like, even though I disagree with them (especially since I think they're making a disservice to students) I understand why they're not so keen on switching to the new paradigm
Daj#7482: yea ofc
mick#2835: I don't think there's any ways to say without sounding rude, that if you've worked on something for decades and then some new thing pops up and leaves you in the dust on the exact thing you've been trying to achieve, perhaps you should look into that new thing that popped up a bit before dismissing it and saying that good old decades-old whatever it is that you've been working on is going to pay off Any Minute Now™
Daj#7482: yup
Math ap Mathonwy#7453: well I harbor suspicions that the publish or perish environment in academia is actually fostering a very deep level of intellectual un-curiosity. If it's outside of whatever they need to get their nth paper published. They. Don't. Care.
Daj#7482: something something Gary Marcus
Daj#7482: That's why I'm here instead of doing a PhD heh
mick#2835: ^
Math ap Mathonwy#7453: lol I wish "here" existed before I did my PhD
jrowe#5371: I almost feel bad for IBM Watson folks
mick#2835: It'll make us look good to have someone with a PhD anyways lol
jrowe#5371: theyre getting their lunch eaten every 3 months
DR.PROACT#2111: I have an MD if anyone needs that
DR.PROACT#2111: 😅
Daj#7482: can u precscribe ritalin
Daj#7482: lol
Daj#7482: The most efficient way to speed up ML research
mick#2835: Lol Adderall*
nz#9710: wait you guys take it? can it fix procrastination? lol
Daj#7482: I actually like Ritalin more
Daj#7482: I used to, but stopped due to side effects
Daj#7482: It can fix procrastination...or make it much, _much_ worse
jrowe#5371: research chemicals, gone meta
jrowe#5371: *lets just format all the documentation in the perfect font*
Math ap Mathonwy#7453: I'd suggest modafinil, but I have a sleep disorder so my neurologist prescribes it for me.
Daj#7482: Stimulants make you concentrate really hard on what you're doing
Daj#7482: Working? Work really hard
jrowe#5371: adrafinil is legal
Daj#7482: Procrastinating? _Procrastinate really hard!_
jrowe#5371: and no prescription needed
jrowe#5371: up the dose and get roughly the same benefit as modafinil
Daj#7482: Different drug class, it suppresses sleep but doesn't give concentration
Daj#7482: for most people at least
jrowe#5371: a restricted, disciplined caffeine regimen can give a lot of the same benefits as more potent stimulants, too
jrowe#5371: but who wants to only have caffeine once a week, anyway
Math ap Mathonwy#7453: huh, I hadn't really thought about that. I just ended up on it serendipitously as my doctor tried things because I couldn't handle the ritalin or amphetamine side effects.
Daj#7482: if it works for you I'm super jealous, it didn't work for me
Daj#7482: Caffeine is much closer to modafinil than amphetamines imo
Daj#7482: energy vs concentration/dopamine (and also energy)
jrowe#5371: the intense amphetamine focus never quite happens with caffeine