gwern#1782: @Daj in a very similar vein you might enjoy this NYer article that just went up: https://www.newyorker.com/magazine/2020/11/30/the-art-of-building-the-impossible
Daj#7482: I shouldn't care but I'm a sucker for good (real or fictional) aesthetics
gwern#1782: @Daj and the Minnesota cave thing I mentioned: https://www.outsideonline.com/2414888/john-ackerman-caves-minnesota
Daj#7482: Do you think I even know what an auction house looks like?
Daj#7482: But thanks for the tip if I ever need to pull off a heist and find targets/collaborators
gwern#1782: I think they mostly just look like very nice hotels. at least, all of the media photos I've seen of places like Christies do
gwern#1782: anyway, I see restorationists as an example of the countless very well paid niches you've never heard of, one of those careers like chicken sexers or emergency naval salvagers (which involves a surprising amount of computer modeling to figure out how to safely refloat giant cargo ships etc)
Sid#2121: apparently you can just get @-Archivist to make a fake for you instead of pulling off a heist
Sid#2121: sorry a 'replica'
Daj#7482: No, that's what I use to replace the original
Daj#7482: Have you never watched heist movies?
Sid#2121: I wonder how many of the world's most famous paintings in museums are really replicas, and the real thing is just in some rich guy's living room or a warehouse somewhere
Daj#7482: egg...sexers?
gwern#1782: sorry, chicken sexers
Daj#7482: "emergency naval salvagers" also is dripping with ***AESTHETIC***
Daj#7482: Holy shit this is such good fiction material
Daj#7482: haha
gwern#1782: oh, more restoration links in https://www.gwern.net/newsletter/2019/11#green-2013 some great stuff like https://publicdomainreview.org/essay/exquisite-rot-spalted-wood-and-the-lost-art-of-intarsia/
Daj#7482: _Of course_ gwern has more stuff
Sid#2121: how did you start out doing restoration if i can ask, @-Archivist ? Art student? Conservationist?
gwern#1782: https://www.wired.com/2008/02/ff-seacowboys/?currentPage=all the ship salvage article I was thinking of
gwern#1782: but they're all great articles worth reading nyo~run~
bmk#1476: Wow, we have all the cool cyberpunk people in one room
Daj#7482: :jc:
Sid#2121: you can only stay here if you fake us a rembrandt @-Archivist
gwern#1782: (I did consider getting one of those but after looking into them, they were absurdly expensive and the LL Bean goat leather bomber jacket was very nice and I found one for $115 on ebay, a total steal. it's a little ironic to wear it in town because there are actual bomber pilots there, but so far no one's yelled at me for 'stolen valor' so it's fine)
Daj#7482: :jc:
Daj#7482: ^ gwern irl
andyljones#7746: @-Archivist since it's my one point of contact with restoration - what's the professional opinion on the baumgartner youtube channel?
bmk#1476: Interesting
bmk#1476: I hear that commercial pdf parsers are significantly better, although we will obviously need to test all options
bmk#1476: @-Archivist are you already mirroring arXiv?
bmk#1476: Ah, perfect
bmk#1476: One minor issue with arXiv is that the dumps cost money to download from aws
bmk#1476: Not a lot of money, but it does make it mildly annoying
bmk#1476: So a mirror of pdfs+sources would be very nice
bmk#1476: I know the arXiv people ask that the data not be mirrored, but.. *gestures furtively at literally all of The Eye*
Daj#7482: I admire your dedication to your craft hah
gwern#1782: (manga is really difficult to transcribe text from. there's a bit of research on it, even just localizing the text is hard)
Airatak#7842: Well if you are looking for text from manga, you could reach out to the scanlation groups. I used to work in a few. They have a ton of transcribed docs and are usually willing to share. I know this is a bit intensive, but the larger groups have over 100 titles with a few hundred chapters each, which is a decent amount of data.
bmk#1476: the median manga has less text than the typical book
bmk#1476: and 100 books is not a lot of text
Airatak#7842: Yea, it is true that manga has significantly less text than normal books. Light Novels are a better alternative.
bmk#1476: even still
bmk#1476: assuming it has as much text as a regular book
bmk#1476: 100 books isn't a lot
bmk#1476: 1000 books isn't even a lot
Airatak#7842: oh light novels, you could get a ton
Airatak#7842: Say, you get 1000 titles, each title has 12 volumes on average
bmk#1476: about 100,000 is how many you need before it's worthwhile to think about
bmk#1476: that's the *lower* bound
Airatak#7842: hmmm
Airatak#7842: That is achievable I guess
bmk#1476: anything on the order of 1,000,000 books is what i'd ideally be looking for
bmk#1476: for instance, libgen is 2M books
Airatak#7842: Yea but distributing a dataset based on libgen would run into a ton of copyright issues
bmk#1476: it would
bmk#1476: but let's just think about size for now
bmk#1476: is there anywhere else where we can find >1M books, copyright or not
bmk#1476: ideally, books that wouldn't be on libgen
Airatak#7842: I can make a dataset of around 100k light novels, each being around 50,000 words
Airatak#7842: I already have around 9k with me, I downloaded them as a need to read but never got around to it
bmk#1476: so one thing is i want to make sure there's little overlap with other data
Airatak#7842: But other than that, I don't think there is anything with 1M+ books which won't be on libgen
bmk#1476: so can you pick a few random ones to see if there's overlap with bibliotik and with libgen
bmk#1476: you can browse bibliotik here https://the-eye.eu/public/Books/Bibliotik/
Airatak#7842: Ok let me check
bmk#1476: and also search through libgen too
bmk#1476: actually do libgen first
bmk#1476: that's probably more interesting
Airatak#7842: just to make sure, the english version only right?
bmk#1476: we would prefer to have both the original and the english versions and any other versions in other languages that might be out there
bmk#1476: as many as possible
bmk#1476: more is better
Airatak#7842: Ok, so Libgen only has the english versions for the mainstream ones
bmk#1476: what about some rarer lns?
bmk#1476: are they on libgen at all
Airatak#7842: Well the top ones like 'That Time I Got Reincarnated as a Slime' and 'The Rising of the Shield Hero' are on Libgen
bmk#1476: i have no idea what those are, tbh
bmk#1476: what about some not-mainstream ones
bmk#1476: are they in libgen
bmk#1476: and also would they be in ff.net/ao3?
bmk#1476: we *might* be including ff.net too so overlap there is also best avoided
Airatak#7842: The lesser known ones, like 'Gamers!' or 'And you thought there is never a girl online?' are not there
bmk#1476: ok that's good
Airatak#7842: And a lot of them have a few volumes but not the rest
bmk#1476: perfect so your dataset will be very helpful
Airatak#7842: Like Libgen has 'Classroom of the Elite' Volume 4 but none of the others
bmk#1476: do you think these would be on ff.net and ao3 or are those websites for completely different things
Airatak#7842: Oh no they won't
Airatak#7842: these are original works
bmk#1476: oh ok
bmk#1476: got it
bmk#1476: ok then definitely go ahead and make the dataset, it sounds very valuable
Airatak#7842: Cool, will do. I'll make a dataset with all lightnovels and volumes. Japanese and English. Crossovers can later be checked and removed using the libgen api.
bmk#1476: awesome
bmk#1476: and if there are any other language ones, include those too
Airatak#7842: sure, there are a ton in Korean and Chinese too
bmk#1476: perfect
Airatak#7842: It'll take me a bit of time tho, I'm kinda occupied with some stuff
Airatak#7842: by when would you need this?
bmk#1476: oh no hurry
bmk#1476: this isn't going to make it into v1 so you have all the time you need
bmk#1476: and v2 is.. idk, probably a year or two down the road
Airatak#7842: oh cool, I can get this done before the end of this year
bmk#1476: yeah no hurry
bmk#1476: heck, if you can get it done by end of *next* year that would be fine
Airatak#7842: oh cool
bmk#1476: actually maybe that might be a bit far, details for v2 are still up in the air
bmk#1476: but yeah there's no rush
Airatak#7842: Quick question: you want a text corpus, right?
Airatak#7842: Or just pdfs and epubs
bmk#1476: text corpus
bmk#1476: raw text, preferably nice and clean
Airatak#7842: cool
CKtalon#7792: generally (Chinese-translated) web novels are around 2-4 million words. There are probably a thousand of them.
Airatak#7842: 2-4 million!? I don't think that is correct
Airatak#7842: Japanese Web and Light Novels are around 50k per volume
CKtalon#7792: Chinese web novels are different
CKtalon#7792: they go to thousands of chapters
CKtalon#7792: each chapter being about 1500-2000+ English words
Airatak#7842: Huh.. these seem to dwarf one piece
Airatak#7842: Cool, I was scraping Korean Web Novels, I'll add Chinese ones to the list also
CKtalon#7792: korean web novels are like 10-20% of what Chinese ones are
CKtalon#7792: of course, since they are translations, the English might not be as good as native English writing
CKtalon#7792: more translationese and "chinese" terms
CKtalon#7792: but I guess diversity
bmk#1476: unrelated but are you chinese
CKtalon#7792: by ethnicity, yea
bmk#1476: do you speak chinese
CKtalon#7792: yes
bmk#1476: One more Chinese speaker!
CKtalon#7792: heh
bmk#1476: (actually I don't know how many people here speak Chinese, but I don't think many)
CKtalon#7792: discord needs a VPN to get over the firewall (lol)
bmk#1476: ah, you're in mainland China?
CKtalon#7792: Singapore
bmk#1476: ah
CKtalon#7792: I have a 133+GB corpus of Chinese web novels (in Chinese).
And I did scrape ~81GB of English novels (but probably overlaps with biblio)
bmk#1476: i will take all 133GB of that
bmk#1476: where can i get em
CKtalon#7792: let me see where I can upload them
bmk#1476: and is there a writeup of how you did it somewhere
CKtalon#7792: i just scraped pirated sites
CKtalon#7792: but it's all in one file
CKtalon#7792: i didn't bother separating them
bmk#1476: hm, that might be a problem
CKtalon#7792: you can scrape it yourself i guess
bmk#1476: does the script work for servers from outside china
CKtalon#7792: yea
bmk#1476: i know some sites have ip blocks
bmk#1476: ah
CKtalon#7792: i scraped it from SG
bmk#1476: perfect
bmk#1476: if you could post the script that would be best
CKtalon#7792: let me clean it up..lol
Airatak#7842: Btw you guys have the US congress records? There are a ton of those and if you consider the records of state legislatures then you'll get even more
bmk#1476: We decided not to include them
bmk#1476: There's a section in our paper explaining why
Airatak#7842: oh ok
bmk#1476: The tldr is they were *really* racist back in the day
Airatak#7842: That makes sense
Airatak#7842: But well, you can get the ones after 1900 I guess
Airatak#7842: Or the gov. records of other countries
Airatak#7842: Like the UK, Canada or Australia
Airatak#7842: They are generally really easy to obtain and there is a ton of them available
CKtalon#7792: https://pastebin.com/jMRk9JdW
bmk#1476: Thanks!
CKtalon#7792: after scraping, you'll need to delete all the files that are below a certain size
CKtalon#7792: there are like "empty" files on that page
CKtalon#7792: not really difficult
bmk#1476: yeah don't worry i can handle that
CKtalon#7792: might have corrupted UTF encodings too
CKtalon#7792: i'm not sure of the parsers that work for Chinese
bmk#1476: i can fix that up
bmk#1476: i've had the exact same problem elsewhere so it'll be an easy fix
CKtalon#7792: but it's like ~100 files at most
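The cleanup CKtalon describes (deleting the "empty" placeholder files and fixing corrupted UTF-8) can be sketched in a few lines. This is a minimal sketch, not their actual pipeline: the function name, the 1 KB "empty" threshold, and the one-file-per-chapter layout are all assumptions.

```python
import os

# Hypothetical post-scrape cleanup: drop near-empty placeholder files,
# then re-encode the survivors as valid UTF-8, discarding corrupt bytes.
MIN_BYTES = 1024  # arbitrary guess at what counts as an "empty" page


def clean_scraped_dir(path):
    kept, dropped = 0, 0
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if not os.path.isfile(full):
            continue
        if os.path.getsize(full) < MIN_BYTES:
            os.remove(full)  # an "empty" file from the scrape
            dropped += 1
            continue
        with open(full, "rb") as f:
            raw = f.read()
        # errors="ignore" silently drops undecodable byte sequences
        text = raw.decode("utf-8", errors="ignore")
        with open(full, "w", encoding="utf-8") as f:
            f.write(text)
        kept += 1
    return kept, dropped
```

For ~100 bad files out of a whole corpus, a crude pass like this is plenty; anything fancier (chardet, transcoding from GBK/Big5) only matters if the source pages weren't UTF-8 to begin with.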
Airatak#7842: So far the titles I checked for the light and web novels, there seems to be around a 10% overlap with Libgen and Bibliotik. Hopefully this decreases as I start to scrape more unpopular books and books in different languages
Airatak#7842: I've gotten around 25 GB of data so far (125000 chapters)
Airatak#7842: I think once I'm done with all the Books from all the different sources, it'll cross 1TB
StellaAthena#3530: We have the EU Parliament
Airatak#7842: Should the text corpus be filtered or formatted in a certain way?
Airatak#7842: Also, does anyone know of a decent way to convert epubs to txt? The python scripts I found online seem to be putting each section in twice
kindiana#1016: shawwn wrote something like that for bibliotik
StellaAthena#3530: It should be stored as a `.txt` file. Beyond that, anything that is comfortably readable to a human is good. Since we are working character by character you’ll need `\n` to denote new lines, but other forms of punctuation shouldn’t need to be escaped. @bmk is there a symbol we are using to denote the tab character?
If there’s any HTML, remove as much as you can with regex. If there are files that are hard to remove HTML from for some reason, we’d rather scrap a subset of the data than include junk data as a general rule.
I need to double check with @bmk (our data master) but I’m pretty sure markdown syntax is fine.
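The regex pass StellaAthena suggests can be sketched like this; the tag pattern and function name are my own naive choices, and a crude pattern like this is only fine for mostly-clean ebook HTML, not arbitrary pages.

```python
import re


def strip_html(text):
    # Drop tags like <p>, </div>, <br/>; deliberately not a full HTML parser.
    text = re.sub(r"<[^>]+>", "", text)
    # Collapse runs of spaces/tabs left behind by removed tags,
    # while leaving real newlines alone.
    text = re.sub(r"[ \t]+", " ", text)
    return text.strip()
```

Per the rule above, files where a pass like this can't get the markup out cleanly are better scrapped than included as junk data.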
CKtalon#7792: also has anyone scraped pages like fandom.com
CKtalon#7792: even though they are powered by wiki, there's no dump most of the time
bmk#1476: Huh? Why would you escape newlines??
Airatak#7842: Oh cool thx. I don't really have any markdown or HTML. Just wanted to make sure about one more thing, some books have 2 `\n` while others have only one for the new lines. I think I should fix this? Also, should I leave stuff like the table of contents, name of translators, other publisher details such as isbn in?
CKtalon#7792: so that you will have an OOM when reading it in
CKtalon#7792: 😛
bmk#1476: Please don't use literally the two characters`\n`
bmk#1476: Just use normal newlines
Airatak#7842: Yea that's what I thought
Airatak#7842: Weirdly some books even have 4 `\n` after each line
CKtalon#7792: it might be converted from epub with <p> being 2 \n
StellaAthena#3530: Some of the data has `\n`, which is why I had assumed that was desired?
bmk#1476: Which data?
bmk#1476: I do not remember any of our data having \n
bmk#1476: And if it does, that's undesirable
bmk#1476: It fucks up BPEs and makes life hard
StellaAthena#3530: Uhhh IDR. When we were looking through samples to make sure there wasn’t any fuckery I thought it was like that?
StellaAthena#3530: I could be confusing with something else
bmk#1476: There aren't any in the sample section of the appendix
bmk#1476: Unless this is a rare artefact
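The normalization bmk is asking for (real newlines instead of the literal two-character `\n`, and no runs of 3-4 blank lines from epub conversion) can be sketched as a small helper; the function name and the collapse-to-one-blank-line rule are my own assumptions.

```python
import re


def normalize_newlines(text):
    # Turn the literal two-character sequence backslash-n into a real newline,
    # so no escaped newlines survive into the corpus.
    text = text.replace("\\n", "\n")
    # Collapse runs of 3+ newlines (the "4 \n after each line" artifact,
    # or <p> converted to double blank lines) into one paragraph break.
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text
```

Keeping newlines as actual characters matters because, as noted above, literal `\n` sequences end up inside BPE tokens and pollute the vocabulary.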
Imperishable_NEET#1969: Seems whenever I talk about AI or AGI on the internet there are plenty of ML people ready to downplay language models and the steps toward AI we already have. I don't blame many experts for having conservative expectations of AI progress, because there have been many failed predictions and AI winters in the past.
Daj#7482: History has always had the old curmudgeons and the plucky stupid young scientists (i.e. us)
Imperishable_NEET#1969: Do you think even in some far off future the actual singularity of recursive self-improvement will be prone to the same hype cycle? https://cdn.discordapp.com/attachments/729741769738158194/783008501047230474/researchmethodology-illustration-hype-cycle.jpg
Daj#7482: "AGI can't paperclip us, because it doesn't have _understanding!_" - Someone about to be paperclipped
Daj#7482: lol
Daj#7482: I think humans are _amazing_ at taking something truly groundbreaking and just thinking it's extraordinarily boring and obvious
Daj#7482: Hindsight bias
Daj#7482: https://www.readthesequences.com/Hindsight-Devalues-Science
Imperishable_NEET#1969: *"It's just a Chinese Room that doesn't understand anything. We're no closer to AGI than when computers beat humans at chess for the first time."*
Imperishable_NEET#1969: Was discussing this on a certain 4chan board, in a waifubot thread perhaps inspired by a certain fanfic. :celestia: https://cdn.discordapp.com/attachments/729741769738158194/783010083473981510/Screenshot_20201130-114102.png
Daj#7482: :nooo:
Daj#7482: ^ That guy
Daj#7482: also why is there such an overlap between AGI, anime, furries and MLP
Daj#7482: Actually, I might have just answered my own question
Imperishable_NEET#1969: ~~Touhou Project, as well.~~ :kagulaugh:
Imperishable_NEET#1969: @gwern actually wrote a long blogpost on this, it's fascinating. https://www.gwern.net/MLP
Daj#7482: @gwern truly is the overlap of all the categories
Daj#7482: but this is actually cool, I've wanted someone to seriously look into MLP and wtf was up with that
Daj#7482: I was there in 2010, I remember when it all happened, it was weird
Daj#7482: > depicts an underappreciated plausibly-contemporary capitalist utopian perspective on self-actualization
Daj#7482: oh bby
Imperishable_NEET#1969: I was never a brony during the previous decade, when the show was still coming out and Bronycon was still being held, but I did go to anime conventions and got super deep into Touhou fandom and attended meetups for it.
gwern#1782: I guessed as much from the username 🙂
Imperishable_NEET#1969: I think I read *Friendship Is Optimal* before I actually watched the show.
gwern#1782: (and the whole kaguya/mokou thumbnail)
Daj#7482: I was probably one of the trolls responsible for /mlp/s creation lol
Daj#7482: I at first thought it was just a new form of shitposting
Imperishable_NEET#1969: I forget what my username used to be on LessWrong IRC circa 2016-7. Might've been *rm -f botnet*, or *RMF Botnet*
Imperishable_NEET#1969: Oh, yeah, now I remember: *BeyondTheBorg*
Daj#7482: > perhaps one should experiment with viewing MLP under the influence of psychedelics to see if it could teach basic social skills faster
Daj#7482: gwern essays are the best
Daj#7482: > perhaps the real magic of friendship was the serotonin receptors we made along the way
Imperishable_NEET#1969: I mean, it is true to a certain extent. Humans excelled because of their abstract thought, language abilities, social skills, and cooperation. We're wired to be social creatures to the point that it's a physiological need.
Bayesian Conspiracy podcast's episode on the topic of longevity last week brought up red wine studies that actually concluded it was likely that red wine drinkers tended to be more sociable and have healthier social lives to fulfill emotional needs, which physiologically manifested through less stress and more graceful aging. It may have been the *social ritual* rather than the wine itself causing the longevity benefits. https://link.springer.com/chapter/10.1007/978-94-007-6689-1_6
Imperishable_NEET#1969: In other words, *Friendship is Longevity*
Daj#7482: huh, @gwern 's comments on media optimized for fandom is basically how D&D media works
Daj#7482: It's all about baiting DMs into expanding the skeleton lore into their own thing
Daj#7482: Never conceived of fandom that way
Imperishable_NEET#1969: Japanese *Doujinshi* culture also works this way
Imperishable_NEET#1969: Character shooters, MOBAs, cinematic universes, and gacha games also seem tailored for fandom.
Imperishable_NEET#1969: One of the reasons I like Touhou so much is BECAUSE ZUN's copyright / IP enforcement is so lax, and I don't feel like I'm being constantly monetized like in gacha fandoms.
Imperishable_NEET#1969: That's probably why Touhou sticks around chugging along and Kancolle / FGO / Girls Frontline, etc. are more fleeting.
Imperishable_NEET#1969: https://youtu.be/_rYEJ4-MaWs
Imperishable_NEET#1969: If you wanna get even more political, look up BreadTuber Peter Coffin's concept of *"Cultivated Identity"* https://youtu.be/X9Lf1GcG5M4
gwern#1782: yeah, that's a striking difference between touhou and k/fgo/gf. when I look at the danbooru statistics by tags, the latter get an enormous amount of fanart... but it's all standalones or one-shots or variants on official art, while if you look at the touhou ones, they tend to be much more part of fanon and creating genuine stories and new manga/games/music. the music scene is like that too. kancolle got a modest music scene, but then it just seemed to die almost overnight, while touhou keeps on trucking
gwern#1782: the corporate properties elicit a giant heap of random artwork, but it never goes anywhere. they haven't figured out how to make it self-sustaining or build on itself, or how to make ascended fanon
gwern#1782: the contrast between them and touhou/vocaloid/mlp/d&d is stark
Daj#7482: I always had a gut reaction that _something_ about MLP and D&D was connected, I feel enlightened
Daj#7482: also I apparently shouldn't have stopped watching after S2
Daj#7482: Though we had Gravity Falls and Adventure Time then
Dromarion#3383: Is it related to the ease of a fan's ability to make original characters or something? I don't think touhou has that though.
Daj#7482: Gwern describes his ideas about "ascended fandom" here: https://www.gwern.net/MLP
Daj#7482: lol I think we've had that DM release a dozen times today now
Daj#7482: Lucid just wrote a sonnet about it in #research
gwern#1782: _adds an `insight-porn` tag since it's helping people make connections_
Dromarion#3383: Come to think of it, how soon do you all believe that AI is going to disrupt the entertainment industry in a major way? The reason I'm in the rabbit hole to begin with is AI Dungeon and it feels like a step towards being able to generate your own works in universes that conform to your interests.
Daj#7482: I'm in the "10% chance there are no more humans by 2030" camp
Daj#7482: So yea lol
cognomen#6297: I'd assume it would be a quiet revolution
cognomen#6297: that writers would be reluctant to admit using LMs
Daj#7482: I expect the ability to create Hollywood quality films with little to no training or talent within the next decade or two
Daj#7482: Using GANs and their successors, advanced TTS, LMs, etc
cognomen#6297: probably not but preproduction will get faster
Daj#7482: Just registering my predictions
Daj#7482: ***E X P O N E N T I A L S***
Daj#7482: Moore's Law is pretty magic (and/or Wright's Law)
Dromarion#3383: I'm in the fiction writing community and posing the question, some are open to it. I guess for a lot of us writing a good book is hard and we'd rather be idea guys while getting a computer to do the heavy lifting and prose for us.
gwern#1782: I used to think that AI would disrupt entertainment, but watching hollywood budgets and the failure of the long tail (the tail is longer, but more skewed, than ever before), I now believe that the net effect is going to be that the media scene will look a lot like it does today
cognomen#6297: if brain imaging gets better I could picture shots literally from the imagination of a director being used in lieu of a storyboard
cognomen#6297: but it wouldn't be pleasant to watch by itself
Daj#7482: What are your current AGI timelines, gwern? Once we have superhuman AGI I can't imagine entertainment being the same
gwern#1782: it's just an arms race to making ever more absurd SFX, in other words. entertainment is not about entertaining, it's about coordinating social relationships and providing fodder for politics and discussion, and the more globalized and social things become, the more being the top work matters
Daj#7482: ah yes this seems at least mostly true
Daj#7482: I'm kind of out of the loop since I only consume weird media (and Marvel movies)
gwern#1782: the real avenue to watch is *parasocial relationships*. the future is not, 'AI, make me a Marvel movie where Spiderman is actually a human-sized spider', but 'AIexia, my boss got mad at me today and it wasn't my fault :('
gwern#1782: ai dungeon coomers, projekt melody, vtubers, 15.ai, gpt-3 chatbots - think _Her_/_Bladerunner 2049_, not star trek's holodeck
gwern#1782: OnlyFans, Patreon, Cameo
Daj#7482: I guess I expect strong AGI/uploading pretty soon
Daj#7482: That makes the timeslice where all that is relevant pretty narrow
Daj#7482: Same with bio
Daj#7482: I expect we won't have glorious furry Biopunk future because we'll already be uploaded
bmk#1476: i'm skeptical of uploading
gwern#1782: whether it's the next marvel movie or the next netflix surprise drop, there's still going to be giant media franchises which everyone coordinates around. that's going to stay the same, AI will simply supercharge SFX even further and accelerate writing and R&D etc. where DL will really change things is in the creepy digitalization of status and social relationships and parasocial needs
gwern#1782: if you want to imagine the future, imagine a man fapping to Miku Tiktok DMing him an AR video of her stepping on his face - forever
Daj#7482: and/or just no more humans existing, only post-human AGI
bmk#1476: that's not a desirable outcome at all, oh no
Daj#7482: christ
Daj#7482: I did not have to read that today
Daj#7482: Pinned a message.
gwern#1782: ("'christ what an imagination I've got', shalmanzeer said")
gwern#1782: (_Stand on Zanzibar_ is still worth reading fwiw)
cfoster0#4356: ```I did not have to read that today```
*pins message so everyone has to read it*
bmk#1476: to be fair, our pins are already useless
cfoster0#4356: Haha yea
Daj#7482: but fwiw I actually _genuinely_ think gwern's foot fetish suggestion is one of the most likely AGI scenarios I have heard to date
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/783032532278444092/unknown.png
bmk#1476: hot take: this is in the top 5 percentile of desirable outcomes
Daj#7482: I love our pins
Daj#7482: We are agents of chaos, don't forget it
bmk#1476: the other 95 are just everyone dies a horrible death
Daj#7482: No, that's what I've been ranting about all the time
Daj#7482: lol
bmk#1476: oh lol
Daj#7482: but mUh CaTgIrLs
Daj#7482: But I guess it's a matter of degree
Daj#7482: I would put it in 70-percentile maybe
bmk#1476: like if in 20 years time the world still exists but seems perverted from the perspective of today, i would count that as a massive success
Daj#7482: No, this is a christian server
Dromarion#3383: It looks like the userbase of AI Dungeon is getting overtaken by those using it to make their own erotica. You know with the amount of lewd input and output I kind of wonder how many of the engineers working on this groundbreaking tech are just coomers :thonk:
bmk#1476: i think you're being too optimistic
Daj#7482: I mean, I consider "total human extinction" to be 50-percentile
Daj#7482: maybe you're right and I should be even more pessimistic lol
bmk#1476: i think if you restrict your attention to AGI-exists scenarios then extinction is more likely
bmk#1476: if you think about AGI-doesn't-exist scenarios then probably nothing of note happens
Daj#7482: well yeah
Daj#7482: That's true I guess
Daj#7482: maybe
Daj#7482: depends on what other future tech becomes possible
bmk#1476: and so the only variable is *when* it goes from 0 to 95 percentile extinction
Daj#7482: ~~grey goo~~
bmk#1476: AGI is a generalization of grey goo
andyljones#7746: q: in 1950, what odds d'you reckon you would have given all-out nuclear war in the next 50 years?
asparagui#6391: people too distracted by virtual pron --> stop reproducing
bmk#1476: it's the category theory of x/s risks
Daj#7482: Very high
andyljones#7746: i think i'd have gone with more-than-50%
Daj#7482: I'm still surprised it didn't happen
Daj#7482: but all-out-nuclear war is basically morally trivial compared to AGI lol
bmk#1476: this is perfect in conjunction with longevity tech, honestly
Daj#7482: (this is hyperbole)
andyljones#7746: i get your point (and agree), but i think it's a useful anchor
Daj#7482: Agreed
bmk#1476: metronome meme: "overpopulation" "underpopulation" "longevity opponents"
asparagui#6391: agenda 21 time
thenightocean#6100: Basically this is what happens in this story: https://zerohplovecraft.wordpress.com/2019/10/22/god-shaped-hole/
Daj#7482: I keep getting recommended that story
Daj#7482: Is it good?
thenightocean#6100: yes
Daj#7482: > The following contains sexual content of a graphic nature. But that’s what you’re hoping, isn’t it, you dirty slut?
Daj#7482: Why is the story threatening me
Daj#7482: Now I won't read it out of protest to protect my christian sanctity
bmk#1476: :yes:
gwern#1782: yeah, 0hpl's been on this beat for a while. it's obvious if you pay any attention
bmk#1476: (jk, why would i force myself through a long winding nrx post when there's already porn easily available)
Daj#7482: Question to the old time rationalists: Who is 0hpl?
Daj#7482: He keeps popping up on my timeline with various vaguely sexist stuff
thenightocean#6100: well he is roleplaying his hero both in the writing AND politics
bmk#1476: all i know is he's nrx but i don't know the details
gwern#1782: quasi-nrx horror writer who started up a few years ago. afaik he's not a new pseudonym of anyone but a young guy who's just vaguely LW-affiliated
gwern#1782: I assume he was a lurker before putting up his shingle as 0hpl
Daj#7482: > NRX horror
_sign me the fuck up_
Daj#7482: This sounds dope
Daj#7482: (I'm a huge horror fan)
Daj#7482: (more rat horror pls)
gwern#1782: well then, read god-shaped hole, it's the best of his I've read so far
gwern#1782: the minotaur one was also good
Daj#7482: nice
Daj#7482: oh fuck it's long
Daj#7482: Not very lovecraftian
asparagui#6391: build a language model to summarize it
Daj#7482: I also need to finish the gwern essay about ponies
Daj#7482: and my other 600 tabs
bmk#1476: 600? amateur
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/783036229628985344/unknown.png
Daj#7482: You say that literally every time I mention tabs
Vexera#2754: what.
asparagui#6391: gosh darn
Daj#7482: I miss Ritalin
gwern#1782: "how many orders of open tabs are you on" "uhhh like 2 or 3" "you are a like a tiny baby"
bmk#1476: impressive
Daj#7482: > This is all well and good, but where do the bronies come in? The bronies, I think, are an expression of a New Sincerity
@gwern Holy fucking shit, this is _exactly_ what I argued when I was 16
Daj#7482: I knew I wasn't crazy
Daj#7482: Or you're the same kind of crazy as me
asparagui#6391: the latter satisfies occam's razor
Daj#7482: > Why Bronies are superior to the Neoreaction
This essay is a gold mine of :bigbrain: takes
Daj#7482: Beautiful essay @gwern , for a small moment, I felt like 16 again
gwern#1782: and yet, some people don't get it. you know people on SneerClub were seriously asking if the evangelion allusions were intentional? :picard:
Daj#7482: I don't know, maybe you had to be there
Daj#7482: I never watched evangelion but fuck you make me want to watch an anime now
bmk#1476: fwiw i have absolutely no idea what you're talking about
gwern#1782: it's always easy to laugh at sincerity from the outside. that's why it must be reinvented anew each time
Daj#7482: Yes!
Daj#7482: Hit it on the head
Daj#7482: The sincerity _was the entire point_
gwern#1782: to slip past the memetic defenses of snobbery, disappointment, cynicism, and exhaustion
Daj#7482: Yea man, I haven't thought about any of that for, what, almost 10 years now
Daj#7482: wild
Daj#7482: First time I heard anyone else articulate what I was thinking in my teenage proto-brain, great stuff
thenightocean#6100: Kinda weird, I grew up in a culture that's very cynical by default, and I was yearning to get in touch with a western culture of sincerity and optimism. But now it turns out the eastern european cynicism and small-minded snobbery successfully infected the west too 😦
Daj#7482: Maybe that is why we're in AGI after all
Dromarion#3383: Another avenue for disruption in entertainment I thought was VR, but I'm not really buying the proponents saying "Bro it's gonna replace movies, why watch a movie when you can *be* the movie". But that's like video games: while gaming is bigger than Hollywood, it hasn't completely replaced film any more than film replaced books. Maybe it's just because we're experiencing the scuffed version, like mobile phones 20 years ago
gwern#1782: @thenightocean I blame the trend breaks at ~1970 for destroying western optimism and fostering negative-sum egalitarianism identity-politics/resentment-based dynamics
Daj#7482: Modernism and utopian thinking gets such a bad rap
Daj#7482: And I know why
Daj#7482: But also
Daj#7482: ugh
gwern#1782: (you notice the chinese, who have been enjoying their catchup exponential growth, seem a lot more optimistic)
Daj#7482: If we're not building a utopia what's the point?
Daj#7482: btw, did you ever write anything on technological/economic stagnation? It's been one of my secondary obsessions and seems like something you might have been interested in
Daj#7482: (or if not, could _greatly_ benefit from your meticulous analytic style)
thenightocean#6100: Yup. This book gives a good summary of this development: https://www.amazon.com/Where-My-Flying-Car-Memoir-ebook/dp/B07F6SD34R/ref=sr_1_1?crid=31N0EHTQ2SSI3&dchild=1&keywords=where+is+my+flying+car&qid=1606762820&sprefix=where+is+my+flyin%2Caps%2C381&sr=8-1
Daj#7482: Hah I am just finishing that exact book |
thenightocean#6100: me too 😄
Daj#7482: I'm not sure if I buy his causality or not
Daj#7482: I have like a bucket list of 5-7 explanations for the stagnation
Daj#7482: Just added one today lol
Emad#9608: You should read this on technological stagnation https://danwang.co/how-technology-grows/
Daj#7482: great, even more data points to add to my essay-i-will-never-publish
Daj#7482: Thanks!
thenightocean#6100: but lets look on the bright side. Stagnation might be over.
Daj#7482: ~~I would _pay_ gwern to write a comprehensive analysis of all the varied stagnation claims~~
thenightocean#6100: The stuff in AI, RNA vaccines, Boom supersonic, SpaceX, Tesla, Cryptos(19 k ATM), bunch of crazy startups lately.
thenightocean#6100: maybe the nerds will finally beat jocks and freaks and regain their position of power they held until 1970s. :ultrazucc:
Daj#7482: Yea, one of the hypotheses I am most partial to is that the stagnation is because around the 60s we stopped growing our population in tandem with the economy
Daj#7482: So we couldn't turn dollars into scientists anymore
Daj#7482: That may change soon
Daj#7482: imo some pretty strong circumstantial evidence for the singularity
thenightocean#6100: That's that SSC article?
Dromarion#3383: I always thought self driving would probably be a prerequisite to cars flying. Like imagine the amount of accidents today and add in a Z axis
Daj#7482: Yea, https://slatestarcodex.com/2019/04/22/1960-the-year-the-singularity-was-cancelled/
Daj#7482: I evaluated a bunch of hypotheses for an essay once and found this one the most plausible
thenightocean#6100: super interesting. Though I am not sure that large population directly causes Sci-Progress in a simple causal way. You need other prerequisites |
Daj#7482: sure but it's the least-weird hypothesis I could find
Daj#7482: which is subjective
bmk#1476: By the way, is there an actual good explanation for "what happened in 1970"
Daj#7482: No that's what I'm saying
Daj#7482: There are a bunch of hypotheses with varying degrees of iffiness
bmk#1476: Oh didn't see
Daj#7482: I tried writing a gwern-style essay about it
Daj#7482: But kinda failed
gwern#1782: I did a bunch of reading and some writing about it many years back, when cowen's _great stagnation_ had repopularized it and genetics/DL were still only green buds, but concluded that it was too hard and large a topic for me to hope to do a decent essay on
gwern#1782: and with genetics/DL, stagnation is now a *choice* rather than any kind of technological or scientific inevitability, so it has less interest than then
gwern#1782: have you seen the reviews of _dude where's my flying car_? it seems to make an interesting case that the break is largely due to abdication and regulation, particularly in halting the trends of energy consumption by nuclear phobia, which feeds into the population bust
Daj#7482: Interesting, so you see it as obvious that the stagnation is about to end?
cognomen#6297: also the idea of filling the skies with millions of potential missiles doesn't seem likely post-9/11
Daj#7482: For the record, I also think the stagnation is about to end. Both because we can Soon efficiently turn dollars into researchers and because with solar getting cheaper we might undo the mistake of having skipped the nuclear techtree
thenightocean#6100: We can still do nuclear techtree if we want btw.
Daj#7482: No book has destroyed my hope for nuclear more than that one lol
thenightocean#6100: As long as people become more rational about understanding nuclear risks (or lack thereof).. not a bet I would be willing to take
Daj#7482: Again, all my predictions are conditioned on "AGI really soon"
thenightocean#6100: ah that. Yes I agree.
Sid#2121: "As long as people become more rational" gonna stop you right there buddy |
Dromarion#3383: Another perspective is that the world is big enough that there's a lot of scientific circles that don't actually care about risks and do it anyway
gwern#1782: there are other techtrees. of course, the jedi wouldn't tell you about them.
gwern#1782: either way, the current stagnation is largely an accident of history and clearly only a lull and short delay in the grand scheme of things. whether it's in 2030 or 2050, does it matter? do you recall any delays in printing press distribution between 1520 and 1560 AD?
Daj#7482: Yep, fully agree
Daj#7482: Though I just read the SSC post on picketty which seemed at least somewhat interesting
Dromarion#3383: Well to what extent does moral resistance really impede things anyway? At this point, for every society that thinks AGI is scary and researchers should crawl forward on eggshells, there's likely another that has massive government-funded departments dedicated to making nuclear powered waifu flying cars that spy on you.
Daj#7482: I don't see any of the eggshell crawling anywhere (in AI) tbh
andyljones#7746: what's your theory as to why there haven't been more mad experiments in nuclear power and human genetic engineering?
mgostIH#0245: Which one?
Daj#7482: "Where's my flying car?"
bmk#1476: i'm reading the brony nrx post and this paragraph is just.. this is such a weird take that i don't know what to say
> Bronies, particularly the ones with a conservative or libertarian bent, sometimes are, if not Christian, sympathetic to Christianity. Bronies see the strong, loving, absolute monarch that Celestia is to her ponyfolk and have their eyes opened to the True Divine, Our Lord.
>
> The neoreaction doesn’t appear to offer that opportunity. If anything, its attachment to the bio-determinism known as HBD (short for “human biodiversity”, but in practice just means “blacks are stupid but run fast, Asians are uncreative grinds and whites who comment on HBD blogs are the perfect mix of clever and creative”) leads people away from God and towards a materialistic, instrumentalist view of the world and the people within it.
bmk#1476: "the worst thing about hbd is it leads people away from god"
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/783103019633344562/unknown.png
Ken#8338: Any wild timeline predictions?
Aran Komatsuzaki#5714: maybe around 2030 |
Ken#8338: @Aran Komatsuzaki I am guessing most people outside of a group like this are guessing a far later timeline. But I would think the EleutherAI group might have a better perspective than the general public or other specialized segments.
Aran Komatsuzaki#5714: i'm saying this cuz it won't take much time from now until a model can do research on its own.
Aran Komatsuzaki#5714: my guess is that AGI will be achieved around the time when a model can do ML research
Ken#8338: I have similar thoughts regarding when models can do their own research.
Aran Komatsuzaki#5714: but i guess it doesn't matter to us if it'll happen in 2030 or 2040. 10 years aren't that huge difference for us youngsters.
cfoster0#4356: ~~What does a model doing ML research look like to you?~~
cfoster0#4356: nvm I'm quibbling
Airatak#7842: Is it only me or is watching data being scraped kinda fun?
Airatak#7842: I've been up all night just looking at my scraper
StellaAthena#3530: I would say "get a life" but I've spent most of the past 5 hours watching youtube videos and engaging in Pokemon battles
Deleted User#0000: i guess i'm more of in a hurry 😢
Deleted User#0000: darn you
Aran Komatsuzaki#5714: tbf you're young enough too lol
Daj#7482: The closest published timelines to my own are the "aggressive" predictions from the biological anchors report
Timelines: https://docs.google.com/spreadsheets/d/16WlWJAmUe32oyQfiI9di86BXzX1EI0eWZ0fOakSA_f0/edit#gid=505210495
Report: https://docs.google.com/document/d/1IJ6Sr-gPeXdSJugFulwIpvavc0atjHGM82QjIfUSBGQ/edit# (excellent work, really worth the read)
Daj#7482: But yeah, whether it's +/- a few decades really doesn't matter much in the grand scheme of things
Ken#8338: Thanks @Daj . I agree that the linked report is really good. I remember devouring it when it came out.
CKtalon#7792: @bmk can you hook me up with a epub to txt script?
bmk#1476: https://github.com/shawwn/scrap/blob/master/epub2txt-all |
bmk#1476: This one is very good
CKtalon#7792: thanks!
CKtalon#7792: and by any chance, do you know of any script or machine-learning powered way of splitting a big text file into sections (like chapters)
CKtalon#7792: language agnostic perhaps
bmk#1476: I don't think so, sorry
CKtalon#7792: no problem
DewOnTheGrass#6143: that sounds really tough
DewOnTheGrass#6143: you'd need something that can roughly determine a change of setting or moment of dramatic tension based on contextual phrases
DewOnTheGrass#6143: idk if that's even possible
CKtalon#7792: well, i'm thinking more of just how a chapter starts with a chapter title (with perhaps a number)
CKtalon#7792: and perhaps some extra newlines in between
CKtalon#7792: it's easy to split per file, but doing for a few hundred files is a pain :p
StellaAthena#3530: @DewOnTheGrass if the chapters are marked with a known sequence of symbols that's just RegEx
StellaAthena#3530: If we can assume that every chapter begins “Chapter N” for some integer N just search for that pattern
CKtalon#7792: the problem is different books have different sequences of symbols. too many possibilities 😛
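(Editorial aside: a minimal sketch of the "split on `Chapter N` headings" idea discussed above. The pattern is an assumption — real books use many heading styles, so this only covers the simple "Chapter <number>" case Stella describes, not the general problem CKtalon raises.)

```python
import re

def split_chapters(text):
    # Split at the start of any line that looks like "Chapter 3",
    # using a zero-width lookahead so the headings are kept with
    # the chapter text that follows them.
    parts = re.split(r'(?m)^(?=Chapter\s+\d+)', text)
    return [p for p in parts if p.strip()]

book = "Chapter 1\nfoo bar\n\nChapter 2\nbaz"
chapters = split_chapters(book)
# chapters[0] starts with "Chapter 1", chapters[1] with "Chapter 2"
```

(Splitting on a zero-width match requires Python 3.7+; for a few hundred files with different conventions you'd need a small library of patterns rather than one regex.)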
StellaAthena#3530: Potentially of interest to people here: https://twitter.com/random_walker/status/1333744337881018369?s=19
Sid#2121: can i assign values to a tensor using einsum?
Sid#2121: i have a 2d tensor that i want to assign to a row of a 3d tensor
Sid#2121: so x = [1,2,3,4,5] , y = [[0,0,0,0,0], [0,0,0,0,0]] --> <some magical einsumming with x and y> --> y = [[1,2,3,4,5], [0,0,0,0,0]]
bmk#1476: are you able to set the value of a 1d tensor |
Sid#2121: yes
bmk#1476: ok so this is what you can do:
bmk#1476: wait, it has to be done in one einsum? hmm
bmk#1476: i can get you a thing that you need to add to y
Sid#2121: i mean, i just need to do it in mtf in a while loop
Sid#2121: so shapes need to stay constant
bmk#1476: ok i can do that
Sid#2121: some sort of masking should work i guess
bmk#1476: pseudocode
``` x = [1,2,3,4,5] , y = [[0,0,0,0,0], [0,0,0,0,0]],
y :: [count, dim_each]
z = [1,0]
z :: [count]
x :: [dim_each]
insert singleton dimension at the end of z so its shape is now [count, 1]
insert singleton dimension at the front of x so its shape is now [1, dim_each]
w = einsum(z, x, [count, dim_each])
y += w``` |
bmk#1476: so you set z to `[1,0,0,0,0,...]` to set the first one, `[0,1,0,0,0,...]` to set the second, etc
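(Editorial aside: bmk's pseudocode above, written out as a runnable NumPy sketch. Variable names are illustrative; the actual mtf code follows later in the thread.)

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])   # [dim_each] -- the row to insert
y = np.zeros((2, 5))            # [count, dim_each] -- the target
z = np.array([1, 0])            # one-hot over [count], selects row 0

# Outer product: nonzero only in the row selected by z.
w = np.einsum('c,d->cd', z, x)  # shape [count, dim_each]
y += w
# y is now [[1, 2, 3, 4, 5], [0, 0, 0, 0, 0]]
```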
Sid#2121: nice! that worked, thanks
bmk#1476: awesome
wilbown#7317: Hi all 👋😊
Sid#2121: wait, no it didn't lol @bmk
Sid#2121: hey @wilbown
bmk#1476: oh no what's the problem
Sid#2121: probably me just being tired and missing something but
Sid#2121: ```def assign_to(mesh, mesh_tensor2d, mesh_tensor1d, row_idx):
singleton_dim = mtf.Dimension("singleton", 1)
n = mesh_tensor2d.shape.dims[0].size
mesh_tensor1d = expand_tile(mesh_tensor1d, singleton_dim, axis=0)
range_dim = mtf.Dimension("range", n)
indices = [0] * n
indices[row_idx] = 1
z = tf.convert_to_tensor(indices, mesh_tensor1d.dtype)
z = mtf.import_tf_tensor(mesh, z, mtf.Shape([range_dim]))
z = expand_tile(z, singleton_dim, axis=1)
mesh_tensor2d += mtf.einsum([z, mesh_tensor1d], output_shape = mtf.Shape([z.shape.dims[0], x.shape.dims[0]]))
return mesh_tensor2d``` |
Sid#2121: is the function
Sid#2121: and it returns a tensor of shape=(2, 5, 2, 5)
Sid#2121: like this https://cdn.discordapp.com/attachments/729741769738158194/783457997782646804/Screenshot_from_2020-12-01_23-22-03.png
Sid#2121: oh wait i think i know what the problem is, my bad
Sid#2121: aand fixing it has made it worse, okay going to bed now 😅
bmk#1476: @Sid you need two separate singleton dims
Sid#2121: @bmk not the problem
Sid#2121: ```def assign_to(mesh, mesh_tensor2d, mesh_tensor1d, row_idx):
singleton_dim = mtf.Dimension("singleton", 1)
singleton_dim2 = mtf.Dimension("singleton2", 1)
n = mesh_tensor2d.shape.dims[0].size
mesh_tensor1d = expand_tile(mesh_tensor1d, singleton_dim, axis=0)
range_dim = mtf.Dimension("range", n)
indices = [0] * n
indices[row_idx] = 1
z = tf.convert_to_tensor(indices, mesh_tensor1d.dtype)
z = mtf.import_tf_tensor(mesh, z, mtf.Shape([range_dim]))
z = expand_tile(z, singleton_dim2, axis=1)
print(z.shape.dims[0], mesh_tensor1d.shape[1])
mesh_tensor2d += mtf.einsum([z, mesh_tensor1d], output_shape=mtf.Shape([z.shape.dims[0], mesh_tensor1d.shape.dims[1]])) |
return mesh_tensor2d```
Sid#2121: ```x = mtf.range(mesh, range_dim, tf.float32)
y = mtf.zeros(mesh, mtf.Shape([mtf.Dimension("pos", 2), mtf.Dimension("pos2", 5)]), x.dtype)
w = assign_to(mesh, y, x, 1)```
Sid#2121: ```tf.Tensor(
[[[[0. 0. 0. 0. 0.]
[0. 1. 2. 3. 4.]]
[[0. 0. 0. 0. 0.]
[0. 1. 2. 3. 4.]]
[[0. 0. 0. 0. 0.]
[0. 1. 2. 3. 4.]]
[[0. 0. 0. 0. 0.]
[0. 1. 2. 3. 4.]]
[[0. 0. 0. 0. 0.]
[0. 1. 2. 3. 4.]]]
|
[[[0. 0. 0. 0. 0.]
[0. 1. 2. 3. 4.]]
[[0. 0. 0. 0. 0.]
[0. 1. 2. 3. 4.]]
[[0. 0. 0. 0. 0.]
[0. 1. 2. 3. 4.]]
[[0. 0. 0. 0. 0.]
[0. 1. 2. 3. 4.]]
[[0. 0. 0. 0. 0.]
[0. 1. 2. 3. 4.]]]], shape=(2, 5, 2, 5), dtype=float32)```
bmk#1476: what is the shape of `mesh_tensor2d` immediately before the einsum
Sid#2121: 2,5
Sid#2121: actually just returning the result of the einsum works haha
bmk#1476: no it doesnt
bmk#1476: wait |
Sid#2121: i am literally looking at it working, with my eyes
bmk#1476: no
bmk#1476: wait
bmk#1476: what is the shape of the einsum result
Sid#2121: also 2,5
Sid#2121: so i have no idea what's going on with the addition operator
bmk#1476: it's broadcasting when it's not supposed to be
bmk#1476: here's why it's not working: try an input with anything that's not all zeros
Sid#2121: yeah i know, but it doesn't matter for my use case
bmk#1476: wait what
bmk#1476: this doesn't do what you need
bmk#1476: if you set it twice the second one overwrites the first
Sid#2121: ah, true
Sid#2121: then why tf is this broadcasting happening
bmk#1476: that's what im figuring out rn
bmk#1476: aha
bmk#1476: i know why
bmk#1476: you need to rename the dimensions to have the same names
bmk#1476: wait
bmk#1476: wat |
Sid#2121: lool. fuck mtf
bmk#1476: like you need to make sure the 2s have the same name and the 5s have the same name
bmk#1476: but i cant tell why that isn't already the case from your code
Sid#2121: nice! that's done it
bmk#1476: oh lol
Sid#2121: why does the addition operator work like that in mtf
Sid#2121: that's absolutely fucked
bmk#1476: because named dimensions
bmk#1476: this one actually makes sense tbh
bmk#1476: if you didn't do this with named dimensions everything would be fucked
Sid#2121: now i just need to do this for 2d -> 3d.
bmk#1476: Oh same thing
bmk#1476: In fact
bmk#1476: Post your latest version code and I'll modify it to be arbitrarily dimensional
Sid#2121: ```def assign_to(mesh, mesh_tensor2d, mesh_tensor1d, row_idx):
singleton_dim = mtf.Dimension("singleton", 1)
singleton_dim2 = mtf.Dimension("singleton2", 1)
n = mesh_tensor2d.shape.dims[0].size
mesh_tensor1d = expand_tile(mesh_tensor1d, singleton_dim, axis=0)
range_dim = mtf.Dimension("range", n) |
indices = [0] * n
indices[row_idx] = 1
z = tf.convert_to_tensor(indices, mesh_tensor1d.dtype)
z = mtf.import_tf_tensor(mesh, z, mtf.Shape([range_dim]))
z = expand_tile(z, singleton_dim2, axis=1)
e = mtf.einsum([z, mesh_tensor1d], output_shape=mtf.Shape([z.shape.dims[0], mesh_tensor1d.shape.dims[1]]))
e = mtf.reshape(e, mesh_tensor2d.shape)
mesh_tensor2d += e
return mesh_tensor2d```
bmk#1476: ```def assign_to(mesh, to_tensor, from_tensor, row_idx):
singleton_dim = mtf.Dimension("singleton", 1)
singleton_dim2 = mtf.Dimension("singleton2", 1)
n = to_tensor.shape.dims[0].size
from_tensor = expand_tile(from_tensor, singleton_dim, axis=0)
range_dim = mtf.Dimension("range", n)
indices = [0] * n
indices[row_idx] = 1
z = tf.convert_to_tensor(indices, from_tensor.dtype)
z = mtf.import_tf_tensor(mesh, z, mtf.Shape([range_dim]))
z = expand_tile(z, singleton_dim2, axis=1) |
e = mtf.einsum([z, from_tensor], output_shape=mtf.Shape([*z.shape.dims[:-1], *from_tensor.shape.dims[1:]]))
e = mtf.reshape(e, to_tensor.shape)
to_tensor += e
return to_tensor```
bmk#1476: actually wait i may have broken something
bmk#1476: one moment
bmk#1476: actually yeah it *should* work
bmk#1476: i could probably make one that allows you to assign any arbitrary tensor anywhere (for any arbitrary slice, possibly across multiple dims) inside any other bigger arbitrary tensor but it would be overkill and i'm too lazy lol
Sid#2121: i need to learn the ways of the einsum
bmk#1476: Basically every position in the output is the sum of pointwise products over all the dimensions that disappear
bmk#1476: So in this case you're summing over the two singleton dims
bmk#1476: So there's only one thing
bmk#1476: And that thing is a product of either a 1 or a 0 with the value you want to keep/toss
Airatak#7842: So the corpus of Japanese, Chinese and Korean Web/Light Novels (+Translations) I was working on has now reached 100GB+ of raw text. I still have a ton of more data to get but my laptop is running low on space. Is there a central repo where I can push whatever I have so far?
bmk#1476: uh, i assume your data so far is compressed?
bmk#1476: i can give you ssh access to one of our servers to rsync data to
Airatak#7842: Nope, it is not. I think the size should decrease once I zip it.
bmk#1476: yeah definitely
bmk#1476: make a big tar.gz with the files you have so far
bmk#1476: and rsync it over |
bmk#1476: post your ssh pubkey and i'll add you
Airatak#7842: cool, will do
StellaAthena#3530: If anyone here has non-zero web design skills we would *love* your help.
Mischa#0599: nonzero is a low bar. I have like, design *proclivities*. I wouldn't call them skills, and I'm not a webdev. What are you working on?
StellaAthena#3530: Currently most of the icons on our website are clip art. If you could, e.g., create a logo for some of our projects that would be immensely helpful.
bmk#1476: yes we need help with logos
bmk#1476: i'm still looking for submissions for pile logo
bmk#1476: i'm thinking basing something on a gaussian pdf would look cool
bmk#1476: it looks like a pile and the added statistical connection is nice
Mischa#0599: I've done basic branding for my own podunk projects. One was original and the others I modified from AI-generated logos lol.
StellaAthena#3530: I don’t mind if they’re AI generated
StellaAthena#3530: We, uh, suck at graphic design
bmk#1476: our logo font is ai generated lol
Mischa#0599: iracing series https://cdn.discordapp.com/attachments/729741769738158194/783561643447418910/image0.jpg
StellaAthena#3530: How about something that has a “machine learning” feel
bmk#1476: again
bmk#1476: gaussian pdf
bmk#1476: i think that would look really cool
Mischa#0599: This is my generatively designed racing hardware startup logo https://cdn.discordapp.com/attachments/729741769738158194/783562160919674920/image0.jpg
Mischa#0599: its meh |
Mischa#0599: maximum effort and all
bmk#1476: imagine taking this, removing the axes, making it slightly narrower, and filling it in with black and adding a slight gradient or some pattern or something https://cdn.discordapp.com/attachments/729741769738158194/783562309603688478/unknown.png
StellaAthena#3530: If you make it narrower it’s not a Gaussian
StellaAthena#3530: Rule 1: Gaussians are always wider than you think, unless d > 3 in which case they’re exceptionally narrow
Mischa#0599: I like the concept though
bmk#1476: this is way too pedantic let's not go there
bmk#1476: also by make it narrower i mean just make sigma lower
bmk#1476: that's still a normal distribution, no?
bmk#1476: and i mean if you just squish it all you need to do is just scale it up by a normalization constant
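(Editorial aside: bmk's claim checks out numerically — shrinking sigma still gives a valid normal distribution; the curve just trades width for height while the area stays 1. A small sketch with illustrative values:)

```python
import numpy as np

def gaussian(x, sigma):
    # Standard normal pdf with mean 0 and the given sigma.
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-6, 6, 4001)
dx = x[1] - x[0]
wide = gaussian(x, 1.0)
narrow = gaussian(x, 0.5)

# Both integrate to ~1, so both are legitimate distributions;
# the narrow one is simply taller.
area_wide = wide.sum() * dx
area_narrow = narrow.sum() * dx
```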
Mischa#0599: #flattenthecurve
Mischa#0599: is that icon going to have text underneath?
bmk#1476: and since this is a damn logo technically any scaling is a valid distribution, no?
bmk#1476: probably not
bmk#1476: the text goes separately
bmk#1476: the logo should be about 1:1
Mischa#0599: roger
bmk#1476: or i mean what do you think
bmk#1476: is this a bad idea
Mischa#0599: depends on the use case. Logos, as much as I have always wanted them to be a certain thing, are ultimately just there to communicate what you have as a product or whatever.
Mischa#0599: or your company dna, you get the idea |
bmk#1476: yeah
Mischa#0599: whatever for does that, great.
Mischa#0599: form*
Mischa#0599: What about layered distributions with decrementing sigma values that correspond to a gradient or opacity value?
bmk#1476: ooh, that sounds cool
bmk#1476: idk if it would look nice though
Mischa#0599: tallest is most opaque, shortest is solid etc
bmk#1476: i was thinking the other way around
Mischa#0599: try both
bmk#1476: just keep in mind that the idea is it's supposed to represent a pile of documents
bmk#1476: hence the name pile
Mischa#0599: yeah, I guess knowing what it's representing would help
Mischa#0599: where does the distribution come in?
bmk#1476: it's in the shape of a pile
Mischa#0599: opened PS and old file gave me an idea, just going to throw it out there
bmk#1476: ooh
bmk#1476: what is it
Mischa#0599: le Ferrari but what if... https://cdn.discordapp.com/attachments/729741769738158194/783564621607993354/unknown.png
Mischa#0599: https://cdn.discordapp.com/attachments/729741769738158194/783564665627344916/unknown.png
Mischa#0599: you took that and mirrored it to make a geomertric dist |
Mischa#0599: words
Mischa#0599: geometric
bmk#1476: i don't think i have a mental image but if you think it would look cool then go for it i guess
Mischa#0599: it might look like hot garbage lets see
Mischa#0599: obviously not 1:1, ignore colors, and the middle is empty but that's the idea I meant https://cdn.discordapp.com/attachments/729741769738158194/783566104031264808/unknown.png
Mischa#0599: the middle black part could be a third tone that fills it in
Mischa#0599: idk
bmk#1476: hmm, i'm not sure
Mischa#0599: what was the gradient idea?
bmk#1476: i saw these really nice curves in some blog post https://cdn.discordapp.com/attachments/729741769738158194/783566659540615198/unknown.png
bmk#1476: maybe they could be good inspiration
bmk#1476: i'm actually starting to think that a gradient is probably not a good idea
bmk#1476: but anyways this is kind of a bit of inspiration if you think it's useful
Mischa#0599: well played berkeley
Mischa#0599: https://cdn.discordapp.com/attachments/729741769738158194/783568361573449748/c2c3b6bb-3fa7-42a0-b226-fc8dd66c6c58_rw_1920.png
bmk#1476: ooo
Mischa#0599: *chef's kiss*
bmk#1476: ~~why didnt we think of that~~
Mischa#0599: oh this is what I meant by corresponding values https://cdn.discordapp.com/attachments/729741769738158194/783568611557900328/standard-bell-curve-gaussian-ppt-300x250.png
Mischa#0599: the darkness |
bmk#1476: hmm yeah i think it would look weird
Mischa#0599: in steps
Mischa#0599: same idea LITE https://cdn.discordapp.com/attachments/729741769738158194/783568735633539132/6072-05-concept-curves-bell-2.png
bmk#1476: there's one other avenue we can try
bmk#1476: https://images-ext-2.discordapp.net/external/RGgupLaTZbLnzOrZF6IHceg-3BwzpW_NYd3Krd634i4/https/media.discordapp.net/attachments/730090075051786322/776910197813018685/Pile_Logo.png?width=557&height=450
Mischa#0599: gradient quantized lol https://cdn.discordapp.com/attachments/729741769738158194/783568937626370058/normal-distribution-chart-or-gaussian-bell-curve-vector-10686601.png
bmk#1476: so we have this logo, although i'm not a fan of quite a few aspects of it
Mischa#0599: Looks art deco, I kind of dig it
bmk#1476: maybe something based on it would be nice
bmk#1476: the boxes are in particular based on the sizes of our datasets
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/783569132598591498/unknown.png
bmk#1476: the weight column should be the relative size
bmk#1476: i think this idea has potential but needs better execution
Mischa#0599: apply that idea here https://cdn.discordapp.com/attachments/729741769738158194/783569294314569776/3ff2e2c7-8ec5-45b9-a441-4db9f1cacde4_rw_1920.png
Mischa#0599: instead of uniform circles, have weighted ones
bmk#1476: ooh!
Mischa#0599: no bear
bmk#1476: that would be nice
Mischa#0599: do something unique instead
bmk#1476: or just squares |
bmk#1476: or rectangles
Mischa#0599: yeah, docs
bmk#1476: i like this idea
Mischa#0599: something something steve jobs quotes about art
StellaAthena#3530: For GPT-Neo, what about something along these lines but a brain-book hybrid instead of a brain-gear hybrid https://cdn.discordapp.com/attachments/729741769738158194/783569800126791691/Capture.PNG
bmk#1476: ee seems kinda cliche imo
bmk#1476: and not entirely fitting
StellaAthena#3530: What’s the point of the bear, btw
bmk#1476: it's cali
Mischa#0599: berkeley stole our dots idea
Mischa#0599: I just realized I have an unhealthy obsession with gradient steps https://cdn.discordapp.com/attachments/729741769738158194/783570364587573288/unnamed-chunk-7-1.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/783570504883372092/IMG_20201201_224915.jpg
bmk#1476: this is kind of what i'm thinking
StellaAthena#3530: I like that
bmk#1476: maybe with a normal curve line on top to emphasize it
StellaAthena#3530: I meant to suggest that and then got distracted with fancy generating tools that are pretty useless
bmk#1476: and ofc the sizes should be roughly proportional to the actual set sizes
Mischa#0599: you could probably "draw" it with physics in wolfram
Mischa#0599: this definitely isn't a google AI ripoff. Not at all. Nope. https://cdn.discordapp.com/attachments/729741769738158194/783571543783243846/solutions-broadminded-talents-concepts-artificial-260nw-1316485832.png
bmk#1476: haha |
Mischa#0599: these guys did the same concept as what you just sketched up but in a location marker instead of dist https://cdn.discordapp.com/attachments/729741769738158194/783571860045692974/unknown.png
Mischa#0599: nifty
StellaAthena#3530: I like that color palette too...
Mischa#0599: I just like gradients. Too much.
StellaAthena#3530: No such thing
Mischa#0599: So wait GPT-neo needs a logo?...
StellaAthena#3530: Yes
Mischa#0599: :picaface:
StellaAthena#3530: Like I said: it’s all clip art
StellaAthena#3530: (Minus the EleutherAI logo and the proposed Pile logo @bmk shared)
StellaAthena#3530: Literally everything else on the website is clip art
Mischa#0599: I'm willing to donate for like, an actual designer for neo.
Mischa#0599: I am not a designer.
bmk#1476: Pile and gptneo are our only substantial projects atm
bmk#1476: The other things are all in very early stages
Mischa#0599: Do you want all of your products to have a cohesive design feel or are they going to be really independent of one another?
Mischa#0599: the other smaller ones too I mean
bmk#1476: I'm personally fine with a more eclectic mix
bmk#1476: It would certainly fit with how Eleuther is loose
bmk#1476: Our projects all have a high degree of autonomy |
StellaAthena#3530: I think vaguely cohesive logos would be nice
StellaAthena#3530: I mean, they shouldn’t be jarring next to each other
Mischa#0599: I love something design philosophically about both the Eleuther logo and the sketch you posted. I love the constituent parts not being the same thing but coming together to make a larger whole.
bmk#1476: I think vaguely cohesive would be nice but between cool individual logo and meh cohesive logo I'd go for the former without a doubt
Mischa#0599: It's not too hard to strike a balance. The "glue" can be very subtle and classy while retaining a lot of character for each
bmk#1476: What do you think would work
bmk#1476: As the glue
Mischa#0599: anything. color palettes, stylistic choices like smooth logos or geometric logos or even something like.... hold on... lemme get it
Mischa#0599: youll have to click to enlarge but the server icons for the Discord Science Network servers are all totally different but have the same art style, greyscale 3d red/blue grunge https://cdn.discordapp.com/attachments/729741769738158194/783574604824182824/unknown.png
Mischa#0599: I'm curious to see what you guys do for a website because at some point later this year I have to make one too.
Mischa#0599: 🤦♂️
StellaAthena#3530: The one I made in 30 minutes is up at www.eleuther.ai
bmk#1476: I for one like sharp angles, right angles, sleek curves, am neutral on circles, and strongly dislike organic looking curves
bmk#1476: By organic i mean like weird squiggly lines and stuff
StellaAthena#3530: Needs updating tho
bmk#1476: Not like literally plants
bmk#1476: I'm conflicted on symmetry
Mischa#0599: that's a pretty good 30 minute website
StellaAthena#3530: Yeah it’s the images that need work
bmk#1476: Generally symmetric is good but there are a lot of cases where breaking symmetry looks simply amazing |
StellaAthena#3530: Also the content
Mischa#0599: https://cdn.discordapp.com/attachments/729741769738158194/783576291214360616/unknown.png
Mischa#0599: okay that's neat
StellaAthena#3530: Big picture the website does what we want. It would be nice to upgrade pieces though
bmk#1476: Is it an aesthetic hot take to like modern style houses?
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/783576714642456576/9e978052c3da56cec171517e65028965.jpg
StellaAthena#3530: No
StellaAthena#3530: That’s an ice cold take
Mischa#0599: so a custom walmart tier website is $10k, a midrange one is 20-50, and a "pro" website is easy 6 figures. OR you can find a beautiful website you like the overall look/feel/structure of and just base yours off it using wordpress and plugins or Wix or whatever you want
bmk#1476: Ok i have no baseline lol
Mischa#0599: for like freeish
StellaAthena#3530: It’s literally the thing that’s popular architecturally
bmk#1476: Is it a hot take to say that ornate architecture is overrated
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/783577007723642900/100130_150006_Dresden_Frauenkirche_winter_blue_sky-2.jpg
bmk#1476: Sorry to any dresdeners
StellaAthena#3530: Depends how many levels of retro the people you are talking to are on
Mischa#0599: lol
bmk#1476: I've heard a lot of people shit on modern architecture in the same breath as shitting on abstract art and praising 1Xth century castles and churches
StellaAthena#3530: Would it be cool to have a stream from GitHub on our website, the way some pages have Twitter feeds?
bmk#1476: which, i don't get abstract art, so i'm like "oh shit is this what it feels like to like abstract art" |
Mischa#0599: open ai is using ai to dynamically tailor their webpage based on microphone data. I just said I liked gradients
Mischa#0599: https://cdn.discordapp.com/attachments/729741769738158194/783577632372686878/unknown.png
Mischa#0599: their homepage
bmk#1476: oh no your preferences must have created a ripple effect
bmk#1476: nooooooooooo https://cdn.discordapp.com/attachments/729741769738158194/783577807766421504/unknown.png
Mischa#0599: hey why is yours prettier
bmk#1476: unrelated but what if you took one of these, put a letter delta around it, turned it 180, and substituted it for the gradient symbol in latex https://cdn.discordapp.com/attachments/729741769738158194/783578370461138964/unknown.png
bmk#1476: it would make all the math look *fabulous*
Mischa#0599: https://www.awwwards.com/ always shop website design inspiration here, they have tons of categories and tags to search with too
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/783582559305072640/nablatheta.png
mgostIH#0245: Hell yes we need more colors in math papers
mgostIH#0245: Code does that and it helps a lot
bmk#1476: if anyone wants to convert this into a font or something that we can put in papers, go ahead
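A minimal LaTeX sketch of the idea (assuming only the xcolor package; `\fancynabla` is a hypothetical macro name, and a true gradient across a single glyph would need heavier tikz/fading machinery, so this approximates the look with a flat color):

```latex
\documentclass{article}
\usepackage{xcolor}
% \fancynabla is a hypothetical macro name; a real color gradient
% across a single glyph needs tikz/fadings, so this approximates
% the look with a single flat color
\definecolor{fancypurple}{HTML}{8A2BE2}
\newcommand{\fancynabla}{\mathord{\textcolor{fancypurple}{\nabla}}}
\begin{document}
$\fancynabla_{\theta}\,\mathcal{L}(\theta)$
\end{document}
```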
bmk#1476: all of eleuther's gradients must be f a n c y
Mischa#0599: Apple gradient quality but with more personality
Mischa#0599: https://cdn.discordapp.com/attachments/729741769738158194/783745134347944016/unknown.png
cognomen#6297: https://phiresky.github.io/blog/2019/rga--ripgrep-for-zip-targz-docx-odt-epub-jpg/
cognomen#6297: something that popped up on HN recently
cognomen#6297: runs searches obscenely fast through big archives like books3
bmk#1476: books3 is already text
cognomen#6297: without extracting
cognomen#6297: it took forever for gnu tar to just list the files in it so I'm just surprised this can run so fast
bmk#1476: you can already do this using lmd
StellaAthena#3530: @cognomen how are you searching through books3 that is slow?
chirp#4545: https://twitter.com/sama/status/1334196199088287744
chirp#4545: 🔮
bmk#1476: "hot take: 2020 was an amazing year even after taking the pandemic into account"
zphang#7252: "with notably rare exceptions, 2020 was an amazing year"
cognomen#6297: *"2021 marks another year of startling progress in the northern states of america..."*
Dromarion#3383: You could probably find plenty of good things that happened in any year. What specific technological advances are expected in the next year anyway?
bmk#1476: well, for one, eleuther will publish a lot of papers next year
thenightocean#6100: It's hard to speculate, but based on that Sam Altman tweet we will see a lot of foomy things
StellaAthena#3530: Real world quantum computing (for the third or fourth time depending on how you count)
thenightocean#6100: Every time I hear some new achievement in quantum computing it sounds amazing, and then I read Scott Aaronson's comment and he makes it sound pedestrian
StellaAthena#3530: tl;dr theoretical quantum computing is super cool, real world quantum computing is super cool if you are into hardware and not if you’re into algorithms
StellaAthena#3530: That’s an overgeneralization, but a decent rule of thumb
mgostIH#0245: It's because it's written in Rust 🙏
triggerhappygandi#0001: @StellaAthena is it any good for non-quantum computation?
triggerhappygandi#0001: Like yeah I guess it would do good with simulation of molecules or something, but can there be any use for plebeians?
StellaAthena#3530: @triggerhappygandi IRL today, or some day we hope? |
triggerhappygandi#0001: Some day
triggerhappygandi#0001: Because as of now only Google has a processor that beats classical computers.
StellaAthena#3530: Yes. The two most important examples are database search (you can search a database of size N in time sqrt(N)) and factoring (there is a known *exponential* speed up for factoring)
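As a back-of-envelope illustration of the gap (assuming the textbook query counts: roughly N/2 classical probes on average versus about (pi/4)*sqrt(N) Grover iterations; illustrative arithmetic only, not a quantum simulation):

```python
import math

def classical_queries(n):
    # unstructured search: about n/2 probes on average classically
    return n // 2

def grover_queries(n):
    # Grover's algorithm: about (pi/4) * sqrt(n) oracle calls
    return math.floor(math.pi / 4 * math.sqrt(n))

# for a database of a million items: ~500,000 classical probes
# versus under a thousand Grover iterations
print(classical_queries(10**6), grover_queries(10**6))
```

The factoring speedup (Shor's algorithm) is even more dramatic: polynomial quantum time versus the best known classical algorithms, which are super-polynomial.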
StellaAthena#3530: Consequently, there's a lot of work right now on inventing encryption protocols that aren't subverted by quantum computers
triggerhappygandi#0001: Is this based on some theoretical proof or has someone attempted factoring using a quantum computer?
StellaAthena#3530: The algorithm is faster
StellaAthena#3530: Classical *hardware* is faster than quantum hardware
StellaAthena#3530: With an apples-to-apples hardware comparison the quantum computer wins
StellaAthena#3530: But we aren't good at building quantum hardware
StellaAthena#3530: (hence why I said quantum hardware is exciting)
StellaAthena#3530: It's like how Google can trounce the clever AI algorithm on your personal computer by throwing more compute at the problem than you could ever afford
StellaAthena#3530: @triggerhappygandi does that make sense
triggerhappygandi#0001: I see.
triggerhappygandi#0001: But from what I know qubits aren't as reliable to store information as regular bits.
StellaAthena#3530: Yes
StellaAthena#3530: There are a lot of interesting and hard hardware problems surrounding actually operating a quantum computer efficiently for a lengthy period of time
triggerhappygandi#0001: It is a cool engineering challenge, but will it continue Moore's law across the board? I doubt it.
triggerhappygandi#0001: Are there even other options?
StellaAthena#3530: Other options for what? Moore’s law?
triggerhappygandi#0001: Yes |
bmk#1476: Insert cerebras here
bmk#1476: Also, a large percentage of the planet's surface isn't cpu manufacturing facilities yet [citation needed]
triggerhappygandi#0001: Lol. I guess that's one way to go about it.
StellaAthena#3530: Problems that quantum algorithms solve efficiently and problems that classical algorithms solve efficiently are (probably) overlapping but distinct
triggerhappygandi#0001: Can't wait for them to prove/disprove string theory
triggerhappygandi#0001: Also, :neetz:'s existence suggests that there should be a Socrates too
StellaAthena#3530: Hey, that just means that proving it is our only hope
cfoster0#4356: Rarely does a stranger's post capture *exactly* what I'm thinking https://mcbal.github.io/post/an-energy-based-perspective-on-attention-mechanisms-in-transformers/
cfoster0#4356: TL;DR of the same https://twitter.com/MatthiasBal/status/1332653831470129153?s=19
Deleted User#0000: @cfoster0 thanks for the links!
Deleted User#0000: yea, i like reading about these interpretations too
Deleted User#0000: even if they aren't too practical yet
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/784110191868248104/elephant.jpg
bmk#1476: an elephant is like a torus
Mischa#0599: *screams quietly*
Kazumi#1297: would the plural of torus be torai?
StellaAthena#3530: Tori
Kazumi#1297: ah, yeah
bmk#1476: an elephant is homotopy equivalent to a circle, too
asparagui#6391: there's a hole through the middle, no?
bmk#1476: an elephant is a tube
CRG#8707: This comment made me think, has anyone tried doing something like passing the QKVs through a nonlinearity/MLP before the attention? https://cdn.discordapp.com/attachments/729741769738158194/784121322073751583/52962e4ebade5292b8583e8fcb17d91f.png
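A minimal single-head numpy sketch of the variant being asked about (toy shapes, and the MLP weights are shared across q/k/v only for brevity; illustrative only, not a claim about what works):

```python
import numpy as np

def mlp(x, w1, w2):
    # position-wise two-layer MLP with ReLU
    return np.maximum(x @ w1, 0.0) @ w2

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v):
    # standard scaled dot-product attention, single head
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

rng = np.random.default_rng(0)
n, d = 8, 16                      # toy sequence length and model dim
x = rng.normal(size=(n, d))
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
w1, w2 = rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d))

q, k, v = x @ wq, x @ wk, x @ wv
# the variant in question: an MLP/nonlinearity applied to q, k, v
# before the attention itself
out = attention(mlp(q, w1, w2), mlp(k, w1, w2), mlp(v, w1, w2))
```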
3dprint_the_world#6486: I wonder if I should get into robotics.
spirit-from-germany#1488: https://youtu.be/zBvvbOLq3t0
andyljones#7746: on one hand: hardware is hard. reality is a very slow simulator. things are expensive.
on the other hand: exactly those points put lots of other people off working in it, so on the general principle of 'run in the opposite direction to everyone else' yeah go do it
Airatak#7842: Hey Guys! Can someone please share the pretrained GPT Neo models? I'm trying to make an essay generator but GPT2 seems to be very bad at it
Daj#7482: Our models are currently not better than GPT2 unfortunately
Deleted User#0000: i've tried almost everything you said and more, and none of them really yielded better performance than just plain multi-head attention
Deleted User#0000: other things i've tried is (1) mlp for values [nonlinear] (2) attention on the outputs of the multihead attention (3) intra-features attention (4) GLU on q,k,v
Deleted User#0000: perhaps I should build some experimental repository where people can plug and play these combinations and perhaps stumble into something that works tho
Deleted User#0000: after all, the GLU on feedforward seems to be a success
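For reference, the GLU-on-feedforward variant mentioned here (as in Shazeer's "GLU Variants Improve Transformer") gates the hidden activations with a sigmoid of a second input projection; a minimal numpy sketch with toy shapes:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def glu_ffn(x, w_gate, w_in, w_out):
    # GLU feedforward: elementwise-gate the hidden activations
    # with a sigmoid of a second input projection
    hidden = (x @ w_in) * sigmoid(x @ w_gate)
    return hidden @ w_out

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))       # toy (positions, model dim)
w_in = rng.normal(size=(8, 32))
w_gate = rng.normal(size=(8, 32))
w_out = rng.normal(size=(32, 8))
y = glu_ffn(x, w_gate, w_in, w_out)
```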
CRG#8707: There really should be some kind of repository of negative results.
Deleted User#0000: these days, when i read papers, i take omissions as negative results by default, unless it's explicitly stated they didn't have the resources to do so
Deleted User#0000: papers have an incentive only to report themselves in a positive light
Deleted User#0000: pervasive problem in academia tho
bmk#1476: hallo
bmk#1476: what brings you to these parts today
bmk#1476: well, uh, we're musing over the idea of maybe building a dataset of bio stuff but we plan on first finding some real biologists to help us figure out which data is useful for solving which problems and stuff
bmk#1476: yeah that's a thing we might do Eventually™
bmk#1476: the other thing that's been a thing we need to do for ages is figure out how to do html->text actually properly, and then convert all 3.5PB or whatever of CommonCrawl to text
bmk#1476: that only means our dataset will have more impact! 😄
bmk#1476: good point
bmk#1476: so yeah a good multilingual garbage-free html to text extractor+filterer is our big data project atm and we really don't know how we want to do that yet
Sid#2121: well, the pile still isn't *done* per se right? is the repo complete? what needs to be done on the paper still
bmk#1476: mostly analysis and ablation
Sid#2121: i guess what i'm saying is, these new projects sound cool, but we should finish our old ones too
bmk#1476: i've basically taken over ablation from you lol
bmk#1476: but don't worry, i'll see to it that pile gets finished
Sid#2121: we found extraction quality is better in english using the WARCs, but i have to agree the WETs might be better for multilingual. Haven't actually looked at them too much, though.
bmk#1476: that's another option, if we figure out a good WET cleaning technique that would be an option
Sid#2121: yeah sorry about that, happy to take it back over but you seem to have things planned out
bmk#1476: we chose WARCs for Pile because cleaning WARCs is easier
bmk#1476: yeah if we figure out a way to clean WETs that would be nice, the ccnet people seem to have done not too bad but their pipeline is complicated and i'm a bit skeptical of a few of the components
bmk#1476: though i've been thinking, CommonCrawl is *big* but 3.5PB isn't *that* much for like 8 years or whatever of scrapes and it certainly is much smaller than the entire internet [citation needed], and after extracting and filtering the entire thing for high quality text we may only have a few dozen TB left at the other end
bmk#1476: yeah
bmk#1476: that's WARC size
bmk#1476: a few dozen TB is a lot of text but it's not completely implausible that we will have models that can use more (or we can do more aggressive filtering)
bmk#1476: 100% free |
bmk#1476: so i was thinking, what if you made an *even bigger* crawl than CC
Sid#2121: are there any details on how CC extracts the WET files from the WARC files?
Sid#2121: is it just *all* the visible text on the page?
bmk#1476: they only add like 200TB (WARCs) a month, and the internet has got to be way bigger than that
Daj#7482: This seems like a massive project. Like, six figure+ big
Sid#2121: yea
bmk#1476: archivist buys 12 PB at a time lol
Daj#7482: Why not move to multi modal instead?
Daj#7482: We aren't archivist lol
bmk#1476: good idea!
bmk#1476: we can build the worlds biggest image dataset
Daj#7482: Multi modal seems much higher leveredge
Sid#2121: didn't you already scrape like all of instagram @-Archivist ? did you ever do anything with that data?
Sid#2121: / is it public?
bmk#1476: lol
cfoster0#4356: **Lawyer** is typing...
Daj#7482: Making a _legal_ dataset is probably a good bit harder than just a dataset
Sid#2121: yandex / google images is safe, presumably, no?
bmk#1476: i believe it's perfectly fine to *train* on any data, something something fair use
Sid#2121: that's where most of imagenet probably came from
Daj#7482: Google image is actually totally illegal, at least in Europe
Daj#7482: lol
Sid#2121: i'm gonna assume you mean *scraping* from google images otherwise i'm in serious trouble
Daj#7482: I didn't want to assume, hah. What kind of compute are we talking, btw?
Daj#7482: Funnily, even viewing it is _technically_ illegal in germany
Daj#7482: But no one will enforce that
Sid#2121: are... you joking?
Daj#7482: Yes I talked to a copyright lawyer about this at length
Daj#7482: No
Sid#2121: *what*
Daj#7482: Germany/EU copyright is actually batshit insane
bmk#1476: [google street view karte]
digitalisierung in deutschland (symbolbild) [German: "google street view map" / "digitization in germany (stock photo)"]
Daj#7482: We wanted to scrape images from Google for our video game
Daj#7482: Very illegal
Daj#7482: Never brought to court
Daj#7482: But _technically_ illegal
Sid#2121: well, oops
Daj#7482: Yea if we don't wanna publish almost anything goes I guess |
Daj#7482: Do you still have that porn stream archive, Archivist? lol
Daj#7482: Multi modal, text chat to video
Sid#2121: *asking for a friend*
bmk#1476: *owo what's this notices mode*
bmk#1476: woah
Sid#2121: if i could put you on silence mode for owo ing in here i would
Daj#7482: Uhhhh oh
Sid#2121: go stare at the corner and think about what you've done
Daj#7482: Honestly, I think it would be interesting to train on twitch streamers
Daj#7482: I'm a bit ugh on pornographic material
bmk#1476: where do you get all this compute and what do you use it for lol
Daj#7482: Yea, we've never had compute like that, that might open new possibilities
bmk#1476: so here's what i'm thinking
bmk#1476: how big is a cluster?
Sid#2121: that would be incredibly useful
bmk#1476: dang that's not enough for gpt3 replication
Sid#2121: understand the sentiment, but after running things on TPUs, GPUs are practically user friendly
bmk#1476: yes
bmk#1476: ok so here's a concrete path to HUMONGOUS:
bmk#1476: 1. we need native speakers to vet the data at http://data.statmt.org/cc-100/ |
Daj#7482: I mean, if you have a bunch of GPUs laying aoround you don't need, we probably can put them to good use
bmk#1476: the english data, at least, in CC100 is surprisingly good
bmk#1476: if i can get validation from other speakers that the quality is good, the project can go forward
bmk#1476: 2. we need to run https://github.com/facebookresearch/cc_net on all of CC
Sid#2121: how does cc net do extraction?
bmk#1476: they use WETs plus some language model based filtering plus some kinda aggressive heuristics i believe
bmk#1476: we can tone down the heuristics and leave everything else as is
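For a flavor of the LM-based filtering step (cc_net actually uses per-language KenLM 5-gram models; this toy unigram character model is a stand-in just to show the idea of dropping high-perplexity documents):

```python
import math
from collections import Counter

def char_model(corpus):
    # unigram character model with add-one smoothing; a toy
    # stand-in for cc_net's per-language KenLM 5-gram models
    counts = Counter(corpus)
    total = sum(counts.values())
    vocab = len(counts) + 1
    probs = {c: (n + 1) / (total + vocab) for c, n in counts.items()}
    return probs, 1.0 / (total + vocab)   # unknown-char probability

def perplexity(text, probs, unk):
    logp = sum(math.log(probs.get(c, unk)) for c in text)
    return math.exp(-logp / max(len(text), 1))

def keep(doc, probs, unk, threshold):
    # cc_net-style filter: drop documents the LM finds too surprising
    return perplexity(doc, probs, unk) < threshold

probs, unk = char_model("the quick brown fox jumps over the lazy dog " * 50)
```

With this toy model, `keep("a quick brown dog", probs, unk, 100)` passes while a string of out-of-vocabulary symbols does not; real pipelines tune the thresholds per language.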
bmk#1476: i'll just write off all my WARC efforts as a sunk cost
Sid#2121: what's the advantage of using ccnet over MC4?
bmk#1476: well, for one, i don't have a copy of C4 to inspect the quality of
Sid#2121: i thought we were getting one
bmk#1476: we were
bmk#1476: we will have it eventually
bmk#1476: hopefully
Sid#2121: we could also easily run CC on @-Archivist 's hardware
Sid#2121: not like we need to run the whole pipeline, either
Sid#2121: just enough for a good sample
bmk#1476: C4 you mean
Sid#2121: yes
bmk#1476: i mean that's more work |
bmk#1476: and C4 codebase is by google
bmk#1476: who, well
Sid#2121: well, we should know what we're dealing with.
bmk#1476: *ahem*
bmk#1476: mtf
bmk#1476: nyways
Sid#2121: why would we pour a load of work into a multilingual pipeline if C4 / MC4 performs well?
bmk#1476: what no i was proposing just using cc_net as is lol
Sid#2121: we should at least look at it before writing it off
Sid#2121: well, yeah, but you're choosing cc_net over C4 because? reasons?
Sid#2121: from what i can tell, it's just because you dislike google for some reason
bmk#1476: no it's because getting a copy of C4 is hard
Sid#2121: not with chonky compute / storage
bmk#1476: i'm not putting in all that extra work lol
Sid#2121: we just need to run a few commands
bmk#1476: if you want to try to make it work go ahead
Sid#2121: i'd happily do the work this weekend
Sid#2121: if you can get me access @-Archivist
Sid#2121: woops
bmk#1476: i can literally download a copy of CC100 in 5 minutes
Sid#2121: @-Archivist
Sid#2121: sorry, meant to tag you above instead of the other guy
bmk#1476: if you want to go procure mC4, be my guest
Sid#2121: could we run a few things on your servers? I can either send the instructions or you can give me ssh access, whichever works
bmk#1476: i personally think collecting mC4 is a big waste of time
Sid#2121: why would you reject it off hand without even looking at it?
Sid#2121: that's just stupid to me
bmk#1476: i think the payoff to effort ratio isn't really worth it but if you want to take responsibility for it then go ahead
Sid#2121: it's just a few commands, not like it really takes any active development
bmk#1476: ok sure
Sid#2121: i believe we just worked it out :berk: . So if i'd like to run something on your servers, how should i go about it?
bmk#1476: i'm absolutely traumatized by "just running several commands" over the past few weeks to run ablations
bmk#1476: so yeah
bmk#1476: we've worked it out, sid will do it
bmk#1476: we volunteer him as tribute
Sid#2121: tbf @bmk , i'm reading the cc_net paper now and it looks much better thought out than C4. But i still think we should look at both.
bmk#1476: the only info i have is CCnet looks clean at a glance and Louis tells me that C4 is garbage
Chaotic Evolution#4046: @Sid why was I pinged?
bmk#1476: sorry we accidentally pinged you lol
Sid#2121: sorry, mistyped |
bmk#1476: it was a typo
Chaotic Evolution#4046: Gotcha
Chaotic Evolution#4046: No worries then lol
Sid#2121: a cluster would be ideal, yes. I gotta admit i'm not particularly well versed in that sort of thing.
Sid#2121: CC_net should technically work with ~three commands, but i doubt it'll be that straightforward
bmk#1476: it's set up to use some weird apache beam thing https://cdn.discordapp.com/attachments/729741769738158194/784519212013977610/unknown.png
Sid#2121: C4 i'll have to look into
bmk#1476: it was like 3k or something
bmk#1476: something absurd
bmk#1476: yeah i love hardware too
bmk#1476: unfortunately i only have puny amounts of hardware compared to what you've got over there lol
bmk#1476: lol that's insane
Daj#7482: Your anecdotes are amazing lol
bmk#1476: i aspire to have a career as exciting as that lol
StellaAthena#3530: @bmk is there a place I can easily skim some data in cc_net?
bmk#1476: yes
bmk#1476: http://data.statmt.org/cc-100/
StellaAthena#3530: / how do you plan on handing it out for people to vet
bmk#1476: ¯\_(ツ)_/¯
StellaAthena#3530: That website is fine IMO |
Sid#2121: do we have the english chunk downloaded anywhere?
StellaAthena#3530: Pro tip: don’t click on one of those buttons unless you want to download a massive chunk of data
Sid#2121: also, do we *really* need speakers of all the languages to verify the data? Like, surely all we need to do is check for boilerplate
bmk#1476: can you recognize boilerplate in japanese
Kazumi#1297: who even speaks japanese
bmk#1476: ikr
StellaAthena#3530: Given the lengthy history of ML researchers completely ignoring the accuracy or applicability of their work to places and data that aren’t the US or Europe... yes?
bmk#1476: does japan even exist
StellaAthena#3530: Also this
bmk#1476: anyways there is the possibility of doing step 2 in parallel and then using the feedback to retroactively fix wherever possible
bmk#1476: i have a copy
StellaAthena#3530: Q: what’s the plan with this data? Train on it? Add more shit to it?
bmk#1476: A: collect approximately 5x more of it
StellaAthena#3530: !
bmk#1476: archivist has absurd amounts of compute and storage so we can run this code on all of CC
StellaAthena#3530: 😮
StellaAthena#3530: That would be nuts
bmk#1476: like truly absurd amounts
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/784527052593692692/unknown.png
Kazumi#1297: > not as low as 1PB |
bmk#1476: :berk:
StellaAthena#3530: I have a friend who's a mathematician, linguist, and polyglot I should reach out to
StellaAthena#3530: He can probably be helpful and will be very excited
Sid#2121: ok fair point
Sid#2121: is it on the server?
Sid#2121: can it be?
bmk#1476: absolutely, drag him onboard
bmk#1476: er, i'll rsync it over
Kazumi#1297: huh, you get to be on wikipedia for being a polyglot https://cdn.discordapp.com/attachments/729741769738158194/784527747572957204/Screenshot_from_2020-12-05_06-12-39.png
bmk#1476: > six or more
Mischa#0599: 6 seems so arbitrary
bmk#1476: shit i gotta get to it
bmk#1476: i'm only at 5 even if i finish all the ones i'm currently working on
Kazumi#1297: lets see I know English, Japanese, python, java, C#, and brainfuck, do I get to be on the list
Mischa#0599: also, 1:1? Like is Fuzhounese not counted? It's pretty different from mandarin. Spanish is easier and faster than say Latin.
Mischa#0599: I just can't get over the idea of six for polyglot
StellaAthena#3530: Six or more? Scrubs
bmk#1476: lol i should go learn dutch
StellaAthena#3530: Matt knows like 10
bmk#1476: it's basically english and german put into a blender |
Mischa#0599: I think if you're stacking on like 3, 4 languages you're probably doing polyglot thangs
bmk#1476: and seasoned with impossible to pronounceness
bmk#1476: two questions: a) how good is *know*, like B2 or totally fluent?
Mischa#0599: I just want a neurolink .lib for each natural language
StellaAthena#3530: I would have to ask, I am not particularly sure
bmk#1476: b) i really need to get in touch with matt he seems like an incredibly cool person
Mischa#0599: I second this, as someone obsessed with languages and not very good at them.
StellaAthena#3530: He is
bmk#1476: we need him around here so we can all just have big language discussions
Mischa#0599: depending on which ones he knows, it might motivate me to get back to a conversational level to keep it from rusting
bmk#1476: yeah i'm curious which ones he does and what levels of proficiency
Mischa#0599: also meta: I want language learning tips more than specific language knowledge
Mischa#0599: I have a large to-do list
bmk#1476: yes
bmk#1476: i second that
bmk#1476: ftr i'm working on japanese, french, and german right now (i need to work on learning to *read* chinese but i already have high enough conversational proficiency there)
StellaAthena#3530: RIP I forgot about time zones. He’s an observant Jew and he’s going to be AFK for ~23 hours
StellaAthena#3530: Here’s the message I’m sending out to the polyglots I know:
```
I’m working on a multilingual AI project. We are looking to scrape a significant portion of the internet and then strip out the text in the data with an algorithm that processes it and sorts it by language. In particular, we want to get *actual human writing*, not computer code, auto generated boilerplate, and other nonsense. |
Google’s been working on this too, and recently made some of their code and data available. I was wondering if you / people you know would be interested in skimming some data in various languages and reporting on the quality.
To be clear isn’t for Google. We would like to assess how good an algorithm Google open sourced is. This is a group of nerds on the internet with way more compute than anyone should let them touch.
I would happily offer to pay you except for the bit where we have no money so I would rather bribe you with an invitation for coauthorship on the research. If the answer is “yes, but contingent on payment” I’ll show you what we are looking for and we’ll see how much you want for it.
```
bmk#1476: First off this isn't anything to do with google at all lol lol
bmk#1476: cc_net is by facebook
bmk#1476: https://github.com/facebookresearch/cc_net
bmk#1476: this has nothing to do with trying to assess how good *their* algorithm is either
bmk#1476: like technically we are doing that
bmk#1476: but that's absolutely not the point
bmk#1476: we're not volunteering to do *quality assurance for google/facebook*
bmk#1476: oh, no
StellaAthena#3530: Oops
StellaAthena#3530: Well, the thing I am asking people to do is assess the algorithm
bmk#1476: the purpose of this project is to *make sure it works* so we can *collect more data*
StellaAthena#3530: I don’t mean to imply that’s what *we* are doing
bmk#1476: that's by far the bigger focus |
bmk#1476: your email message makes it sound like we're basically doing google's homework for them
bmk#1476: or facebook
StellaAthena#3530: Ah
bmk#1476: that is absolutely not the point
StellaAthena#3530: Ok
bmk#1476: in fact, what we might even do is collect the data first and apply retroactive patches to it
bmk#1476: i haven't decided exactly how we want that to work though
bmk#1476: anyways, here's how i would word it:
bmk#1476: ```
We're working on creating the world's biggest, highest quality fully multilingual text dataset. As part of ensuring quality, we want to ensure that speakers of as many languages as possible have a say in the creation of this dataset. Due to the difficulty of building an html->text system from scratch, we're going to be basing our system off an existing system by Facebook et al (https://github.com/facebookresearch/cc_net), but we want to first get feedback on how their data looks before making modifications to their system based on that feedback. Then we'll run it on a huge compute farm to process way more data than the original cc_net paper did and make it available for NLP researchers to help further development in the field.
```
bmk#1476: @StellaAthena what do you think
StellaAthena#3530: That’s reasonable
bmk#1476: you can obviously make some rewordings
bmk#1476: some of my wordificationings are not optimal
bmk#1476: also do you know any speakers of the rare languages on that list?
bmk#1476: i don't think it's practical to require one for *every* rare language because then we'd be here for years but getting at least a handful would be really nice
StellaAthena#3530: What are the rare languages
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/784539470249984040/unknown.png
bmk#1476: here are all the languages identified in cc100 |
bmk#1476: i am under the impression that there may exist multiple other languages
StellaAthena#3530: I know or have known people who speak every yellow language. I suspect I know or have known people who speak every blue language https://cdn.discordapp.com/attachments/729741769738158194/784541320739028992/image0.png
bmk#1476: Wow, that's a good chunk of them
bmk#1476: We should do it gradually
bmk#1476: I don't think we have the management experience or the workflow set up to handle like 50 people doing things at once
Sid#2121: are you part of some secret super multilingual society or sth? there's like 100000 people in the world who speak gaelic
bmk#1476: We should start with just a small number of like "early alpha testers" so to speak, people who are patient enough to put up with us while we figure our shit out lol
bmk#1476: But gaelic isn't even highlighted
bmk#1476: Fuck autocorrect lol
Sid#2121: i'm assuming irish = gaelic here
Daj#7482: Quite a lot of Irish people speak some gaelic
bmk#1476: oh
bmk#1476: i thought it was "scottish gaelic"
StellaAthena#3530: Gaelic, Irish, Scottish Gaelic, and Scots are all different languages
bmk#1476: hm
Daj#7482: Shows how much I know about languages
Daj#7482: I met a Dutch person once, that's about it
Daj#7482: lel
bmk#1476: so there's the problem that language classification is completely broken for rare languages
bmk#1476: but tackling that is a separate project in itself
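For a sense of what's involved: the classic baseline is character n-gram profiles (Cavnar & Trenkle style), roughly the idea underlying fastText-style language ID. A toy sketch with made-up two-language profiles, which also shows why rare and closely related languages are hard (tiny profiles, heavy overlap):

```python
from collections import Counter

def trigrams(text):
    # padded character trigram counts (Cavnar & Trenkle style)
    text = f"  {text.lower()}  "
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def classify(text, profiles):
    # score each language by trigram overlap with the text;
    # a toy stand-in for fastText-style language identification
    grams = trigrams(text)
    def overlap(profile):
        return sum(min(n, profile.get(g, 0)) for g, n in grams.items())
    return max(profiles, key=lambda lang: overlap(profiles[lang]))

profiles = {
    "en": trigrams("the quick brown fox jumps over the lazy dog"),
    "de": trigrams("der schnelle braune fuchs springt ueber den faulen hund"),
}
```

Here `classify("the dog jumps", profiles)` comes out as "en"; real systems need far larger profiles and still degrade badly on rare and closely related languages, which is the point above.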
StellaAthena#3530: > Daily users outside the education system number around 73,000
StellaAthena#3530: (Re: Gaelic)
StellaAthena#3530: Yeah. I noticed they listed BCS as distinct languages
StellaAthena#3530: That’s Bosnian, Croatian, and Serbian
bmk#1476: what's the problem?
bmk#1476: i know they're similar but
bmk#1476: they are distinct, no?
Daj#7482: ~~They are all rightfully Serbian clay~~
Daj#7482: ~~this is obviously a joke~~
StellaAthena#3530: What constitutes a language is a political question more than anything else
StellaAthena#3530: The saying is that a language is a dialect with an army and a navy
bmk#1476: the problem is, just building the One True Language Classifier that can handle all 3000 written languages is such a gargantuan task that i'm not sure we'll ever get around to building a dataset
StellaAthena#3530: If the bar is mutual intelligibility, then BCS are all the same language
StellaAthena#3530: So are Spanish and Italian
bmk#1476: ~~so are dutch and german~~ /s
StellaAthena#3530: American English and Irish English are not really mutually intelligible
Daj#7482: _X: DOUBT_
bmk#1476: spreek duits jij hoerenzoon [Dutch: "speak German, you son of a whore"]
StellaAthena#3530: Also it’s not an equivalence relation
Daj#7482: Their ridiculous cartoon language is unsuitable for expressing any higher form of thought |
StellaAthena#3530: In particular, it’s not symmetric
StellaAthena#3530: Portuguese speakers can understand Spanish much better than Spanish speakers can understand Portuguese
StellaAthena#3530: Though the extent to which that is due to cultural hegemony, and to the fact that some dialects like Spanish and American English get a lot more air time than others, is unclear
Mischa#0599: "en elegant puzzle: systems of engineering management" is a really well generalized and communicated toolkit for thinking about scaling and team and org dynamics. I loved it.
bmk#1476: ill add it to my mile long reading list
Mischa#0599: lol
Mischa#0599: Stripe, like the payment company, published it
Daj#7482: Reading for the reading list god
bmk#1476: i expect to get to it approximately 75 years from now
Mischa#0599: reading seems so inefficient when you think of learning as downloading and installing new models and libraries, but it's amazing and I love reading.
Mischa#0599: I just wish I could read more.
bmk#1476: i think i might set aside a few days for purely just chewing through my list and nothing else
bmk#1476: where by a few i mean like an entire week
Daj#7482: Where by a week you mean a year
Daj#7482: during which nothing new noteworthy is allowed to be published
Mischa#0599: from a technical standpoint, what does drawing lines between languages and similar dialects do?
Mischa#0599: for the Pile
zphang#7252: Singapore has a navy and an army but Singlish is still considered just a creole 😢
triggerhappygandi#0001: How do you guys keep up with what new research to read about
triggerhappygandi#0001: Following too many people on twitter just made my feed cluttered with apparently new SOTA every single day |
potato123#9646: Call me old school, but I just look at www.arxiv-sanity.com once a week
Also I just check if there are new citations on Google Scholar for papers relevant to me (e.g. checking the latest papers which reference the StyleGAN2 paper)
triggerhappygandi#0001: I check arxiv sanity too. But maybe not enough for it to recommend me the most interesting papers. Will try Google scholar citations
bmk#1476: i read whatever is posted in this discord lol
bmk#1476: but then again that only works if your interests are almost entirely LMs and scaling
triggerhappygandi#0001: I wish for the day RL gets an Imagenet moment and lifts off.
bmk#1476: dont hold your breath
bmk#1476: atari dqn was the imagenet moment if there ever was one
triggerhappygandi#0001: :nooo:
triggerhappygandi#0001: I mean, robots are bound to become better someday. That would make RL the most relevant ML!
bmk#1476: anyways we like RL here too
bmk#1476: so if there's anything big we will discuss it here too
bmk#1476: but that's not our main thing
triggerhappygandi#0001: I know
bmk#1476: it's a filter
bmk#1476: if we talk about it you know it's probably big lol
triggerhappygandi#0001: Did you see this?
https://twitter.com/ylecun/status/1334860576418312194?s=19
bmk#1476: no, sounds interesting
bmk#1476: not like breakthrough levels but a neat framework at least |
potato123#9646: This is the one with public belief state? If I recall correctly
triggerhappygandi#0001: If the same algorithm can beat humans at both Go and Poker then it's at least better than AlphaZero
triggerhappygandi#0001: I have yet to read the paper. Just read the abstract.
Airatak#7842: Anyone here got ShortlyRead Premium? Is it any good?
bmk#1476: what does it do
Airatak#7842: It is a writing assistant, most likely powered by GPT-3.
Check it out: https://shortlyread.com
potato123#9646: It uses GPT3, there are many other alternatives which do similar things, like:
https://www.copy.ai/
https://snazzy.ai
https://virtualghostwriter.com/
potato123#9646: So, yeah choose whatever flavour you want for whatever task you need. As all of these use GPT3
triggerhappygandi#0001: I'm just going to wait for gpt-neo to subvert OpenAI's master plan
thenightocean#6100: my project that helps with that: https://ai-progress-feed.netlify.app/
bmk#1476: be prepared to wait a year or two or several
potato123#9646: Looks nice, gonna bookmark this one.
Airatak#7842: Well the first 2 seem to be for specific purposes, this one seems more general
triggerhappygandi#0001: In several years we will probably train GPT-3 on an 8GPU instance on AWS@bmk
bmk#1476: X - doubt
triggerhappygandi#0001: Yeah it's far fetched but you get the gist |
bmk#1476: hardware doesn't progress *that* fast
triggerhappygandi#0001: Gpt neo will be ready by then I'm sure
potato123#9646: Correct, but there are other ones for creative writing, which I forgot. But I would use a tool which is focused. For example I used copy.ai to write all the text for my website https://bezier.ai. Other tools didn't give me good results.
bmk#1476: we need to make gptneo write a paper at some point
triggerhappygandi#0001: Unlimited power
triggerhappygandi#0001: Fine tune it on arxiv's 1.2 million papers
bmk#1476: we are one step ahead
bmk#1476: we already have the data
bmk#1476: we are already training
bmk#1476: we're not fine tuning, we're training it on arxiv from the start
bmk#1476: 4d chess moment
triggerhappygandi#0001: Damn
triggerhappygandi#0001: Yoshua whatnow?
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/784887974990577684/unknown.png
triggerhappygandi#0001: Beautiful
triggerhappygandi#0001: It will be intelligent from the get-go
triggerhappygandi#0001: And a movie buff apparently
Airatak#7842: Wait there are others?
potato123#9646: Yes, a quick google search will give you lot of startups which offer some kind of writing assistants.
I mean there's a startup which raised venture capital. https://techcrunch.com/2020/11/12/othersideai-raises-2-6m-to-let-gpt-3-write-your-emails-for-you/
Kazumi#1297: how is the mean document size for youtube so high?
Kazumi#1297: is it only including videos longer than an hour or something?
Sid#2121: @Kazumi it's subtitles of all languages
Airatak#7842: Yea but then they are in early access :(
I still don't have access to GPT3, so yea... just waiting and trying out these apps until GPTneo comes along
Kazumi#1297: in one document?
Sid#2121: mostly focuses on lectures and TED talks with multiple translations
Sid#2121: yeah
Kazumi#1297: I guess it'll learn to translate better?
bmk#1476: that reminds me, you should write that down in the appendix if you havent already
Sid#2121: that was the idea
Sid#2121: i'm pretty sure i wrote it in the bits of the paper i typed up
bmk#1476: i.e how did you choose the search terms, what kinds of videos, etc
bmk#1476: ah ok
Sid#2121: @Daj inaugural banhammer time ^
Daj#7482: https://www.youtube.com/watch?v=Ux0YNqhaw0I
Daj#7482: I've been waiting for an excuse to use this
Marzipug#6747: Thank you for introducing me to virtual ghost writer. this is by far the most advanced AI i have found online
Marzipug#6747: wanted to share this info for anyone else interested :)
CRG#8707: Needs more transformer corruption. https://youtu.be/iJgNpm8cTE8 |
bmk#1476: What happened here
Daj#7482: Spambot
thenightocean#6100: eh I naively clicked on his link in the off-topic channel. To make it worse my wife was on the couch with me at the time, so I had to awkwardly explain that this place isn't a porn links discord 😀
andyljones#7746: @bmk idk if your questions yesterday were linked to Wei Dai's post, but if they weren't it's a great example of how it's having some informational edge, noticing something that's not widely been noticed, that's important. rather than 'being good' in a generic sense.
https://www.lesswrong.com/posts/pYxpvoGKa5Sdnxpmc/anti-emh-evidence-and-a-plea-for-help
andyljones#7746: i'm also on board with the various comments about 'you may well be being comp'd for risks you don't realise', along with 'yes, and some of those risks are totally worth it'
andyljones#7746: Also worth mentioning that it *is* Wei Dai, and what works for Wei Dai may not work for mortal man.
bmk#1476: yeah, i think i heard about wei dai talking about evidence against emh and yud being kind of convinced
bmk#1476: also yes wei dai is a legend among us mere mortals
Mr. No#2263: Hey all
bmk#1476: Hello
bmk#1476: What brings you to these parts
Mr. No#2263: The eye. I was curious about the AI
Mr. No#2263: I freakin love all things artificial intel
bmk#1476: then you're in the right place
bmk#1476: https://github.com/EleutherAI/info
bmk#1476: some info about what we do
Mr. No#2263: I read you guys needed cpu power
bmk#1476: what scale of compute are you talking? |
bmk#1476: thanks to the generosity of archivist, we now have quite a bit of compute at our disposal, though obviously more is always better
Mr. No#2263: Oh. I was talking more a couple of personal pc's. Nothing huge
bmk#1476: yeah, thanks for the offer but i think we have enough for the time being
Mr. No#2263: Alright
Mr. No#2263: Sorry i couldnt do more
Mr. No#2263: Lol
bmk#1476: oh, no problem
bmk#1476: we currently have a massive shortage of people who can write code, so if that tickles your fancy you can help there
Mr. No#2263: Yike
I am interested in coding but im terrible at it
Mr. No#2263: I mostly Frankenstein it from open source githubs
bmk#1476: other things: do you happen to be good at graphic design or website design
Mr. No#2263: Graphic design is unironically my passion
bmk#1476: perfect!
bmk#1476: we could def use a graphic designer around here
Mr. No#2263: Whew
Just be warned
I havent ever made stuff to be used publicly
bmk#1476: no problem, we're all learning as we go
bmk#1476: right now we mostly need logos designed for all of our projects |
bmk#1476: gptneo and pile in particular
bmk#1476: there are a few candidate logos for pile so far but we haven't settled on one yet, and gptneo has no logo at all
Mr. No#2263: alrighty
Mr. No#2263: ill see what i can learn and what i can bust out
Daj#7482: > 25GB
Daj#7482: Seems small for your standards hah
Mischa#0599: I mean the sentiment is pretty cash money of you imo
Mischa#0599: that's byteist
quality > quantity
Mischa#0599: what if it's 25GB of baby yoda
Daj#7482: It's Archivist tho
bmk#1476: archivist has ascended to a higher plane of existence where he is no longer capable of handling data smaller than 1TB
Mischa#0599: That’s fair
gwern#1782: (surely you mean a higher *order* of existence)
triggerhappygandi#0001: https://twitter.com/spibblez/status/1335638633970348032?s=20
triggerhappygandi#0001: Supersampling without deep learning. :bigzucc:
Kazumi#1297: Can you make a better pseudo code executer than python by fine-tuning GPTs?
triggerhappygandi#0001: They could give coherent code but would it be impressive?
StellaAthena#3530: I swear I read a paper that purported to prove that any model trained via backprop approximates a kernel, in the sense that as epsilon goes to 0 it converges to a kernel computation. I can't seem to find the paper anymore, does anyone know what I'm talking about?
Aran Komatsuzaki#5714: https://www.reddit.com/r/MachineLearning/comments/k7wj5s/r_every_model_learned_by_gradient_descent_is/ |
Aran Komatsuzaki#5714: this one?
StellaAthena#3530: Yes thank you
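For reference, the result in that paper (Domingos 2020, "Every Model Learned by Gradient Descent Is Approximately a Kernel Machine") can be sketched as follows. This is a from-memory summary of the paper's claim, not a quotation, so treat the notation as approximate:

```latex
% In the gradient-flow limit (infinitesimal learning rate), a model f_w(x)
% trained by gradient descent on examples x_1, ..., x_m is approximately
% a kernel machine:
y(x) \approx \sum_{i=1}^{m} a_i \, K^{p}(x, x_i) + b
% where K^p is the "path kernel": tangent-kernel similarity integrated
% along the path c(t) the weights take during training:
K^{p}(x, x') = \int_{c(t)} \nabla_w f_w(x) \cdot \nabla_w f_w(x') \, dt
```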
triggerhappygandi#0001: Anyone got that Venn diagram of various domains of improvement on Transformers?
triggerhappygandi#0001: It had 4 domains, memory, kernel, etc
gwern#1782: you are thinking of the tay venn diagram: https://www.gwern.net/GPT-2#efficient-attention
gwern#1782: specifically, https://www.gwern.net/images/ai/2020-tay-figure2-efficientattentiontaxonomy.png
bmk#1476: @carro made me this✌👑✌👑✌ wen c4
Mr. No#2263: anyone got the logo for this server as a png?
Mr. No#2263: im designing a new one and i wanna incorporate the old
StellaAthena#3530: @Mr. No it’s on the website as an image. You should be able to download it as a .png
Mr. No#2263: oh i didnt even think to check there
Mr. No#2263: thanks
Mr. No#2263: so what are you guys looking for in a logo?
StellaAthena#3530: TBH I wasn’t aware we were in the market for a new logo
Mr. No#2263: oh
someone mentioned that you guys needed a graphic design for something and i assumed a logo for something or other
Mr. No#2263: oh bmk said you guys were looking for a logo design for the project as a whole
StellaAthena#3530: Ah. So we are looking for logos for each project, especially one for our GPT replication, GPT-Neo
Mr. No#2263: gpt? sorry im new to all the deeper ai stuff
Mr. No#2263: the most i know about ai is coding |
Mr. No#2263: and im terribad at that
StellaAthena#3530: GPT-1, GPT-2, and GPT-3 are the names of particular language models created by OpenAI. We are replicating them, and are calling our version GPT-Neo
Mr. No#2263: okay so what are you guys thinking for a logo?
Mr. No#2263: like a skull or an eye? what styles and what mood are you going for?
gwern#1782: I thought the little scattered boxes design was perfectly decent, and scales down to a favicon with 3 boxes and up to a bigass logo
StellaAthena#3530: That’s the logo for the Pile
StellaAthena#3530: I think specifically not a skull is a good start. The model reads and writes text, so something cute with a robot reading a book or something like this with text streaming out could be cool https://cdn.discordapp.com/attachments/729741769738158194/785732352063832084/image0.jpg
StellaAthena#3530: It’s suuuuper cliche, but I like the brain/computer hybrid DL logos
StellaAthena#3530: We don’t need all of our logos to make a cohesive theme, but it would be nice if it didn’t clash with this logo as they’ll be next to each other on our website. https://cdn.discordapp.com/attachments/729741769738158194/785733048813223956/image0.png
AI_WAIFU#2844: Oh god no, those galaxy brain memes became a thing for a reason.
triggerhappygandi#0001: Yes this one. Thank you
triggerhappygandi#0001: @gwern impressive website btw
bmk#1476: I like the scattered boxes too but I'd really like to make the color slightly different. I think Kazumi posted some code that'll let me do that
bmk#1476: wait, maybe he didn't
bmk#1476: @Kazumi can you post your modified version of veedrac's script
StellaAthena#3530: They did a version that was color matched to the plot in the paper https://cdn.discordapp.com/attachments/729741769738158194/785748629687500810/image0.png
StellaAthena#3530: I don’t think they posted a version with them scattered tho
bmk#1476: minor discord search gripe: the special operators like "from" and "contains" actually change based on your localization, which is extremely annoying if you change your localization frequently and search a lot since you keep typing the wrong word https://cdn.discordapp.com/attachments/729741769738158194/785749300155252786/unknown.png
bmk#1476: i still type `von:` occasionally by accident
Kazumi#1297: I'm outside, I can give you the file when I come home |
Kazumi#1297: The scattered one was apparently scattered manually, which I gave the file you need to do that for
CKtalon#7792: general question. If I understand correctly, GPT3's few shot (providing the examples and the prompt) has a 2048(?) token limit. Is it an OpenAI API restriction because of compute limitations, or a symptom of the model's training?
cfoster0#4356: Model. At the start of the model they convert token positions into embeddings and they decided to make it 2048 long
cfoster0#4356: Theoretically they could've gone for a different positional encoding without that hard limit, but that's what they did
CRG#8707: ViT (for image classification) was able to upscale the embeddings to fine-tune at higher resolution. So that might be possible with text.
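A minimal numpy sketch of the point being made here: a *learned* positional-embedding table hard-codes the maximum context length, and a ViT-style interpolation of that table is one way to stretch it. The constants and helper names below are illustrative, not GPT-3's actual implementation:

```python
import numpy as np

CTX, D_MODEL = 2048, 8  # GPT-3 uses CTX=2048; D_MODEL shrunk for the demo

# Learned positional embeddings: one trained row per position, fixed size.
pos_emb = np.random.randn(CTX, D_MODEL)

def embed_positions(seq_len: int) -> np.ndarray:
    """Look up position embeddings; there is simply no row past CTX."""
    if seq_len > pos_emb.shape[0]:
        raise ValueError(f"sequence length {seq_len} exceeds context {CTX}")
    return pos_emb[:seq_len]

def interpolate_positions(new_ctx: int) -> np.ndarray:
    """ViT-style trick: linearly interpolate the table to a longer context,
    then fine-tune at the new length."""
    old_ctx = pos_emb.shape[0]
    old_idx = np.arange(old_ctx)
    new_idx = np.linspace(0.0, old_ctx - 1, new_ctx)
    return np.stack(
        [np.interp(new_idx, old_idx, pos_emb[:, d]) for d in range(D_MODEL)],
        axis=1,
    )
```

Whether interpolated text positions fine-tune as gracefully as ViT's image patches did is an open question, as CRG notes.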
Kazumi#1297: here it is https://cdn.discordapp.com/attachments/729741769738158194/785784645265915914/psvg.py
CKtalon#7792: but isn't this 2048 token limitation very limiting? We've seen what it can do, but it's all very small-scale stuff because of that limit. Like giving a description to get HTML written. It's cool, but nothing useful for real use
CKtalon#7792: my question is will GPT-Neo limit itself as well?
triggerhappygandi#0001: Does linear attention mean that gpt-neo will be a couple orders of magnitude less expensive to train than GPT-3?
triggerhappygandi#0001: There must be a comparison with GPT-2
Sid#2121: @CKtalon the 2048 token context window isn't as limiting as you would expect. The vast majority of relevant tokens are within the last few hundred. The problem is that attention is an O(n²) complexity algorithm, and most of the training documents are actually smaller than 2048 tokens, so longer context sizes don't really show much improvement. We have linear attention implemented and may test it soon, though.
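The quadratic cost Sid mentions comes from the score matrix itself: standard dot-product attention materializes an n×n matrix of pairwise scores. A self-contained sketch (not EleutherAI's codebase; single head, no batching, illustrative sizes):

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Standard scaled dot-product attention. The (n, n) score matrix is
    where the O(n^2) memory and compute cost lives."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)   # shape (n, n): quadratic in seq length
    return softmax(scores) @ v       # shape (n, d)

n, d = 2048, 64
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = attend(q, k, v)  # the intermediate score matrix has n*n ≈ 4.2M entries
```

Doubling the context to 4096 quadruples that intermediate matrix, which is why linear-attention variants only start to pay off at much longer contexts.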
CKtalon#7792: i'm more thinking of using it for writing stories
Sid#2121: @triggerhappygandi linear attention at such small context sizes actually doesn't make such a massive difference and performs slightly worse. It really starts to pay off at larger context sizes.
CKtalon#7792: even if it's just a short chapter, that's 1500 words
CKtalon#7792: so it's rather limiting
Sid#2121: you can use clever prompting to keep track of metadata like characters/settings. I think @Liminal_Warmth was talking about a system for that in a channel here the other day.
triggerhappygandi#0001: Define _larger_
CKtalon#7792: i was more thinking of using a summary of the major points in a particular setting, and let the machine just fill in the rest with it
Sid#2121: But yeah, language models just aren't up to the task of writing novels quite yet.
triggerhappygandi#0001: You mean at seq_len of 100k and more? |
CKtalon#7792: based on the examples given
CKtalon#7792: not really 100k, but something around 1-2k words
CKtalon#7792: so about 3k tokens
Sid#2121: it seems to be more effective for other modalities like images, where your context size could be 50, 100k +
CKtalon#7792: then with few shot, that needs about 15k tokens
triggerhappygandi#0001: I see. So it isn't much use for a language model?
triggerhappygandi#0001: I doubt even the next big LM would be able to do it, which would probably be T6
Sid#2121: i don't think it's been tested extensively yet, so i'm hesitant to say yes, but as i said above, the model seems to take most of its context cues from a relatively small amount of previous tokens
Sid#2121: also most linear attention variants just perform slightly worse on text
CKtalon#7792: yea, i believe gpt3 isn't capable of it, but I'm guessing gpt4 might be able to, but was wondering about the token limit
triggerhappygandi#0001: I see. Also, is this a disadvantage that the model takes its context from only a few nearby tokens?
CKtalon#7792: if the token limit is always a hard cap, then no matter how good the metalearning is, it will never be able to meet the use case
Sid#2121: personally I think some kind of augmented memory would be the best way forward. Can you remember > a few k tokens back when reading a book in detail?
CKtalon#7792: also why aren't reformers or longformers used to train these big language models?
CKtalon#7792: @Sid setting wise, definitely
Sid#2121: we remember an abstracted version of the text but not every word
triggerhappygandi#0001: Indeed
Sid#2121: yes, but in less detail
triggerhappygandi#0001: Which is why I believe VAEs will have to be incorporated in some way
Sid#2121: yeah, or some sort of retrieval like MARGE |
Sid#2121: i think this paper is also a super interesting approach https://openreview.net/forum?id=lU5Rs_wCweN
Sid#2121: not quite memory, but something similar
triggerhappygandi#0001: Can we do something similar to what PixelSNAIL did in regards to capturing context?
Sid#2121: never heard of PixelSNAIL before, will check it out
triggerhappygandi#0001: PixelSNAIL captures gradients from literally every pixel before it. PixelCNN claims to do it, but the dependencies just vanish if you go far enough
triggerhappygandi#0001: Here's a comparison between PixelCNN, PixelCNN++ and PixelSNAIL https://cdn.discordapp.com/attachments/729741769738158194/785830447589621760/20201208_165912.jpg
triggerhappygandi#0001: It catches dependencies from even the first pixel.
Liminal_Warmth#8151: @CKtalon what you’re asking about is what everyone wants to do yeah
Liminal_Warmth#8151: Better methods to control context are arguably more useful than an order of magnitude larger prompt would be (although hard to say because we haven’t seen that in action)
Liminal_Warmth#8151: @Louis and I were discussing this yesterday
Liminal_Warmth#8151: He persuaded me in DMs that symbolic encoding is a pretty powerful method for doing this
CKtalon#7792: yea, guiding/control content seems what's important, else GPT3 will just imagine shit up and go on in a way that's not useful
Liminal_Warmth#8151: Well
Liminal_Warmth#8151: Actually it’s PRETTY good with prompt cycling, especially at low temp settings
Liminal_Warmth#8151: It does introduce new details but at like t=.7 or .6 it mostly works within the prompt context if you feed it about 800 words
Liminal_Warmth#8151: I tested this a ton with one of my fantasy novels and was stunned at how well it understood and picked up on subtlety in the scenes and interpreted it correctly
CKtalon#7792: I wanted to play with GPT3...if only i got an invite.. haha
Liminal_Warmth#8151: This use case isn’t supported by OAI so all you’d be able to do is play with it anyway
Liminal_Warmth#8151: I regret to inform you
Liminal_Warmth#8151: Actually that reminds me—is there a place I can subscribe to or check thats status updates on how GPT-Neo is going? I just browsed the channels a bit to look and couldn’t really find a clear status or update section |
Sid#2121: our organisation is terribad haha
Liminal_Warmth#8151: Gpt-2 is acceptable for now and I can do some refinement on the prompt but I really need to sub in a more advanced model for TextSpark as soon as I have a better option
Sid#2121: we've had a few minor bugs with our codebase that have slowed things down a little, but they're all ironed out now.
Sid#2121: best indicator of progress is just to ping me or @Daj / @bmk
cfoster0#4356: If anything big happens we'll probably link it here https://github.com/EleutherAI/info
Liminal_Warmth#8151: Awesome thank you
Liminal_Warmth#8151: I’ll bookmark that
Liminal_Warmth#8151: What are the biggest barriers right now to forward momentum?
Sid#2121: we may also start a patreon / medium at some point... we'll see
Sid#2121: dev time, haha
Liminal_Warmth#8151: Is there anything I could help with?
Sid#2121: we do this all in our spare time
Sid#2121: sure! what can you do?
Liminal_Warmth#8151: Lots of things 😁
Sid#2121: we need a lot of help with web design
Sid#2121: proficient coders are also always in need
Liminal_Warmth#8151: I’m a coder tho that’s not my core expertise
Liminal_Warmth#8151: Let me PM you my LinkedIn page, happy to discuss further.
Sid#2121: general organisation would also be a massive help - things like running experiments, organising tasks on the github
Liminal_Warmth#8151: Hmmm |
Sid#2121: i think @StellaAthena is probably typing a better response for you haha
StellaAthena#3530: lol yeah
StellaAthena#3530: 🙂
Liminal_Warmth#8151: That might be something I could easily help with too
Liminal_Warmth#8151: One sec
Liminal_Warmth#8151: Unfortunately, as I’m sure is the case with all of you, time is my biggest constraint
StellaAthena#3530: @Liminal_Warmth If you aren't experienced with neural networks, the most useful thing coding task you can do is help us with the evaluation harness. It's relatively basic Python, but basically the goal is to implement code to evaluate the model using various testing protocols. The interaction with the model is entirely hidden behind an API: https://github.com/EleutherAI/lm_evaluation_harness
Liminal_Warmth#8151: Yeah neural net training is an area I have very little experience with at all
Liminal_Warmth#8151: I’ve tuned GPT-2 a little and Shawn helped tune the TextSpark model for us
Liminal_Warmth#8151: I’ve got some time today—let me dig through your docs and message y’all
Liminal_Warmth#8151: With some thoughts
Liminal_Warmth#8151: Also financial support might be a thing I can offer too
StellaAthena#3530: As @Sid notes we have various behind-the-scenes tasks, largely focused on project management, web dev, and PR-type stuff.
Liminal_Warmth#8151: Well—now you’re speaking my language 😁
Liminal_Warmth#8151: I was a project manager and then a product manager for a combined 15 years
Liminal_Warmth#8151: I’ll message you and Sid in a bit Stella 🙏
StellaAthena#3530: Awesome
StellaAthena#3530: I strongly agree. It’s inane, and it's infecting ML in general, driven by companies desiring to use in-house proprietary data. If you don’t publish code such that I can follow the directions in the README and replicate the tables in your paper, it’s not open source.
3dprint_the_world#6486: Very little ML research is actually reproducible, in my experience.
3dprint_the_world#6486: Even in the rare cases when people actually *try* to be open, they're always using some or other odd version of python or pytorch or something, which you need to install a special version of cuda for, etc., and good luck with getting everything working without spending more time configuring things than it would take to just write the code yourself. |
3dprint_the_world#6486: And it's very surprising how so many recent-ish papers (2019, 2020), *still use python 2.7*
3dprint_the_world#6486: And I can't tell you how many times I've actually managed to run their code only for it to crash or bug out after training for an hour.
3dprint_the_world#6486: And how many times training succeeds, but I get results different from the paper, which you then ask the authors about, and they go "Oh sorry we uploaded an old version of our code, we'll upload the new version soon"
3dprint_the_world#6486: which never happens
3dprint_the_world#6486: But hey, I guess ML is more reproducible than psychology
Kazumi#1297: that's just software
StellaAthena#3530: That's not really a fair comparison. The bar for "reproduction" in ML is much *much* lower. Psych doesn't have what we in ML refer to as "replication." What psych researchers refer to as "replication" is more analogous to building the entire code base from scratch based on the paper, doing the experiments, and getting the same results on your first try.
3dprint_the_world#6486: erm, there's plenty of software that people can run reproducibly without error.
3dprint_the_world#6486: yeah I know, I was being tongue in cheek 😀
zphang#7252: the BERT era has significantly improved reproducibility, from my limited pov
3dprint_the_world#6486: A while ago I was thinking about ways of having ML be more reproducible. One possible way could be to have a 'sanity check' problem in every paper which implements the main methods of the paper on a reduced-size model and a number of reduced-size datasets. The idea is that this smaller model should be able to train cpu-only with very little config.
bmk#1476: HF is doing the lord's work wrt reproducibility
Deleted User#0000: yea agreed
zphang#7252: even pre-HF, BERT's code+weights were released wholesale
Deleted User#0000: yes, HF is amazing, where would we be without them?
bmk#1476: what if the main method *is* massive model massive size
3dprint_the_world#6486: then you can't say it's reproducible.
zphang#7252: and also the fact that most models are just "tuned-bert" rather than randomly slapped together NN modules
3dprint_the_world#6486: because it's not.
zphang#7252: makes it easier to compare/reproduce |
Deleted User#0000: i think the problem with ML papers is that, unless it is a truly knock out technique, there's usually some gotcha somewhere, unstated in the paper of course
bmk#1476: mildly warm take: randomly sticking together nn legos is not research
zphang#7252: (caveat: up to about 2-points below the reported scores in tables, which I blame on hyperparam tuning still)
3dprint_the_world#6486: interesting, go on?
Deleted User#0000: but it's already world's better than the papers i used to have to read in the bio realm
Deleted User#0000: where my default instinct is skepticism
bmk#1476: even if you automate the process of sticking and run it over a bunch of different configurations of legos
Deleted User#0000: @3dprint_the_world sure, well, once you get in touch with researchers, they let you in on little details, like so and so tried this but it didn't work on this architecture. so they leave it out
Deleted User#0000: and emphasize the positives
Aran Komatsuzaki#5714: nowadays some papers are not reproducible simply because nobody but the authors have enough computes for them lol
bmk#1476: that being said, i think there's something *fundamentally satisfying* about sticking nn legos which explains why it's so popular
Deleted User#0000: meanwhile, the PR machine at big corp goes nuts
bmk#1476: like i can't be the only one right
Deleted User#0000: and publicize everything as breakthrough
Deleted User#0000: lol
zphang#7252: that's the best part of conferences now IMO
Deleted User#0000: i think RL is the worst
StellaAthena#3530: ML is 90% PR, 9% engineering, and 1% science
Deleted User#0000: NLP is nice
zphang#7252: authors are forced to stand by their posters while you ask them questions, but at the same time you're not putting them on blast on twitter or anything |
Deleted User#0000: i'd give a technique about 20% chance of working
Deleted User#0000: which is already ok
Deleted User#0000: considering Sturgeon's law says 95% of things are crap
Deleted User#0000: lol
3dprint_the_world#6486: yeah but the point of a publication is that it ought to be the 5%
StellaAthena#3530: Literally the entire field of NAS is bunk.
bmk#1476: you know that feeling when you're scrounging together the best lego pieces
Deleted User#0000: i think ML community is quite self-aware though
Deleted User#0000: you regularly see papers come out that debunk
bmk#1476: gotta use the sota activation function, all the different types of layers
Deleted User#0000: like @chilli 's nice paper with label propagation
Deleted User#0000: as an example
zphang#7252: which paper is that?
3dprint_the_world#6486: Contrarian opinion: The ML community is highly non-self-aware, and it's so bad that occasionally some people get fed up and write a paper about it.
Deleted User#0000: https://arxiv.org/pdf/2010.13993.pdf
chilli#5665: you can see our supervisor's take here lol
chilli#5665: https://twitter.com/austinbenson/status/1323320759608057857
Deleted User#0000: yea, i come from medicine, where studies are pushed for profit
Deleted User#0000: so i've truly seen the underbelly
zphang#7252: I think the ML community is both more self-aware than most, and also overrates its self-awareness |
3dprint_the_world#6486: that may be true
Deleted User#0000: isn't this comment by nature self-aware?
bmk#1476: i think there's two seperate ML communities, one made of people who are actually trying to make progress, and one made of people who think it's easier to publish ML papers and produce crap just to fulfil requirements to finish their phd or something
bmk#1476: they just happen to publish at the same conferences
3dprint_the_world#6486: but that's just called being a phd student. I did that too when I was a phd student. You have no other choice.
Deleted User#0000: ok, i'm logging off, or i'll spend the day talking to randos
3dprint_the_world#6486: Now that I have more freedom I can be more self-aware.
Deleted User#0000: lol
bmk#1476: the problem is that those make up a very large percentage of ML papers, which explains why ML feels so shit and also so good at the same time
3dprint_the_world#6486: yea
bmk#1476: and also why it feels like the best research comes from industry
Daj#7482: I basically dropped out of academia for this reason
3dprint_the_world#6486: I probably did my best work during post-doc
Daj#7482: I don't want to spend 6 of my best years pumping out garbage to win a social signalling game
bmk#1476: tbf it's largely similar in lots of non FAANG industry
3dprint_the_world#6486: when I actually knew enough to make meaningful contributions, but also wasn't restricted in information sharing
3dprint_the_world#6486: like you usually are in industry
Daj#7482: The only correct career trajectory is to drop out of school at 12, have Jaan Tallin fund your nonprofit research org at 19, and then eventually write a Harry Potter fanfiction
zphang#7252: you might make more money if you wrote Twilight fanfiction though
Daj#7482: Do not eroticize the paperclip maximizer, please |
3dprint_the_world#6486: rationalist twilight fanfiction. make this happen people
Daj#7482: Willing to bet it exists
3dprint_the_world#6486: probably
Daj#7482: But it's a snarky parody
bmk#1476: stop making me regret removing literotica
Daj#7482: The closest I've read to rationalist erotica is probably God Shaped Hole by 0hpl
Daj#7482: But that's using erotic for horror purpose
Aran Komatsuzaki#5714: you can amass a fortune by writing furry porn
Daj#7482: Fun economics fact: The entire economy eventually bottoms out at furry porn
bmk#1476: how much of ff.net is erotica?
Daj#7482: all of it
bmk#1476: hpmor is erotica??
Daj#7482: Yes
3dprint_the_world#6486: economic activity can be defined as the exchange of furry goods or furry services
Daj#7482: > I couldn’t find anyone talking about this in a way that made sense to me, so eventually I paused my literature search, in favor of just making things up. A proud tradition here at lesswrong!
LW is such a good website
chilli#5665: I don't reallly know if that's true
chilli#5665: like, I'm not interested in most ML papers
chilli#5665: well, the vast percentage of ML papers
chilli#5665: but a lot of that is because I'm just not interested in most subareas |
3dprint_the_world#6486: you're not?
CRG#8707: https://tvtropes.org/pmwiki/pmwiki.php/FanFic/Luminosity
gwern#1782: _checks in and sees that someone has already brought up Luminosity. his work here is done_
Daj#7482: Hey @gwern , speaking of weird rationalist stuff: What is Weird Sun Twitter?
gwern#1782: weird sun twitter is various postrats who spend all their time on wordplay and I think was inaugurated by steven kaas or imitating him
Dromarion#3383: The userbase of AI Dungeon is getting overtaken by those using it to make their own erotica. You know with the amount of lewd input and output I kind of wonder how many of the engineers working on this groundbreaking tech are just coomers :thonk:
Daj#7482: Ah, postrats, yea that seems about right, thanks!
gwern#1782: 'overtaken' implies they weren't the plurality from the day after the first prototype was released on colab and 4chan heard about it...
Daj#7482: This is the first warning of the oncoming wire heading AGI. _If only you had listened._
Daj#7482: You could have stopped this
Daj#7482: It's too late
Daj#7482: Why don't we have a Ron Paul emote
chilli#5665: yeah I was kinda curious about that...
Daj#7482: My "furries are everywhere" conspiracy theory meme is common knowledge around here at this point
Daj#7482: @gwern is just a very sophisticated ~~coomer~~ waifu-aficionado too
Dromarion#3383: Pretty sure a good chunk of VR research is coomer driven too. Well whatever gets us there
gwern#1782: heh. I'm not an AID coomer (I still find it hard to imagine fapping to text, of all things), but it was very hard to miss if you were at all interested in AID on launch
gwern#1782: and I'm always interested in weird uses: 'the street finds its own use for things'
Daj#7482: Yea, there's something funny about how predictable people's _actual_ behavior usually is, but everyone pretends to not notice
triggerhappygandi#0001: No joke a paper that used self-attention on a VAE for neural style transfer was written in python 2.7
triggerhappygandi#0001: Seems like shits and giggles at this point
triggerhappygandi#0001: As if they actively want people to combat errors
triggerhappygandi#0001: I will get my flamer then. God's work needs to be done
Daj#7482: Says the person with a dog pfp
Daj#7482: I think the lady doth protest too much
triggerhappygandi#0001: You see this is the opposite of furry
triggerhappygandi#0001: Tis a Serbian doge
Daj#7482: "I-I'm totally not gay, guys! I only do manly stuff!"
Daj#7482: haha
Daj#7482: Sorry, I'm just joking of course
triggerhappygandi#0001: :zucc:
Daj#7482: Good spirited teasing doesn't work well over text, I'm always scared I'll insult someone
triggerhappygandi#0001: People getting insulted on internet is kinda weak though
triggerhappygandi#0001: I miss the Xbox chats of early 2000s
triggerhappygandi#0001: Now that's a real internet wasteland
Daj#7482: I grew up on 4chan, I know the trenches haha
Daj#7482: But that's not the kind of culture I want to cultivate here
triggerhappygandi#0001: I only go to 4chan when I'm feeling _too_ smart.
triggerhappygandi#0001: So I kill a few million brain cells on /pol/
Daj#7482: At risk of sounding like an oldfag: It ain't what it used to be
Daj#7482: ~~and it was shit back then too lol~~
triggerhappygandi#0001: I would assume humour was better back then
Daj#7482: No, gen Z legitimately has much better memes than we used to, and anyone who says otherwise is a nostalgic idiot
Daj#7482: We had _rage comics_ for god sake
Daj#7482: Those were _legitimately funny on 4chan_
triggerhappygandi#0001: Hey. They take effort to make
Louis#0144: huggingface's dataset library has no ability to stream, right?
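(For context on the streaming question: the lazy-batch pattern a streaming mode provides can be sketched in plain Python. This is a generic illustration with a hypothetical `stream_jsonl` helper, not the huggingface `datasets` API.)

```python
import io
import json

def stream_jsonl(fileobj, batch_size=2):
    # Lazily yield fixed-size batches of records without ever
    # holding the full dataset in memory -- the core idea behind
    # streaming a dataset instead of downloading it up front.
    batch = []
    for line in fileobj:
        batch.append(json.loads(line))
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

# Tiny in-memory stand-in for a JSON-lines corpus:
raw = "\n".join(json.dumps({"text": f"doc {i}"}) for i in range(5))
batches = list(stream_jsonl(io.StringIO(raw)))  # batches of 2, 2, 1 records
```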
3dprint_the_world#6486: Press X to doubt.
Daj#7482: Well you failed
Daj#7482: 2010 memes were shit and we have to take that hard to swallow pill
Daj#7482: Gen Z has weaponized surrealism and depression to hitherto unseen levels
Sid#2121: i truly aspire to Gen-Z's levels of absurdity
3dprint_the_world#6486: 2010 wasn't the golden age of millennial memes though. It was more mid-2000s. Also that's the thing, the themes have changed. Us millennials were mostly for the lulz. Gen Z is mostly depression/doomer stuff.
bmk#1476: you know my take on rage comics
Sid#2121: believe it or not, i don't
bmk#1476: i think that rage comics are underrated
Daj#7482: So https://www.youtube.com/watch?v=10QtCx6SIrY
Sid#2121: this is... a take
Daj#7482: Never before have I been so close to banning someone for a non-anime offense
andyljones#7746: of course memes improve with time. they're the product of thousands of generations of hypercompetitive selection pressures, how could they not improve
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/785987494944440320/unknown.png
3dprint_the_world#6486: Gen Z kind of echoes Gen X. Nihilism/doomerism that ultimately leads to either suicide or becoming a Trump fanboy.
Daj#7482: Thin ice, bucko
Daj#7482: Rage comics crawled so Wojak could run
andyljones#7746: obviously today's hypereffective memes no longer consider @bmk's head worth colonising
andyljones#7746: the ground's been salted by rage comics. nothing shall ever grow there again
3dprint_the_world#6486: Someone on twitter referred to Gen Z as 'child soldiers' and I love that
Daj#7482: The funniest 2020 thing is how the "racist MAGA femboys with anime pfp" is an actual demographic
Daj#7482: Why aren't all sociologists exclusively studying this?
bmk#1476: what memes are examples of *good* modern memes in your opinion?
bmk#1476: i am completely out of touch with the memeplex since i only frequent ich_iel
Daj#7482: Wojak
Daj#7482: Pepe
Daj#7482: And most of the two-week-memes are funny
Daj#7482: :bigbrain: excellent meme
bmk#1476: wojak is good in crypto memes, pepe is.. i have no opinion tbh
Daj#7482: :bigbrain: may be the best meme
Daj#7482: tbf wojak/pepe are more a genre than a meme
Sid#2121: https://www.youtube.com/watch?v=sDj72zqZakE
Daj#7482: Genuinely got me the first time I saw it
bmk#1476: i said *good* and *modern* and *meme*
Daj#7482: https://www.youtube.com/watch?v=zhjrtM7f1OA
Sid#2121: that waffle falling over in 2013 is a 2020 masterpiece
3dprint_the_world#6486: also don't forget that we had meatspin
bmk#1476: fuck stop making me laugh so hard while in a videocall
Daj#7482: Thanks for reminding me
3dprint_the_world#6486: no worries
Daj#7482: Man, it's almost like _THE GAME_
Daj#7482: You just lost it lololol
bmk#1476: relevant xkcd https://cdn.discordapp.com/attachments/729741769738158194/785988693772468267/nostalgia.png
bmk#1476: *RELEVANT XKCD* https://cdn.discordapp.com/attachments/729741769738158194/785988775296630784/anti_mind_virus.png
Daj#7482: https://www.youtube.com/watch?v=DHWH2Jt3s0U
Daj#7482: Quality memes
Daj#7482: https://www.youtube.com/watch?v=LfqowXbejyo Dark Soul Boss meme is good
3dprint_the_world#6486: personally though I think gen Z is basically exactly like millennials
3dprint_the_world#6486: just younger
Daj#7482: https://www.youtube.com/watch?v=7ArSJbt_2U0 Peak meme
3dprint_the_world#6486: every time I ask someone to point out some differences between gen Z and millennials, their arguments boil down to what's essentially some version of "they're younger"
Daj#7482: It's almost as if generational lines are completely arbitrary for anyone other than the baby boomers, who were indeed an unusual outlier in population growth
3dprint_the_world#6486: basically: they're younger, didn't have the 2008 recession, but do have the knowledge the world will end in their lifetime
3dprint_the_world#6486: that's p much it
Daj#7482: https://www.youtube.com/watch?v=l789l6np-qA I feel Eleuther would appreciate this
Sid#2121: i don't care about the lines, how does he make chalk on a blackboard *sound so pleasant*
Daj#7482: https://www.youtube.com/watch?v=T5mdFk-Xsc0 Connor's experience with aliens
Daj#7482: https://www.youtube.com/watch?v=eMonGZEB0Ik I wanna be Monke is probably the best meme of the year
Sid#2121: jokes on them, i was monke all along
Daj#7482: https://www.youtube.com/watch?v=-jdAsT0ZB-M Facebook releases their latest AI
Daj#7482: https://www.youtube.com/watch?v=0G6RF5ChKYQ Eric Andre is the only funny thing on TV
Sid#2121: hah, this is my favourite eric andre bit
Daj#7482: https://www.youtube.com/watch?v=k4GQv5OMk_w
Daj#7482: this also ranks high
Sid#2121: apart from that one where they let a bear in
Daj#7482: oh god yes hahaha
Sid#2121: HAHA i forgot about this one
3dprint_the_world#6486: millennials are the first generation (in modern times) whose kids are exactly like them
3dprint_the_world#6486: interesting to think about
Daj#7482: >George Clooney emerges from table
Daj#7482: >Offer joint
Sid#2121: also https://www.youtube.com/watch?v=9UlXcoVHnog
3dprint_the_world#6486: probably because the boomers were the only generation hated by both their parents and their kids
Daj#7482: I must admit I can't stomach his public stuff
Daj#7482: Cringe too much
Sid#2121: i feed on cringe
Sid#2121: oh god, the touch a stranger's hand challenge
Daj#7482: no
Sid#2121: https://www.youtube.com/watch?v=YpypKVIw7XQ she loves it
Daj#7482: Did you ever see his Hot One's interview? It's pretty great
Daj#7482: Apparently he's even crazy when pretending not to be
Sid#2121: yeah i saw it. he seems so difficult to interview lmao
Daj#7482: It was by far the most coherent interview lol
3dprint_the_world#6486: sorry to keep banging on about this genz/millennial stuff. But surely one can't be aware of Cum Town and still insist GenZ is more absurdist-funny
Daj#7482: This guy is my juvenile sense of trolling incarnate in a person
Daj#7482: Ok boomer
3dprint_the_world#6486: lol
Sid#2121: are cum town genZ? or are you pro cumtown lol
Daj#7482: You can't deny ok boomer was absolutely masterclass trolling
Sid#2121: can't actually tell
3dprint_the_world#6486: @Sid both
Sid#2121: aren't they like, 30 years old
3dprint_the_world#6486: yeah ok boomer was the best
Sid#2121: but yeah, nick and stavros are pretty funny, just, separately
3dprint_the_world#6486: (also a millennial invention, btw)
3dprint_the_world#6486: yeah I think cum town are millennials
Daj#7482: Making a distinction between millennials and gen Z is such a boomer thing to do
3dprint_the_world#6486: yeah true
Daj#7482: Boomer = Everyone older than me I disagree with, of course
3dprint_the_world#6486: I'll stop that now
Sid#2121: obviously
Daj#7482: Gen Z = Everyone younger than me I disagree with
Sid#2121: isn't that why the concept of generations exists in the first place?
Daj#7482: Basically
3dprint_the_world#6486: the concept of generations does make some sense when talking about boomers
3dprint_the_world#6486: probably not much about anyone else
Daj#7482: yea
Sid#2121: because they're fun to rag on, yes
Daj#7482: bOoMeRs RuInEd ThE eCoNoMy
Daj#7482: Julia Galef had a good podcast on that
3dprint_the_world#6486: which one's that
Daj#7482: Rationally Speaking
Daj#7482: probably the only "rationalist" podcast I like
Daj#7482: Sorry Bayesian Conspiracy
3dprint_the_world#6486: ah
Sid#2121: the boomers did one thing right
Sid#2121: and it was this movie https://www.youtube.com/watch?v=wfgO90yGusI
Daj#7482: She interviewed several people making the "boomers ruined the economy argument"
Daj#7482: and it doesn't really hold up to scrutiny (and/or is more the Silent's fault)
3dprint_the_world#6486: humanity ruined the economy.
Sid#2121: the economy ruined the economy
3dprint_the_world#6486: humanity ruined humanity.
Daj#7482: I'm unsure we'll ever know in any satisfying level of certainty what caused the great stagnation, or whether it even happened
Daj#7482: Trust me, I tried lol
3dprint_the_world#6486: my mind was recently blown when I realized modern economics as a discipline only came about after widespread fossil fuel use
3dprint_the_world#6486: and that the correlation between economic growth and fossil fuel use is basically perfect
Daj#7482: I've seen versions that tie it to energy use more than fossil fuels specifically
Daj#7482: But there are quite a few "grand theories of economics" that honestly seem very plausible
Daj#7482: But are mutually incompatible
3dprint_the_world#6486: sure but up to and including now, fossil fuels are the only significant energy source, everything else is a rounding error
Daj#7482: food got us out of the malthusian trap
3dprint_the_world#6486: all other energy sources are like 7% of the total or something
Daj#7482: Yep
Daj#7482: But solar is falling exponentially
Daj#7482: So that will replace fossil soon and then :foom:
bmk#1476: go nuclear
Daj#7482: One of my favorite hypotheses for the cause of the great stagnation is that we "missed" the nuclear age
Daj#7482: if we had just gone from fossil to nuclear it wouldn't have happened
Daj#7482: ¯\_(ツ)_/¯
3dprint_the_world#6486: very likely, although there's reasons we didn't go nuclear that are hard to ignore, even if you ignore e.g. safety
3dprint_the_world#6486: daryanenergyblog had a great series talking about nuclear power. very long and detailed. but basically the gist is that nuclear power would probably have been way too expensive to actually adopt at worldwide scale.
andyljones#7746: one of my ex-coworkers had a lovely explanation for bull and bear markets: 'it's a random walk mate, don't overanalyse it'.
andyljones#7746: it's become my go-to lazy explanation for anything with peaks and troughs
Daj#7482: Can you link? I've seen analysis to the contrary
Daj#7482: On average, you are probably right
3dprint_the_world#6486: and that basically the only reason nuclear seemed cheap at first was because the costs of waste processing and decommissioning were treated as an externality
3dprint_the_world#6486: @Daj here's one of the articles, but there are many more https://daryanenergyblog.wordpress.com/ca/
Daj#7482: Thanks!
Daj#7482: Reading for the reading list god
AI_WAIFU#2844: I don't see any criticism of our lord and saviour, the lead cooled fast reactor.
Daj#7482: you mean ***T H O R I U M***?
AI_WAIFU#2844: thorium is overrated, but sure.
Daj#7482: But I read about it in reddit
3dprint_the_world#6486: out of the few nations that actually undertook large-scale nuclear, all of them were heavily government subsidized. The USSR built reactors as cheap as possible (RBMK) and they exploded, and the USA only did civilian nuclear as a surplus market for military nuclear tech.
3dprint_the_world#6486: And none of them really gave any thought to long-term waste management, except France, but France is an outlier for a lot of reasons.
Daj#7482: What you just said contradicts a lot of what I know about nuclear
Daj#7482: but wtf do I know I'm known for being angry about AI and hilarious nightmare stories
3dprint_the_world#6486: oh really? interesting
AI_WAIFU#2844: I rest my case
Daj#7482: imagine doing something with physical hardware lmao
3dprint_the_world#6486: what does it contradict
Daj#7482: Rapidly falling prices of nuclear, promising designs that were abandoned just as they became usable, waste is easy to manage and several locations (like that one in Nevada) were built and ready to operate but were shut down due to NIMBY (Think about it: The waste is still _somewhere_, mostly in nuclear power plants' swimming pools, and we're somehow still all totally fine. Fukushima killed _negative_ people)
Daj#7482: Even if it was slightly more expensive than fossil (which I doubt, because the cost of fossil hasn't been falling for a long time due to the fixed costs of fuel, which is lower per energy unit in nuclear), at least it would have spared us the whole climate change thing
Daj#7482: And the actually horrific number of people that die from fossil-fuel related smog and the like
3dprint_the_world#6486: rapidly falling prices of nuclear? that's news to me. in the past 10 years all reactor projects have had massive cost overruns
3dprint_the_world#6486: maybe we're looking at different sources
3dprint_the_world#6486: I wouldn't really trust any cost estimates from the nuclear industry; they do a lot of *ahem* creative budgeting
3dprint_the_world#6486: yeah sure but that's a different argument though. I'm sure once you actually take into account the cost of externalities, fossil fuel winds up being way more expensive than any other energy source
3dprint_the_world#6486: so maybe the root cause of the great stagnation was not implementing a carbon tax
Daj#7482: "if tech had continued without the Green religious regulation"
Daj#7482: Which could itself be wrong
Daj#7482: It's just the meager sources I know
Daj#7482: Well your argument was nuclear is too expensive with externalities taxed, so same should apply to fossil
gwern#1782: (implementing a carbon tax might make nuclear relatively cheaper, but it wouldn't make it absolutely cheaper or continue the exponential increase in energy usage per capita (which name I forget) leading to continued exponential increases in standard of living. unless you have some fairly exotic dynamics in mind)
3dprint_the_world#6486: my argument was that nuclear seemed cheap *because* externalities *weren't* taken into account 😜
Daj#7482: ah
Daj#7482: I really don't have a strong opinion here lol
3dprint_the_world#6486: but still, once you take a fair account of all energy sources, seems to me fossil fuels would wind up being the most expensive
3dprint_the_world#6486: @gwern precisely
3dprint_the_world#6486: that's exactly it
3dprint_the_world#6486: we only had anomalous levels of high growth over the past century due to pushing costs on to future generations
3dprint_the_world#6486: if we had carbon tax, maybe we'd be more green, but we'd be way poorer, and no one wants that
3dprint_the_world#6486: I'm not doing daryanenergyblog's argument justice though, he really goes deep into nuclear reactor designs and breaks everything down, it's quite detailed
3dprint_the_world#6486: (it's not just about externalities)
Daj#7482: It's on the reading pile heh
JC#3653: On the topic of nuclear energy, I'm kinda excited to see the results of the ITER experiment. Nuclear fusion could be the way to go.
3dprint_the_world#6486: I'm excited about fusion too. But there are a lot of major problems to overcome though. We're unlikely to see it happen before 2060
3dprint_the_world#6486: But here's hoping for fusion-powered AI-enabled fully automated luxury gay space communism
3dprint_the_world#6486: in 2100
JC#3653: kinda doubt humanity will survive that long tbh
bmk#1476: Immortal transhumanist fusion powered safe AGI enabled fully automated luxury gay space communism
3dprint_the_world#6486: y so pessimistic
JC#3653: because global warming wont stop even if every country went net zero emissions by tomorrow.
bmk#1476: This is the optimism room, all pessimistic predictions should come with a plan to solve the problem
bmk#1476: We ain't doomers
3dprint_the_world#6486: you have a point JC, but I think humanity will survive, just with a lot of suffering. We don't deserve to get out of it that easy anyway.
bmk#1476: Yeah i don't think we'll go extinct because of global warming, at least not before it becomes irrelevant due to AGI
3dprint_the_world#6486: but it's also possible AGW will prevent AGI
bmk#1476: (irrelevant either because we create safe AGI to solve the problem, or unsafe AGI that turns us all into paperclips)
JC#3653: make AGI solve global warming xD
3dprint_the_world#6486: may be the only solution
AI_WAIFU#2844: Global warming is a non-issue, just spray some salt water in the air, dump some iron in the ocean, and you're done. https://en.wikipedia.org/wiki/Marine_cloud_brightening
https://en.wikipedia.org/wiki/Iron_fertilization
bmk#1476: "just"
i mean, looking at how utterly immediate and predictable covid has been, and how horrible the response has been, getting the level of funding and coordination necessary to get that done sounds impossible
JC#3653: reminds me of the futurama solution
JC#3653: just drop a cube of ice in the ocean
AI_WAIFU#2844: You make a very strong point.
3dprint_the_world#6486: covid-19 is a non-issue, just wear masks, test people, and you're done.
3dprint_the_world#6486: ...oops
AI_WAIFU#2844: The primary advantage of AGI will have nothing to do with its intelligence, and everything to do with it being able to act unilaterally in a way that's not completely retarded.
3dprint_the_world#6486: well yeah but you also need intelligence for that
bmk#1476: well, if it's sufficiently intelligent it will know exactly what to say to make everyone do what they need to
bmk#1476: something that *certain human agencies* have not done a very good job at
AI_WAIFU#2844: this assumes that it's benevolent enough to not dissassemble us.
bmk#1476: well, that's alignment
3dprint_the_world#6486: it just occurred to me that Jacinda Ardern may be a benevolent AI
bmk#1476: *ahem* a certain organization whose name is both an anagram and syntactically similar word of HOW
JC#3653: problem is that people can be easily swayed to be against AIs.
bmk#1476: only for an insufficiently advanced AI
3dprint_the_world#6486: they can also be easily swayed to be for AIs
AI_WAIFU#2844: just slap an anime girl avatar on it and everyone will go along with it.
3dprint_the_world#6486: the argument of stupidity/gullibility cuts both ways
bmk#1476: ~~the AI would pick a.. less obscure waifu~~
AI_WAIFU#2844: I for one welcome Hatsune Miku as our overlord.
3dprint_the_world#6486: this but unironically
AI_WAIFU#2844: Sneak peek of the singularity: https://www.youtube.com/watch?v=O17f3lB7BFY
bmk#1476: peanuts
bmk#1476: https://www.youtube.com/watch?v=3iCSdZzsARg
3dprint_the_world#6486: oh god
bmk#1476: only the greatest
3dprint_the_world#6486: I changed my mind, furries are better
JC#3653: what discord server is this again?
bmk#1476: i would make a joke about catgirl research but i have been warned not to
3dprint_the_world#6486: furries and weaboos
AI_WAIFU#2844: the one that believes in the orthogonality thesis and the scaling hypothesis
bmk#1476: are we the only organization who believes in both?
bmk#1476: sama doesn't believe in orthogonality and i don't think MIRI does scaling
AI_WAIFU#2844: We might be the only org that takes both seriously.
3dprint_the_world#6486: anyway going back to AGW, the solution is even simpler, just scale back on our wasteful habits, which can be done in a way that has negligible impact on most people's quality of life. But it requires coordination.
3dprint_the_world#6486: It's 'just' a big coordination problem.
3dprint_the_world#6486: i.e. the hardest kind of problem for humanity to solve
bmk#1476: this is probably strictly harder than developing AGI lol
AI_WAIFU#2844: I think we went over this
3dprint_the_world#6486: yes, probably.
bmk#1476: has anyone proven a theorem like arrow's but for coordination problems in general?
StellaAthena#3530: What do you have in mind?
JC#3653: kinda impossible to coordinate anything when the current administrator of the EPA rejects climate change.
bmk#1476: something that formalizes the feeling that "oh god it's literally impossible to get people to agree on anything"
bmk#1476: arrow's actually kinda fits the bill
AI_WAIFU#2844: the closest I saw was: https://www.lesswrong.com/posts/z2YwmzuT7nWx62Kfh/cooperating-with-agents-with-different-ideas-of-fairness
3dprint_the_world#6486: interesting
JC#3653: ah game theory
StellaAthena#3530: No incentive system for a group of rational agents can satisfy all of the following:
1. Income equals outflow
2. The system has a Nash equilibrium
3. The system is Pareto efficient
AI_WAIFU#2844: You can't get to the pareto frontier if people disagree, and when you have a bunch of people you're in deep shit.
bmk#1476: what does #1 mean
bmk#1476: and doesn't every game have a nash equilibrium
StellaAthena#3530: No
StellaAthena#3530: Sorry, I dropped the context. Think a town of people, each person puts in work and gets value back. Alternatively, a taxation and benefits system.
#1 says that all the collected resources are distributed.
bmk#1476: ah, ok
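(The three conditions Stella lists can be watched colliding in a toy two-player public-goods game — a minimal sketch with made-up payoff numbers, not a general proof:)

```python
from itertools import product

MULTIPLIER = 1.5  # pooled contributions are scaled up, then split evenly

def payoff(own, other):
    # Each player has an endowment of 1 and contributes 0 or 1.
    # Income equals outflow: the entire pool is redistributed.
    pool = MULTIPLIER * (own + other)
    return (1 - own) + pool / 2

def is_nash(a, b):
    # Neither player gains by unilaterally changing their contribution.
    return (all(payoff(a, b) >= payoff(x, b) for x in (0, 1))
            and all(payoff(b, a) >= payoff(x, a) for x in (0, 1)))

profiles = {(a, b): (payoff(a, b), payoff(b, a))
            for a, b in product((0, 1), repeat=2)}
nash = [p for p in profiles if is_nash(*p)]
```

The unique Nash equilibrium is mutual free-riding at `(0, 0)` with payoffs `(1.0, 1.0)`, even though mutual contribution at `(1, 1)` pays both players `1.5` — so the equilibrium fails Pareto efficiency.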
StellaAthena#3530: Also, related to Arrow’s Theorem: https://en.m.wikipedia.org/wiki/Gibbard%E2%80%93Satterthwaite_theorem
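(A toy instance of the manipulability Gibbard–Satterthwaite guarantees — plurality voting with alphabetical tie-breaking over three illustrative ballots; the candidates and profile are made up for the example:)

```python
from collections import Counter

def plurality_winner(ballots):
    # Plurality rule: each ballot's top choice gets one vote;
    # ties are broken alphabetically.
    tally = Counter(ballot[0] for ballot in ballots)
    top = max(tally.values())
    return min(c for c, votes in tally.items() if votes == top)

# Sincere preferences (best first); voter 3 ranks A dead last.
sincere = [("A", "B", "C"), ("B", "C", "A"), ("C", "B", "A")]
winner = plurality_winner(sincere)  # three-way tie, so "A" wins

# Voter 3 misreports, ranking B first instead of C:
strategic = [("A", "B", "C"), ("B", "C", "A"), ("B", "C", "A")]
manipulated = plurality_winner(strategic)  # "B", which voter 3 prefers to "A"
```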
3dprint_the_world#6486: I'd like to see more game theory research on *irrational* agents
StellaAthena#3530: @3dprint_the_world check out Richard Thaler, Daniel Kahneman, and Amos Tversky. They won the Riksbank Prize in Econ in Memory of Alfred Nobel for basically saying “hey, what if instead of assuming people optimize their financial decisions we actually study the decision-making habits of real humans in grocery stores to see what they choose to buy”
3dprint_the_world#6486: I will, thanks. I love the D.
3dprint_the_world#6486: "Thinking fast and slow" was great
StellaAthena#3530: Not sure what your sexual orientation has to do with this, but sure 😛