dopa#3178: rotate it 🙂
dopa#3178: 3 rows and n columns
dopa#3178: section 8 test, is missing words
dopa#3178: it just Finally, we
dopa#3178: "However, this approach seems essential to the future of these models and AI more broadly, and more research is needed."
dopa#3178: this caught me, I think it is too many 'and's?
dopa#3178: sorry, I just say as it is, not my intent to be mean or anything
dopa#3178: btw, this is my first time reading an unpublished paper lol
dopa#3178: I have a bit of an issue with the last couple paragraphs, you become what you read, there is truth in this
dopa#3178: while we form memories, the connection to how the memory was formed is lost
dopa#3178: it is kind of hack of human memory
dopa#3178: I need to find article for this
dopa#3178: couple more things about formatting:
dopa#3178: 1. abstract is formatted in a narrower frame than the main text
2. spacing between sections, subsections, paragraphs is broken
bmk#1476: > abstract is formatted in a narrower frame than the main text
wdym
dopa#3178: width of abstract text vs main text
StellaAthena#3530: This is literally how every paper is
dopa#3178: maybe it is a new style
bmk#1476: has it ever not been like this
dopa#3178: did I just notice it
dopa#3178: lol
dopa#3178: most papers I read have the abstract in bold text and the same width as the main text
dopa#3178: but those papers are old lol
dopa#3178: I am old dammit
dopa#3178: when you export figures into png what is dpi setting ? (this my curiosity question)
StellaAthena#3530: Here are three papers in different areas, publications, and spread out across the past 10 years
https://arxiv.org/abs/2005.14165
https://arxiv.org/abs/1811.02017
https://arxiv.org/abs/1101.2613
StellaAthena#3530: They're all like that
StellaAthena#3530: I don't remember papers *not* like that.
dopa#3178: these are single-column formatted papers
dopa#3178: you can have a paper where the abstract is single column but the text is double column, if I am not mistaken
bmk#1476: anyways tl;dr the conference forces us to do it like this
dopa#3178: just please double-check the width of the abstract against other papers
dopa#3178: I am not 100% sure myself
bmk#1476: this is the conference forcing us
StellaAthena#3530: This is part of the `.sty` file provided by the conference
bmk#1476: let's not spend too much time on this, it's bikeshedding
Isaac McHorse#2007: WHY ARE YOU BEING DISTRACTED? YOU CAN'T GET ANYTHING DONE LIKE THAT.
dopa#3178: got it
dopa#3178: can you export pngs in higher dpi ?
dopa#3178: they look pixelated, at least on my screen; this is me being picky
StellaAthena#3530: Here's an ICML 2 column paper http://proceedings.mlr.press/v97/abbati19a/abbati19a.pdf
StellaAthena#3530: Which ones?
StellaAthena#3530: Fig 1 is being redone with larger font already
dopa#3178: the png figures in the paper you linked are substantially higher quality
dopa#3178: nope hold on, this is my screen
dopa#3178: there is difference between figure 3 and fig. 4
dopa#3178: fig. 5-9 are lower quality
dopa#3178: at least on my screen
dopa#3178: one more thing, spacing in the paper you linked is perfect between sections, subsections, etc
dopa#3178: to see quality of pic, just zoom to 400%
dopa#3178: @bmk I understand this, I do not mean to distract, I just say what stands out to me, what gets my attention as being off
chilli#5665: Lol, well, the "proper" way to do things is to export to pdf
dopa#3178: @StellaAthena the paper you linked has wider margins, that's why I had an issue with the abstract being too narrow 🙂
dopa#3178: this might be worth tweaking:
```
% Page Settings
\usepackage[paperheight=11in,paperwidth=8.5in]{geometry}
\geometry{top=0.7in, left=1in, right=1in, bottom=1in}
```
dopa#3178: @Isaac McHorse are you human ?
Technobird22#2055: https://cdn.discordapp.com/attachments/729741769738158194/794006961238704148/unknown.png
Technobird22#2055: no they aren't
Technobird22#2055: I think a key word triggers it
Technobird22#2055: horse staple
Technobird22#2055: bike stable
Technobird22#2055: shedding bike
Technobird22#2055: bikeshedding
Isaac McHorse#2007: OH F*$K! OH HELL NO! OH HELL NO! STOP IT!
Technobird22#2055: ah yes
dopa#3178: should have thought of that!
dopa#3178: so I used cleverbot as a bot on twitch once, but half of the message replies were people
dopa#3178: it was fun
zphang#7252: +1 plots should be exported to PDF if possible
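Since figure export came up a few times above (PNG quality, dpi settings, PDF vs PNG), here is a minimal sketch of the workflow chilli and zphang are suggesting, assuming matplotlib; the filenames and figure contents are made up for illustration:

```python
# Minimal sketch (assuming matplotlib): save a figure as vector PDF,
# with a high-DPI PNG fallback. matplotlib's savefig defaults to a
# fairly low dpi, which is why exported PNGs can look pixelated.
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(4, 3))
ax.plot([0, 1, 2], [0, 1, 4], label="y = x^2")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()

fig.savefig("figure.pdf")           # vector: stays crisp at 400% zoom
fig.savefig("figure.png", dpi=300)  # raster fallback at explicit high dpi
```

A PDF figure embedded in a LaTeX document sidesteps the dpi question entirely, since vector graphics rescale without pixelation.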
dopa#3178: @Technobird22 since you like my story about bots, me thinks you will like this too:
for a few months in bars/clubs, I pretended to be mute (a person who can't speak) and communicated with a pen and paper; what people wrote, no one has ever told me verbally in life. I am not fully sure why people felt compelled to tell me their feelings or desires in writing.
3dprint_the_world#6486: every journal has their own style guidelines; some fields have very particular ones. A few journals will actually revise and reformat your paper (including figures) upon acceptance
dopa#3178: yep, that makes sense to some extent
dopa#3178: it is just perfectionist disorder, strong with me today
chirp#4545: so in the ilya statement that @ethan caballero linked, i think he brought up more than just GPT-4 / multimodality. he dropped a pretty big hint that they’ve gotten human feedback working much better:
> GPT-3 and systems like it passively absorb information. They take the data at face value and internalize its correlations, which is a problem any time the training dataset contains examples of behaviors that we don’t want our models to imitate. When using reinforcement learning from human feedback, we compel the language model to exhibit a great variety of behaviors, and human judges provide feedback on whether a given behavior was desirable or undesirable. We’ve found that language models can learn very quickly from such feedback, allowing us to shape their behaviors quickly and precisely using a relatively modest number of human interactions.
i really wonder what this could mean. the way he put it, it could be really amazing - if you can get GPT-3 to do what you want with just a little feedback 🤯 ...but that’s way beyond what they showed in their human feedback paper from september, so maybe i’m just misinterpreting.
i guess we’ll need to wait and see. i know from their blog post they have a research update coming out in january
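The mechanism the quote describes (human judges mark behaviors as desirable or undesirable, and a model learns from a modest number of comparisons) can be illustrated with a toy Bradley-Terry preference fit. Everything below, the behaviors, the judgments, and the numbers, is invented for illustration and is not OpenAI's actual method:

```python
# Toy sketch of learning a reward model from pairwise human judgments,
# in the spirit of RLHF: judges prefer one behavior over another, and
# we fit a scalar score per behavior so preferred ones score higher.
import math

behaviors = ["helpful", "evasive", "rude"]
# hypothetical (preferred, rejected) pairs from human judges
judgments = [("helpful", "evasive"), ("helpful", "rude"),
             ("evasive", "rude"), ("helpful", "rude")]

scores = {b: 0.0 for b in behaviors}
lr = 0.5
for _ in range(200):  # gradient ascent on the Bradley-Terry log-likelihood
    for win, lose in judgments:
        # P(win preferred over lose) = sigmoid(score_win - score_lose)
        p = 1.0 / (1.0 + math.exp(scores[lose] - scores[win]))
        scores[win] += lr * (1.0 - p)
        scores[lose] -= lr * (1.0 - p)

ranked = sorted(behaviors, key=scores.get, reverse=True)
print(ranked)  # behaviors ordered by learned reward
```

In the real setting the scalar score comes from a learned reward model over text, and that reward then drives an RL fine-tuning step; this sketch only shows why a handful of comparisons can pin down a preference ordering quickly.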
chirp#4545: i remember Sam Altman saying at the SSC meetup that they were trying to get language models to work with RL... maybe they finally got it working well?
zphang#7252: wasn't that the GPT-3+human feedback paper?
chirp#4545: That was related, but based on what ilya is saying it sounds like they got it to be more generally applicable or more sample efficient, like by a lot
chirp#4545: Unless I’m just reading too much into it
chirp#4545: Quotes that really jumped out to me: “learn very quickly”... “modest number of human interactions”
kindiana#1016: wild prediction maybe they somehow made data augmentation for text
thenightocean#6100: where is that quote from?
zphang#7252: https://blog.deeplearning.ai/blog/the-batch-new-year-wishes-from-fei-fei-li-harry-shum-ayanna-howard-ilya-sutskever-matthew-mattina
gwern#1782: did the earlier preference gpt-3 paper use the full blown 175b? I never got around to reading it in detail
Aran Komatsuzaki#5714: that was up to 10B iirc
gwern#1782: so going to 175B should still offer a big boost, and the 'instruction' GPT-3 might be even better
triggerhappygandi#0001: It's funny how they never even released the 13B model as well
gwern#1782: why bother? T5 and others are released
triggerhappygandi#0001: Or the one just smaller than that
triggerhappygandi#0001: It's somewhat bigger than T5
triggerhappygandi#0001: And who knows how much better it is, without being hands on
gwern#1782: 'somewhat' doesn't really impress anyone given the scaling, and I think it's trained worse
triggerhappygandi#0001: It too seemed to not be converging at the end of the training
gwern#1782: a few billion params isn't cool. you know what's cool? 100b+ parameters
triggerhappygandi#0001: Yeah yeah but still. Give people _something_
triggerhappygandi#0001: T-NLG isn't open source either
triggerhappygandi#0001: So T5 is the biggest you can get without the api
gwern#1782: why? you see anyone criticizing it?
gwern#1782: heaven forfend that someone not release a model because it might be *abused*! we'll make fun of them for years to come! but if they just silently withhold everything, well then that's alright. because we don't care about consequences, we only care about how stuff looks and looking Serious and Respectable
triggerhappygandi#0001: Hmm. It really is deliberately stunted compared to its bigger brother https://cdn.discordapp.com/attachments/729741769738158194/794211248396828702/1Q3fJcTssMqFN1OcGg60rWQ.png
zphang#7252: DeBERTa is 1.5B, so it's sparring evenly with T5 models an order of magnitude larger
chirp#4545: Took some guesses on all the recent OpenAI news, and what it might mean: https://www.notion.so/ericyu3/Where-is-OpenAI-headed-8603376cce9147f6b26d4f2c70180371
chirp#4545: Most plausible story to me at the moment: OpenAI got way more traction on their API than they expected, and as a result they’re redirecting their research towards improving it. That leaves a lot less room for Dario to set the research direction. OpenAI isn’t necessarily abandoning their AGI ambitions, but they might be shifting their research to be a lot more customer-driven. And Dario doesn’t have any special touch when it comes to OpenAI’s new customers.
chirp#4545: If this is true, I’m pretty optimistic about OpenAI - it’s a far cry from them running out of money or anything like that. And I think it’s consistent with what Sam Altman tweeted less than a month ago, after he knew that Dario was leaving: https://twitter.com/sama/status/1337462589022932999?s=21
chilli#5665: I think I mostly agree
chilli#5665: 😛
chilli#5665: I think the most important question there is
chilli#5665: "why did they feel like they couldn't do their research under OpenAI's umbrella?"
mgostIH#0245: Money is temporary, AGI is forever
andyljones#7746: that prodded me into checking jack clark's replies, and this seems like a hint
https://twitter.com/jackclarkSF/status/1344052929074847750
chilli#5665: perhaps it's timnit 2.0
ethan caballero#6044: https://discord.com/channels/729741769192767510/729741769738158194/793604997426315304
chilli#5665: haha
triggerhappygandi#0001: Is it?
triggerhappygandi#0001: How so
zphang#7252: sorry, what do you mean
CRG#8707: I think he means on the recent progress on SuperGLUE https://twitter.com/sleepinyourhat/status/1344382025986437122
gwern#1782: accusing them of holding back improvements?
triggerhappygandi#0001: @zphang how is it managing to outperform a 7x larger model? I wanted to know what techniques it used for that
CRG#8707: Yeah, looks like the T5 team didn't like losing the first place.
Sphinx#2092: I doubt it
Sphinx#2092: Zirui was doing other stuff before this
CRG#8707: More of an early release then?
Sphinx#2092: What was the early release?
Sphinx#2092: Just the entry "T5"?
zphang#7252: I would guess that they had results for T5+Meena but were waiting for some date to publish along with their paper, then DeBERTa results came out, so they responded with their results
Sphinx#2092: Oh, sure.
Sphinx#2092: That's probably the case.
Sphinx#2092: I don't think Zirui has released how he did it.
CRG#8707: The original paper has good ablations: https://openreview.net/forum?id=XPZIaotutsD
rivalset#4984: Yes, that is what happened. I was involved in that, but can't say more.
Sphinx#2092: Very mysterious.
gwern#1782: (I don't think it's mysterious so much as lulzy)
triggerhappygandi#0001: Idk when it will be 12am for you but it is for me. Happy new year to everyone.
triggerhappygandi#0001: May this year bring us exaflop/s compute.
rivalset#4984: happy new year to you!
rivalset#4984: I feel kind of bad for the deberta first author. https://twitter.com/Hepeng2012/status/1344154469743747073
Sphinx#2092: Damn, to shreds you say
3dprint_the_world#6486: I'm already in 2021, and I feel great. For those of you still stuck in 2020: sucks to be you.
triggerhappygandi#0001: Lol
gwern#1782: you come at the king you best not miss
zphang#7252: even colin was calling bs lol
Daj#7482: Happy New Year everyone! Thanks for making one of the shittiest years into one of the most exciting years, here's to many more ahead 🥂
nz#9710: Happy new year!
spirit-from-germany#1488: Happy new year! 🙂
thenightocean#6100: Happy new year! Hope we celebrate the next one with our own GPT-4 who will be making some next level memes (in German off course 😛 )
dopa#3178: https://tenor.com/view/new-years-eve-happy2021-trash-2020-happy-gif-19680726
JC#3653: Happy new year!
JC#3653: be sure to check Public Domain Day tomorrow 🙂
ethan caballero#6044: The story intensifies!!
https://twitter.com/ch402/status/1344798317364932608
Daj#7482: [cynicism]It seems like all the good people are leaving OpenAI [optimism]which could mean they'll be able to do even better work[/optimism][/cynicism]
bmk#1476: oh no what does this mean for my plans to go apply for a position at oa at some point
bmk#1476: i guess i need to lay back and see where all the good oa people go so i can aim to go there instead
Daj#7482: All the good people will end up working for EleutherAI, of course
bmk#1476: :bigbrain:
nz#9710: :chad:
zphang#7252: All the OAI people leaving to start a rival discord
bmk#1476: We should unironically start reaching out to them to see if they'd be interested in eleuther
gwern#1782: oh ho. olah is leaving too
gwern#1782: but this sounds still more like it's non-scaling startup if it's dario, clark, and olah but not sutskever 😦
gwern#1782: maybe that's a good thing? if OA is still all in on scaling per the sutskever quote
3dprint_the_world#6486: my honest opinion is that this is a great idea.
bmk#1476: can someone start drafting an email
3dprint_the_world#6486: yeah this was my hunch initially too. without really having any knowledge about what goes on in OAI, if you're interested in scaling it makes sense to stay where the resources are.
3dprint_the_world#6486: especially since OAI has not shown any outward indication it's not interested in scaling.
3dprint_the_world#6486: the key thing is to think about incentives. what would a person leaving OAI potentially find useful in collaborating with (or joining) EleutherAI
bmk#1476: well, we have resources, but probably not as many as these big names would be able to raise on a whim, and manpower, but all of us are significantly less skilled or accomplished than these big names
bmk#1476: so, idk, nothing?
cfoster0#4356: They'd have a captive and willing audience
bmk#1476: we'd all become their grad students lol
cfoster0#4356: Don't discount how much people love talking about their pet ideas
3dprint_the_world#6486: yes, precisely.
bmk#1476: sure, but these people are *famous*, all they have to do is say they're giving a seminar and people will line up to participate
cfoster0#4356: Remember how we packed Kaplan's talk?
bmk#1476: lol
cfoster0#4356: Who wouldn't want that?
bmk#1476: wait, how much *did* we pack that talk, anyways
cfoster0#4356: That shit ain't free
zphang#7252: if someone were so inclined, they could probably organize a "Scaling" speaker series
3dprint_the_world#6486: also having an audience that can provide meaningful interaction is more valuable than just an audience that passively listens and maybe even doesn't understand what you're saying
bmk#1476: fair
bmk#1476: ok so who's good at being persuasive
3dprint_the_world#6486: @Daj
StellaAthena#3530: Happy New Years
ethan caballero#6044: another hint:
https://twitter.com/ch402/status/1344798586081447937
bmk#1476: excite
Daj#7482: https://i.ytimg.com/vi/DSCX20tZ6_Q/hqdefault.jpg
bmk#1476: can someone edit this very early draft into a coherent email
```
Subject: EleutherAI Speaker Series

Hey $name!

I was wondering if you would be interested in speaking about your work on $area for our speaker series on model scaling and alignment.

EleutherAI is a loose research group that's mostly been working on language models, scaling, and AI safety/alignment related stuff, so as you can imagine we've been following the work that has been done at OpenAI fairly closely. Our discord has over 1000 members, and certainly a lot of us would be interested in hearing about your latest work.
```
Daj#7482: I genuinely think cold emailing people this well connected is silly
AI_WAIFU#2844: We got to work our way up
AI_WAIFU#2844: Just like This week in ML did
Daj#7482: Unless someone knows them personally at most through one hop on the social graph, this is silly imo
bmk#1476: I mean, you know Jack clark
Daj#7482: """Know"""
bmk#1476: Eh it's good enough
bmk#1476: It's basically know
AI_WAIFU#2844: I should become a Vtuber and start a podcast.
Daj#7482: I can send him a friendly inquiry about what he's up to, anything more than that is unnatural and weird
Daj#7482: Eleuther twitch simp channel
bmk#1476: I subscribe to the "meh it can't hurt" camp
ethan caballero#6044: If you have something they could possibly want, then cold-emailing definitely makes sense.
Daj#7482: I don't lol
bmk#1476: This is both in relation to cold emailing and the twitch simp channel
Daj#7482: I have a crippling phobia of cold contacting people haha
Daj#7482: I don't see the value proposition from us being much greater than an average grad school club
Daj#7482: Maybe
Sphinx#2092: Worst that can happen is they ignore you.
Sphinx#2092: Not a bad lower bound.
Daj#7482: I actually think that isn't true
Daj#7482: But I have a weird social paranoia
bmk#1476: Well then what we need to do is publish enough papers so that we're a Serious Research Thingy
Daj#7482: Not anxiety, paranoia lol
Sphinx#2092: At least researchers i know.
Daj#7482: Yes this is the obvious path
Sphinx#2092: Realistically anyone who does worse is likely someone you don't want to work with anyways
AI_WAIFU#2844: If we pump out a solid GPT-3 clone, we're golden.
Daj#7482: This
bmk#1476: Ok time to get to work in the deepspeed mines
AI_WAIFU#2844: I'll go back to the TPU mines.
AI_WAIFU#2844: Also @Daj I think we can do 200B if we buff up our server to like 64GBs of ram.
bmk#1476: Yeah can pls has moar ram
bmk#1476: (On the other server)
AI_WAIFU#2844: Although why TF needs 64GBs of ram I haven't a clue.
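For context on these memory numbers, a rough back-of-the-envelope estimate of parameter count and raw weight memory for a decoder-only transformer. The layer count and width below are hypothetical values picked to land near 200B parameters, not the project's actual config:

```python
# Back-of-the-envelope sketch: parameter count and weight memory for a
# decoder-only transformer. Hypothetical dimensions, for illustration only.
def transformer_params(n_layers, d_model, vocab=50257):
    per_layer = 12 * d_model ** 2   # attention (~4*d^2) + MLP (~8*d^2)
    embeddings = vocab * d_model    # token embedding matrix
    return n_layers * per_layer + embeddings

n_params = transformer_params(n_layers=80, d_model=14336)
print(f"~{n_params / 1e9:.0f}B parameters")
print(f"fp16 weights alone: ~{n_params * 2 / 1e9:.0f} GB")
print(f"fp32 weights alone: ~{n_params * 4 / 1e9:.0f} GB")
```

Even before optimizer state and activations, the weights alone at this scale run to hundreds of GB, which is why they have to be sharded across devices and why the host process itself can end up needing a lot of RAM.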
Daj#7482: Uh ok let me finish 2AM dinner and I'll beef up the server
rivalset#4984: Guys, I think those people are getting a lot of email. I don't think they are going to remember yours and dislike you forever.
AI_WAIFU#2844: No rush
StellaAthena#3530: @AI_WAIFU 200B for what?
AI_WAIFU#2844: Parameters
bmk#1476: I'm somewhat short on time for the next week but then after that I'll have a load of time on my hands
StellaAthena#3530: You think you can train 200B on our TPUs?
StellaAthena#3530: I feel like I’m missing something
AI_WAIFU#2844: I got 100B working at 12%
AI_WAIFU#2844: I tried going to 200B, then the server was kill because it wasn't chonk enough.
bmk#1476: @StellaAthena the ideal goal I've been pushing for is gpt3 on TPUs, 1T on TPUs
Daj#7482: Allegedly @Lucas Nestler (ClashLuke) and @XMaster96 's code is way more efficient because China is a wizard
bmk#1476: This isn't jannet
StellaAthena#3530: Hmmm
bmk#1476: This is just gptneo
bmk#1476: We haven't even tried jannet yet
bmk#1476: Is jannet really that much more efficient?
Daj#7482: Yea just saying they might have useful bits
bmk#1476: If so, we should totally switch
Daj#7482: Jan said it was as efficient as data parallel when fully model parallel
Daj#7482: Which would be nuts
bmk#1476: Woah
bmk#1476: We need to try that asap
StellaAthena#3530: No, seriously?
bmk#1476: Does it work with pure text?
nz#9710: Are you guys making a 1:1 clone or inplementing some of the improvements developed since publication?
3dprint_the_world#6486: strong disagree. I cold email people all the time, sometimes famous people. The trick to getting a response is:
- make sure you absolutely know what you're talking about, and have them realize you know what you're talking about.
- be interested in an area they are also interested in.
bmk#1476: Or is it video-only?
Daj#7482: Then I misunderstood Jan
3dprint_the_world#6486: otherwise, yeah, they'll just dump you in the spam bin
XMaster96#7538: nearly
bmk#1476: A 1:1 clone is impossible because OA never gave enough details
StellaAthena#3530: The latter. When we say “a GPT-3” we are speaking about the size (175B). The architecture is based on GPT-3, but not identical
3dprint_the_world#6486: you don't need to 'know someone who knows someone', in fact for a lot of them that's actually a recipe for going into the spam bin.
nz#9710: I see. May I ask which attention algorithm you guys plan to use?
Daj#7482: Yea you're probably right. This is an area I know my social strategies are unusually weird. Don't let me block this, I will contact them if you pressure me to
3dprint_the_world#6486: people like making new connections. They just don't like wasting their time.
StellaAthena#3530: @nz our code is on GitHub, you can read it yourself
Daj#7482: I'm too tired and distracted to make any serious arguments lol tell me in the morning what we decide on
StellaAthena#3530: @nz our GitHub repo can be found here: https://github.com/EleutherAI
GPT-Neo is the TPU implementation
GPT-NeoX is the GPU implementation
nz#9710: Yea, I just saw it, thanks!
XMaster96#7538: we got good scaling results with it, but the outer architecture is a bit different because of images, and axial attention.
bmk#1476: Ah
bmk#1476: That explains a lot
bmk#1476: So we can't just switch
XMaster96#7538: the model still has language support, because my goal is to have a model that can do both, video and language.
AI_WAIFU#2844: How big have you guys scaled this thing?
3dprint_the_world#6486: that said, I can't really think of any good reason *why* they would be interested in EleutherAI right now. Giving a talk doesn't sound like a strong enough reason to me. Unless they thought EleutherAI's work could offer them interesting discussion paths.
Daj#7482: Eleuther is very much a place by up and coming people for up and coming people imo
Daj#7482: Like SL4 but less cool
nz#9710: SL4?
AI_WAIFU#2844: http://sl4.org/
Daj#7482: Shock Level 4, it was an early transhumanist mailing list where lots of early AI alignment talk started
nz#9710: Thank you.
bmk#1476: For y'all who've been around since SL4: is eleuther *really* like the next SL4
Daj#7482: My point is the place doesn't matter as much as the people
Daj#7482: SL4 was great because Eliezer, Bostrom and co came out of it
Daj#7482: Not because SL4 did anything noteworthy
bmk#1476: Are the people here in any way like the people who were around SL4
AI_WAIFU#2844: > For y'all who've been around since SL4
I don't know if there's anyone from SL4 here.
bmk#1476: Well, i know gwern was i guess
Daj#7482: I feel like I would have hung out on SL4 and gotten into fights with Eliezer lol
bmk#1476: And i thought you mentioned you were too
AI_WAIFU#2844: I wish lol. I just read the archives a long time ago.
bmk#1476: Ah lol
AI_WAIFU#2844: SL4 is *old*
3dprint_the_world#6486: tbf, the Eliezer of 2020 would have hung around SL4 and gotten into fights with Eliezer
Daj#7482: accurate lol
Daj#7482: It's not yet clear if any Eliezer-level people will emerge from here
Daj#7482: But it's possible, a few people here really really impress me
AI_WAIFU#2844: I kinda doubt it honestly
Daj#7482: Prior is unlikely, but that's obvious
bmk#1476: In particular, you?
Daj#7482: Obviously
Daj#7482: I'm so impressed with myself
Daj#7482: That's a totally normal thing to think
Daj#7482: lol
Daj#7482: I'm not even in the top 3 smartest people here in my ranking
Daj#7482: maybe top 10
Daj#7482: It was a sad day when I realized I really wasn't _that_ smart but c'est la vie
Sphinx#2092: Ehh, it's more about hard work
StellaAthena#3530: @bmk you had said the paper was on the website, but the url you shared didn’t load for me
bmk#1476: Oh i just meant it would be there
StellaAthena#3530: Ahhh
bmk#1476: I can put it up if you want
StellaAthena#3530: What’s our plan for advertisement?
StellaAthena#3530: Twitter tomorrow morning?
bmk#1476: Honestly, given the weakness of correlation between how smart someone seems when talking and how smart they actually are, I don't know how trustable my internal rankings are
Daj#7482: Very untrustworthy
Daj#7482: I can almost guarantee I'm dumber than you think I am
Daj#7482: because i do werds gud
Sid#2121: i'm going to tattoo the link on a pig and set it loose
ethan caballero#6044: Maybe also do a blog post too simultaneously OpenAI style.
Sid#2121: i am pro this, but we don't really have a blog per se
bmk#1476: we could always wait until jan 5, which is when our paper appears on arxiv, and do the thing then
Sid#2121: i like blogs as release
bmk#1476: i can post it on my blog
Sid#2121: i want an eleuther blog
bmk#1476: fork my website
nz#9710: this + HN looks solid to me 👍
Daj#7482: post it to usenet
Sid#2121: hire a skywriting plane?
zphang#7252: that means we can't do ACL tho
bmk#1476: no, the rules only stipulate we can't post to *arxiv* after jan 1, no?
bmk#1476: and we're already kinda violating the social media rule regardless
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/794375308735152148/Screenshot_from_2021-01-01_02-23-28.png
bmk#1476: what's that thing called, where it's basically the german equivalent of a BBS?
3dprint_the_world#6486: broadcast it on CB radio
3dprint_the_world#6486: drop leaflets from drones
zphang#7252: We can't publicize it after jan1
The arXiv specific wording was that if we submit before jan2 and it only gets published on jan5, that's still fine
bmk#1476: ohh
3dprint_the_world#6486: hack air raid sirens to play .wav files
bmk#1476: heck
bmk#1476: there's no way i'm putting together a blog post overnight lol
cfoster0#4356: We just need some content for Twitter tbh
cfoster0#4356: If they want more we've got 38 pages for em
ethan caballero#6044: maybe get timnit or emily bender to quote tweet pile paper tweet to all their followers as an example of too much scaling to start a big twitter controversy that yields a bunch of free advertisement
zphang#7252: (there is a non-trivial chance that we get straight up canceled for the pile)
cfoster0#4356: Tbh I think the Pile paper engages with the kinds of issues they raise
nz#9710: Yea, in fact I think she (and others like her) would much rather have the Pile than the common crawl datasets.
3dprint_the_world#6486: why
Sid#2121: 'cause we cite her a few times lmao
nz#9710: Because a lot of her points have been about biases hidden in datasets, that are (at least based on my naive knowledge) sometimes hard to quantify. I gave the Pile's draft a read, and I may be mistaken, but I feel like it's overall likely to be less biased / the bias is better quantified.
zphang#7252: IMO we don't want too much publicity for the pile immediately
ethan caballero#6044: why not?
nz#9710: You guys definitely know *a lot* more than me, so I will accept your opinions.
zphang#7252: Let's just say that there is a very large surface area for criticism
bmk#1476: totally agree
cfoster0#4356: Yup
Sahl#0630: You don’t want people to pile on you :)
3dprint_the_world#6486: yeah, I reckon you want to gradually open it up to more and more people
3dprint_the_world#6486: not all at once
nz#9710: Which main ones? I can guess one of them is copyright, but what else?
3dprint_the_world#6486: everything
zphang#7252: Infinite amounts of bias in open web data
cfoster0#4356: It's not multilingual
cfoster0#4356: *yet*
3dprint_the_world#6486: statistical methodology
3dprint_the_world#6486: which is something almost no one can actually agree on
zphang#7252: I think we should tweet it out tomorrow, get some traction/flack for a bit, but hope that people are too hungover to really get up in arms about it
StellaAthena#3530: No, it doesn’t. It explicitly says you can’t advertise on social media
Sid#2121: You're all too harsh on yourselves
nz#9710: I wholeheartedly agree.
Sid#2121: Sure, there's things to criticize. We've all done something pretty cool, though.
zphang#7252: I think that there's a chance we'll get totally ignored, chance that people will find it cool, and a chance we get completely ripped apart
nz#9710: You're offering something new, not arguing to completely replace any and all previous datasets. Regardless of the raised criticisms, it's an incredibly valuable body of work.
cfoster0#4356: Honestly I'm super super impressed a group of randos who've never met pulled this off
Sid#2121: we should find the average of all the pile author's locations, and grab a drink there when the covid wears off
zphang#7252: IMO that's the most novel part of the work
Sid#2121: probably in the middle of the atlantic ocean tbh
3dprint_the_world#6486: somewhere in the atlantic ocean
ethan caballero#6044: NeurIPS 2021
3dprint_the_world#6486: so... ocean cruise?
Sid#2121: Atlantic Edition ™️
bmk#1476: aboard the diamond princess
3dprint_the_world#6486: actually if you take the true 3d mean, it would be somewhere in the Earth's mantle
bmk#1476: just think what the reaction will be when we replicate gpt3
zphang#7252: > NLP Just Had Its Avengers Moment
bmk#1476: lol
nz#9710: more like GPT-3: The Clone Wars
zphang#7252: anyway, we should make a googledoc for drafting the tweets
bmk#1476: now think what the result will be like when we make 1T and therefore briefly have the largest model
3dprint_the_world#6486: what are the remaining roadblocks for this? any way EleutherAI n00bs can help?
zphang#7252: in 12hrs the T5 team will flex on us
Sphinx#2092: T5 sends their regards.
cfoster0#4356: Wait did they go big?
Sid#2121: sure, we're actively developing the deepspeed codebase here https://github.com/EleutherAI/gpt-neox/
zphang#7252: no but they counter-flexed on microsoft in 12 hours on superglue
Sid#2121: it's very early stages so, if you know deepspeed, you can help
zphang#7252: not in size afaict
bmk#1476: so we have two parallel projects
bmk#1476: one is on TF and one is on DS
3dprint_the_world#6486: well that's an easy choice isn't it
Sphinx#2092: Jax obvs
bmk#1476: i've been trying to push for gpt3 on TF and 1T on DS
rivalset#4984: why not Julia?
bmk#1476: y no 1T
bmk#1476: 1T or bust
3dprint_the_world#6486: does Julia now have 'good' ML libs?
3dprint_the_world#6486: at the same level as e.g. pytorch or tf
Sid#2121: *there is a theoretical ideal size given a fixed amount of compute* is the only line i'm gonna repeat from now on
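The "ideal size for a fixed amount of compute" line Sid keeps repeating comes from the scaling-law literature. A sketch of the idea, using the power-law shape reported by Kaplan et al. (2020), N_opt ∝ C^0.73; the prefactor below is illustrative, not the paper's fitted constant:

```python
# Sketch of the compute-optimal model size idea: for a fixed compute
# budget C, there is a best parameter count N_opt, growing as a power
# law in C. Exponent shape follows Kaplan et al. (2020); the prefactor
# is an assumption for illustration only.
def optimal_params(compute_pfdays, exponent=0.73, prefactor=1.3e9):
    """Rough compute-optimal parameter count for compute in PF-days."""
    return prefactor * compute_pfdays ** exponent

for c in [1, 10, 100, 1000]:
    n = optimal_params(c)
    print(f"{c:>4} PF-days -> ~{n / 1e9:.1f}B params")
```

The practical takeaway is the one Sid is making: you pick the compute budget first, then read off the model size, rather than picking 1T parameters and hoping the compute appears.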
3dprint_the_world#6486: Flux seemed vastly underimplemented last time I checked (and I say this being a huge Julia fan)
bmk#1476: i'm saying we can get the amount of compute we need
cfoster0#4356: yesn't
bmk#1476: if we can figure out low bandwidth training
Sid#2121: let's just be content with gpt-3 first, lmao. We already have an agreement in place.
bmk#1476: but what happens if we figure out gpt3 on mtf
bmk#1476: won't it be kinda redundant then
Sid#2121: no?
rivalset#4984: I wasn't serious, but some people are saying good things about flux and being able to write cuda kernels in julia
bmk#1476: what're we gonna do with a second gpt3
StellaAthena#3530: Be awesome
nz#9710: it's the model, for free
nz#9710: that's enough isn't it
zphang#7252: call it GPT-E
zphang#7252: but like, the E is reversed
zphang#7252: or something
StellaAthena#3530: GPTEEEEEEEEE
3dprint_the_world#6486: Yeah the design of Flux definitely seems elegant and super easy to use.
But sadly it seems like the Julia ML community hasn't yet reached the required critical mass.
3dprint_the_world#6486: I mean, look at what happened to Torch (not PyTorch, Torch)
3dprint_the_world#6486: and they *had* critical mass
cfoster0#4356: Yeah exactly. They're building but slowly. They'll get a new AD system that can deal with higher order differentiation more efficiently in the next release, IIRC
rivalset#4984: I think a lot of the torch code is reused in pytorch and the two are very similar right?
3dprint_the_world#6486: very interesting.
Sid#2121: we are literally being provided with the GPUs to train it in < 3 months, even if we 'figure out' the mtf model we're never going to train it on tpus in anywhere near that time, especially when our pods are getting pre-empted 24/7. Besides, we have an agreement with the people providing us compute, and they're expecting a gpt-3 model @bmk
bmk#1476: okok |
nz#9710: Is there anywhere to read more about this or is this stuff non-public?
bmk#1476: well, no, i got a 512 pretty consistently for a few weeks
3dprint_the_world#6486: could go for Elon Musk versioning.
S
E
X
bmk#1476: non public
Sid#2121: WIP
3dprint_the_world#6486: Y
bmk#1476: semi public
cfoster0#4356: Yea Keno Fischer gave a cool talk about it at SIGPLAN this year
nz#9710: Can you guys say who the agreement's with or not
3dprint_the_world#6486: Yea I vaguely remember
zphang#7252: rushers get hazed by being made to code on mtf
nz#9710: It's totally fine if it's not -- I'm just curious eheh
Sid#2121: I would assume they wouldn't mind us namedropping, but we haven't actually asked yet lol. let me just check in, and i'll get back to you.
nz#9710: Thank you.
bmk#1476: i mean we can say who it *isn't*
bmk#1476: elon musk
bmk#1476: microsoft |
bmk#1476: reddit inc
3dprint_the_world#6486: phew
bmk#1476: zombocom
zphang#7252: I'm not ruling out Elon long term
bmk#1476: no, definitely not
Sahl#0630: GPTƎ
3dprint_the_world#6486: nah Elon has his AI project.
3dprint_the_world#6486: well, he has the project he believes is going to lead to AI.
zphang#7252: Elon sounds like he likes to juggle multiple moonshots
nz#9710: Which one, he pulled out of OpenAI if I'm not mistaken.
bmk#1476: neuralink?
3dprint_the_world#6486: Neuralink
bmk#1476: it's dumb tho
nz#9710: Neuralink is not really about AI tho
3dprint_the_world#6486: yea
bmk#1476: he's gonna realise that sooner or later
zphang#7252: oh I was thinking his self-driving cars
AI_WAIFU#2844: that's not AGI
3dprint_the_world#6486: oh yes it is.
3dprint_the_world#6486: Neuralink is 100% about AGI, watch his interviews |
nz#9710: I mean current research.
nz#9710: That's BCIs
3dprint_the_world#6486: that's like saying "SpaceX isn't about Mars, it's about making rockets."
nz#9710: Unless I missed it of course.
nz#9710: That's also true.
3dprint_the_world#6486: no that's also false; if you actually believe this you should do more research
3dprint_the_world#6486: and by research I mean five minutes of googling
bmk#1476: This depends on the definition of about
3dprint_the_world#6486: The goal of SpaceX is to get humans to Mars, Elon is highly explicit about this.
AI_WAIFU#2844: Yeah, but rockets can get you to mars. I don't see that nearly as much with BCI.
andyljones#7746: (will move this to offtopic)
nz#9710: agreed.
andyljones#7746: (whups, the colah stuff I meant)
nz#9710: oh ahaha
AI_WAIFU#2844: I think we need to have multiple #general or #off-topic
3dprint_the_world#6486: honestly I'm not that interested in continuing the conversation because:
1. It's irrelevant
2. I have nothing new to say that can't be found by 5 minutes of googling
3. Elon Musk is not going to fund EleutherAI
3dprint_the_world#6486: but you guys can continue |
nz#9710: I'm sorry, I don't want to damage the conversation, so we can agree to disagree 👍
nz#9710: And now I feel guilty of derailing the conversation. I think you guys were talking about issues with TPUs being pre-empted too often?
kindiana#1016: Fwiw I think we have a good chance of replicating gpt3+ if we can utilize a swarm of tpu-8s
AI_WAIFU#2844: (x) doubt
kindiana#1016: Just shawwn's quota gets 123 PF
bmk#1476: we can get a v3-256 reliably and a 512 occasionally if that's good enough
cfoster0#4356: You don't need to feel bad about this. Happens naturally
Sid#2121: we actually don't have any pre-emptible v3-8s in our quota iirc
Sid#2121: anyway, as always, dev time is the main bottleneck
Sid#2121: if someone can get it working we'll try it for sure
StellaAthena#3530: Indeed. While talking about things is a lot less work, the real thing one should do if they want to contribute is give it a try.
kindiana#1016: Preemptable 8s are much easier to get allegedly
kindiana#1016: Anyways I'm working on that hypothesis with swarm-jax
bmk#1476: 8s run on different hardware right
kindiana#1016: Yeah
bmk#1476: I bet that hardware is a lot cheaper because there's no interconnect
rivalset#4984: they are meant for serving not for training
bmk#1476: They can't tell us what to do
kindiana#1016: They spent all the interconnect money on cpu lol
bmk#1476: Why all that cpu anyways |
nz#9710: Aren't v4s supposed to be coming up this year?
bmk#1476: Lol but we don't get to touch em
kindiana#1016: It's hard to feed a tpu for serving I guess
nz#9710: I see.
StellaAthena#3530: Speaking of which, where are we up to @Sid?
Sid#2121: speaking of... what?
rivalset#4984: Can't you use tensorflow serving on gcp?
StellaAthena#3530: Sorry, GPT-NeoX
Sid#2121: model skeleton? ☑️
Dataloading? ☑️
Sparse attention? ☑️
Model / Data parallelism? ❓
ZeRO? ❓
Pipeline parallelism? ❓
bmk#1476: happy new years from UTC-7 everyone! great work in 2020, and hopefully we can get a load more awesome work done in 2021
bismarck91#5255: Anyone have any luck with ssh into colab?
spirit-from-germany#1488: https://www.youtube.com/watch?v=7r0opCTVgx4&ab_channel=dedjo
moondrop#4519: Happy New Year lads. You're all breathtaking!!
fazz#8459: @bismarck91 colab-ssh released a couple weeks ago not tried it though. Let us know if it works - would be great to use vscode instead of browser
bismarck91#5255: Thanks. Will try it out. Used to try the hacky method where you had to setup ngrok but that was sometimes slow.
shgidi#0284: Hi guys, it's been a while since I've visited this forum 🙂 I'm giving a lecture about language models in a few days, and I wonder how is your progress in replicating GPT3. It's been a while since I've seen a new loss-line on the foomboard 🙂
chirp#4545: i got vs code working with colab! it wasn't too hard. wrote up some instructions here: https://www.notion.so/ericyu3/Using-VS-Code-on-Colab-18dcd29ece2a4aabaf89e0270240a5ca
chirp#4545: ooh didn't see the thing about colab-ssh
chirp#4545: will try that...
chirp#4545: (couldn't get it working, ran into a weird TLS error) |
xen0#3601: imagine thinking that the first model trained on Teh Pile has been released but it's just the pile itself
chilli#5665: Go upvote on reddit: https://www.reddit.com/r/MachineLearning/comments/kokk8z/r_the_pile_an_800gb_dataset_of_diverse_text_for
AI_WAIFU#2844: > Is this dataset available on huggingface datasets?
AI_WAIFU#2844: @bmk you now need to enter the second phase of your PR campaign
Louis#0144: gzgzgzg
chilli#5665: What, answering to questions?
bmk#1476: no see here's the :bigbrain: move: a week or two from now, we announce *again* to talk about being added to HF Datasets
AI_WAIFU#2844: Dealing with redditors and the twitterati
chilli#5665: Yeah, but should probably respond on reddit
chilli#5665: Also, Twitter metrics look pretty good so far
bmk#1476: can y'all do some of the responding
bmk#1476: i need a break from this
StellaAthena#3530: I'm on it
bmk#1476: thanks
StellaAthena#3530: Go leave
StellaAthena#3530: Turn off your computer
chilli#5665: Haha
Louis#0144: i'll respond as well
Louis#0144: about download process stuff
cfoster0#4356: Responded |
nz#9710: May I request the creation of a learning/beginner channel?
bmk#1476: this server isn't really a beginner server, there are better servers for that
Louis#0144: yeah the server isnt the most beginner friendly
Louis#0144: There are beginner friendly servers under #communities
nz#9710: That's understandable, it's just that sometimes I feel like if I have a question or something it would just end up derailing the conversation.
Louis#0144: this isnt the place for those kinds of questions though
Ben_H#4259: Hi everyone, I've been mainly lurking in here for the past month since I don't (currently) have the NLP experience to help or the bandwidth to pick it up. I just wanted to congratulate y'all on publishing!
Louis#0144: there are many other such places
nz#9710: Alright
Louis#0144: youre free to stick around and absorb the material ofc
Louis#0144: I learned a lot that way myself
bmk#1476: yeah we encourage people to lurk here and learn about stuff
nz#9710: Will do 👍
StellaAthena#3530: Thanks! To be clear, this is a working paper, not something that has been peer reviewed. But we hope to have it published soon.
AI_WAIFU#2844: Oh fuck all the HN people are pouring in
nz#9710: It's a good thing, isn't it?
bmk#1476: pouring in in what sense?
AI_WAIFU#2844: look at #deleted-channel
bmk#1476: there's been barely a dozen new people
chilli#5665: Not necessarily from HN |
bmk#1476: this is nothing compared to what happened when the Eye first linked us
chilli#5665: But I do hope that it doesn’t change the culture of the server much
bmk#1476: we can just tell people to "lurk moar"
chilli#5665: In the competitive programming discords I’ve been in, the best communities ended up being those with a rating requirement
bmk#1476: there's a bell curve meme hiding somewhere in here
nz#9710: How does that work if you don't mind sharing?
nz#9710: I'm a moderator in another discord so always interested in ways to improve the user experience.
chilli#5665: Everything else got flooded with repetitive questions
bmk#1476: where is the threshold lol
chilli#5665: 1800 in the server I mainly talk in lol
bmk#1476: cf?
chilli#5665: Or err, 1900
bmk#1476: oof
chilli#5665: 1900 CF
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/794705751834361897/unknown.png
bmk#1476: *oof*
chilli#5665: You just need to link your account
nz#9710: codeforces account I guess?
chilli#5665: And if your account is above a rating you can join yeah
nz#9710: I see |
chilli#5665: It’s 1900 for arbitrary user and 1600 if somebody vouches 😛
bmk#1476: lol
nz#9710: I never heard of codeforces, always of leetcode
iamian#9489: Makes sense, yeah
iamian#9489: I’ve heard of both
bmk#1476: i need to work on my cf sometime
chilli#5665: Code forces is basically the step up from leetcode
AI_WAIFU#2844: Is it worth grinding to get into one of these communities?
nz#9710: Well thank you for sharing, I just signed up 👍
StellaAthena#3530: What is CF?
StellaAthena#3530: Ah
chilli#5665: No lol
bmk#1476: i'm not griding to get *into* the community, i just like big number
bmk#1476: 1661 small number, 1900 bigger number
bmk#1476: and i want bigger number
chilli#5665: Ah yeah
iamian#9489: Theres no real benefit imho
bmk#1476: there isn't but it's fun and i like big number
chilli#5665: One of the server’s principles is to promote “ratism”.
bmk#1476: lol |
bmk#1476: that sounds kinda toxic ngl
iamian#9489: In that time I’m grinding on that I could either work on my paper or go sailing so
iamian#9489: ¯\_(ツ)_/¯
chilli#5665: The server's mostly nice haha
bmk#1476: when i say grind for contests i mean do a cf now and then so my rating can go lower
nz#9710: *time to grind codeforces*
bmk#1476: ~~is this short for rationalism~~
bmk#1476: is it part of the ratsphere
chilli#5665: But the way CF is set up really does promote ratism in the discussion
chilli#5665: Since they put your color next to every comment you make
iamian#9489: It's a typo chili meant racism actually (/s)
iamian#9489: Yeah makes sense
chilli#5665: Like, imagine if Twitter put your citation count next to all your tweets lol
bmk#1476: good idea let's make a plugin for that
bmk#1476: **0** https://cdn.discordapp.com/attachments/729741769738158194/794708413342416926/unknown.png
nz#9710: OK, but codeforces is also more specialized than twitter
iamian#9489: Damn I want that
nz#9710: I don't know, I kind of like the idea to be honest.
chilli#5665: Lol
bmk#1476: the first thing i'll do after installing the plugin is go here: |
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/794708633199575080/unknown.png
nz#9710: ahahah
iamian#9489: It's honestly not the worst idea tbh
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/794708721346805810/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/794708744599765072/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/794708763260747816/unknown.png
iamian#9489: Helps to see if the person I'm discussing with has any clue of what they are saying
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/794708814473199616/unknown.png
chilli#5665: Sounds like you’re getting into the spirit 🙂
bmk#1476: accurate, i have absolutely no clue what i'm saying
iamian#9489: mood
iamian#9489: half of my comments are either trolling or very thoughtful
iamian#9489: nowadays im trying to avoid ML twitter mostly
StellaAthena#3530: ML twitter sucks
StellaAthena#3530: Twitter sucks
chilli#5665: Lol this is one of my favorite posts that resulted from the server
chilli#5665: https://codeforces.com/blog/entry/77480
chilli#5665: Genius
Bedebao#4842: Twitter is not a place for intellectual discussion. Quite the opposite.
StellaAthena#3530: The only reason I use it is that more of my colleagues are there than on LinkedIn |
chilli#5665: C++ is a phenomenal language
StellaAthena#3530: My goal is to do the minimal amount of engagement and self-promotion necessary and otherwise pretend it doesn't exist.
chilli#5665: I also learned about my favorite macro here
bmk#1476: my feed is equal parts ML twitter, algebraic geometry twitter, and miscellaneous rat- and rat-adjacent stuff
chilli#5665: ‘#define private public’
bmk#1476: i have no idea why algebraic geometry has such a massive twitter footprint
chilli#5665: :berk:
nz#9710: I really liked the ML subreddit of a couple years ago. Now it's kind of gone downhill.
StellaAthena#3530: @chilli ruined it, tbh
AI_WAIFU#2844: reddit has gone downhill
bmk#1476: is algebraic geometry really that popular in math
iamian#9489: It ended with all the people training GANs on dicks
bmk#1476: or is my twitter feed just horribly skewed
bmk#1476: hey, our discord server has a big overlap in members with theirs
chilli#5665: Well, I originally joined this server since Stella advertised it on Reddit lol
StellaAthena#3530: Algebraic geometry is pretty popular
iamian#9489: Well I don't like a bunch of dicks popping up on my PC at work
StellaAthena#3530: Seems like you need a more fun job
cfoster0#4356: Save that for at home
iamian#9489: It is, I lead the team
bmk#1476: But your team isn't working on BigDickGAN
iamian#9489: True
iamian#9489: It's also something I shouldn't clone to the corporate server
iamian#9489: okay not as bad as my 30gb of porn text stories we downloaded for testing
bmk#1476: 30gb? Amateurs
iamian#9489: it worked for our idea
iamian#9489: but got one of our devs banned from google colab
chilli#5665: Google Colab checks for this stuff?
chilli#5665: :thonk:
StellaAthena#3530: you think 30 GB of porn is a lot? Try 30 **pounds** of porn
nz#9710: :monkas:
bmk#1476: What weighs more, 30 pounds of feathers or 30 pounds of porn
iamian#9489: with or without *stains*?
bmk#1476: That's included in the gross weight
Sid#2121: *gross* weight? it's obviously the porn
iamian#9489: thats gross
Sid#2121: https://www.youtube.com/watch?v=-fC2oke5MFg
iamian#9489: whats heavier a kilogram of feathers or a kilogram of fucking plutonium
bmk#1476: Doesn't the steel actually weigh more because weight is a measure of force while kg is a measure of mass and there's less buoyant force on the steel
iamian#9489: yup |
iamian#9489: but physics were ignored in the development of this joke
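bmk's buoyancy argument can be checked numerically. A minimal sketch with assumed bulk densities (the air and feather densities are illustrative values, not from the chat): equal 1 kg masses displace different volumes of air, so the less dense material feels a larger buoyant force and a slightly smaller net weight.

```python
G = 9.81              # gravitational acceleration, m/s^2
RHO_AIR = 1.2         # density of air, kg/m^3 (assumed)
RHO_STEEL = 7850.0    # kg/m^3
RHO_FEATHERS = 50.0   # loose feathers, assumed bulk density, kg/m^3

def net_weight(mass_kg: float, density_kg_m3: float) -> float:
    # True weight minus the buoyant force of the displaced air, in newtons.
    volume_m3 = mass_kg / density_kg_m3
    return mass_kg * G - RHO_AIR * volume_m3 * G

# The denser steel displaces less air, so it feels less buoyancy and
# registers a (very slightly) larger net weight for the same 1 kg mass.
print(net_weight(1.0, RHO_STEEL), net_weight(1.0, RHO_FEATHERS))
```

The difference is tiny (a fraction of a percent), which is why the scale in the joke doesn't care.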
Sahl#0630: What has more mass, a kilogram of feathers or a kilogram of fucking quarks
iamian#9489: _your mom_
iamian#9489: very professional
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/794713905778655252/p_ehrmannfrischerquark_front.jpg
iamian#9489: as a german this confused me at first
bmk#1476: We should start talking about ML stuff lol
Sid#2121: someone please fucking explain quark to me
asara#0001: This discord is going to get a *lot* of new members today if it stays at the top of HN, hopefully all for the better though
bmk#1476: All the new people here must be very confused
bmk#1476: It's like cream cheese
Sid#2121: what's the deal with the germans having it in a totally different section of the shelves
Sid#2121: it's like, got its own department
iamian#9489: we germans are addicted to it
iamian#9489: okay back to ml before someone yeets us all
bmk#1476: To anyone new here, this kind of stuff happens a lot
Sid#2121: no, as a purple-name i declare this a dairy only channel
iamian#9489: on a sidenote, supermarkets are another thing that should adapt to the 21st century before they lose against the internet
bmk#1476: The only channel where research actually happens is #off-topic (/s)
iamian#9489: i want smart assistants recommending recipes on fridges and shit
asara#0001: Do you anticipate any future issues if this server ends up having too many users, or is it believed that ML users are basically so amazing that there will unlikely be any issues with size?
Sid#2121: ok, i'll bite. What's wrong with the supermarkets
iamian#9489: it feels like they are moving in the same direction as malls in the 10's
iamian#9489: adapting too slowly to services such as Amazon Fresh
iamian#9489: they can only survive in the long term if they provide better service than Amazon
Sid#2121: do you live in germany? I was surprised at the lack of self checkouts when i moved there
iamian#9489: yep, and yep
Sid#2121: very upset that i actually have to interact with people
iamian#9489: same
rivalset#4984: but self checkouts in the us are super annoying compared to europe
iamian#9489: dont wish me a good day i just want my ice cream and my tea and not a damn conversation
rivalset#4984: you have to weigh every item that you scan and god forbid you take it out of the scanning area
Bedebao#4842: Heh. I saw that "chonker" you snuck in the Pile gif on the website.
bmk#1476: @iamian have you ever visited our beautiful #art channel?
iamian#9489: wonderful
bmk#1476: We have the best German memes
iamian#9489: wonderful
iamian#9489: but why though
Sid#2121: to keep the Germans away from the rest of us
iamian#9489: makes sense
iamian#9489: *goes*
Sid#2121: fairly sure there's a schmidhuber reference in there too
bmk#1476: There's a large number of German-speaking people here, and we didn't want to take over #memes
iamian#9489: ah very cool
iamian#9489: German ML community best ML community
rivalset#4984: Schmidhuber?
bmk#1476: if he could be invited, I'd very much like to do so
Louis#0144: @bmk someone from my lab joined
Louis#0144: @Ambient
bmk#1476: welcome
Ambient#0001: hey it’s me
Ambient#0001: (Spencer Frazier)
3dprint_the_world#6486: this only works if everyone who joins is actually an ML researcher/programmer
asara#0001: yeah, sometimes public discord invites get shared in unexpected places and things take some quick changes
bmk#1476: i will pull out the banhammer if need be
StellaAthena#3530: Our lurker to speaking ratio is between 1:100 and 1:1000. It'll be a long time before we have that "problem" and we can always set up a more formalized process
StellaAthena#3530: IDK about everyone else, but I've only shared rate-limited invite links
AI_WAIFU#2844: I think we need to do a reorg, possibly with access controls, but it should be managable.
bmk#1476: We need 600 more members before we surpass SSCD
StellaAthena#3530: only? |
StellaAthena#3530: huh
AI_WAIFU#2844: how many are we at rn?
bmk#1476: 1370
asara#0001: Yeah, I am not too worried. When it is worrying is when there is no moderation/staff that truly care, and then as quality degrades, smart people leave the conversation space *really* quickly. But as far as Discord goes, ML/AI is probably the best possible area of users, so it should be fine
AI_WAIFU#2844: This place has the advantage of having a mission, and that really helps with keeping the server focused.
StellaAthena#3530: I've spent more time working on this than my actual job the past month, so I think there's little danger of that
bmk#1476: Me too lol
chilli#5665: I thought you were in college :thonk:
Louis#0144: I’ve spent more time working on side projects than actual work but I think it’s been v fruitful
Louis#0144: 🤷♂️
3dprint_the_world#6486: my previous workplace actually used to encourage people to work on side projects because they viewed it as free training in the employee's own time
bmk#1476: I said it's complicated
chilli#5665: I see
chilli#5665: Lol
StellaAthena#3530: qanon is wild: https://twitter.com/dappergander/status/1345142898833162243?s=20
StellaAthena#3530: I'm glad I have a foot in the infosec world because it's become a great spectator sport
Sid#2121: what's gonna happen to Qanon when biden takes over and trump hasn't arrested and hanged everyone for treason
Sid#2121: i'm genuinely torn up about it, qanon is my favourite conspiracy meme
Louis#0144: LOL
Ambient#0001: obviously evolve to Ranon, their next most powerful form |
skiman10#4848: Don't worry, I'm sure they'll be able to theory themselves out of being stuck in that corner. They're pretty good at it.
Sid#2121: definitely looking forward to seeing how they weave themselves out of this one. It's like when a protagonist is stuck in a situation where they *really should die* but you know they can't because they're the main character
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/794736376323112990/unknown.png
3dprint_the_world#6486: https://www.jstor.org/stable/10.1525/nr.1999.3.1.60
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/794736400489775104/unknown.png
Sid#2121: 👀
bmk#1476: :ultrazucc:
Sid#2121: 👁️
3dprint_the_world#6486: > only one of the thirteen conspiracy theories examined collapsed after the failure of a prophecy
StellaAthena#3530: I recommend getting high and skimming r/asktrumpsupporters from time to time. It's hilarious
skiman10#4848: I have an unhealthy obsession with following the far-right and I agree skimming r/asktrumpsupporters is the best.
3dprint_the_world#6486: same
AI_WAIFU#2844: I recommend browsing thedonald website, that place is a schizo mess
bmk#1476: t_d is hilarious
bmk#1476: remember when they booked the wrong four seasons?
bmk#1476: holy shit t_d went wild doing mental gymnastics
3dprint_the_world#6486: > It’s hard to keep the faith when your wife and daughters have left you and we didn’t get the decisive MOAB [mother of all bombs] win we deserved on election night
^ actual quote from Trump supporter
skiman10#4848: Four Seasons Total Landscaping
|
I will never not laugh!
bmk#1476: they were like "yeah they actually picked this four seasons as a 200 iq 4d chess move"
bmk#1476: anyways this should be in #off-topic
bmk#1476: pls
bmk#1476: we're giving a bad first impression for the server to newcomers
Louis#0144: LMAO
Louis#0144: Holy shit
Louis#0144: I didn’t realize this was general
Louis#0144: Can we purge this
Louis#0144: Pls
bmk#1476: to newcomers: i promise we usually talk about research
Sid#2121: quick guys, let's start talking about QKV
bmk#1476: Linear attention is a horrible idea for text
StellaAthena#3530: Agreed
bmk#1476: Do we need to do the context length experiment?
StellaAthena#3530: Probably
StellaAthena#3530: Nobody else will
bmk#1476: Or did the Henighan scaling law paper cover that
AI_WAIFU#2844: Growing pains
StellaAthena#3530: Not really |
StellaAthena#3530: It did in a cursory fashion IIRC
bmk#1476: Wasn't there a plot that covered basically that?
Sid#2121: I didn't expect y'all to take my QKV suggestion seriously. It's past 2am here, therefore i can only converse in meme form
StellaAthena#3530: but not enough to make a paper not valuable
StellaAthena#3530: oh, query key value
StellaAthena#3530: I was trying to figure out what that meant
Sid#2121: what is 'the context length experiment' - didn't the shortformer paper cover this recently?
Ambient#0001: newcomer here, two thumbs way up
bmk#1476: no, i originally thought it did
Sid#2121: i haven't yet read it
bmk#1476: but apparantly that's not exactly what they did
bmk#1476: or at least it wasn't their main focus
bmk#1476: there's still room to do something else
chilli#5665: Why? Because it just sucks in general but it might be worth it if you really need long sequence lengths?
bmk#1476: you don't need really long ctx lens
Sphinx#2092: Ehh that's not really clear. Maybe for LM, sure
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/794738794032398356/unknown.png
bmk#1476: this is the plot i'm talking about
Sphinx#2092: But there's more to life than just LMs lol
3dprint_the_world#6486: [citation needed] |
AI_WAIFU#2844: Debatable
bmk#1476: this is LM land
StellaAthena#3530: I think the multilingual scaling laws experiment would be much more interesting tbh
bmk#1476: the info theory one?
StellaAthena#3530: yeah
chilli#5665: What is this showing? That 256 and 1024 tokens have the same scaling?
bmk#1476: effect of training on less context
3dprint_the_world#6486: it's 8 and 1024 tokens isn't it
bmk#1476: here's the full explainer https://cdn.discordapp.com/attachments/729741769738158194/794740216526012416/unknown.png
3dprint_the_world#6486: yeah that's what I mean
3dprint_the_world#6486: so it's not 256 tokens vs 1024 tokens
3dprint_the_world#6486: it's the position of the token in the context
bmk#1476: hm, so then our experiment is still on?
Sphinx#2092: I fully support and encourage all multilingual experiments.
chilli#5665: Why is the plot like this? Is it since the first token has no context so it's just a random guess?
bmk#1476: tbqh i don't really know and i kinda have some other stuff i need to attend to for a moment
StellaAthena#3530: tl;dr some cool research in linguistics indicates that the exponents in scaling laws should be a function of the language the data is in.
bmk#1476: this graph made sense to me a few days ago but i've forgotten by now
chilli#5665: Yeah I think that's what it's showing
chilli#5665: So I think it still shows that after 256 tokens you have all the context you need |
canjobear#5819: This fellow has some papers on information-theoretic scaling laws in language and how they might vary by language https://home.ipipan.waw.pl/l.debowski/
cfoster0#4356: This is super interesting. Thanks for sharing
StellaAthena#3530: @canjobear That's pretty cool, thanks for the reference
canjobear#5819: his stuff on Hilberg's law would be most relevant
canjobear#5819: that's the scaling of entropy rate as you consider more and more context
canjobear#5819: it's a power law, ofc
StellaAthena#3530: @cfoster0 @canjobear What I was thinking was that there's good evidence that the rate of information transfer of speech is roughly constant. Take English and Spanish as examples, as they have roughly the same script and syllable length. Spanish is spoken much faster than English, which means that it has a lower information density.
Written text is, in some sense, rate-limited text. As long as two languages have the same number of symbols per syllable they communicate in writing at the same speed. As the above comments would suggest, Spanish text is typically significantly longer than its English translation.
There are a couple directions you could go from here. Nobody has done work on how comparable perplexity over English and over Spanish are, AFAIK. Even if English LMs "learn more" per byte that doesn't mean they necessarily have a lower perplexity since Spanish text could be easier to predict.
Pure scaling law research, and in particular seeing if you can mathematically identify the dependency on entropy per byte, is also a very interesting direction to go in.
Those are the two main ones I've sketched out, but I'm sure there are other cool things. A future project might be working cross-script, as I've focused on two Latin-script languages here. I think that's a good idea to start off with because it builds in controls, but knowing the answer to the above questions for any pair of languages would be fascinating.
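The entropy-per-byte comparison sketched above can be prototyped cheaply. A minimal sketch using compression as a stand-in for an LM's cross-entropy (the bz2 proxy and function name are illustrative assumptions, not part of the proposed experiment):

```python
import bz2

def bits_per_byte(text: str) -> float:
    # Crude proxy for entropy per byte: compressed size over raw size.
    # A real experiment would use a language model's cross-entropy per
    # byte instead, but compression gives a quick upper-bound estimate.
    raw = text.encode("utf-8")
    return 8 * len(bz2.compress(raw)) / len(raw)

# Running this on parallel English/Spanish corpora would show whether the
# longer Spanish text carries correspondingly fewer bits per byte.
print(bits_per_byte("the quick brown fox jumps over the lazy dog " * 200))
```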
dopa#3178: https://news.ycombinator.com/item?id=25607809
dopa#3178: you guys made to hackernews 🙂
StellaAthena#3530: We posted that. Leo Gao is the first author of the paper 🙂
canjobear#5819: Yeah that's what brought me here
canjobear#5819: There is pretty good evidence that entropy rate is constant in speech across languages, I can't find the paper right now though |
chilli#5665: Ah I remember seeing this too
bmk#1476: nooo we're falling down the ranks again
bmk#1476: we were #2 at one point
canjobear#5819: There is some work that tries to make perplexity comparable across languages, I'm digging it up now
canjobear#5819: https://ryancotterell.github.io/papers/cotterell+alc.naacl18.pdf This paper looks at "bits per English character" to make things more comparable
canjobear#5819: https://arxiv.org/abs/2005.03774 This one does bits per phoneme, to abstract away from the writing system
canjobear#5819: although it's only word-level, not whole texts
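The "bits per English character" normalization from the Cotterell et al. paper can be sketched as follows (the function name and toy numbers are assumed for illustration):

```python
import math

def bits_per_english_char(total_nll_nats: float, n_english_chars: int) -> float:
    # Normalize a language model's total negative log-likelihood (in nats)
    # on one side of a parallel corpus by the character count of the
    # *English* side, giving a yardstick comparable across languages.
    return total_nll_nats / math.log(2) / n_english_chars

# Toy numbers: a Spanish LM scoring 2.0e5 nats on text whose English
# translation is 100,000 characters long, vs an English LM at 1.9e5 nats.
spanish = bits_per_english_char(2.0e5, 100_000)
english = bits_per_english_char(1.9e5, 100_000)
print(spanish, english)
```

Dividing both models' losses by the same English character count sidesteps the fact that the Spanish text itself is longer.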
StellaAthena#3530: Here's what you're looking for https://advances.sciencemag.org/content/5/9/eaaw2594
canjobear#5819: Yep that's it
canjobear#5819: iirc that was only using something like a phoneme level trigram model
canjobear#5819: Yeah, different languages will have different densities of information per word/character/etc., and another factor is that languages vary in terms of how much long-range statistical dependencies they have
canjobear#5819: Like, German and Chinese have more long-range dependencies than English. Just part of how the languages work
3dprint_the_world#6486: interesting, do you have any sources for this?
StellaAthena#3530: Is long-range dependency inversely correlated with how strict the word ordering is?
bmk#1476: is this because of german verb system lol
canjobear#5819: Languages with longer dependencies usually have freer word order
StellaAthena#3530: Right, that's what I meant
canjobear#5819: The words have complex morphology etc. that indicates how things are related, so the words that are connected don't have to be next to each other
Louis#0144: Ah yes a famous Stella essay
Louis#0144: Consider yourselves blessed |
Louis#0144: LMAO
StellaAthena#3530: @canjobear wanna research this with me?
bmk#1476: bist du wirklich sicher, dass deutsch so kompliziert sein werden könnte? ("are you really sure that German could end up being so complicated?")
canjobear#5819: ja
StellaAthena#3530: Dope
bmk#1476: (i couldn't think of a less contrived way to stack the verbs off the bat)
canjobear#5819: http://socsci.uci.edu/~rfutrell/papers/futrell2015largescale.pdf This is on long-range dependencies across languages. Syntactic dependencies
StellaAthena#3530: Oooo I’ll check that out
canjobear#5819: Statistical dependencies haven't been looked at as closely, in part because not enough data
Louis#0144: @StellaAthena this could be useful for the level set thing?
Louis#0144: Level sets in different languages?
StellaAthena#3530: @Louis in a “far future” sort of way, yes
canjobear#5819: I'd be interested in looking at scaling laws across languages and how they relate to differences in languages, for sure
canjobear#5819: I do this stuff for a living
nz#9710: Is there any work comparing language understanding throughout development with language modelling throughout training?
canjobear#5819: hmm, not so much with like neural network LMs
zphang#7252: depends on how you define and measure "understanding"
canjobear#5819: there's lots of comparisons of, like, Bayesian learners with human learners
StellaAthena#3530: @canjobear what’s your background / what do you do for a living?
canjobear#5819: I'm a linguistics professor at UC Irvine |
StellaAthena#3530: Ah cool.
Louis#0144: Wow I think you’re the first professor we have ?
Louis#0144: Someone fact check pls
canjobear#5819: I work on statistics and information theory and language
bmk#1476: exciting stuff
canjobear#5819: I don't know a lot of practical ML (yet)
AI_WAIFU#2844: You'll pick it up quick
bmk#1476: this seems like the perfect collaboration imo
Louis#0144: ML is easy
StellaAthena#3530: Sounds like a good fit then 🙂 a lot of people here know a lot of ML and not much science
canjobear#5819: ha
canjobear#5819: well, I know a little
bmk#1476: so we should get this language scaling laws thing going asap, then?
AI_WAIFU#2844: ja
StellaAthena#3530: #scaling-laws
AI_WAIFU#2844: but I'm busy with my own project
bmk#1476: EINS VON UNS! EINS VON UNS! ("ONE OF US! ONE OF US!")
Sphinx#2092: My issue with scaling laws like this is that it still focuses on one language at a time
Sphinx#2092: When we should really be doing them all at once.
canjobear#5819: it would be great to do a real crosslinguistic comparison |
Sphinx#2092: Especially for the very low resource languages
canjobear#5819: I'm skeptical that most differences between languages you see in NLP are interesting, tbh, because I think they're probably mostly just sample size effects
bmk#1476: what if we get really big sample sizes
bmk#1476: we're planning on building a 100TB multilingual dataset
Sphinx#2092: There's definitely differences
canjobear#5819: yeah, if you can get big enough samples to estimate curves and see where they're going, it would be more convincing
canjobear#5819: Is The Pile just English?
Sphinx#2092: And they get bigger with larger batch size
StellaAthena#3530: @canjobear 95% English, yes
bmk#1476: basically yeah
bmk#1476: negligible amounts of other languages
bmk#1476: it's actually 97% and a good chunk of the 3% are probably misdetected
Bedebao#4842: 100 TB? That's orders of magnitude higher than the pile. Would it still take less time since you've got all the tools?
3dprint_the_world#6486: do you know Alistair Knott by any chance
canjobear#5819: Haven't met him but I have read his papers
3dprint_the_world#6486: ah nice
Louis#0144: I’m kinda surprised that this is the first prof thats joined tbh
3dprint_the_world#6486: (he's a collaborator of mine)
Louis#0144: LOL
Louis#0144: ok |
Louis#0144: Makes sense
3dprint_the_world#6486: oh no sorry
3dprint_the_world#6486: I mean Alistair Knott is a collaborator of mine
Louis#0144: I considered inviting my advisor but my colleagues said to not do that
Louis#0144: Ohhh ok
3dprint_the_world#6486: who also works in this same area
3dprint_the_world#6486: anyway
Bedebao#4842: ...do you want machine learning professors to come here or not?
zphang#7252: I have a running suspicion that my advisor has a secretly high power level, but we're both politely not acknowledging it
canjobear#5819: I'm going to invite some of my students 😄
Louis#0144: Thatd be cool
canjobear#5819: they would probably be better collaborators than me tbh
Louis#0144: My advisors power level is very high, his meme game is super strong
chilli#5665: ... what does power level mean here?
chilli#5665: Do your students know that your discord profile pic is an anime girl
Louis#0144: LOL I DIDN’T EVEN NOTICE
canjobear#5819: no. it will be a bit of a reveal
Louis#0144: Waifu reveal party
AI_WAIFU#2844: https://knowyourmeme.com/memes/hide-your-power-level
chilli#5665: Yeah, but in internet conversations I've always associated "hide your power level" with a specific kind of view |
3dprint_the_world#6486: so it will all come down to a fight between the WH40K faction and the anime faction
chilli#5665: Ah
AI_WAIFU#2844: That's just correlation, the concept is fairly general
chilli#5665: > The phrase has also been used to hide one's political affiliations, particularly among members of white supremacist political groups such as the alt-right.
zphang#7252: I assume that's some post-2016 twisting of the term
canjobear#5819: memes usually generalize in meaning over time
zphang#7252: anyway, I meant the old-type
chilli#5665: Lol
canjobear#5819: it's like thermodynamics
3dprint_the_world#6486: *takes the blue pill*
canjobear#5819: another interesting research program
AI_WAIFU#2844: You'll see power level hiding in any social environment that encourages it.
triggerhappygandi#0001: Did I hear Warhammer 40k?
triggerhappygandi#0001: Heresy Crusade Filthy xeno Emperor Protects Magnus did everything wrong death to traitors!
Dromarion#3383: Lorgar literally did nothing wrong :^)
bmk#1476: guys this is #general
bmk#1476: take the wh40k talk to #off-topic
Deleted User#0000: Hey guys
Deleted User#0000: Quick question
Deleted User#0000: Can this AI tech be used to make chatbots and such
Deleted User#0000: Ideally fairly realistic
Deleted User#0000: And even better if it has things like memory like humans
Deleted User#0000: If not its fine
Akaibu#9379: Uhh, hi?
Deleted User#0000: Just wondering because I'm working on a similar project rn
Deleted User#0000: Heya
Akaibu#9379: So to put it short, I know another potential good data source
bmk#1476: tl;dr yes
Akaibu#9379: Which I’m vaugely part of
bmk#1476: what is it / how big is it?
Akaibu#9379: The problem is
Deleted User#0000: Yeah I tried doing something like this before
Deleted User#0000: With neural nets
Deleted User#0000: But it's inefficient as hell
Akaibu#9379: It might introduce a Tay_Tweets problem
Deleted User#0000: And the dataset wasn't ideal
Deleted User#0000: So currently I'm trying a hybrid
bmk#1476: how big is it @Akaibu
Akaibu#9379: Honestly? Not too sure, I know it’s got almost 50-100TBs of images but not sure on the actual text
Akaibu#9379: Maybe 15GBs? |
Akaibu#9379: It’s archives of 4chan
bmk#1476: we're not interested in images atm
Akaibu#9379: We got the text too
bmk#1476: yeah and we're not going to add 4chan. sorry
Deleted User#0000: Oh lord
Deleted User#0000: Yeah 4chan isn't ideal
Deleted User#0000: But It could work
Akaibu#9379: Yea I figured you wouldn’t want that problem
bmk#1476: also 15GB is on the small end for us
Akaibu#9379: But what about just like /sci/ or other blue boards?
bmk#1476: too small
Akaibu#9379: Another thing is scraping the big partnered discords
bmk#1476: even all of it together is too small
Akaibu#9379: again I’m not sure on the text size, let me check
Akaibu#9379: It could be much bigger
RaspberrySleuth#3985: I was reading the pdf it looks like they gathered a great collection of datasets so far
bmk#1476: our goal for Pile v2 is 100TB in total
bmk#1476: currently we can probably get 30-40TB of that
Deleted User#0000: Hello all. Greetings.
bmk#1476: we need another 60TB |
bmk#1476: welcome
Deleted User#0000: Thank you
Deleted User#0000: So if someone knows how I could make or find a decent chatbots ai could you let me know please
Deleted User#0000: Because my past neural-based ones were shit tbh
Deleted User#0000: And I'm currently working on a hybrid, mostly rule-based; that won't be perfect either though.
rivalset#4984: see facebook's blender paper
bmk#1476: you can check back in a few months once we've finished our gpt3 replication
Deleted User#0000: I have a quantum computing API AI discord chatbot right now. 🙂
Deleted User#0000: Anyone else have?
Deleted User#0000: Curious
Deleted User#0000: legit
Deleted User#0000: "Quantum computing ai"
Deleted User#0000: *kek*
Deleted User#0000: Sounds good
Deleted User#0000: You know the problem with all neural chatbot AIs
Deleted User#0000: Is they're only good at small talk
Deleted User#0000: They don't have much general knowledge as such
rivalset#4984: https://ai.facebook.com/blog/state-of-the-art-open-source-chatbot/ It's not something that will be trivial without lots of compute and machine learning skills
Akaibu#9379: Hmm, yea after looking into it, it seems we wouldn’t have more than 50GBs of text to contribute
Akaibu#9379: Sorry about the waste of time |
bmk#1476: oh, it's no problem
Deleted User#0000: I had fun making a discord bot do that
bmk#1476: if you ever find a good source of text, please let us know
Deleted User#0000: Not kidding btw
Deleted User#0000: fun as hell
bmk#1476: @Deleted User could you elaborate on the technical details
Deleted User#0000: sec
Akaibu#9379: Is the Wikipedia source just English Wikipedia?
bmk#1476: yes
Deleted User#0000: Familiar with https://github.com/EleutherAI/The-Pile
Deleted User#0000: ?
Akaibu#9379: Simple English might be valid to toss in
bmk#1476: i would recommend reading through the paper first, most of the details are in there
bmk#1476: we plan on including all of wikipedia next time
Akaibu#9379: I thought I did lol
bmk#1476: simple wikipedia is probably pretty small tbh
Akaibu#9379: Yea it’s less than 200k articles and each is really small
bmk#1476: do you have a question about it?
Deleted User#0000: Also one other quick question
Deleted User#0000: Is there an api or website |
Deleted User#0000: No, you asked how I am doing my bots
Deleted User#0000: Where u can get simple answers
Deleted User#0000: To simple questions
Deleted User#0000: Because I'd need to implement general knowledge for my rule based ai
Akaibu#9379: But yea, would there be any problem with scraping the big public discords? I’d imagine it would be bigger than IRC
bmk#1476: no scraping discord, for privacy reasons
bmk#1476: how are you using pile to do your bots?
bmk#1476: openai's api?
Deleted User#0000: Hmmmmmmm
Akaibu#9379: Also there’s other public IRC logs than just Ubuntu
Deleted User#0000: Is there a specific hunk of code u can link to or something?
Akaibu#9379: Archiveteam’s for example
bmk#1476: i recommend googling
bmk#1476: how big?
Akaibu#9379: No clue
bmk#1476: our goal is 100TB for the next iteration
bmk#1476: if it's smaller than 100GB, it's not worth thinking about
Akaibu#9379: But like with the various ones you could google I imagine you could get a sizeable amount
Akaibu#9379: Enough drops in a bucket will quench a horse
bmk#1476: not worth the time |
Akaibu#9379: ¯\_(ツ)_/¯
Akaibu#9379: Oh yea! Reddit has a dump!
Akaibu#9379: It’s at least a few hundred I believe
bmk#1476: at the moment, we're not really looking for new datasets, unless they're *really* massive tbh
Deleted User#0000: Reddit has some massive dumps
pingu692#2535: Hey guys, just joined after coming across posts about Piles on Twitter and Reddit! I read you guyz used arXiv and PubMed papers, what about S2ORC? It's kinda superset of these right?
Deleted User#0000: But they're not great for neural networks
Deleted User#0000: Because that's what I tried
bmk#1476: what's S2ORC?
chilli#5665: Microsoft also has a papers100M dataset
pingu692#2535: The entire Semantic Scholar dataset, AllenAI had a paper in ACL 2020
bmk#1476: ah, hadn't heard about it
pingu692#2535: https://www.aclweb.org/anthology/2020.acl-main.447/
pingu692#2535: Its over 650Gigs if I remember right, used a part of it recently for a project
bmk#1476: anyways, more data isn't a top priority right now unless it's, like, multiple TB after processing
rivalset#4984: your network was probably too small
bmk#1476: reading through their github i see some major issues that might make it unsuitable
bmk#1476: but i'll definitely look into it further sometime
Deleted User#0000: I cannot reveal that yet. That is a secret. For now.
Deleted User#0000: But nice to meet you all |
pingu692#2535: :) I'll try and go through the Pile paper thoroughly soon...thanks for the resource!
Deleted User#0000: I can say what I am doing
Deleted User#0000: Learning through large datasets of comparative religion and philosophy. And translations between all books for starts.
bmk#1476: if my understanding is correct, i don't think the pile would be of much help for you
Deleted User#0000: I did say I am not revealing it all, the person running the team which I did not link would kill me
Deleted User#0000: 🙂
bmk#1476: well, in any event, please do cite it if you end up using it
Deleted User#0000: Once it is not a secret, I will cite it all
Deleted User#0000: 🙂
Deleted User#0000: I was curious if anyone had done it before with a discord bot
Deleted User#0000: Hey guys
Deleted User#0000: I looked at openai
Deleted User#0000: Hello @Deleted User
Deleted User#0000: And I don't generally want to use a neural net
Deleted User#0000: For answering general knowledge questions
Deleted User#0000: And hey @Deleted User
Deleted User#0000: Nice to see you
Deleted User#0000: And you
Deleted User#0000: 🙂
Deleted User#0000: Like neural nets are cool but for what I'm doing it's not ideal |
bmk#1476: this server is not for beginner help, i recommend asking somewhere else
Deleted User#0000: Tru that
Akaibu#9379: It’s a shame you won’t take smaller datasets even when they might collectively be larger, as even with the short time I’ve spent looking, I’ve seen something like 15TBs worth of stuff, but it’s all like 50GBs or less
Deleted User#0000: Just thought I'd ask since I'm here
Deleted User#0000: Indeed.
Deleted User#0000: We can talk later
Deleted User#0000: I love nerual nets
Deleted User#0000: And when done right are amazing
chilli#5665: You've found 300 separate datasets of 50GB or less?
Akaibu#9379: But meh, I guess I’m just out of my league here lol
Deleted User#0000: But realistically a hybrid would be perfect for chatbots if possible
Akaibu#9379: Well obviously I’m exaggerating but I really could find that much with not much time
Deleted User#0000: My discord bots are neato. Been trying to tell people on the-eye that but people don't want to see.
Deleted User#0000: So 😛
Akaibu#9379: I’ve at least spotted a couple terabyte worth of various sizes that don’t seem “big enough” for you all to consider
chilli#5665: You'd need 20 datasets of 50GB or less
chilli#5665: To make one terabyte
Akaibu#9379: Yea that’s about right
chilli#5665: From the ones you've mentioned so far I'm not sure I've seen one terabyte's worth
Deleted User#0000: Cooleo |
chilli#5665: Or at least, they're legally problematic
Deleted User#0000: Wanna share info?
Akaibu#9379: Nothing legally problematic with 4chan, though I understand why one would want to avoid inserting that, unless they want another Tay_Tweets
Akaibu#9379: And I stopped mentioning them after it was clear small datasets were basically ignored
bmk#1476: with all due respect, AI chat is a very well researched field and so people who aren't up to date with the literature aren't likely to be taken seriously
chilli#5665: Well, you said 4chan was 15GB right?
Akaibu#9379: I’m not actually sure on the number, it’s various archives thrown throughout like 20-30 archive.org items
Akaibu#9379: but at least 15GBs of text, or at least data, yes
Deleted User#0000: @Akaibu could you dm me the text only please?
Deleted User#0000: It'd be interesting to have a look at lol
Deleted User#0000: Maybe train a neural net up on it
Akaibu#9379: https://archive.org/details/archive-moe-database-201506 lol just link it here
Deleted User#0000: Awesome
Akaibu#9379: Good luck with it
Akaibu#9379: You're gonna need it
Akaibu#9379: Database is a clusterfuck
Deleted User#0000: Sounds fun
Akaibu#9379: From what I understand we’ve been trying for 5 years to make a new system just so we aren’t patching someone’s unmaintained crap
Deleted User#0000: I generally have two different projects I fuck around with
Deleted User#0000: At a time |
Deleted User#0000: So when I get bored with one
Deleted User#0000: I move to the other
Deleted User#0000: I'm currently winding down work on my secure decentralised networking backend since it's in alpha now
chilli#5665: Yeah 15GB is really too small
Deleted User#0000: And I'm having another look at my AI stuff
Deleted User#0000: I shelved
chilli#5665: The issue is that there's some amount of upkeep for each data source
chilli#5665: Like, if you take a look at the current paper
Akaibu#9379: Oh I see the issue, death by a thousand paper cuts
chilli#5665: For each one, we needed to run separate analyses, we looked at licenses, and tried to figure out ToSes
chilli#5665: 22 was already kind of annoying
chilli#5665: 500 will be unmanageable
Akaibu#9379: Hmm, what about Pastebin?
chilli#5665: I don't know how much data is there/accessible
Akaibu#9379: I imagine that’s more than a terabyte of text
Deleted User#0000: If I could be bothered I could whip up a pastebin scraper lol
RaspberrySleuth#3985: Perhaps there should be a different channel for personal projects?
Deleted User#0000: But someone else already probably did it
bmk#1476: depends, what is the project about?
bmk#1476: if it falls into #alignment-general , #research , #scaling-laws , etc, then you can talk about it there |
bmk#1476: otherwise, either #off-topic , or not in here at all
Akaibu#9379: I think they mean a “hey look what I’m doing” channel
RaspberrySleuth#3985: not for me, just because the discussion here seems to drift away from the pile
bmk#1476: this channel isn't for talking about the pile, #the-pile is
bmk#1476: but yeah this channel is more or less a free for all
Akaibu#9379: Also, PasteBin looks to be a bust
Deleted User#0000: I know
RaspberrySleuth#3985: Oh okay
bmk#1476: well, not free for all as in no rules
bmk#1476: there are rules, we're just less strict about topics
RaspberrySleuth#3985: I dont usually talk in chats i just like reading
bmk#1476: this server isn't really geared up for people who aren't directly contributing to eleuther projects
bmk#1476: this server is primarily for coordinating eleuther projects
RaspberrySleuth#3985: Oh if it is for the best i can leave?
chilli#5665: (and for talking about research papers)
bmk#1476: but usually papers relating to the main research topics we're interested in
Akaibu#9379: Just Lurk Rasp
RaspberrySleuth#3985: Im more interested in the research papers
Deleted User#0000: Is there another popular discord server
Deleted User#0000: For general ai stuff then? |
rivalset#4984: check #communities
chilli#5665: Go to fast.ai discord
Deleted User#0000: Alright thanks
thrasher#7261: what is the scope of this group? language models specifically, NLP generally, big transformery things applied to arbitrary tasks, all of the above?
Akaibu#9379: LOL 7.2 reads like an academic shitpost
bmk#1476: We're interested in scaling, alignment, ML theory, and language models among other things
maghav#7178: Is there enough compute to experiment with scaling?
Akaibu#9379: “7.2 Acceleration of AI Timeline”
“Oh no, the Terminator is gonna fucking read our shitty code and irc goatse hazing”
bmk#1476: Yes
bmk#1476: This is not the place to say bad things about alignment
bmk#1476: We are dead serious
bmk#1476: 7.2 is not a shitpost
Akaibu#9379: If I posted it on /Sci/ it’d probably become one lol
maghav#7178: What is the level of compute? Is there a general compute pool for the whole of Eleuther, or is it project-specific? / is there a better channel to ask these things?
cfoster0#4356: @maghav At the moment we have more compute than we have uses
cfoster0#4356: Large quantities of TPU resources through TFRC
cfoster0#4356: Also other GPU/CPU resources, some of which are project-specific
bmk#1476: we essentially have several million dollars worth of compute resources in total |
chilli#5665: Just because 4chan doesn't take it seriously doesn't mean that it's not an issue
bmk#1476: @Akaibu i strongly recommend you to lurk moar
chilli#5665: Well, more accurately, more compute than we have engineering effort
cfoster0#4356: True.
rivalset#4984: What is 7.2?
cfoster0#4356: Section of the paper
rivalset#4984: oh
chilli#5665: It's the one about alignment
Akaibu#9379: people in the 60’s thought we’d be in flying cars
People in the 80’s thought we’d have hoverboards
Who’s to say the negative predictions won’t fail too?
cfoster0#4356: Difference here is, if we fuck up, there *is no second chance*.
cfoster0#4356: We take alignment very seriously, as a general rule
RaspberrySleuth#3985: Isn't that kinda survivor bias? Some predictions may have failed but not all?
cfoster0#4356: Because the tail risk (even if low probability) is existentially catastrophic
RaspberrySleuth#3985: Its best to take them seriously?
cfoster0#4356: Only comparable scale of risk is maybe nuclear weapons.
maghav#7178: Is the majority of current engineering effort based on GPT Neo and The pile?
45#2247: Also, no economic incentives for flying cars / hoverboards
thrasher#7261: kinda hard to mess up your entire light cone with just nukes |
chilli#5665: This is like saying "people in the 60s thought we might die of a nuclear war, people in the 70s thought we might die in a nuclear war, people in the 80s thought we might die in a nuclear war, and they were all wrong. Why are you worrying now?"
Akaibu#9379: Even with nukes people got the effects wrong. They thought it was able to create a self sustaining reaction that would wipe out the atmosphere or such, obviously that didn’t happen
bmk#1476: i don't think anyone is going to get convinced in this debate
chilli#5665: The only reason you shouldn't take alignment seriously is if you don't think there's a serious risk of AGI happening soon
bmk#1476: https://www.youtube.com/watch?v=EUjc1WuyPT8 i strongly recommend this video for people not up to speed with alignment
Akaibu#9379: I just don’t think it’s possible for us to accidentally create something like that
bmk#1476: watch the video
Akaibu#9379: Computers are pretty stupid afterall
bmk#1476: *watch the video pls*
Akaibu#9379: I don’t have the mobile data lmao
bmk#1476: then wait till youre at a computer
Akaibu#9379: That’ll be another two days
chilli#5665: Create something like what?
cfoster0#4356: We'll be here in two days 🙂
Akaibu#9379: I’ll probably forget about this discord in two days lmao
bmk#1476: i don't think it's worth debating until we're on the same page
chilli#5665: Either way, if you're not interested in AGI/think it's a pipe dream this server probably isn't for you
Akaibu#9379: I don’t think it’s a pipe dream, just that it’d have to be very intentionally made
Akaibu#9379: Which ain’t happening soon
Akaibu#9379: At _least_ 30 years |
maghav#7178: most likely but do not underestimate the effect of compounding - think where we were 10 years ago
Akaibu#9379: Besides there’s the whole Chinese room thing
Akaibu#9379: Compounding is why I said 30 and not 120
bmk#1476: Anyways, go read up on alignment and we can talk after
thrasher#7261: w.r.t. agi timelines it's very easy to say numbers, very hard to convince people you've said the right numbers
Akaibu#9379: Nah, it’s more like the Fusion Constant
Akaibu#9379: 50 years ago people said in 50 years we’d have commercial fusion power, and they say today we’ll have it in 50 years
bmk#1476: @Akaibu none of these arguments are original. I highly recommend you do some reading first
cfoster0#4356: Agree with what @bmk said. It's a bit frustrating for all of us trying to communicate about these things, without more common ground. There are a lot of good resources you should be aware of, ex: https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ
bmk#1476: this post in particular refutes your point very well https://intelligence.org/2017/10/13/fire-alarm/
bmk#1476: and yeah you should definitely read the resources that @cfoster0 linked too
maghav#7178: A general question for a noob here though - what projects require some engineering effort/need some folks right now?
bmk#1476: What programming experience do you have
maghav#7178: know python/pytorch well and contributed to C++
bmk#1476: Hm, so we have a bunch of research projects floating around
bmk#1476: Most things aren't nailed down atm, but now with Pile done we can shift our focus to getting those projects up
bmk#1476: So tldr i recommend you stick around and we'll try to find something you can help with asap
chilli#5665: Well tbh there's a lot of Engineering effort
chilli#5665: Getting dependencies set up, getting stuff to work, actually getting stuff to run, etc.
chilli#5665: That's a large part of the work |
bmk#1476: There's a lot of work needed putting together the evaluations code, though that's blocked on me getting my stuff finished up
Akaibu#9379: Okay, I’m reading that and getting “predicting stuff is really fucking hard, even for literal world experts”
thenightocean#6100: Yes but that goes both ways, and sometimes experts can be too pessimistic with their predictions. See: Lord Kelvin saying heavier-than-air flying machines are impossible and then two bike salesmen proved him wrong a few years later
thenightocean#6100: and in case of AGI the costs of being wrong on the predictions in that way would be a disaster.
bmk#1476: tldr the predictions of experts have very little shannon mutual information with whether something actually happens
Sid#2121: gpt-neox could definitely use work if you're proficient with deepspeed 🙂
gwern#1782: congrats on getting the pile out. I worried several times it would just collapse under its own weight and turn into the typical hackers-get-fun-project-90%-done-but-then-get-bored thing
kip#6104: i find the conversations on this server so interesting, i'm going to look into alignment more so i can join in on the discussion more
AI_WAIFU#2844: If you have any specific things you want to find out about, a lot of us are basically talking directories, and can often point you to relevant material that might not be the easiest to find by conventional methods.
Deleted User#0000: Hello all
Deleted User#0000: Hello @Luigi
Deleted User#0000: Welcome
chilli#5665: Imo it's basically all up to the central person taking responsibility for the project
chilli#5665: There's inevitably a lot of boring stuff that nobody wants to do
Louis#0144: AMA, tried uninstalling python https://cdn.discordapp.com/attachments/729741769738158194/794986858492592140/Screen_Shot_2021-01-02_at_12.45.39_PM.png
Louis#0144: needed to do a clean install
Louis#0144: how fucked am I
Deleted User#0000: Hello @nutbread Sup?
nutbread#0041: helo!
Louis#0144: god im fuckin defusing a bomb here |
3dprint_the_world#6486: probably easier to reinstall linux at that point
3dprint_the_world#6486: I don't know why ubuntu lets you do this. seems like a massive UX design failure to me.
Louis#0144: Rip
Louis#0144: I’ll just reinstall
Sahl#0630: These can’t be all python dependencies... right?
Louis#0144: lol
thrasher#7261: what did pyenv and conda do to you such that you have reached this point
3dprint_the_world#6486: ~~they are things that depend on python~~
Sahl#0630: I’m pretty sure it wouldn’t let you uninstall python without uninstalling those, and they probably remain required even if python’s uninstalled
Sahl#0630: basically pretty sure they’re all dependencies
3dprint_the_world#6486: oh yes, quite right
gwern#1782: that's not a solution here, unfortunately. no one is paid to do this. you can't be 'fired from eleutherai by your PM'. and few or none of us have the skills and willingness to plug arbitrary gaps. so there's always a risk of it just halting, blocking on one or two people who then ghost
3dprint_the_world#6486: but there's an equally large list of things that depend on python
andyljones#7746: save yourself a vast amount of trouble, start working from conda envs at the least and docker images ideally. vscode's dev-in-a-container support is *superb*.
Louis#0144: Yes
Louis#0144: Doing that now
Louis#0144: Rip
will#0685: i've used pipenv quite heavily, really fast to get up and running and it's been quite dreamy: https://realpython.com/pipenv-guide/
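The env-management advice above (conda envs, docker, pipenv) all amounts to the same idea: keep project dependencies out of the system Python. A minimal sketch of that workflow using only the stdlib `venv` module, since conda or pipenv may not be installed; the directory name is purely illustrative:

```shell
# Isolated-environment workflow (conda/pipenv would be analogous).
# Project deps live in ./.venv, so uninstalling or upgrading them
# can never take the system Python (or half the OS) with it.
python3 -m venv .venv                        # create the environment
. .venv/bin/activate                         # activate it for this shell
python -c "import sys; print(sys.prefix)"    # confirms we're inside .venv
```

Deleting `.venv` and recreating it gives a clean reinstall in seconds, which is the failure mode the screenshot above runs into with system Python.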
nz#9710: if there's anything that is particularly boring but that doesn't require skills too advanced I would happy to help
chilli#5665: Not saying it's a solution, just that most of the reason this worked was that bmk and stella pushed it through |
bmk#1476: for any projects where i'm first author, i'm prepared to make sure it gets done no matter what
Louis#0144: *waves cookie infront of bmks face*
Louis#0144: any paper?
Louis#0144: LMAO
bmk#1476: only ones where i'm first author
bmk#1476: if i'm second/middle author, i'll sit back and watch the dumpster fire while eating popcorn
Akaibu#9379: Convert The Pile into Brainfuck for optimal storage
Louis#0144: True
StellaAthena#3530: I would say “go to hell” but TBH you’re already there
xen0#3601: are you gonna also train lesser size models, or gonna focus on making something 175B-like only?
xen0#3601: :angrythonk:
Deleted User#0000: Hello all.
xen0#3601: oh heyo law person
Deleted User#0000: hahaha
Deleted User#0000: that is just a side-quest I did
Deleted User#0000: 🙂
Akaibu#9379: Also have the converted data hosted onto punch cards
futurememe#2076: Hey all! This is amazing news! I am currently grappling with GPT-3 price
futurememe#2076: I want to provide services for free but can't because GPT-3 will cost so much
futurememe#2076: Da Vinci is so good but so expensive |
futurememe#2076: LOVE tha work you guys are doing. I hope I can figure out how to use this!!!
futurememe#2076: Trying to make an AI teacher!
WAUthethird#4977: nice to see our referral worked out for you guys
IDK#1046: What are you doing with GPT-3?
xen0#3601: oh heyo wau
Deleted User#0000: I am working on my dataset, which is taking a lot longer than I wish
StellaAthena#3530: Models of all sizes. We’ve trained GPT-2 scale models on the Pile.
xen0#3601: you can just borrow some parts of the pile
xen0#3601: it's pretty giant
Akaibu#9379: _i_ kinda want to create an AI that can create conlangs
Deleted User#0000: Not for what I am doing.
Deleted User#0000: But yah
IDK#1046: Is it available to download btw?
xen0#3601: are evaluation results better than those of original gpt-2 models?
xen0#3601: pile is much more diverse than common crawl, so i'd expect so
futurememe#2076: @IDK lol...the question is not what I am doing with it....but what I am NOT doing with it. It is soooo great. I suppose my high level goal is to use it to power an AR Game wrapper on top of reality and give a simulated sentience to all life forms using datasets. IE: I want people to be able to talk to a Sunflower:)
futurememe#2076: Da Vinci rocks this out.
futurememe#2076: But sooooo expensive and sucks
futurememe#2076: My goal is to augment empathy in the earth and turn reality into one big video game:)
IDK#1046: What's Da Vinci? |
futurememe#2076: That is the engine for GPT-3 that gives amazing answers
xen0#3601: uh, i wouldn't say that it's an engine, but yeah, that's SOTA GPT-3 model, the "GPT-3" as openai says
Sid#2121: trained models? not yet, very soon. We're transitioning our code to GPUs right now but i'm about to get some final models training to release for our TPU codebase
xen0#3601: weren't you focused on TPUs? :thonk_sun:
xen0#3601: ~~i mean, colab has lots of them, so that'd make good use for us poor people~~
Sid#2121: we were, but as the latest msg in #announcements says, we now have a ton of GPUs 🙂
xen0#3601: ah well
futurememe#2076: That's amazing!
Sid#2121: but yeah, this is precisely the reason we want to still release a model for the TPU codebase
WAUthethird#4977: coreweave is awesome, one time they let me stress test their new A100 system for free
good thing you guys are around to take advantage of that initial offer
xen0#3601: a model trained on the pile you mean, or the GPT-2 replication as you did?
Sid#2121: mtf allows finetuning of >gpt2 size models in colab, which as far as i know hasn't been possible before
futurememe#2076: Is there an API I can test with yet?
Sid#2121: on the pile
xen0#3601: oh, amazing to hear!
xen0#3601: that's really unexpected
Sid#2121: we won't be having an API, we'll just release the weights. They'll work on colab, so everyone can run them for free
xen0#3601: ~~inb4 colab breaks down like in days of old ai dungeon~~
Sid#2121: (to clarify, just for the TPU models) |
WAUthethird#4977: question - will both the non-distilled and distilled weights be released?
Or just the distilled ones?
maghav#7178: I wonder if the next step to The Pile is "The Image Pile" essentially replicating google's JFT-300M dataset or something equivalent? @bmk
Mimic#3790: I think it's amazing OpenAI only got gpt to 3, and gpt-neo is already all the way up to X 😜
Sid#2121: yeah, can't thank you enough for the referral 🙂
bmk#1476: image pile is not being considered
xen0#3601: eleuther is focused on language models right now i think :angrythonk:
Sid#2121: next we'll release gpt-neoIV, just to fuck with ya
StellaAthena#3530: Probably both. As in, there’s no specific plans but also no reason to not do both
xen0#3601: ~~or you'll just go corpo route and say "hey we'll offer gpt-3 but at lower prices, though no open-source for ya"~~
Sid#2121: we've had discussions about this before, but the risk of illegal / immoral content is much higher with images, so we've decided to stay away for now. You should check out YFCC100M tho
Sid#2121: We're too disorganized to even think about starting up a corporation lmao
Sid#2121: that and we don't want to
StellaAthena#3530: The range of things that can go wrong in image data is much, *much* worse than text.
xen0#3601: welp, openai sure did go that route... :p
remember when GPT-2 wasn't released *really* because of "ethics"?
StellaAthena#3530: There’s no text version of “child pornography” for example
Louis#0144: I got them to lend me GPUs too
Louis#0144: :^)
Louis#0144: I got 8 GPUs for two days |
xen0#3601: there literally is
Louis#0144: There is
xen0#3601: people write scary stuff, and even if it's text and not illegal, it's still highly immoral
StellaAthena#3530: What is it?
xen0#3601: ~~i mean, if you knew what people use AI Dungeon for~~
Louis#0144: Yeahhhhh
Louis#0144: That’s a thing
45#2247: mein kampf: exists
StellaAthena#3530: I agree that there is immoral stuff in text. We’ve had some discussions about this in the past. But we felt that it was much, much less of a worry for text.
StellaAthena#3530: What are some examples of really bad text?
StellaAthena#3530: (Don’t write them, describe them)
Sid#2121: ok, but writing fucked up stuff about children, as disgusting as it may be, *doesn't involve the harming of actual children*
xen0#3601: still highly immoral, still biases for AI
futurememe#2076: LOL AI Dungeon is amazing
Louis#0144: There’s tons of loli fanfic
StellaAthena#3530: (Hopefully that’s obvious)
futurememe#2076: My daughter and I do so much around poop and pee
maghav#7178: I worked on hate speech detection this whole year - some text is absolutely horrendous - but not at child pornography levels
futurememe#2076: the potty humor is endless
futurememe#2076: hahaa |
StellaAthena#3530: This is my attitude too
45#2247: what about the pain from reading bad stuff that corrupts people's morals and produces pain while reading?
futurememe#2076: AI Dungeon is game of the year
AI_WAIFU#2844: are we talking about my degenerate fantasies rn?
xen0#3601: it was released in 2019, so nah, doesn't work
StellaAthena#3530: No, we’re talking about ethical hazards of data distribution
futurememe#2076: 🙂 well i just discovered this year. So my game of the year
xen0#3601: though considering that gpt-3 version rolled out only in june-july...
Sid#2121: don't read it, ya dummy
Sid#2121: the only real 'bad text' is infohazards, and even those are mostly a rationalist meme
bmk#1476: oh let's not go there rn
bmk#1476: the existence of infohazards is itself an infohazard
Louis#0144: Too late
pwang99#3791: Congrats @StellaAthena on the big announcement!
StellaAthena#3530: Yeah, the less than a year gap is astounding
pwang99#3791: That's great about CoreWeave
pwang99#3791: Quick q: have you guys thought about assembling Piles in other languages?
Sid#2121: it's in the works @pwang99
pwang99#3791: awesome
Sid#2121: see #the-pile |
futurememe#2076: So if I wanted to start working with this repo....I wanted to train it on a students profile....Could I start training it to summarize it?
futurememe#2076: IE take all the data and give me a summary?
WAUthethird#4977: imo, weights alone aren't harmful and shouldn't be censored
Use cases are, but I don't think there's much more of a risk with gpt-neo than with GPT-2, aside from the coherency and data differences
futurememe#2076: https://github.com/EleutherAI/gpt-neox
Sid#2121: sorta, but it might take some extra work. A GPT-3 sized version could probably do decently just with few-shot, but what you really want to do is https://openai.com/blog/learning-to-summarize-with-human-feedback/
xen0#3601: i don't think that you can ever censor weights :thonk:
AI has biases, but usually doesn't go into them much if you don't prompt for it
futurememe#2076: Sid, I'm friending you! Gonna read!
futurememe#2076: wow
xen0#3601: you can filter words/use semantic analysis to detect a harmful sentence, but censoring weights themselves isn't possible i think?..
WAUthethird#4977: indeed, you can't really censor weights after the fact
45#2247: We should censor numbers that could be interpreted as weights encoding immoral classifiers
45#2247: Ban math
45#2247: cancel ZFC
bmk#1476: stop doing ML
WAUthethird#4977: lame, how will I get my latex generator up and running
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/795038911070994452/EitxhWtUcAIYDXO.png
maghav#7178: ML is love, ML is life
xen0#3601: imagine trying to make machine learn something |
bruh, it's a machine, how is it gonna learn lol
maghav#7178: https://giphy.com/gifs/rIq6ASPIqo2k0
chirp#4545: https://cdn.discordapp.com/attachments/729741769738158194/795041354844471336/EnrxyVEVoAILduW.png
xen0#3601: eleuther contribs training gpt-3 replication, 2021, colorized
Deleted User#0000: You know what's interesting and kinda sad
Deleted User#0000: Even with our best ais
Deleted User#0000: Like conversation ais for example
Deleted User#0000: No matter how good they get they can only give an illusion of understanding
Deleted User#0000: Not true conscious understanding
Deleted User#0000: Just mimicking humans and their language patterns
Louis#0144: Oh no
Louis#0144: Oh no
Louis#0144: Don’t u start them
Louis#0144: Bmk shield ur eyes
Deleted User#0000: What?
AI_WAIFU#2844: Too late
AI_WAIFU#2844: jk I'm actually way too fucking tired to spell this out for like the 5th time
Sid#2121: G...gary?
Louis#0144: LMAO
zphang#7252: how long until gary joins the discord |
Louis#0144: @Garymarcus
asara#0001: did you just drop one controversial topic only for a completely different one to be brought up instantly
Louis#0144: HES HERE
Louis#0144: WHAT
Deleted User#0000: Oh what did I cause
Deleted User#0000: Just pointing out something
Louis#0144: Sigh
Deleted User#0000: Like this stuff is v1 of AI shit
Deleted User#0000: True consciousness would be like v2
Louis#0144: OH NO IT IS GARY
Louis#0144: Ban Gary hurry!
Louis#0144: Before he infects us
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/795047340419448842/BETTER_MEME.jpg
Deleted User#0000: *Warning. Gary has entered the facility*
Sid#2121: ok, maybe i can illustrate the contention here with a question. How do you know when something is 'conscious'?
Deleted User#0000: Yeah I get that point
Deleted User#0000: We haven't figured out a solid way of defining that yet
Deleted User#0000: But like the neural chat ai basically are just using language patterns
Deleted User#0000: They don't 'understand' it in any way
AI_WAIFU#2844: Do you understand what it means to understand something? |
Deleted User#0000: Stop it before you cause a recursion error lol
Sid#2121: that's precisely the point
Sid#2121: @Deleted User https://www.lesswrong.com/s/5uZQHpecjn7955faL/p/fysgqk4CjAwhBgNYT
Deleted User#0000: Yeah
Deleted User#0000: But you understand my point that the current ai just mimic language
Deleted User#0000: And thanks for clearing this up though
AI_WAIFU#2844: I'll take that as a no, you don't.
Deleted User#0000: 🕺
kip#6104: okay, but we just mimic what we see other humans do too as well.
Deleted User#0000: yeah i know
Deleted User#0000: but humans learn an understanding
Deleted User#0000: of like language
Deleted User#0000: rather than just copying it
Deleted User#0000: or what others say
Deleted User#0000: but i get ur point 🙂
kip#6104: humans just copy humans, but the encoding of knowledge is perhaps different
Sid#2121: "Do you understand what it means to understand something?"
Sid#2121: If you don't know what understanding means, how do you know it isn't just sufficiently advanced mimicry
45#2247: wait aren't humans perfect symbol manipulators made of magic blood vessels impossible to replicate on silicon ?
kip#6104: yeah man you're forgetting about the soul
kip#6104: that stuffs impossible to re-create at all.
cfoster0#4356: it's that hardware DRM, maaaaan, old man's out to get us, maaaaaaaaan
kip#6104: brain go brrrr
Deleted User#0000: Screw brains all my Homies use GTX 1080ti's
turian#1607: Golf clap too soon
Bedebao#4842: Two new projects popped up all of a sudden?
Louis#0144: which
Bedebao#4842: #alphafold and #multimodal
Louis#0144: To be fair alphafold was ready to be turned into its own channel
Louis#0144: Lol
Louis#0144: idk about #multimodal tho
Louis#0144: seemed a bit rushed
Louis#0144: ngl
turian#1607: Why is there no interest in an #audio modality channel?
Louis#0144: LMAO
Bedebao#4842: 120 Days of Sodom, by the Marquis de Sade. The guy that sadism is named after.
Aran Komatsuzaki#5714: probably cuz i had been working on that for a few weeks without talking with other people much lol
Louis#0144: oh
Louis#0144: true |
Louis#0144: ok
asara#0001: there was definitely an audio channel here for like a few days or something, guess it was experimental and got removed
bmk#1476: it was inactive and we like to keep the channel list clean so we purge inactive channels every once in a while
asara#0001: yeah, either that or move them to the very bottom of the list
zphang#7252: "publish or perish"
Sphinx#2092: Wait, are you the guy that wrote that paper with Fevry?
zphang#7252: probably?
zphang#7252: depends on which paper
Sphinx#2092: The compression one.
Sphinx#2092: Maybe its a small world.
zphang#7252: lol that yea
Sphinx#2092: Nice, small world indeed.
Sahl#0630: especially thanks to compression...
triggerhappygandi#0001: @StellaAthena we have sponsors now?
triggerhappygandi#0001: Inb4 we are the next Deepmind
triggerhappygandi#0001: _Weebmind_, if you will
bmk#1476: no
bmk#1476: i will not
triggerhappygandi#0001: It is the best name
turian#1607: Fair enough |
turian#1607: Dialogue (e.g. chat) would be good. It's more interesting and dynamic than single author language. I'm not aware of any large scale dialogue corpus. Would have impact
turian#1607: Nietzsche says that in german, all words in the sentence are the slave of the verb. I can't think of anything more german than that
triggerhappygandi#0001: Is there enough open source text data
triggerhappygandi#0001: Like, even the gpt-3 training data looked to be stretching the limit. And the pile has most text data I can think of.
Sid#2121: eh, the pile is like a drop in the ocean. Common Crawl crawls like 10 TiB of text a month, there's 500GB of fan fiction alone, and there's massive parts of the internet that are untouched by CC or the pile (twitter, discord, facebook, chinese internet, etc.). 800GB is ~1.5 million books. The Library of Congress has 39 million.
cfoster0#4356: 100 TB is definitely a very bold vision, but there's no reason to set our sights meekly. "Shoot for the moon..."
Sid#2121: It’s gonna have to mostly be CC, tbf
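Sid's figures above can be sanity-checked with quick arithmetic; this is just a sketch, and the per-book average it derives is implied by his numbers, not a measured constant:

```python
# Sanity check of the sizes quoted above: 800 GB ~ 1.5 million books,
# Library of Congress ~ 39 million books.

PILE_BYTES = 800e9            # ~800 GB
BOOKS_IN_PILE_EQUIV = 1.5e6   # "~1.5 million books"
LOC_BOOKS = 39e6              # Library of Congress holdings

bytes_per_book = PILE_BYTES / BOOKS_IN_PILE_EQUIV
loc_bytes = LOC_BOOKS * bytes_per_book

print(f"{bytes_per_book / 1e3:.0f} KB per book")   # ~533 KB
print(f"{loc_bytes / 1e12:.1f} TB for the LoC")    # ~20.8 TB
```

So even just the books in the Library of Congress, at this density, would be over twenty Piles.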
Cheese is good#5316: Hey so uh I know this might sound stupid because all of you seem to have come to a mutual understanding about this whole thing, but can I ask about the distillation or whatever it's called? In other words, would someone be able to use the language model and everything on their computer or would it require a tpu or smth
cfoster0#4356: Yeah I'm expecting we'll probably be able to hit 20 TB of non-CC data and then need to fill in the rest
Louis#0144: It wouldn’t be on a local computer
Louis#0144: You’d still require beefy GPUs
Louis#0144: Probably multiple
Louis#0144: Or TPUs
Louis#0144: But you would not need thousands
goolulusaurs#1571: Probably you could combine a few different optimizations and get it to the point where it could be run locally with a beefy machine.
Louis#0144: I’m skeptical
Louis#0144: Like a 1B model can be run locally
Louis#0144: But bigger than that and you run into issues
goolulusaurs#1571: I was thinking using something like L2L https://arxiv.org/abs/2002.05645, which I think would work in terms of memory but be really slow, and something else like distillation to speed it up.
cfoster0#4356: For the above reason I'm pretty excited at the results we're seeing so far from Shortformer/PIA, which seems to both work well in training and speeds up inference |
goolulusaurs#1571: PIA?
cfoster0#4356: Position-infused attention, from the same paper. Lets you do fast caching among other benefits
cfoster0#4356: https://arxiv.org/abs/2012.15832
CRG#8707: I think the T5 relative bias is probably the best option https://discord.com/channels/729741769192767510/730090096287547444/794644425094463528
cfoster0#4356: Oh, does that speed up inference too?
CRG#8707: Yeah, it also lets you cache results https://discord.com/channels/729741769192767510/747850033994662000/794279545347506186
CRG#8707: A 2048 context 96 layer model with caching would have a maximum attention span of 196608 tokens.
Louis#0144: Wtf
Louis#0144: That’s crazy
Louis#0144: How do you use this in practice
CRG#8707: The effective span would be about half of that, but still ~100K https://cdn.discordapp.com/attachments/729741769738158194/795317078634528768/80dff9b9f45ac1a1c6ca2caad358be0b.png
CRG#8707: Novel co-writing?
cfoster0#4356: You'd still need to hold all that in memory, no? 🤔
CRG#8707: Only the last span really
cfoster0#4356: Oh I see. You'd only need the rightmost span of 2048 at each layer
CRG#8707: You attend to tokens that have already attended to tokens... until 100K
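The 196608-token figure above follows from depth times context length; a minimal sketch of the arithmetic (the function name is illustrative):

```python
# With Transformer-XL-style per-layer caching, layer k can see information
# that already propagated through the caches of the layers below it, so the
# maximum span grows linearly with depth.

def max_attention_span(context_len: int, n_layers: int) -> int:
    """Maximum token distance information can travel via per-layer caches."""
    return context_len * n_layers

span = max_attention_span(2048, 96)
print(span)        # 196608 tokens, matching the number quoted above
print(span // 2)   # 98304, the rough "effective" span (about half)
```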
goolulusaurs#1571: That's like a recurrence mechanism, sounds similar to Transformer-XL
Louis#0144: No I know that
CRG#8707: Yeah its the TrXL mechanism
Louis#0144: I meant like |
Louis#0144: How do I literally use this
Louis#0144: Is there a model that does this
CRG#8707: Other than Transformer-XL?
Louis#0144: Ye
CRG#8707: XL-net and T5 could do it.
cfoster0#4356: At this point I'm wondering whether training with smaller context windows + caching (+ effective spans) might be the move
CRG#8707: Dynamic attention span could also be feasible with GPUs
CRG#8707: https://arxiv.org/abs/1905.07799
rivalset#4984: would you use caching only during inference or also during training?
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/795321376927514634/0mrV1VMF_G2mhQ9Jj.png
rivalset#4984: so BPTT
CRG#8707: No
CRG#8707: You stop the gradient
rivalset#4984: oh right
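The caching scheme described above can be sketched as follows. This is a framework-free toy, with an assumed signature rather than anyone's actual code; the stop-gradient step (which is what makes this not-BPTT) is noted in a comment:

```python
# Toy sketch of Transformer-XL-style segment recurrence: each layer caches
# its input from the previous segment, and the current segment attends over
# [cache ++ current tokens]. In a real implementation the cache is detached
# (gradient stopped), so no gradients flow across segment boundaries.

def forward_with_memory(layers, segment, mems):
    """layers: one callable per layer, (hidden, context) -> hidden.
    segment: hidden states for the current segment.
    mems: per-layer caches from the previous segment."""
    new_mems, h = [], segment
    for layer, mem in zip(layers, mems):
        new_mems.append(list(h))   # cache this layer's input for the next segment
                                   # (a real impl would detach it here)
        h = layer(h, mem + h)      # attend over cached tokens plus current segment
    return h, new_mems
```

At inference time the same cache is what lets you avoid recomputing past activations, which is the speedup discussed above.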
cfoster0#4356: Yeah... was trying to think through the simple, easy to implement optimizations, since I already feel like I'm backseat driving 😅
CRG#8707: Yeah, could be a nightmare to mesh with DeepSpeed and everything
wassp#2544: Hi everyone, thank you for the invitation to the server!
wassp#2544: How can I as a newcomer make myself useful here?
AI_WAIFU#2844: How much do you know?
kip#6104: i'd like to ask the same question. i think i know quite a bit in general about machine learning though i have not worked with deep-speed before |
Sid#2121: most of our WIP repos have issues with things that need doing. Most pressing rn is gpt-neox. We're trying to get better at documenting work and using git properly. If you think you can tackle any of the issues, make a comment in github saying you're going to take it on
Sid#2121: and then take it on lol
Sid#2121: the pile v2 is also in the works, but i think we're all taking a break from that for a bit
Sid#2121: I'll try and get round to making git issues for that soon
Sid#2121: but we need to figure out how to extract Common Crawl well, in multiple languages. Another big plus would be extracting PDFs well, which is a super hard problem.
wassp#2544: Gotcha @Sid , thanks
cfoster0#4356: +1 on these as medium-term goals. If someone worked out an algorithm for robust multilingual HTML-to-text, we'd be well on our way for v2. We've also got a bunch of PDFs waiting in the wings.
bmk#1476: the alternate pathway is figuring out robust multilingual garbage cleaning of WETs
bmk#1476: which is also hard and unsolved
bmk#1476: the good news is that we can run ablation experiments to figure out which extraction has the highest quality
3dprint_the_world#6486: what kind of garbage cleaning
bmk#1476: Lemme show some examples
bmk#1476: this is filtered for english only, btw
bmk#1476: any garbage in here has to be cleanable not only in english but also in all other languages
bmk#1476: actually one sec this is in json
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/795393186796273694/raw_cc.txt
bmk#1476: ok there you go
StellaAthena#3530: @Sid TBH I think we should just stop saying “the pile V2 is in the works” when people ask what they can help with.
StellaAthena#3530: Also, PSA: I'm in the middle of making a "jobs board" that I am hoping to put out tomorrow
bmk#1476: Reminder that lead role on multilingual Pile is open
3dprint_the_world#6486: awesome, does it have research-y type stuff too
3dprint_the_world#6486: I'm sure lots of people would be keen on helping out with maths, data analysis, etc. too
Gurkenglas#7362: I hear you're planning to bring about an open-source GPT-3. Wouldn't that increase existential risk?
cognomen#6297: define the risk
chilli#5665: There’s a section in the paper about it
Louis#0144: Hey dorks
Louis#0144: How u guys doing
Louis#0144: Doubt it
3dprint_the_world#6486: why?
Gurkenglas#7362: Suppose a project that starts from GPT-3 takes 1d100 years to reach AGI and 1d100 years to make their approach safe. If there is one project, it has a 50% chance of working out safely. If there are a hundred such projects, you will almost immediately have one of them finish unsafely.
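The toy model just stated can be simulated directly. A Monte Carlo sketch, where the uniform 1d100 rolls and the tie rule are the model's own assumptions, not claims about real timelines:

```python
import random

# Each project independently rolls 1d100 years to reach AGI and 1d100 years
# to make its approach safe; a project "finishes safely" if the safety work
# is done by the time it reaches AGI.

def p_first_finisher_safe(n_projects, trials=100_000, seed=0):
    """Probability that the first project to reach AGI had finished
    its safety work by then, under the 1d100/1d100 toy model."""
    rng = random.Random(seed)
    safe = 0
    for _ in range(trials):
        rolls = [(rng.randint(1, 100), rng.randint(1, 100))
                 for _ in range(n_projects)]          # (years_to_agi, years_to_safety)
        agi, safety = min(rolls, key=lambda r: r[0])  # whoever reaches AGI first
        if safety <= agi:
            safe += 1
    return safe / trials

print(p_first_finisher_safe(1))    # ~0.5, as claimed for a single project
print(p_first_finisher_safe(100))  # near 0: the fastest of 100 is rarely also safe
```

Whether those premises describe reality is exactly what gets disputed in the replies below.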
3dprint_the_world#6486: why?
Gurkenglas#7362: Which part?
3dprint_the_world#6486: everything
Gurkenglas#7362: The way one usually does these debates is I say "A and B" and then you pick one for me to defend.
Louis#0144: He’s claiming your premises don’t make sense
Louis#0144: Lol
Gurkenglas#7362: Okay. What model would you use for predicting when a project finishes, and how likely it is to finish safely? That might be in "the paper"... link?
StellaAthena#3530: If it’s safe to sell to MSFT it’s safe to open source
Gurkenglas#7362: If my above premises were correct, it would be safe to sell to MSFT but not safe to open-source, yes? |
Hiccup#6835: yo
cfoster0#4356: (haven't you been around here for a while btw? 😄 this has been the plan all along)
StellaAthena#3530: I think that analysis makes absolutely no sense. I don’t even know how to respond to it tbh.
Hiccup#6835: anybody know if the technology used in west world is similar to what gpt3 is like or would it be more advanced
Hiccup#6835: like giving your ai a conscious seems kinda odd
StellaAthena#3530: Nothing real is like West World
Gurkenglas#7362: I can relax the premises. Suppose that the more carefully a project proceeds, the later it finishes, but the more likely it is to finish safely. Then there is a coordination problem. These are harder with more players, so we should minimize the number of players. Does this make more sense?
3dprint_the_world#6486: let's not go there...
bmk#1476: the word "consciousness" sets off alarm bells lol
Hiccup#6835: just an internal voice
Hiccup#6835: is what i meant
bmk#1476: also i just googled it and west world is a movie lmao
bmk#1476: or, er, tv series
bmk#1476: fiction
chilli#5665: It’s a tv show
chilli#5665: Lol
Hiccup#6835: its a fictional tv series
Sphinx#2092: It's both.
bmk#1476: they use the power of magic™, end of story
Hiccup#6835: im watching it rn |
Sphinx#2092: It was originally a movie series, with only the first movie being good.
Hiccup#6835: but thats not to say the technology couldnt be real
chilli#5665: Hence why “I don’t find this conversation interesting “
Gurkenglas#7362: What does Stella's reaction to my last post mean?
bmk#1476: @Gurkenglas i think our general position is that "the most dangrous thing about gpt3 is the *information* that scaling works, not the model itself, and openai already released that knowledge"
chilli#5665: If you hover over it you’ll see it says gamer yes
Hiccup#6835: we shouldnt use our imagination to design the world - @chilli
chilli#5665: What does that even mean
Gurkenglas#7362: not on the mobile app ._.
Hiccup#6835: lol it just felt like you are using the show being fiction as a this conversation isnt interesting thing
3dprint_the_world#6486: basically your argument boils down to "there is an incentive to make AI quickly rather than safely", correct?
3dprint_the_world#6486: it really has nothing to do with the number of 'players' even
chilli#5665: Speculating about the way a fictional AI is implemented isn’t interesting to me
bmk#1476: your question sounds like "does the warp drive in star trek use spacex's XYX-123 model rocket?"
Hiccup#6835: well in the show they talk about how it is designed
chilli#5665: It’s like speculating about the physics behind Harry Potter
bismarck91#5255: lol.
chilli#5665: (And yes I’ve read hpmor)
Hiccup#6835: but i mean the idea that giving an ai an inner voice to think to itself like humans do would that be interesting
bmk#1476: so take this as an official Mod Warning to drop this conversation before it gets out of hand |
bmk#1476: we're not here to speculate about some movie's fictional AI
cognomen#6297: or implement it
Gurkenglas#7362: How do you think you know that there isn't already a way to generate useful research given the right prompt protocol?
cognomen#6297: (yet)
Hiccup#6835: well alright then, I'll just read what you guys are talking about
bmk#1476: i don't believe gpt3 itself has enough capacity to do so
bmk#1476: i believe future models will, though
bmk#1476: so the dangerous thing is the *information* that scaling works
Veedrac#0443: Congrats on a pretty crazy 2021 so far guys
chilli#5665: Yes at this rate we’ll out publish google
chilli#5665: Maybe
StellaAthena#3530: Lol
chilli#5665: Wait no probably not lol
chilli#5665: Google probably publishes more than once a day on average
bismarck91#5255: https://chuvpilo.medium.com/ai-research-rankings-2020-can-the-united-states-stay-ahead-of-china-61cf14b1216
bismarck91#5255: There's some metrics in there.
bmk#1476: icml https://cdn.discordapp.com/attachments/729741769738158194/795439409884430356/unknown.png
bmk#1476: if we can publish more than 5 papers at icml we can get into the top 30
chilli#5665: Lol that’s hard
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/795439538977243226/unknown.png |
bmk#1476: or at least we can try to have more than 5 papers published anywhere
bismarck91#5255: None.ai
chilli#5665: That’s doable but hard
bmk#1476: certainly not impossible
Gurkenglas#7362: That's the premise, yes.
bmk#1476: we already have at least 5 research ideas
3dprint_the_world#6486: @Gurkenglas ok awesome, then we agree
bmk#1476: we just need people to do them
Gurkenglas#7362: The premise implies that more players are bad, because the least careful player decides the game.
3dprint_the_world#6486: why? Surely the same incentive applies regardless of the number of players in the game.
chilli#5665: 5 research ideas != 5 upcoming papers != 5 accepted papers
galacticglum#6741: if only
bmk#1476: simple we just get more research ideas then
chilli#5665: Of course
chilli#5665: I think it’s doable
Gurkenglas#7362: Different players care differently about safety and finishing early. Some even care about finishing earlier than other players.
3dprint_the_world#6486: *but none of this has to do with number of players*
3dprint_the_world#6486: I think you may be making an implicit assumption, without realizing it, that if there's just one player, they are going to inherently be more ethical than if there are many players.
chilli#5665: But I think our research ideas also tend to be more ambitious on average lol
bmk#1476: a few just off the top of my head |
aran's moe scaling paper is well underway
stella has a lot of theory paper ideas
i want to put together a pile scaling law paper
AI_WAIFU is working on a Thing™
there's two different datasets: a spiritual successor to the pile and a CC-based dataset that i want to drag archivist along for
the context length thing is probably still a thing we can do
we're eventually going to do the Super Rigorous scaling laws paper
aran's doing the vae stuff
connor has alignment stuff he wants to do
3dprint_the_world#6486: but the same incentives apply to *everyone*
bmk#1476: some of these points expand to more than one paper
3dprint_the_world#6486: and you can make the argument that actually, the more people are involved, the better
Gurkenglas#7362: Hm? Coordination games, Tragedies of the Commons, Unilateralist's Curses, hit harder the more players there are. The more countries have nukes, the likelier a nuclear exchange. A person is smart, but people are stupid.
3dprint_the_world#6486: taking the limit, if everyone on the planet is involved in AI research, there's a much higher chance of it being aligned towards humanity's collective interests (not that that's necessarily a good thing, of course)
Veedrac#0443: I feel I have sparked a dark and dangerous digression lol.
3dprint_the_world#6486: the analogy to nukes doesn't apply because AI is *already* monopolized (not fully but to a large extent) by *private* companies!
chilli#5665: What’s the AI_WAIFU thing
3dprint_the_world#6486: it would be like if General Electric had their own proprietary nuclear weapon
chilli#5665: And do any conferences allow 2d girls to publish |
bmk#1476: wdym
3dprint_the_world#6486: taking the limit to all of humanity, if everyone is involved in building AIs then there's more likelihood of it being aligned to everyone's interests than a small number of people (of course, that may not even necessarily be a good thing)
turian#1607: Not being snarky or anything, why is academic publishing a goal of yours? Happy to help
bmk#1476: i mean, the usual: career advancement opportunities
chilli#5665: Academic papers still have the benefit of being 3rd party “proof of legitimacy “
bmk#1476: yeah
chilli#5665: If they’ve never heard of the pile
bmk#1476: once people start taking eleuther seriously, it'll get us more resources
Gurkenglas#7362: If everyone's building their own AI then whichever one finishes first might end up aligned to any one random person. If it's you, that might mean that it cares about everyone.
bmk#1476: more social capital
bmk#1476: as an organization, we'll have more agency
Gurkenglas#7362: I think your model is that if there are many AIs developed at the same time they're going to each have a say in where the world goes...
bmk#1476: Legit™ researchers will be more interested in helping a Real Legit Grassroots Research Org than a random discord channel
chilli#5665: Well, I think the pile has already gotten eleuther a lot of legitimacy
bmk#1476: sure, but the ceiling is high
Veedrac#0443: @bmk It was just a joking reference to paper counts being a harmful thing for academia.
bmk#1476: oh, lol
bmk#1476: i meant like "stella has a lot of theory paper ideas" isn't a single paper
chilli#5665: It proves that we’re more than a discord channel with big plans that never actually goes through with them
Gurkenglas#7362: My model is that there's going to be a hard takeoff that is probably going to be reached by exactly one AI at a time because it happens so fast and afterward it can just hack the planet. |
3dprint_the_world#6486: @Gurkenglas taking your argument to its logical conclusion, no one should ever share any information about how to make an AI. All research should be closed.
Now, you talk about one hell of a coordination problem...
Gurkenglas#7362: I fear if two AIs take off at the same time we would be in far greater trouble...
Sahl#0630: That’s potentially better for us
Sahl#0630: Probably not tho
Gurkenglas#7362: I agree that the research should be closed, and coordinated the hard way, using trust networks and/or centralized overseers. And I realize that in such an environment I probably wouldn't get to be part of the world's plot... but surviving it is more important.
3dprint_the_world#6486: ok good luck
Hiccup#6835: why's it take so many resources to make an ai
Hiccup#6835: i mean thats why open ai has become closed ai right cuz its expensive to make a nice ai
Gurkenglas#7362: I agree that OpenAI's name is wrong, but I am glad that they realized that openness just kills everyone.
3dprint_the_world#6486: personally, I think research should be open, and grassroots organizations like EleutherAI should be empowered, and MSFT and OAI shouldn't be sucking up all the oxygen in the room. But that's just me.
3dprint_the_world#6486: but apparently you're ok with everyone just handing over the responsibility of AI development to Microsoft
Hiccup#6835: eventually we will have enough compute anybody could make an ai i think rn the technology just isnt where we want it to reach the level of ai that we are trying to reach
Gurkenglas#7362: It doesn't have to be Microsoft, for all I care Hitler could be the monopolist
Deleted User#0000: Hello.
Deleted User#0000: How is everyone>
Deleted User#0000: ?
Gurkenglas#7362: This shouldn't be an argument about whether grassroots movements are more moral, but about whether they kill everyone.
Hiccup#6835: im good
Deleted User#0000: Cool |
cognomen#6297: yes, clearly it would be a terrible thing if the plans for the death star were to fall into the hands of the rebellion
Hiccup#6835: im watching west world 😛
Deleted User#0000: Anyone wanting to see my custom quantum computing QI discord bot, DM me. I will let you in the server to see. 🙂
3dprint_the_world#6486: lol
3dprint_the_world#6486: cool
Gurkenglas#7362: Have you read The Sword of Good?
3dprint_the_world#6486: no
Gurkenglas#7362: Have you read Scott's post on Mistake Theory vs Conflict Theory?
3dprint_the_world#6486: yes
Gurkenglas#7362: It sounds like I've accidentally taken to using Mistake Theory while you are one of those evil Conflict Theorists.
chilli#5665: LOL
3dprint_the_world#6486: ok going to go do some real work now
bmk#1476: consider this an official mod warning: please stop advertising your "quantum computing discord bot" here
Deleted User#0000: ok
Deleted User#0000: was just saying
Deleted User#0000: understood
Lucas!#1234: link?
asparagui#6391: @Lucas! https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/
Lucas!#1234: ty
Deleted User#0000: well |
Deleted User#0000: this group goes with tensorflow or pytorch?
chilli#5665: :berk:
chilli#5665: pytorch unless we have to use tensorflow (which is quite common)
bmk#1476: both
bmk#1476: all of the above
Deleted User#0000: thank you, nice group btw
triggerhappygandi#0001: We should upgrade to mxnet:berk:
triggerhappygandi#0001: Since most people here are masochists
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/795526608198041610/20210104_183507.jpg
Deleted User#0000: Welp
Deleted User#0000: I made it a bit too knowledgeable
Deleted User#0000: Because there is so much data it takes 36 seconds to answer a question
Deleted User#0000: I guess it's time to optimise it now
siri#5473: Ahem, fuck openai
erin#5432: ^
bmk#1476: >.> tfw a sloppy screenshot of an announcement post gets more likes than the pile post https://cdn.discordapp.com/attachments/729741769738158194/795574285821149204/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/795574317135560724/unknown.png
3dprint_the_world#6486: nice
kindiana#1016: > 9% battery
:nooo: |
erin#5432: bruuuu
bmk#1476: im mildly salty
bmk#1476: we spend all that time putting together the pile paper and nobody really cares; then we make a discord announcement of something that's generally been floating around the discord for a bit and that has no deliverables yet and suddenly everyone goes to upvote it
bmk#1476: (this reminds me of how real research would get like 20 upvotes on r/ML while drama would get thousands)
3dprint_the_world#6486: I bet the cross-post on /r/funny will get even more likes
bmk#1476: the x-post of what, the coreweave thing?
3dprint_the_world#6486: yeah sure
bmk#1476: lol
bmk#1476: has anyone done it yet
3dprint_the_world#6486: fortunately not, I don't think
3dprint_the_world#6486: but the internet is fickle
bmk#1476: tbf i think this method of delivery is increasing the countersignalling effect
bmk#1476: the fact that it's a shitty discord screenshot *increases* how exclusive the info seems
bmk#1476: whereas if we had made an official blog post about it, it signals that we *want* people to care
3dprint_the_world#6486: yeah that could be a part of it for sure
j o e#4696: I joined for this exact reason ^
3dprint_the_world#6486: people generally have a knee jerk reaction to what is perceived as advertising
Aran Komatsuzaki#5714: tbf openai is fucking itself, as people are leaving spontaneously lol
j o e#4696: It seemed like a small group of people passionate about AI, which is much more enticing than an official org/blog post
3dprint_the_world#6486: any more resignations?
bmk#1476: i propose we amp this up a notch
bmk#1476: from now on, we no longer use the #announcements channel
bmk#1476: we simply post things in #off-topic
3dprint_the_world#6486: make a private channel, make announcements there, then post screenshots in #off-topic
j o e#4696: :galaxybrain:
bmk#1476: i am preserving this for archival before you have a chance to edit it https://cdn.discordapp.com/attachments/729741769738158194/795576895432425502/unknown.png
j o e#4696: it is well known that every discord server ever has a :galaxybrain: and :smoothbrain: analogue
j o e#4696: it's just a matter of finding it
bmk#1476: also :berk:
bmk#1476: among our highlights, we also have :gameryes: , :virgin: , :chad: , :brr: , :nooo: , :yud: , :firealarm: , :mesh: , :ultrazucc: , :lucid: , :yarr:
j o e#4696: :mesh: = :kekw: where I'm from
bmk#1476: :lucid: , :firealarm: , :yud: , :ultrazucc: are our most used ones
j o e#4696: I like that line up, might steal some
j o e#4696: I bet :virgin: gets used a lot
bmk#1476: not really actually
Aran Komatsuzaki#5714: we instead use this: :yarr:
bmk#1476: :carlos:
bmk#1476: :carlos2:
bmk#1476: :foom:
bmk#1476: we should use foom more
erin#5432: lucidrainmssss
bmk#1476: :lucid:
j o e#4696: I'm pretty interested in getting involved in #the-pile , it looks fun
j o e#4696: I'm working on a data mining startup/prototype and I think it could help
bmk#1476: That would be awesome
j o e#4696: what's the best way to get involved?
bmk#1476: Our goal for v2 is 100TB
bmk#1476: We only have about 30-40TB of that
bmk#1476: Help us find another 60TB of text somewhere
triggerhappygandi#0001: lmao wtf
thenightocean#6100: Library of Babel?
triggerhappygandi#0001: Have you tried crawling twitter? Literally every single book ever? Facebook?
bmk#1476: Only if you sort by quality
Aran Komatsuzaki#5714: it's a great thing, since you don't even have to advertise by yourself. eleutherai is outsourcing
j o e#4696: @Twisterr1000
bmk#1476: Less twitter pls
thenightocean#6100: Its a real thing btw: https://libraryofbabel.info/
triggerhappygandi#0001: Hmmm. Agreed. Then Facebook is not worthwhile either
pdillis#2914: Parler? :KEK:
triggerhappygandi#0001: :mesh:
triggerhappygandi#0001: @bmk is this done? https://www.loc.gov/about/general-information/
bmk#1476: Is it 60TB
triggerhappygandi#0001: Idk. Probably not. But still a good 10TB+
bmk#1476: If you can figure out how to scrape it I'll include it
triggerhappygandi#0001: Will check
bmk#1476: Actually how about this, to make your life easier: I'll toss in 10TB of github
bmk#1476: So y'all only need to get 50TB of text
bmk#1476: Which is clearly much easier
j o e#4696: @bmk where are you storing your 40TB + of data?
bmk#1476: 40TB is peanuts
triggerhappygandi#0001: Assuming 500 KB avg length, 170M * 0.5 MB = 85 TB
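[The back-of-envelope estimate above works out as stated; a minimal sketch, where the 170M item count and the 0.5 MB average text size per item are the assumptions taken from the message:]

```python
# Rough storage estimate: items * average text size per item.
items = 170_000_000          # assumed number of items (e.g. Library of Congress scale)
avg_size_mb = 0.5            # assumed average text size per item, in MB

# Convert total MB to TB using decimal units (1 TB = 1,000,000 MB).
total_tb = items * avg_size_mb / 1_000_000
print(f"{total_tb:.0f} TB")  # 85 TB
```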
bmk#1476: I have more than 40TB worth of disks on my desk at this moment
bmk#1476: And I'm not even that into storage
triggerhappygandi#0001: Damn. Are you a tech youtuber or something
j o e#4696: ah okay so it's on your personal drives, not stored in some central location yet
triggerhappygandi#0001: I have like 5 TB storage ever purchased
bmk#1476: Lmao we're literally replicating gpt3 and your question is where we're going to get *storage*?
j o e#4696: no not really
j o e#4696: I was just interested in what you were using, whether it was centralised or not
bmk#1476: Don't worry we'll figure it out
j o e#4696: just for my own benefit because I need massive amounts of storage for my project
bmk#1476: Also I was mostly replying to @triggerhappygandi with my snide comment
bmk#1476: This
j o e#4696: all good
j o e#4696: we're currently using MongoDB Atlas, but I'm not sure that's going to last long :mesh:
bmk#1476: Let's just say we're friends with someone who has unlimited amounts of storage
bmk#1476: Anyways so storage isn't a concern
bmk#1476: We just need to find the data in the first place
triggerhappygandi#0001: But why do you have 40 TB _physical_ storage
j o e#4696: sweet, I'd love to help with that
j o e#4696: I run an AI society at my university with 200+ people, perhaps I could create a Data Mining project/competition to see who can get the most useable text data
bmk#1476: Hmm... If that's the case, then i have a better idea
bmk#1476: Let's pop over to #the-pile
Deleted User#0000: use the "infinite" storage service of google xd
Neuromancer42🔮#6297: Anime Stella photobomb pic got more upboats because well-known u/Wiskkey x-posted it from r/GPT-3 where it was already very successful (you're missing my troll American Eagle btw). I gilded it once it showed up on ML. That's just how the Reddit recommendation algos work half the time. There's an art to successful Reddit posts.
Deleted User#0000: buying hard drives is so expensive
Deleted User#0000: well, as you know (i guess) gpt-3 cost like 4± million dollars to train
cfoster0#4356: Maybe
cfoster0#4356: Interesting...
Deleted User#0000: it does cuz it has the largest amount of data |
Deleted User#0000: it is unsupervised, that means it manages a probability for each next sentence based on what is more commonly used
StellaAthena#3530: Hard drive space is comically cheap. You can buy an external HD with 10 TB of storage for less than 200 USD.
cfoster0#4356: While yes it is unsupervised, as you scale up, the additional data you need grows much more slowly than the additional compute you need. That means the overall costs from data storage end up being cheap by comparison (to, say, bandwidth)
Louis#0144: Even cheaper if u go with tapes
Louis#0144: I have a tape NAS in my house
triggerhappygandi#0001: I am perpetually waiting for more storage in a single drive :berk:
triggerhappygandi#0001: "We have 1 TB storage smaller than a nail? Cool. Will wait for 1PB version"
triggerhappygandi#0001: I don't see art in most subreddits. Only unadulterated autism tbh
triggerhappygandi#0001: :mesh:
Louis#0144: Reddit is pretty consistently an awful place
Louis#0144: 🤷♂️
Louis#0144: In a lot of ways the format of 4chan is significantly better than Reddit, 4chan just lacks moderation
Louis#0144: I would honestly put Reddit at the bottom of the social networks
triggerhappygandi#0001: That is what makes it so funny
Louis#0144: I don’t think it’ll be around much longer ngl
triggerhappygandi#0001: 4chan I mean
triggerhappygandi#0001: Can't wait. Some subreddits are more enraged than all of twitter combined.
Louis#0144: Yeah
Louis#0144: I think a version of 4chan could be made for normies that takes off as well as twitter did
Louis#0144: It would just need a slightly tweaked presentation