bmk#1476: actually, i feel like this has already been done before, no?
Noa Nabeshima#0290: sure seems like it should have, but I don't know about it.
bmk#1476: pretty sure shannon ran one
bmk#1476: pretty sure many others have done so too
Noa Nabeshima#0290: Oh, you mean the entropy of language?
bmk#1476: yeah
Noa Nabeshima#0290: https://cdn.discordapp.com/attachments/729741769738158194/760699974484361246/unknown.png
Noa Nabeshima#0290: owo
Noa Nabeshima#0290: https://www.princeton.edu/~wbialek/rome/refs/shannon_51.pdf
bmk#1476: *notices your ngram* OWO what's this
bmk#1476: shannon was a true pioneer
Noa Nabeshima#0290: Well Shannon estimated entropy with a human experimenter as far as I can tell
Noa Nabeshima#0290: but GPT-3 is better than me as far as I can tell at predicting the next word
Noa Nabeshima#0290: in most cases
Noa Nabeshima#0290: the true entropy of language, then, is plausibly much less than Shannon's estimate
Noa Nabeshima#0290: I'm curious in terms of bits/character how efficiently GPT-3 can encode language, does anyone know the answer to that?
Noa Nabeshima#0290: bits/token is easy
Noa Nabeshima#0290: Anyone know the average chars/token weighted by probability of token?
bmk#1476: Like, 4.3
bmk#1476: Off the top of my head
bmk#1476: Don't quote me on that
kindiana#1016: take a dataset you are interested in and run it through a gpt tokenizer, it might vary a little depending on the specific dataset
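For reference, a rough sketch of the measurement kindiana is describing, using the Hugging Face GPT-2 tokenizer (the same BPE vocabulary GPT-3 uses); the file path is a placeholder:

```python
# Rough sketch: average characters per token over a dataset of interest.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

total_chars, total_tokens = 0, 0
with open("my_dataset.txt", encoding="utf-8") as f:  # placeholder path
    for line in f:
        ids = tokenizer.encode(line)
        total_chars += len(line)
        total_tokens += len(ids)

print("avg chars/token:", total_chars / max(total_tokens, 1))
```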
Noa Nabeshima#0290: If 4.3, and it makes sense to multiply bits/token by tokens/char, we get .4 bits/char which is less than Shannon's empirical entropy lower bound at 100 characters. Ofc 100 characters is much less than 2048 tokens * 4.3 chars/token
Noa Nabeshima#0290: But you can probably fit a law to the lower bound and extrapolate to 9000 tokens
bmk#1476: keep in mind: as context grows, *conditional* entropy drops
kindiana#1016: the training loss is ln, not log2 btw, so its actually like 2.3 bits per token
kindiana#1016: or like 0.53 bpc
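Spelled out as a back-of-the-envelope conversion (the ~1.6 nats/token loss here is an assumed figure consistent with the numbers above):

```python
import math

loss_nats = 1.6          # assumed per-token training loss in nats
chars_per_token = 4.3    # bmk's rough estimate from above

bits_per_token = loss_nats / math.log(2)          # nats -> bits
bits_per_char = bits_per_token / chars_per_token
print(bits_per_token, bits_per_char)              # ~2.3 bits/token, ~0.54 bits/char
```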
Noa Nabeshima#0290: Oh, thank you
Noa Nabeshima#0290: How do you fit possible empirical laws to data for extrapolation? EG in Scaling Laws for Neural Language Models, how did they find simple mathematical expressions that fit the data well?
Noa Nabeshima#0290: Do you just keep fitting variations on exponentiation, addition, multiplication, etc, that look like they would work?
kindiana#1016: i dont imagine it took them very many tries
kindiana#1016: with the data on a plot, and with some understanding of the effects, its probs not too difficult to come up with an expression
bmk#1476: everything is either a line, an exponential curve, or an exponential curve on its side
Noa Nabeshima#0290: Well I fit a function to the data and it looks like even with the extra context, assuming that our GPT-3 bits per character estimate is correct enough above and Shannon's empirical estimate lends itself well to being fitted by an exponential, GPT-3 is beating Shannon's humans.
Noa Nabeshima#0290: https://cdn.discordapp.com/attachments/729741769738158194/760727341281705984/unknown.png
Noa Nabeshima#0290: Estimated entropy at 2048*4.3 characters is .9536
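For what it's worth, a minimal sketch of that kind of fit-and-extrapolate using scipy; the data points and the power-law-plus-offset form are placeholders, not Shannon's actual values:

```python
# Fit a decaying curve to entropy-vs-context estimates and extrapolate to a long context.
import numpy as np
from scipy.optimize import curve_fit

context_chars = np.array([1, 2, 4, 8, 16, 32, 64, 100])            # characters of context
entropy_bpc = np.array([4.0, 3.4, 2.9, 2.5, 2.1, 1.8, 1.5, 1.3])   # made-up lower-bound values

def model(n, a, b, c):
    # entropy decays toward an asymptote c as context grows
    return a * np.power(n, -b) + c

params, _ = curve_fit(model, context_chars, entropy_bpc, p0=[3.0, 0.5, 1.0])
print("extrapolated bits/char at 2048 * 4.3 chars:", model(2048 * 4.3, *params))
```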
kindiana#1016: I'd totally believe gpt3 is better than humans at next token prediction
kindiana#1016: I think a more fair comparison though would be to have n completions generated by GPT and have humans evaluate which one they think is more likely, and derive entropy from that
FractalCycle#0001: the OpenAI progression for me went:
>Oooh cool, an elon musk collab with sam altman!
>they're doing some dank research!
>huh, elon left... prolly just a technicality or something.
>oh, they have an alignment guy!
>what's their approach?
>IDA... wait, that doesn't solve the first step where a human trains values...
>gpt-3 is cool!
>alright, they're not releasing it, probably it's too dangerous...
>okay, the closed beta, that's just to prove how good it is...
>okay now they're charging money for it
>okay now they're licensing it to microsoft
StellaAthena#3530: I think your journey on this is quite common @FractalCycle. The only place I differ is that I became disillusioned quicker. (@Noa Nabeshima @bmk)
StellaAthena#3530: Also, I don’t think Elon Musk’s presence has a positive correlation with being morally good or anything like that.
StellaAthena#3530: Lol I say that, open twitter, and immediately find this: musk will “not take a Covid-19 vaccine when one becomes available, and declined to say whether he feels a duty to pay employees who want to stay home to avoid contracting the virus.”
https://t.co/f7z4Fm6oeY?amp=1
bmk#1476: I'm.. not the biggest fan of Musk, so my opinion is a bit biased
bmk#1476: But I think being associated with musk is probably net negative
bmk#1476: Also I'm actually not even surprised, I just think OA communicated this so poorly that they turned a non-issue into a major controversy
Daj#7482: > Also I'm actually not even surprised, I just think OA communicated this so poorly that they turned a non-issue into a major controversy
@bmk OA in a nutshell
Noa Nabeshima#0290: Say you actually had OpenAI's stated mission of making the world better by making AGI before other people and then using it in a responsible and principled way. If the main bottleneck to better systems is money (for researchers and compute), you need to get money *somehow*. If each 100x parameter scaling of GPT cost 100x money (ignoring parallelization requirements and algorithmic improvements) OAI can't actually last two scalings with their 1B from MSFT. (Please correct me, I don't know how capabilities scales with money.) Even if GPT-esque systems won't ultimately be the most promising track, you'd better bet any competent, general system will require a lot of compute and money. Pairing with Microsoft in the short term is one way of staying alive and on track in the long term in order to fulfill the stated mission. GPT-3 is not AGI.
StellaAthena#3530: I’m skeptical of this type of reasoning because it seems to justify almost anything.
StellaAthena#3530: You need some rather strong core principles to not end up at “slavery is good because it lets us generate cash for AGI research”
FractalCycle#0001: > staying alive and on track in the long term
correct me if i'm wrong, but they seemed to have a good amount of funding/funding-commitments early on
Noa Nabeshima#0290: I currently believe that in many worlds where OAI is honest about its institutional values, it would be doing this sort of thing to make money.
StellaAthena#3530: I think that that depends a lot on what “this sort of thing” is. What the deal with MSFT actually constitutes is unclear to me
StellaAthena#3530: I think that MSFT gets exclusive rights to offer GPT-as-a-Service, but that OpenAI will continue to both sell and provide free access to relevant people.
StellaAthena#3530: But that’s me reading into their public statements and not at all something anyone has clearly stated
StellaAthena#3530: If this deal puts GPT-3 in a box and gives the key to Microsoft, then I disagree.
bmk#1476: my biggest issue is
StellaAthena#3530: That’s how their original press release read
StellaAthena#3530: But who knows
bmk#1476: the way they've done this makes absolutely no sense whatsoever
bmk#1476: they have absolutely no reason to make people think that they're "exclusively licencing" GPT3 to microsoft
StellaAthena#3530: Also, tbh, unless there’s some sort of profit sharing arrangement OpenAI is throwing money away
bmk#1476: i know corporate euphemisms are an issue sometimes
bmk#1476: but this is literally inaccurate
Noa Nabeshima#0290: Of course there's some sort of profit sharing arrangement, though.
StellaAthena#3530: And even then they are probably doing so, but at least they’re doing so in an understandable-ish way
StellaAthena#3530: GPT-3 can easily generate hundreds of millions in revenue over the next 10 years
Noa Nabeshima#0290: At the beginning of 2018 OAI had 10.5M in assets, at the end OAI had 26M in assets
bmk#1476: if i was at OA i would have called it "allowing M$ to use GPT3 for their products" or something
bmk#1476: and buried the exclusive bit
StellaAthena#3530: If OpenAI put me in charge of BD I could increase their operating revenue by an order of magnitude
FractalCycle#0001: the OpenAI post doesn't appear to mention exclusivity at all (not sure though)
StellaAthena#3530: And I’m not even a *good* consultant. I’m a researcher who happens to work for a consulting firm.
bmk#1476: @StellaAthena what would you do? i'm curious
Noa Nabeshima#0290: At the beginning of 2017 OAI had 2.5M in assets, at the end they had 7M in assets.
bmk#1476: and can any of the things be adapted to work well for us without us completely selling out
StellaAthena#3530: Absolutely. Gimme a sec to pull up some figures
Noa Nabeshima#0290: https://projects.propublica.org/nonprofits/organizations/810861541/201920719349300822/IRS990f
https://projects.propublica.org/nonprofits/display_990/810861541/02_2020_prefixes_76-81%2F810861541_201812_990_2020020717127002
Noa Nabeshima#0290: > they have absolutely no reason to make people think that they're "exclusively licencing" GPT3 to microsoft
@bmk They do though, afaict without understanding the cloud ecosystem well, they're trying to get people to commit to Azure and tie OpenAI brand to Microsoft.
bmk#1476: why would "exclusive license" be the best way to convey that meaning?
bmk#1476: wouldn't they want to say "our services are now also available through Azure, powering xyz cool products there!"
Noa Nabeshima#0290: No, because then you don't know that Azure is the only cloud provider that will have GPT-3 in the future such that it makes sense to invest company resources in it.
Noa Nabeshima#0290: This also is suggestive of future OAI services being exclusive on Azure
Kazumi#1297: I don't understand how exclusivity benefits anyone except by making the service scarce and making it more expensive
bmk#1476: OA themselves still offer GPT3
Noa Nabeshima#0290: making it more important
bmk#1476: or at least it's not precluded
bmk#1476: afaict the only thing that exclusive means is that M$ are the only people outside OA who also have direct access to the weights
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/760902630770999336/unknown.png
Noa Nabeshima#0290: Oh, I misunderstood the original press release. Thanks!
bmk#1476: this is why i complain about the press release being absolutely horrible
Noa Nabeshima#0290: Okay, now I'm thinking that the press release obfuscation makes sense
bmk#1476: ?
Noa Nabeshima#0290: I read it like "OAI and MSFT are in an ongoing partnership, at Microsoft we're going to be at the forefront of AI, GPT-3 is amazing, we will use GPT-3 to make stuff better and noone else is going to be able to"
Daj#7482: fwiw I think Noa is giving OA the charitable interpretation they deserve
bmk#1476: i dont see how my interpretation is uncharitable
Daj#7482: Have we considered analyzing this from the perspective of microsoft, not OA?
Daj#7482: Seems to make a lot more sense then
bmk#1476: i dont think they're doing what makes sense even for OA
bmk#1476: i guess M$ would want to strongarm OA into doing this
bmk#1476: because M$ probably doesn't give a shit about OA's reputation or alignment
Daj#7482: Maybe OA did the research and then handed it to MS for commercialization
Daj#7482: And MS produces the marketing
Daj#7482: Would explain the observed behavior
bmk#1476: this marketing only makes sense from the perspective of someone who doesn't give a shit about alignment
Daj#7482: Which seems in character for MS
Daj#7482: And out of character for the star cast at OA
bmk#1476: ok i guess it makes sense
bmk#1476: but it should be an "oh shit" moment for alignment that OA is now beholden to the corporate suits at M$ who don't care
Daj#7482: Yes that's an orthogonal argument I agree 100% on
bmk#1476: so OA is potentially compromised
bmk#1476: this is extremely dangerous for ~~our democracy~~ alignment
Daj#7482: Maybe yea
StellaAthena#3530: Okay I’m back with numbers
Daj#7482: I'm more worried about their focus on IDA than that lol
StellaAthena#3530: The US Government spends ~400 million hiring consultants to translate text per year
bmk#1476: youre proposing GPT3 be used for translation?
StellaAthena#3530: It spends another 400 million on text summarization
Daj#7482: How much does it spend on literotica?
Daj#7482: I'm willing to bet it's more than zero lol
StellaAthena#3530: 800 million on information retrieval
Noa Nabeshima#0290: I would guess that's not enough money for the scale OAI would like.
Daj#7482: GPT-N will be transformative AI
StellaAthena#3530: 400 million on summarizing reports
Daj#7482: You guys should read the Biological Anchors report
Daj#7482: When you have a free month
Daj#7482: The "digital professionals" task is a good framing
Daj#7482: But I think Stella is on the right track too
FractalCycle#0001: where is that report?
Daj#7482: And this is just the US government
Daj#7482: Uhh let me find it Fractal
StellaAthena#3530: > I would guess that's not enough money for the scale OAI would like.
@Noa Nabeshima I just sketched out 1.5 billion dollars per year
bmk#1476: i dont think gpt3 can replace human translation
Noa Nabeshima#0290: > I would guess that's not enough money for the scale OAI would like.
just because it's hard to extract the money without creating a giant company doing the extracting and it's near the MSFT deal scale.
Daj#7482: @FractalCycle
https://drive.google.com/drive/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP
bmk#1476: summarization is probably more likely but there's still the probability of introducing insidious errors
Daj#7482: Same problem exists with humans
bmk#1476: and govts are probably more.. cautious than us
Daj#7482: It just has to be more reliable than humans
StellaAthena#3530: > i dont think gpt3 can replace human translation
@bmk I agree, but it can augment and greatly speed it up. A 50% improvement in speed to accomplish a task is a 33% financial savings when you pay by the hour
bmk#1476: govts suffer from nirvana fallacy more than we do
Daj#7482: And interpretability and accountability for AI is coming and easier than for human brains
StellaAthena#3530: You’d happily pay 100 mill to save 33% on 400 mill.
Daj#7482: I basically do exactly this at my day job
Daj#7482: Government are slow as fuck but have so ungodly much money to burn
StellaAthena#3530: Obvi OpenAI won’t capture the whole market and I’m roughly guesstimating based on project authorizations
Daj#7482: They are so incomprehensibly kafkaesque inefficient
StellaAthena#3530: But if you haven’t worked with national governments before you have absolutely no idea how much money is available and easily accessible
bmk#1476: just thinking about how much of my tax dollars get flushed down the inefficiency toilet makes me cry
StellaAthena#3530: The USG pays my company alone about 10% of Google’s entire R&D budget to do AI research specifically.
Noa Nabeshima#0290: Wow, really?
StellaAthena#3530: Yeah
Noa Nabeshima#0290: That's crazy
StellaAthena#3530: There’s metric fucktons of money available
Noa Nabeshima#0290: But has your company proven themselves to be really good?
Noa Nabeshima#0290: How does that arrangement even happen?
StellaAthena#3530: No, but the government thinks we have
StellaAthena#3530: 😛
Daj#7482: tbf your employer is sort of the illuminati
StellaAthena#3530: I like the NYT’s description personally: the world’s largest private intelligence agency crossed with an AI start up
Noa Nabeshima#0290: Palantir?
FractalCycle#0001: Palantir?
StellaAthena#3530: Lol
StellaAthena#3530: Booz Allen Hamilton
FractalCycle#0001: ah
Daj#7482: Hey that's similar to my company except we're just 5 people in a hole
StellaAthena#3530: > No, but the government thinks we have
To clarify this a bit, we do not have really any visibility in the R&D space. However this is largely due to being internally hamstrung IMO. We had a 60% acceptance rate at NeurIPS last year for example (3 of 5)
StellaAthena#3530: Some work I did won “best work by a junior researcher” at a national OR conference and I was *supposed to* present at an internal one this summer before the world ended
StellaAthena#3530: A colleague of mine had a paper in each of NeurIPS, AAAI, CVPR, and ICML last year I believe
StellaAthena#3530: We do very good work, but the company doesn’t think that this is worth investing in. Half the aforementioned papers were written in people’s *spare time* after they had done all the work because the company didn’t think that paying people to write and publish papers would be valuable.
bmk#1476: and you're continuing the tradition of doing valuable research in your spare time, eh?
StellaAthena#3530: Yup lol
Daj#7482: Then why do you hang out here?
Daj#7482: Zing
StellaAthena#3530: To make fun of y’all while I work through writers block
Daj#7482: Fair
StellaAthena#3530: When I told my senior manager about how I was leading two events on AI Security at DEF CON his first response was to confirm that it wasn’t during work hours.
Daj#7482: Hahaha
StellaAthena#3530: > But has your company proven themselves to be really good?
@Noa Nabeshima Anyways, that’s my rant on why the answer to this question *should be* yes but isn’t.
Daj#7482: I get to do Eleuther stuff during work hours hell yeah
StellaAthena#3530: I mean, it’s 1 pm for me right now
Daj#7482: We won't tattle
StellaAthena#3530: My direct boss doesn’t care how I spend my time as long as I continue to meet my deadlines and when someone wants to get ahold of me they can
StellaAthena#3530: Sometimes my code is just compiling
Daj#7482: Yea that's fair
StellaAthena#3530: It's really nice
StellaAthena#3530: It also helps me with managing my disabilities
Daj#7482: Glad that the military industrial complex, of all people, are accommodating of their employees hah
Louis#0144: anyone have a cluster whos willing to help me process a dataset?
Louis#0144: Im having issues downloading and processing it
Louis#0144: 😦
bmk#1476: > anyone have a cluster whos willing to help me process a dataset?
@Louis if you do find out, let us know because we also need a cpu cluster for dataset processing
Noa Nabeshima#0290: For small bit of funding for the project you might be able to work on the Hutter prize now
Noa Nabeshima#0290: http://prize.hutter1.net/
Noa Nabeshima#0290: They increased the upper-bound on compute requirements so that GPT-2-XL size models can be used.
kindiana#1016: still no GPU though, that makes heavily ML based solutions difficult
bmk#1476: I don't think it's worth the time spent
Noa Nabeshima#0290: Yeah, that's mostly true
bmk#1476: We have enough things we need to do as is
Noa Nabeshima#0290: You don't need to train it, just do inference for the entire dataset in 24 hours
bmk#1476: ?
kindiana#1016: if you ship the model in the compressor/decompressor, you need to double count the bits in the model
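To make the double-counting concrete (very rough numbers, just for scale; all the values below are assumptions):

```python
# Why shipping a large model inside the decompressor blows the compression budget.
params = 1.5e9            # roughly GPT-2-XL-sized model (assumed)
bits_per_param = 16       # fp16 weights, no further compression (assumed)
model_bits = params * bits_per_param

data_bytes = 1e9          # enwik9 is ~1 GB
bpc = 1.0                 # hypothetical bits/char the model achieves on the data
coded_bits = data_bytes * bpc

print("model:", model_bits / 8e9, "GB vs coded data:", coded_bits / 8e9, "GB")
```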
bmk#1476: to clarify, i was talking about engineer time
bmk#1476: we could win, what, a few thousand euros?
Noa Nabeshima#0290: Yeah, 5K
bmk#1476: that would only last us a few months and we already have a somewhat steady source of funding
Noa Nabeshima#0290: from where? I didn't know about this
bmk#1476: donations, mostly
Noa Nabeshima#0290: Oh, that's nice
bmk#1476: ~~i say somewhat because none of it is steady at all~~
bmk#1476: anyways so we have just barely enough month-to-month to keep doing our stuff
Noa Nabeshima#0290: I mean, how useful would money be?
Noa Nabeshima#0290: As you get closer to having something that works there must be *some* creative ways of getting money, probably better than hutter prize money.
bmk#1476: well, just getting a bit more money probably wouldnt change much
bmk#1476: if we could get sizeable amounts of money (say, multiple 10ks)/hardware equivalent of that, then that changes things because we'd be able to purchase a load of machines to do stuff like HUMONGOUS or dedupe better
bmk#1476: then, if we could get absurd amounts of money (say, a few million per year) we could actually pay us all to work full time and maybe hire some people too, which would speed everything up
bmk#1476: the problem is, each of those is a massive jump over the previous step
bmk#1476: and i'm having trouble seeing what we'd do with in-between amounts of money
bmk#1476: then again im not the best at this whole budgeting thing
gwern#1782: it is a dark day in GPT-3 land. the Slack is filled with the moans of the fallen, who have blown through their tier's tokens already (even people on paid tiers). press F to pay respects.
bmk#1476: I'm one of them, haha
bmk#1476: Blew through all 100k in about 45 seconds
bmk#1476: Now eyeing the paid tiers
Noa Nabeshima#0290: F
bmk#1476: it's ok i'll just bill my company
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/761772490753048586/unknown.png
spirit-from-germany#1488: I just had the thought that in the near future tech companies or billionaires will pay people basic incomes in exchange for living under complete surveillance, to collect data on the social interactions and nonverbal behaviors of people to train multimodal sequence learners on.
Such data could be very useful for all kinds of modeling of social behavior like marketing, chatbots, computer games, virtual humans / actors, robots that need to simulate human behavior,...
And it wouldn't be that expensive I guess, I am pretty sure that there are plenty of people in the USA or anywhere else who would sell their privacy for $1000 a month as long as their data wouldn't be publicly available in association with their identity.
Sid#2121: Why pay when they can get it for free?
researcher2#9294: Laws might change.... maybe...
spirit-from-germany#1488: I think they could get some data for free, but not complete audio visual data of their whole lives
Daj#7482: Pretty sure you can get plenty for free. Or if we're talking a few decades down the line, just build humanoid robots or human uploads
Daj#7482: Once we reverse engineer the brain humans won't even be useful for preference signal
spirit-from-germany#1488: That's the stuff black mirror episodes are made of 😂
Daj#7482: _Reality_ is the stuff Black Mirror episodes are made of lol
Daj#7482: I'm losing my mind at this local backprop paper
Daj#7482: That's like, the one thing I thought missing for super human AGI
Daj#7482: But that bottleneck is now broken too
bmk#1476: Black mirror is behind the curve on AI
bmk#1476: Still worrying about pesky things like morality
Daj#7482: I can't stress this enough, but I think it's _actually happening_ this time
spirit-from-germany#1488: > I'm losing my mind at this local backprop paper
@Daj
What paper are you talking about?
Daj#7482: > @Daj
>
> What paper are you talking about?
@spirit-from-germany cfoster posted it in #research
Daj#7482: Basically they show that a local Hebbian learning rule from predictive processing approximates global backprop
Daj#7482: So the brain probably really _does_ backprop (or something similar)
Daj#7482: Fuck me
bmk#1476: The brief window between when AI can do black mirror things and the world ceasing to exist as we know it will be an interesting time
spirit-from-germany#1488: Can you send me the link to the paper? @Daj
Daj#7482: > The brief window between when AI can do black mirror things and the world ceasing to exist as we know it will be an interesting time
@bmk also a very short time
bmk#1476: Yes
Daj#7482: https://openreview.net/forum?id=PdauS7wZBfC
spirit-from-germany#1488: Ok
spirit-from-germany#1488: Thx
Daj#7482: This means that the biggest bottleneck in backprop can be pretty trivially broken
Daj#7482: GPT4 next year confirmed
spirit-from-germany#1488: I am currently in the theme park with my kids, so little bit distracted
Daj#7482: Haha enjoy!
Daj#7482: The world will still be here when you're back
Daj#7482: Probably
researcher2#9294: "spirit-from-germany, I want a ice creeeam!"
spirit-from-germany#1488: Who did confirm GPT 4? :)
Daj#7482: Me
Daj#7482: Was a joke
Daj#7482: But I mean, come on
Daj#7482: Definitely happening
Daj#7482: _TPU v4-8192 has entered the chat_
spirit-from-germany#1488: Here it is! :) https://cdn.discordapp.com/attachments/729741769738158194/761979183139913758/IMG_20201003_175248.jpg
Daj#7482: Damn looks pretty good actually
researcher2#9294: hahaha
researcher2#9294: I knew it
bmk#1476: Theme parks open already?
spirit-from-germany#1488: Yes, here in Germany they have been reopened since June or so
researcher2#9294: This seems a bit challenging. How the hell do you learn/remember anything with time based learning when the signal latency is unpredictable? https://cdn.discordapp.com/attachments/729741769738158194/761980038118637599/unknown.png
spirit-from-germany#1488: https://cdn.discordapp.com/attachments/729741769738158194/761980131299295232/IMG_20201003_175619.jpg,https://cdn.discordapp.com/attachments/729741769738158194/761980131903930388/IMG_20201003_175624.jpg,https://cdn.discordapp.com/attachments/729741769738158194/761980132608049192/IMG_20201003_175633.jpg
researcher2#9294: Unless distributed simply means "in the same room"
researcher2#9294: being a kid was great, except at christmas when you felt like an idiot at the adults table
Daj#7482: Distributed in this context just means "only reliant on local data for calculation" I think
Daj#7482: > being a kid was great, except at christmas when you felt like an idiot at the adults table
@researcher2 we always had the cool kids table
cfoster0#4356: The only signals you care about are your own and your neighbors'. With a fixed topology, jitter shouldn't be a big problem, no?
researcher2#9294: Internet routing itself is non-static
researcher2#9294: But you're right generally I think?
researcher2#9294: jitter is fine
researcher2#9294: big jumps not so much
researcher2#9294: This excites me, I had discarded that whole idea as simply being "too hard"
cfoster0#4356: I mean if you send along an ID, like "this is my ouptut #18556", your peer could match with the corresponding output from their local buffer
researcher2#9294: So are we simulating time steps here now?
researcher2#9294: Or pure event based
researcher2#9294: Not sure it particularly matters...
cfoster0#4356: That would be assuming you're trying to do this over a super unreliable network
cfoster0#4356: If you've got a cluster, just responding in an event based fashion could work well enough, possibly
researcher2#9294: Yeah this would ~~definitely ~~ probably work "in the same room", or even "in the same building".
Daj#7482: "in the same building" is good enough for most applications. Though this does make truly distributed training interesting
Daj#7482: Internet scale training is almost trivial like this
cfoster0#4356: What excites (and terrifies) me most is that this gives a path to scaling on neuromorphic hardware
Daj#7482: Though you'd still need redundancy / error checkinf
Daj#7482: > What excites (and terrifies) me most is that this is gives a path to scaling on neuromorphic hardware
@cfoster0 This ^^^^^^
Daj#7482: Neuromorphic hardware is about to be the biggest thing since NVIDIA
researcher2#9294: Ideally the neural network itself can handle some degree of dead nodes. Error checking would be handled by current networking stacks I hope.
Daj#7482: I mean more checking for malicious actors
cfoster0#4356: Tbh the brain is super fault tolerant
AI_WAIFU#2844: something something dropout
Daj#7482: NNs are not at all tolerant to malicious gradients
Daj#7482: But otherwise should work yea
cfoster0#4356: True true
Daj#7482: Wow, do you guys remember **checks notes** two months ago when we thought this would be totally impossible?
researcher2#9294: lol
Sid#2121: Man, research by humans is progressing so fast. Imagine what it’s gonna be like when the AI takes over
researcher2#9294: https://tenor.com/view/elmo-hell-fire-gif-5073559
Daj#7482: I'm already having enough of an existential crisis, thx
Sid#2121: “Wow, do you guys remember *checks notes* five nanoseconds ago when we thought this would be totally impossible?”
Daj#7482: https://media.tenor.com/images/d46785e4ab591e604e61c42e970ea016/tenor.gif
Daj#7482: Why don't we have Ron Paul emote yet
bmk#1476: I feel like a hopeless bystander
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/761985126393315348/images_6.jpeg
bmk#1476: I personally don't put much weight on "biologically plausible" so I don't think too much of this paper though
AI_WAIFU#2844: It's less the biologically plausible aspect that's interesting as much as it's the fact that it fixes important issues with the locality of computation.
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/761985584762716175/doom_paul_1.jpg,https://cdn.discordapp.com/attachments/729741769738158194/761985584976887818/bdc.jpg,https://cdn.discordapp.com/attachments/729741769738158194/761985585187258369/2de.jpg,https://cdn.discordapp.com/attachments/729741769738158194/761985585530273842/848.gif,https://cdn.discordapp.com/attachments/729741769738158194/761985585878663198/688.jpg,https://cdn.discordapp.com/attachments/729741769738158194/761985586068193290/0e0.png
Daj#7482: We're not an apocalyptic cult I swear
Daj#7482: But yeah biologically plausible or not (and I think PP is plausible), it removes the biggest bottleneck on backprop
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/761985923315007508/a85.jpg
Daj#7482: tfw everyone listens to MIRI
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/761986008605655055/baa.jpg
Sid#2121: I’m sorry how do you have this many Ron Paul memes
researcher2#9294: hahahaha
Daj#7482: > I’m sorry how do you have this many Ron Paul memes
@Sid Why _don't_ you have this many Ron Paul memes?
AI_WAIFU#2844: > But yeah biologically plausible or not (and I think PP is plausible), it removes the biggest bottleneck on backprop
I'm actually not convinced of this. You still need to store activations and wait for gradients to propagate.
Daj#7482: > I'm actually not convinced of this. You still need to store latents and wait for gradients to propagate.
@AI_WAIFU But only one layer
AI_WAIFU#2844: ?
Daj#7482: You can calculate the gradients of layer 1 without having the gradients of layer 10?
AI_WAIFU#2844: I don't think so.
AI_WAIFU#2844: The latent variable updates are local and only depend on the surrounding variables, but to get information from the end of the network to the beginning, you need to do at least n updates where n is the depth of the network.
AI_WAIFU#2844: Right?
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/761986906711916544/Screenshot_2020-10-03-18-23-35-227.jpeg
Daj#7482: Uhm let me actually like sit down and take a look I've been on a walk for like two hours lol
researcher2#9294: I read the abstract and assumed they were talking about STDP 😄
researcher2#9294: lazy I know
AI_WAIFU#2844: I can talk about this in detail later, right now I need to go.
Daj#7482: I've read the paper once so maybe I misunderstood some of the implications but Hebbian learning afaik does _not_ require a full pass through
cfoster0#4356: AFAICT, you need N steps to converge to backprop gradients, but you can adjust yourself as soon as you see your neighbor firing
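For anyone who wants to poke at the idea, here is a very rough NumPy sketch of predictive-coding-style local learning as described in this thread: the value-node updates use only neighboring errors, and the weight update is error times presynaptic activity. Layer sizes, step sizes, the nonlinearity, and the number of inference steps are all illustrative guesses, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 8, 3]                                                   # toy layer sizes
W = [rng.normal(0, 0.1, (sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]

def f(x):  return np.tanh(x)
def df(x): return 1.0 - np.tanh(x) ** 2

def errors(x):
    # prediction error at each layer: actual value minus what the layer below predicts
    return [x[l + 1] - W[l] @ f(x[l]) for l in range(len(W))]

def train_step(x_in, y_target, n_infer=20, lr_x=0.1, lr_w=0.01):
    # initialize value nodes with a forward pass, then clamp input and output
    x = [x_in]
    for Wl in W:
        x.append(Wl @ f(x[-1]))
    x[-1] = y_target

    for _ in range(n_infer):                       # relax the value nodes (local updates only)
        e = errors(x)
        for l in range(1, len(x) - 1):
            x[l] = x[l] + lr_x * (-e[l - 1] + df(x[l]) * (W[l].T @ e[l]))

    e = errors(x)                                  # local, Hebbian-like weight update
    for l in range(len(W)):
        W[l] += lr_w * np.outer(e[l], f(x[l]))

x_in = rng.normal(size=4)
y_target = rng.normal(size=3)
for _ in range(200):
    train_step(x_in, y_target)
```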
researcher2#9294: https://cdn.discordapp.com/attachments/729741769738158194/761988051558727710/unknown.png
researcher2#9294: Ok I'll definitely read this one fully
Daj#7482: Someone quickly develop a PP transformer before they do
researcher2#9294: @Deleted User de23c58c you have 2 days
Daj#7482: lol I hope Lucid has seen this or he shows up soon, I want to talk to him about it
cfoster0#4356: The Krotov & Hopfield paper has the recipe
Daj#7482: It seriously is happening, isn't it?
cfoster0#4356: Possibly
Daj#7482: Even if this doesn't turn out to be the paper that cracks it, it seriously feels like the research is converging on world models, locality, attention etc
Daj#7482: Anyone read the Dreamer2 paper yet? Seemed also amazing
StellaAthena#3530: Let’s say I have a function `f(x) = g(S(h(x)))` where `g` and `h` are polynomials but `S` is a stochastic operator. Can I use backprop to approximate “the” gradient of `f`?
StellaAthena#3530: It seems like the answer should be “yes, for the right notion of a gradient” but I’m having trouble finding stuff on this
gwern#1782: _has seen too many weird methods like random weights going back years and years to believe that the brain *isn't* doing something like backprop. it's all over but the crying IMO_
AI_WAIFU#2844: What do you mean by by stochastic operator? You can probably meaningfully talk about the expectation over the gradient, or the distribution of the gradient. But I don't know if you would call it "the" gradient.
StellaAthena#3530: Expectation over the gradient is fine and probably correct
StellaAthena#3530: `S` takes two vectors as inputs and returns one with probability `p` and the other with probability `q`. It always returns one of its inputs, but which one it is changes.
StellaAthena#3530: Oh, it’s worth clarifying that these functions all output vectors. `f(x)` is a vector
AI_WAIFU#2844: All this stuff is application specific, but you'd probably be looking at the expectation/distribution over the jacobian matrix of the function.
StellaAthena#3530: How do I compute the Jacobian if one of the parts of the function is stochastic?
chirp#4545: emoji request: yann lecun's cake https://cdn.discordapp.com/attachments/729741769738158194/762059774774345748/unknown.png
bmk#1476: on it
AI_WAIFU#2844: Depends, if it applies and you have a nice distribution, you can do things like the reparameterization trick for VAEs where you rewrite the stochastic function as a deterministic function + some external source of randomness as an extra argument. Then you can compute the jacobian the same way you would for any other function. If your stochastic operator discretely samples from a distribution, things get more complicated. (e.g. a discrete value is drawn from a distribution parameterised by its input.)
If you only care about rough point estimates and nothing analytic, you could use JAX to compute the jacobian, or one of the tricks in https://arxiv.org/pdf/1711.00123.pdf if you can't use the reparameterization trick on your function.
bmk#1476: :cake:
AI_WAIFU#2844: :cake: :cake: :cake: :cake:
StellaAthena#3530: @AI_WAIFU cool. My function should be expressed as containing noise.... it takes on the value `a` with probability `p` and value `b` with probability `1-p` where `a`, `b`, and `p` are polynomial functions of the input
AI_WAIFU#2844: So in that case if p is a function of the input you won't be able to use the reparameterization trick. Fortunately, since it's only 2 possibilities, you can explicitly sum over the 2 cases and you don't need any fancy control variate tricks. I think the gradient of the expectation will wind up looking something like so: https://cdn.discordapp.com/attachments/729741769738158194/762114870988111912/ql_45b8354286b40746405e61f768d0ca0d_l3.png
AI_WAIFU#2844: with z as a discrete random variable
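A tiny sketch of the "explicitly sum over the 2 cases" approach, in JAX (mentioned a bit earlier); every function below is an illustrative stand-in, not the actual polynomials in question:

```python
import jax
import jax.numpy as jnp

def a(x): return x ** 2                 # placeholder polynomial
def b(x): return 3.0 * x + 1.0          # placeholder polynomial
def p(x): return jax.nn.sigmoid(x)      # placeholder probability in [0, 1]
def g(v): return jnp.sum(v ** 3)        # placeholder outer function (scalar output)

def expected_f(x):
    # E[f(x)] written out explicitly over the two outcomes of the stochastic operator
    return p(x) * g(a(x)) + (1.0 - p(x)) * g(b(x))

grad_expected_f = jax.grad(expected_f)
print(grad_expected_f(0.5))
```

If the output is vector-valued, `jax.jacrev` (or `jax.jacfwd`) over the same `expected_f` gives the expected Jacobian instead of a gradient.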
kindiana#1016: on the predictive coding paper, the requirement to converge the free energy every batch seems like a pretty big problem for efficient implementation https://cdn.discordapp.com/attachments/729741769738158194/762121040058908702/unknown.png
kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/762121158078103582/unknown.png
AI_WAIFU#2844: Yeah, that sounds about right. I was thinking about it though, and you could probably get some benefit by caching the latents between gradient updates, since the gradient isn't gonna change much from 1 parameter update to another. The problem is that you need secondary storage that can keep num_latents*num_examples and you need to be able to stream those to and from the compute cluster.
kindiana#1016: imo reversible layers have solved the compute locality problem; its biologically implausible because it requires exact inverses but 🤷
cfoster0#4356: I think the biggest gains from the predictive coding approach would be for neuromorphic implementations. Could imagine designing it so that the synaptic learning dynamics correspond to the local energy minimization that the paper proposes
cfoster0#4356: That way, you'd just drive the input layer with spikes, let it forward propagate to the output, drive the output neurons with spikes, and let the system settle to an equilibrium with better weights
cfoster0#4356: I think lol
kindiana#1016: yeah, if all parameters are contained in on chip sram, communication much much cheaper
kindiana#1016: but IMO fully sram based accelerators doesn't really make sense for large models (you only need to take ~1e5 steps to converge your model, regardless of size, so sram is too fast/too small to be cost effective)
kindiana#1016: with reversible networks its very similar, you drive the input and let it forward propagate through the network, and have partial gradients and recomputed activations propagating backwards
AI_WAIFU#2844: I think one of the bigger areas for gains is to design architectures that better exploit/map to the memory hierarchy.
AI_WAIFU#2844: Secondary SSD storage is fairly fast but most of the time it's just kinda sitting there.
AI_WAIFU#2844: likewise I don't think we're making the best use of L1 cache/sram
bmk#1476: sram?
bmk#1476: i presume youre not referring to static ram
kindiana#1016: yeah static ram
bmk#1476: isnt static ram generally just worse than dram?
bmk#1476: slower, lower density
bmk#1476: the only advantage i know is it's easier to work with because you dont need a clock to drive it
kindiana#1016: sram is much faster but requires more chip area and power
bmk#1476: oh
AI_WAIFU#2844: looks like I've been out of the loop. eDRAM appears to be where its at
kindiana#1016: sram is used for cpu/gpu caches and stuff
bmk#1476: literally my only knowledge of SRAM is that using it for homebrew cpu projects is much easier because you dont need to worry about refreshing and stuff as much
AI_WAIFU#2844: but the main idea is to exploit the on chip memory
kindiana#1016: cerebras and graphcore are pretty sram focused
kindiana#1016: they don't have any dram/hbm/off chip high speed memory
bmk#1476: what would happen if you took a dozen cerebras dies
bmk#1476: put them, like, an inch apart
bmk#1476: stacked
bmk#1476: and ran coolant straght across the surface very fast
bmk#1476: like, just submerge the entire thing in a vat and move the coolant through the vat very fast
kindiana#1016: if you have the money for it, you can train large models very fast with all weights on sram
kindiana#1016: but the problem is most people don't want to spend a load of hardware to train models in like 2 days
kindiana#1016: they'd rather spend less on hardware and train in like 2 months
kindiana#1016: at 1e5 steps over 2 months, the bandwidth to capacity ratio required is closer to SSD levels, not sram lol
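Spelled out (assuming each step streams roughly the full parameter state once; numbers are rough):

```python
# Rough arithmetic behind the "SSD-level bandwidth-to-capacity ratio" point.
steps = 1e5                      # steps to converge, per the estimate above
seconds = 2 * 30 * 24 * 3600     # ~2 months of wall-clock training
sec_per_step = seconds / steps
print(sec_per_step)              # ~52 s to stream the full weight state per step
```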
kindiana#1016: (but the problem with flash based ssds is you can only rewrite each cell like 1000 times, so you can't really train off an ssd)
AI_WAIFU#2844: You could always just buy 100x more ssd.
kindiana#1016: ssds are ~20c/gb, ram is ~1000c/gb, so its not really worth it
AI_WAIFU#2844: Yeah, which gets back to my point about architectures that better exploit the memory heirachy. SRAM is kinda pointless (althoug you can use it to reduce bandwidth requirements for matmuls) and SSDs are only good for serving up data.
AI_WAIFU#2844: And forget about harddrives.
kindiana#1016: this is my mental model of different memory technologies (technology, capacity/throughput (how long it takes to read/write entire memory), cost per gigabyte), should be roughly the right order of magnitude at least
sram: >1ns, >$200/GB
edram: >1ms, >$60/GB
dram(gddr, hbm etc): ~10ms, ~$10/GB
optane: ~10s, ~$1/GB (~1mil write limit)
flash: ~100s, ~$0.1/GB (~1k write limit)
kindiana#1016: I think dram is the lowest you'd want to go for model weights, with dense models at least
kindiana#1016: still need sram cache for temp buffers like activations and for doing transposes
cc_#1010: hey, i havent checked in a while
cc_#1010: how's the project going?
StellaAthena#3530: Pretty good
StellaAthena#3530: The model is finished
StellaAthena#3530: We have enough training data collected as well
cc_#1010: oh, based
StellaAthena#3530: The current major blocker is implementing the evaluation code. We need to have all of the evaluation datasets before we deduplicate the training data.
cc_#1010: unfortunately my finances arent in a position where i can really help much yet but hopefully Soon
bmk#1476: > The model is finished
to elaborate on this, the *training code* is finished
bmk#1476: well, mostly finished
cc_#1010: i am now going to to close this folder for a bit and return to trying to make drilbot say funny stuff
cc_#1010: while crying that i dont have gpt-3 access
cc_#1010: (they gave it to deep leffen but not me? :powercry2: )
bmk#1476: could always use some more optims
bmk#1476: i have access
cc_#1010: oh?
bmk#1476: if you want to run some one off stuff
cc_#1010: well, unless you have the ability to generate a big folder for me to try and draw from it might not be super useful
cc_#1010: relatively big anyway
bmk#1476: oh
bmk#1476: uh, i could try, but the quota ceiling is pretty low
cc_#1010: my gpt-2 output is usually 1000 tweets at a time and maybe 1/20th of them are like... good
bmk#1476: and i'm already doing research that consumes a lot of my quota
cc_#1010: yeah i can imagine you wouldnt want to burn it for a shitposting bot on twitter
cc_#1010: anyway feel free to ping me if the situation develops further or if anyone needs pizzas
cc_#1010: i cant afford big hard drives but i can afford a pizza for our boys in programmer socks every now and again
bmk#1476: speaking of hard drives there was recently a big sale on amazon (at least, here in canada there was)
bmk#1476: 10TB for 230 C$
cc_#1010: that is a lot of money and a lot of terabytes
bmk#1476: that's 170 USD
gwern#1782: _shakes his head. even with kryder's law dying, storage is crazy cheap compared to when he grew up, scrabbling for every megabyte_
bmk#1476: :guilty: looks at 40TB disk array
bmk#1476: i am scrabbling for every tb
AI_WAIFU#2844: yeah, I used to work in domains where most of my job was dealing with the constant pressure from running out of space on disk.
bmk#1476: im still constantly running out of space unfortunately
Louis#0144: Ok I have been sent https://twitter.com/ak92501/status/1312921190873300993?s=21 four times today
Louis#0144: To be clear
Louis#0144: This paper is awful
Louis#0144: Their experiments are awful
Louis#0144: It’s poorly written
Louis#0144: And the author has no idea what they’re doing
Louis#0144: Their evaluation metric as a whole is flawed
Louis#0144: I honestly even doubt that the model works
bmk#1476: how is it bad?
bmk#1476: this research is very relevant to what i'm currently working on so it would be great to know so i can avoid the pitfalls
Louis#0144: I’m going to write a formal rebuttal and put it on arxiv |
Louis#0144: I’ll get back to you
bmk#1476: which evaluation metric is bad?
bmk#1476: the mturk one?
Louis#0144: The issue is that their methods for testing story coherency go against current literature
Louis#0144: Yes
bmk#1476: what's wrong with it?
Louis#0144: Humans are awful at measuring coherency
Louis#0144: Typically you have them order by coherency or you use dialogue coherency metrics within mturk
Louis#0144: It could easily be biased
bmk#1476: i haven't read it closely but some of my current research uses mturk for quality scores
Louis#0144: That’s a paper I’m working on rn
bmk#1476: is using mturk inherently flawed?
Louis#0144: Presenting a human with a story or a set of stories and asking which are the most coherent is inherently flawed
Louis#0144: Coherency is subjective
bmk#1476: is that just applicable for coherence?
Louis#0144: As it pertains to your biases
Louis#0144: No
Louis#0144: Plot holes are subjective too
Louis#0144: It’s messy
Louis#0144: You can’t test coherency this way
bmk#1476: what about a generic "do you like this story better"
Louis#0144: That’s not testing coherency
Louis#0144: That’s testing bias
bmk#1476: im asking if that's ok
Louis#0144: That’s ok yeah
bmk#1476: ok phew
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/762494944065093692/unknown.png
Louis#0144: But yeah humans have awful reading comprehension skills
bmk#1476: here are the questions i used in my work
Louis#0144: You can’t do 3
Louis#0144: I’ll explain tmrw
bmk#1476: the first 3 are directly lifted from a FAIR paper
bmk#1476: anyways my method wins out on all 4 metrics so it doesn't really matter right?
bmk#1476: i still dont think i get why "coherence" is not ok but "preference" is, they both seem equally subjective
Louis#0144: But yeah formalizing plot holes and coherency literally *is* my thesis q that I’m working towards
Louis#0144: Like in the most literal sense
bmk#1476: ah ok
bmk#1476: tfw it's barely been a week and you've already used up a third of your tokens https://cdn.discordapp.com/attachments/729741769738158194/762496491709005834/unknown.png
bmk#1476: i might have to upgrade to the 400/mo tier >.>
bmk#1476: looking at the overage fees, it would cost $90 per experiment even on the 400 tier >>.>>
bmk#1476: gpt3 is too expensive
bmk#1476: we need to ramp up our efforts
chirp#4545: out of curiosity what are you doing with gpt3?
Deleted User#0000: Are there any distributed computing projects like Folding At Home, but focused on ML training. Every now and then I have some gpu capacity to spare, and I'd like to donate it somewhere.
kindiana#1016: afaik the only type of training like that is collecting rollouts/self play data for RL models, like leela zero
Daj#7482: Backprop unfortunately doesn't distribute very well. There is stuff like OpenMined though
FractalCycle#0001: what's the recommended order of things to do, to get up to speed and be useful to the project? Like DL basics, relevant math
StellaAthena#3530: @FractalCycle right now, the biggest hurdle for the project is scraping and data processing. If you check out the GitHub issues, they list the datasets we need to collect.
https://github.com/EleutherAI/lm_evaluation_harness/issues
StellaAthena#3530: Once we’ve collected and processed all of this data, we then are going to deduplicate our training and evaluation data. I believe that @researcher2 and @Sid have that mostly finished?
StellaAthena#3530: And then we need to run it and see what happens
FractalCycle#0001: alright, thanks! I'll try and get on that within this week
StellaAthena#3530: We try to keep things roughly organized, so any issue that is marked as “to do” on the project board and which doesn’t have someone assigned to it is open game. Just leave a comment claiming it so I can assign it to you
FractalCycle#0001: thanks, will do!
researcher2#9294: > Once we’ve collected and processed all of this data, we then are going to deduplicate our training and evaluation data. I believe that @researcher2 and @Sid have that mostly finished?
@StellaAthena The deduplication process is just starting. Deduplication pipeline is working on my machine at home but we won't know how long it will take until I can get owt2 started - currently generating the hashes required.
bmk#1476: Should I buy that big hetzner now
bmk#1476: Or should that wait
StellaAthena#3530: @researcher2 sorry, I meant that you have the pipeline mostly done
StellaAthena#3530: Not that the data has been deduped
researcher2#9294: Good, didn't want anyone getting too excited haha
StellaAthena#3530: @bmk I would wait until eval datasets are done
bmk#1476: Good idea
StellaAthena#3530: https://www.zdnet.com/article/digital-pioneer-geoff-huston-apologises-for-bringing-the-internet-to-australia/
researcher2#9294: Tell us how you really feel Geoff
FractalCycle#0001: can confirm; ever since i started using the internet, my accent has gotten more australian
researcher2#9294: https://cdn.discordapp.com/attachments/729741769738158194/762812130868723782/iu.png
researcher2#9294: best dollarydoos
FractalCycle#0001: b'l'ime'y'
researcher2#9294: struth love
Ken#8338: There will be no AGI fire alarm? https://www.lesswrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence
Ken#8338: circa 2017
genai (Immortal Discoveries)#0601: Can an ANN learn semantics like word2vec without being told to? Word2vec is told to look for semantics.
Ravna#1831: Unsupervised translation does this. It discovers a common semantic representation across different languages without being told to.
genai (Immortal Discoveries)#0601: like you mean the most common words or phrases are probably the "the" word in chinese, french, etc, so it sees the pattern then across languages ?
genai (Immortal Discoveries)#0601: or how 2 languages may place the words/phrases in the same arrangement but japanese may rearrange words like last names ?
bmk#1476: @genai (Immortal Discoveries) Minor nitpick but Chinese has no "the" and french has multiple which means each one won't be the most common
zphang#7252: unsupervised NMT has an odd history. it actually did start with pretrained cross-lingual embeddings and tricks to initialize the different language embeddings so the model isn't immediately lost
Ken#8338: New podcast: Sam Altman - How GPT-3 will shape our future https://hbr.org/podcast/2020/10/how-gpt-3-is-shaping-our-ai-future
gwern#1782: frigging podcasts. no transcript either
Ken#8338: but a few interesting tidbits in it.
bmk#1476: Tfw BPE encoding isn't a homomorphism
gwern#1782: _takes credit for everyone hating on bpes now_
gwern#1782: (me on my deathbed: "either the bpes go or I go, this world ain't big enough for the both")
bmk#1476: are there any sane in-between encodings?
bmk#1476: fixing the problems of bpe but not going full char
gwern#1782: randomized BPE dropout is the best I've seen
bmk#1476: ah, ok
gwern#1782: there's also a 'unigram' that nshepperd was experimenting with
gwern#1782: personally, I think with bigbird and the rest now shown to have both similar quality and lower performance, we should just go full character
bmk#1476: hmm, interesting
bmk#1476: how do you think the best way to handle full unicode is, then? a vocab of 1.1M for every codepoint?
zphang#7252: ELMo used a CNN to go from char->word
bmk#1476: BPE does have the advantage of handling all of unicode
bmk#1476: or are you thinking just byte level
gwern#1782: I'm not sure about unicode. the approach of just 255 BPEs for encoding byte literals and leaving them as multi-BPEs seems to have worked well for chinese, so if it works for chinese...
bmk#1476: ah, ok
bmk#1476: I think it would be interesting to craft a custom char-vocab that covers 99.9% of commonly used unicode characters and does byte BPE for the rest; it would be interesting low-hanging fruit to work on
zphang#7252: https://huggingface.co/transformers/tokenizer_summary.html#unigram this confuses me
bmk#1476: a while ago, i ran character frequency stats on a small subset of commoncrawl to make this: https://gist.github.com/leogao2/6f0cb98e63126cc40759e58df7c511a8
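A minimal sketch of how a tally like that can be produced (file name is a placeholder):

```python
from collections import Counter

counts = Counter()
with open("commoncrawl_sample.txt", encoding="utf-8", errors="ignore") as f:
    for line in f:
        counts.update(line)

total = sum(counts.values())
cumulative = 0.0
for ch, n in counts.most_common():
    cumulative += n / total
    print(repr(ch), n, f"{cumulative:.4%}")
    if cumulative >= 0.999:      # the characters covering 99.9% of the corpus
        break
```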
genai (Immortal Discoveries)#0601: BPE is a way to make your network smaller yet still keep most the accuracy....
genai (Immortal Discoveries)#0601: "another thing they are is food" ----- instead of storing 'are is food', 'is food', 'food', etc, we store only the segmentable parts cuz i mean really, 'are is'? LOL, so we get ex. 'another thing', 'another thing they are', 'they are', 'is food' .....
genai (Immortal Discoveries)#0601: usually you'll see those parts
genai (Immortal Discoveries)#0601: so usually you won't benefit from storing the rest....
Ken#8338: We have heard for quite a few years about how neuromorphic chips will become important to AI but it has not happened yet. Personally, I think I tend to ignore the latest news in this field because I haven't seen the research translate into use yet. But need to keep an open mind I guess. https://syncedreview.com/2020/10/07/breakthrough-neuron-mimicking-nanoscale-electronic-circuit-element-for-neuromorphic-ai/
gwern#1782: 'Sam Altman will be doing a Q&A during one of our meetups. Was hoping to get some "seed questions" to get thing going in case the audience does not have any good ones. ' <-- suggestions?
gwern#1782: my current list: "at what point did you start to believe in OA making progress by scaling up neural net models beyond what everyone else thought reasonable?" Or, "what are your thoughts on Schmidhuber's proposals towards having 'one big model to rule them all'?" "Are you concerned about potentially triggering AI arms races and Manhattan Projects in AI research?" "Will future developments focus more on new hardware and larger investments, or have we largely used up the 'hardware overhang' and progress will now depend more on new ideas and refinement?"
StellaAthena#3530: I would press them as much as you can about democratizing AI research
StellaAthena#3530: OpenAI talks big game about democratizing AI, but does little in practice.
Ken#8338: In the interview I listened to of Sam Altman he said that they no longer make predictions of when AGI will happen (they are working on a better question around this issue), and that achieving AGI will be very expensive (but I would think time frame matters in this issue - but guessing they want to be first).
FractalCycle#0001: ask them about inter-company safety collaboration (e.g., preventing an arms race between OAI and DM)
FractalCycle#0001: also ask him which approaches to A.I. *alignment* he's looking into
bmk#1476: They probably won't answer this but something about how big they're planning to go for GPT4 would be amazing to know
FractalCycle#0001: i could see them going for 1 trillion, it's a big round number
bmk#1476: i think 1T is too low, personally
kindiana#1016: gpt2 to gpt3 was 100x 🤔
FractalCycle#0001: i could see them going for [any big number with a zero at the end of it], it's a big round number
FractalCycle#0001: ~~10 features~~
bmk#1476: i mean i would have expected them to go for 100B this time but they went for the unnice unround 175B
bmk#1476: gpt2 was 1.542B
kindiana#1016: its just whatever fits well on their cluster lol
bmk#1476: gpt1 was 0.110B
bmk#1476: so gpt1 -> gpt2 was only 10x
bmk#1476: i fear the next one may well just be 1T, oh well
FractalCycle#0001: 1Q when
bmk#1476: but if someone makes it to market first with 1T they might aim higher
bmk#1476: i'd bet that happened with gpt3, actually
bmk#1476: we know they already had 13B for months at the time of the gpt3 release
bmk#1476: they probably had their wind stolen from them by the turing-nlg people so they waited until they had a 175B model
bmk#1476: or, in other words:
bmk#1476: if we finish 1T before OA, then theyll be pressured by sunk cost to go even bigger
bmk#1476: remember: 1T or bust
aquajet#7800: 1T or bust
aquajet#7800: > One of the biggest secrets is the project OpenAI is working on next. Sources described it to me as the culmination of its previous four years of research: an AI system trained on images, text, and other data using massive computational resources. A small team has been assigned to the initial effort, with an expectation that other teams, along with their work, will eventually fold in. On the day it was announced at an all-company meeting, interns weren’t allowed to attend. People familiar with the plan offer an explanation: the leadership thinks this is the most promising way to reach AGI.
Maybe a question about this? Or more specifically RL + LM experiments
bmk#1476: ok it'll probably be multimodal
bmk#1476: igpt + gpt3
bmk#1476: probably >= 1T
bmk#1476: unlikely to involve RL, imo
kindiana#1016: would be funny if AGI is just solved with MLE, no rl required
bmk#1476: where di you find this quote, btw?
bmk#1476: > would be funny if AGI is just solved with MLE, no rl required
@kindiana i've been saying this for a while now
bmk#1476: i don't believe RL is necessary for AGI
aquajet#7800: They have had some RL + LM experiments, "Finetuning from human Preferences" last year and the summarizer a month ago
aquajet#7800: https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/
aquajet#7800: It's crazy that Neuralink and OA share a dining room
bmk#1476: wait, *what*?
bmk#1476: ~~well, they both have one thing in common- neglecting alignment research~~
aquajet#7800: Eleutherai office in Pioneer building when?
bmk#1476: this is about 50% joke when it comes to OA and about 25% joke when it comes to neuralink
bmk#1476: > Eleutherai office in Pioneer building when?
@aquajet ~~we could build parabolic listening devices to spy on OA, too~~
aquajet#7800: It's interesting that OA uses GPUs for their work. Is it simply due to them having access to more compute through GPUs as opposed to TPUs?
bmk#1476: i mean, would *you* rather work with TPUs or GPUs if TFRC wasn't a thing
aquajet#7800: Theoretically I would want a TPU since those would be more efficient
bmk#1476: poor choice
bmk#1476: tpus are such a pita to work with
bmk#1476: theres a reason why google literally hands out TPU time for free like candy: nobody likes to use them |
kindiana#1016: theoretically tpus are great, you don't need those cuda cores, just mxus, but practically programming them is a big pita
bmk#1476: not to mention only google has em
bmk#1476: and OA is already friends with M$
kindiana#1016: are there tpus for training (not necessarily by google) you can buy without signing a contract lol
kindiana#1016: I think huawei also has one but I haven't seen it for sale
aquajet#7800: I don't think so
aquajet#7800: How hard would it be to build a TPU?
aquajet#7800: the compiler would be hard but is doable
aquajet#7800: im not sure about the mxu
kindiana#1016: the mxus are trivial compared to building a processor
kindiana#1016: its just add multiply units in an uniform grid
aquajet#7800: so you could just lay it out on an fpga?
kindiana#1016: yeah
kindiana#1016: its not going to be very efficient though lol
kindiana#1016: (or cheap, or fast)
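For concreteness on what "add multiply units in a uniform grid" means: below is a toy Python sketch of the multiply-accumulate pattern an MXU-style systolic array computes. It shows only the arithmetic, not the lockstep dataflow, memory system, or compiler that make real TPUs hard to build, and it does not use any actual TPU API.
```python
import numpy as np

def mxu_style_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Grid of output cells, each doing nothing but multiply-accumulates.

    The per-cell work is trivial (one multiply-add per step), which is the
    point made above; the hard part of real hardware is feeding the grid.
    """
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((n, m), dtype=np.float32)
    for i in range(n):          # one "cell" per (i, j) output position
        for j in range(m):
            acc = 0.0
            for t in range(k):
                acc += a[i, t] * b[t, j]   # the only operation an MXU cell needs
            out[i, j] = acc
    return out

# Sanity check against numpy's matmul
a = np.random.rand(4, 3).astype(np.float32)
b = np.random.rand(3, 5).astype(np.float32)
assert np.allclose(mxu_style_matmul(a, b), a @ b, atol=1e-5)
```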
bmk#1476: making tpus is really hard
bmk#1476: source: a shitload of companies have tried it and exactly 2 companies in the world have chips that are actually competitive for training models
bmk#1476: maybe 3 if you count cerebras, but they still aren't actually selling product in any capacity
gwern#1782: (or released any public benchmarks)
bmk#1476: yeah, or that |
bmk#1476: and i *strongly* suspect they optimized it just a tad too much for CNNs
bmk#1476: probably would fall flat on its face for transformers, just considering how it works, and even setting aside the *absolutely puny* amount of memory
FractalCycle#0001: just read the above article on OpenAI. Thinking about the part where OAI allegedly already has an AGI project going...
bmk#1476: 1. i'm eh not surprised
bmk#1476: 2. they probably mean *in the direction of AGI*
bmk#1476: i think there are many things in the direction of AGI
bmk#1476: but nothing that to my knowledge can be done today that absolutely *is* AGI
chirp#4545: @gwern if you're still taking questions for Sam Altman:
What would you say is OpenAI's biggest bet when it comes to research strategy, compared to DeepMind or other big AI labs?
What would you consider to be a "fire alarm" for AGI?
What would you do if you had 10x as much money? Could you use it effectively?
What do you think is the most misunderstood thing about OpenAI?
What kind of academic research do you find most useful for OpenAI's work?
FractalCycle#0001: Any advice for people who want to get up to speed in DL?
|
What do you look for when hiring at OAI?
bmk#1476: Slight modification of that first question: any advice for people who want to get up to speed in alignment?
bmk#1476: I think there's a lot of resources out there for learning DL (and there are way too many DL people around) and not many Alignment people
FractalCycle#0001: Are you looking into verifiable computing?
What specific alignment approaches are OAI pursuing?
aquajet#7800: What's the roadmap for the OA API? WIll they be increasing access to gpt3 in the future?
FractalCycle#0001: What is the overarching alignment direction OA is looking at? Human training? Provability? Scalable supervision? Something else?
FractalCycle#0001: What should someone do to get into A.I. alignment in a useful capacity?
bmk#1476: ~~what do you think of eleutherai~~
aquajet#7800: > Unfortunately, a bug in the filtering caused us to ignore some overlaps, and due to the cost of training it was not feasible to retrain the model
What was your immediate reaction to this?
aquajet#7800: more serious question: how much did it actually cost to train gpt3?
bmk#1476: The guy responsible for introducing that bug actually talked about it in a video hah
aquajet#7800: which video?
bmk#1476: Hm, I can't find the link now
StellaAthena#3530: I’m still down for “ask critical questions about their democratizing AI rhetoric that never materializes”
bmk#1476: To add to that, what do they think "democratizing" means, anyways?
bmk#1476: https://www.lesswrong.com/posts/dLbkrPu5STNCBLRjr/applause-lights is even more relevant than usual
aquajet#7800: > Alternatively, suppose a group of rebel nerds develops an AI in their basement, |
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/763955397584224286/8R9xGpL.png
aquajet#7800: Has someone finetuned a LM of lesswrong?
FractalCycle#0001: What safeguards does OAI have in place to prevent harmful accidents?
researcher2#9294: Are you looking into separate storage and retrieval systems along with continual learning on live data streams?
andyljones#7746: Does anyone know of a reinforcement learning discord as active as this one? Best I've found so far is 'RL group', which is of a similar population but much quieter.
StellaAthena#3530: No
FractalCycle#0001: > Sam Altman will be doing a Q&A during one of our meetups
wait which meetups?
gwern#1782: SSC
FractalCycle#0001: aye, thx
thenightocean#6100: Oh cool! How did they manage to get him? Do you know which date?
asparagui#6391: i dunno if you want to start a pytorch vs tensorflow holy war, but it would be interesting to know what their ml stack looks like
gwern#1782: @thenightocean no idea. maybe it's thanks to conor? the mod just said what I quoted there, so I dunno when sam-sama might show up
gwern#1782: @asparagui they announced a while back they were switching everything to pytorch
asparagui#6391: i guess more wrt to the scaling hypothesis
asparagui#6391: do we just need bigger computers, or is there other magic involved
bmk#1476: > > Alternatively, suppose a group of rebel nerds develops an AI in their basement,
@aquajet https://cdn.discordapp.com/attachments/729741769738158194/764286431564529664/garfieldwonder.png
Chlorokin#6581: @thenightocean I organize the meetups. Will be on Nov 22nd. Details here on Joshua's blog (Joshua is the host) and signup form will be posted there soon: https://joshuafox.com/ssc-online-meetups/. I got him just by cold emailing him.
thenightocean#6100: Thats great @Chlorokin ! Thanks for organising this. |
Eddh👽#7290: https://www.wired.com/story/opinion-ai-is-an-ideology-not-a-technology/ good article. I disagree in some parts but still like it a lot. That's how I know a good critique.
bmk#1476: i think i read this (unless i'm thinking of something else) and i *strongly* disagree on many levels
Noa Nabeshima#0290: > Does anyone know of a reinforcement learning discord as active as this one? Best I've found so far is 'RL group', which is of a similar population but much quieter.
@andyljones I haven't found one yet 😦
Daj#7482: > https://www.wired.com/story/opinion-ai-is-an-ideology-not-a-technology/ good article. I disagree in some parts but still like it a lot. That's how I know a good critique.
@Eddh👽 I literally could not finish this good God. There needs to be a name for the genre of punditry where you try to convince your audience that X isn't a problem by just redefining what X is and not addressing the original issue
Daj#7482: "Applause Light" something something
Daj#7482: I like a lot of Glen's ideas but christ is he deep in post modernist lala land sometimes
Eddh👽#7290: I don't see the link to post modernism
bmk#1476: at least he didn't quote thiel's "AI is communist, crypto is libertarian" thing
Daj#7482: The title is literally post modern lol
Daj#7482: "Electricity is just a shared cultural illusion"
Sid#2121: I should read the article before commenting but how is "artificial intelligence" a belief
Sid#2121: *goes to read article*
bmk#1476: > Everything we accomplish depends on the social context established by other human beings who give meaning to what we wish to accomplish.
Eddh👽#7290: Ah I see what you mean. It's true that the title is a bit click bait but at least original. But I don't think that's what the article means to say
bmk#1476: > Supporting the philosophy of AI has burdened our economy.
Eddh👽#7290: And it seems true to me that there is an ideology behind the drive to develop AI in tech, which has political implications.
Daj#7482: I didn't finish the article, but from the first half or so I read, the title seems perfectly accurate in predicting the content
Daj#7482: And typical of Weyl's rhetoric
Daj#7482: > And it seems true to me that there is an ideology behind the drive to develop AI in tech, which has political implications.
@Eddh👽 sure, but that's a vacuous statement imo
Daj#7482: There's "an ideology" behind toothbrushes too
thenightocean#6100: also if I punch the writer in the face, problem is not the punch but the ideology of writer being able to perceive me knocking his teeth out.
thenightocean#6100: or smth
bmk#1476: i really don't get what the author is trying to say, it seems to be mostly about the ethics of data collection, which, i mean, i'm a fan of privacy and i dislike :zucc: as much as anyone else, but then there's a lot of other.. stuff
Chlorokin#6581: Weyl's book was great, save towards the end. His rhetoric gets worse and worse over time.
bmk#1476: like, if this entire thing was "Bigcorp™ collecting data from people is unethical and it needs to stop and/or we need to be compensated" i'd be totally fine and even supportive of it
Daj#7482: > Weyl's book was great, save towards the end. His rhetoric gets worse and worse over time.
@Chlorokin agreed with this. The 80k podcast is also good. Except where he calls rationalists neo reactionaries lol
Daj#7482: Weyl has some strange beliefs about data collection
bmk#1476: but then they bring in the china boogeyman and extreme carbon chauvinism and talk about AI being an ideology?
thenightocean#6100: this is basically vacuous "actually...its a ideology" style of writing that Scott criticised for a while: https://slatestarcodex.com/2018/01/15/maybe-the-real-superintelligent-ai-is-extremely-smart-computers/
bmk#1476: > Accusing an entire region of California of projection is a novel psychoanalytic manuever, and I’m not sure Chiang and Buzzfeed give it the caution it deserves.
Chlorokin#6581: The whole general failure mode of "X is an ideology/religion, therefore X cannot achieve its goal" is strange. Maybe X is an ideology/religion with a plausible mechanism of action. Maybe "reconceptualizing the narrative" is not a plausible mechanism of action.
Daj#7482: When I'm off the record I sometimes call AGI "The only religion that doesn't just worship a god, but _builds_ one" and other dark side epistemology things for fun
Daj#7482: But that's not an _argument_
Daj#7482: It's poetry
bmk#1476: > The whole general failure mode of "X is an ideology/religion, therefore X cannot achieve its goal" is strange. Maybe X is an ideology/religion with a plausible mechanism of action. Maybe "reconceptualizing the narrative" is not a plausible mechanism of action.
@Chlorokin from yud: something something the priests of science are the only ones who can perform actual miracles, like walk on the moon |
Daj#7482: (more fun metaphors: AI alignment is neo-catholicism. Be chaste and work hard in this life ("pre singularity") and you will be rewarded with heaven (aligned singularity) later, or if you don't you will go to hell (misalignment))
Daj#7482: In many ways, science is religion except, you know, it works
Chlorokin#6581: I like AI alignment as gnosticism and natural selection the demiurge.
Daj#7482: kind of a big deal
Daj#7482: Ooooh that's a good one
Eddh👽#7290: I see your point. Still I think it's worth taking criticism. I think the main point was about centralisation of power, and the current data gathering and processing systems and their developments making a huge concentration of power, unseen in history. And of course if AGI actually is developed, the biggest concentration of power in history might get in the hands of the team of 10 people that developed it.
Eddh👽#7290: or just 1 engineer potentially.
Daj#7482: That's just the fast takeoff scenario with extra steps
Daj#7482: Read Bostrom or Yudkwosky instead of this guy
Eddh👽#7290: I'll take your advice when I have the time.
Daj#7482: Happy to give more concrete reading suggestions if there's anything you want to learn about! I'm kind of a walking alignment lexicon
Daj#7482: I have yet to contribute anything meaningful but at least I read a lot lol
Chlorokin#6581: Voltaire's Church or the church of the god that will be.
Daj#7482: The only religion that unironically talks about catgirls
bmk#1476: > The only religion that unironically talks about catgirls
@Daj tell me more
Daj#7482: Lets move this to DMs 👀
bmk#1476: Oh no is this an infohazard
Daj#7482: Absolutely
Chlorokin#6581: First rule of the church is we do not talk about Reedspacer's lower bound outside of DMs. |
Daj#7482: Oh god it _actually_ has a name
Daj#7482: Finally I have a word to bully people that aim for weak alignment
Chlorokin#6581: Yep. Think it was from the Interpersonal Entanglement post in the sequences.
Daj#7482: It's from the Fun Theory Sequence
Daj#7482: Was that in the original sequences? I don't remember
bmk#1476: > This Utopia isn't famous in the literature.
Chlorokin#6581: I think so.
Chlorokin#6581: It is when your "the literature" is 4chan.
Daj#7482: Good that #reading-club is rereading the whole thing
bmk#1476: > And I said: "AAAAIIIIIIEEEEEEEEE"
Daj#7482: > It is when your "the literature" is 4chan.
@Chlorokin And harry potter fanfiction, apparently
bmk#1476: Pinging @AI_WAIFU
Chlorokin#6581: Cool. I remember reading it all the way through on my Kindle in my university library in a week-long stretch. Good times. Not sure I’ll ever have the energy to read it all the way through again.
Daj#7482: I'm enjoying the reread a lot
AI_WAIFU#2844: https://cdn.discordapp.com/attachments/729741769738158194/764535730579308585/iu.png
Daj#7482: God you're a walking stereotype, aren't you?
Daj#7482: haha
bmk#1476: We're talking about catgirls so if you have any opinions on this matter you should express them now
AI_WAIFU#2844: > God you're a walking stereotype, aren't you? |
AI_WAIFU#2844: I get that a lot
Chlorokin#6581: Hotz said something interesting in his Lex interview, basically the most efficient interface between an AI and a human is a catgirl/catboy.
AI_WAIFU#2844: But yes catgirls are a necessary part of the post-singularity utopia.
Daj#7482: > Hotz said something interesting in his Lex interview, basically the most efficient interface between an AI and a human is a catgirl/catboy.
@Chlorokin lel
bmk#1476: this depends on the definition of efficient
bmk#1476: and on the definition of interface
Daj#7482: You, a hentai loving plebeian: Humans and cat girls coexist
Me, an enlightened utilitarian: No humans _or_ cat girls, only computronium
Chlorokin#6581: His claim was being in a loving relationship with an aligned AI > neuralink. AI girlfriend rather than exocortex.
AI_WAIFU#2844: Why not both?
AI_WAIFU#2844: ~~Give your waifu read/write access to your mind. ~~
Daj#7482: > His claim was being in a loving relationship with an aligned AI > neuralink. AI girlfriend rather than exocortex.
@Chlorokin this is the most 4chan thing I have ever heard jesus christ
Daj#7482: > ~~Give your waifu read/write access to your mind. ~~
@AI_WAIFU I revise my previous statement
bmk#1476: honestly, after we have aligned ai, i don't really share yud's concern that humanity will splinter
Daj#7482: We're almost as weird as those transhumanist furries that interviewed Anders Sandberg
bmk#1476: the catperson people can form their own thing and everyone else can form a different thing
bmk#1476: and like, i don't think that's a big deal? |
Daj#7482: Eh, if humans (as anything other than uploads or post human demigods) still exist we basically failed
bmk#1476: if your definition of human is untampered monkey meat suits then i agree
bmk#1476: i think human + neuralink as a neo-neo-cortex is good enough
Daj#7482: I can't see how that would prevent e.g. wire heading
bmk#1476: also it would be nice to turn off the primitive mind stuff
Daj#7482: Now we're back at my problems with amplification hah
bmk#1476: but i feel like that wouldn't be too hard if we can already do neo neo cortii
Daj#7482: > also it would be nice to turn off the primitive mind stuff
@bmk oh, so the parts that make you human?
AI_WAIFU#2844: I don't think it's super obvious, people have preferences about what other people can and can't do even if they are not directly affected by those things, and there are also commons that need to be shared regardless. That doesn't magically go away with FAI.
bmk#1476: > @bmk oh, so the parts that make you human?
@Daj they're literally the least human parts of me because all animals have them too
Daj#7482: > I don't think it's super obvious, people have preferences about what other people can and can't do even if they are not directly affected by those things, and there are also commons that need to be shared regardless. That doesn't magically go away with FAI.
@AI_WAIFU except we can directly rewrite what people want
bmk#1476: my neocortex *is* the most human part of me
bmk#1476: and my neoneocortex would be even more human
Daj#7482: > my neocortex *is* the most human part of me
@bmk You're a vegetable if you only have a neocortex
Daj#7482: Decorticated rats can still do most species-normal behavior
bmk#1476: a computer is a brick if it only has a cpu |
Daj#7482: Neocortex isn't a cpu
Daj#7482: It's a GPU
bmk#1476: sure, gpu
Daj#7482: Basal Ganglia and midbrain are the CPU
AI_WAIFU#2844: > @AI_WAIFU except we can directly rewrite what people want
To quote EY: "AAAAIIIIIIEEEEEEEEE"
https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement
bmk#1476: the gpu is the most computing part of the computer
Daj#7482: > To quote EY: "AAAAIIIIIIEEEEEEEEE"
> https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement
@AI_WAIFU I'm saying _can and will do so if we don't solve the problem_
bmk#1476: the cpu is just the thing that keeps the gpu fed
Daj#7482: I'm with EY here
Sid#2121: https://tenor.com/view/grandpa-abe-exit-confused-bye-bart-gif-7694184
Sid#2121: me coming into #general
Chlorokin#6581: "Me, an enlightened utilitarian: No humans or cat girls, only computronium" Computronium is the preferable substrate for both humans and cat girls.
Daj#7482: > "Me, an enlightened utilitarian: No humans or cat girls, only computronium" Computronium is the preferable substrate for both humans and cat girls.
@Chlorokin Now we're talking
bmk#1476: i have serious concerns about the feasibility of uploads
Daj#7482: > the cpu is just the thing that keeps the gpu fed |
@bmk yea, so with only a disconnected GPU there's not much "intelligence" there
Chlorokin#6581: What you have to realize is moral realism is true and it is catgirls.
Sid#2121: Pinned a message.
bmk#1476: my point is we can turn off parts of the primitive mind without just yeeting it entirely out of existence
Daj#7482: If you think a disconnected neocortex is sentient then Neural Networks are _absolutely_ sentient
bmk#1476: i don't mean that i'm *just* my neocortex
Daj#7482: Which parts are you then? Now we're talking hard problem of consciousness lol
Daj#7482: Or my preferred approach, qualia formalism
bmk#1476: your claim was that my primitive mind was my most human part and my claim was that actually it's the part that is least uniquely human
Sid#2121: > Which parts are you then? Now we're talking hard problem of consciousness lol
@Daj e m e r g e n c e (please don't ban me)
Daj#7482: > your claim was that my primitive mind was my most human part and my claim was that actually it's the part that is least uniquely human
@bmk No, my claim was you can't just "shut the primitive parts off"
bmk#1476: something quantum
Daj#7482: > @Daj e m e r g e n c e (please don't ban me)
@Sid on thin ice, bud
Sid#2121: > something quantum
@bmk this feels just as bad as emergence lol
Daj#7482: > something quantum
@bmk :ultrazucc: |
bmk#1476: we can write a better bootloader for our neocortex
Sid#2121: ok, you first
AI_WAIFU#2844: > I'm saying can and will do so if we don't solve the problem
Ok, although I think we need to be really careful with that because "preference rewriting" is some proper evil supervillain shit.
Daj#7482: That's like saying "well, if I just throw out the OS and all the programs and put different programs on it, it's still the exact same system"
Chlorokin#6581: Def something quantum, or maybe complexity, or linear algebra.
Daj#7482: The neocortex just executes the genetic programming in your subcortex
Daj#7482: The subcortex is what calls the shots, "system 1"
Daj#7482: > Ok, although I think we need to be really careful with that because "preference rewriting" is some proper evil supervillan shit.
@AI_WAIFU No shit, but it's effective, and therefore subject to Molochian dynamics
AI_WAIFU#2844: ^
Daj#7482: Which is why I say that weak alignment can not and will not work
bmk#1476: ok then if we cant replace the subcortex, at least we can have our neoneocortex be an advisor to our subcortex and our subcortices just have to learn to listen
Daj#7482: Sure that's just amplification with extra steps
bmk#1476: maybe we can physically modify them to be more likely to listen
bmk#1476: introduce a new bias on top of all them biases
bmk#1476: "neoneocortex bias"
Daj#7482: That's preference modification
Daj#7482: > To quote EY: "AAAAIIIIIIEEEEEEEEE"
> https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement |
@AI_WAIFU
bmk#1476: everything is preference modification
Daj#7482: Then why not modify yourself to desire nothing?
Chlorokin#6581: How can you be "weakly aligned" when edge instantiation is a thing.
bmk#1476: i feel like this is a much more minimal preference modification
Daj#7482: > How can you be "weakly aligned" when edge instantiation is a thing.
@Chlorokin I don't understand this question
bmk#1476: it's shades of grey all the way down
Daj#7482: > i feel like this is a much more minimal preference modification
@bmk how would you prevent more extreme modifications from happening?
Daj#7482: The god of the universe is Moloch
bmk#1476: like, by accident?
Chlorokin#6581: If you are imperfectly aligned doesn't edge instantiation eventually amplify it to total misalignment.
Daj#7482: Accident, wire heading, misalignment, malignant actors
AI_WAIFU#2844: cosmic ray...
Daj#7482: > If you are imperfectly aligned, doesn't edge instantiation eventually amplify it to total misalignment.
@Chlorokin Yea that's my point
bmk#1476: we dont just not do surgeries because of the risk of complications
Daj#7482: We don't just do surgery without washing our hands, either
Daj#7482: Also, surgery won't literally destroy all value that has ever existed forever |
Chlorokin#6581: > We don't just do surgery without washing our hands, either
@Daj Yes, but presumably the weak alignment people have a response to that line.
AI_WAIFU#2844: > Then why not modify yourself to desire nothing?
Was this addressed to me?
Daj#7482: > Was this addressed to me?
@AI_WAIFU To bmk
Daj#7482: > @Daj Yes, but presumably the weak alignment people have a response to that line.
@Chlorokin How about this: We have a surgery technique that guarantees to 100% of the time always kill the person
Daj#7482: Should we use the technique? It's the best we have
Daj#7482: Don't get me wrong, if we end up with 5 years of VR cat girl heaven before Moloch destroy everything, fine
Daj#7482: But it feels ridiculous to just give up on trying
Chlorokin#6581: Well, at least we tried.
Daj#7482: But we're not!
Daj#7482: Other than MIRI, who is working on _strong_ alignment?
Daj#7482: I love Christiano, his methods will definitely get us cat girls, but I just don't see how it'll work 10 million years down the line
Chlorokin#6581: Russell?
bmk#1476: well, we should be working on it but we aren't really
Daj#7482: Any method that is based on aligning with real humans is weak
Daj#7482: Russell is basically isomorphic to Christiano imo
bmk#1476: the entirety of our contribution to alignment has been long philosophical debates with no real conclusion |
Daj#7482: Yep, still not giving up because of that
Chlorokin#6581: One of the upsides of being a midwit is I don't feel morally responsible for contributing to the imminent destruction of our light cone.
Daj#7482: That's an excuse, being someone that can make a large contribution doesn't _feel_ like being big brain
Daj#7482: Hero Licensing argument
Daj#7482: It's surprising how utterly unimpressive most people that made big contributions were until the exact moment that they did
Daj#7482: Also, "trying" is a strictly dominant strategy over "not trying"
bmk#1476: ok so what should we (eleuther) do to contribute to alignment
Daj#7482: To quote EY: "AAAAIIIIIIEEEEEEEEE"
AI_WAIFU#2844: I don't think you need strong alignement to halt molochian dynamics. You don't need to create utopia, you just need to put on the brakes.
bmk#1476: we haven't really done anything concrete at all
Daj#7482: No one knows how to solve these problems, so you have to _try_
Daj#7482: > I don't think you need strong alignement to halt molochian dynamics. You don't need to create utopia, you just need to put on the brakes.
@AI_WAIFU I think any system that can stop Moloch is "strongly aligned" by my definition...Probably, need to think about this
Daj#7482: EY was just some weeaboo with a blog
Daj#7482: Thank god he _tried_
Chlorokin#6581: Yes. But high IQ is required to push things in either direction. I just don't have the technical chops to push things in either direction technically, unless the world can be saved by writing CRUD in Perl and Javascript.
My model of the world is that elites are basically all that matter. I do give MIRI 200 bucks a month though, so I must feel some responsibility.
Daj#7482: I don't know you personally, but at least for me personally I'm a super depressive archetype and think I'm a fucking idiot all the time, but I learned long ago to still _try_
thenightocean#6100: But Paul Christiano is working for OpenAI, right? Is he considered the best expert in AI alignment atm?
Daj#7482: though tbh even just those SSC meetups are already probably a bigger contribution than the mean by far |
thenightocean#6100: so maybe we already have our best shot for now
Daj#7482: > But Paul Christiano is working for OpenAi, right? Is he is considered the best expert in Ai alignement atm ?
@thenightocean He's kinda the poster boy for prosaic AI alignment
Daj#7482: I don't like deferring to authority like this
Daj#7482: I would _never_ get in the way of Christiano or EY, but I can _try_ in parallel
Chlorokin#6581: For sure. But my point is, I don't feel guilty about it. It is not my hubris that will destroy the world. Any influence I have will be marginal compared to the IMO winners. Still worth donating and stuff.
Daj#7482: Eh, I still don't agree with that, but I see where you're coming from
Daj#7482: I'm projecting
bmk#1476: so *what* do we do in parallel?
Daj#7482: Figure it out!
Daj#7482: I'm trying to figure it out every waking moment
bmk#1476: i don't feel like any of our hours-long debates are useful for anything
Daj#7482: They have been useful to me
Daj#7482: I feel like I'm making progress
Daj#7482: And if I don't, I'll try something else
Daj#7482: Worst case, complete failure and zero impact
Sid#2121: if you don't feel like they're useful, put your work somewhere else I guess. I've learnt a lot from them
Sid#2121: I do think we should be erring on the side of putting our time into running experiments rather than having debates, though
AI_WAIFU#2844: Multipolar traps happen when you have multiple agents with comparable levels of power. When you have a single agent with an overwhelming strategic advantage, other agents won't bother to sacrifice their values. Alternatively if you have a collective of agents with a shared goal, no one has an incentive to sacrifice their values for more power.
Daj#7482: Debates are recreation and oratory practice for me mostly |
Daj#7482: I can try out arguments here before releasing them to the public
Daj#7482: But yes, experiments are good if we can do them
Sid#2121: I also feel like there's a big problem in the alignment field with everyone feeling like they have to do everything on their own
Sid#2121: seems like there's a lack of teamwork and a little bit of hero fetishism
Daj#7482: > Multipolar traps happen when you have multiple agents with comparable levels of power. When you have a single agent with an overwhelming strategic advantage, other agents won't bother to sacrifice their values. Alternatively if you have a collective of agents with a shared goal, no one has an incentive to sacrifice their values for more power.
@AI_WAIFU Yea this is the really intense Andrew Critch type stuff. _Please more people work on this_
bmk#1476: i feel like most of these topics we debate are philosophically hard in the first place
Daj#7482: > seems like there's a lack of teamwork and a little bit of hero fetishism
@Sid I feel it's the opposite. Too much hero worship
Sid#2121: wait, that's what I'm saying
Daj#7482: (fwiw, this is also MIRI/EY's view, they strongly encourage groups of rogue nerds to do stuff. That's the whole point of MIRIx)
bmk#1476: i don't think solving philosophically hard problems is the right way to solve the thing
Daj#7482: I think it is
Daj#7482: If you think it's not, spend your time elsewhere!
bmk#1476: philosophically hard is infinitely harder than technically hard
Daj#7482: Nah
Sid#2121: *both* is even harder, tho
bmk#1476: i mean, that's why i've been intensively studying math for the past few months
Daj#7482: People underestimate how much progress we have made in philosophy
Daj#7482: Philosophy problems are just confused technical problems |
bmk#1476: what's an example of a thing that philosophy has solved?
Daj#7482: Deconfuse the philosophy, solve the science
bmk#1476: or not solved but progressed
Daj#7482: Lets not go into "solved" because you can always find someone who disagrees
bmk#1476: it feels like people just keep departing further into their own branches of philosophy
Daj#7482: Progress: Metamathematics, consciousness, induction, epistemology...
Sid#2121: > or not solved but progressed
@bmk the... enlightenment? Scientific method?
Chlorokin#6581: Greg Brockman said something interesting about this, to the effect of: We are probably out of luck solving millennia-old philosophical problems, except to the extent our technical progress informs us about them.
Sid#2121: computing basically stems from a branch of philosophy
Chlorokin#6581: Made me think OpenAI may get more aligned with time.
Daj#7482: There was a time where Quinean Naturalism (sequence type stuff) _didn't exist_
Daj#7482: That was less than 100 years ago
Daj#7482: more like 50
bmk#1476: i have no clue what any of that is
Sid#2121: > Progress: Metamathematics, consciousness, induction, epistemology...
@Daj I thought consciousness was a spook
Daj#7482: that's progress
Sid#2121: fair
bmk#1476: my understanding of philosophy is a bunch of people arguing over trolley type problems using their own pet theories |
AI_WAIFU#2844: Do we even need that much philosophical progress though? IMO the first reasonable AI safety proposal came from bostrom and was basically "we kick all the complex philosophy down the road after dealing with the immediately pressing issues"
Daj#7482: Dennett solved consciousness as far as I'm concerned
Daj#7482: > my understanding of philosophy is a bunch of people arguing over trolley type problems using their own pet theories
@bmk _philosophy as a field_, yes
Daj#7482: The philosophy _field_ is bullshit
bmk#1476: what are you talking about then?
Daj#7482: But as far as I'm concerned EY is primarily a philosopher
thenightocean#6100: I wonder how many similar conversations happen in OpenAI and other cutting-edge groups? Is Ilya Sutskever aware of alignment research, for example?
Daj#7482: > what are you talking about then?
@bmk There has been philosophic progress on problems I care about
AI_WAIFU#2844: Rats have been pretty good at making philosophical progress IMO
Daj#7482: If I was born in the 1950s my philosophy would be strictly worse
Daj#7482: (Negative) Utilitarianism is another point
Daj#7482: Identity Theory (Derek Parfit)
Daj#7482: > I wonder how much are there similar kind of conversations in OpenAI and other cutting edge groups? Is Ilya Stutskever aware of alignment research for example?
@thenightocean I'm pretty sure
Chlorokin#6581: You're a negative utilitarian?
Daj#7482: > Do we even need that much philosophical progress though? IMO the first reasonable AI safety proposal came from bostrom and was basically "we kick all the complex philosophy down the road after dealing with the immediately pressing issues"
@AI_WAIFU Oh right, Bostrom is a philosopher, don't forget that
Daj#7482: And I agree with "Long Reflection" (another philosophical progress, ding!) ideas btw |
Daj#7482: > You're a negative utilitarian?
@Chlorokin Ehhhh kinda
Daj#7482: I'm a utilitarian that thinks that the average suffering is much worse than the average good thing
bmk#1476: i'm not sure i would call myself a utilitarian
Daj#7482: Relevant MIRI post btw: https://intelligence.org/2013/11/04/from-philosophy-to-math-to-engineering/
thenightocean#6100: I think most people are when push comes to shove
Daj#7482: Philosophy is just confused mathematics
Daj#7482: Mathematics is just engineering waiting to happen
bmk#1476: how do we know that not only is there a well ordering over world states, but also that these world states actually have *scalar* preferability?
AI_WAIFU#2844: I'm not suggesting to stop, but I think that the philosophical problems we need to solve are likely tractable since we can solve the trickier ones down the road.
Chlorokin#6581: Average utilitarianism, biting the bullet on the weirdness.
Daj#7482: > how do we know that not only is there a well ordering over world states, but also that these world states actually have *scalar* preferability?
@bmk That's the kind of problems I think we need to work on!
Daj#7482: > I'm not suggesting to stop, but I think that the philosophical problems we need to solve are likely tractable since we can solve the trickier ones down the road.
@AI_WAIFU This is basically what I believe as well
Daj#7482: I think Strong Alignment is probably tractable with human intelligence
Daj#7482: also, re being a utilitarian, I of course _act_ like a deontologist in practice
thenightocean#6100: its just in our culture Utilitarianism has a bad rap as evil ideology. Like things Thanos and Straw Vulcans believe
Daj#7482: Yep, utilitarianism is so _obviously good_ but it's villainized everywhere because it means you have to be nice to the outgroup
bmk#1476: i'm not doing the pop culture strawman |
bmk#1476: i'm asking a really technical question
Daj#7482: I know, that wasn't aimed at you
bmk#1476: what does it mean for one state to be twice as desirable as another?
Daj#7482: _That's what we have to figure out!_
Daj#7482: You're making my point for me
bmk#1476: if you ask me, you can only define a well ordering but not a scalar value on world states
Daj#7482: That'd be fine by me
Daj#7482: (and is what I believe in practice)
bmk#1476: that would not be fine by me
AI_WAIFU#2844: Utilitarianism is also rightly vilified because people are really good at lying to themselves, and so when they believe they're doing something for the "greater good" the lizard brain pulling the strings is actually just advancing its own interests.
bmk#1476: it breaks any notion of expected utility
Daj#7482: If I could define even an extremely approximate ordering of world states that is _truly, actually just_ I would be _ecstatic_
bmk#1476: if utility is well ordered but not scalar, then the concept of expected utility is completely meaningless
Daj#7482: Yea I'm fine with that
Daj#7482: Would be unfortunate but so be it
Daj#7482: If something can be destroyed by the truth, it should be
AI_WAIFU#2844: > how do we know that not only is there a well ordering over world states, but also that these world states actually have scalar preferability?
There are multiple such well orderings.
bmk#1476: and defining scalar utility in terms of "would you prefer x or y_with_probability_p" is circular
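A tiny worked example of the point bmk is making here (toy Python; the outcomes and numbers are made up): two utility functions can agree on the ordering c > b > a and still recommend different choices under uncertainty, which is why expected utility needs scalars and not just an ordering.
```python
# Outcomes ranked c > b > a. Compare "b for sure" against a 50/50 gamble on a or c.
gamble = {"a": 0.5, "c": 0.5}

def expected_utility(lottery, u):
    return sum(p * u[outcome] for outcome, p in lottery.items())

for u in ({"a": 0, "b": 1, "c": 2},     # evenly spaced scalars: indifferent to the gamble
          {"a": 0, "b": 1, "c": 10}):   # same ordering, different scalars: take the gamble
    print("gamble:", expected_utility(gamble, u), "vs certain b:", u["b"])
# Both assignments respect c > b > a, yet they disagree about the lottery,
# so the ordering alone does not determine expected-utility behavior.
```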
bmk#1476: i mean i dont doubt it |
Daj#7482: You're basically saying "Wow there's a hard problem here" and I'm responding "Yes! We should try to solve it!"
Daj#7482: If we fail, we fail
bmk#1476: is there already literature on well ordered but non scalar utility?
bmk#1476: i dont want to reinvent the wheel
Daj#7482: But it seems weird to say "Wow this problem is hard, lets solve this easier problem that has no chance of succeeding but we can probably solve"
Daj#7482: > is there already literature on well ordered but non scalar utility?
@bmk Oh I'm sure there is _somewhere_ in the philosophy literature, but I'm not that deep into that stuff
Daj#7482: I only have 24 hours in a day to read hah
Daj#7482: Ask me again in 10 years
bmk#1476: ok haha
bmk#1476: @StellaAthena you do philosophy stuff right?
bmk#1476: pls help some noobs out
Daj#7482: > But it seems weird to say "Wow this problem is hard, lets solve this easier problem that has no chance of succeeding but we can probably solve"
@Daj Btw, disclaimer: This is _totally_ a strawman of prosaic AI alignment
Daj#7482: Christiano _definitely_ doesn't believe this
Daj#7482: and has much more nuanced arguments
Daj#7482: I have to finish Parfit before I move on to other philosophers hah
Daj#7482: also this is 100% unironically my opinion: The average front page LW post has more philosophical merit than most philosopher's careers
Daj#7482: spicy haha
bmk#1476: im going to keep going deeper down the math rabbithole before going for philosophy |
Daj#7482: Sure, once the philosophy is deconfused, that's what's needed next
Daj#7482: If you have a comparative advantage for math, for heaven's sake do math
Daj#7482: I was unfortunately born with the severe cognitive defect called "Being an Artist"
Daj#7482: Trying my best to treat it with code therapy hah
Chlorokin#6581: > also this is 100% unironically my opinion: The average front page LW post has more philosophical merit than most philosopher's careers
@Daj Don't let /r/sneerclub read that.
bmk#1476: I mean, next to real mathematicians my math ability is absolutely abysmal
Daj#7482: > @Daj Don't let /r/sneerclub read that.
@Chlorokin Hahahaha
thenightocean#6100: > @Daj Don't let /r/sneerclub read that.
@Chlorokin 😄
Daj#7482: > I mean, next to real mathematicians my math ability is absolutely abysmal
@bmk I'm an idiot, you're an idiot, we're all idiots, oh well c'est la vie
thenightocean#6100: I sometimes read sneerclub I admit. Guess its my kink
Daj#7482: > I sometimes read sneerclub I admit. Guess its my kink
@thenightocean Angry nerds bullying other nerds?
Daj#7482: lewd
AI_WAIFU#2844: > also this is 100% unironically my opinion: The average front page LW post has more philosophical merit than most philosopher's careers
LW > 1000 years of philosophy
thenightocean#6100: u need to be informed about your enemy dude |
Chlorokin#6581: Dear god, the thought of that as an actual kink just popped into my head.
Daj#7482: ~~tbf, I also (mostly ironically) think some LW people should be bullied lol~~
bmk#1476: I'm going to unrefine my earlier search query: is there literature on the perspective of utility as assigning values to *world states*
bmk#1476: This is a fairly wide net
Daj#7482: > LW > 1000 years of philosophy
@AI_WAIFU Depending on where those 1000 years are, this is pretty much true by definition
AI_WAIFU#2844: shh...
Sid#2121: > ~~tbf, I also (mostly ironically) think some LW people should be bullied lol~~
@Daj I gotta say EY talking about relationships and catgirls made me want to bully him, just a little
Daj#7482: > @Daj I gotta say EY talking about relationships and catgirls made me want to bully him, just a little
@Sid Right??!
Daj#7482: :D
bmk#1476: According to wikipedia, most literature starts from trying to define utility for individuals by picking something like pleasure and then mixing in various amounts of deontology
AI_WAIFU#2844: > I'm going to unrefine my earlier search query: is there literature on the perspective of utility as assigning values to world states
Sounds like part of the embedded agency literature would cover that.
Daj#7482: And yeah, most good philosophy comes from AI research tbh
Daj#7482: Not just LW
bmk#1476: > @Daj I gotta say EY talking about relationships and catgirls made me want to bully him, just a little
@Sid I don't tbh, does that make me even nerdier than the baseline in here
Daj#7482: Judea Pearl did more for causality than thousands of wasted philosophy tenures talking about ravens |
AI_WAIFU#2844: Although I think it's more useful to think of utility over programs that generate the world rather than over world states.
Daj#7482: > @Sid I don't tbh, does that make me even nerdier than the baseline in here
@bmk No, me and Sid are just the artists among autists
bmk#1476: Ah
Sid#2121: lmao
Sid#2121: I'm either an artist among autists or an autist among artists
Daj#7482: > Although I think it's more useful to think of utility over programs that generate the world rather than over world states.
@AI_WAIFU Probably? I need to think about this
Sid#2121: my whole life has been skirting that line
Daj#7482: There is a kind of autism nerd hierarchy
Daj#7482: like how all the game engineering students at my uni are the cool kids and the CS are the nerds
Daj#7482: haha
FractalCycle#0001: I am lower on it because i have trouble focusing lol
Daj#7482: (disclaimer: This is shitposting)
AI_WAIFU#2844: > There is a kind of autism nerd hierarchy
The farther up you are in this hierarchy the more your social status correlates with it.
bmk#1476: Would it be a good idea for me to just write up my ideas and then let the rabid mob of the internet tell me what literature from (Schmidhuber, 1763b) I'm missing
AI_WAIFU#2844: > This is shitposting
But postironically.
Daj#7482: > Would it be a good idea for me to just write up my ideas and then let the rabid mob of the internet tell me what literature from (Schmidhuber, 1763b) I'm missing |
@bmk Ah yes, Poe's Law
Daj#7482: (this is a joke)
FractalCycle#0001: i have a friend into philosophy, he's probably read Schmidhuber
bmk#1476: A la "best way to figure out how to do something on Linux is to say Linux can't do it"
Daj#7482: > But postironically.
@AI_WAIFU :bigbrain:
Daj#7482: Schmidhuber is pretty decent AI philosophy tbh
Daj#7482: Mostly wrong imo and way too weak alignment
Sid#2121: is there any way to read schmidhuber without having to subject myself to his godawful website
bmk#1476: > is there any way to read schmidhuber without having to subject myself to his godawful website
@Sid what do you mean it is the greatest website ever
Chlorokin#6581: Schmidhuber wants to consciously set forth a Malthusian hell. He seems fully aware of what this implies. Weak on alignment is a bit of an understatement.
AI_WAIFU#2844: Schmidhuber, Hutter, and Solomonoff made some really good AI philosophy progress that was promptly ignored by everyone.
Sid#2121: > @Sid what do you mean it is the greatest website ever
@bmk https://cdn.discordapp.com/attachments/729741769738158194/764551284455768074/Screenshot_2020-10-10_at_20.13.37.png
Daj#7482: God I can hear his voice every time I see his face
FractalCycle#0001: i
FractalCycle#0001: wait is Schmidhuber real? i thought that was fake
FractalCycle#0001: i literally cannot tell rn
Daj#7482: Also, btw my serious advice on how to work on this stuff: _Just read tons of stuff_. You'll develop a "taste" and a list of people and ideas you're interested in
Daj#7482: > wait is Schmidhuber real? i thought that was fake
@FractalCycle :berk:
FractalCycle#0001: aaaaaaaaaa
FractalCycle#0001: https://en.wikipedia.org/wiki/J%C3%BCrgen_Schmidhuber
Daj#7482: Did you _seriously_ think Schmidhuber was some kind of boogeyman the AI world invented?
Daj#7482: That's absolutely hilarious if so
FractalCycle#0001: i thought he was a fake 1800s german philosopher like hegel that we just made up
Chlorokin#6581: > i thought he was a fake 1800s german philosopher like hegel that we just made up
@FractalCycle That is hilarious. The world would be a better place if this were true.
Daj#7482: This is advanced shitposting
Daj#7482: tbh I met him once
FractalCycle#0001: alright, then my friend likely hasn't read him lol
Daj#7482: and if any person was fake it would be him
AI_WAIFU#2844: three men are sentenced to death, one of them german, one of them french and one of them british...
Daj#7482: He's like a fucking James Bond villain, no joke
AI_WAIFU#2844: https://www.youtube.com/watch?v=fnbZzcruGu0
Daj#7482: His compression theory is really neat
Daj#7482: But he comes to a hilariously wrong conclusion about alignment
Daj#7482: I think EY even bullies him in the sequences |
thenightocean#6100: "According to The Guardian,[35] Schmidhuber complained in a "scathing 2015 article" that fellow deep learning researchers Geoffrey Hinton, Yann LeCun and Yoshua Bengio "heavily cite each other," but "fail to credit the pioneers of the field", allegedly understating the contributions of Schmidhuber and other early machine learning pioneers including Alexey Grigorevich Ivakhnenko who published the first deep learning networks already in 1965. LeCun denies the charge, stating instead that Schmidhuber "keeps claiming credit he doesn't deserve".[2][35]"
Daj#7482: Schmidhuber invented AGI in 1991
Sid#2121: > i thought he was a fake 1800s german philosopher like hegel that we just made up
@FractalCycle this is fucking hilarious.
Sid#2121: brb starting a rumour that Hegel is fictional
FractalCycle#0001: this is dank, mind if i screenshot the schmidhuber discussion? my friend would crack up
AI_WAIFU#2844: EY shit's on NNs in the sequences
Daj#7482: If Schmidhuber invented AGI in 1991, but no one cites him, did he make a sound?
thenightocean#6100: tbf he might have a point. I never heard of that guy until year or 2 ago, but everyone knows about LeCun, Bengio and Hinton
Daj#7482: > this is dank, mind if i screenshot the schmidhuber discussion? my friend would crack up
@FractalCycle This is the quality you can only get at EleutherAI™️
Daj#7482: fwiw I think Schmidhuber is _mostly_ actually correct
FractalCycle#0001: maybe he'll join the server haha
Daj#7482: He's like the Stephen Wolfram of AI
AI_WAIFU#2844: > tbf he might have a point.
He's 100% correct.
StellaAthena#3530: > He’s like the Stephen Wolfram of AI
@Daj who is this?
AI_WAIFU#2844: He's just not cool with the modern AI crowd
Daj#7482: Schmidhuber |
thenightocean#6100: whats with his company btw (Nnaisense)? I would expect great things based on his reputation
Daj#7482: I said _mostly_ correct
Daj#7482: lol
AI_WAIFU#2844: I think their business model was to get bought out but that never happened.
AI_WAIFU#2844: Because they're not cool
Daj#7482: I am trying to lobby my company to get Schmidhuber as an advisor
Daj#7482: If only for the meme lol
Sid#2121: > I am trying to lobby my company to get Schmidhuber as an advisor
@Daj PLEASE
AI_WAIFU#2844: Also schmidhuber is old, I don't think he's pumped out much recently.
Daj#7482: Jonas thinks it would be hilarious
Sid#2121: where does he work?
Sid#2121: i mean like, area of germany wise
FractalCycle#0001: >mario says "A.G.I. will take 1,000 years to make and it will require semantic Bayesian networks complexity to mimic the human sacredness."
>luigi says "If it's not at scale, it's going to fail!"
Sid#2121: not specifically
Daj#7482: He works in Switzerland
Sid#2121: don't plan to stalk... or anything
Sid#2121: oh, thought he was germany based |
Daj#7482: I studied at the same uni as him though
AI_WAIFU#2844: IDSIA
Daj#7482: He used ot be
Daj#7482: > >mario says "A.G.I. will take 1,000 years to make and it will require semantic Bayesian networks complexity to mimic the human sacredness."
>
> >luigi says "If it's not at scale, it's going to fail!"
@FractalCycle Please make this into a meme so I can post it to Twitter
FractalCycle#0001: on it
FractalCycle#0001: >most useful i've been to the project so far
bmk#1476: eleuther has a massive meme footprint
Daj#7482: Still far too small
bmk#1476: once we release gpt2 we need to flood the internet with our OC memes
Daj#7482: We need to do that _right now_
Daj#7482: hahaha
FractalCycle#0001: https://cdn.discordapp.com/attachments/729741769738158194/764554937551683584/marioagi_final.png
FractalCycle#0001: may gLob have mercy on my soul: i have accelerated AGI by making a meme for an open source project
Daj#7482: Please tweet it so I can retweet or can I tweet it?
FractalCycle#0001: k
AI_WAIFU#2844: On the topic of more concrete AI safety research, a topic that is relevant but badly understood has to do with chains of command. Even if we maintain control, at scale we'll likely need agents or agent-like structures to inform us about what the AIs are doing and to translate that into commands for more detailed actions for smarter AIs or AI components. This is a pretty central problem for HCH and Christiano-style amplification, but it's also relevant for other proposals.
FractalCycle#0001: https://twitter.com/FractalCycle/status/1314996762306846722 |
bmk#1476: this sounds a lot like inner alignment
AI_WAIFU#2844: It's very related to inner alignment.
Sid#2121: > eleuther has a massive meme footprint
@bmk unironic opinion: good alignment memes could be more effective to alignment as a whole than our alignment debates here
bmk#1476: to be exact, it sounds like a strictly easier version of inner alignment
AI_WAIFU#2844: But you can study it with multiple agents in an environment
AI_WAIFU#2844: Yup
Daj#7482: > On the topic of more concrete AI safety research, a topic that is relevant but badly understood has to do with chains of command. Even if we maintain control, at scale we'll likely need agents or agent-like structures to inform us about what the AIs are doing and to translate that into commands for more detailed actions for smarter AIs or AI components. This is a pretty central problem for HCH and Christiano-style amplification, but it's also relevant for other proposals.
@AI_WAIFU If there's a human involved, Connor's....not interested (I need something that rhymes...)
bmk#1476: > @bmk unironic opinion: good alignment memes could be more effective to alignment as a whole than our alignment debates here
@Sid let's get on it then
AI_WAIFU#2844: That's why I think it's a good research direction, and I think it's understudied, so there might be low hanging fruit.
Daj#7482: I feel like that is almost all what Christiano researches
AI_WAIFU#2844: We also have plenty of existing social hierarchies, and people who study them. So just collating research on human chains of command and extracting general insights can be useful.
Daj#7482: _Robin Hanson has entered the chat_
AI_WAIFU#2844: The other thing is that if you take the alignment-as-interfaces perspective I was ranting about earlier, work on brain-machine interfaces might also be productive, but it's far outside of my domain of expertise.
bmk#1476: this is a bit crazy but: assume you have a finite number of world-states and you have some total order on them but no scalar preference
bmk#1476: if you assign each world state a utility computed purely by looking at how many other world states that state is preferable to and nothing else
bmk#1476: how well does this play with existing decision theories?
bmk#1476: it seems like this would break a lot of things but i also can't say that with certainty |
Daj#7482: Now you're thinking with MIRI
bmk#1476: and there's a chance something like it could be made to work
AI_WAIFU#2844: I mean, that's just constructing a utility function.
bmk#1476: or what if you weight each world-state by its likelihood
bmk#1476: yeah, but im constructing it from just a well ordering
bmk#1476: and this works for partial orders too
bmk#1476: (or it might, at least)
AI_WAIFU#2844: You could also alter the properties of it to make the agent more/less risk averse by passing the scalar through a concave/convex utility function.
bmk#1476: but that starts introducing more priors
bmk#1476: i'm sort of thinking about a "free utility function"
bmk#1476: if you do the likelihood weighting this works for infinite world states, too
bmk#1476: just look at the measure of world states your world state is preferable to
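A minimal sketch of the construction being described, assuming nothing beyond a pairwise preference relation and (optionally) a probability measure over states; this only illustrates the counting idea and makes no claim about how it behaves under any particular decision theory. The concave/convex reshaping mentioned above could be applied to the returned scalars afterwards.
```python
from itertools import combinations

def utility_from_ordering(states, prefers, prob=None):
    """Assign each state a scalar: the (probability-weighted) measure of states it beats.

    states  : list of world-states
    prefers : prefers(x, y) -> True iff x is strictly preferred to y
              (may be a partial order: both directions can be False)
    prob    : optional dict state -> probability for the likelihood-weighted
              variant; defaults to uniform weighting
    """
    if prob is None:
        prob = {s: 1.0 / len(states) for s in states}
    utility = {s: 0.0 for s in states}
    for x, y in combinations(states, 2):
        if prefers(x, y):
            utility[x] += prob[y]        # x beats y, weighted by y's measure
        elif prefers(y, x):
            utility[y] += prob[x]
        # incomparable pairs (partial order) contribute nothing
    return utility

# Toy example with a total order c > b > a
rank = {"a": 0, "b": 1, "c": 2}
print(utility_from_ordering(list(rank), lambda x, y: rank[x] > rank[y]))
# c gets the highest score and a the lowest, using only the ordering as input
```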
Daj#7482: You already made more progress on this problem than 99% of mainstream philosophy lol
bmk#1476: i need to actually learn decision theory so i can figure out how this would play with real agents
AI_WAIFU#2844: The scalars can carry important information, if 10 of the world states are "you get x jellybeans" and one of them is "eternal maximal s-risk suffering", you can bet that I want the separation between the first 10 to be much smaller than between them and the last option.
bmk#1476: but imagine the implications of being able to define a scalar utility function that isn't completely broken given just a partial/total order over world states
bmk#1476: i mean, intuitively, yeah
bmk#1476: but the thing is if this doesnt actually change the behavior of an agent using xyz decision theory it could work
AI_WAIFU#2844: I think a more useful direction to go in is that there are a lot of situations where the exact scalars don't matter but the ordering does.
AI_WAIFU#2844: If you have total control the scalars don't matter. |
bmk#1476: that's what i'm saying
Daj#7482: I also think ordering is way more important than scalars btw
AI_WAIFU#2844: ^
bmk#1476: is it possible to build a reasonable enough utility function that doesnt break xyz DT given only an ordering
bmk#1476: my first reaction was "of course not"
bmk#1476: my second reaction was "unless.."
AI_WAIFU#2844: I think it's a function of the power of the agent, we distort the reward function in RL every time we do reward shaping, but it works anyway because the agents are good enough to find an acceptable local maximum.
AI_WAIFU#2844: The more godlike your AI -> the less the exact scalar values matter.
Daj#7482: That's a good framing, I like it
bmk#1476: i think being able to formalize that mathematically would be valuable, if nobody has done so yet
Daj#7482: Just do it anyways
Daj#7482: If someone else has done it you still learned a ton and are now able to have a conversation about it
AI_WAIFU#2844: Do it. There are also existing power formalisms out there you can build off of.
Daj#7482: This is why the publication mindset is toxic
Daj#7482: It trains people to not actually _try_
bmk#1476: first i need to spend a few months reading up on decision theory, haha
Daj#7482: Time well spent
bmk#1476: ~~the left adjoint functor from the category of posets to the category of utility functions~~
zphang#7252: and economics!
AI_WAIFU#2844: The other thing you can do is if you have uncertainty about the ordering, you can still get some useful information out if the orderings have some degree of commonality. Most people agree that pain and suffering is generally bad. |
Daj#7482: For the record: I think formalizing suffering may be the highest impact thing we as a species could accomplish for alignment
AI_WAIFU#2844: So you can still get actionables even if your information is incomplete
bmk#1476: also another advantage of doing partial orderings vs utility functions: each person defines a partial ordering, and you can then smush them all together to get a weird directed multigraph
AI_WAIFU#2844: formalizing "smush them all together" is another productive avenue
bmk#1476: i feel like this might be better than just blindly adding utility functions since you actually have more info to work with
AI_WAIFU#2844: see "voting theory"
bmk#1476: oh yeah, arrow's thm and stuff
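For concreteness, one standard way to "smush" individual orderings into the directed multigraph bmk describes: count, for each pair of outcomes, how many voters prefer one to the other. This is just pairwise (Condorcet-style) aggregation over honest rankings; Arrow's theorem applies to this whole family of rules, so treat it as an illustration rather than a fix.
```python
from collections import Counter
from itertools import combinations

def pairwise_aggregate(rankings):
    """Combine honest preference orderings into pairwise edge counts.

    rankings : list of rankings, each a list from most to least preferred
               over the same set of outcomes.
    Returns a Counter mapping (x, y) -> number of voters preferring x to y,
    i.e. the edge multiplicities of the directed multigraph.
    """
    candidates = sorted({s for r in rankings for s in r})
    edges = Counter()
    for ranking in rankings:
        pos = {s: i for i, s in enumerate(ranking)}
        for x, y in combinations(candidates, 2):
            if pos[x] < pos[y]:
                edges[(x, y)] += 1
            else:
                edges[(y, x)] += 1
    return edges

voters = [["a", "b", "c"], ["b", "a", "c"], ["c", "a", "b"]]
print(pairwise_aggregate(voters))
# e.g. ('a', 'b'): 2 means two of the three voters prefer a over b
```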
Daj#7482: > also another advantage of doing partial orderings vs utility functions: each person defines a partial ordering, and you can then smush them all together to get a weird directed multigraph
@bmk MULTIGRAPH?
_Stephen Wolfram has entered the chat_
AI_WAIFU#2844: the nice thing though is you can assume you have access to real preferences, rather than strategic BS
AI_WAIFU#2844: you can't hide your strategic voting from an FMRI
Daj#7482: > you can't hide your strategic voting from an FMRI
@AI_WAIFU Not sure if this is true with current fMRI tech lol
bmk#1476: is there a weakened arrow that concerns combining preferences of honest voters?
bmk#1476: i.e no strategic stuff
AI_WAIFU#2844: Give it time... Worst case I use neuralink-3
Daj#7482: Big problem with using brain states as ground truth for utility functions: Mind crime
Daj#7482: Evaluating such a function might _itself_ be instantiating a suffering entity
Daj#7482: Just to check whether something is bad or not |
AI_WAIFU#2844: ok i gotta go
Daj#7482: See ya! Always fun to talk
bmk#1476: cya
Louis#0144: weird feeling when microsoft excel is literally the biggest piece of software i have on my computer by a long shot
Louis#0144: 🤷♂️
chirp#4545: me just now on my side project:
> wow, my model outputs really suck. i'm gonna look at my data really carefully and see what's going wrong, and think about how the errors could be fixed
> but that will take a long time. i wonder what will happen if i just make my model bigger
> wow, the results are so much better. i guess sometimes STACK MORE LAYERS really does work https://cdn.discordapp.com/attachments/729741769738158194/764990519910596638/n9fgba8b0qr01.png
gwern#1782: you might say that when it comes to allocating your time and effort in making a better lossage, you just learned a... bitter lesson?
FractalCycle#0001: for whoever's keeping track, i have yet another (rephrased?) question for the sama meetup:
"What, if anything, can someone do to further A.I. alignment, relative to what OpenAI is doing?"
zphang#7252: I had a silly thought last night.
What if all the text ever available, of all human language ever written/spoken, still wasn't enough to train a human-level/significantly better LM/AI?
(With the assumption that the amount of text being generated in the present also doesn't shift the needle.)
bmk#1476: then we just toss images in the mix
bmk#1476: there are a shitload of images out there |
bmk#1476: and if we ever need to collect more images it's quite easy
bmk#1476: just get one of them streetview cars and drive around for a few months
bmk#1476: that would actually be an incredibly fun job, streetview car driving
gwern#1782: @zphang the data scaling law so far is sublinear, and I'd argue you could easily scrape up 1000x more text data, even with cleaning, if you went global. and if you couldn't, just wait a year or two, and social media will make it so.
gwern#1782: (think about how much text every person in china must write per day, for example. or look at AI Dungeon user activity - suppose you take the top 1% of AID transcripts by quality, that's surely human-level, so you can feed that back in...)
cfoster0#4356: I know this isn't the kind of answer you're looking for, but if "all the text ever available, of all human language ever written/spoken, still wasn't enough to train a human-level/significantly better LM/AI", then it *might* be time to invest in a better learning strategy
Ken#8338: @zphang I was wondering the same question a month or so ago. One idea I came up with: if we ran voice-to-text on even a reasonable percentage of the average 16,000 words a day that each of the world's billions of people speaks, it wouldn't take too long to meet the data requirements I had extrapolated for multiple-quadrillion-parameter models.
gwern#1782: @cfoster0 well, the worst-case outcome is we train something which is not AGI but 'merely' orders of magnitude better than GPT-3 ^_^
Noa Nabeshima#0290: What things in RL/DRL seem intuitively reasonable (without proof) and actually suck?
bmk#1476: sigmoids seem like a nice idea as an activation function but it sucks
bmk#1476: it ticks all the boxes too
bmk#1476: seems biologically plausible
bmk#1476: very mathematically "nice"
bmk#1476: complete garbage in practice
Noa Nabeshima#0290: thinking model-based RL (up until recently), bayesian methods, learned curriculum, maybe statistical learning theory stuff, generalization from simulation to real life in robotics
I think these are bad examples and I'm looking for good ones
kindiana#1016: a lot of those don't suck _that_ much lol
Noa Nabeshima#0290: yeah, I know
kindiana#1016: I think that question is pretty hard to answer because most people get their intuitions from reading results of papers lol |
Noa Nabeshima#0290: Yes, that seems like a big problem.
Noa Nabeshima#0290: Paul Christiano's iterated amplification proposal seems to rely on lots of things that seem like they might intuitively work, so this seems like a good thing to check.
kindiana#1016: would be an interesting experiment for one of the big ML conferences to run, have people predict the results of a paper from the method and see what the R^2 is
Noa Nabeshima#0290: but even still you're selecting for methods that people are publishing about
Noa Nabeshima#0290: what about novel methods that people might not publish about?
Noa Nabeshima#0290: a probably bad idea is to just come up with ideas that seem in a similar reference class to Paul's methods and try them?
StellaAthena#3530: > What things in RL/DRL seem intuitively reasonable (without proof) and actually suck?
@Noa Nabeshima virtually anything you can say about activation functions
StellaAthena#3530: Fourier convolutional neural networks, at least in the naive way of implementing them
bmk#1476: S p i k i n g
bmk#1476: NN s
StellaAthena#3530: There’s a clever idea you can have about using the Convolution Theorem to speed up CNN computations, because F(f • g) = F(f) x F(g) where • is convolution, x is the product, and F is the Fourier transform
bmk#1476: SeqGAN is amazing in theory but according to multiple professors I know, nobody has ever managed to make it work
StellaAthena#3530: Unfortunately this speeds up convolutions at the cost of slowing down activation functions, as it turns products into convolutions!
StellaAthena#3530: Abstractly you could pull out of your ass an activation function that works in Fourier space, but nobody has figured out how yet
bmk#1476: (I'm still salty about the time I've spent on SeqGAN)
StellaAthena#3530: (The same ideas give rise to spectral pooling, which is useful)
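(For reference, a minimal NumPy sketch of the convolution theorem being discussed here — a circular convolution becomes an elementwise product in Fourier space, which is the speedup a Fourier CNN layer would exploit; this is just an illustration, not anyone's actual implementation:)
```python
import numpy as np

rng = np.random.default_rng(0)
f, g = rng.standard_normal(8), rng.standard_normal(8)

# Direct circular convolution
direct = np.array([sum(f[j] * g[(i - j) % 8] for j in range(8)) for i in range(8)])

# Fourier-space version: elementwise product of spectra, then inverse FFT
spectral = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

assert np.allclose(direct, spectral)
```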
Noa Nabeshima#0290: @StellaAthena Where do I find the theoretical ML people to talk to?
Noa Nabeshima#0290: EG if I want to talk about the Neural Tangent Kernel, or ask questions about the paper, where do I go?
Noa Nabeshima#0290: My first concrete question is: what is \| \cdot \|_{p^{in}}?
but I expect to have more https://cdn.discordapp.com/attachments/729741769738158194/766048766402756628/unknown.png
Noa Nabeshima#0290: https://arxiv.org/pdf/1806.07572.pdf
StellaAthena#3530: @Noa Nabeshima the \cdot is a placeholder symbol for the input of an implicitly defined function.
StellaAthena#3530: the kernel K is positive definite with respect to g(x) = \|x\| if g(f) > 0 ==> \|f\|_K > 0
Noa Nabeshima#0290: But what is \|f\|_{p^{in}}?
Noa Nabeshima#0290: I understand the rest, just not what this norm is
StellaAthena#3530: ah
StellaAthena#3530: The bottom of page 2
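(For readers without the paper open: the definition given there is roughly the following, quoted from memory, so double-check against the paper itself — p^{in} is the input distribution and the norm is the one induced by this bilinear form:)
```latex
\langle f, g \rangle_{p^{\mathrm{in}}} = \mathbb{E}_{x \sim p^{\mathrm{in}}}\!\left[ f(x)^{\top} g(x) \right],
\qquad
\|f\|_{p^{\mathrm{in}}} = \sqrt{\langle f, f \rangle_{p^{\mathrm{in}}}}
```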
Noa Nabeshima#0290: oh wow
Noa Nabeshima#0290: embarrassed
Noa Nabeshima#0290: thank you
StellaAthena#3530: No worries 🙂
StellaAthena#3530: Reading is hard. And OP.
StellaAthena#3530: I am always down to chat about ML theory.
Noa Nabeshima#0290: awesome
Noa Nabeshima#0290: I am going to want to do that
StellaAthena#3530: This discord channel is about ML and Security, but has a number of mathematicians and a lot of ML theory chatter: https://discord.gg/AjdqRu
StellaAthena#3530: When I get home I can also share a link to a Slack channel about TDA in ML too
StellaAthena#3530: But sliding into my DMs is a good way to start 😉 |
Deleted User#0000: for the NTK i find the presentation in Jascha et al's paper easier to read
Deleted User#0000: also i made some notes for the Jacot paper some tiem ago -- maybe they are helpful https://github.com/damaru2/ntk/blob/master/notes/Neural_Tangent_kernels___Jacot_et_al.pdf
Deleted User#0000: im also down to ml theory chat
StellaAthena#3530: Here’s the invite link for the TDA in ML slack: Let’s give Slack a try -- it’s a simple, focused messaging app for teams that work together. Sign up here: https://join.slack.com/t/tda-in-ml/shared_invite/zt-brm7ypv4-Br0vXGge8wUoaSmgp~JTGA
bmk#1476: This is the whole finding whether the data has a hole thing right?
bmk#1476: (in reference to TDA)
StellaAthena#3530: Reductively, yes.
spirit-from-germany#1488: Interesting interview about AGI & GPT-3 🙂 https://youtu.be/16xwYrudT_c
gwern#1782: how can it be interesting when it's goertzel?
bmk#1476: man, y'all *really* don't like goertzel
cc_#1010: dear diary: today we narrowly avoided kessler syndrome, which is good
spirit-from-germany#1488: Hey, Ben is a freaky Hippie who had done a lot of controversial PR stunts.... But he's also pretty smart and kind 😄 😉
gwern#1782: as I think I've ranted before, I've been reading Ben's pontifications on AI since 2004 on SL4 etc, and my opinion of him has never increased, only decreased, with time, and I do not forecast any increases...
bmk#1476: On the topic of SL4, I always feel like I missed some kind of golden age of the whole rationalism/transhumanism thing
bmk#1476: It feels like all the rationalist/transhumanist people retreated into their caves sometime in the mid 2010s and nowadays it's really hard to find any such significantly sized communities
Daj#7482: I sometimes have that feeling too @bmk but I'm almost 100% sure some young kid will say the same thing about Eleuther in 10 years
Daj#7482: Well, probably not Eleuther specifically, but whatever within the reference class of Eleuther that does something successful
Daj#7482: I think SL4 is a survivorship bias thing. We remember it because tons of cool things emerged from it, while forgetting the countless other groups that led to nothing
bmk#1476: Fair
bmk#1476: I guess we just need to make sure Eleuther does cool things, then |
bmk#1476: Also lots of cool things emerged from LW which is also well past its prime at this point too
gwern#1782: (and people on SL4 missed the golden age of Extropy ML)
AI_WAIFU#2844: Are there any archives of Extropy ML? Back when I was young I read SL4, but I was never able to find the "extropians" that they referred to.
alstroemeria313#1694: is this it? http://lists.extropy.org/pipermail/extropy-chat/
alstroemeria313#1694: not old enough, probably
alstroemeria313#1694: how about this? http://www.lucifer.com/exi-lists/
AI_WAIFU#2844: That looks right, I wouldn't know for sure but I guess I just found my weekend reading.
bmk#1476: where do all the rationalists/transhumanists hang out these days?
bmk#1476: LW is nowhere near as popular as it was before
AI_WAIFU#2844: Is it? I think LW is still the big online rationalist hub. IMO transhumanism seems to be kinda dead ATM. It's now less of a bunch of people online and more engineers trying to convince the FDA to let them do their thing.
I think the bigger issue for LW is that it lost its feedstock. I think LW got big and popular during the mass conversion of people away from religion during the early internet. EY, Lukeprog, and many others, myself included, fit that bill. After you find out a good fraction of the human race is insane, including all your family, LW-style rationalism is not that big a step.
AI_WAIFU#2844: I don't think there's an equivalent to that nowadays that can as easily bridge the gap between the normie memeplex and rationalism.
bmk#1476: ah
bmk#1476: i mean, i don't know, i found a bridge to rationalism somehow
bmk#1476: also i would think of myself as at least mildly transhumanist
bmk#1476: and i cant be the only one left
AI_WAIFU#2844: I mean you can still find it, but I definitely feel like it's harder. I haven't seen a link to LW outside of the immediate ratsphere in a long time.
AI_WAIFU#2844: Like, Elon Musk is doing neuralink.
AI_WAIFU#2844: My point is that it's become less a political movement and more people buckling down and doing engineering |
bmk#1476: ~~also he's said the phrase *less wrong* on multiple occasions~~
bmk#1476: oh
bmk#1476: that's a good thing then
bmk#1476: still, having it as a political force isn't a bad thing
bmk#1476: i wonder how many people in this server would count as vaguely transhumanist
AI_WAIFU#2844: Yup. The other thing is that transhuman ideas are getting less weird. We've got all sorts of biotech implants coming down the regulatory pipeline.
bmk#1476: that's *really* good
bmk#1476: now we just need to make the ideas weirder
bmk#1476: before anyone even realizes it, bam, we've shifted the overton window miles over
AI_WAIFU#2844: Honestly, whether they were responsible for it or not, the rationalist/transhumanist movement has done an excellent job of normalizing and disseminating its most important ideas.
AI_WAIFU#2844: AI Safety being arguably the biggest success.
bmk#1476: rob miles is amazing in that regard
bmk#1476: his videos brought me into ai safety, actually
AI_WAIFU#2844: nice, there's a whole bunch of things that were put out. Bostroms book being arguably the most influential.
bmk#1476: superintelligence?
AI_WAIFU#2844: Yup, I think that's what got musk, which led to OpenAI
bmk#1476: i might add that to my reading list, is there anything in it that i wouldn't get from general ratsphere exposure
bmk#1476: i feel like i get the general gist of this cluster of ideas, if nothing about the technical solutions
AI_WAIFU#2844: I honestly haven't read it myself, but I feel like it's got the same problem as many ratsphere books. It's a summary that quickly becomes outdated.
bmk#1476: ah, ok |
bmk#1476: probably wont be reading it then
AI_WAIFU#2844: IMO it's better to just keep up with the alignment forum/do your own research.
bmk#1476: good idea
bmk#1476: ~~i still need to read the af sequences dammit~~
AI_WAIFU#2844: ~~don't forget the comments~~
bmk#1476: ~~***EEEEEEEEEEE***~~
AI_WAIFU#2844: Ok I need sleep
bmk#1476: ok cya
thenightocean#6100: "Currently there are three exponential trends acting upon AI performance, these being Algorithmic Improvements, Increasing Budgets and Hardware Improvements. I have given an overview of these trends and extrapolated a lower and upper bound for their increases out to 2030. These extrapolated increases are then combined to get the total multiplier of equivalent compute that frontier 2030 models may have over their 2020 counterparts. " https://www.lesswrong.com/posts/QWuegBA9kGBv3xBFy/the-colliding-exponentials-of-ai
Deleted User#0000: about the transhumanism stuff above, most of my friends are transhumanists. Here in Oxford there is a fair amount of people interested in it
Deleted User#0000: weve had student societies about transhumanism, longevity, rationality, etc
Deleted User#0000: (also interestingly most furries ive met in VR identify themselves as transhumanists)
StellaAthena#3530: > "Currently there are three exponential trends acting upon AI performance, these being Algorithmic Improvements, Increasing Budgets and Hardware Improvements. I have given an overview of these trends and extrapolated a lower and upper bound for their increases out to 2030. These extrapolated increases are then combined to get the total multiplier of equivalent compute that frontier 2030 models may have over their 2020 counterparts. " https://www.lesswrong.com/posts/QWuegBA9kGBv3xBFy/the-colliding-exponentials-of-ai
@thenightocean taking bets on how off these projections are.
Daj#7482: > (also interestingly most furries ive met in VR identify themselves as transhumanists)
@Deleted User This will never not be funny to me. It makes perfect sense, which is why it's funny. Never forget that one time furries interviewed Anders Sandberg and gave him a fursona lol
Deleted User#0000: whaaaat
Deleted User#0000: when was that i need to know
Daj#7482: Haha you haven't seen?
https://youtu.be/5XKTdx5BWic |
Daj#7482: This is why normies don't associate with us lol
Deleted User#0000: i actually told Anders about furry GANs before arfa did it, and he loved the idea xD
and then he posted some tweets defending furries too xD
Deleted User#0000: > Haha you haven't seen?
> https://youtu.be/5XKTdx5BWic
@Daj no i havent seen it, but now defo will
Daj#7482: I love Anders so much, such a sweetheart
Deleted User#0000: me too
thenightocean#6100: he is!
Deleted User#0000: i am close to convincing him to get VR xD
Daj#7482: Hahaha
Daj#7482: FHI twitch channel when?
Deleted User#0000: xD sooon
Deleted User#0000: i hopee
thenightocean#6100: I got so lucky to get randomly placed in the same virtual room as him after his SSC talk. Had almost 30min time to pick his brain
Daj#7482: The funniest thing about you not knowing the Anders interview means that there are _at least two completely seperate transhumanist furry groups at Oxford alone_
thenightocean#6100: talked about everything from Infohazards to weather in Sweden
Daj#7482: Nice, I didn't get to speak to him that time
Deleted User#0000: hmmm well need to have a look at the tak maybe theres some overlap
Deleted User#0000: curious noww |
Deleted User#0000: ah didnt know he gave a talk with virtual rooms after
Deleted User#0000: but yeah conversations with him are pretty awesome
Daj#7482: I talked to him once or twice then he never answered my follow up emails haha
Daj#7482: (this is totally fine, I know how professors are haha)
Deleted User#0000: ah no i didnt know this Danfox guy
Deleted User#0000: > The funniest thing about you not knowing the Anders interview means that there are _at least two completely seperate transhumanist furry groups at Oxford alone_
@Daj so u are right! owo
Daj#7482: That's hilarious
Daj#7482: Oxford seems to be a wild town lol
Sid#2121: > The funniest thing about you not knowing the Anders interview means that there are _at least two completely seperate transhumanist furry groups at Oxford alone_
@Daj I am certain there are more than just two. Source: grew up in oxford
Daj#7482: Shit I'm really missing out on the party it seems
bmk#1476: Damn, I'm really missing out too, I don't think there are any transhumanist groups up here, let alone transhumanist furry groups
bmk#1476: Also, a (semi)relevant meme from the fine folks over at SSCD: https://cdn.discordapp.com/attachments/729741769738158194/767070718168268850/the_great_furry_porn_saga_2.png
Daj#7482: This is why no one hangs out with the rationalists
bmk#1476: Haha
Daj#7482: It's like all the nice and smart people from 4chan emigrated but couldn't leave their autism behind
bmk#1476: :yes:
Daj#7482: Stuff like this happened all the time back when I hung out on 4chan IRC servers
Daj#7482: Or worse |
bmk#1476: I'm not surprised
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/767073521628479498/28c.png
Deleted User#0000: > Damn, I'm really missing out too, I don't think there are any transhumanist groups up here, let alone transhumanist furry groups
@bmk lets make a furry transhumanist group in VR, and then we can belong to it wherever we are ^^
bmk#1476: haha
bmk#1476: i'm not a furry but i'm a transhumanist so close enough
Daj#7482: > i'm not a furry but i'm a transhumanist so close enough
@bmk Newest hot take
bmk#1476: the semantic distance between "transhumanist" and "transhumanist furry" is less than the distance between "furry" and "transhumanist furry"
Daj#7482: I'm gonna tell my kids this was SL4 https://cdn.discordapp.com/attachments/729741769738158194/767082229418819594/fa_logo_20191231.png
Daj#7482: Also fucking rip my search history looking for that logo jfk
bmk#1476: am i supposed to recognize this logo
Daj#7482: Furaffinity, biggest furry site
bmk#1476: ah
thenightocean#6100: Speaking of furries. A random fact I recently learned: worlds best competitive player of Mortal Kombat and many other fighting games is a a furry. https://en.wikipedia.org/wiki/SonicFox
Daj#7482: Oh that guy. Even my sister heard about him putting on the costume on stage and asked me very confusedly what this guy was doing on her timeline and what a furry is lol
Daj#7482: My conspiracy theory is that furry and other weird "niche" sex stuff is _much_ more common than people think (Hentai is practically mainstream but tbh it's as weird if not weirder than furry really)
bmk#1476: i take issue with the usage of the word hentai in english, it's a perversion (pun not intended) of the original meaning of the word in japanese
Daj#7482: Hentai weirdo detected
Daj#7482: :^) |
Daj#7482: My theory moreover is that this stuff is super common, but only people with autism have the lack of social skills to admit to it publicly, so it becomes associated with internet/spectrum culture
Daj#7482: (to be clear: this is shitposting, not an actual argument)
Daj#7482: I remember what shocked me most about the BDSM community is how much it's made of normies
bmk#1476: > Hentai weirdo detected
@Daj ironically this is closer to the actual meaning of the word in japanese than the other things
Daj#7482: Wait really?
Daj#7482: I actually have no idea what the word means
Daj#7482: I just use it to describe creepy japanese porn
bmk#1476: that's what i was saying
bmk#1476: > i take issue with the usage of the word hentai in english, it's a perversion (pun not intended) of the original meaning of the word in japanese
Daj#7482: So what does it actually mean?
bmk#1476: it means "perversion" minus the sexual connotation
Daj#7482: Huh, that's not how my singaporean friend explained it
Daj#7482: But tbh then I'm using it kinda correct. The perverse stuff is my problem, not the sexual stuff
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/767092226278162432/unknown.png
bmk#1476: this is what i mean when i say perversion
Daj#7482: I know
bmk#1476: this got corrupted by english to refer to a much more specific concept
Daj#7482: So you could say
Daj#7482: It's a hentai of hentai? |
bmk#1476: unfortunately, yes
zphang#7252: it's more deviant than perversion, imo
but it's also a genre marker, and once a word become that it takes on a life of its own (like word "fantasy")
bmk#1476: i chose "perversion" because it sort of has a bit more of a connotation that it's "against the natural order of things" than "deviant", so to speak
bmk#1476: though the two words are basically synonyms
Deleted User#0000: Any one send me materials to build product from gpt3
3dprint_the_world#6486: I've known a few asian women and they were all fans of hentai. They just don't admit it publicly.
3dprint_the_world#6486: That's just my anecdotal experience though.
Noa Nabeshima#0290: > @ JPrester#6100 taking bets on how off these projections are.
@StellaAthena I'm down to make bets on this
gwern#1782: @AI_WAIFU https://twitter.com/gwern/status/968903939063209985
bmk#1476: idk, the majority of people i know irl would feel at least slightly uneasy about most ideals i associate with transhumanism
gwern#1782: the kine do not matter
bmk#1476: i.e in ascending order of uneasiness: brain augmentation (neuralink style), life extension, uploads
bmk#1476: i mean, they're a more representative sample of normies (i.e the voting population) than our bubble
gwern#1782: anyway, the whole furry/tech thing is one of the open questions on my https://www.gwern.net/Questions and the autism correlation seems parsimonious: just broken brains leading to breakage in key human capabilities like reproduction. evolution will fix it eventually, but by the time it can catch up, we'll have transcended, mwahaha
Noa Nabeshima#0290: Hi! I'd appreciate it if you took ~5 minutes to rank some GPT-3 summaries here: http://165.227.51.22:5000/
Noa Nabeshima#0290: just rank as many summaries as you want, you don't need to go to the end
Daj#7482: > anyway, the whole furry/tech thing is one of the open questions on my https://www.gwern.net/Questions and the autism correlation seems parsimonious: just broken brains leading to breakage in key human capabilities like reproduction. evolution will fix it eventually, but by the time it can catch up, we'll have transcended, mwahaha
@gwern Haha, I'm glad more people than me are perplexed by this _very important_ question |
Daj#7482: (btw I remember reading somewhere a statistic that the average age of furries is 19, which explains everything about their culture tbh)
gwern#1782: well, autism rates keep going up over time...
Daj#7482: > well, autism rates keep going up over time...
@gwern Do you actually believe this is true? I've heard lots of arguments both ways, whether it's rising or it's just more people being classified as autistic vs other mental handicaps
gwern#1782: I dunno. but it being more culturally visible / diagnosed is not actually in contradiction with it being why autism-correlated subcultures are expanding so rapidly
gwern#1782: (there may be furry susceptibility but without proper environmental exposures to furry stuff, it may never turn into a genuine paraphilia)
Daj#7482: Ah I interpreted you saying "rates" as actual biological rates. It's been one of those things in the back of my mind I would like answers to ( ~~right next to whether they're turning the fricking frogs gay~~ )
Daj#7482: (good take on the gay frogs I recently saw for who's interested: https://youtu.be/i5uSbp0YDhc )
researcher2#9294: Could be a good way to spend 30 minutes
researcher2#9294: Alex Jones yelling "Wake Up"... yep looks highly valuable
Louis#0144: that video is amazing
Daj#7482: The fine art of the crank is to mix some outrageous but true things with your outrageous and false claims. I think the atrazine story is totally plausible, but for example the hologram bullshit was even debunked by captain D
https://youtu.be/Xmrn2IuSW-Q
miguelos#7956: What would you do if the data collection problem was solved, and you had a real-time high-resolution model of the world (the location of every atom and the thought of every person)? What would become possible to build on top of it, given existing machine learning techniques?
StellaAthena#3530: I would destroy it
miguelos#7956: I have a ton of personal data (every keystroke, 24/7 audio recording, every app/url/media consumed, detailed journal, picture of all food/supplement/medicine consumed, precise 24/7 indoor/outdoor location, etc). I'm looking for project ideas of what I could do with it.
StellaAthena#3530: Actually, first I would rejoice about the fact that interplanetary travel was solved because you couldn’t build such a device on earth.
miguelos#7956: The thought experiment isn't meant to be realistic, but meant to evoke ideas of what could be useful given access to more information.
StellaAthena#3530: That’s a *very* different question. “Horizontal” data sets where you have the same data point on many people is much more useful than “vertical” data sets where you have a lot of data about one person.
miguelos#7956: I don't see many people/articles/papers discussing what will be done once we have a lot more data about the world. |
StellaAthena#3530: This doesn’t seem like a necessary fact, but rather a contingent one about the profitable ventures of major companies.
StellaAthena#3530: @miguelos that’s because phrases like “de facto surveillance state” and “make billions by freeloading on work done by non-employee people” are not popular things to say aloud.
miguelos#7956: Correct, I can't build a model that generalizes across people from just my own data. but I imagine I can still build something? I have millions of data points.
Isaac McHorse#2007: are you for real
miguelos#7956: @Isaac McHorse What do you mean?
StellaAthena#3530: Isaac is a bot
miguelos#7956: I see.
miguelos#7956: So, what becomes possible once we have a lot of data?
StellaAthena#3530: > Correct, I can't build a model that generalizes across people from just my own data. but I imagine I can still build something? I have millions of data points.
@miguelos Very likely. However relatively little research has been done in this “vertical” model.
miguelos#7956: Perhaps my data is completely useless.
miguelos#7956: Is there a canonical term for that concept?
StellaAthena#3530: No, I invented it 30 seconds ago, drawing inspiration from the way we talk about monopolies.
miguelos#7956: Other than "vertical model" or "N of 1"?
miguelos#7956: Have you heard of any paper/project about that? One place to start would facilitate things.
miguelos#7956: But I didn't realize that this so-called "vertical model" was an important constraint of my data. Thanks for the insight.
miguelos#7956: Now I wonder if my data is useful at all. It's now clear that I can't, for example, discover my preference for unwatched movies by learning from the preference of others, because I don't have a model of movie preference by other people. There's a whole lot of problem classes that are out of reach.
miguelos#7956: I guess I'll just go through all SOTA papers and look for inspiration. I don't know many applications of ML outside of the usual next word prediction, image classification, speech recognition/synthesis, sentiment analysis, etc.
miguelos#7956: I wish I could use my data to predict my next steps/activities, but I don't see what it would predict other than going to sleep around the same time, waking up around 8 hours later, and taking a shower within a few minutes of a workout.
miguelos#7956: I could build a language model using my text and audio-recording history, but how useful would that be? |
miguelos#7956: I could analyze how food affects my productivity/mood/energy/symptoms.
miguelos#7956: I'm not sure what aspects of a human are static and can be definitely learned. For example, I might have enough data to model my digestive system or my sleep requirements, but these might change over time due to external factors and the models will become out of date.
miguelos#7956: Perhaps vocabulary of speech doesn't change too much over time, and a language model or speech synthesis/recognition model could have lasting value.
miguelos#7956: Data about myself can only be used to model myself, while data about allselves can be used to model humans.
miguelos#7956: I'm now really interested in research showing models that are fairly constant across all humans (where a N of 1 data can be useful everywhere), and models that are consistent in a single person across time (where a trained model of 20-year-old person A is useful for 40-year-old person A).
Ken#8338: Neural Scaling Laws and GPT-3
Jared Kaplan, Johns Hopkins University, 12:00 EDT
Abstract: A variety of recent works suggest that scaling laws are ubiquitous in machine learning. In particular, neural network performance obeys scaling laws with respect to the number of parameters, dataset size, and the training compute budget. I will explain these scaling laws, and argue that they are both precise and highly universal. Then I will explain how this way of thinking about machine learning led to the GPT-3 language model, and what it suggests for the future. http://physicsmeetsml.org/posts/sem_2020_10_21/
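(If you want to play with the kind of power-law fit the abstract describes, here is a minimal sketch with made-up loss numbers — not data from the talk or the paper; a power law is just a straight line in log-log space:)
```python
import numpy as np

# Hypothetical (made-up) losses at a few model sizes, purely illustrative.
n = np.array([1e6, 1e7, 1e8, 1e9, 1e10])
loss = np.array([4.5, 3.9, 3.4, 3.0, 2.6])

# L(N) = (N_c / N)^alpha  =>  log L = alpha*log(N_c) - alpha*log(N),
# so fitting it is linear regression on the logs.
slope, intercept = np.polyfit(np.log(n), np.log(loss), 1)
alpha = -slope
n_c = np.exp(intercept / alpha)
print(f"alpha ~ {alpha:.3f}, N_c ~ {n_c:.3g}")
```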
bmk#1476: > are you for real
@Isaac McHorse perfect timing
bmk#1476: https://discordapp.com/channels/729741769192767510/747850033994662000/766779836748005407 this one was perfect too
cfoster0#4356: @Ken if there's a Q&A I'd love to hear if they've got a theory on why larger models are more sample-efficient
Ken#8338: I am also curious.
gwern#1782: I hate talks but maybe I should listen to this
3dprint_the_world#6486: > evolution will fix it eventually, but by the time it can catch up, we'll have transcended, mwahaha
@gwern can you really assume that though. For example evolution hasn't "fixed' homosexuality yet; it seems to have always existed in human society at about the same level it exists now.
3dprint_the_world#6486: So it seems rational to conclude that homosexuality actually serves some evolutionary purpose. What that purpose is, is more speculative.
miguelos#7956: Many have speculated on that, with seemingly reasonable arguments.
3dprint_the_world#6486: and so then it doesn't seem that much of a jump to me to conclude that autism also has evolutionary benefits, or is at least a side effect of some other feature that has evolutionary benefits.
miguelos#7956: Well, the world needs hairdressers and programmers. |
3dprint_the_world#6486: hah. But joking aside, benefits to the group aren't usually that much of an evolutionary forcing function. Benefits to one's immediate siblings are far more important.
miguelos#7956: Then wouldn't rates of autism and homosexuality rise to appear in every family?
3dprint_the_world#6486: well it would reach an equilibrium.
3dprint_the_world#6486: obviously everyone can't be gay.
3dprint_the_world#6486: One really common speculation is that there is some 'autism gene' that is beneficial in general and doesn't actually result in socially debilitating problems, but when it occurs in combination with some other genes, it does.
miguelos#7956: Right. And evolution might not be able to have the rest of the family as context for producing homosexual/autistic/else offsprings.
3dprint_the_world#6486: but since on average that gene causes better reproductive success, it persists in the population.
bmk#1476: what if it's a relatively recent phenomenon, and/or its not evolutionarily disadvantageous enough
miguelos#7956: Another explanation is vaccines.
3dprint_the_world#6486: lol
StellaAthena#3530: > what if it's a relatively recent phenomenon, and/or its not evolutionarily disadvantageous enough
@bmk Autism or Homosexuality?
bmk#1476: both
bmk#1476: idk i'm just guessing
miguelos#7956: It seems to me like more people these days have autistic tendencies. Not that I have first-hand experience of the past.
3dprint_the_world#6486: @bmk perhaps, I just don't think we have much evidence either way, since almost no historical cultures properly diagnosed autism
miguelos#7956: I'm not the first to self-proclaim to be autistic.
bmk#1476: to be fair, the cumulative number of vaccines administered worldwide correlates *shockingly* well with cumulative number of autistic people
3dprint_the_world#6486: homosexuality otoh, most cultures with historical records documented it
StellaAthena#3530: We know that people have had sex with people of the same gender for as long as we have written record. |
3dprint_the_world#6486: exactly.
miguelos#7956: Who do you think wrote those records? Probably not normies.
StellaAthena#3530: Actually, our oldest records are *exceptionally* mundane
3dprint_the_world#6486: yeah
miguelos#7956: What's the purpose of classifying autism/homosexuality as evolutionary useful? I can't see an application outside of eugenics.
3dprint_the_world#6486: stuff like commercial transactions
3dprint_the_world#6486: @bmk so do many other factors like GDP per capita and quality of health services.
StellaAthena#3530: Most of our Akadian records are receipts, complaints about the quality of goods, random secretarial shit, "Joe was here"
3dprint_the_world#6486: it's plausible that during most of history, autism was just lumped in along with general learning disabilities and so on
bmk#1476: > @bmk so do many other factors like GDP per capita and quality of health services.
@3dprint_the_world read closer
StellaAthena#3530: > What's the purpose of classifying autism/homosexuality as evolutionary useful? I can't see an application outside of eugenics.
@miguelos It's easy enough to invent one, but I don't think any of us actually know what the purpose *really is*.
StellaAthena#3530: If you let me start the sentence with "maybe it's the case that..." I can justify just about anything on evolutionary grounds.
3dprint_the_world#6486: oh it's just a response to people who say homosexuality is 'new' and evolution 'selects against it'
StellaAthena#3530: (This is one of the central problems with evo psych)
StellaAthena#3530: (IMO it's suspiciously similar to 1700s conjectural histories)
3dprint_the_world#6486: which are incorrect arguments. It's not new and there's no evidence it's entirely selected against by evolution.
3dprint_the_world#6486: obviously either way it doesn't matter as to e.g. LGBTQ people having rights -- even if those were true, you'd still be a bad person for mistreating LGBTQ people
StellaAthena#3530: Yeah... the "born this way" meme has clearly been historically successful but ultimately it pisses me off. What, should LGBT people *not* have rights if they *weren't* born this way? |
bmk#1476: > @bmk so do many other factors like GDP per capita and quality of health services.
@3dprint_the_world also hold up did you just admit that *healthcare causes autism* (conspiracy theory intensifies)
3dprint_the_world#6486: exactly
3dprint_the_world#6486: lol
3dprint_the_world#6486: I guess I did
miguelos#7956: LGBTQ somehow managed to get some special privileges that other groups sadly don't receive.
3dprint_the_world#6486: not really.
StellaAthena#3530: What special privileges?
miguelos#7956: Autistic people, poor African people, and Trump supporters don't receive as much compassion.
3dprint_the_world#6486: lol
bmk#1476: we need a #culture_war
StellaAthena#3530: "compassion" is not a "special privilege"
StellaAthena#3530: It's basic human decency.
3dprint_the_world#6486: I don't see how "don't torture LGBTQ people into becoming normies" is a special privilege
miguelos#7956: LGBTQ guidelines make this "basic human decency" more widely applied.
miguelos#7956: (to them)
3dprint_the_world#6486: remember: there's still plenty of places in the world, some first world countries even, where gay conversion therapy is still practiced.
StellaAthena#3530: I don't understand what you're saying.
miguelos#7956: Lots of people are treated like shit by seemingly good people.
3dprint_the_world#6486: yes and that's wrong |
StellaAthena#3530: The US is one of those first-world countries
bmk#1476: @miguelos i think it would be best if you first list which "special privileges" you are referring to
miguelos#7956: Not being discriminated against is one of them.
StellaAthena#3530: https://en.wikipedia.org/wiki/List_of_U.S._jurisdictions_banning_conversion_therapy
bmk#1476: > Not being discriminated against is one of them.
@miguelos how is this not a privilege that should already be afforded to everyone?
StellaAthena#3530: > Not being discriminated against is one of them.
@miguelos lol what.
StellaAthena#3530: 1. This should apply to everyone
2. This isn't true
bmk#1476: actually before we go there
bmk#1476: define discrimination
bmk#1476: i feel like this could turn into definition chasing
3dprint_the_world#6486: ok I feel bad now for having started this. sorry everyone.
StellaAthena#3530: No don't
cfoster0#4356: Can we transition this to off topic?
StellaAthena#3530: Nothing wrong with it.
miguelos#7956: I don't discriminate against flamewar starters.
bmk#1476: #off-topic everyone
miguelos#7956: I'm good with ending this. |
StellaAthena#3530: Congrats?
guac#4716: what's the deciding factor in off-topic vs. general discussion?
StellaAthena#3530: Things move to off-topic when people get asked to kick a convo out of general
bmk#1476: Go to #off-topic to continue politics discussions or I will have to kick you from the server
bmk#1476: @miguelos
guac#4716: at me?
bmk#1476: I'm talking to @miguelos
Louis#0144: o boy politics in a STEM discord
Louis#0144: funfunfun
Louis#0144: (not rly)
bmk#1476: It's all good if it stays in #off-topic
thenightocean#6100: Maybe we need this? 😈 https://missionprotocol.org/
thenightocean#6100: But I prefer the old school version: https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer
3dprint_the_world#6486: @Louis I think we had a quite civil and nice discussion in the end.
Louis#0144: whats less wrong
Louis#0144: it looks familiar
asparagui#6391: start here : hpmor.com
StellaAthena#3530: “Politics is the mind-killer” is the single most useless, lazy, and moronic meme the rationalist community has ever seen
Nik McFly#3288: hey all. i'm looking for a tech+marketing (haha, if such a person even exists) expert to talk about gpt-3 code generation use-case in terms of tech description and business application. is there anyone to have a 15-min call about it? got a few questions so it would be definitely better&faster than asking here 🙂
bmk#1476: > “Politics is the mind-killer” is the single most useless, lazy, and moronic meme the rationalist community has ever seen
@StellaAthena imo politics isn't *necessarily* a mindkiller, but it very often becomes one
bmk#1476: My favorite semi rationalist thing relating to this is waitbutwhy's "story of us" series
bmk#1476: It really dissects (one perspective of) why this happens
3dprint_the_world#6486: Perhaps the causal relationship is the other way round. If you start with a group of rational good faith actors, and you set politics as the debate topic, you probably won't get any mind-killing. But if you start with a bunch of people who have already mind-killed themselves, the debate will probably soon become political.
bmk#1476: that's entirely likely
bmk#1476: so politics discussions can only happen in tight-knit rat circles, and only with strict rules on decorum
3dprint_the_world#6486: @Nik McFly are you interested in hiring? starting a business? something else?
Nik McFly#3288: > @Nik McFly are you interested in hiring? starting a business? something else?
@3dprint_the_world something else, haha. maybe both in the future. now trying to grasp specific things around the use case for code generation — to understand three main things: 1) overall problem for this use-case in terms of the global market 2) technical description of gpt-3 model, especially a flowchart 3) tech-stack
3dprint_the_world#6486: hmmm a flowchart for GPT-3 eh
3dprint_the_world#6486: lol
3dprint_the_world#6486: are you familiar with transformer models?
3dprint_the_world#6486: if not, and you are actually interested in a technical description of gpt-3, then you 100% first need to understand transformer models
Nik McFly#3288: > hmmm a flowchart for GPT-3 eh
@3dprint_the_world haha yes
kindiana#1016: https://raw.githubusercontent.com/javismiles/X-Ray-Transformer/master/images/xray-transformer-14000px-c.gif
Nik McFly#3288: > are you familiar with transformer models?
@3dprint_the_world there is a website right in my browser now:
|
https://github.com/huggingface/transformers
kindiana#1016: huh that image is too big to inline, but there is actually a decent flowchart for transformers `https://raw.githubusercontent.com/javismiles/X-Ray-Transformer/master/images/xray-transformer-14000px-c.gif`
3dprint_the_world#6486: that's a great infographic but pretty meaningless to someone who isn't already familiar with transformers
Nik McFly#3288: @kindiana thanks!
kindiana#1016: yeah I agree haha
Nik McFly#3288: I'm a fast-learner 🙂
3dprint_the_world#6486: maybe start with http://jalammar.github.io/illustrated-transformer/
3dprint_the_world#6486: although that article, as good as it is, only describes *what* transformers do, not *why*. Which is a far more complicated thing to understand.
3dprint_the_world#6486: but it's a start
cfoster0#4356: In terms of tech stack, transformers are made of embarrassingly-simple components, so they should be fairly portable to different stacks. But at GPT-3 scale you're talking CUDA running on NVIDIA GPUs, with some kind of numerical library such as PyTorch on top, and probably a model-serving setup if you're doing it in production
bmk#1476: GPT3

ML complexity: 1%
Engineering complexity: 99%
3dprint_the_world#6486: > In terms of tech stack, transformers are made of embarrassingly-simple components, so they should be fairly portable to different stacks. But at GPT-3 scale you're talking CUDA running on NVIDIA GPUs, with some kind of numerical library such as PyTorch on top, and probably a model-serving setup if you're doing it in production
@cfoster0 lol that's not even at GPT-3 scale. At our work we do that for vision models which are like 40M parameters.
3dprint_the_world#6486: yeah engineering complexity is the big thing. But if you just want to understand the core ideas without implementing it yourself, you can just focus on the maths.
Nik McFly#3288: in fact, I'm thinking about the whole architecture with implementation
3dprint_the_world#6486: if you want to implement your *own* GPT-3.... well good luck
3dprint_the_world#6486: lol
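(A rough sketch of one of those "embarrassingly-simple components" — single-head scaled dot-product self-attention — assuming PyTorch; purely illustrative, not the actual GPT-3 code or any particular repo's implementation:)
```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Single-head causal self-attention over a sequence x of shape [seq, dim]."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = (q @ k.T) / (k.shape[-1] ** 0.5)           # [seq, seq] similarity matrix
    mask = torch.triu(torch.ones_like(scores), 1).bool()
    scores = scores.masked_fill(mask, float("-inf"))    # causal mask for language modeling
    return F.softmax(scores, dim=-1) @ v                 # weighted sum of value vectors

x = torch.randn(5, 16)                                   # 5 tokens, 16-dim embeddings
w = [torch.randn(16, 16) for _ in range(3)]
print(self_attention(x, *w).shape)                       # torch.Size([5, 16])
```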
Nik McFly#3288: not just a concept as a sum of ideas
3dprint_the_world#6486: make sure you have millions of dollars of cash on hand
bmk#1476: :guilty:
Nik McFly#3288: it's not SO important to have them for thinking 🙂
bmk#1476: > if you want to implement your own GPT-3
> make sure you have millions of dollars of cash on hand
3dprint_the_world#6486: or have *very* good relations with someone at google
bmk#1476: :guilty: :guilty: :guilty:
Nik McFly#3288: that's true
Nik McFly#3288: I understand that anything of that size wouldn't be real without tons of resources, for sure
3dprint_the_world#6486: good
Nik McFly#3288: didn't mean to make gpt-3 clone on my netbook 😦
3dprint_the_world#6486: what's got into you @bmk
bmk#1476: what do you mean
3dprint_the_world#6486: I mean why are you guilty
bmk#1476: i'm just pointing out how ironic it is
3dprint_the_world#6486: it is?
bmk#1476: > *if you want to implement your own GPT-3*
bmk#1476: EleutherAI: :guilty: |
Nik McFly#3288: surely, we all can have that in our dreams
3dprint_the_world#6486: well it's still true isn't it
Nik McFly#3288: > EleutherAI: :guilty:
@bmk haha
bmk#1476: fact check we do not have millions of dollars
Nik McFly#3288: that's my question to this community also
Nik McFly#3288: true
3dprint_the_world#6486: and EleutherAI hasn't actually made it's own GPT-3 yet either
bmk#1476: yet
3dprint_the_world#6486: I mean, hasn't trained it yet
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/768269656326144005/unknown.png
bmk#1476: sorry i just had to
Nik McFly#3288: I love the intention for the open-source gpt alternative
Nik McFly#3288: is there a road map for that?
bmk#1476: road map: whenever we feel like it
Nik McFly#3288: sounds good
3dprint_the_world#6486: I get that the plan here is to use spare TPU resources but 1. That's still not accessible to most people 2. It's not clear yet that that approach will actually work in the end (this isn't a criticism of the project, but one needs to be rational about it)
bmk#1476: hey we also plan on begging Our Benevolent Overlord Google for more resources
3dprint_the_world#6486: yeah
bmk#1476: not *just* scraps |
3dprint_the_world#6486: hence
> or have *very* good relations with someone at google
@3dprint_the_world
Nik McFly#3288: afaiu the main thing about tech is people, so humane tech goes first to deliver resources for the idea
bmk#1476: not here
bmk#1476: the main thing here is compute
bmk#1476: compute compute and more compute
bmk#1476: even data is basically unlimited and not too valuable
Nik McFly#3288: but how to compute without computing power?
bmk#1476: engineers are replaceable, a dime a dozen
bmk#1476: but compute is always necessary
bmk#1476: i say this as someone who does basically all engineering and data for eleuther lol
Nik McFly#3288: it's okay
Nik McFly#3288: I was one of the miners at first years of bitcoin
bmk#1476: nice
Nik McFly#3288: on my netbook, haha
bmk#1476: still in crypto these days?
bmk#1476: or nah
Nik McFly#3288: i'm into building other things outside of the crypto but i still believe in it
Nik McFly#3288: also will go into selling art on blockchain next week |
bmk#1476: i have poor opinions of art on the blockchain
Nik McFly#3288: now grasping the gpt-3 concept
bmk#1476: i have poor opinions of fine art in general but especially nft art
Nik McFly#3288: that's the industry i know it's growing this moment
cfoster0#4356: 👀
Nik McFly#3288: i know people making money there
guac#4716: people transfer money there lol
bmk#1476: thats a horrible reason to go into something
Nik McFly#3288: so i want to go and try but the main question is to go for the community
bmk#1476: dont just do something because it makes money
Nik McFly#3288: not for the money
Nik McFly#3288: that's what i also think
bmk#1476: ok i guess money is an instrumental convergent goal for a lot of things but
bmk#1476: still
Nik McFly#3288: money is a blood of the business
cfoster0#4356: There is a pretty cool crypto-ai-art community on Twitter but I'm not sold on NFT art
Nik McFly#3288: blood of projects
Nik McFly#3288: nft art is cool
bmk#1476: i would distance myself from nft art unless someone can convince me otherwise
Nik McFly#3288: it's okay 🙂 |
Nik McFly#3288: i just want to do art and throw there and see what happens
guac#4716: is there blockchain for music?
bmk#1476: can you give me the elevator pitch for nft art, assuming im already not a fan of fine art as a whole
Nik McFly#3288: no illusions
cfoster0#4356: there are multiple @guac
bmk#1476: i'm into crypto too so you dont need to explain crypto
Nik McFly#3288: no way, im not pitching it, just doing
bmk#1476: ah ok
3dprint_the_world#6486: he's not pitching. he's catching.
Nik McFly#3288: yes 🙂
cfoster0#4356: Who's on first?
bmk#1476: maybe i'm too cynical, don't let me discourage you
Nik McFly#3288: it's okay
guac#4716: @cfoster0 ooooo apparently I'm a year or 2 late hehe there goes my plan!
bmk#1476: anyways i'm generally cynical of many of the new blockchain applications
cfoster0#4356: The big problem with blockchain music is the unit economics
bmk#1476: call me a maximalist but i'm still most interested in crypto as a currency and general economic platform
bmk#1476: most things *dont need to be on chain*
bmk#1476: most things *dont need their own chain/token*
Nik McFly#3288: > anyways i'm generally cynical of many of the new blockchain applications |
@bmk it's okay too. i'm cynical on the tech at all. but you wouldn't believe what's the project with blockchain application i'm involved into
bmk#1476: i cant parse that
bmk#1476: youre cynical of.. the tech? or applications?
Nik McFly#3288: of the tech at all
Nik McFly#3288: all of the tech
bmk#1476: you're cynical of all the tech?
Nik McFly#3288: yep
Nik McFly#3288: absolutely
bmk#1476: but you're not cynical of applications like NFT art
cfoster0#4356: ^
Nik McFly#3288: i live with PC for 30 years and work in it for... 20 years
Nik McFly#3288: so i'm very skeptical and cynical about it
bmk#1476: i'm confused as to your stance
Nik McFly#3288: i was very enthusiastic about it through whole of my life
Nik McFly#3288: now I'm cynical and skeptical
Nik McFly#3288: i'm reaching the balance
bmk#1476: i mean your opinions on applications on top of blockchain
guac#4716: isn't the main purpose of NFTs to create artificial scarcity? That feels wrong.
Nik McFly#3288: i love people and communities. i love to help people learn and i love to ship happiness
bmk#1476: if i understand correctly you're skeptical of the underlying tech but you think the applications are worth exploring |
Nik McFly#3288: yep
bmk#1476: huh
bmk#1476: that's the exact opposite of me tbh
Nik McFly#3288: haha
bmk#1476: i think the tech is solid and people are building castles of shit on top
Nik McFly#3288: 🙂
bmk#1476: but yeah to each their own
Nik McFly#3288: i'll try to find my favorite meme about it
guac#4716: i don't think you'll win him over with a meme hehe
Nik McFly#3288: i'm not for the win
Nik McFly#3288: only fun
guac#4716: and money 😮
Nik McFly#3288: money is fun 🙂
Nik McFly#3288: https://cdn.discordapp.com/attachments/729741769738158194/768274816968556564/WCwOtQ-cQ4CaFU1hBsi7acRqn7gvpXCwDcrjFmQF5AY.png
Nik McFly#3288: 🙂
Nik McFly#3288: the underlying tech, you know
bmk#1476: predictably, i am not convinced
Nik McFly#3288: it's okay
Nik McFly#3288: i'm more into humane tech
Nik McFly#3288: in terms of society development |
Nik McFly#3288: society coding
Nik McFly#3288: society is a tech also
Nik McFly#3288: language is a software
shgidi#0284: Hi, is there a status doc or something for this project? will be happy to know where it stands now...
Daj#7482: Hey @shgidi ! We have https://docs.google.com/document/d/1yOnxEMlU57M8YFlQC3XNOvyMVX2EpU5LeIWhEBcwQNk , which is semi up to date. Currently we're just in a slow phase of cleaning up our code and stuff for release
Daj#7482: The data collection side is more active in #the-pile
shgidi#0284: Thanks @Daj ! I was sure the code is the easy part, since it is very similar to GPT2...isn't it?
Daj#7482: Ehhh it turned out to be quite the struggle hah
Daj#7482: Since model parallelism isn't easy
shgidi#0284: Do you mean training the model on multiple TPUs? @Daj
Daj#7482: The model is too large to fit in one device's memory, so you have to split the model onto multiple cores, that's called "model parallelism" and it's sort of tricky
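(A toy illustration of the splitting idea, with NumPy arrays standing in for devices — real model parallelism adds communication and much more machinery, but the core trick of sharding one layer's weights looks like this:)
```python
import numpy as np

x = np.random.randn(4, 8)            # batch of activations
w = np.random.randn(8, 6)            # full weight matrix, too "big" for one device
w_dev0, w_dev1 = w[:, :3], w[:, 3:]  # column-split across two hypothetical devices

# Each device computes its half of the output; concatenating recovers the full layer.
out = np.concatenate([x @ w_dev0, x @ w_dev1], axis=1)
assert np.allclose(out, x @ w)
```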
Sid#2121: Code quiz: what's the most efficient way to do this: ```# merge a list of pairs with a single overlap into a list of sequences without repeats
# simple case: [[1,2],[2,3],[3,4],[9,8],[8,7]] --> [[1,2,3,4], [9,8,7]]```
Sid#2121: a.k.a do my job for me
Sid#2121: https://gist.github.com/sdtblck/88a2d440f6d925df38b7855899fceb25 this is what I got but I bet it can be done faster
Daj#7482: I'm sure this can be done with monads
Sid#2121: No one really knows what a monad is and you can't convince me otherwise
Daj#7482: Monads are like burritos
Sid#2121: yes, because also no one knows what a burrito is
Daj#7482: Oh shit |
FractalCycle#0001: sam altman ssc meetup date released: https://joshuafox.com/ssc-online-meetups/ 11/08 (although it appeared to be later last i checked, so i guess they moved it sooner)
StellaAthena#3530: @Sid what causes the break between sequences? The fact that monotonicity breaks?
Sid#2121: @StellaAthena the input is arbitrary, the pairs could be in any order
Sid#2121: they should stay in the order they first appeared in the output sequence, though
Sid#2121: [[11,21],[9,5],[21,420],[420,69],[5,12]] would be an equally valid input
bmk#1476: > yes, because also no one knows what a burritos is
@Sid simple, a burrito is like a monad
3dprint_the_world#6486: a burrito is a burritoid in the category of gastrofunctors, what's the problem?
asparagui#6391: interior gastrofunctors
asparagui#6391: if you walk down the street and get hit by a car that's an external gastrofunctor
AI_WAIFU#2844: @Sid Are the list items unique?
AI_WAIFU#2844: Assuming they are, I think you can create a pair of hash maps/dicts, then build up the sequences by iteratively looking up the matching entries in the dict. As you do this, keep track of the pairs you've already visited with another dict/array. Then just scan over the original list and if you see a new entry, build the chain, otherwise keep going until you find an entry that hasn't been accounted for. Whole thing should run in roughly O(n) time. I think.
AI_WAIFU#2844: Or just don't do it in python.
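(A minimal sketch of the dict-based chaining approach just described — not Sid's gist — assuming all values are unique and the pairs form simple chains with no cycles:)
```python
def merge_pairs(pairs):
    nxt = {a: b for a, b in pairs}          # first element -> second element
    seconds = set(nxt.values())             # values that ever appear as a second element
    out = []
    for a, _ in pairs:                      # scan in input order to preserve ordering
        if a not in seconds:                # a chain can only start at a value with no predecessor
            chain = [a]
            while chain[-1] in nxt:
                chain.append(nxt[chain[-1]])
            out.append(chain)
    return out

print(merge_pairs([[1, 2], [2, 3], [3, 4], [9, 8], [8, 7]]))  # [[1, 2, 3, 4], [9, 8, 7]]
```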
Logan Riggs#7302: I just noticed that "1" and " 1" are different tokens which will then have different embeddings. To illustrate, this a simple pattern where the top prompt (using " 1") gets the correct answer whereas the bottom prompt (using "1") gets it wrong. I've noticed something similar with text association as well. https://cdn.discordapp.com/attachments/729741769738158194/768910283753521172/onePattern.png
Logan Riggs#7302: Here's an annotated image https://cdn.discordapp.com/attachments/729741769738158194/768910411859755058/onePattern.png
bmk#1476: this is the same for all words
gwern#1782: yeah, BPEs are notorious for whitespace separation mattering
bmk#1476: gwern has a lot to say about this lol
gwern#1782: the API Playground will warn you about this too, in addition to my writeup ranting at length about the evils of BPEs
gwern#1782: (the 'B' stands not for 'based' but 'Baphomet') |
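(A quick way to see the whitespace effect for yourself, assuming the Hugging Face `transformers` GPT-2 tokenizer as a stand-in for the GPT-3 BPE vocabulary:)
```python
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
print(tok.encode("1"))    # one token id for "1"
print(tok.encode(" 1"))   # a different token id, because the leading space is part of the BPE
```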