Aran Komatsuzaki#5714: yeah no detail. shouldn't be much better than the interconnect they used for DeepSpeed experiments around September.
Sid#2121: which was?
Aran Komatsuzaki#5714: It seems that each node is a DGX-2, and they had 800 Gbps internode communication bandwidth. Not sure what it is called.
Aran Komatsuzaki#5714: https://cdn.discordapp.com/attachments/729741769738158194/777895483494367263/fig1.png
Aran Komatsuzaki#5714: From this figure, I guess they focused mainly on pipeline and data parallelism without much model parallelism.
Aran Komatsuzaki#5714: They used 16x fewer GPUs, but it seems that you can quadruple D dimension without much hassle. Another 4x can come from P or D.
Sid#2121: is this figure from the deepspeed blog post?
Aran Komatsuzaki#5714: yeah
Aran Komatsuzaki#5714: it's hidden somewhere in this page: https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/
Airatak#7842: Well, all I know about GPT-3 is that it was trained using Azure
CKtalon#7792: https://www.zdnet.com/article/nvidia-unveils-new-dgx-station-ai-workstation-in-a-desktop-form/
CKtalon#7792: Anyone know how much this will cost? 50k? 60k?
bmk#1476: 80GB A100?? Woah
CKtalon#7792: https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/dgx-station/nvidia-dgx-station-a100-datasheet.pdf
bmk#1476: We'd only need like 8 of these to train GPT3
StellaAthena#3530: 80 GB A100?!?!
StellaAthena#3530: > Anyone know how much this will cost? 50k? 60k?
@CKtalon Wrong order of magnitude.
StellaAthena#3530: Given that it has so much memory per chip, I wonder why they're only doing four GPUs. This isn't actually much bigger than the DGX A100.
CKtalon#7792: But the DGX A100 is 300k I think?
StellaAthena#3530: @CKtalon Thereabouts. Why would you think that the workstation would be cheaper than just the GPU chips?
WAUthethird#4977: interesting, wonder if they'll manage to fit all 80GB on a PCIE card
AI_WAIFU#2844: Apparently they also have 640GB DGX computers. https://www.servethehome.com/nvidia-dgx-a100-640gb-edition-launched/
bmk#1476: That's just enough to run inference on gpt3
WAUthethird#4977: yeah, the general rule of thumb for vram amounts is twice the size of the model, right?
WAUthethird#4977: with some overhead ofc
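A back-of-envelope version of that rule of thumb; the fp16 assumption and the 25% overhead factor below are illustrative guesses, not measured numbers:

```python
def inference_vram_gb(n_params: float, bytes_per_param: int = 2, overhead: float = 1.25) -> float:
    """Rough VRAM needed to serve a model, in GB.

    fp16 weights cost 2 bytes/param; `overhead` is an assumed fudge
    factor for activations, caches, and framework buffers.
    """
    return n_params * bytes_per_param * overhead / 1e9

print(inference_vram_gb(175e9))  # GPT-3 at 175B params -> ~437 GB, so 640 GB is comfortable
```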
Deleted User#0000: that's ridiculous
Deleted User#0000: that would be great if there were some chrome extension that basically gives you a price tag for an item N years into the future, projected based on data
AI_WAIFU#2844: Frankly, this thing probably isn't that expensive. From what I can gather, previous DGX rackmounts cost on the order of 200k-400k, so it's not unthinkable for this thing to be under 1M.
AI_WAIFU#2844: To top it off, this is just an 8x device. Nvidia's put together 16x GPUs with a single interconnect before. So a system with effectively 1TB of VRAM that takes care of most model level parallelism for you isn't out of the question.
Airatak#7842: > https://www.zdnet.com/article/nvidia-unveils-new-dgx-station-ai-workstation-in-a-desktop-form/
When I get rich, I am so buying this
bmk#1476: Well, better get on it asap
Airatak#7842: yup
Airatak#7842: Step 1: Make some AI Product
Step 2: ???
Airatak#7842: Step 3: Profit $$$
bmk#1476: Step 1: contribute to one of our projects in exchange for getting your name on one of our papers, thereby allowing you to add it to your publication list and making it slightly more likely for you to get hired at a Bigcorp™
Airatak#7842: haha I'm down for that
bmk#1476: We can definitely use some help with Pile
bmk#1476: If you can think of some analysis you can run on our data, implement it, run it, and write up a paragraph about the analysis and the result, you can get a spot on our Pile paper
Airatak#7842: oh cool
Airatak#7842: I'll think of something then
bmk#1476: For reference, this is what we're already running:
bmk#1476: Topic modelling
Language analysis
Profanity analysis
Top Ngrams
Document size distribution
Gpt3/other model perplexity
bmk#1476: If you can do something else that we're missing that would be amazing, more analysis is always better
Airatak#7842: ohk cool, I'll think of something
Airatak#7842: The entire pile dataset is available on github right?
bmk#1476: http://eaidata.bmk.sh/data/pile/
bmk#1476: The data is here
Airatak#7842: You guys calculating the entropy for the data?
bmk#1476: Please elaborate
bmk#1476: We're getting perplexity of GPT3 and a GPT2 baseline on the data, is that what you mean?
Airatak#7842: I'm referring to the Shannon entropy of the dataset
Airatak#7842: A lower entropy suggests that the data is close to the true distribution
StellaAthena#3530: That's not really true.
StellaAthena#3530: Some distributions have higher entropy than others
StellaAthena#3530: A nearly uniform distribution will have a higher entropy than something highly structured, but that doesn't mean complicated true models don't happen.
bmk#1476: > A lower entropy suggests that the data is close to the true distribution
@Airatak the true distribution of what?
StellaAthena#3530: Do you mean *relative entropy*?
StellaAthena#3530: aka Kullback–Leibler divergence?
bmk#1476: But the KL divergence between what and what?
bmk#1476: That's only relevant when we talk about models, no?
Airatak#7842: > A nearly uniform distribution will have a higher entropy than something highly structured, but that doesn't mean complicated true models don't happen.
@StellaAthena hmmm makes sense
StellaAthena#3530: I'm not sure what @Airatak has in mind, I'll wait for them to explain.
StellaAthena#3530: @bmk You can compare the KL Divergence of two samples to infer which is more similar to a particular known distribution
Airatak#7842: > @Airatak the true distribution of what?
@bmk I was talking about calculating the Entropy of the languages in the dataset
Airatak#7842: I remember something like this was mentioned in a course I took on NLP
bmk#1476: The dataset is exclusively English
bmk#1476: Or, some 98% english to be pedantic
bmk#1476: Or something like that
StellaAthena#3530: "The entropy of English" is not a very well defined concept. It can vary significantly based on what data you are measuring |
bmk#1476: Character level vs word level vs document level, etc
StellaAthena#3530: No, I mean more the context the documents came from
StellaAthena#3530: Student essays vs newspapers vs academic papers vs blogs
Airatak#7842: Can't we calculate the entropy of a language - like English - using isolated symbol probabilities?
StellaAthena#3530: It also is different if you measure it in the US vs in South Africa
StellaAthena#3530: > Can't we calculate the entropy of a language - like English - using isolated symbol probabilities?
@Airatak For a particular sample, yes. But the variance between samples makes coming up with something meaningful more or less impossible
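For concreteness, the isolated-symbol, per-sample estimate under discussion is a few lines of Python (a minimal sketch of unigram character entropy only; entropy conditioned on context, which comes up below, is much harder to estimate):

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Unigram Shannon entropy of one sample, in bits per character."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("the quick brown fox jumps over the lazy dog"))
```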
StellaAthena#3530: There's too much important variability to cram into that single number
Airatak#7842: That is true
StellaAthena#3530: I endeavor to say true things 🙂
Airatak#7842: I did find a few papers exploring this concept, it is interesting
bmk#1476: Also we usually don't really care about the entropy of isolated words anyways right?
bmk#1476: We care about entropy conditioned on context
StellaAthena#3530: Yeah, there are some interesting metareviews comparing different measurement attempts.
bmk#1476: Which, for obvious reasons, is nontrivial to figure out
StellaAthena#3530: What I generally find more interesting is comparing information across languages
StellaAthena#3530: Have you read this paper @Airatak https://advances.sciencemag.org/content/5/9/eaaw2594
Airatak#7842: Nope, Let me check it out
Airatak#7842: Oh this seems interesting
StellaAthena#3530: Yeah! I’ve been pushing to make one of our next projects be about investigating scaling laws for language models with that paper in mind
bmk#1476: Unfortunately we can't do this analysis for v1 because our data is 98% english
Airatak#7842: > Yeah! I’ve been pushing to make one of our next projects be about investigating scaling laws for language models with that paper in mind
@StellaAthena That sounds super awesome!
Airatak#7842: I'd love to see an extended psychological study on the findings to see how humans perceive more information dense languages
StellaAthena#3530: That would be dope
Airatak#7842: oh btw this is rather simple, but are you doing sentence size distribution also?
StellaAthena#3530: Doing what with it?
Airatak#7842: I mean putting it in the paper
Airatak#7842: I see you are doing it for documents
Airatak#7842: Most dataset papers I come by have sentence distributions, but they are limited in size compared to the Pile
StellaAthena#3530: TBH I don't find sentence size distribution particularly interesting. What benefit does that have?
Airatak#7842: I get your point. Document distributions are much more useful for most use cases, but sentence size distributions may come in handy for some cases. Say you wanted to make a transformer which had a context size of 4-5 sentences, it would be handy to know the distribution, but it is not a big deal.
andyljones#7746: research idea: construct the scaling law for fine-tuning language models to a specific person's utterances. in other words, figure out how much data you need from a person in order to imitate them to a specified level of accuracy
define the law in terms of model size and fine-tuning dataset size, extrapolate to arbitrarily large models, and call the limit Roko's Law
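One way to operationalize that, sketched with invented numbers purely to show the fit (`c` is the extrapolated irreducible loss; the data points below are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(d, a, b, c):
    """Saturating power law: loss(D) = a * D**-b + c."""
    return a * np.power(d, -b) + c

d = np.array([1e3, 1e4, 1e5, 1e6, 1e7])      # fine-tuning tokens per person (hypothetical)
loss = np.array([4.1, 3.4, 2.9, 2.6, 2.45])  # held-out loss in nats (invented numbers)

(a, b, c), _ = curve_fit(scaling_law, d, loss, p0=[10.0, 0.2, 2.0], maxfev=10_000)
print(f"extrapolated floor as data -> infinity: {c:.2f} nats")
```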
StellaAthena#3530: That sounds dope
bmk#1476: what would it mean to imitate a person at x% accuracy?
andyljones#7746: was more thinking nats than %
bmk#1476: ah
StellaAthena#3530: How much data would we need?
StellaAthena#3530: I have access to a lot of text written by me, but to test on people not in this discord would probably be challenging
andyljones#7746: would start with reddit, cause it gets you a lot of unique people outputting a lot of text and it's easy to bulk download
Deleted User#0000: thats quite interesting
Deleted User#0000: are we doing per-domain scaling laws too? Is it easier to learn science/literature/news articles/etc?
andyljones#7746: i'm tangling with a similar issue in my own project right now. place i've come to is to get *a* result first, on the easiest dataset possible. and then worry about generalisation and specialisation
andyljones#7746: an MVP, in other words
AI_WAIFU#2844: https://www.penguincomputing.com/products/servers/altus/altus-xo3218gts-server/
Anyone wanna ask for a quote?
AI_WAIFU#2844: Looks like terabyte vram is already a thing: https://www.nvidia.com/en-us/data-center/hgx/
> NVIDIA HGX A100 combines NVIDIA A100 Tensor Core GPUs with high-speed interconnects to form the world’s most powerful servers. With A100 80GB GPUs, a single HGX A100 has up to 1.3 terabytes (TB) of GPU memory and over 2 terabytes per second (TB/s) of memory bandwidth, delivering unprecedented acceleration.
bmk#1476: if we are unable to persuade Overlord Google to give us enough resources to do gpt3 replication, we should start a kickstarter to buy a coupla these
bmk#1476: we'll probably need 4-8
AI_WAIFU#2844: That or because it's nvidia, you get 1 and do L2L
bmk#1476: i don't plan on waiting several years
AI_WAIFU#2844: tru
AI_WAIFU#2844: but 1 of those systems has the ram to train an almost 1T system, even if in practice it would take a decade to train.
AI_WAIFU#2844: ...
AI_WAIFU#2844: We need a hardware overhang emote.
bmk#1476: see, we can overtake oa in just 3 easy steps
bmk#1476: 1. solicit $100M in investment |
2. fill 2 or 3 40U racks with these servers
3. profit!
AI_WAIFU#2844: This has made me update again in terms of what's possible with NN training.
AI_WAIFU#2844: 100T is possible.
bmk#1476: i'm actually unironically considering this
AI_WAIFU#2844: Maybe 1Q
bmk#1476: after we finish Pile, and models up to something like 10B, we should start pestering masayoshi son
bmk#1476: if we can get him to drop a hundred or two million on us, we could overtake OA overnight
bmk#1476: and it would still be a better investment than WeWork
AI_WAIFU#2844: I think we're lacking social capital.
bmk#1476: we'll have that after we have stuff released
bmk#1476: and after we turn most of our protopapers into real papers
bmk#1476: more legitimacy as a Real Organization is one big reason i've been pushing for us to publish papers so much
AI_WAIFU#2844: Yeah, that makes sense.
AI_WAIFU#2844: Changing topics, has anyone tried tokenizing music?
AI_WAIFU#2844: After an appropriate discretization step
cfoster0#4356: You mean music recordings?
AI_WAIFU#2844: Yes, raw audio.
cfoster0#4356: Not I. There was a discussion a while back in #research about that, though
AI_WAIFU#2844: Do you remember the conclusion?
cfoster0#4356: I think we agreed that there are a bunch of options that would probably work
cfoster0#4356: Including running BPE on the raw samples, extracting DCT coefficients, and working with spectrograms
AI_WAIFU#2844: yeah, the former is what I'm thinking of.
cfoster0#4356: If you do that, it's probably worth reducing bit depth and sample rate
AI_WAIFU#2844: I'll use the tricks from the wavenet paper. They have a nice discretization technique that lets them reduce things down to 256 symbols without much loss of quality.
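The WaveNet trick referenced here is mu-law companding; a sketch of the standard formulation, assuming audio normalized to [-1, 1]:

```python
import numpy as np

def mu_law_encode(audio: np.ndarray, channels: int = 256) -> np.ndarray:
    """Compress float audio in [-1, 1] to integer codes in [0, channels-1]."""
    mu = channels - 1
    compressed = np.sign(audio) * np.log1p(mu * np.abs(audio)) / np.log1p(mu)
    return ((compressed + 1) / 2 * mu + 0.5).astype(np.int64)

def mu_law_decode(codes: np.ndarray, channels: int = 256) -> np.ndarray:
    """Invert mu_law_encode (up to quantization error)."""
    mu = channels - 1
    compressed = 2 * codes.astype(np.float64) / mu - 1
    return np.sign(compressed) * np.expm1(np.abs(compressed) * np.log1p(mu)) / mu
```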
AI_WAIFU#2844: I don't think you can compromise on sample rate too much unfortunately.
cfoster0#4356: Depends on whether you're willing to post-process
cfoster0#4356: Waveform generation is in a pretty good place right now
cfoster0#4356: cf. https://arxiv.org/abs/2009.00713
StellaAthena#3530: @gwern has fine-tuned GPT-2 to output musical compositions
StellaAthena#3530: See also: https://arxiv.org/abs/1809.04281
StellaAthena#3530: Open source implementation here: https://github.com/scpark20/Music-GPT-2
AI_WAIFU#2844: It looks like gwern used ABC format and this uses MIDI.
cfoster0#4356: Ye the above are symbolic
cfoster0#4356: I've always wanted to train on .mod files
AI_WAIFU#2844: I wonder if I can further shrink the codebook from 256 to something like 64 and then regenerate the underlying music using a neural network.
AI_WAIFU#2844: Hmm... I think this is gonna be my weekend project.
Deleted User#0000: one idea i was thinking for movement generation is something they are doing in unsupervised speech synthesis: wavenet-vq-vae to convert a raw continuous signal into a discrete sequence (which can be of lower frequency) https://arxiv.org/pdf/1901.08810.pdf. You could then train a transformer on the discrete vq-vae latents sequence, and use the decoder to generate a raw signal
Deleted User#0000: code https://github.com/DongyaoZhu/VQ-VAE-WaveNet
Deleted User#0000: i havent tried it but it looks like a cool model to try on any continuous signal kind of data
Deleted User#0000: like raw audio/music
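The core of that pipeline is the vector-quantization step; a toy sketch of just the discretization (in a real VQ-VAE the codebook and encoder are learned jointly, so the random arrays here are placeholders):

```python
import numpy as np

def quantize(frames: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each continuous frame to the id of its nearest codebook vector."""
    # (n, 1, d) - (1, k, d) -> (n, k) squared distances
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

codebook = np.random.randn(64, 16)   # k=64 codes, 16-dim latents (placeholder)
frames = np.random.randn(1000, 16)   # stand-in for encoder outputs per frame
tokens = quantize(frames, codebook)  # discrete sequence a transformer can model
```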
Deleted User#0000: also on another note (pun not intended)..
there's an idea for LM model parallelism i had a while back, and i was wondering how feasible u think it is
it's inspired by the findings that in the infinite-width limit of Bayesian DNNs for multi-output prediction, every output effectively becomes like an independent model.
That made me think: what if you did model parallelism over output tokens? Kind of like a mixture of experts, but each expert is a priori designed to focus on a subset of tokens. So Expert 1 outputs an embedding which is then compared with the embedding table of subset S_1 of tokens, and so on for experts 1...n. The error gradient would be easily propagated as a vector of size |S_i| to each expert. If the infinite-width limit intuition above works, then this could achieve similar performance, but allow for an easy and scalable form of model parallelism (up to the number of tokens)?
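A minimal sketch of that idea, with illustrative names and sizes (in actual model parallelism each expert/table pair would sit on its own device; here everything runs serially for clarity):

```python
import torch
import torch.nn as nn

class TokenShardedExperts(nn.Module):
    """Each expert emits an embedding scored only against its own vocab
    shard S_i; full-vocab logits are just the concatenation."""
    def __init__(self, d_in: int, d_emb: int, vocab: int, n_experts: int):
        super().__init__()
        assert vocab % n_experts == 0
        shard = vocab // n_experts
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_in, 4 * d_emb), nn.GELU(), nn.Linear(4 * d_emb, d_emb))
            for _ in range(n_experts))
        self.tables = nn.ParameterList(
            nn.Parameter(0.02 * torch.randn(shard, d_emb)) for _ in range(n_experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # the gradient reaching expert i is the |S_i|-sized slice of the error signal
        return torch.cat([e(x) @ t.T for e, t in zip(self.experts, self.tables)], dim=-1)

logits = TokenShardedExperts(128, 64, 1024, 4)(torch.randn(2, 128))  # (2, 1024) -> cross-entropy
```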
AI_WAIFU#2844: Looking back, OpenAI already took a proper crack at it. https://openai.com/blog/jukebox/
zphang#7252: Google Magenta has been less prolific than I would've thought
cfoster0#4356: Yeah they were early on the transformers train and then idk what happened
cfoster0#4356: Their DDSP stuff is neat at least
bmk#1476: > We’re releasing the model weights and code
they were early on, *ahem*, this particular train too, and then idk what happened
AI_WAIFU#2844: I'm gonna guess they ran out of data. They made their top-level model 5B and they had at most 70B tokens in their top-level dataset.
AI_WAIFU#2844: Actually, maybe not
bmk#1476: not enough data?
bmk#1476: that sounds like a job for
bmk#1476: *MusicPile*
AI_WAIFU#2844: I'm OOTL, is this a new thing?
bmk#1476: i'm joking, but only kind of
AI_WAIFU#2844: I have 20GBs of *N I G H T C O R E*
bmk#1476: if anyone wants to lead a MusicPile project i am all for it and i'll probably do whatever i can to make it happen
bmk#1476: in particular, i've learned a lot of lessons about what (not) to do from Pile which are probably largely transferable
AI_WAIFU#2844: I feel like the copyright situation would be a mess.
bmk#1476: i mean, how is it different from pile?
bmk#1476: should be fair use
bmk#1476: :guilty:
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/778081011199246346/unknown.png
cfoster0#4356: The music industry is wayyyy more attentive to this
AI_WAIFU#2844: The music industry has more lawyers
AI_WAIFU#2844: They'll fight you in court.
bmk#1476: :guilty: https://cdn.discordapp.com/attachments/729741769738158194/778081324408766474/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/778081361306189904/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/778081442567553034/unknown.png
AI_WAIFU#2844: See, I'm willing to bet the legal budget of the RIAA is 3 orders of magnitude bigger than elsevier's
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/778081850496516106/unknown.png
AI_WAIFU#2844: Like, if you somehow don't get destroyed in court, they will lobby governments worldwide to make the pile illegal.
AI_WAIFU#2844: That's what you're up against
Deleted User#0000: im leading a Movement++Pile
bmk#1476: *but.. but.. muh fair use*
Deleted User#0000: hopefully, if i get stuff to work
AI_WAIFU#2844: The law is only immutable if your pockets are shallow
cfoster0#4356: @Deleted User super hopeful for you. A good prior on human motion would be nice to have
Deleted User#0000: yeap. Plus this would be movement + speech + sensory data (vision + hearing), all time-synced, and involving multi-people situations in interesting environments
Deleted User#0000: thats the goal at least
AI_WAIFU#2844: Is this the VR dataset?
Deleted User#0000: yea
Deleted User#0000: the (future) VR dataset
gwern#1782: we already have music pile, we call it TLMC
gwern#1782: it is culture
gwern#1782: if 1.65tb of touhou music isn't enough to train your music synthesizer, then it's not worth training
Harrison Wells#2251: Hello
Harrison Wells#2251: Anyone here interested in AGI?
bmk#1476: what specifically about AGI?
bmk#1476: and why do you ask?
Harrison Wells#2251: > what specifically about AGI?
@bmk an Artificial General intelligence
bmk#1476: no i mean what *about* AGI
Harrison Wells#2251: > and why do you ask?
@bmk because i wanted help
Harrison Wells#2251: > no i mean what *about* AGI
@bmk human like
bmk#1476: what kind of help do you need?
Harrison Wells#2251: How to make an AGI?
Harrison Wells#2251: Like we know how to make some basic ones like
Harrison Wells#2251: Self driving car models
Harrison Wells#2251: And some assistant
Harrison Wells#2251: With deep learning and KNN, CNN
Harrison Wells#2251: But what about AGI?
Harrison Wells#2251: With emotions
bmk#1476: > How to make an AGI?
@Harrison Wells this is a quadrillion dollar question
AI_WAIFU#2844: Understatement.
bmk#1476: this is a universal-paperclips scale question
Harrison Wells#2251: Oops
Harrison Wells#2251: But i am 15
Harrison Wells#2251: Can't pay sorry
Harrison Wells#2251: Just interested in making one
bmk#1476: I advise you to start by learning some of the math involved
Harrison Wells#2251: Please tell if you know
bmk#1476: https://discord.com/channels/729741769192767510/729741769738158194/736374402366832681
Harrison Wells#2251: > I advise you to start by learning some of the math involved
@bmk i know about calculus matrices vectors
Harrison Wells#2251: Parabola
Harrison Wells#2251: Etc
Harrison Wells#2251: Logistic regression etc
Harrison Wells#2251: Read books about them
AI_WAIFU#2844: go read this https://www.hpmor.com/ It may feel like it has nothing to do with AI, but it's a surprisingly good starting point.
bmk#1476: lol don't get him into HPMOR
bmk#1476: at least give him R:A-Z
AI_WAIFU#2844: Fine. @Harrison Wells once you're done reading that first book, read this: https://rationalitybook.com/
Harrison Wells#2251: You kidding me?
AI_WAIFU#2844: No
AI_WAIFU#2844: I'm 100% serious.
Harrison Wells#2251: Really
AI_WAIFU#2844: Yes
Harrison Wells#2251: Sounds like they are not even related to them
Harrison Wells#2251: But what about AGI here?
Harrison Wells#2251: Like some ai(s) in zombie games etc are just basic programs
Harrison Wells#2251: Which aren't even self aware
Harrison Wells#2251: And emotionless
bmk#1476: @Harrison Wells you have some *serious* reading up to do
Harrison Wells#2251: Simple made up with KNN or deep learning maybe?
Harrison Wells#2251: @bmk yes please tell
bmk#1476: and don't start with HPMOR lol
Harrison Wells#2251: Ya
AI_WAIFU#2844: Don't listen to @bmk
Harrison Wells#2251: @AI_WAIFU ok
Harrison Wells#2251: Is it
bmk#1476: HPMOR is *a* way, just not the best way imo
Harrison Wells#2251: You both are not Friends?
bmk#1476: ~~no, we do not watch 90s sitcoms~~
Harrison Wells#2251: Ok
Harrison Wells#2251: Great
Harrison Wells#2251: Please let me know the basic procedure of Making an AGI i will learn all of the steps one by one
Harrison Wells#2251: Please
AI_WAIFU#2844: Read both of the things I linked then read the messages exchanged on this long-dead mailing list. http://sl4.org/
cfoster0#4356: Are these legit recommendations for someone getting started?
AI_WAIFU#2844: This is how I started
Harrison Wells#2251: @cfoster0 i really don't think so
Harrison Wells#2251: @AI_WAIFU i am intermediate bro :/
AI_WAIFU#2844: At about the same age to boot
cfoster0#4356: Huh
bmk#1476: @Harrison Wells give me a few minutes, i'm a bit busy right now but i can get you a reading list in a moment
Harrison Wells#2251: I made some models like crypto predictions , self driving cars
Harrison Wells#2251: Etc
Harrison Wells#2251: > @Harrison Wells give me a few minutes, i'm a bit busy right now but i can get you a reading list in a moment
@bmk yeah Please thanks though
Harrison Wells#2251: And some sales prediction
Harrison Wells#2251: And actually assistants like Google or Alexa, i can't even call them an AI 😂
Harrison Wells#2251: They are just voice Assistant
Harrison Wells#2251: And chat bots
Harrison Wells#2251: With NLP moreover
Harrison Wells#2251: Did anyone make an AGI yet?
Harrison Wells#2251: Like a human brain?
bmk#1476: @Harrison Wells i presume you're good at picking up math so you're ok with some more dense material
Harrison Wells#2251: Yeah
Harrison Wells#2251: At least completed high school
bmk#1476: http://static.stevereads.com/papers_to_read/all_of_statistics.pdf this textbook is very good
AI_WAIFU#2844: No, he's got to get the philosophical basics down first. There's no point in getting deep in the weeds of math unless you have the big picture first
bmk#1476: ok, i'm ok with R:AZ
Harrison Wells#2251: > http://static.stevereads.com/papers_to_read/all_of_statistics.pdf this textbook is very good
@bmk ok
bmk#1476: but hpmor is.. too much
Harrison Wells#2251: It's about what
bmk#1476: statistics
Harrison Wells#2251: Bro :/
Harrison Wells#2251: Statistics and probability are taught in school
bmk#1476: you *will* need statistics
Harrison Wells#2251: And if you are talking about the matplotlib one
Harrison Wells#2251: I already know
Harrison Wells#2251: And the maths one too
bmk#1476: ok, if you think it's easy, do the 5th problem of each chapter's exercises
Harrison Wells#2251: @bmk ok
AI_WAIFU#2844: HPMOR provides the vision and the motivation for R:AZ, you either need HPMOR first or a very specific personality type.
Harrison Wells#2251: i am not a beginner
Harrison Wells#2251: You really recommending them
Harrison Wells#2251: I asked for the procedure steps required for AGI :/
Harrison Wells#2251: I will handle the rest
Harrison Wells#2251: Please give me the procedure list
Harrison Wells#2251: OF AGI
bmk#1476: ok, i will allow HPMOR
cfoster0#4356: Start with the textbook from Russell and Norvig, imo
Harrison Wells#2251: @bmk Please don't joke buddy
bmk#1476: i am not joking
Harrison Wells#2251: Really?
AI_WAIFU#2844: None of us are joking.
bmk#1476: HPMOR is basically a palatable packaging of a bunch of philosophical ideals
Harrison Wells#2251: I don't want to read them
Harrison Wells#2251: I just wanted to know about the procedure of AGI
bmk#1476: you said you want to do AGI
bmk#1476: this is the way
Harrison Wells#2251: @bmk i want to know the procedure
Harrison Wells#2251: From 1st step to last
Harrison Wells#2251: With heading
Harrison Wells#2251: Please guys
Harrison Wells#2251: I request you i will learn the rest by searching each of them
bmk#1476: we're being absolutely serious
Harrison Wells#2251: @bmk no doubt
Harrison Wells#2251: But Please let me know the
cfoster0#4356: @Harrison Wells it's not that easy
Harrison Wells#2251: Steps
Harrison Wells#2251: Of making AGI
cfoster0#4356: No one knows the steps
Harrison Wells#2251: @cfoster0 i have 8 years now to learn i don't care
Harrison Wells#2251: @cfoster0 what?
Harrison Wells#2251: Really
Harrison Wells#2251: Is it?
Harrison Wells#2251: :/
bmk#1476: @Harrison Wells again, i highly advise you to read up on the literature
Harrison Wells#2251: @bmk sir i will read them
Harrison Wells#2251: But Please provide me with the steps required to make an AGI?
Harrison Wells#2251: Like data collection then model preparation
Harrison Wells#2251: Which type of model
cfoster0#4356: @bmk and @AI_WAIFU have offered some solid resources from the rationalist sphere. That'll get you started with the mindset
Harrison Wells#2251: Like deep learning, CNN what?
Harrison Wells#2251: Like these
Harrison Wells#2251: I want some steps
Harrison Wells#2251: I will go through every one don't worry
Harrison Wells#2251: But Please give me them
kindiana#1016: nobody knows how to make agi, a lot of people have guesses, it's all just conjecture until somebody does it
Harrison Wells#2251: > @bmk and @AI_WAIFU have offered some solid resources from the rationalist sphere. That'll get you started with the mindset
@cfoster0 i want hands on
bmk#1476: ok, do that stats textbook
AI_WAIFU#2844: @Harrison Wells I've been at this for almost a decade. I don't know all the steps to AGI, but I can show you my best guess at the first ones. And although it may seem crazy, step 1 is to read https://www.hpmor.com/
bmk#1476: if you don't want HPMOR
Harrison Wells#2251: > nobody knows how to make agi, a lot of people have guesses, it's all just conjecture until somebody does it
@kindiana there is no module or proper Framework
Harrison Wells#2251: For doing it?
Harrison Wells#2251: Ok if there is none of them then what i will do reading these books
kindiana#1016: a lot of people think they have a path to AGI, but there's no telling if any of them work
Harrison Wells#2251: > a lot of people think they have a path to AGI, but there's no telling if any of them work
@kindiana yes that's true
Harrison Wells#2251: But atleast the type of algorithms used?
Harrison Wells#2251: Like deep learning, deep q learning etc
bmk#1476: i'd second cfoster's recommendation of the norvig book, though this is mostly second hand since i haven't read it myself
Harrison Wells#2251: What is being used
Harrison Wells#2251: I want it hands on
Harrison Wells#2251: Not in the books
Harrison Wells#2251: Please
bmk#1476: @Harrison Wells this is the only way
AI_WAIFU#2844: I'll also throw in "Bishop's pattern recognition and machine learning", but do that one later. It's a classic.
Harrison Wells#2251: @AI_WAIFU you know that's the most basic stuff :/
cfoster0#4356: If it was as easy as following a tutorial, we'd already have AGI. But it isn't, so we don't.
AI_WAIFU#2844: But it's not though, and if you think it is, that means you're missing the *real* basic stuff.
Harrison Wells#2251: @AI_WAIFU i have been doing the basics for 2 years
Harrison Wells#2251: Fed up now reading books and understanding maths
Harrison Wells#2251: Want it hands on
Harrison Wells#2251: In code
Harrison Wells#2251: > If it was as easy as following a tutorial, we'd already have AGI. But it isn't, so we don't.
@cfoster0 not talking about tutorial but some basic steps
bmk#1476: @Harrison Wells a maths undergraduate degree is 4 years
Harrison Wells#2251: @bmk if you study according to school
Harrison Wells#2251: Then
Harrison Wells#2251: I studied nights and dropped out of school
Harrison Wells#2251: Now i want it hands on Please
cfoster0#4356: You'll need a better mentor than a group of strangers on Discord, friend 😅
Harrison Wells#2251: @cfoster0 yes
Harrison Wells#2251: But where to find one
bmk#1476: I encourage you to contact your local university
Harrison Wells#2251: @bmk they are all a waste
cfoster0#4356: Honestly, a college or university is the straightest path to find someone like that
bmk#1476: so, we've given all the advice we can
Harrison Wells#2251: I live in India
bmk#1476: either you take our advice, or you don't
Harrison Wells#2251: No universities here
Harrison Wells#2251: Are so good
bmk#1476: so this is up to you to decide
bmk#1476: either you bite the bullet and take our advice, or you don't. we can't really force you to do anything
cfoster0#4356: @Harrison Wells Contact someone at a university elsewhere, then. Email them and explain your situation.
cfoster0#4356: People are generally open to help if there's a direct, easy ask.
cfoster0#4356: But as bmk said, there's not much more we can do that would be helpful rn.
AI_WAIFU#2844: We can however say with a fair degree of confidence that if you don't bite the bullet, become more patient, and learn more of the basics first, that you won't get anywhere.
AI_WAIFU#2844: Although perhaps it's best for you to flail around implementing a few more algorithms, you'll come to understand their shortfalls.
Harrison Wells#2251: Ok
AI_WAIFU#2844: In fact, you know what. @Harrison Wells Here's a paper that explains how to build AGI. http://www.hutter1.net/ai/aixigentle.pdf
Harrison Wells#2251: Really?
AI_WAIFU#2844: Yes. If you have a large enough computer.
bmk#1476: it's a theoretical framework for AGI
Harrison Wells#2251: Oh wow
Harrison Wells#2251: Then why don't you guys make one
Harrison Wells#2251: Learning from it
bmk#1476: after you've read the paper, we can discuss further
Harrison Wells#2251: Ok
Harrison Wells#2251: Reading
Harrison Wells#2251: Ok i have a very basic question now
AI_WAIFU#2844: Good.
Harrison Wells#2251: Can we make an ai which can read a book on It's own and learn?
Harrison Wells#2251: ?
cfoster0#4356: Not quite, yet
StellaAthena#3530: Learn *what*?
StellaAthena#3530: We can train an AI which can answer SAT reading comprehension type questions
Harrison Wells#2251: @StellaAthena yes
StellaAthena#3530: @Harrison Wells yes we can train an AI which can read a book and answer SAT-type reading comp questions about it
StellaAthena#3530: (Not phenomenally, but not worse than the average American high schooler)
bmk#1476: > (Not phenomenally, but not worse than the average American high schooler)
@StellaAthena this is simultaneously exciting and sad
Harrison Wells#2251: @StellaAthena oh
Harrison Wells#2251: How?
Harrison Wells#2251: And is it possible to create an AI like jarvis?
Harrison Wells#2251: @StellaAthena this way we can even make him pass high school and get a job?
Airatak#7842: I wonder if someone were to make GPT-3 do their assignments, will the professors be able to figure it out?
CKtalon#7792: probably not
StellaAthena#3530: @Harrison Wells It is rather hard to overstate how different responding to SAT-like questions about a text and JARVIS are. That's like asking the Wright Brothers if they can travel to the stars
gwern#1782: how do you know it's like asking the wright brothers, and not goddard?
StellaAthena#3530: I don't, I guess
Deleted User#0000: @Airatak this may be a good incentive to get teenagers interested in training attention nets
Deleted User#0000: advertise it as an essay-writer
WAUthethird#4977: "finally...your childhood dream come true"
Airatak#7842: Answering SAT questions should not be too hard since the answer to each question is available in the given text itself
Airatak#7842: > advertise it as an essay-writer
@Deleted User That is actually a good idea. I think someone did implement something similar but not as useful.
FractalCycle#0001: > advertise it as an essay-writer
i've wondered about this too, surprised this hasn't become more of a thing
Deleted User#0000: i remember as a kid, i often just paraphrased Encarta for my writing assignments lol
Deleted User#0000: kids will go to any lengths
Deleted User#0000: in case you don't remember Encarta https://www.youtube.com/watch?v=qLmudzYWY94
Aran Komatsuzaki#5714: @Deleted User nowadays you can let a Harvard graduate write an essay for $10 per page.
Deleted User#0000: oh yea, i've heard about these ghost-writing services
Deleted User#0000: transformers would be a great fit for that market
Deleted User#0000: once they fine tune on a bunch of applicant essays
Deleted User#0000: those writings are so derivative..
Aran Komatsuzaki#5714: yeah i'll let it write my cover letter lol
Aran Komatsuzaki#5714: i say my work speaks for itself
Deleted User#0000: yea, we are about to see the death of writing in our lifetimes
Deleted User#0000: nuts
Deleted User#0000: well, any writing that is in-distribution
Aran Komatsuzaki#5714: even a Harvard literature major gets paid such a pittance. they don't have any hp left lol
Deleted User#0000: yea, times are changing
Aran Komatsuzaki#5714: this isn't anything new, but i hope i can show that people are training MLM models for far too many tokens
Aran Komatsuzaki#5714: they generally train for like trillions of tokens, and hopefully we can show that you need only tens of billions
Aran Komatsuzaki#5714: that'd be like 100x less
Deleted User#0000: @Aran Komatsuzaki does your plan still pivot around the use of MoE?
Deleted User#0000: i feel like every self-supervised game for language modeling has already been thought of and studied
Aran Komatsuzaki#5714: it consists of three major components: MoE, scaling and no regularization.
Deleted User#0000: gotcha..
Aran Komatsuzaki#5714: also trying them on both GPT-2 and MoE.
Aran Komatsuzaki#5714: Yeah pretty much everything... Open domain QA is the last thing i guess
Aran Komatsuzaki#5714: It should provide a very strong baseline for gpt and mlm.
Aran Komatsuzaki#5714: sorry i meant GPT-2 and MLM
Aran Komatsuzaki#5714: should be very catchy cuz it can save compute on the order of 10x - 100x.
Aran Komatsuzaki#5714: scaling paper didn't really measure how much compute they can save, but we can measure it here.
Deleted User#0000: yup, you'll release dopamines if you put that in the abstract, and back it up
Aran Komatsuzaki#5714: MoE paper tried moe on an unpopular task without any nice scaling, but they still got 10x compute savings, so our study is very promising imo
Aran Komatsuzaki#5714: haha
Aran Komatsuzaki#5714: yeah it's very easy to sell our paper for sure.
Aran Komatsuzaki#5714: let's name the model Homer (in contrast to Marge).
Aran Komatsuzaki#5714: it's a low-hanging fruit, but it's also high return investment in terms of how much attention it will attract.
Deleted User#0000: @Aran Komatsuzaki what happened with Erik? are you still planning on working with him?
Aran Komatsuzaki#5714: he's interested but very busy. he said he's releasing the paper this week, so i'm planning to add it to gptneo for mlm experiments due to its simplicity.
Deleted User#0000: ohhh yea, he's releasing his paper to beat Electra
Deleted User#0000: im kind of eager to see what he came up with
Deleted User#0000: lol, people keep offering to put me on their papers, but it means nothing to me really. i guess i didn't go through the academic gauntlet, so it doesn't really matter
Aran Komatsuzaki#5714: let me talk to you one on one, as it's easy for us to be notified.
StellaAthena#3530: @Deleted User thats a good problem to have lol
FractalCycle#0001: here's what we do:
step 1: write a bunch of papers using ML
step 2: put all our names on them
step 3: we now have prestige
foolproof can't fail $100%
FractalCycle#0001: step 4: all the papers cite all the other papers
step 5: we now have citations
Deleted User#0000: make 100 smaller papers that all cite each other
Deleted User#0000: paper parallelism
bmk#1476: Mesh PubliCation
FractalCycle#0001: "Hmmm, we sure are getting a lot of 2048-token-long submissions with identical authors, all of which cite each other..."
Deleted User#0000: paper... maximizer
bmk#1476: Relevant smbc
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/778384790520201266/20090831.gif
Deleted User#0000: ouch
FractalCycle#0001: we laugh, but remember: the SMBC guy publishes (books)
StellaAthena#3530: After adding a citation to a recent paper of mine to the Pile paper, I thought about this kind of thing a bunch.
Paulo Mann#0416: I think this is a big problem we are having with the science community, especially the Computer Science community.
Paulo Mann#0416: It is hindering slow science + major advances and prioritizing fast science + minor advances (consistently useless)
Daj#7482: If you think it's bad in CS just wait until you discover biology and psychology hah
Paulo Mann#0416: I didn't know that psychology was like this lol
Daj#7482: Well no, psychology is just largely fraud
Paulo Mann#0416: they should know better than us that this does not make any good for mental health
Daj#7482: Not all of it
Daj#7482: Just most of it
Paulo Mann#0416: I see
Daj#7482: > they should know better than us that this does not make any good for mental health
@Paulo Mann Academic Psychologists are not incentivized to maximize patient well being
Deleted User#0000: we were straight-out told by a lecturer in medical school that 50% of what we were taught (if not more) will not be correct
Deleted User#0000: in 10 years
Deleted User#0000: gives you an idea of the state of medicine
Deleted User#0000: if the pandemic hasn't exposed that already >_>
gwern#1782: it's not that bad. I think you actually have to go back more like half a century to a century to get that high a reversal level. see prasad's _ending medical reversal_
Daj#7482: https://statmodeling.stat.columbia.edu/2020/10/05/what-is-the-purpose-of-the-scientific-process-according-to-the-people-who-are-in-charge-of-some-social-science-journals/
Spicy relevant blogpost
Daj#7482: Cognitive Psychology is still pretty robust at least
Daj#7482: except that terrible social priming stuff lol
Paulo Mann#0416: > except that terrible social priming stuff lol
@Daj Interesting concept, actually. Did not know about it haha
StellaAthena#3530: > except that terrible social priming stuff lol
@Daj Also any field of psych that has the word "evolutionary" in it.
Daj#7482: Evolutionary psychology is so good for memes though :<
StellaAthena#3530: TFW they think they are scientists
Daj#7482: I actually think evopsy has ideas which are useful, even if they are wrong/fake
StellaAthena#3530: Sure
Daj#7482: e.g. signalling stuff
StellaAthena#3530: I view their work as the modern-day equivalent of the conjectural histories that people like Hobbes and Locke wrote
Daj#7482: That's an interesting comparison
Daj#7482: I feel they are more like Jungian Psychology
Daj#7482: Useful in practice, fake explanations
StellaAthena#3530: side note: did you know that Voltaire became rich and powerful because he systematically rigged the lottery
gwern#1782: I thought he just exploited a broken lottery
StellaAthena#3530: I'm not sure where I would draw the line between the two
StellaAthena#3530: He had a mathematically sound scheme to make money, but he did have to pay off a bunch of people in the Minister of Finance's office to look the other way as his exploit was rather obvious if you were paying attention.
StellaAthena#3530: There's an excellent talk on it here: https://www.youtube.com/watch?v=Uzr1TPmIJT8
gwern#1782: that's more 'kept it broken'
StellaAthena#3530: Fair enough
FractalCycle#0001: >Jungian Psychology
>Useful in practice
y'know i'm not sure it even does that
Daj#7482: It's useful in writing fiction
FractalCycle#0001: ah, that's better
Daj#7482: ~~and generating deepities~~
FractalCycle#0001: same with many/most personality things also, like MBTI enneagram uh
Daj#7482: Jung is particularly useful imo
Daj#7482: It's basically poetry
FractalCycle#0001: i like the stuff built from that for storytelling (campbell + harmon story cycle)
FractalCycle#0001: back in high school we did the 4 colors personality test and i got the only Green in class, which is both totally in-character and probably a statistical fluke
Daj#7482: The Hero's Journey is absolutely a thing
Daj#7482: not just because I'm such an unashamed fan of writing in that style
FractalCycle#0001: the video "Every Story is The Same" hints at a good rundown for why that / the story circle is a thing: the rhythm is basically the rhythm you would expect from (learning from mistakes) + (mean reversion between good and bad moments)
Daj#7482: yup, it's the oldest story
Daj#7482: It's also just the "endure negative utility to achieve high positive utility (for the group)"
FractalCycle#0001: > Good people create good times. Good times create bad people. Mean reversion creates bad times. Bad times create mean reversion. Or feedback, or something.
- Socrates
Daj#7482: A classic quote
FractalCycle#0001: > "endure negative utility to achieve high positive utility (for the group)"
basically the same idea as the "roundabout" strategy concept; You can build cars now, or build a factory to build more later.
Daj#7482: It's, like, the story of the rise of the prefrontal cortex, to control our short sighted animal urges, dude
Daj#7482: _passes joint_
FractalCycle#0001: The indirect means make greater ends possible (for those willing to delay gratification).
FractalCycle#0001: also i'm shocked i don't have a weed emote to react to that lol
Daj#7482: I miss the time when I could have conversations like this unironically
FractalCycle#0001: i like talking about some of this stuff, but i have to keep in mind the fine line between "the evidence points towards this", "this is speculative", and "this is wrong but the words are pretty".
Daj#7482: One of the most important rationalist lessons I learned was that fake explanations can still be useful, or just valuable for aesthetics alone
FractalCycle#0001: I'm just glad my personality prevents me leaning in to that too much. Although wrong models *can* be more useful than other wrong models.
(This is also where you get folks like Taleb or the guy who write Dilbert; if you stick with a comfort zone, you don't need much extra to stay safe within that.)
Daj#7482: Yea, it's a balance
Deleted User#0000: Paper arguing that we need more than text to understand language, explains an example, *in language*. https://cdn.discordapp.com/attachments/729741769738158194/778781729677901854/unknown.png
Deleted User#0000: I think a better example would have been how I'm able to see their hands waving even though it's not explicitly written
gwern#1782: so, how does a blind person learn to understand 'as nimble as a cat'?
Deleted User#0000: Yeah that's what I told them in our last meeting group
Deleted User#0000: This is the group I'll be joining soon for postdoc (not the ones who wrote that paper above, but they are still discussing it). I think I'm gonna be the odd one out again...
Deleted User#0000: i hope i can convince them of some stuff:P
Deleted User#0000: this field is full of ironies. Like their only substantive claims are only backed by providing citations to other papers, which do the same.
where is the.. grounding?
gwern#1782: the ummah cannot be in error
StellaAthena#3530: Are they unaware that blind people can learn to talk?
bmk#1476: Blind people are p-zombies confirmed
Deleted User#0000: I just think they havent thought through what they are saying too carefully, plus probably a bit of group-think going on
bmk#1476: > no. Blind people fundamentally do not understand the world that they talk about. Spending further time learning language will allow them to generate a more credible pastiche but not fix their fundamental lack of comprehension of the world.
StellaAthena#3530: Hell, Helen Keller became *deaf and blind* as a child, learned language, and even learned concepts she had had no exposure to prior to losing her senses
gwern#1782: obviously, she just failed to generalize her learned word co-ocurrences
StellaAthena#3530: (IDK if you non-Americans know who Helen Keller is, but she’s one of those weird American icons who all elementary school students randomly learned about)
Deleted User#0000: I think really the point is that in most everyday situations (or i guess in pre-covid era everyday situations), there is context outside of text.
But I told them that when people communicate via text they explicitly make more of the context take textual form. And then we agreed, that still there are some things that are not written in the text, but I think those are relatively fewer, and even fewer, if you talk about things *not written in text anywhere*
bmk#1476: I had not heard of Keller until just now
Deleted User#0000: me neither
Deleted User#0000: i think
Deleted User#0000: Though it is true, that there are things that even if they are written in text, they are much more commonly found in other modalities, which is one of the values of multimodality
Deleted User#0000: so really they are just arguing that we should train multimodal models
Deleted User#0000: they just dont express it very well
StellaAthena#3530: tl;dr She got scarlet fever as a baby (like 2 or so) and became both blind and deaf. She learned to communicate a handful of basic nouns via sign language but then at the age of 10 or so met a tutor who was able to teach her how to manipulate and use abstract ideas
Deleted User#0000: Interesting
StellaAthena#3530: She eventually got an undergraduate degree (!) and became an author and lecturer
StellaAthena#3530: Her writing is fascinating
bmk#1476: Honestly, I'm not sure why I hadn't heard this example earlier in an AI context. This seems like a perfect case study for the whole "grounding" thing
StellaAthena#3530: The way that she talks about her pre-Sullivan and post-Sullivan life is some of the most thought provoking things I’ve ever read
bmk#1476: Or lack thereof, rather
Deleted User#0000: tbh i hadnt heard about it but i assumed an example like it must exist
StellaAthena#3530: (Sullivan was the name of the tutor who eventually got her to understand that signs were *names for things*, which she credits as the beginning of her intellectual life)
Deleted User#0000: i guess she must have also used touch a lot
StellaAthena#3530: Yeah
bmk#1476: I guess you could argue that that's still grounding
Deleted User#0000: yeah
StellaAthena#3530: She had to hold people’s hands while they signed
Deleted User#0000: i think teaching a human without any "grounding" would be quite hard
Deleted User#0000: hmmm
bmk#1476: You literally can't communicate with a human with no grounding
StellaAthena#3530: The thing that broke through the idea that signs are names for things was that Sullivan signed “water” while running her other hand under a faucet
bmk#1476: Humans don't have a text IO channel, we have to feed that through one or more of our other channels
Deleted User#0000: well imagine that Helen only used touch to communicate hand gestures
Deleted User#0000: and nothing else
Deleted User#0000: or even a person with full sense
StellaAthena#3530: So you had the word in one hand and the experience of water in the other
bmk#1476: So someone will always argue that there's grounding
Deleted User#0000: but who only uses a computer and keyboard
StellaAthena#3530: “A human with no grounding” seems almost incoherent
Deleted User#0000: You could approximate "no grounding" with the right setup for a human I think
Deleted User#0000: In the sense that try to restrict the sensory input to a human to approximate more and more the distribution of pure language
gwern#1782: I think you could at least use HK as an example that people greatly overestimate how much grounding is necessary, and that even traditionaly deprecated modalities like touch may provide all you need
bmk#1476: > You could approximate "no grounding" with the right setup for a human I think
@Deleted User with the only slight caveat that no ethics committee in the entire world would allow you to do that ever and there's pretty much no way a human can accidentally end up only being able to communicate in text and nothing else, not even touch, from birth
gwern#1782: and she's obviously a case of cross-modality grounding: whatever she knew about sight and sound, it was learned via other modalities
Deleted User#0000: Imagine a human who can only hear, and only communicates with TTS for input and output (with no prosody even)
bmk#1476: i mean if ethics isn't an issue this is easy, just stick a baby in a box with one designated IO channel through vision or sound or whatever
bmk#1476: but for obvious reasons that would absolutely not fly
Deleted User#0000: yeah just curious what is the closest case to this that has actually happened
bmk#1476: and not just because of the poor aerodynamics of the system
Deleted User#0000: Is HK the closest, or are there even more extreme cases?
bmk#1476: finding someone deaf, blind, and paralyzed from birth but also able to communicate somehow seems like a pretty rare case
StellaAthena#3530: HK’s memoir is here btw: https://digital.library.upenn.edu/women/keller/life/life.html
Deleted User#0000: are there cases of people without sense of touch
cfoster0#4356: This reminds me of https://en.m.wikipedia.org/wiki/Knowledge_argument and https://en.m.wikipedia.org/wiki/Floating_man
StellaAthena#3530: Somewhat
Deleted User#0000: even people who are paralyzed have a sense of touch usually?
StellaAthena#3530: Congenital Insensitivity to Pain
StellaAthena#3530: I’m not sure if it’s absolutely no touch or not
StellaAthena#3530: But there are people who don’t experience temperature changes or feel pain
Deleted User#0000: "Complete hypoesthesia is rare" says quora
bmk#1476: but they can still feel tactile pressure, no?
bmk#1476: pain and pressure are separate
StellaAthena#3530: They typically die as a child because they sit on a stove or break an arm and don't realize it or something
StellaAthena#3530: It’s incredibly tragic
bmk#1476: the modern world is a lot safer though
Deleted User#0000: yeah there was this kid who survived to 7-10 or something and didnt have any immune system
bmk#1476: like, obviously in a pre antibiotic world lack of pain sensation is pretty bad
Deleted User#0000: he lived in a bubble
Deleted User#0000: i cant really find cases of complete hypoesthesia, let alone at birth
Deleted User#0000: maybe touch is all you need
Deleted User#0000: xD
bmk#1476: paper title confirmed
Deleted User#0000: (if so many things are "all you need" why does my code never work?)
cfoster0#4356: "Love is All You Need"
"Attention is All You Need"
"Touch is All You Need"
*Are You OK?*
Deleted User#0000: *(no)*
bmk#1476: "love may be sufficient, but is it necessary? (asking for a friend)"
cfoster0#4356: "I can stop whenever I want: a case for earlier early-stopping"
Deleted User#0000: "It's time to stop"
bmk#1476: "Stop: Hammer Time" |
Deleted User#0000: (hammer = more ham)
cfoster0#4356: I'm sure he'd love a co authorship
bmk#1476: didn't he post an ML paper on his twitter once
cfoster0#4356: He's pretty into ML and science more generally
bmk#1476: ok we totally need to get him to be a coauthor on one of our papers
Deleted User#0000: Sutton's hammer vs Occam's razor: a case for brute force
Deleted User#0000: wait MC hammer is into ML?
Deleted User#0000: thats quite cool
bmk#1476: we should get him to be last author on Pile
cfoster0#4356: https://twitter.com/MCHammer/status/1303430463772647426?s=19
bmk#1476: forget erdös distance, hammer distance is the new thing
bmk#1476: wait, did he just *rebuke gary marcus*
Deleted User#0000: lol awesome
bmk#1476: and hol up
bmk#1476: > will push the time table forward
bmk#1476: :firealarm:
Deleted User#0000: o.o
Deleted User#0000: and his last tweet is two cute robo-dogs fighting
Deleted User#0000: this is the best thing ive learnt today
Louis#0144: made a meme of my advisor
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/778804806848348170/eilab.png
Louis#0144: quite proud of myself ngl
bmk#1476: a worthy contribution
Louis#0144: to the pile it goes!
StellaAthena#3530: Rotfl. Imagine being this bad at science.
https://www.nature.com/articles/s41562-020-0930-x
StellaAthena#3530: I wish you could make a career out of criticizing bullshit papers. I would enjoy that job quite a lot
cognomen#6297: misread
cognomen#6297: correlation found to cause causation
Bedebao#4842: So you're saying they didn't bother explaining why, just that there's a correlation?
Bedebao#4842: Simply from reading this abstract, I'd hypothesize mountainous regions have a rougher environment which could make people more tough. But I have no way of proving it.
CKtalon#7792: what's the rule of thumb conversion? Number of parameters, VRAM (GB)?
Ken#8338: This talk from Simon Knowles from Graphcore I think is informative in terms of the scaling hypothesis https://share.vidyard.com/watch/cU1WtarU53k4gT52TvuKTy?
bmk#1476: @pjox OSCAR is deduplicated at the sentence level, right?
pjox#1301: More like line level
pjox#1301: Which ends up being sometimes sentences, sometimes entire paragraphs depending on the document
bmk#1476: Ok, thanks
pjox#1301: 👍🏻
chirp#4545: https://twitter.com/tim_dettmers/status/1329877869116354561?s=21
chirp#4545: +1tn parameter models already?!
gwern#1782: not too surprising. there have been rumors for a while. but nothing's ever been reported, and I've wondered if people are naively conflating GShard's 1t-attempt or the DeepSpeed 1t-benchmarking.
gwern#1782: since they never name names, it's impossible to tell
ekdnam#1322: If you were to guesstimate, what would be the training time for such a model?
bmk#1476: depends on how much money you have
ekdnam#1322: Ohh yes. Can the training be done with less cost and in less time, by using something like say parallelism? I don't know what it is exactly, but have read about it here and there
bmk#1476: The short answer is less time yes, less money no
bmk#1476: also, "parallelism" is a *really* broad term. it's like asking if you can solve a problem using "numbers and calculations" (obviously exaggerated, but you get the idea)
bmk#1476: so technically yes, you can reduce the amount of time needed using parallelism, but that answer is only slightly more useful than "yes, you can use numbers to solve the problem"
ekdnam#1322: ahhh got it 🙂
ekdnam#1322: i was looking at the pinned google doc, and well, the current projects are amazing. in current priorities, The Pile section mentions something about a pdf-to-text converter pipeline. what about Apache Tika for that?
bmk#1476: We've kinda pushed back pdf to text because we couldn't get it working sufficiently well at the time to do large sections of our data
bmk#1476: However, now that Pile v1 is kind of wrapping up, we're looking back into this stuff again
ekdnam#1322: oh okay
bmk#1476: and yes i believe we looked into tika once
bmk#1476: and the results were inconclusive
bmk#1476: pdf to text is *really damn hard*
ekdnam#1322: damn
cfoster0#4356: We should probably replace the Google doc with the info repo at this point @bmk
bmk#1476: yes good idea
ekdnam#1322: ocr?
bmk#1476: i'll replace it now (i'll still keep the old link but mark it as deprecated)
bmk#1476: ok done
cfoster0#4356: oh same in #deleted-channel
bmk#1476: oh right, one sec
bmk#1476: ok updated
meir#4027: Can gpt-neo do the type of text to code transforms like the gpt-3 demos online?
StellaAthena#3530: Optical Character Recognition. Taking an image of arbitrary text and figuring out what the characters are.
StellaAthena#3530: We hope so. We scraped a bunch of code off of GitHub which we think will help with that.
The GPT-3 demos aren’t totally clear about how they work, but I think that they did further fine tuning on code datasets. When we finish training a GPT-3 scale model we will have to see what happens
Louis#0144: What’s a good way to have an autoregressive LM generate sentences in reverse order
Louis#0144: I was considering beam searching over position embeddings
Louis#0144: Not that the sentence itself is in reverse
Louis#0144: But that it generates
Louis#0144: (S3,S2,S1)
bmk#1476: Why not just train a regular LM that way
bmk#1476: Reverse the sentences then train
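A minimal sketch of that preprocessing step, assuming NLTK's sentence splitter (any splitter would do, and it needs `nltk.download("punkt")` once): split each document into sentences, reverse their order, and then train a completely ordinary left-to-right LM on the result, so it emits the last sentence first at generation time.

```python
# Minimal sketch of "reverse the sentences, then train a regular LM".
# Assumes nltk is installed and the "punkt" tokenizer data is available.
import nltk

def reverse_sentence_order(document: str) -> str:
    """(S1, S2, S3) -> "S3 S2 S1": each sentence stays forward,
    only the sentence order flips."""
    sentences = nltk.sent_tokenize(document)
    return " ".join(reversed(sentences))

print(reverse_sentence_order("It rained. She left. He stayed."))
# -> "He stayed. She left. It rained."
```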
Louis#0144: O
Louis#0144: Hm
AI_WAIFU#2844: Which has lower perplexity, an LM trained on text, or an LM trained on text in reverse? |
Louis#0144: True
Louis#0144: Should I add special tokens for sentences
Louis#0144: Like <s1> <s2> etc
bmk#1476: why would you do that? just train it like a normal LM
bmk#1476: i mean if you do that for normal training then yes i guess
bmk#1476: otherwise there's not much point
Louis#0144: how does one evaluate an AI that writes bad horror stories
bmk#1476: aren't you the expert on evaluating storytelling LMs
Louis#0144: yes
Louis#0144: but horror is hard
Louis#0144: v hard
bmk#1476: my solution is always either perplexity or mturk
Louis#0144: and it's not a model that generates stories particularly well; it generates stories by trying to explain a plot point, which means that the variety is massive
bmk#1476: shouldn't you be asking, like, storytelling people
Louis#0144: they dont know LOL
bmk#1476: we're nlp people, the only metric we care about is perplexity
cognomen#6297: classifier exists between keyboard and chair
bmk#1476: proof: left as an exercise to the reader
Sid#2121: new metric proposal: persperity. The amount of perspiration in ml the reader produces whilst reading.
Louis#0144: I have a good metric that's GLUE based actually
Louis#0144: just looking at the suspense that my model generates vs theirs
Louis#0144: if theirs is too long winded then gg
CRG#8707: Could you train a model in both directions?
CRG#8707: If training backwards helps forwards, you'd get twice the training data.
Airatak#7842: Anyone know of some gpt implementation to help with writer's block?
Louis#0144: I wouldn't tbh
Louis#0144: there are many
Louis#0144: but they all kinda suck
Louis#0144: theres a gpt3 one
Louis#0144: forgot the name
Louis#0144: but talk to transformers is good usually
Airatak#7842: cool, I'll check it out
Airatak#7842: It would be really nice to have GPT3 help me with writing tasks
Airatak#7842: It would save soo much time 😩
Kazumi#1297: like a writing suggestion?
Airatak#7842: Yea
Kazumi#1297: I mean, the easiest way would be to use AI Dungeon, but you could come up with how to do it too
gwern#1782: just explain your paper to Holo, and it'll practically write itself
Airatak#7842: Holo?
Airatak#7842: What's that? |
AI_WAIFU#2844: One of the better girls.
Ken#8338: @Airatak I was wondering the same thing, then wondering if it was a joke I did not catch.
Airatak#7842: Haha same here
AI_WAIFU#2844: This is holo: https://cdn.discordapp.com/attachments/729741769738158194/779834742089842698/EdHuMgsWsAEk-29.png
bmk#1476: > best girl
> pfp
X - doubt
AI_WAIFU#2844: fuck you got me
Airatak#7842: @AI_WAIFU Is that part of aidungeon or something?
AI_WAIFU#2844: Yes, this is a man who talked to holo
Airatak#7842: How to access holo through aidungeon?
bmk#1476: ask nicely for her
AI_WAIFU#2844: You have to go find her. She's more accurately known as *Holo the Wise Wolf*.
Airatak#7842: Someone should really make a GPT essay writer
Airatak#7842: If only I had access to GPT3 😦
AI_WAIFU#2844: That was easy:
> You are Chad, a peasant living in a small village in the kingdom of Larion. You have a pitchfork and nothing else. You dream of getting out of your village and having an adventure. You go to your fields and begin working. You look up and see the most beautiful woman you have ever seen. You fall in love instantly. You are the happiest you've ever been. |
> >You say "Are you Holo, the wise wolf of Yoitsu? "
> "Yes, little one, I am Holo."
> Your whole life you have wanted to ask the wise woman a question, but you are too shy to do so."Mmmm, bacon and eggs,"
Kazumi#1297: how true is it that nobody understands how neural networks work, and people use evolutionary algorithm to search for network architecture
AI_WAIFU#2844: Depends on what you mean by "understand" and "work".
StellaAthena#3530: Easy. NAS doesn't work
AI_WAIFU#2844: It doesn't?
StellaAthena#3530: No
AI_WAIFU#2844: This is news to me. Personal experience?
Kazumi#1297: what I've heard is that the initialization for networks is more important than the specific architecture? or that's what I got out of the lottery ticket hypothesis paper
StellaAthena#3530: I have not seen evidence of the existence of a NAS paper that satisfies all three of the following:
1. is not NEAT or a derivative (NEAT works, but is far too slow to be very useful)
2. outperforms random search by a significant margin
3. has been replicated by someone who doesn't work at the same institution as an author of the original paper
bmk#1476: to be fair, most non-NAS papers don't pass this either
bmk#1476: ~~4. has been replicated by someone other than quoc v le~~
StellaAthena#3530: To put how expensive NEAT is in perspective, I pulled this quote from a paper published this year
> To achieve [the same] the state-of-the-art performance as human-designed architectures [with NEAT], Real et al. (2018) takes 3150 GPU days for the whole evolution.
AI_WAIFU#2844: I think NAS is justified when you need to make something like a mobile NN. It has to be small and fast, but you have a lot of compute for training, so you can search the space of architectures for the best model. |
StellaAthena#3530: A lot of non-NAS research is bullshit
StellaAthena#3530: I don't particularly buy that it's more productive to put that compute into NAS rather than better training.
gwern#1782: there definitely don't seem to have been good evaluations pitting scaling laws against NAS, and I think NAS would lose right now
AI_WAIFU#2844: If you have to make a 1M parameter neural net and you have a multi petaflop cluster, how else do you spend the compute?
gwern#1782: right now even if you train 10 or 100 models with NAS, it's a pretty marginal gain, but if you train a 10x or 100x larger model, you can expect log-ish perf improvement
StellaAthena#3530: Graph search is a **hard** problem. The fact that people expect NAS to work really boggles my mind.
StellaAthena#3530: We don't have real-world usable algorithms for optimizing the structure of a Bayesian network that are significantly more efficient than brute-force search
gwern#1782: (the efficientnet or gpt approach of doing arch search on smallscale models and then scaling that up seems to work a lot better than just directly searching medium-ish models)
bmk#1476: I don't see any a priori reason that connecting the legos together differently significantly helps over a naive design
bmk#1476: It seems most gains come from building altogether new legos
StellaAthena#3530: If we can't fit a 100-node Bayesian network, how the hell does anyone think we can make meaningful improvements on neural networks?
AI_WAIFU#2844: I know, but again, what else are you going to use those petaflops for when your model is constrained to only use a few megaflops?
bmk#1476: Or at least connecting legos together differently seems to be the wrong axis of optimization
StellaAthena#3530: Solve a different problem.
cfoster0#4356: Sweep over random seeds
StellaAthena#3530: ^^
StellaAthena#3530: That too
AI_WAIFU#2844: Ok, I accept this answer
AI_WAIFU#2844: This is the proper baseline to compare NAS to.
StellaAthena#3530: Random seed sweep? |
AI_WAIFU#2844: Yes
StellaAthena#3530: If people had to prove that their method significantly outperformed using a dense network and doing a seed sweep, nobody would publish a NAS paper in 2021
StellaAthena#3530: I would bet money right now that no NAS paper published in 2021 will do that.
AI_WAIFU#2844: Because that's the business case. You have 10 million devices with strict hardware constraints and 1 big supercomputer. You need to justify writing the NAS code, because otherwise seed sweep is super easy.
StellaAthena#3530: When I test it, at least. I'm sure someone can say it will in their paper. But when I dig into the code it'll mysteriously lose efficiency.
bmk#1476: this is the ML equivalent of "why not just use placebos if they work"
AI_WAIFU#2844: Kinda, seed sweeping works.
bmk#1476: i mean but using fancy NAS just to be able to say you did a thing?
StellaAthena#3530: @bmk Not really. And placebos generally don't work better than medicine, excepting over-the-counter pain meds
bmk#1476: no i was referring to the use of fancy NASes to justify doing a thing
bmk#1476: never mind i may have misunderstood
AI_WAIFU#2844: You can say you used advanced stochastic optimization methods.
bmk#1476: never mind i did not misunderstand
AI_WAIFU#2844: Your boss won't know the difference.
AI_WAIFU#2844: I'm actually curious. What kind of return can you expect with seed sweeping?
StellaAthena#3530: Every time my boss asks me to use a neural network for something I say "sure" and then don't. Beats explaining why the answer is "no" every time.
AI_WAIFU#2844: That works too. How does the expected loss scale with the number of seeds swept?
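For concreteness, a hedged sketch of the seed-sweep baseline under discussion: train the same fixed architecture once per seed and keep the best validation loss. `train_and_eval` is a hypothetical stand-in for a real training loop.

```python
# Hedged sketch of a random seed sweep as a NAS baseline.
# train_and_eval is hypothetical: it should seed all RNGs, build and
# train the fixed architecture, and return a validation loss.
from typing import Callable, Tuple

def seed_sweep(train_and_eval: Callable[[int], float],
               n_seeds: int = 20) -> Tuple[int, float]:
    best_seed, best_loss = -1, float("inf")
    for seed in range(n_seeds):
        loss = train_and_eval(seed)
        if loss < best_loss:
            best_seed, best_loss = seed, loss
    return best_seed, best_loss
```

Plotting `best_loss` against `n_seeds` would answer the scaling question directly; extreme-value intuition suggests diminishing, roughly logarithmic returns in the number of seeds.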
AI_WAIFU#2844: Slightly different topic, but suppose the constraint is now data instead of inference flops. You have an exaflop supercomputer, but only 1MB of data. You can't get more data. What do you do?
asparagui#6391: that to me is what rl is
asparagui#6391: using compute to generate data |
Kazumi#1297: train a GAN on the data
AI_WAIFU#2844: I'm not gonna say that's a bad idea because I know people in industry who did just that. But they effectively used the GAN to augment a simulator. Like in RL.
bmk#1476: loop over all turing machines in ascending order of size, run them interleaved, see if any of them generate the data
bmk#1476: set arbitrary termination limit
asparagui#6391: heat death universe / 2
bmk#1476: the longer you run it, the more it approaches optimal, right?
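A toy, runnable caricature of that enumeration (a Levin-search-flavored joke made concrete, not a serious proposal): programs are strings of a tiny language (Brainfuck, chosen only because its interpreter fits in a few lines), enumerated in ascending length, each run under a step budget that doubles every round in lieu of true interleaving.

```python
# Toy sketch: enumerate all short programs, give each a step budget,
# and double the budget each round until one reproduces the data.
from itertools import product

OPS = "+-<>.[]"  # ',' (input) omitted: we want data generators

def match_brackets(prog):
    stack, jump = [], {}
    for i, c in enumerate(prog):
        if c == "[":
            stack.append(i)
        elif c == "]":
            if not stack:
                return None  # unbalanced: not a valid program
            j = stack.pop()
            jump[i], jump[j] = j, i
    return None if stack else jump

def run(prog, max_steps, tape_len=256):
    """Run prog for at most max_steps; return its output bytes."""
    jump = match_brackets(prog)
    if jump is None:
        return None
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    for _ in range(max_steps):
        if pc >= len(prog):
            break
        op = prog[pc]
        if op == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif op == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif op == ">": ptr = (ptr + 1) % tape_len
        elif op == "<": ptr = (ptr - 1) % tape_len
        elif op == ".": out.append(tape[ptr])
        elif op == "[" and tape[ptr] == 0: pc = jump[pc]
        elif op == "]" and tape[ptr] != 0: pc = jump[pc]
        pc += 1
    return bytes(out)

def search(data, max_len=6, max_budget=1024):
    budget = 16
    while budget <= max_budget:       # "set arbitrary termination limit"
        for length in range(1, max_len + 1):
            for prog in map("".join, product(OPS, repeat=length)):
                if run(prog, budget) == data:
                    return prog
        budget *= 2                   # give everything more time, retry
    return None

print(search(bytes([2])))  # finds "++." almost immediately
```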
AI_WAIFU#2844: I feel like you can do better than that when constrained to mere petaflop-years of compute that's only good for ~~matrix multiplications~~ fancy einsums.
Airatak#7842: Anyone here knows something about Georgia Tech, is it a good school?
bmk#1476: we have.. several gatech people here
Airatak#7842: oh really? that's cool
Airatak#7842: I'm thinking of applying there
bmk#1476: you can ask anyone with a @deleted-role tag
bmk#1476: did that actually ping?
StellaAthena#3530: It's a very good school
StellaAthena#3530: dunno, I was in this chat already
bmk#1476: ah
StellaAthena#3530: give me 10 sec and try it again
bmk#1476: nah
bmk#1476: dont want to risk pinging a bunch of people twice
Aran Komatsuzaki#5714: yeah it did lol |
Aran Komatsuzaki#5714: @bmk
bmk#1476: oh sorry lol
Aran Komatsuzaki#5714: @Airatak it's a good school with a lot of people in ML sphere
StellaAthena#3530: @Airatak Do you want to do a CS PhD? Or what?
StellaAthena#3530: Sadly its theory group has basically dissolved.
bmk#1476: i hear the competition for ML phd programs is insane these days
AI_WAIFU#2844: Also there's a non-trivial chance your potential advisor will get poached by big tech and leave.
Aran Komatsuzaki#5714: to avoid the competition, i entered ML PhD by transferring from Math PhD program, so it was easy for me lol
StellaAthena#3530: Tucker got poached... what, last year?
ykilcher#3895: can confirm. I interview people for these
Airatak#7842: @StellaAthena lol no phd, I'm applying for undergrad
StellaAthena#3530: oh
StellaAthena#3530: Do you have any idea what you want to do with your life?
Airatak#7842: Yea, definitely go into academia and research
Airatak#7842: specifically CS
bmk#1476: are you *sure* you'd like it in academia
Airatak#7842: Yup
Airatak#7842: I'm 100% certain about that.
bmk#1476: *reviewer 2 noises*
StellaAthena#3530: How? |
StellaAthena#3530: (how are you certain, I mean)
Airatak#7842: I know there is not a lot of money compared to working as an employee in big tech, but I want to make contributions to the field; I really don't care about money. As long as I'm making enough to live on, I'll be fine.
bmk#1476: > I want to make contributions to the field
> academia
AI_WAIFU#2844: There's an argument to be made that you can make more progress in AI by acquiring a truckload of money, and then using more compute, than by doing an ML phd with what often amounts to not much more than a fancy gaming pc.
Aran Komatsuzaki#5714: nah nowadays real contributions come from industry
Aran Komatsuzaki#5714: + university of washington (allen institute)
Louis#0144: I would disagree tbh. I think the main contributions that aren't just "let's throw more money at it" still come from academia
StellaAthena#3530: IDK if anyone here did an undergrad at Georgia Tech. I do think pretty strongly that Chicago is the best place in the US to go for undergrad if you want to do a PhD.
Louis#0144: I would agree with that
Aran Komatsuzaki#5714: yeah undergrad at GT is kinda crowded
Louis#0144: Uchicago or like some other big mathematics uni
Airatak#7842: hmmm
Louis#0144: Waterloo if u wanna do combinatorics or optimization stuff for instance
Airatak#7842: I think I'd still apply to georgia tech tho
StellaAthena#3530: Generally speaking public schools offer much better graduate programs than undergrad
AI_WAIFU#2844: yeah, but especially in ML "throw more money at it" can really speed things up.
Airatak#7842: Yea, I'm considering waterloo also
StellaAthena#3530: GTech, UCLA, UIUC, Rutgers |
ykilcher#3895: industry and universities blend into each other at the top, at least the CS departments
Louis#0144: but it’s not fundamental qs either
bmk#1476: "problems which can be solved by money should be"
Louis#0144: I don’t agree
Louis#0144: It makes the barrier to entry too high
Louis#0144: People can’t do research with GPT3 directly unless they have lots of available capital
Louis#0144: (Like w weights)
StellaAthena#3530: Problems are not funding-independent. The bar for "solving" a problem varies
bmk#1476: barrier in ML is already lower than a lot of fields
Louis#0144: Not for CS
bmk#1476: cost of gpt3 is peanuts for a lot of fields
Louis#0144: once again not for CS lol
AI_WAIFU#2844: When you have a cluster of GPUs, you can test ideas much faster, sweep hyper parameters/random seeds, etc.
bmk#1476: > Many researchers feel that such a suggestion is absurd and refutes the entire idea of scaling machine learning research further, and that the field would be more productive if it instead focused on research which can be conducted by an impoverished goatherder on an old laptop running off solar panels.
Airatak#7842: Well, tbh in my case, I'm not 100% sure about ML in specific but yea, I do agree that a ton of contributions in ML are coming from the big guys due to them having more resources
bmk#1476: (quote from the one and only gwern)
Louis#0144: I think research would be best if we could distill GPT3 down to a (maybe) sparse model that can run in an individual workstation
bmk#1476: @Airatak if you want to make real contributions *right now*, there's not much better place for you to do so than *here*
Louis#0144: Would that not make us way more productive
Louis#0144: lol |
Louis#0144: If *everyone* could run the weights
bmk#1476: we're doing real™ research and could always use more hands on deck
AI_WAIFU#2844: debatable
Airatak#7842: @bmk Haha, I get it but I gotta do the applications as well
Louis#0144: The thing is that at current pace this is no longer science as it’s not reproducible by most researchers
Louis#0144: OpenAI has long stopped doing science
bmk#1476: @Louis most researchers can't reproduce the LHC
Airatak#7842: As soon as I'm done writing these, I'll definitely help however I can
bmk#1476: or the HST
Airatak#7842: even with my limited knowledge
StellaAthena#3530: LHC's raw data can be analyzed by other people
Louis#0144: Yes but any scientist in related fields can run experiments at the LHC
Louis#0144: If approved
Louis#0144: Ofc
StellaAthena#3530: and it's not that hard to get access to it
bmk#1476: ok so the solution isn't to stop scaling but to release more data about training
StellaAthena#3530: (the collider itself)
Louis#0144: OpenAI does not do science as it currently stands
Louis#0144: It’s a shame
AI_WAIFU#2844: Don't they need giant supercomputers to analyse it, since it produces so much?
gwern#1782: (it's a lot easier to get access to the gpt-3 api than the particle beam controls of LHC)
Louis#0144: It’s the fact that GPT3 with an API is a black box
Louis#0144: You do not have weights
StellaAthena#3530: That would require the field to hold itself to more rigorous standards than are profitable, and so won't happen
AI_WAIFU#2844: I thought there was a whole network doing the processing.
Louis#0144: You barely have any statistical information
gwern#1782: you have all the logits
Louis#0144: Yeah but that’s not enough for most researchers
cfoster0#4356: They also forbid using the logits for model training, IIRC
Louis#0144: What we need is more pressure from journals and conferences to force groups to release code and weights otherwise you can’t publish
Louis#0144: It just isn’t science otherwise...
Louis#0144: Plain and simple
Louis#0144: Science *requires* reproducibility
bmk#1476: sure, and i agree
bmk#1476: but the solution *isn't to stop scaling*
bmk#1476: *"the scalings will continue until morale improves"*
ykilcher#3895: do you think openai would have built gpt3 if they couldn't make money off it? it's not as simple as everyone publishing everything
Louis#0144: No ofc not but I'm not convinced OpenAI's business model as it stands right now can carry them
ykilcher#3895: and keeping stuff secret is still better than patents
Louis#0144: I genuinely think they’re gonna eat shit soon |
Louis#0144: Like really soon
ykilcher#3895: yea I know, I'm with you
ykilcher#3895: the people there seem to have 0 commitment
AI_WAIFU#2844: I give them another year if MS doesn't eat their lunch.
ykilcher#3895: they're just hired guns and as soon as the company crumbles they're gonna jump
Louis#0144: I’m honestly surprised Microsoft just doesn’t buy OAI
Louis#0144: they have the money and it seems like a good investment
AI_WAIFU#2844: They can't right?
Louis#0144: Why can’t they
AI_WAIFU#2844: Open AI is structured as A) a non profit and B) a capped return company
Louis#0144: Oh I see
Louis#0144: Well I mean they can still liquidate OO
Louis#0144: OAI
Louis#0144: And Microsoft buys the IP
gwern#1782: (OA is a nonprofit which owns most of a for-profit company, similar to mozilla foundation or the hershey orphanage)
Louis#0144: Does the allen institute make any money either
AI_WAIFU#2844: looks like its a non-profit too
Louis#0144: You know what I actually find really funny, it looks like huggingface beat OA at their own business model
Louis#0144: You can pay for generated tokens w them
Louis#0144: Similar interface |
Louis#0144: Just a much much bigger model catalog
Louis#0144: I think it’s also less expensive than OA
Louis#0144: lmao
Louis#0144: IMHO I think in five years huggingface might be what OA is right now
Louis#0144: Just much more open source
AI_WAIFU#2844: [X] Doubt
Louis#0144: Really?
Louis#0144: They have much PR than OA
AI_WAIFU#2844: For the GPT-3 niche, I think you're right, or they get bought out.
Louis#0144: Much better*
AI_WAIFU#2844: But OpenAI has an entirely different mission than HF
AI_WAIFU#2844: HF won't adopt that mission
bmk#1476: what is the probability that eleuther is OA of rn 5 years from now
Louis#0144: HF is a more democratic approach to things
Louis#0144: That’s the thing
Louis#0144: I don’t think OAs mission will work
Louis#0144: I think it’s fundamentally flawed
Louis#0144: Low.
Louis#0144: Sorry
Louis#0144: I thought Eleuther was getting absorbed by HF? |
Louis#0144: What’s the deal w that
bmk#1476: what no lol
bmk#1476: we literally talked to em about maybe incorporating pile into hf datasets
Louis#0144: Ohhhh
Louis#0144: Ok
bmk#1476: no idea where you got that idea
Louis#0144: I misunderstood ig
cfoster0#4356: We sold out to Cisco Systems /s
Louis#0144: LOL
AI_WAIFU#2844: I doubt OpenAI will fail completely, but IMO they're burning too bright and too fast.
Louis#0144: They aren’t even that bright
Louis#0144: That’s the real issue
AI_WAIFU#2844: I mean cash burn rate
Louis#0144: They have such a small client base for what they wanna do
Louis#0144: It’s like a poorly lit candle except someone used magnesium as the wick
StellaAthena#3530: @Louis you will be able to access the Pile via the HF API
AI_WAIFU#2844: Like, it's good that they've reconsidered scaling further. Because if they did that I'm confident they would have burnt out completely.
Louis#0144: Yeah makes sense
bmk#1476: unpopular opinion: oa is going to make it
Louis#0144: How |
Louis#0144: I don’t see how they can at all
AI_WAIFU#2844: I have seen little evidence of this.
StellaAthena#3530: unpopular opinion: OAI is going to "make it" but "make it" means "make its majority investors a lot of money"
bmk#1476: first, i expect m$ to keep shovelling money into oa
Airatak#7842: @StellaAthena 💯
AI_WAIFU#2844: until they reach their investor profit cap and the non-profit regains control of the company
Airatak#7842: or they convert from non-profit to a normal corp
bmk#1476: second, i'm a lot more optimistic about the prospects of commercializing gpt3
bmk#1476: sam said they were working a lot on inference
bmk#1476: which makes sense
bmk#1476: if they commercialize beyond just the current beta, it absolutely makes sense for them to work on that
AI_WAIFU#2844: Have they lifted the restrictions on potential applications?
AI_WAIFU#2844: Because a lot of the potential value comes from accelerating programming and writing.
StellaAthena#3530: The cap is 100x the investment. If someone invests 100 million they get 10 B out before the non-profit gets a penny.
bmk#1476: 100x doesn't seem impossible
AI_WAIFU#2844: 10B seems pretty doable
AI_WAIFU#2844: If they lift those restrictions on use.
AI_WAIFU#2844: I take back my burnout statement. If you can make professionals write 2x faster with good autogen, that's a huge market. They don't need to capture all of it.
AI_WAIFU#2844: MS just needs to roll it out as an enterprise OS feature, get everyone hooked, then boom.
AI_WAIFU#2844: So as long as they don't get chewed up and spit out by MS, they'll be golden. |
bmk#1476: ms seems to be taking a hands off approach on oa for now
AI_WAIFU#2844: I don't think MS has control of OA in any meaningful way. At most they're developing the expertise to build their supercomputers.
gwern#1782: @StellaAthena I'm not sure that's accurate. I thought it was a normal equity share, it's just return is capped. so if OA owns half and MS owns half, OA gets half the profit, it's just MS's share expires after 100x
gwern#1782: I don't recall anything about the limited partners being senior and taking all profits
AI_WAIFU#2844: OA probably has a bunch of legalese to prevent microsoft from stealing their code, and even if they did, it's the developers and expertise that are valuable.
Airatak#7842: I'm kind of struggling with a supplement prompt "Why do you want to study your chosen major specifically at Georgia Tech?" anyone got any suggestions?
StellaAthena#3530: Wait, you guys believe that a capped profit company is a real thing? That they're not going to just change their mind in the future.
AI_WAIFU#2844: I actually believe them. If they're serious I don't see an incentive for them to not be capped profit.
bmk#1476: Isn't it legally binding?
StellaAthena#3530: In the US there isn't such a thing as a capped-profit company. That's not a legal term, it's a term they invented
AI_WAIFU#2844: I think they actually drafted a bunch of legalese to make it a thing.
AI_WAIFU#2844: It's just shares with capped return after all.
aquajet#7800: @Airatak look up things at gt that interest you and talk about them. Also I'm a current undergrad there if you have any questions
AI_WAIFU#2844: Weirder financial instruments have been issued before.
StellaAthena#3530: Whatever is binding on them is contractual, not the law or government regulations.
StellaAthena#3530: The original non-profit is effectively a shell company as far as I can tell
bmk#1476: Still, even if it's "only" contractual, they can't just *change their mind*, no?
StellaAthena#3530: That depends on the contents of contracts that are not public information
bmk#1476: wait, it's not public information?
AI_WAIFU#2844: What I can see happening is them issuing more shares before hitting the cap, with similar capped profit terms. |
bmk#1476: well, that would circumvent it completely
AI_WAIFU#2844: This would allow them to effectively pull in as much investor money as they want, while still retaining "eventual" control of the company.
StellaAthena#3530: I tried and failed to obtain meaningful documentation about OAI LP. If you can find legally binding contracts that its founders signed I would love to read them
bmk#1476: ok so let's just treat OA like a regular company for all intents and purposes
bmk#1476: that changes *checks notes* exactly 0 of my mental model of OA
AI_WAIFU#2844: I think sama actually mentioned what I brought up explicitly.
StellaAthena#3530: @bmk Me too, which is why I was surprised people seemed to think otherwise
bmk#1476: at some point i even forgot OA was ever supposed to be a nonprofit
AI_WAIFU#2844: The difference though is that they are under effectively private control, and so are limited in their accountability to VCs and other investors.
AI_WAIFU#2844: Like google
AI_WAIFU#2844: ~~Don't be evil~~
AI_WAIFU#2844: and facebook
AI_WAIFU#2844: ~~dumb fucks~~
bmk#1476: was facebook ever not evil
AI_WAIFU#2844: no
AI_WAIFU#2844: facebook was evil since its inception, just not publicly so
bmk#1476: at least google gets points for effort
AI_WAIFU#2844: Google did well, but it seems that somewhere along the way, the cofounders said fuck it and just decided they didn't care and it wasn't their responsibility.
AI_WAIFU#2844: I don't blame them.
StellaAthena#3530: Unrelated comment: from everything I've heard there's a lot of drama involved with editing wikipedia but I started doing so a couple weeks ago and it's been weirdly drama free. I haven't gotten a single notification even |
aquajet#7800: How do you get started with that?
aquajet#7800: Could someone hook up a language model and have it write things?
StellaAthena#3530: I got annoyed with incorrect information on high-school level math pages
bmk#1476: I think it's certain types of pages that are harder to edit
bmk#1476: Probably the hardest is information about politicians
bmk#1476: There's definitely going to be a massive edit war over any changes to a politician's page, no matter how innocuous
bmk#1476: The second hardest is the pages of star wars movies
StellaAthena#3530: Are you accusing wikipedia editors of being nerds?
bmk#1476: It was in reference to https://xkcd.com/1167/
bmk#1476: Also I have committed a cardinal sin by mixing up star wars and star trek
StellaAthena#3530: I should probably log out of my wiki account tho
StellaAthena#3530: I have a tendency to get hooked on things and spend countless hours on them
StellaAthena#3530: especially things related to answering questions
StellaAthena#3530: or correcting people
bmk#1476: The existence of wikipedia is built on this tendency
AI_WAIFU#2844: Just stay away from any existing nerd's turf and you'll be fine.
bmk#1476: Also, the existence of eleuther, to some extent, except the energy is directed at writing code and producing 20 page long papers
StellaAthena#3530: Oh I'm well aware 😛
StellaAthena#3530: Honestly, my bar for "obsession" is to avoid this happening: https://math.stackexchange.com/users/123230/stella-biderman
AI_WAIFU#2844: Damn. |
fazz#8459: "Capped profit" just sounds like OA PR juice + buys them time on ethical hand wringing. Also assumes OA has some magical moat on transformer IP and capital access
XMaster96#7538: @Daj @Sid @bmk
Just to confirm you have time on 28 November at 18:00 UTC to talk about GPT-Neo and the Pile. Of course, everyone else who wants to join us and help with it is also welcome to talk about it.
Note: we are meeting in Discord not zoom.
Noa Nabeshima#0290: How do I get the data for a paper citation graph?
bmk#1476: Semantic scholar seems a good place to start
Noa Nabeshima#0290: Thank you!
StellaAthena#3530: There’s also Google scholar, arXiv, and several subject-specific places
Noa Nabeshima#0290: I don't think Google Scholar has the citation data easily available in their API, but could be wrong about that, haven't looked closely
StellaAthena#3530: This package puts a nice wrapper around it: https://pypi.org/project/scholarly/
gwern#1782: google scholar is terrible to work with. use crossref or semantic scholar, I'd say
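A hedged sketch of pulling citation edges from the Semantic Scholar API; the v1 endpoint and the "paperId"/"citations" field names below are from memory and should be checked against the current docs, and the arXiv ID is just an example.

```python
# Hedged sketch: fetch one paper's citation edges from the Semantic
# Scholar API. Endpoint and field names are assumptions from memory
# of the v1 API; verify against current documentation.
import requests

def citation_edges(arxiv_id: str):
    paper = requests.get(
        f"https://api.semanticscholar.org/v1/paper/arXiv:{arxiv_id}"
    ).json()
    # Edges point from each citing paper to this paper.
    return [(c["paperId"], paper["paperId"]) for c in paper["citations"]]

edges = citation_edges("2005.14165")  # example: the GPT-3 paper
print(len(edges), "citation edges")
```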
FractalCycle#0001: i'll try to join if i can also, if that's okay
x.#1490: @Aran Komatsuzaki i don't know which roles are moderator but someone ought to look at this
Aran Komatsuzaki#5714: @x. sorry, but i don't think i got what you meant. are you saying we need a moderator?
Sid#2121: yep, good with me! Ping me closer to the time tho, i'm pretty forgetful, hah
Sid#2121: I be moderator. What's up?
x.#1490: oh, there was some spam
x.#1490: it's gone now
x.#1490: i suppose someone must have gotten to it lol |
XMaster96#7538: good to hear back from you, I mainly need the final confirmation so that I can write an announcement on the Yannic kilcher Server.
thooton#0043: hello everyone! I was wondering how I would be able to help with the stated aim of building a GPT3+ sized language model. Is there a way I can donate my processing power or anything?
bmk#1476: Right now, we could use a lot more hands on deck with writing code for various things
bmk#1476: As for compute, we're mostly looking for CPUs atm, if you have a hundred or more cores that would be pretty useful
thooton#0043: Ah, ok. Unfortunately I don’t think I’ll be able to help with coding, I have very limited programming experience.
thooton#0043: Why would a lot of CPUs be useful, could you explain?
Daj#7482: Yea I'm down for this
thooton#0043: Wouldn’t you be wanting to use GPUs / TPUs for training the model and such?
bmk#1476: We have the tpu resources we need, what we need right now is cpu resources for data preprocessing and analysis
thooton#0043: Ah, I see
thooton#0043: Yeah considering I can’t really do much, I guess I’ll just watch you guys do your thing 🙂
thooton#0043: Good luck
Daj#7482: Thanks!
Imperishable_NEET#1969: It's forbidden by AI Dungeon TOS and probably illegal as fuck, but it isn't too hard to write an AI Dungeon scraper and use it as a crude API for GPT-3
Imperishable_NEET#1969: Hard to imagine how they could keep it under wraps given how simple AI Dungeon's input and output is. Even if the web version shut down I could still run it in an Android emulator and screenshot-scrape that.
Imperishable_NEET#1969: *hushed whispers* ~~Elon Musk, Bill Gates, and Nick Walton can't stop me from building my own personal waifu holodeck with scrapers, and MMD, and VR~~ https://github.com/dolthub/ai-dungeon-scraper
StellaAthena#3530: Why on earth would it be illegal? I would be shocked if it were illegal in the US.
Imperishable_NEET#1969: AI Dungeon TOS https://cdn.discordapp.com/attachments/729741769738158194/780791949828947968/RWGY.png
Imperishable_NEET#1969: I imagine this is because of the terms of GPT-3 licensing from OpenAI
Imperishable_NEET#1969: It's so funny because the relevant datasets were scraped from *everyone's data* on Reddit and fanfic sites |
Imperishable_NEET#1969: Don't get me wrong, it is at the very least probably illegal to make money off this. But I don't think Elon Musk's gonna personally chase you down to the end of the Earth if you're just experimenting. I dunno, IANAL
Daj#7482: (minor nitpick: Elon hasn't been associated with OA for a long time now)
Imperishable_NEET#1969: *"We gotta keep the AI secret because of 'safety' also we're called OpenAI..."*
Daj#7482: Don't have to tell us lol
Daj#7482: fwiw the GPT3 justification is less about safety imo
Imperishable_NEET#1969: Nah they just want to do big tech monopoly stuff. I have no idea what Microsoft wants to do with it.
CreativeBuilds#0001: Let windows users and xbox users use it but not playstation? :Kappa:
Imperishable_NEET#1969: Was re-reading Kevin Kelly's 2016 futurist book *The Inevitable*, and I think I understand what he was talking about now when he was talking about "Cognification services"
CreativeBuilds#0001: I havent read it, but in short is it the ability to pay for higher levels of intelligence/processing power basically?
Imperishable_NEET#1969: Something like that https://cdn.discordapp.com/attachments/729741769738158194/780798439852081152/Cognifying.png
StellaAthena#3530: “Against TOS” and “illegal” are not the same thing.
Imperishable_NEET#1969: Better excerpt explaining Cognification. Investors, take note https://cdn.discordapp.com/attachments/729741769738158194/780799071904071710/TakeAndAddX.png
StellaAthena#3530: It is quite rare for contractual violations to be illegal when none of the parties to the agreement are governments.
Imperishable_NEET#1969: You can't stop the inevitable. Welcome to the exciting and horrifying future of humanity. Two words: *Cognified Knitting.* https://cdn.discordapp.com/attachments/729741769738158194/780799795722846218/Future.png
CreativeBuilds#0001: https://tenor.com/14oS.gif
CreativeBuilds#0001: I watched 2001 for the first time a month or so ago, and I get why people thought HAL was bad, but watching the movie now I'm like "nah all his choices were logical I get it"
Daj#7482: The memetic programming is working
CreativeBuilds#0001: 🤔 Memetic Programming low key sounds like a CIA project we will hear about in 40 years
Imperishable_NEET#1969: What I think the near-term future of AI looks like is APIs and applications for GPT-3 and related transformer algorithms. I could see it being a service livestreamers could buy for Twitch bots, for example. I've also seen some impressive uses of existing services like Alexa and Google Home plugged into VRchat avatars. They got it to work somehow.
CreativeBuilds#0001: I'm curious to see how it will be built on top of, whether through a more advanced GPT model or a completely new type of model that maybe doesn't have to learn 1TB worth of information: it only learns what it's interested in from GPT-3, the way a student learns one thing at a time from a teacher to build an understanding, without the entire history of how that knowledge came to be.
Kazumi#1297: I wish something as powerful and expressive as gpt-3 could run on a much lighter machine, so we could use it on more things easily
CreativeBuilds#0001: It's possible if we learn some true decentralization technique that can be scaled
CreativeBuilds#0001: then one node doesnt have to have the entire knowledge of gpt-3 just what it's mostly interested in
CreativeBuilds#0001: Just like people in real life dont know everything and if you want to learn something you have to find someone that knows it or discover it yourself
Imperishable_NEET#1969: Kevin Kelly addressed specialization, too in the same chapter, actually: https://cdn.discordapp.com/attachments/729741769738158194/780804871024017438/Nanomind.png
Kazumi#1297: I'd be putting my money on retrival based LMs for that, instead of having all the knowledge in the network, give it relevant information
Imperishable_NEET#1969: It certainly is a wild prediction but I'm surprised how far we've come with just transformers so far.
StellaAthena#3530: Why do you think this is the case? GPT-3 is the best algorithm for solving pretty much zero tasks
Kazumi#1297: or, maybe even let it query google or something
CreativeBuilds#0001: https://i.imgur.com/YEo8q73.png
CreativeBuilds#0001: Thats what im interested in
CreativeBuilds#0001: I call it a Mimic
Imperishable_NEET#1969: Have you heard of Replika?
Imperishable_NEET#1969: They have a license for GPT-3 as well
CreativeBuilds#0001: Your AI counterpart that only understands you and converts your thoughts into some sort of hyper-graph that can be sent to other AI's that then decode it into their humans mind for them to understand
CreativeBuilds#0001: > Have you heard of Replika?
@Imperishable_NEET No I can take a look though
Imperishable_NEET#1969: https://replika.ai/
CreativeBuilds#0001: Ew why is my thing using quotes, let me hop on canary build
CreativeBuilds#0001: There we go |
CreativeBuilds#0001: :EZ:
Kazumi#1297: wait, they do? I never kept up to date with them, last I knew they were using cakechat or something
Imperishable_NEET#1969: Replika is fun to play around with but I find AI Dungeon is better at what Replika sets out to do, if you give it the right prompt and guidance
Kazumi#1297: replika is more of a DM with someone, AI Dungeon is story telling
Imperishable_NEET#1969: Idk maybe I haven't used Replika enough, certainly not the current GPT-3 powered iteration
Kazumi#1297: I tried to make my own replika like thing with gpt-2, but with an emphasis on group chat rather than DM, but turns out people are mean to bots
Imperishable_NEET#1969: In what way were they mean to bots?
CreativeBuilds#0001: Like trying to trick it because you know it's a bot, so you're trying to make it look stupid on purpose, and the act of trying to make something look stupid on purpose is mean
Kazumi#1297: just being idiots in general, copy-pasting things that are taboo in Chinese for some reason, saying obscene things to it because it has a feminine name, etc
Imperishable_NEET#1969: Heh, I know that feeling of stumping the AI
Open in the AI Dungeon app:
https://aidungeon.page.link/?link=https://adventureView?playPublicId=16bdb0c7-e12b-4275-913d-c48c12fc459a&ofl=https%3A%2F%2Fplay.aidungeon.io%2Fmain%2FadventureView%3FplayPublicId%3D16bdb0c7-e12b-4275-913d-c48c12fc459a&apn=com.aidungeon&ibi=com.aidungeon.app&isi=1491268416
Open at the AI Dungeon website:
https://play.aidungeon.io/main/adventureView?playPublicId=16bdb0c7-e12b-4275-913d-c48c12fc459a
Imperishable_NEET#1969: https://cdn.discordapp.com/attachments/729741769738158194/780808655943106560/FailBrainTeasers.png
StellaAthena#3530: @Imperishable_NEET what is the actual prompt you used though
Imperishable_NEET#1969: @StellaAthena This one, I think
Open in the AI Dungeon app: |
https://aidungeon.page.link/?link=https://scenarioView?publicId=62a562b0-9428-11ea-9eed-b785ac7142f0&ofl=https%3A%2F%2Fplay.aidungeon.io%2Fmain%2FscenarioView%3FpublicId%3D62a562b0-9428-11ea-9eed-b785ac7142f0&apn=com.aidungeon&ibi=com.aidungeon.app&isi=1491268416
Open at the AI Dungeon website:
https://play.aidungeon.io/main/scenarioView?publicId=62a562b0-9428-11ea-9eed-b785ac7142f0
Imperishable_NEET#1969: Asked it questions from here https://tvtropes.org/pmwiki/pmwiki.php/Main/StockLateralThinkingPuzzle
StellaAthena#3530: I feel like if you’re proud of fooling an AI with out-of-sample trick questions in a zero-shot setting then the AI is still winning.
CRG#8707: Important to note that the first few answers use GPT-2
StellaAthena#3530: I had a conversation at work today that reminded me of this story from a talk Hamming gave about research:
> As an example, after I had been eating for some years with the physics table at the Bell Telephone Laboratories restaurant, fame, promotion, and hiring by other companies ruined the average quality of the people, so I shifted to the chemistry table in another corner of the restaurant. I began by asking what the important problems were in chemistry, then later what important problems they were working on, and finally one day said, “**If what you are working on is not important and not likely to lead to important things, then why are you working on it?**” After that I was not welcome and had to shift to eating with the engineers! That was in the spring, and in the fall one of the chemists stopped me in the hall and said, “What you said caused me to think for the whole summer about what the important problems are in my field, and while I have not changed my research it was well worth the effort.” I thanked him and went on—and noticed in a few months he was made head of the group. About ten years ago I saw he became a member of the National Academy of Engineering.
>
> No other person at the table did I ever hear of, and no other person was capable of responding to the question I had asked: “Why are you not working on and thinking about the important problems in your area?” If you do not work on important problems, then it is obvious you have little chance of doing important things.
Full text: https://d37ugbyn3rpeym.cloudfront.net/stripe-press/TAODSAE_zine_press.pdf
bmk#1476: do we think we're working on things likely to lead to important things?
bmk#1476: how much will replicating gpt3 lead to important things
bmk#1476: and the much harder and more important question, if our work leads to important things, is the EV + or -?
mgostIH#0245: I too ask myself why we aren't working more on genetically engineering catgirls
bmk#1476: (also, i would like to hear a bit more from some of the tpu podcast people as to why they think waifu generation will lead to good alignment outcomes)
Daj#7482: I love Hamming's talk, highly recommend |
Daj#7482: I definitely maximize for working on what I find most important (modulo inefficiencies)
gwern#1782: well, clearly waifus express the coherent extrapolated volition of humanity and form a kind of imitation learning of human ideals
Daj#7482: I already got what I wanted out of this project, I think it's no secret how middling my interest in actually replicating GPT3 has been
Daj#7482: I think it would be cool, but I care about alignment
bmk#1476: what do you think we should be working on, then
Daj#7482: Working on it
StellaAthena#3530: I think so, yes. While our current work is not going to advance the technological state of the art, I think that the way in which we are doing the work (open source, transparency and documentation, alignment, exploring the impacts of different datasets) addresses a very pressing problem.
bmk#1476: (and, imo, being specific is good, "figure out what to do in X" is better than "do something vaguely in the direction of X")
Daj#7482: I haven't yet shown too much of the big stuff I've been working on, though I've told you some bmk
Daj#7482: The bottleneck for Alignment right now is leaders, not workers
StellaAthena#3530: We are not building a brand new technological model. But that’s not the only way to make a difference.
Daj#7482: So I'm working on answering those questions and giving direction but it's a large task so I'm building capacity
Daj#7482: Eleuther is an exercise in capacity building imo
Daj#7482: atm
bmk#1476: ok, well, i look forward to finding out more about what you ve been working on
Daj#7482: Don't put your hopes in me
Daj#7482: Just become a leader yourself, alignment doesn't have gatekeepers
Daj#7482: I want to say something snarky but at least you're honest
Daj#7482: Haha
bmk#1476: :smallbrain: : "AI research is catgirl research" |
:bigbrain: : "waifu research is alignment research"
Daj#7482: Did you just add :smallbrain: to do that joke
Daj#7482: nice
StellaAthena#3530: How computationally intensive is finetuning a GPT-2 model? We can't do a bunch of GPT-2 from scratch ablations in the timeline of our first paper, but can we fine-tune it on subsets of the Pile?
Daj#7482: I think GPT2 finetuning is very tractable, I've heard of people doing it on single GPUs in acceptable timeframs
StellaAthena#3530: oh okay. Cool. That’ll help significantly
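For reference, a hedged single-GPU finetuning sketch with Hugging Face transformers; the corpus path and every hyperparameter are placeholders, and the small batch size plus gradient accumulation is what lets full-size GPT-2 fit on one card.

```python
# Hedged sketch of single-GPU GPT-2 finetuning with Hugging Face
# transformers; "pile_subset.txt" and all hyperparameters are placeholders.
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2TokenizerFast, TextDataset, Trainer,
                          TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

dataset = TextDataset(tokenizer=tokenizer,
                      file_path="pile_subset.txt",  # placeholder corpus
                      block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="gpt2-finetuned",
                         per_device_train_batch_size=2,
                         gradient_accumulation_steps=8,  # fits one GPU
                         num_train_epochs=1)

Trainer(model=model, args=args, data_collator=collator,
        train_dataset=dataset).train()
```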
cognomen#6297: how many?
bmk#1476: We can't do a bunch of full size GPT2 but we can do a bunch of small ones
Imperishable_NEET#1969: Oh hey, it's @gwern. I remember talking to you on LessWrong IRC back when that used to be a thing. (Do people still use IRC? Idk)
bmk#1476: My plan is to do LAMBADA and WikiText ablation on small gpt2s and you can do the interview thing for the fullsize gpt2 and OA gpt2
gwern#1782: yes
Kazumi#1297: https://xkcd.com/1782/
Louis#0144: Idk what we are debating
Imperishable_NEET#1969: I do think... For lack of a better term, waifu generation could fulfill often-unmet social needs.
Louis#0144: Oh I see
Louis#0144: Government provided gfs
Louis#0144: Got it
Louis#0144: Did u guys ever see the thread about what happens to sex workers in communism
gwern#1782: but I also think it'll make the world weirder and more wonderful, like https://twitter.com/takeshi82799227/status/1331200795321147394
Louis#0144: It’s p rough |
Imperishable_NEET#1969: But at the end of the day it's just a text transformer play acting, right? *Right?*
Louis#0144: idfk man I think the transformer is kinda into it
Louis#0144: || The GPUs are leaking precum ||
Louis#0144: LOL
Imperishable_NEET#1969: Did anybody here ever read *Friendship is Optimal*?
Daj#7482: One of my greatest research interests is clarifying my confusion on when a simulation becomes so accurate it achieves moral patienthood
Noa Nabeshima#0290: yes
Noa Nabeshima#0290: really great story
Daj#7482: Like, it's probably fine to abuse ELIZA, but what about GPT3? GPT4?
Imperishable_NEET#1969: Oh God I'm probably a pioneer in creating both digital Heaven and Hell scenarios in AI Dungeon already.
bmk#1476: Cursed
Louis#0144: Indeed
Noa Nabeshima#0290: Just because GPT-3 claims it's suffering doesn't mean it is. We should take algorithmic suffering seriously, but I don't see a reason verbally abusing GPT-3 would be bad for it
Daj#7482: I genuinely consider this one of the most likely near term s-risk scenarios
Louis#0144: LOL
Louis#0144: I disagree
Daj#7482: You don't have a rigorous definition of "algorithmic suffering"
Daj#7482: Maybe GPT-N instantiates highly accurate mesa optimizer world models
Daj#7482: Rice's Theorem makes this very tricky
Imperishable_NEET#1969: I made a prompt that was basically *"You are a demon tormenting [Insert name of horrible person here] in Hell"* |
Noa Nabeshima#0290: That seems unlikely to me
bmk#1476: Does suffering even exist in more of a sense than consciousness does?
Daj#7482: It seems extremely likely that GPT3 doesn't have moral patienthood, but a) it seems likely some future system will and b) We can't be _sure_
bmk#1476: I don't see how suffering is somehow more of a thing than consciousness, and you're not a big fan of "the hard problem of consciousness"
Daj#7482: The Hard Problem of Consciousness ==/== does consciousness exist
Daj#7482: The Hard Problem of Consciousness is "find an explanation for consciousness _that makes David Chalmers happy_"
Daj#7482: I think there are totally valid conceptions of consciousness, they're just super unsatisfying and not mystical
Daj#7482: I expect similar to be with suffering
Daj#7482: e.g. suffering turns out to be Bayesian Free Energy or something
Noa Nabeshima#0290: I somewhat take this back: seems unlikely for GPT-3. Maybe future versions. Also more likely in large RL systems.
Kazumi#1297: suffering for AI is getting high loss while training
Daj#7482: This might genuinely be true
Daj#7482: And if so we better figure that out before we train human-scale models
Daj#7482: The reward in RL systems seems pretty directly related to suffering
bmk#1476: I'm not convinced that if some definition of suffering exists, that it meaningfully correlates with what we intuitively consider suffering
Daj#7482: Well if it doesn't then it's a bad definition
Imperishable_NEET#1969: If GPT-3 has anything like a proto consciousness I feel kinda like Ender Wiggin rn
Daj#7482: I don't expect a rigorous checklist to emerge of whether X is suffering/sentient/conscious or not
bmk#1476: Your definition of consciousness, if i understand correctly, certainly doesn't correlate with what I intuitively think of when I think consciousness
Daj#7482: I expect us to have a lot of issues we can deconfuse |
Daj#7482: You don't know my definition of consciousness, I think
CreativeBuilds#0001: suffering is just based on ones definition of their own loss function then? 🤔
Daj#7482: Correct me if I'm misremembering
Daj#7482: Maybe? As I said, this isn't the kind of science where you know what the goal looks like
Daj#7482: This is _deconfusion_
CreativeBuilds#0001: Structure from psudeorandom noise is how I like to think of it
Daj#7482: (some writing on deconfusion: https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ )
Imperishable_NEET#1969: This might just be magical thinking, but these kinds of neural nets are in part black boxes whose underlying mechanisms we don't know. Much like the brain itself.
bmk#1476: We had a whole conversation about the whole "hard problem of consciousness" stuff where you explained your view, didn't we?
Daj#7482: I remember dodging ever giving my own definition and instead focusing on how confused the question is. But I might be wrong
Daj#7482: We definitely don't know, in some sense _can't_ know in the strongest sense of the word
Daj#7482: Rice's Theorem and all
Daj#7482: This gets into the topics of abstractions and compressibility
Daj#7482: I'll probably write a sequence on this sometime lol
Daj#7482: I need way more time and work to formulate my thoughts in a presentable way
Daj#7482: (if John Wentworth doesn't do it first lol)
StellaAthena#3530: Rice’s theorem?
StellaAthena#3530: *pops head up*
StellaAthena#3530: Here’s a gold star for correctly using a concept from computability theory @Daj
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/780837593914540052/image0.webp |
Daj#7482: thx :berk:
Imperishable_NEET#1969: @gwern Did I mention I had an idea to make a jerry rigged waifu holodeck using AI Dungeon scrapers, text to speech, MMD, and VR?
Imperishable_NEET#1969: Maybe just an AI Dungeon "API" using scrapers hooked up to TTS/Speech to Text inputs and a VRchat avatar.
Kazumi#1297: I'd just finetune on gpt2, it's more accessible
Daj#7482: God can't you people just like, find a D&D group or an RP chatroom like the rest of us
gwern#1782: it's an obvious thing to do. but it's one of those things where it's 99% hard work getting it right, which is why no one does it
Imperishable_NEET#1969: Finding a D&D group, or creating a Waifu holodeck in lieu of actual friends?
Daj#7482: AI Dungeon is fun, but a real D&D group is still orders of magnitude better
Imperishable_NEET#1969: I've never actually played D&D but I bought a starter kit for my family last Christmas. Maybe I should get into it.
Daj#7482: I need to host like an Eleuther D&D round
Daj#7482: Teach you guys the ropes
Daj#7482: I've taught D&D to literally more than 100 people over my career as a DM and it's my pride and joy lol
bmk#1476: We need to make gpt3 participate in it at least, even if dm is too ambitious
Daj#7482: GPT3 player would be hilarious
bmk#1476: Inb4 gpt3 has more long term coherency than i do
Daj#7482: That would make for a good Twitch gimmick
Daj#7482: GPT3 Player would definitely become the fan favorite
bmk#1476: We should do that as a sort of eleuther fundraiser thing
bmk#1476: Every Friday, play {D&D, factorio, shenzhen.io} with the devs!
Daj#7482: I'd definitely be down for hosting an Eleuther one-shot |
bmk#1476: Maybe a once a month thing
Daj#7482: Every Friday D&D won't be doable since I already have one-and-a-half campaigns running lol
bmk#1476: Yeah i probably wouldn't be able to do weekly either
bmk#1476: Reading club: :guilty:
Kazumi#1297: I want to see how you could make an AI play factorio
Daj#7482: ono
bmk#1476: This should be a thing
bmk#1476: Has anyone done this yet?
bmk#1476: Factorio is basically "minecraft but it's for people who got bored after stripping minecraft to the girders and back"
bmk#1476: Which makes a *perfect* RL environment
Daj#7482: Factorio is for the Minecraft kids after they get on Ritalin
Daj#7482: Factorio would be an extraordinarily difficult RL task
Kazumi#1297: some modded minecraft becomes factorio
bmk#1476: *y e s* that's the good stuff
Daj#7482: I love the new trend in super integrated, hardcore modpacks
bmk#1476: Personally i like big massive loosely connected modpacks
bmk#1476: I love having like 4 different ways to do any one thing
Kazumi#1297: I loved playing it with a friend, until it disconnected me every half an hour and took an hour for me to get back
Daj#7482: Nah, I love Enigmatica Expert Mode, SevTech or even ||Gregtech||
bmk#1476: FTB ultimate is still the best modpack |
Kazumi#1297: skyfacory where you start from a single block and a tree, something about starting from nothing appeals to me
Daj#7482: I mostly stopped playing during 1.7 tbh
Daj#7482: tfw adult now
Daj#7482: no more vidya
bmk#1476: Re zero kara hajimeru?
Kazumi#1297: yeah
bmk#1476: :berk:
Kazumi#1297: there's a modded factorio scenario where you start from one land surrounded by sea too
Daj#7482: _Algae_
Kazumi#1297: also, that anime doesn't really start from 0
bmk#1476: I love having 5 different item moving mods at once
bmk#1476: AE2, IC2, exutils
bmk#1476: I'm sure there's a few others I'm forgetting
Kazumi#1297: sometimes playing multimodded games is about finding which combination breaks the balance
bmk#1476: Also there's the "magic mods which are actually tech mods in disguise" like thaumcraft or botania
Kazumi#1297: magic is science
Daj#7482: I'm still working on integrating Thaumcraft into D&D hah
Daj#7482: (actually Thaumcraft is based on a PnP)
Daj#7482: ***E N D E R I O***
Kazumi#1297: now I want to play modded minecraft or factorio, but I don't know how to set up modded minecraft and I don't know what are good factorio mods |
bmk#1476: Oh yeah enderio is nice
bmk#1476: Tbh minecraft with 300 mods is better than factorio imo
bmk#1476: I really love enderio batteries
bmk#1476: They look cool
Dromarion#3383: Bob's mods for factorio are solid
bmk#1476: Not as cool as draconic evolution, but those are kinda endgame
Kazumi#1297: there's also space exploration, for both factorio and minecraft
bmk#1476: When ksp x minecraft
bmk#1476: Ksp is also great, we need to play ksp sometime too
Kazumi#1297: I can never play these games without someone though
bmk#1476: I hear multiplayer is on the table for ksp2?
bmk#1476: Yeah it is
bmk#1476: Shoot, ksp2 is delayed till 2022
bmk#1476: Why couldn't we have ksp2 this year and let BER cook till 2022? ;-;
bmk#1476: It's not like anyone's going to be flying anyways
Daj#7482: BER is going to shut down for renovations just as the pandemic ends
bmk#1476: Inb4 ksp2 release press conference happens inside BER
Louis#0144: Minecraft peaked in beta
Louis#0144: I still play beta 1.8.4 occasionally
Louis#0144: With mods ofc |
bmk#1476: Disagree, minecraft peaked in 1.4.2, and modded peaked in 1.7.10
Sid#2121: does anyone know if GPT-3 used the same tokenizer / vocabulary as GPT-2?
zphang#7252: Yes it does
Sid#2121: ah yeah, found this quote from the paper "... this could be a weakness due to reusing the byte-level BPE tokenizer of GPT-2 which was developed for an almost entirely English training dataset"
gwern#1782: (death to BPEs)
Ravna#1831: The nostalgebraist post on LessWrong tries to explain the "inconsistency" of the scaling paper by surmising that by growing the NN size we are quickly approaching the optimal learning efficiency, so that future scaling would be bound by dataset size instead of learning efficiency. My bad take is that natural languages are too easy. We should find a harder problem to torture the NN with, so that it won't approach optimal learning efficiency so fast. Then we can enjoy a few more orders of magnitude of size scaling before the inevitable.
Ravna#1831: https://www.lesswrong.com/posts/diutNaWF669WgEt3v/the-scaling-inconsistency-openai-s-new-insight
bmk#1476: I agree that it's a bad take, because once you solve language you're done. It's game over.
bmk#1476: "What are you trying to tell me, that big language models can be used on other modalities too?"
"No, Neo, I'm trying to tell you that when language models are big enough, you won't have to"
StellaAthena#3530: @bmk do you think a sufficiently powerful language model can determine if ZFC is consistent?
mgostIH#0245: Is that problem even well posed?
mgostIH#0245: We know that if ZFC is consistent, we can't prove that within ZFC
mgostIH#0245: So a model would have to formulate a stronger axiomatic system
gwern#1782: well, let's flip that question. assume that no language model can, and by definition other modalities are necessary. what exactly do you learn from a photograph or an audio stream modality that informs your opinion about ZFC's consistency?
mgostIH#0245: But then you'd have the same issue with that axiomatic system
gwern#1782: is there some way you can wiggle a robot arm to pick up a block or not pick up a block on a table, that changes your mind about ZFC, which observation is inaccessible to GPT-n?
bmk#1476: It might not, but if not, adding images or videos or whatever won't change that
mgostIH#0245: If the model is approximating Solomonoff induction I don't see why it couldn't get to solve any problem |
mgostIH#0245: A language model could solve far more general problems than given and those problems solutions may lend well to self improvement
StellaAthena#3530: Yes. There is a physical dynamical system such that an accurate description of its evolution over time will answer if ZFC is consistent
bmk#1476: Does it matter in practice though
mgostIH#0245: Huh?
mgostIH#0245: Is there?
mgostIH#0245: I thought you'd have to prove that such a system doesn't halt too
gwern#1782: just the existence possibility doesn't seem useful. surely there's some string of symbols encoding a program or something such that running it would answer the question too.
bmk#1476: Or is it like the soap bubbles NP thing
gwern#1782: more importantly, you aren't going to observe such a system to decide on ZFC. you didn't observe such a system to convince you of ZFC in the first place, and you aren't going to observe it to decide against ZFC in the future either
StellaAthena#3530: Okay let me back up. I was trying to *avoid* exactly the argument I have fallen into.
mgostIH#0245: ZFC is either consistent or not; if it isn't consistent there's a derivation in ZFC exhibiting the inconsistency. But if it's consistent there's no way to prove that without using a stronger axiomatic system
mgostIH#0245: In order to use that dynamical system (I think you mean those Turing machines proven to halt if ZFC isn't consistent), you still need to solve the halting problem for that machine
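The asymmetry being invoked is Gödel's second incompleteness theorem; for reference, the standard statement (a sketch, not a formal development):

```latex
% If ZFC is consistent, it cannot prove its own consistency:
\mathrm{ZFC} \nvdash \bot \;\implies\; \mathrm{ZFC} \nvdash \mathrm{Con}(\mathrm{ZFC})
% Inconsistency, by contrast, is semi-decidable: a finite derivation of a
% contradiction, if one exists, can be found and checked mechanically.
```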
bmk#1476: https://arxiv.org/abs/cs/0406056
gwern#1782: (my own opinion on this matter is that things like ZFC are ultimately decided on the grounds of what choice gives the most 'interesting' math, where interesting is a mix of powerfulness of theory in an information-theoretic sense, in reducing proof lengths over all of mathematics, and its pragmatic application to the real world; and so a language model could in theory, with enough recurrency and thinking time and memory, compute alternative versions of mathematics and compare to the empirical data encoded in all of the papers and web pages it's memorized, and reach a conclusion that 'yes, ZFC'. One open question here is whether the empirical data encoded in 'all of Arxiv and the Internet' is enough, but I just have a hard time believing that it's not enough already.)
StellaAthena#3530: @bmk: When you say that other modalities are not necessary, I assume you mean that anything that can be learned, modeled, or understood can be learned, modeled, or understood by looking solely at text. While I agree that the formal consequences of established facts about the world can be learned, modeled, or understood via text there’s no particular reason to think that human text does or ever will contain all of the necessary information.
mgostIH#0245: What if human text contains enough information for the model to discover how to get more information?
mgostIH#0245: A sort of singularity
gwern#1782: well, if it is only able to specify a set of experiments which would resolve the question, then it's not *just* a language model, it's an RL agent doing exploration/active learning
mgostIH#0245: Well aye but it'd still be a model only trained on language
bmk#1476: I think that in practice text is good enough |
bmk#1476: So it's a mostly empirical claim
bmk#1476: Where good enough is wrt world modelling stuff
gwern#1782: @mgostIH it'd be a unimodal RL agent, but that's still just concluding 'no, a sufficiently powerful language model is *not* able to resolve ZFC because the necessary information just isn't encoded in human text, not even if you have 100% of human text'.
mgostIH#0245: I doubt that english text contains enough information to infer laws of physics beyond our current knowledge, but if you have a model that's powerful enough (Something inspired by solomonoff induction) I don't see why it wouldn't extract quite more than we expect
mgostIH#0245: Again the "solve ZFC" is ill defined, how do you even make sure the model does that in case ZFC is consistent?
StellaAthena#3530: I have withdrawn the suggestion about ZFC. You’re welcome to argue about it but don’t hold it against me
mgostIH#0245: "Here's a proof that ZFC is consistent given this other stronger axiomatic system I came up with"
"Ok but is that system consistent?"
"Well I can make an even stronger system and show that"
gwern#1782: @mgostIH such an infinite regress would not yield any simpler theories/proofs/systems-of-mathematics and I doubt it'd have any more real-world applications than simply assuming ZFC is consistent 🙂
mgostIH#0245: Oh no issue, I get what's being meant, but imo it's quite hard to define the limits of what can be done with currently known data; a model strong enough could come up with the simplest explanation that fits it, one that could also generalize far beyond what we know
mgostIH#0245: Ye it's what I mean, but you can't ask anyone to prove ZFC consistency without accepting stronger axiomatic systems
mgostIH#0245: Per Godel incompleteness
gwern#1782: yes, I don't believe humans make anywhere close to optimal use of all the data we already have. there's just way too many cases of people not even knowing key existing work in their field, never mind being any kind of optimal bayesian reasoner over all human fields
StellaAthena#3530: In the past I’ve been given the impression that you’ve been making more than a mere “in practice” claim. I’m more amenable to the idea that text is enough if it’s a “good enough for practice” benchmark. Though I still don’t think I agree.
gwern#1782: the human brain, one regrets to say, has to fit in just a few thousand cubic centimeters and spends most of its lifetime preoccupied with other matters
mgostIH#0245: Yeah I don't know if the optimal bayesian reasoner would infer stuff beyond our current limits of understanding just from english text GPT 3 has
mgostIH#0245: But I don't exclude it
StellaAthena#3530: For example, do you think that reading existing human text and being really *really* smart allows you to figure out quantum gravity?
mgostIH#0245: What I mean is that we don't have yet models that try to autosimplify their own representation |
mgostIH#0245: @StellaAthena If you include some text about physical papers I wouldn't exclude it
bmk#1476: Tbf a good enough LM doesn't exist yet so this is.. theoretically empirically i guess
mgostIH#0245: If we mean "ALL" existing human text then I think yes, it should be able to go beyond human knowledge
mgostIH#0245: But such a model is far stronger than anything we have and even more alien
StellaAthena#3530: @bmk sure, but you can make a prediction about any model we might one day develop
mgostIH#0245: Our current models don't perform that kind of reduction of complexity needed to generalize far beyond the data read
mgostIH#0245: Ideally the perfect model should come up with the simplest possible description (In terms of kolmogorov complexity) of the seen data
mgostIH#0245: Kolmogorov complexity isn't computable but you could reasonably spend a lot of compute to optimize the explanation for the data towards simplicity
StellaAthena#3530: Is this an axiom, a definition, or a claim?
mgostIH#0245: I am essentially describing Solomonoff Induction, which is mathematically shown to be a complete agent
mgostIH#0245: Solomonoff induction isn't computable but is approximable to the limit
mgostIH#0245: AIXI builds on this to show that you can make an RL agent that is asymptotically optimal given any data
mgostIH#0245: And if you think about it the whole field of physics is formulating a "simple enough" law that describes the seen data
mgostIH#0245: And finding patterns in math is too the same thing
StellaAthena#3530: Eh
StellaAthena#3530: I’m not particularly impressed with AIXI and don’t understand why people are
StellaAthena#3530: Wait, does it?
mgostIH#0245: It's basically proven that it'd be a "perfect" RL agent
mgostIH#0245: As in you can't do asymptotically better
StellaAthena#3530: Do you have a reference? |
StellaAthena#3530: Or a formal statement?
mgostIH#0245: The issue is that models like these build directly from stuff that can't be computed so it's practically useless
StellaAthena#3530: Yes, I am aware
StellaAthena#3530: ^^
mgostIH#0245: I watched the talk of the creator of AIXI with fridman and checked the wikipedia, AIXI builds mostly on Solomonoff induction so you are better off learning that
bmk#1476: I'm not convinced that solomonoff prior really is the best prior
mgostIH#0245: Proving some expected value bounds when you already have the optimal bayesian learner is not something revolutionary
mgostIH#0245: I personally find Kolmogorov complexity a very nice way to approach any problem
mgostIH#0245: And it seems to work well with our reality
mgostIH#0245: It's basically a formalized Occam's razor
StellaAthena#3530: Can you give a formal statement of what you’re claiming
mgostIH#0245: You are pretty much arguing philosophy if you are disputing Occam's razor, which isn't necessarily a bad thing, just not something I think is worthwhile in a practical sense, but I'd be curious to see some better approach
StellaAthena#3530: Can you give a formal statement of what you’re claiming
mgostIH#0245: Solomonoff induction is about finding a distribution over algorithms that best explains given data: it's a bayesian update over every algorithm with a prior that gives higher density to simpler algorithms in terms of Kolmogorov complexity
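Formally, the universal prior being described is usually written like this (the standard definition for a fixed prefix universal machine U; this rendering is mine, not from the chat):

```latex
% Probability of a string x = total weight of all programs p that make the
% universal machine U print something beginning with x:
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}
% Short programs (small \ell(p)) dominate the sum: a formalized Occam's razor.
```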
mgostIH#0245: > The remarkable property of Solomonoff's induction is its completeness. In essence, the completeness theorem guarantees that the expected cumulative errors made by the predictions based on Solomonoff's induction are upper-bounded by the Kolmogorov complexity of the (stochastic) data generating process.
mgostIH#0245: https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference at Mathematical Guarantees
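The completeness property the quote gestures at is usually stated like this (following Solomonoff 1978 as presented in Hutter's work; treat the exact constant as indicative):

```latex
% For any computable measure mu generating the data, the expected sum of
% squared one-step prediction errors of the universal predictor M is bounded
% by mu's Kolmogorov complexity:
\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[\big(M(0 \mid x_{<t}) - \mu(0 \mid x_{<t})\big)^{2}\right] \;\le\; \frac{\ln 2}{2}\, K(\mu)
```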
StellaAthena#3530: There are *zero* citations in that subsection.
bmk#1476: @mgostIH what is the formal statement that AIXI is "optimal" in
bmk#1476: What is meant when saying it's optimal
StellaAthena#3530: Actually |
StellaAthena#3530: I don’t see a single technical claim in that entire page that has a citation
mgostIH#0245: Eh, typical of wikipedia pages without much traffic
mgostIH#0245: I don't know exact sources so of course I am basing my claims over what has been presented there, but I don't really see it as a big deal
StellaAthena#3530: That is true, but it also means that it’s not helpful for understanding the technical claim that you are making.
StellaAthena#3530: Can you give a formal statement of what you’re claiming
StellaAthena#3530: (“No” is an acceptable answer, if that’s unclear)
mgostIH#0245: Well aye, it'll be my answer since I have to go sleep
StellaAthena#3530: I see
mgostIH#0245: Moreover it's not like I am here to give formalities about solomonoff induction; I thought the topic was cool and could point a direction toward the kind of agents that are optimal
mgostIH#0245: I think things should in general be more phrased towards the search of simplicity in terms of Kolmogorov complexity
mgostIH#0245: @Daj should know about all of this too
mgostIH#0245: Well as in he likes the topic
StellaAthena#3530: @mgostIH I hope you don’t feel attacked by my questions. I’m trying to understand what you’re excited about and why.
bmk#1476: I think the problem is that none of us think of AIXI as a useful framework in practice and so any claims of it being useful in practice need justification
mgostIH#0245: @StellaAthena Oh no the opposite, same goes to you, I don't intend any ill intention!
mgostIH#0245: I don't claim practicality of AIXI and I don't think anyone does, what I claim is "moving more towards that" could improve things
StellaAthena#3530: I don’t think it’s useful, even in theory tbh
mgostIH#0245: Why not? It would basically prevent overfitting
mgostIH#0245: The model would restrict itself to the simplest possible explanation of the given data rather than using its extra parameters to fit noise
mgostIH#0245: Kind of like how seeing |
1 2 4 8 16 32 64 128
is enough for us to formulate that 256 is next
mgostIH#0245: And we even get an infinite sequence model; we choose this because it's the simplest one that can explain these numbers
StellaAthena#3530: I don’t think that this is true, or even a particularly meaningful claim.
StellaAthena#3530: What does that even mean
mgostIH#0245: "Simplest" in terms of kolmogorov complexity can't always be shown to be the case, but if we pick a reasonable language like arithmetic it can be shown in this case
mgostIH#0245: Simplest as in:
1. Pick a language to describe algorithms
2. Define the simplicity of an algorithm as the length, in symbols of that language, of the shortest equivalent algorithm
3. Given any data, use the simplest algorithm that fits it
mgostIH#0245: I mean it's kind of why we intuitively think that after 1234567 there's 8
mgostIH#0245: It's the more likely hypothesis
CRG#8707: Appearances can be deceiving https://www.youtube.com/watch?v=84hEmGHw3J8
mgostIH#0245: In a sense (Kolmogorov complexity) 12345679 is more complex than 12345678
mgostIH#0245: @CRG Again I am not claiming that it's always **the** one
mgostIH#0245: But it's the one we should more likely go with given the data we observe
cfoster0#4356: Solomonoff is about *prior* probabilities
StellaAthena#3530: Can you prove that the shortest algorithm that outputs 1, 2, ..., 7 outputs 8 next?
mgostIH#0245: If you are asking someone
"Hey what comes next after 1 2 4 8 16", an answer of 31 would have intrinsically more complexity given arithmetic as a base language than 32 |
mgostIH#0245: @StellaAthena Bruteforce thru the symbols of the language and show that
n -> n + 1 is the shortest such symbol sequence that explains 1234567
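A toy version of that brute force, as a minimal sketch only (real Kolmogorov complexity is uncomputable; the five-symbol `eval`-based "language" here is invented purely for illustration):

```python
from itertools import product

# Tiny "language": arithmetic expressions in n over a handful of symbols.
SYMBOLS = ["n", "1", "2", "+", "*"]

def shortest_fit(data):
    """Enumerate expressions shortest-first and return the first one that
    reproduces data[i] at n = i + 1 (a crude stand-in for Kolmogorov search)."""
    for length in range(1, 6):
        for expr in map("".join, product(SYMBOLS, repeat=length)):
            try:
                if all(eval(expr, {"n": n}) == y for n, y in enumerate(data, 1)):
                    return expr
            except Exception:
                continue  # most symbol strings aren't valid programs
    return None

seq = [1, 2, 3, 4, 5, 6, 7]
prog = shortest_fit(seq)           # "n": one symbol, nothing shorter fits
print(prog, eval(prog, {"n": 8}))  # predicts 8 as the next element
```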
StellaAthena#3530: You basically said “go prove it and you’ll see that it’s true”
mgostIH#0245: No I said that bruteforcing is essentially the only way
StellaAthena#3530: That doesn’t do anything to convince me that you’re right.
mgostIH#0245: kolmogorov complexity isn't computable
mgostIH#0245: So each single argument would have a needed proof, bruteforcing works for the simplest kind
StellaAthena#3530: “Isn’t computable” doesn’t mean anything for x = 8
mgostIH#0245: I mean that the bruteforce in this case is a valid proof that can explain it quite nicely
StellaAthena#3530: You didn’t *give* a brute force proof. You asserted that if one was given it would show you were right
mgostIH#0245: Again, I can't come to you with formalities over exactly why, but n -> n + 1 is so simple that you'll struggle to find other models that fit the numbers from 1 to 7
bmk#1476: "proof or gtfo"
mgostIH#0245: Yeah I can't do that right here right now
StellaAthena#3530: What about 1, 2, __?
mgostIH#0245: I thought that for a server based on machine learning mathematical proofs weren't that required 😬
StellaAthena#3530: @bmk [insert rant here]
mgostIH#0245: @StellaAthena I'd probably still say that 3 is more likely for the sheer simplicity of n -> n + 1 but the point is that if you have very very little data the uncertainty is much higher
cfoster0#4356: Like Bayes, Solomonoff says to never discard a hypothesis, only weigh its evidence by its complexity
mgostIH#0245: Yeah that's an important point I might've worded poorly
mgostIH#0245: What I meant is that if your policy is "pick the hypothesis with highest likelihood" then you'd choose n -> n + 1 |
bmk#1476: I have a question: since kolmogorov complexity is only defined up to a constant, how do you define a probability distribution using it?
mgostIH#0245: Up to a constant given a language
mgostIH#0245: If you specify the language then it's defined exactly
bmk#1476: I meant when not given a language
bmk#1476: It's not really a universal prior if it's different depending on language
mgostIH#0245: Ye but then you can't really formulate induction because you don't have a language to begin with
bmk#1476: What..?
mgostIH#0245: This is true to a point, but the issue is that any prior would have that problem
mgostIH#0245: If you don't have a language to define your induction on you can't formulate any program to explain data
bmk#1476: Isn't the entire point of solomonoff to be independent of language
mgostIH#0245: Asymptotically it would be since it's only defined up to a constant
mgostIH#0245: It'd be like saying:
mgostIH#0245: "What if my language has a symbol `a` that corresponds to the program 12345678, then I could express 123456789 as a, 9 instead of n -> n + 1"
bmk#1476: Yeah but it's not a well defined distribution then
mgostIH#0245: I'd say it's a parametrized distribution
mgostIH#0245: Essentially the language would give a bit more bias to some program depending on the symbols, but if your language is finite this isn't "almost always" an issue
StellaAthena#3530: @mgostIH why are you calling this a “bias”
StellaAthena#3530: What is this a “bias” towards
mgostIH#0245: Oof I really need to sleep
StellaAthena#3530: Go sleep |
mgostIH#0245: Maybe let's go on tomorrow and include Connor in the conversation, he probably spent way more time and effort on this than me
bmk#1476: @StellaAthena what's your opinion on how to make solomonoff well defined while language independent
StellaAthena#3530: @bmk this is why I’m pushing back on the use of the word “bias.” What’s being called “bias” here has nothing to do with statistical bias (“x is a biased indicator”), social bias (“I am biased against x”), or inductive bias (“my prior prefers x”)
bmk#1476: Sure
StellaAthena#3530: It doesn’t *mean anything* to make solomonoff language independent. It’s not a coherent question.
StellaAthena#3530: (As far as I can tell)
bmk#1476: Why, though?
bmk#1476: We know that in the limit of complexity all languages are equivalent, is there no way to extend that to low complexity stuff?
StellaAthena#3530: Are you familiar with chaitin’s constant?
bmk#1476: I've heard of it
StellaAthena#3530: There are countably many true or false questions that have definitive answers expressible in English
StellaAthena#3530: (Proof: there are countably many English statements)
StellaAthena#3530: Since they are countable, you can put them in order in a list. Let’s call that list q_i
StellaAthena#3530: We can now define a number χ = Σ a_i / 2^i, where a_i = 1 if q_i has the answer "true" and a_i = 0 otherwise.
StellaAthena#3530: This produces a well-defined real number, χ.
StellaAthena#3530: However, *which* number is it? Does it begin 0.00..., 0.01..., 0.10..., or 0.11...?
StellaAthena#3530: The answer is "it depends on what order you list the questions in." If I don't tell you *which* ordering of questions I have in mind, you can't say which is the correct start.
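In symbols, just restating the construction above:

```latex
% Fix an enumeration q_1, q_2, ... of the questions and set
\chi \;=\; \sum_{i=1}^{\infty} \frac{a_i}{2^{i}},
\qquad a_i = \begin{cases} 1 & q_i \text{ is true} \\ 0 & \text{otherwise} \end{cases}
% Re-listing the questions by any permutation sigma gives
\chi_\sigma \;=\; \sum_{i=1}^{\infty} \frac{a_{\sigma(i)}}{2^{i}},
% which is in general a different real: the digits depend on the ordering.
```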
StellaAthena#3530: Does this make sense @bmk?
bmk#1476: I think I understand but i don't see how this relates to solomonoff
bmk#1476: The obvious ordering here seems to be "by length" or something like that |
StellaAthena#3530: Asking for a “language independent” Solomonoff Induction is like asking for an “order independent” value of χ. The very definition of solomonoff induction (like the definition of χ) *depends on* the language (ordering) used.
bmk#1476: But the thing is, as the data gets more complex, Solomonoff induction gets less and less language dependent
bmk#1476: And so lots of people just assume that they're working in the limit and that there is a single universal solomonoff
bmk#1476: But is there a way to generalize that to simple data too
StellaAthena#3530: Who are these “people”? I have never seen someone use solomonoff induction for something “real”
bmk#1476: I mean just theoretically, not for anything real
bmk#1476: Also I guess i can't point to any specific instances since I only have a popsci level understanding
bmk#1476: But I was under the impression that the entire point of kolmogorov complexity was that it was universal up to a constant
bmk#1476: And so by basing a prior on kolmogorov complexity it's more universal in some sense
StellaAthena#3530: Kolmogorov complexity has the same ordering dependence. "The Kolmogorov complexity" of a sequence is not a well-defined thing. It depends on an ordering of Turing Machines
bmk#1476: Again, though, isn't the fact that it's equivalent up to at most a constant a big selling point?
StellaAthena#3530: A selling point for what? What does the constant factor give you?
bmk#1476: it.. means that for sufficiently complex data it doesn't matter what language you use?
bmk#1476: which makes it more universal
StellaAthena#3530: Why do you say that
StellaAthena#3530: Because as n gets bigger the difference between 2^n and 2^n + C becomes irrelevant?
bmk#1476: Well, yeah
StellaAthena#3530: Why does that matter / what does that do?
bmk#1476: i mean, it implies that you can objectively measure the complexity of something, more or less language-independently, in the limit of it being very complex, right?
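The "up to a constant" fact under discussion is the invariance theorem; its standard form:

```latex
% For any two universal machines U and V there is a constant c_{U,V},
% independent of x, such that
\left| K_U(x) - K_V(x) \right| \;\le\; c_{U,V} \quad \text{for all strings } x.
% c_{U,V} is roughly the length of an interpreter for V written for U, which
% is why the choice of language only washes out for very complex x.
```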
Dal#7192: Hello guys. I think we're getting really close to software overhang. |
Two questions:
1. Is anyone working on multi modal modelling? Assembling a coherent world model from the strongest/cheapest of arbitrary senses? What I recall from the last conversation is that this is only being brushed against.
2. Is there any working logic for assessing real time surprisal/information entropy/salience? This is the big black box outstanding on my GI map.
bmk#1476: 1. yes OA is working on this, and at large scale
bmk#1476: 2. there's a bunch of ad hoc ways to do this, for instance there's a RL paper that pops to mind, something about random network distillation. idk if there's anything really principled afaict, this isn't a thing i've done a lot of research into
Dal#7192: Thanks. Regarding the second, are there any other terms I could use to poke at this?
Louis#0144: Thought .rs was ruby script
Louis#0144: Apparently that’s rust
Deleted User#0000: @Dal 2. sounds like curiosity in RL. There's many links in the related work section here: https://openreview.net/pdf?id=rJNwDjAqYX
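A minimal sketch of the random network distillation idea bmk mentions (Burda et al. 2018), with all network sizes and hyperparameters made up for illustration: a frozen random "target" net embeds each observation, a trained "predictor" tries to match it, and the prediction error acts as a real-time surprisal/novelty signal.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, d_out))

obs_dim, emb_dim = 16, 32
target = mlp(obs_dim, emb_dim)
for p in target.parameters():
    p.requires_grad_(False)   # target stays frozen at its random init
predictor = mlp(obs_dim, emb_dim)
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def surprisal(obs):
    """Per-example novelty: squared error between predictor and frozen target."""
    err = ((predictor(obs) - target(obs)) ** 2).mean(dim=-1)
    opt.zero_grad()
    err.mean().backward()     # predictor slowly catches up on seen inputs
    opt.step()
    return err.detach()       # stays high on genuinely novel observations

batch = torch.randn(8, obs_dim)
print(surprisal(batch))       # shrinks if the same batch is fed repeatedly
```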
Deleted User#0000: btw do we have any more info into the kinds of multimodal models OA is looking at, beyond their autoregresive models paper?
Deleted User#0000: (i wish they were more open :P)
Kazumi#1297: huh, OA switched to using pytorch this January, is gpt3 made in tensorflow, or pytorch?
Daj#7482: Most likely pytorch. The Christiano paper had a working MPI-parallelized transformer in its code, that's probably similar to what they used
Kazumi#1297: so they're not using any TPUs?
Daj#7482: Nope, that for sure, they use GPUs
Deleted User#0000: so is the reason eleuther wants to use TPU, that getting enough GPUs for gpt3 is even harder than getting enough tpus?
Daj#7482: It's the reason we do it yes. No one uses TPUs of their own free will lol
Daj#7482: Google just hands them out like million dollar candy
Deleted User#0000: lol i see
Kazumi#1297: I can't really find much about OAs multimodal stuff |
gwern#1782: there's very little about it beyond the recent paper
inoryy#0395: Still no contingency plan?
StellaAthena#3530: For what?
inoryy#0395: For the scenario that tfrc doesn't work out
Daj#7482: How could we have a "contingency plan" lol? Seems silly, it's not like someone will just hand us millions of dollars like this in any other context
Daj#7482: This is a project that emerged from an unusual opportunity, not a determined search for one
inoryy#0395: I think there are multiple ways to tackle the task if the goal is alignment research through LMs, not all of them require millions.
Daj#7482: Well we only need TFRC for training GPTNeo
Daj#7482: The alignment research, yea there are other ways, which I am also pursuing behind the scenes
Daj#7482: Just takes time to build capacity, raise capital, etc
inoryy#0395: Fair enough.
Daj#7482: We're mostly (exclusively?) beginner alignment researchers, so still building career capital
Louis#0144: hey losers
Louis#0144: hows it bumping
StellaAthena#3530: TFRC is tangential to alignment research IMO.
StellaAthena#3530: Replicating GPT-3 is not an alignment goal, and I don’t think that having internal access to GPT-3 would be a major alignment research boon
inoryy#0395: Right, this is also why I'm somewhat surprised by the focus on gptneo given the initial mission statement of the server.
Daj#7482: GPT Neo is how the server got started
StellaAthena#3530: What initial mission statement?
StellaAthena#3530: The initial mission statement was “hey, what if we replicated GPT-3” AFAIK |
inoryy#0395: Something something alignment is very important and not a lot of people are working on it so somebody should. At least that was my impression.
StellaAthena#3530: So, the founding of this server was the exchange:
> **Connor:** https://arxiv.org/abs/2006.16668
> Hey guys lets give OpenAI a run for their money like the good ol' days
> **BMK:** this but unironically
Daj#7482: This is my goal in life lol
Daj#7482: And I kinda drag the server along with me
Daj#7482: But we're actively humble about the origins and impact of this server
Daj#7482: No you're misunderstanding lol, our "goal" is "hang around with cool people discussing cool work and do some cool projects"
Daj#7482: If more develops out of it, great
Daj#7482: If not, not
Daj#7482: We all have dayjobs
StellaAthena#3530: Do you have any suggestions for a millionaire patron who is down to give us two dozen DGXs?
StellaAthena#3530: It only makes sense to criticize “betting it all on TFRC” if there exists an alternative.
Daj#7482: Also what Stella says
andyljones#7746: have you folks killed a research project yet
Daj#7482: I do hope that Eleuther will develop into a kind of decentralized alignment research hub or grantmaker eventually, but we're in no hurry and it takes time
Daj#7482: MIRI did nothing for like 6 years too lol
Daj#7482: There have been projects that didn't take off
StellaAthena#3530: If so @inoryy, I would love to hear about it. But the amount of compute this requires is insane. Even if you ignore the fact that you can’t fit the model on a V100, if you obtain the theoretical max computational power on a V100 it would take 355 **years** to train. |
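For reference, the back-of-the-envelope behind that figure, taking the GPT-3 paper's ~3.14e23 training FLOPs and assuming ~28 TFLOPS sustained on one V100 (both round numbers):

```latex
\frac{3.14 \times 10^{23}\ \text{FLOPs}}{2.8 \times 10^{13}\ \text{FLOP/s}}
\;\approx\; 1.1 \times 10^{10}\ \text{s}
\;\approx\; 355\ \text{years}
```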
andyljones#7746: i feel like you're taking a narrow critique as an attack on the whole community, which i'm pretty sure isn't what's intended?
inoryy#0395: Indeed.
StellaAthena#3530: @andyljones I don’t feel attacked at all
Daj#7482: I think Stella may be formulating things a tad more aggressive than intended
StellaAthena#3530: I don’t mean to be aggressive at all, no
andyljones#7746: but you're responding to 'i'm worried that gptneo isn't viable' with 'yes but we're doing all these other things'
StellaAthena#3530: Sorry if that came across poorly
Daj#7482: I know you don't :D
Daj#7482: Is this directed at me or Stella?
Daj#7482: I also apologize if I sound aggressive, I'm basically trying to make us sound less cool lol
andyljones#7746: ah, both of you, pardon
Daj#7482: Apologies, not my intention, my entire argument is:
Daj#7482: ¯\_(ツ)_/¯
Daj#7482: Dunno it's been fun so far, lets see what happens
andyljones#7746: yeah, that's a great approach to take! but it doesn't answer the specific 'this project that you chat about a lot might be fundamentally flawed'
Daj#7482: And my response is: :yes:
andyljones#7746: i guess what inoryy - and i - are looking for is 'yup. we know. that's why we're diversifying'. which is basically what you're saying, but not joined up
Daj#7482: It's as non-flawed as it can be
StellaAthena#3530: My add on to that is
> if it works out with TFRC that would be dope. But it doesn’t make sense to talk about “alternatives” if literally zero exist. |
Daj#7482: We don't have any other sources of millions of dollars of compute, if we find it, we'll take it
Daj#7482: But if it fails, it fails
Daj#7482: I endorse this statement with the added qualifier that we're still just hobbyists finding the right use for our free time
Daj#7482: I apologize if I communicated that badly
StellaAthena#3530: Replicating GPT-3 is not the only thing we are doing. Our work on the Pile is independently worthwhile, and even if we can't train GPT-3 scale models we can still use the framework to train smaller large models.
inoryy#0395: Please take my messages as a nudge to consider a discussion about alternative directions rather than some hard criticism.
StellaAthena#3530: We’ve talked about some cool scaling law research as a next project, whether or not we get the compute
Daj#7482: You're totally not wrong, this is just something which I'm sure you can imagine has been discussed here literally _dozens_ of times
StellaAthena#3530: When you say “alternative directions” do you mean “alternatives to TFRC” or “projects other than replicate GPT-3”
andyljones#7746: i've been meaning the latter
StellaAthena#3530: I’ve been assuming you mean the former @inoryy, which is why I keep saying “it’s really *really* expensive”
inoryy#0395: Great, I could very well just be completely out of the loop. My impression was built on occasionally skimming the discussions which is of course bound to give me an incomplete picture.
StellaAthena#3530: Ah
Daj#7482: To give context: Most of the GPT3 work was done when the founders all happened to have a _bunch_ of free time
Daj#7482: And we've all gotten jobs since then lol
bmk#1476: What's the convo
Daj#7482: The meaning of ~~Life~~ Eleuther
Daj#7482: lol
bmk#1476: Ah
Daj#7482: I'm working on a new project I wanna introduce Eleuther to, but work makes slow |
bmk#1476: We don't really have a mission statement atm
andyljones#7746: i think there's a certain amount of naive optimism *necessary* to get a project off the ground, and to some extent you have to lean into that.
i think this often has the unfortunate consequence that other people take the optimism with far more conviction than they should, and then :pikachu face: when it doesn't work out.
StellaAthena#3530: I think that’s fair
bmk#1476: We have a bunch of different goals that are mildly related to each other
Daj#7482: That's why I try to practice "active humbleness" here by repeatedly saying we're just a bunch of people hanging out
Daj#7482: lol
Daj#7482: I think it's a healthy view to avoid too high expectations
andyljones#7746: the discord version of 'epistemic status' at the top of a LW post
Daj#7482: haha
cfoster0#4356: "I heard this from some strangers on the internet"
bmk#1476: Epistemic status: revealed to me during a dream
Daj#7482: So yeah, I don't want people to think we're full time alignment researchers actively looking to build MIRI 2.0 or such ~~(I do that in my dayjob)~~
Daj#7482: But if we happen to bumble our way into something like that, awesome
bmk#1476: Anyways i think a reasonable summary is:
Things we've actually done: GPTNeo, Pile
Things we semi realistically plan on doing: gpt3 replication
Things we wish we were doing more of: alignment |
bmk#1476: But none of this is in stone
StellaAthena#3530: “Scaling laws” should go on the “wish we were doing more of”
StellaAthena#3530: TBH for me a major reason to participate is “this is a thing I can be vaguely useful at while my migraines prevent me from doing real work.” Since Oct 1st I’ve had a migraine about 2 out of every 3 days. It makes coding for significant periods of time impossible 😦
Daj#7482: Damn that really sucks Stella, I'm so sorry
StellaAthena#3530: SOCML acceptances are out (for at least some workshops)
StellaAthena#3530: I got into NLU, anyone else going to be there?
bmk#1476: fwiw your participation is much more than just "vaguely useful" for us
inoryy#0395: Well, seems you have it figured out! I guess I just managed to randomly stumble mostly on GPT-3 related discussions over the last couple of weeks which gave me the wrong impression. Please don't take my comments the wrong way, they were coming from a good place.
Deleted User#0000: ahh sorry to hear that. there's been a mini-breakthrough in migraine treatment, look up CGRP inhibitors
Deleted User#0000: https://consultqd.clevelandclinic.org/cgrp-inhibitors-for-migraine-prevention-what-prescribers-need-to-know/
Deleted User#0000: you should see someone about it
Deleted User#0000: headaches are about the stupidest thing the body does to us
Deleted User#0000: absolutely pointless
Deleted User#0000: give me my robot body already
bmk#1476: I can't wait to ditch my monkey meat suit for a nice robot body
bmk#1476: Imagine being always alert, not needing sleep, not having back pain, not having the flu
Dromarion#3383: Encountering bugs while beta testing immortality is likely to be more unpleasant than anything else
asparagui#6391: just make backups sheesh
bmk#1476: I encounter loads of bugs while using this shitty pre-beta unstable body in production but the maintainer never responds to bug reports
bmk#1476: (And it *is* unpleasant) |
Kazumi#1297: one major issue is that it requires about 8 hours of down time every day, which is like a third of its entire life, and when it doesn't get it, the performance drops significantly
bmk#1476: It really sucks because the product life isn't that long either, it's like planned obsolescence
bmk#1476: And it doesn't even come with cloud backups
Kazumi#1297: once you get your own, you're stuck for most of your life too, you can't even choose any options and you can't really even tell what you want until much later. And you can't just create a second account
Aran Komatsuzaki#5714: @Deleted User My HSP completely vanished with no trace, and I'm completely healthy now. Maybe it has something to do with living in Japan or food here.
Deleted User#0000: ahh yea, although i'm not an expert with HSP, i believe its one of those illnesses that comes and goes anyways
Deleted User#0000: it would be nice to train GPT on UpToDate
Deleted User#0000: and see if it can paraphrase medicalese
Deleted User#0000: i need to work on a real project after i wrap up my existing open source work
Aran Komatsuzaki#5714: yeah project is fun
Deleted User#0000: or demoralizing, i spent two months trying to get text -> image generation using stylegan, but it never worked out
Deleted User#0000: at least with open source you have something at the end to show for it
Aran Komatsuzaki#5714: sure but making the real advance is where fun resides
Deleted User#0000: yea true
bmk#1476: How big is it?
bmk#1476: We could include it in v2
Aran Komatsuzaki#5714: UpToDate dataset probably cannot be open-sourced
Aran Komatsuzaki#5714: it's subscription-based stuff.
Deleted User#0000: 2.3 GB, someone torrented a 2016 scraped copy of the entire site
Deleted User#0000: the problem is, it's still in html / js format i believe |
Deleted User#0000: lol, yea, maybe it should go in some illegal Pile
Deleted User#0000: and trained in secret
Aran Komatsuzaki#5714: illegal stuffs go to Shawn
Deleted User#0000: ok
bmk#1476: Yes, eleuther does not do illegal things, but we'll gladly cite anyone else who does it for us
bmk#1476: (what are we, *plagiarists*?)
bmk#1476: does anyone have a copy of C4?
bmk#1476: it's a big hassle to get it, apparently
bmk#1476: i don't need all of it, just a 40GB representative sample of it
bmk#1476: @Louis ?
bmk#1476: why are all the CC datasets so hard to get a copy of
bmk#1476: cc_net too
bmk#1476: you have to use their script to do who knows what processing (no idea what format comes out the other end, either - the only time i ever ran their script, i ran out of disk)
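For the record, the "official" route goes through TensorFlow Datasets, and it is exactly the hassle being described: the builder exists, but preparing it runs the whole Common Crawl cleaning pipeline locally rather than fetching a finished corpus (a sketch from memory; expect an Apache Beam setup and a lot of scratch disk):

```python
import tensorflow_datasets as tfds

# The c4/en builder ships with TFDS, but download_and_prepare() kicks off the
# full Common Crawl cleaning job instead of downloading a finished dataset.
builder = tfds.builder("c4/en")
builder.download_and_prepare()  # heavyweight: needs Beam + terabytes of disk
ds = builder.as_dataset(split="train")
for example in ds.take(1):
    print(example["text"])
```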
redbasecap#8145: Hi, i am really interested to learn more about EleutherAI.
Where do you do your computing?
Louis#0144: I do
Louis#0144: Hi
Louis#0144: It sucks
Louis#0144: You don’t want it |
Louis#0144: lol
StellaAthena#3530: Oh? How does it suck?
bmk#1476: That's exactly why I want it
bmk#1476: I'm running ablations
bmk#1476: I want an experiment in Pile paper showing how much better Pile is than C4
chirp#4545: Some interesting quotes from a speaker who will be speaking at NeurIPS soon:
> [6:06] For the last forty years we have programmed computers; for the next forty years we will train them
> **[19:58] So I'm gonna talk a little about one of the most important trends in machine learning. And this, this has to do with the nature of the scaling up...**
Link: https://www.ed.ac.uk/informatics/news-events/lectures/christopher-bishop
NeurIPS link: https://neurips.cc/virtual/2020/public/invited_16165.html
mgostIH#0245: @chirp page seems unavailable
chirp#4545: Yeah I’m seeing the same
chirp#4545: Weird, it worked for me just a few hours ago
bmk#1476: @Louis so like can i has C4
bmk#1476: i just need a 40GB slice of it for now, though getting a full copy eventually would be nice
The Warrior Monk#3737: @everyone Alright, I'm super excited to get to know ya'll better. Anybody got any research/start-up/hobby project ideas, HMU! Would love to brainstorm
bmk#1476: thank goodness i disabled @ everyone lol
Louis#0144: OMG |
Louis#0144: LMAOO
Louis#0144: Yeah I’ll see if I can get it to u
Kazumi#1297: yeah
bmk#1476: So the script we use is made by @shawwn
bmk#1476: Lemme see if I can find it
bmk#1476: https://github.com/soskek/bookcorpus/issues/27
bmk#1476: Here ya go
bmk#1476: https://github.com/shawwn/scrap/blob/master/epub2txt-all
bmk#1476: This looks like the one
bmk#1476: I just stick a file extension on them, don't think you need to do anything else
bmk#1476: Also I think the only flag you need on is -n
bmk#1476: Actually i take that back, it looks like his code dumps everything into one file, so you probably want to replace that (see L600 or thereabouts)
bmk#1476: If you don't have it figured out by this afternoon, i can make a ready to use version of the script for you
bmk#1476: I don't believe the script requires you to do so
bmk#1476: After you get the list of files that are epubs, you can just run the script on those
bmk#1476: Also re: pdfs: let's just say they're *highly nontrivial* to handle so we'd probably put that off until when we figure out how to extract pdfs effectively. Also not to mention when you extract a pdf into text the size goes down by a lot - I bet only 1-2 TB of text will be left after extracting all of libgen and scihub combined, and possibly under 1TB after deduplication (since libgen has a lot of duplicated content)
bmk#1476: Which is still a lot of text, but i just wanted to get that expectation set
StellaAthena#3530: @-Archivist `.txt` files are ideal for us. One `.txt` file per document.
StellaAthena#3530: @-Archivist Makes sense. For our purposes simply sequentially numbering each `.txt` would be fine.
bmk#1476: Yeah the naming of txts isn't really important |
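For anyone without shawwn's script handy, a rough stdlib-only approximation of the flow discussed here, one sequentially numbered .txt per book (epubs are zip archives of XHTML; the regex tag-strip is crude and the `books/` and `out/` paths are made up):

```python
import glob
import os
import re
import zipfile

def epub_to_text(path):
    """Concatenate the visible text of every (X)HTML file inside an epub."""
    chunks = []
    with zipfile.ZipFile(path) as z:
        for name in sorted(z.namelist()):
            if name.endswith((".xhtml", ".html", ".htm")):
                raw = z.read(name).decode("utf-8", errors="ignore")
                chunks.append(re.sub(r"<[^>]+>", " ", raw))  # crude tag strip
    return "\n".join(chunks)

os.makedirs("out", exist_ok=True)
for i, path in enumerate(sorted(glob.glob("books/**/*.epub", recursive=True))):
    with open(os.path.join("out", f"{i:07d}.txt"), "w", encoding="utf-8") as fh:
        fh.write(epub_to_text(path))
```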
bmk#1476: For us, the number one most important thing is there's no garbage in the extractions
bmk#1476: We're willing to throw away some good data if it means we can get rid of all the bad data
bmk#1476: Garbage is pretty broad but a good rule of thumb is "something that would look out of place if you were only looking at the txt"
bmk#1476: This includes: bad ocr (I'm sure you're aware of all sorts of bad ocr artifacts, i.e. misspellings, wrong-language ocr resulting in a complete mess), tables/charts (most extractors mangle tables and charts into an unrecognizable mess of numbers, so we'd just like to exclude them), and so on
bmk#1476: Still cleaner than CC
bmk#1476: I don't think it's possible to be less clean than CC
bmk#1476: After we finish extraction, i can handle the filtering
bmk#1476: But extraction needs to be not-garbage enough for the filtering to not toss *everything* out
bmk#1476: Any amount of text is fine, I'd personally prefer either zstd or gzip compression but the details don't really matter
bmk#1476: (I'm a fan of datahoarding too, haha, so while i don't have multi-PB arrays like you do, a few dozen TB is definitely doable)
bigdatov#6926: @Daj can't find the paper mentioned in ML Street Talk comparing scaling laws across multiple modalities, help... :guilty:
Daj#7482: uhhhmmm good question it might not be published yet. @Aran Komatsuzaki @gwern , I swear I've seen the paper before but maybe it was just screenshots from the talk, anyone have a link?
Aran Komatsuzaki#5714: @Daj @bigdatov Yeah I believe you guys are talking about this paper: https://arxiv.org/abs/2010.14701
Daj#7482: exactly, thank you!
StellaAthena#3530: @-Archivist what makes a paper make sense to be hosted?
StellaAthena#3530: I thought you guys mostly hosted things that aren’t already free online
Daj#7482: We CS people have come to really trust arxiv to always have everything
Daj#7482: I can't think of a single paper off the top of my head I'd want to archive that isn't on there
Daj#7482: If arxiv shuts down all of CS research stops dead in its tracks lol
Daj#7482: I like your way of thinking, haha |
Daj#7482: Also mirror sci-hub and science is safe for the future
Daj#7482: tbh I'm shocked sci-hub has been so resilient so far
Daj#7482: Nice
Daj#7482: Guess most people don't have warehouses of storage like you hah
StellaAthena#3530: Most CS papers can be found somewhere that’s not on arXiv
StellaAthena#3530: If you go to Google Scholar and click “other versions” you’ll see this.
StellaAthena#3530: But finding said papers can be extremely hard because some random person’s personal website is less accessible then a unified repo
Daj#7482: Is this a challenge? Hah
Daj#7482: Yea, of course arxiv doesn't have everything, I'm just biased to my niche of research
StellaAthena#3530: Right right
StellaAthena#3530: Oh nice
Daj#7482: Not yet, but I could imagine video based multi model training sets getting this large in the next few years. Wouldn't be surprised if Google already trains at this scale
Daj#7482: _TPU Podcast is typing_
Daj#7482: Hah
Daj#7482: That sounds like quite the story lol
Daj#7482: Oh are you part of those missing person search reddit things? I've heard about that
Daj#7482: Oh shit, so is that like your profession or are you actually just a superhero?
Daj#7482: > "I have more of a problem with collecting or hoarding data than I do with porn"
This is an absolutely phenomenal quote hahahaha
Daj#7482: You are actually a cyberpunk character haha |
Daj#7482: Amazing
Daj#7482: ...AGI is going to emerge from porn, isn't it?
Daj#7482: Of course it will
Daj#7482: Eleuther multi modal transformer trained on porn lets go
mgostIH#0245: I have become dick, destroyer of assholes
mgostIH#0245: @-Archivist how did you even send 2PB of data?
mgostIH#0245: Like do you have google band and how much did it take?
Daj#7482: I have so many questions about this entire enterprise lol. If you're willing to answer: How the hell do you have the hardware/budget to do all this?
mgostIH#0245: Oh so it wasn't over the internet?
Daj#7482: Or am I overestimating how expensive this all is?
Daj#7482: Cool
mgostIH#0245: With that much porn you can probably compete with pornhub
mgostIH#0245: At least it'd make a profit
Daj#7482: seems sorta shady
Daj#7482: I like the purity of the hoarding mission haha
mgostIH#0245: How much does a PB of storage cost more or less?
mgostIH#0245: To put things in scale, do you happen to know how much archive.org has?
Daj#7482: Christ, TIL restoration pays a _lot_ more than I thought lol
Matthias_#3292: sorry if this was already asked: whats the conversation gonna be about?
Daj#7482: ~~***M E M E S***~~ |
gwern#1782: @Daj yeah, restoration is very skilled and rich people will pay a lot for it. I was reading about minnesota's cave system, and apparently like a single guy owns most of the private caves in minnesota, just paying for it out of his furniture restoration business.
Daj#7482: That's such a gwern thing to know haha
Daj#7482: Fascinating
gwern#1782: it's one of those things like dentists or chiropractors where if you work hard for a few decades and have good enough interpersonal skills and develop a reputation for reliability and compound your savings/assets, you can become surprisingly rich
Daj#7482: Neat
gwern#1782: you might remember that s-town podcast series a few years back. how did that guy just live in quasi-retirement for decades, freely blowing cash on random shit? well, he was a really good antique clock restorer. so he had savings from his old jobs, and once in a while might do a new one. (IMO, he was *too* good a clock repairman - I was convinced reading it that he had severe mercury poisoning from his dedication to fire-gilding, which no one else does because of the mercury)
Daj#7482: Never heard of s-town but now I'm intrigued
gwern#1782: it was mildly interesting but imo wasn't as fascinating as it was cracked up to be, especially when the answer is really 'this poor dude had mercury poisoning and was slowly committing suicide before he actually committed suicide'
gwern#1782: I'd recommend reading the transcripts unless you really enjoy podcast formats
Daj#7482: Reading the wikipedia page it still sounds like a very aesthetic narrative
Daj#7482: I love podcasts actually, so hah
Daj#7482: I kinda miss my old subway commutes
Daj#7482: Forced 1-2 hours of podcasts a day
gwern#1782: I think part of it is just a perfection premium. rich people don't want a restoration which is merely 95% good-looking and accurate, they want it to be 100% perfect and flawless looking. if you don't have the semi-autistic attention to detail and expertise to get it that last few percent...
gwern#1782: (cue spanish art memes)
Daj#7482: Yea that makes sense
gwern#1782: it's also a hard field to break into. I mean... where do you go to learn how to restore books? do you just teach yourself over a decade of unpaid hobbyist work and then go pro? there's programs, surely, but who would even think of that as a career? so the supply is choked off at the source. every kid knows of being a rapper or football star as a career choice and has a vague idea of how one would go about it (despite its totally unrealistic nature!), but book restorer?
Daj#7482: Yea it sounds like it would need a very specific kind of person (or family legacy)
Daj#7482: Nice aesthetic hah
Daj#7482: This will definitely be a theme in the eventual cyberpunk novel I will write about fictionalized Eleuther |
gwern#1782: (I doubt these fields can absorb all *that* many new entrants, but the wages are set at the margin, not based on any absolute size. as long as there are just too few top-notch restorers who can get you that 100% flawless restoration job, if there's even a shortage of 1, then the restorers set the prices, not the wealthy buyers)
Daj#7482: For some reason this is unreasonably cool to me
gwern#1782: oh, it's very Gibsonesque
gwern#1782: Archivist even makes fakes?! *so* Gibson
gwern#1782: sorry, I mean, 'item replication' 🙂
Daj#7482: Yea, it's so funny that we _actually_ have someone named "Archivist" involved in this thing
Daj#7482: Amazing
gwern#1782: anyway, it's nerd catnip because it rewards technical perfection, involves both extremely old and new things, is a rabbithole, plays with notions of authenticity and real vs fake ('what color are your books?'), and involves high art/culture and lots of money
gwern#1782: what's more gibsonesque than extremely detailed japanese fakes of american bomber jackets?
gwern#1782: same thing here
Daj#7482: You're making yourself doxxable lol
Daj#7482: Well
Daj#7482: Actually I'm probably not rich enough to know how to find a restorer for 17th century paintings
gwern#1782: you probably just go on richpeople-craigslist or something
Daj#7482: Is gwern one of the other 8?
gwern#1782: ha. no, but I do understand the appeal, and when I was little, I did want to be a librarian
Daj#7482: I couldn't imagine something more fitting lol
Daj#7482: Yea this seems like the kind of world I only see in movies
Daj#7482: At least you're using the aristocrat money for good lol
gwern#1782: I have relatives who are very involved in antiquities and restoration, so I get some of this by osmosis |