zphang#7252: I think in the HF/fine-tuning world, "train" means fine-tuning
AI_WAIFU#2844: We can do a bit more than 1 step
bmk#1476: Dozens!
cfc#2691: couldn't we make a folding@home approach to training big models, using deepspeed?
triggerhappygandi#0001: Doesn't work.
EricHallahan#1051: Way too much latency.
triggerhappygandi#0001: Model parallelism doesn't work due to latency
triggerhappygandi#0001: Yeah
CRG#8707: <https://github.com/EleutherAI/info#qa> https://cdn.discordapp.com/attachments/729741769738158194/806956949945319534/6ac9468341b1d00b40b0aac515fa1360.png
EricHallahan#1051: Will 100% of the time diverge.
cfc#2691: makes sense
triggerhappygandi#0001: Even having storage and compute in different locations on GCP causes significant slowdowns
triggerhappygandi#0001: Imagine what an international hivemind would do
bmk#1476: Note to self: add a trigger to @Isaac McHorse that posts the link to the faq every time someone says "folding@home"
StellaAthena#3530: This is literally on my TODO list for when we get around to updating the bot
mgostIH#0245: @bmk blacklist #alphafold
StellaAthena#3530: What for?
mgostIH#0245: For the folding@home thing kek
mgostIH#0245: Should've replied :Think:
mgostIH#0245: Quite likely that the topic of protein folding comes up in that channel :S
bmk#1476: Oh, right
triggerhappygandi#0001: Lol
gwern#1782: _finally realizes what 'same energy' means. oh. that makes sense._
jrowe#5371: i searched a picture of my puppy, it found lots of the same breeds
jrowe#5371: labradoodles and german wirehaired pointers, pretty neat engine
jrowe#5371: puppy tax https://cdn.discordapp.com/attachments/729741769738158194/806997040146939966/snoot.jpg
jrowe#5371: my german wirehaired labradoonter
bmk#1476: I read that as wireheaded at first
jrowe#5371: lol
gwern#1782: I read that as 'biggan-generated' at first
jrowe#5371: abstract blanket pattern 4tw
StellaAthena#3530: I’m the last person to validate this because I’m faceblind, but when I search Google Images for “zendaya” all the images that come up are from articles about zendaya.
StellaAthena#3530: Ah that makes far more sense
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/807003976389754921/20210204_134527.jpg
jrowe#5371: content doggo
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/807009966342012988/image0.jpg
StellaAthena#3530: She decided it was time for me to stop working
erin#5432: got my 3090 set up 😻
erin#5432: now i have to fucking figure out docker lol
erin#5432: i actually got stylegan2-ada-pytorch to run
erin#5432: but that's it
erin#5432: and plus i barely understand its code
StellaAthena#3530: How hard is it for a large company to switch from ASCII to Unicode for their systems? I saw another tweet by someone who was annoyed that their bank didn’t accept their name as valid, and got to wondering.
EricHallahan#1051: I can't imagine it to be hard at this point. UTF-8 has been around since...
EricHallahan#1051: *rummages through papers*
EricHallahan#1051: ... September 1992.
EricHallahan#1051: I think ~30 years is plenty of notice to make the switch.
EricHallahan#1051: I note it is also another *Plan 9 from Bell Labs* innovation.
EricHallahan#1051: It's easier to understand than any of the TensorFlow versions, luckily.
erin#5432: yeah
erin#5432: i'm trying to use docker on my old code
erin#5432: i'm just getting stupid insufficient shared memory shit
StellaAthena#3530: @EricHallahan apparently the *National Bank of Ireland* doesn’t accept all Irish letters as valid?!?!?
StellaAthena#3530: https://twitter.com/tadhgmalonely/status/1357295860879675392?s=20
EricHallahan#1051: I'm scratching my head.
StellaAthena#3530: I mean, it’s almost certainly some kind of colonialist bullshit.
EricHallahan#1051: Is it the `Ó` that it doesn't like?
StellaAthena#3530: Yeah
StellaAthena#3530: In the comments someone says they had issues with é as well
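A minimal sketch of the failure mode under discussion, in Python (the name below is hypothetical, purely for illustration): an accented letter like Ó round-trips fine through UTF-8 but raises immediately in an ASCII-only pipeline.
```python
# Hypothetical name, for illustration only.
name = "Tadhg Ó Máille"

# UTF-8 handles the accented letters without issue:
print(name.encode("utf-8"))  # b'Tadhg \xc3\x93 M\xc3\xa1ille'

# An ASCII-only code path fails on the first accented character:
try:
    name.encode("ascii")
except UnicodeEncodeError as err:
    print(err)  # 'ascii' codec can't encode character '\xd3' in position 6 ...
```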
EricHallahan#1051: From Wikipedia:
> Ó is widely used in Irish where it has various meanings
StellaAthena#3530: Their daughter’s name was changed to have an E instead
StellaAthena#3530: (It’s in the middle of the name)
StellaAthena#3530: Oh yeah it’s indisputably common
StellaAthena#3530: In the comments people have mentioned that BoI has been trying to drive the Irish language extinct for over 100 years. Apparently they removed accent marks from paper copies pre-computers as well
StellaAthena#3530: This isn’t an accident or incompetency. The question is if it’s deliberate linguistic imperialism or something else IMO
EricHallahan#1051: "Linguistic imperialism" sounds like a pretty good way of saying it for this case. There is no excuse to not support UTF-8 in that context.
Sahl#0630: I don’t know much about the context behind this, but I’m willing to bet they’re just using shitty tools/didn’t think about it
Sahl#0630: But not having Unicode support in 2021 with a UTF-8 interface is really gross
gwern#1782: unicode causes so many problems. just doppelganger characters are a PITA
gwern#1782: 'bank of ireland please wire $1m to sergey notbadnik' 'oh it went to the wrong one because the 'e' happened to be a different unicode e which looks identical? so sorry, surely an innocent mistake'
Sahl#0630: this is actually an issue of using names to wire money
StellaAthena#3530: This is not unicode’s fault. This is why you should use a unique ID
gwern#1782: _remembers back when wikipedia didn't ban those unicode characters and people would register doppelganger accounts to get the original banned when the admin didn't exactly copy-paste the nick but wrote it out the way it looked_
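A minimal sketch of the doppelganger trick, assuming Python: the two strings below render identically in most fonts but differ at the codepoint level, and NFKC normalization does not collapse cross-script lookalikes, so a naive normalize-then-compare check is no defense (catching these needs a dedicated confusables check, per Unicode TR39).
```python
import unicodedata

latin = "sergey"            # all Latin letters
fake = "s\u0435rg\u0435y"   # U+0435 CYRILLIC SMALL LETTER IE in place of "e"

print(latin == fake)        # False, despite rendering identically

# NFKC handles compatibility forms (ligatures, fullwidth digits, ...),
# not cross-script lookalikes, so normalizing doesn't unify them either:
nfkc = lambda s: unicodedata.normalize("NFKC", s)
print(nfkc(latin) == nfkc(fake))  # still False
```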
Deleted User#0000: basically, if its a bank, and it's not some elite tech group at goldman sachs, you can expect things to not change. source, friends who defected from wall street to valley spill all at the bars
Sahl#0630: although the problem is more salient for domain names
StellaAthena#3530: @Deleted User Sure, I don’t contend that the bank wouldn’t be good at accomplishing this even if they wanted to
Sahl#0630: the thing is, you can always restrict to a subset of Unicode
gwern#1782: (even if you weren't lucky enough to get an admin to ban you by typing out the nick, you could still make the original look bad when 'their' nick showed up in various places)
StellaAthena#3530: My question is: if you wanted to, brought on some competent people, and gave them what they needed, is it really that hard?
Sahl#0630: but it’s hard to expand to a superset of ASCII/other weird codepage
bmk#1476: given what i've heard about banks, i'd second this. i bet their backend is probably *utterly horrifying*
Sahl#0630: I intern at a bank now
gwern#1782: 50 years of patched cobol and fixed-width records
bmk#1476: it's probably *really hard* because of all the technical debt and cruft that's accumulated
Sahl#0630: I’ve only seen other co-op code, but it’s very cursed
CKtalon#7792: i don't think it's the bank alone
CKtalon#7792: it's across banks that's the problem
bmk#1476: more importantly, theres a possibility that the slightest change breaks everything and they cant make it work again
gwern#1782: the more terrifying thing is how the code will embody all sorts of cryptic knowledge about the endless arcane tax and legal and financial rules, none of which anyone knows
CKtalon#7792: if a wire needs to be sent from B to that bank A (the bank in question)
Sahl#0630: Software development is not a bank’s core competency
CKtalon#7792: A might support unicode, but B might not
CKtalon#7792: then that causes an error
Deleted User#0000: even at a valley company, for us to migrate a column in the database sometimes took up to 3 months
Sahl#0630: don’t use names then, use a number
Deleted User#0000: and that's considered agile
CKtalon#7792: blame the SWIFT system
CKtalon#7792: it requires a name
gwern#1782: (it's also just the unknown unknowns. I'm sure there are people to whom the doppelganger attack is super-obvious, even back in 2004, based on all they know about unicode. as it happens, no one involved in mediawiki knew they didn't know how to avoid doppelganger attacks exploiting unicode)
Sahl#0630: I’m somehow doing both a MVP and test driven development + peer programming for the same project
CKtalon#7792: although all one should need is SWIFT code and account number
CKtalon#7792: and it's a 1973 system..
CKtalon#7792: so no unicode
Deleted User#0000: things take time when you have million lines of spaghetti code
EricHallahan#1051: That sounds right according to what my father tells me about databases. (Worked as a performance engineer until last year.)
bmk#1476: a local bank recently migrated their database to a newer system
bmk#1476: guess how much it cost
EricHallahan#1051: A lot.
Deleted User#0000: https://news.ycombinator.com/item?id=25975110 says it all
bmk#1476: **over $300 million across 3 years**
gwern#1782: anyone who thinks it's trivial to just migrate banking systems to handle unicode probably needs to be whacked with about a dozen of those 'Lies Programmers Believe about X' guides, covering names, unicode encodings, computers and bytes (I still remember being shocked by discovering support in X11 for systems where bytes aren't 8 bits), and so on
Deleted User#0000: yea, anything that is in finance, government, or healthcare
Deleted User#0000: you can expect the software to suck
Deleted User#0000: incompetence is astronomic
bmk#1476: on the one hand, holy fuck, it cost that much
bmk#1476: on the other hand, holy fuck, im impressed they even managed to pull it off at all considering just how horrifying it all is
EricHallahan#1051: Hey, we shouldn't forget about the possibility that they could still be using OS/2.
Deleted User#0000: makes you really appreciate even working for the shoddiest startup around the bay area
Deleted User#0000: lol
gwern#1782: os/2 is still used, you know
Aran Komatsuzaki#5714: asking deloitte to do real work is probably not a good idea.
Sahl#0630: is there a way for banks not to do software
Deleted User#0000: but it happens all the time
Sahl#0630: and just do the investing
Deleted User#0000: the rest of the world runs on these contracts and broken projects
Deleted User#0000: in the US, we had a disaster rollout of healthcare.gov
Deleted User#0000: and they sent in valley experts to DC to fix their stuff
Deleted User#0000: for some 100 million dollar contract work
EricHallahan#1051: It's sad that it still is.
Deleted User#0000: the microsoft of healthcare (Epic Systems), still runs on visual basic
Deleted User#0000: lol
CKtalon#7792: simply because the contractor knows the banks/government has the money
CKtalon#7792: so they just overcharge
CKtalon#7792: suckers
Deleted User#0000: can you imagine a billion dollar company running on visual basic?
Deleted User#0000: anyways, we complain about tensorflow here
Deleted User#0000: but really, if you are a dev wrestling with legacy code at one of these dinosaurs, its a lot worse
Deleted User#0000: lol
triggerhappygandi#0001: That's how MATLAB has justified its existence till now
EricHallahan#1051: Guess who decided to side with Oracle?
https://www.supremecourt.gov/docket/docketfiles/html/public/18-956.html
dmvaldman#4711: where do you see the decision?
EricHallahan#1051: They still haven't decided yet.
EricHallahan#1051: The oral argument was broadcast live back in October.
dmvaldman#4711: ah, so you're _actually_ asking me to "guess" 🙂
EricHallahan#1051: No, it's in the docket.
EricHallahan#1051: > Brief amicus curiae of The Mathworks, Inc. filed.
dmvaldman#4711: ah gotchya. nice find! misinterpreted "decide" to be directed to the supreme court, not to the side of 3rd parties
gdawg16#0493: ITS ALMOST WANDAVISION TIME
triggerhappygandi#0001: Marvel
triggerhappygandi#0001: :zucc:
fazz#8459: Only Goldmans has any tech edge or competence on sell side. The rest are a hilarious dumpster fire
StellaAthena#3530: Why is it so hard for people to remember that dystopian sci-fi is supposed to be a warning, not a life goal?
https://apnews.com/article/legislature-legislation-local-governments-nevada-economy-2fa79128a7bf41073c1e9102e8a0e5f0
EricHallahan#1051: Re: *Tomorrowland (2015)*
Daj#7482: _Peter Thiel has entered the chat_
Daj#7482: lets fucking go lmao
triggerhappygandi#0001: Lmao what
cognomen#6297: going full burbclave already?
triggerhappygandi#0001: Thiel flat out says he believes monopolizing is good. I bet if this becomes serious he would also shill for private governments and how they are good for society.
StellaAthena#3530: The only explanation I can come up with for Peter Thiel is "I am so rich that I am immune to negative externalities and therefore they don't affect my decision-making"
triggerhappygandi#0001: I somewhat liked his book on startups, where he outright calls on you to create monopolies. Just do it™
AI_WAIFU#2844: You know what. At this point, why not? Fuck it, let's see what happens. Best case senario this solves the NIMBY problem.
cfc#2691: companies usually solve problems better than governments, if there was some competition for local governments that could be nice
triggerhappygandi#0001: Yeah but... _greed_
triggerhappygandi#0001: Some of them could just.. hmm how do I say it.. choose to "optimize" poor people, so to say.
Daj#7482: Corporations solve _certain kinds of problems_ better than government
Daj#7482: It's all about incentives, and some incentives work well for certain kind of problems, other for other kinds
triggerhappygandi#0001: A lot of companies are just as inefficient as governments. Re: every single company that invented everything we now call legacy software.
triggerhappygandi#0001: Except microsoft. They seem to have evolved.
Daj#7482: yea there's an argument to be made that corporations just outperform governments because they're exposed to more selection pressure
cfc#2691: that's my point in saying "competition for local government"
Sahl#0630: that’s what democracy tries to achieve
cfc#2691: but the structure remains the same in democracy
Daj#7482: I'm actually pretty big in favor of Charter Cities and the like
Daj#7482: so yea, why the fuck not
cfc#2691: this would allow more freedom in local government administration types
cfc#2691: instantiate your local government as a direct democracy, as a monarchy
Sahl#0630: I guess federal and provincial governments exist to alleviate coordination problems
Sahl#0630: If cities could override higher level legislation then they could optimize at the cost of the country/province
cfc#2691: assuming government are legitimate
cfc#2691: i mean, i never signed the social contract, it's just imposed on us for existing in their border
cfc#2691: more like cattle on a ranch than citizens of a civilized society
cfc#2691: if someone optimized for 0 taxes and full anarchy i'd move there
Sahl#0630: I probably wouldn’t
triggerhappygandi#0001: There is a need for someone to spend money without any expectation for return
Sahl#0630: The real way to optimize for 0 taxes and full anarchy is to kill humanity
triggerhappygandi#0001: A company can't do that
triggerhappygandi#0001: I was happy while reading 0 taxes not gonna lie.
triggerhappygandi#0001: But if this is how hard it is to achieve, oh well.
cfc#2691: if the company can lose customers to another company they have some pressure
Sahl#0630: It would be nice to impose more evolutionary pressure on governments
triggerhappygandi#0001: Yeah.
Sahl#0630: but have fitness not be determined by the typical stuff because that’s how you get dictatorships
andyljones#7746: imo this captures a lot of thiel
https://www.bloomberg.com/opinion/articles/2017-01-12/bond-covenants-and-skeptic-skepticism https://cdn.discordapp.com/attachments/729741769738158194/807259619344646154/unknown.png
andyljones#7746: and, well, he's been handsomely and repeatedly rewarded for this kind of thinking. doesn't matter if he's wrong *on average*, long as the philosophy gets him a few long-tail payoffs right
andyljones#7746: (i am an admirer if you can't tell)
Sahl#0630: what if misaligned AI isn’t so bad
Sahl#0630: let’s try that
Daj#7482: Oh hey i always do this too, I've been trying to think of a name to call this trick
Daj#7482: I don't usually accept the inversions all the time but it's a good imagination exercise
StellaAthena#3530: Why? This is something I fundamentally don't get. Why are people so vehemently opposed to paying taxes?
Sahl#0630: same
Sahl#0630: I like what taxes buy me
StellaAthena#3530: Like, if you don't make enough money to live the quality of life you'd like to *because of* taxes sure, but that seems like a very narrow band of people and probably is impossible in some countries.
Daj#7482: Taxes are an amazing deal for the benefits they bring
Daj#7482: Even in countries that somehow inexplicably lack public healthcare
StellaAthena#3530: @Daj Fun fact: depending on which order my doctor and my girlfriend's doctor process our recent drug refills I may or may not be on the hook for $500 for 3 months of medicine
StellaAthena#3530: ("fun" is the right word to use there, right?)
Sahl#0630: Fun fact: my uni insurance is worse than my country’s insurance but overrides it, meaning I have to pay hundreds monthly
Sahl#0630: and I’m forced to take uni insurance
cfc#2691: i don't get amazing benefits for my taxes in brazil, all we get are corrupt politicians stealing 50% of our money
cfc#2691: and i'm morally opposed to taxes
cfc#2691: it's unfair to charge for a service i didn't agree to
Daj#7482: One time, I needed some emergency medicine pretty quickly, so when I got to the pharmacy, I was missing a certain form from the doctor. So the lady looked at me very firmly and said: "I'm really sorry, but without that form, you'll have to pay for it yourself." I swallowed heavily. "How much...?" "20€, and your insurance will pay you back once you mail them the required form"
Sahl#0630: oh no that must be so hard for you
Daj#7482: Living in a first world country is nice
Sahl#0630: I’m in a first world country but the law changed recently so it’s all a bruh moment
StellaAthena#3530: In the US people with conditions like epilepsy that can suddenly incapacitate you often carry around cards saying "please don't call an ambulance if you find me"
Daj#7482: literal cyberpunk dystopia shit
cfc#2691: in brazil the government puts an upper limit on how many medical schools there can be and how many students they can take, so it doesn't get cheap
cfc#2691: and doctors can't display prices on anything
cfc#2691: imagine the calamity of having a lot of doctors with market competition for prices
StellaAthena#3530: That is legitimately not the situation in the US
cfc#2691: we can't even sell blood here
cfc#2691: or plasma
cfc#2691: so there's usual shortages
StellaAthena#3530: I'm not saying you don't have it bad. I'm not even saying you don't have it worse
StellaAthena#3530: I'm saying that
> imagine the calamity of having a lot of doctors with market competition for prices
is a false description of the US system
cfc#2691: i know, just saying how it is
cfc#2691: here
StellaAthena#3530: :/
Deleted User#0000: US healthcare system is a scam, plain and simple
StellaAthena#3530: I know a little about it because my ex briefly looked at moving to Brazil before realizing it would kill him
Deleted User#0000: science and the therapies are real, everything else is fake
cfc#2691: good call on your ex
Deleted User#0000: it's all gift wrapped to look so nice too, so people don't question it
Deleted User#0000: i was hoping the pandemic would expose how the emperor has no clothes, but people seem to not want to fight and overthrow the system
StellaAthena#3530: He's really screwed though... He's polish, in the US on a student visa. He has a rare neurodegenerative disease that he can only receive treatment for from a handful of countries, which notably don't include his country of citizenship (Poland) or any country he has relatives in (Brazil, Greece)
cfc#2691: that's tough
StellaAthena#3530: Basically if he wants to be able to walk at the age of 35 he has to live in the US, Canada, France, Germany, the UK, Israel, Japan or maybe one or two other countries I am forgetting to name. Probably Australia? S. Korea?
Teven#6831: At least he has EU citizenship - of course his situation is terrible but at least he can move to DE/FR without having to deal with visas to stay healthy
StellaAthena#3530: Yeah. He's currently studying German for that reason.
bmk#1476: If he ever needs a conversational partner, you know who to direct him to lol
StellaAthena#3530: The medicine he takes requires an absurd amount of medical tech infrastructure to be able to produce, transport or store. It's made out of stem cells and synthesized in hamster ovaries or something like that. Needs to be stored in special liquid nitrogen fridges until injected
Teven#6831: I can't help but be in awe that this is possible at all
StellaAthena#3530: Yeah
EricHallahan#1051: Biology is *weird*.
cfc#2691: can't wait until we properly hack it
cfc#2691: create some protein micromachines
EricHallahan#1051: Locally we have these things called Spotted lanternfly.
EricHallahan#1051: Highly invasive.
EricHallahan#1051: Terrifying.
EricHallahan#1051: But yet entirely harmless to humans directly.
EricHallahan#1051: They are pretty much impossible to get rid of.
EricHallahan#1051: I think biocontrol is going to be the only option to get rid of them apart from genetic engineering.
Arrow#1878: Check out this guy on youtube called "The Thought Emporium". He does a lot of bio hacking videos.
Arrow#1878: One really cool one was "I Grew Real Spider Silk Using Yeast" which was fantastic
triggerhappygandi#0001: Man. That's rough
triggerhappygandi#0001: And by neurodegenerative do you mean it could fuck with his brain if he doesn't take medicines?
triggerhappygandi#0001: I feel blessed to only have massively poor eyesight.
triggerhappygandi#0001: > synthesized in hamster ovaries
triggerhappygandi#0001: I hope they don't hurt them for it
Daj#7482: I have some bad news for you about how bioscience happens
MicPie#9427: Those are very likely these cells/this cell line: https://en.wikipedia.org/wiki/Chinese_hamster_ovary_cell
StellaAthena#3530: Assuming "by fuck with his brain" you have "destroy his memory, ability to reason, and other things related to thought" in mind, no. That's one kind of neurodegenerative disease, but a neurodegenerative disease is just one that causes the degradation of parts of your nervous system. The central examples in most people's mind are things like Parkinson's or Alzheimer's disease, but in his case the primary impact is loss of motor control. We dated like five years ago so the details are a little fuzzy but I believe it effects both his brain (the motor control regions) and his nerves
triggerhappygandi#0001: :dogecri:
triggerhappygandi#0001: @StellaAthena yeah that's about what I thought. Like unraveling the brain or something.
StellaAthena#3530: Fortunately no. He'll be able to think just fine, but wheelchair bound due to an inability to direct his muscles to actually have his body perform the mechanical action of walking.
triggerhappygandi#0001: Man. The body is very weird
triggerhappygandi#0001: Sprouts some bugs in the hardware out of thin air
triggerhappygandi#0001: Is it genetic? @StellaAthena
triggerhappygandi#0001: Or does some outside factor causes it
andyljones#7746: y'make seven billion copies of a thing with a dodgy printer, well,
triggerhappygandi#0001: After 4.5 billion years I'd assume it would iron out most mistakes.
StellaAthena#3530: @triggerhappygandi I mean, the answer is "genetic with a strong environmental component" but that's a cop-out answer in the sense that it's true of basically everything terrible that can happen to you between the ages of 3 and 30 that
1. doesn't affect your parents
2. wasn't caused by a disease
3. wasn't caused by drinking lead paint or similar as a child
StellaAthena#3530: It has ironed out most mistakes
andyljones#7746: was about to say that, but realised i'd never seen a list of heritable conditions that have disappeared in the last few centuries 🤔
andyljones#7746: reporting issues would swamp it ofc
StellaAthena#3530: I think that time scale is too short. The primary piece of evidence that comes to mind immediately is the fact that the majority of pregnancies are believed to end in miscarriages
StellaAthena#3530: That implies a very sensitive self-corrective system that we don't even see
andyljones#7746: but would think there'd be at least one. porphyria maybe, as a plausibly less-common one?
MicPie#9427: General increase in quality of healthcare decreased the selection pressure.
andyljones#7746: idgi, can you expand on this?
andyljones#7746: quality of healthcare was a fukkin negative until the last ~hundred-ish years wasn't it
andyljones#7746: oh yeah lemme drain your blood
andyljones#7746: stick my dirty hands in this wound
StellaAthena#3530: It's awkward, because any good candidate likely has strong epigenetic influences and was likely eradicated or nearly eradicated before we discovered epigenetics
triggerhappygandi#0001: Do they end as miscarriage in animals?
triggerhappygandi#0001: Because we are getting very good success rates due to medical science
StellaAthena#3530: Health care received was negatively correlated with outcomes as late as the 1800s in some disciplines
Daj#7482: Still is with schizophrenia for some reason
Daj#7482: Medicine is weird
MicPie#9427: Yes, I guess that was kind of bigger turning point: https://en.wikipedia.org/wiki/Contemporary_reaction_to_Ignaz_Semmelweis
MicPie#9427: I also meant like in the last 50-100 years the pressure was going down.
StellaAthena#3530: Is this the guy who started washing hands between working in the morgue and delivering babies?
MicPie#9427: Yes!
StellaAthena#3530: My girlfriend is in public health and talks about this guy every chance she can
StellaAthena#3530: John Snow and Gin and Tonics as Malaria cures are other entries on her list of party conversation topics
StellaAthena#3530: We actually have an old sign from like 190X about G&Ts for malaria in our apartment
StellaAthena#3530: Probably because lithium fucking sucks
StellaAthena#3530: oh wait, wrong condition
CRG#8707: https://twitter.com/SilverVVulpes/status/939820606614274049?s=19
StellaAthena#3530: What drugs are given for schizophrenia?
Daj#7482: Antipsychotics, usually
Daj#7482: Which are just fancy tranquilizers
Daj#7482: But it's even stranger than that
Daj#7482: It was thought that schizophrenia doesn't even occur in more traditional societies until like the late 70s
Daj#7482: When they found similar occurrences of schizophrenic symptoms in _all_ populations, but the negativity of the symptoms directly correlated with wealth of the country
Daj#7482: The more rich the country, the more "evil" the voices, the worse the occupational dysfunction, etc
Daj#7482: weird as hell
StellaAthena#3530: Yeah, I've heard that
Daj#7482: well, this guy knows how to write an endorsement that will get me to read a book
MicPie#9427: Yeah, nature does not want to copy DNA 100%, thats a feature not a bug. 😉
triggerhappygandi#0001: It would cause immortality too
triggerhappygandi#0001: That's why DNAs lose their edges every time they replicate.
StellaAthena#3530: To clarify for anyone reading this, as I think it's ambiguously phrased: Connor doesn't mean that people in more traditional societies don't have symptoms like delusions or hallucinations. He means that the experience of those delusions or hallucinations is not perceived as negative by the experiencer. Much lower rates of "the government is reading my mail" and much higher rates of "angels are watching out for me" even though the basic delusion (agents of a powerful entity are spying on you) is the same.
StellaAthena#3530: (From what I have read before at least? Correct me if I'm wrong @Daj)
Daj#7482: You got it
Daj#7482: So many of these people never reported themselves as "sick"
StellaAthena#3530: (Or are never reported as sick by those around them)
triggerhappygandi#0001: I can attest to that being true
Daj#7482: The level of dysfunction is actually also very different iirc, but that may be due to different demands placed on them by society
Daj#7482: i.e. in traditional societies, they fulfill their roles to a much greater degree than schizophrenics are able to in our societies (where it is usually debilitating)
triggerhappygandi#0001: Depression isn't registered as a disease here. Just a "phase"
Teven#6831: here = ?
StellaAthena#3530: I wonder if you can examine this by specifically looking at schizophrenic adults who move to, e.g., the US as adults (say, 30, 35)
triggerhappygandi#0001: That's why India is like 16th on suicides per capita. That's higher than japan and all Scandinavia
Daj#7482: That would make for a great study
bmk#1476: Telomeres are not really the main cause of aging
Deleted User#0000: that's because "healthcare" of the past is really just a bunch of voodoo
Deleted User#0000: they used to think illness was caused by spirits, kid you not
Deleted User#0000: before the discovery of bacteria and viruses
triggerhappygandi#0001: No but DNA loses some molecules at the edge every time. This causes deterioration in replication, which is a part of aging
Daj#7482: to be fair, viruses are totally weirder than spirits
Daj#7482: they're evil little fat droplets that reprogram your cells
StellaAthena#3530: Don't even get me started on prions
triggerhappygandi#0001: All the more surprising one of them was able to detect them
Deleted User#0000: and its still a bunch of voodoo, even today! https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(17)32874-X/fulltext
triggerhappygandi#0001: Also the fact that they are 99% of all life
triggerhappygandi#0001: Practically all life is invisible
triggerhappygandi#0001: In retrospect
Deleted User#0000: thank god for evidence based medicine
Daj#7482: :smallbrain: The world is suffused by living spirits
:bigbrain: The Earth is covered by quadrillions of invisible lifeforms that do all kinds of things including making us sick
Deleted User#0000: i feel like i've truly seen the dark underbelly of our healthcare system
CRG#8707: Telomeres are really only a small part of aging. (Only 13% increased lifespan ~~in mice~~) <https://www.nature.com/articles/s41467-019-12664-x#:~:text=Strikingly%2C%20we%20found%20that%20hyper,normal%20telomere%20length%20controls%20(Fig.>
Daj#7482: Yea...
triggerhappygandi#0001: I see. I'm mostly basing it on one Kurzgesagt video I saw @CRG
Deleted User#0000: one reason i was excited to move to Berlin (when it was uncertain whether Trump would win) was the healthcare system there
triggerhappygandi#0001: They mentioned multiple causes of aging
triggerhappygandi#0001: With telomeres being one of them
CRG#8707: I recommend <https://nintil.com/longevity/>
StellaAthena#3530: Controlling it would be hard because you'd want comparison groups from four categories: people of the same/different cultural background combined with people who have only lived in the original / new country of the immigrants. Getting that data together would be challenging.
Deleted User#0000: it's actually not dysfunctional, i heard from a friend
Daj#7482: Yep, would be a huge project. It does feel like the kind of study that might have been performed though, schizophrenia has been studied internationally for decades after all
Daj#7482: German healthcare system saved my family's life
Daj#7482: It...just works
triggerhappygandi#0001: Just like Germans themselves
Teven#6831: tbh that sounds like most German families haha
Daj#7482: It was even wilder since we lived in America
Daj#7482: Father made really really good money in Hollywood
Daj#7482: all evaporated in <1 year
Teven#6831: ah OK that doesn't sound like most German families then
triggerhappygandi#0001: Wtf. So even actors in the US can go broke if they get sick?
Daj#7482: We faced living on the streets, then we just moved to Germany since my mother is German. My father always reminisced about how surreal it was, because all the doctors just instantly gave him what he needed, no questions asked, and it didn't cost a cent
triggerhappygandi#0001: Also, Hollywood ree
Deleted User#0000: the US pretty much tries to bankrupt you if you hit a chronic illness
Deleted User#0000: or some serious disease
Daj#7482: They gave us free social housing, even a check each christmas to buy presents for the kids (not kidding!)
Deleted User#0000: even insurance can't really save you completely
Daj#7482: Writer, actually, did quite well
triggerhappygandi#0001: Mfw Connor could afford cloud server prices out of pocket.
StellaAthena#3530: For a person with chronic illness I'm not really that sick. I'm 27, athletic, and in good physical shape. My healthcare would cost more than 40k every year out of pocket
triggerhappygandi#0001: Man
Daj#7482: Well, it all evaporated lol
triggerhappygandi#0001: 40k
triggerhappygandi#0001: A year
triggerhappygandi#0001: That's like 3 million rupee
triggerhappygandi#0001: Holy shit
Daj#7482: I complained when my premium went from 70€ a month to 110€
Daj#7482: (this is premium private insurance too)
triggerhappygandi#0001: Is there a one line explanation for this difference in prices?
StellaAthena#3530: Universal healthcare
Daj#7482: Incentives, collective bargaining
Teven#6831: I was always under the impression that rent-seeking and middlemen were really the bane of US healthcare
StellaAthena#3530: There's zero feedback between the customer and the people whose decisions influence prices too.
triggerhappygandi#0001: By middleman in healthcare I assume you mean the insurance people? @Teven
triggerhappygandi#0001: I hate banks so much
Teven#6831: yeah but I remember looking at a dependency graph and finding it thicker than expected
Teven#6831: maybe there's several layers of insurers ?
Daj#7482: The US is fractally fucked
triggerhappygandi#0001: Even when I have the opportunity to steal something I can't get away with it due to my inner voice. How do the bank people silence their morality when squeezing money from people in dire situations?
Teven#6831: anyway even if there was universal healthcare, in the current system the cost would be better distributed but people would still pay a lot more for the same quality of healthcare right
MicPie#9427: Afaik drug prices are also one of the highest in the USA.
StellaAthena#3530: Most US insurance plans have a scale where the amount you pay for care goes down as the total amount you have spent goes up. After a certain point, care is pretty cheap (or free, if you have really good healthcare). My girlfriend and I have phenomenal healthcare for people who have chronic illness. We spend 15k USD by early April and then aetna picks up the bill for the rest.
triggerhappygandi#0001: It's not like you're ripping off a tourist with fake salt water
Teven#6831: although collective bargaining helps with that
triggerhappygandi#0001: 15k/year? @StellaAthena
StellaAthena#3530: If I were to lose my job, the "free after april" thing goes away AND the price of care goes up
StellaAthena#3530: @triggerhappygandi Effectively, yes
Teven#6831: I wonder how much of it is also because people don't prevent disease cause it's expensive and let it fester -> then it's more expensive than if you had acted earlier (but then maybe you financially couldn't)
triggerhappygandi#0001: Still a lot. That's more than I make in a year entirely
Teven#6831: in poor areas of France this is a big problem
StellaAthena#3530: Fun fact: insurance companies were only required to cover preventative care under the ACA (Obama Care). Before that, many insurance companies didn't cover preventative care because it was "not necessary"
Teven#6831: yeah that sounds like it would make the whole system worse
Daj#7482: Hard to express how :bigbrain: this is
triggerhappygandi#0001: These are the same people that caused the 2008 crash and got away with it.
StellaAthena#3530: Different people actually, but they hang out at the same clubs
Daj#7482: something something Moloch did it
triggerhappygandi#0001: Yeah that's what I meant. People from same club
Teven#6831: But then the EU system is also built on top of young-doctor slave labour so that's gonna drive the costs down 😛
Teven#6831: very very very glad I didn't go that route after high school
triggerhappygandi#0001: I remember there was a company with assets worth $600**billion**, in 2008 no less, which went under during the crash
Daj#7482: I mean, so is the academic system
Teven#6831: I get the impression that US medical staff is better off than EU medical staff, but I don't know too many on the US side
Teven#6831: oh yeah, but I also think this is sort of unfair
triggerhappygandi#0001: Do EU doctors not make doctor money? In most countries being a doctor instantly means you're beyond upper middle class.
Teven#6831: it would be nice to compensate people to the extent that they contribute to society
Daj#7482: They do quite well in Germany at least yea, I know many rich doctors
Daj#7482: But the work is still extremely intense
MicPie#9427: In pharma you also have a trend towards expensive drugs for chronic illnesses because you can make a lot more money with them. Nobody cares about vaccines or similar treatments because they are super cheap and you have to apply them in the best case only once (but hopefully that changes with the pandemic, but I highly doubt it).
Daj#7482: Conspiracy theory: Anti Vaxx is funded by big pharma
Teven#6831: I feel like older doctors are fine ; but before 30 you're 1. dirt poor 2. working 36-hour shifts 3. unable to decide where you live
Daj#7482: or at least tacitly not opposed
StellaAthena#3530: In the US, the "Earnings per year" and "earnings per hour" look *very* different for doctors
triggerhappygandi#0001: My only hope is that when finally the boomer generation dies, the next one isn't as greedy when people's lives are at stake.
Daj#7482: I really think any explanation for complex social maladies that boils down to "X group of people is evil and greedy" is probably false
andyljones#7746: it's a bit of a mistake to-
andyljones#7746: wot connor said
triggerhappygandi#0001: Well, probably. But I get that impression from bankers/insurers et al
Daj#7482: Sure, they can _also_ be evil and greedy
StellaAthena#3530: @triggerhappygandi Some quick googling indicates that US doctors make more than 3x the median salary but less than the median wage
Daj#7482: But it's not enough of an explanation
triggerhappygandi#0001: How@StellaAthena
andyljones#7746: don't lean into that. it'll nudge you into trying to fix people rather than fix systems
triggerhappygandi#0001: Is tax different for doctors or what
Teven#6831: you just have to work a lot haha
andyljones#7746: IREAM, Incentives Rule Everything Around Me
Daj#7482: something something Inadequate Equilibria
StellaAthena#3530: Because I work 40 hours a week, so I have a higher hourly wage than a doctor who works 90 hours a week but is paid twice as much as I am.
Daj#7482: This seems like an unforunate acronym :guilty:
triggerhappygandi#0001: Ah
triggerhappygandi#0001: I REAM
triggerhappygandi#0001: :guilty:
Daj#7482: https://equilibriabook.com/
andyljones#7746: smdh
https://en.wikipedia.org/wiki/C.R.E.A.M.
Daj#7482: Read this, which explains why the US is constantly poisoning babies and no one can do anything about it
Daj#7482: (I think they stopped recently actually) |
Daj#7482: also https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
triggerhappygandi#0001: Looks interesting. I will look if there's an audiobook for it@Daj
StellaAthena#3530: **Correction:** the numbers I mentioned before are lifetime expected and debt-adjusted (going to med school is flipping expensive). If you just look at hours worked and salary paid doctors do well, but aren't rich
CRG#8707: (A good metaphor for aging/cancer/moloch): https://distill.pub/2020/growing-ca/figures/unstable.mp4
Daj#7482: It's pretty short and easy to read fwiw
triggerhappygandi#0001: Ohh. Cool
Daj#7482: also, very snarky
Daj#7482: lol
Teven#6831: that sounds like the numbers that matter then !
Teven#6831: but then an advantage is job security, which is something that people typically flock to
StellaAthena#3530: True
Daj#7482: ***STATUS***
StellaAthena#3530: (As long as you survive it)
andyljones#7746: now you've reminded me of the short-bowel thing, i *am* looking forward to the FDA getting a good kicking post-pandemic
Daj#7482: Prediction: Everyone but niche nerds will forget all the lessons learned in <10 years
Teven#6831: Status is also an important part, but I wonder how you'd quantify it
Daj#7482: In status coins
Teven#6831: OK fair
andyljones#7746: they'll forget the lessons, the institutions will remain
StellaAthena#3530: Fun fact: 2020 was the first year on record for which the number one killer of medical residents in the US wasn't single-person car crashes |
andyljones#7746: cf. all the shit that came out of WW2
Teven#6831: yeah, that's what institutional memory is for
andyljones#7746: national labs are still ticking over
Daj#7482: I have no idea how recalcitrant the public instutions really are, but I'm pessimistic
Daj#7482: but total human annihilation by AGI in 20 years anyways so lmao
triggerhappygandi#0001: Is there a timeline like "this is the time when things started getting downhill healthcare wise" in US?@StellaAthena
StellaAthena#3530: 1776?
triggerhappygandi#0001: Lol
andyljones#7746: magic googlable phrase is 'cost disease'
StellaAthena#3530: More seriously, the US never had a functional healthcare system
StellaAthena#3530: Basically nowhere on earth did pre 1930 or so
StellaAthena#3530: 1920?
triggerhappygandi#0001: It really is a magic googlable phrase
StellaAthena#3530: NHS was 1946 and was an early adopter of universal healthcare
Teven#6831: https://www.nature.com/articles/s41467-019-09102-3
Teven#6831: I think you'd enjoy this article
Daj#7482: neat
Daj#7482: I wonder if modern information economy changes this mechanic, and in what direction
Teven#6831: "For a period of one generation after each flood, new settlements appeared in safer places. However, respect for floods waned in the second generation and new settlements were established closer to the river. We conclude that flood memory depends on living witnesses, and fades away already within two generations. Historical memory is not sufficient to protect human settlements from the consequences of rare catastrophic floods."
triggerhappygandi#0001: It sounds so unintuitive to the uninformed
StellaAthena#3530: This is amazing. Can't wait to read it
triggerhappygandi#0001: It actually sounds like it shouldn't be possible.
fristiloverke#4159: https://www.youtube.com/watch?v=-ZbKzL6ikY8
fristiloverke#4159: is this the future of labour
Teven#6831: ooooh that's cool
triggerhappygandi#0001: Well, bullshit job is a real term
triggerhappygandi#0001: So why not
Teven#6831: > The original study was conducted for the performing arts sector.[1] Baumol and Bowen pointed out that the same number of musicians is needed to play a Beethoven string quartet today as was needed in the 19th century; the productivity of classical music performance has not increased. On the other hand, the real wages of musicians (as in all other professions) have increased greatly since the 19th century.
Funny example, but I can't help but wonder whether musicians are better now than then
Daj#7482: https://slatestarcodex.com/2017/02/09/considerations-on-cost-disease/
triggerhappygandi#0001: > Mumble rap, autotune, basically anything on youtube trending
Maybe not in the mainstream
Teven#6831: Hahahaha this is another debate but I could hardly disagree more
Daj#7482: _Mathcore has entered the chat_
triggerhappygandi#0001: But there are some gems
triggerhappygandi#0001: That I would prefer over Mozart
triggerhappygandi#0001: Any classical music lover can fight me
Teven#6831: You're always someone's snob and someone else's plonker; there's no winning
triggerhappygandi#0001: Indeed. Art is subjective
triggerhappygandi#0001: Except furry art
Teven#6831: Anyway I feel like you're talking about the means and styles of music, which strikes me as more of a capital/means of production innovation
triggerhappygandi#0001: You do not have the right to disagree
Teven#6831: I'd be willing to bet that the pure technical proficiency of musicians has increased in the last 500 years
Teven#6831: aka the labor / personal productivity part
xen0#3601: yo, hi, i've been out of the loop recently
xen0#3601: anyone got info on when pile model is coming out?
xen0#3601: i've heard january thrown around but forgot it and remembered about it now
gwern#1782: which pile model?
xen0#3601: 1.5b pile model
xen0#3601: basically, gpt 2 replica trained but on pile
bmk#1476: There are no promises
xen0#3601: so, it's still not here?
bmk#1476: We haven't released anything yet
xen0#3601: oof
gwern#1782: if it makes you feel better there are already bigger public models
triggerhappygandi#0001: How did Andrew Ng know that we will have something out by August? AI prophet?
bmk#1476: I will bet money that we don't have gpt3 out by August
triggerhappygandi#0001: Just to prove him wrong?
bmk#1476: No, to hedge my bets
nz#9710: maybe he already replicated it and he's planning to give it to eleutherAI by august 🤔
jrowe#5371: you.... bet hedger.
jrowe#5371: acting like all this is hard or something.
xen0#3601: the problem is, they wouldn't be runnable on colab
xen0#3601: and pile has significantly better data than original gpt-2
bmk#1476: I don't think I'd be able to agree with that with full confidence
bmk#1476: We don't know *that confidently* that pile is better for things other than math and medical stuff
triggerhappygandi#0001: gpt-2 was just wikipedia and reddit right?
triggerhappygandi#0001: @bmk please tell me Pile doesnt have words like _chungus_
xen0#3601: not reddit, sources from reddit
xen0#3601: if some news source was linked in reddit post then it was in gpt set
triggerhappygandi#0001: Ah yes
bmk#1476: No
bmk#1476: Webtext
jrowe#5371: "We created a new dataset which emphasizes diversity of content, by scraping content from the Internet. In order to preserve document quality, we used only pages which have been curated/filtered by humans—specifically, we used outbound links from Reddit which received at least 3 karma. "
jrowe#5371: looks like they tried to filter for quality
bmk#1476: It's a pet peeve of mine when people say that it was trained on reddit
triggerhappygandi#0001: Does Pile have any mention of chungus?@bmk
triggerhappygandi#0001: I do not want Neo to be polluted by big chungus
jrowe#5371: i think you've probably just guaranteed its repeated inclusion.
tin481#8570: Have you all read those OpenAI/Stanford HAI proceedings? https://arxiv.org/pdf/2102.02503.pdf
triggerhappygandi#0001: :guilty:
jrowe#5371: https://skylion007.github.io/OpenWebTextCorpus/
tin481#8570: "Participants suggested that developers may only have a six- to nine-month advantage until others can reproduce their results". The meeting was in October, so OpenAI may have been expecting GPT-Neo finished by April or July
jrowe#5371: "Since the data was no longer available via the Reddit API, I still had the data from my real-time ingest database. In the interest of research, I included these comments in the October 2017 dump. The comments from the real-time database will have a score of "null". This only affects a subset of /r/incels comments for the months of October and November 2017. "
jrowe#5371: you could probably edit out chungus yourself. then you could distribute OpenWebTextCorpus_NoChungus
triggerhappygandi#0001: Not necessarily neo
fristiloverke#4159: theyre probably thinking of the chinese
triggerhappygandi#0001: Or Google
triggerhappygandi#0001: Or Microsoft itself
fristiloverke#4159: now thatd be a plot twist
tin481#8570: No, the context is before the models go public
tin481#8570: "OpenAI and other organizations will not have a monopoly on large language models forever"
tin481#8570: I think they're talking about open weights
tin481#8570: Or at least a very large number of people/groups with access
EricHallahan#1051: Full context:
> Several participants noted that OpenAI and other organizations will not have a monopoly on large language models forever. Participants suggested that developers may only have a six- to nine-month advantage until others can reproduce their results. It was widely agreed upon that those on the cutting edge should use their position on the frontier to responsibly set norms in the emerging field. Additionally, some participants pointed out that, due to standard advances in technology, it will only become easier for other actors to replicate models like GPT-3 over time. This further suggests the urgency of using the current time window, during which few actors possess very large language models, to develop appropriate norms and principles for others to follow.
fristiloverke#4159: noble thought but how are you gonna set standards if you dont release anything
EricHallahan#1051: I think this is precisely what they are talking about. They explicitly mention "other actors," so it is very likely that they are discussing malicious organizations here.
EricHallahan#1051: Though the fact that a large language model can store text in a highly compressed representation gets me thinking... |
EricHallahan#1051: could a large language model make it past the great firewall?
fristiloverke#4159: you mean from china to the outside world?
fristiloverke#4159: sure
fristiloverke#4159: tiktok did
EricHallahan#1051: From the outside world *in*.
fristiloverke#4159: lol nah
triggerhappygandi#0001: What's the great firewall
fristiloverke#4159: the great firewall of china
EricHallahan#1051: https://en.wikipedia.org/wiki/Great_Firewall
triggerhappygandi#0001: Ahh
fristiloverke#4159: i.e. china blocking foreign websites
triggerhappygandi#0001: Probably not, since the best they could do is translate to Mandarin
triggerhappygandi#0001: But Chinese text would have its own cultural significance
triggerhappygandi#0001: Which forms a whole distribution
triggerhappygandi#0001: That probably couldn't be approximated properly?
fristiloverke#4159: regardless of the quality of the model, if it gets too big theyll just block it
bmk#1476: The firewall is pretty easy to get around
bmk#1476: 翻墙 ("climb over the wall", i.e. hop the firewall)
fristiloverke#4159: it is, but only because they government allows it
fristiloverke#4159: during the national days they took out pretty much all vpns
fristiloverke#4159: was really annoying
fristiloverke#4159: just to show: we still got all the power
bmk#1476: I don't think it was to show off at all
bmk#1476: There are very practical reasons they'd take out the vpns on those days, and also not take out the vpns on other days
triggerhappygandi#0001: Tor browser? @fristiloverke
triggerhappygandi#0001: As in due to terrorism threats?
bmk#1476: Yes
fristiloverke#4159: like what
bmk#1476: This is just national security 101
triggerhappygandi#0001: Someone could be planning a bombing or something
fristiloverke#4159: why would there be more threats during the national days
fristiloverke#4159: they dont take it out on other holidays
triggerhappygandi#0001: Large groups of people
fristiloverke#4159: chinese new year would be easier
bmk#1476: Important Schelling point for terrorists
fristiloverke#4159: everyone going by train
triggerhappygandi#0001: Plus anything that happens on national holiday would hit harder.
triggerhappygandi#0001: To the country's image
fristiloverke#4159: nah
mgostIH#0245: @triggerhappygandi Better for GPT-Neo to think that Big Chungus is funny
bmk#1476: Why does the white house get more security during the inauguration than when biden just gives a normal appearance
fristiloverke#4159: they can still see what youre doing even with vpn
mgostIH#0245: how
EricHallahan#1051: You can be tracked even through Tor if you make a single mistake. China already knows everything about you through their social credit system, so they would find you out fast if that happens.
fristiloverke#4159: im sure there are ways to get around it
fristiloverke#4159: but there are plenty of people who get arrested cause of things they did through a vpn
triggerhappygandi#0001: Btw, how do you become that guy from the meme "I am behind 7 proxies"?
bmk#1476: By not asking the question
triggerhappygandi#0001: :zucc:
bmk#1476: If you have to ask, you can't make it work
triggerhappygandi#0001: I know a few ways, but they seem very tedious
triggerhappygandi#0001: By a few I mean 2
triggerhappygandi#0001: I would have to be paranoid to try it
nz#9710: just use TOR?
triggerhappygandi#0001: Never used it
jrowe#5371: it's worth spending a week or so to get familiar with Tor and i2p and freenet and other distributed, anti-surveillance p2p type projects
jrowe#5371: theyre easy and it helps to understand what the software is doing when it comes up in the news or conversation or whatever
jrowe#5371: I think I've only ever seen one post, on reddit, from someone in Iran who was using it to circumvent The Man
jrowe#5371: everything else seemed more like the 7 proxies guy lol
Sid#2121: We have both a 1.3b and 2.7b model trained and ready to release... I don’t know what @bmk is talking about, he knows this |
Sid#2121: Should be within the next week
bmk#1476: We haven't released them yet
bmk#1476: And the *last* time we "almost" had a model ready, we ended up messing it up and then not having a model
bmk#1476: So i really don't think we should be going around and promising stuff
cfc#2691: guys, quick question, suppose you're training a neural net on time-series data, so xs are (-1, 300, 4), for 300 timesteps of 4 values each
cfc#2691: would it be better to reshape it to (-1, 4, 300), for the conv1ds to have effect on long term data?
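For what it's worth, a minimal sketch of the shape question, assuming PyTorch: `nn.Conv1d` expects `(batch, channels, length)`, so the 4 per-timestep values become channels and the kernel slides over the 300 timesteps, which is what the `(-1, 4, 300)` layout gives you.
```python
import torch
import torch.nn as nn

x = torch.randn(8, 300, 4)   # (batch, time, features) as typically loaded
x = x.permute(0, 2, 1)       # -> (batch, 4, 300): features become channels

conv = nn.Conv1d(in_channels=4, out_channels=16, kernel_size=5)
y = conv(x)                  # convolution slides along the time axis
print(y.shape)               # torch.Size([8, 16, 296])
```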
bmk#1476: This is a wild guess, but do those 4 data points happen to be the open, high, low, and close of a stock
cfc#2691: y-yes
gwern#1782: cfc feels seen
bmk#1476: To save you a lot of time, it won't work
bmk#1476: At all
cfc#2691: i'm trying this task for 2 yrs now
bmk#1476: Sorry to burst your bubble
cfc#2691: tried many different input reshapings, masking, transforming into images, predicting next price, next close variation sign, angular coefficient of a linear regression of the next four points
cfc#2691: autoencoder for feature extraction
cfc#2691: latest thing is making transformers pass through the tests
cfc#2691: then i'll try RL
bmk#1476: Let me make another wild guess: does the thing you're trying to trade in question happen to be bitcoin
cfc#2691: no, i'm trying forex actually
bmk#1476: Ah |
cfc#2691: got me a nice 15gb dataset
jrowe#5371: people with PhDs are inundated with opportunities to develop algorithms for hedge funds and banks to do this, with multibillions of dollars of resources
jrowe#5371: you're trying to compete with NASA at rockets
bmk#1476: I can list a few things wrong with what you're doing, but i don't want to because that would only encourage you to fix those issues only to run into even more issues
cfc#2691: what if i promise i won't try that this year if you say what's wrong?
cfc#2691: i really want to learn
bmk#1476: You want tick level data, not OHLCV candles
bmk#1476: And also i personally think that any kind of technical analysis is doomed to fail no matter how good you make it, but some might disagree with me
cfc#2691: i personally agree
bmk#1476: My advice would be just to give up
Sahl#0630: is this due to EMH?
bmk#1476: Kinda, yeah
bmk#1476: If you have unique data sources you have a lot of space to come up with some secret sauce
cfc#2691: i used to work at an investment company
bmk#1476: If you're just looking at the same price chart that everyone else is staring at, good fucking luck
cfc#2691: wanted to show them some good numbers to get access to the data
bmk#1476: I think the majority of professional traders/investors are full of shit
cfc#2691: but i never did
jrowe#5371: the majority of humans are full of shit
jrowe#5371: lol |
cfc#2691: i think so too
bmk#1476: Yes, that's true
bmk#1476: Anyways that's just my 2c
cfc#2691: they had tick-level data and didn't even use it
jrowe#5371: markets aren't rational. People are crazy. Hard to account for that algorithmically
cfc#2691: and didn't want to store it ;_;
bmk#1476: I don't think this is a good way to spend your time
Sahl#0630: if people were irrational in a consistent direction, you’d be able to make money consistently
jrowe#5371: 3d printing is awesome
bmk#1476: Something something "the rational response to irrationality is to remain rational"
cfc#2691: but the sunk cost fallacy has me by the balls
bmk#1476: https://www.lesswrong.com/posts/msJA6B9ZjiiZxT6EZ/lawful-uncertainty
jrowe#5371: I'm gonna sell my filament printer and buy a resin SLA printer
jrowe#5371: and be happy that I stopped wasting time on crypto and forex and daytrading
bmk#1476: I've personally wasted a load of time on trying to figure out the markets too, which i suspect is an extraordinarily common experience among engineers. I personally don't regret giving up at all
jrowe#5371: now maybe you could use gpt-neo and realtime finetuning on twitter feeds to produce signals?
jrowe#5371: then write a paper and watch the offers come in from international banks and funds
jrowe#5371: doesnt even have to work.
cfc#2691: yeah, damn
cfc#2691: i got a few datasets not related to trading |
cfc#2691: and actually had fun
cfc#2691: got accuracies that made sense, could see growth
cfc#2691: sorry to get all depressive over the chat, this is good news, fuck the financial market
cfc#2691: woohoo
jrowe#5371: lol
AI_WAIFU#2844: It's a rite of passage.
AI_WAIFU#2844: https://xkcd.com/1570/
jrowe#5371: the worst part is the global brain drain through the ones that are marginally successful, imo
Math ap Mathonwy#7453: LOL
Math ap Mathonwy#7453: My field of expertise IS Finance
Math ap Mathonwy#7453: and I honestly don't have much more to offer
Math ap Mathonwy#7453: markets are subject to extensive behavioural biases and distortions, and no, there is no robust model to predict which ones are relevant at which times.
Math ap Mathonwy#7453: its a stochastic process that can be dominated (for periods of time the length of which you cannot predict) by factors that are correlated in complicated ways
jrowe#5371: read: random walk go brr :brr:
Math ap Mathonwy#7453: no its worse than that
mgostIH#0245: What if I just use my intuition
mgostIH#0245: And common sense
jrowe#5371: anything exploitable will be leveled out by HFT
jrowe#5371: depend on the steady growth of the market and diversify
jrowe#5371: it'll continue working until it doesnt |
Math ap Mathonwy#7453: this is samuelson in the 1960's
jrowe#5371: lol
jrowe#5371: that's only half tongue in cheek
Math ap Mathonwy#7453: usually credited to Fama, but Samuelson published it years earlier
jrowe#5371: I learned it from my dad
jrowe#5371: who probably heard it in the 60s
Math ap Mathonwy#7453: Samuelson isn't a nobody, he won the Economics Nobel in 1970
Math ap Mathonwy#7453: so I have no idea why it gets credited that way
zphang#7252: also he wrote a textbook
jrowe#5371: because economics is esoteric as hell lol
jrowe#5371: its like AI - "normies" don't know Schmidhuber from a potato
Math ap Mathonwy#7453: unfortunately normies are much more likely to weigh in on economics
jrowe#5371: and I would have said Samuelson was Larry King if you asked me from a picture 😛
Math ap Mathonwy#7453: I just know his work
zphang#7252: personally, I blame it on finance money getting into economics
Math ap Mathonwy#7453: well, economics is intrinsically political
Math ap Mathonwy#7453: people care, and tend to have strong opinions about, how things are distributed within societies
Math ap Mathonwy#7453: that's not meant to start a fight or endorse any side at all
Math ap Mathonwy#7453: please don't interpret it that way
jrowe#5371: how many stingray spines for your fine stone axeheads. Also, please don't hit us with them. |
Math ap Mathonwy#7453: yes
jrowe#5371: here, we'll give you extra.
jrowe#5371: no matter how far you abstract out, you have frenetic apes at the base
Math ap Mathonwy#7453: no argument from me on that
Dal#7192: So I was thinking a few minutes ago when I hopped into the shower
Then I was thinking about my thinking.
I asked myself the question: Why do we think in language? What's the utility in forming coherent communicable thought?
So I thought for a few moments and considered:
Using the basis that the brain/neurons are a problem-solving in-out-association engine, other people and any responsive aspect of one's environment are roughly indistinguishable from other parts of one's brain. In terms of utility it's all a gestalt.
So for achieving any high level task (any problem worth *thinking* about), it makes sense to frame it in a communicable way. At some point you're going to draw on external parts of "your" overbrain to finish solving the problem. The utility of preparing the concept for transmission is high - higher than most anything else your brain could be abstracting at that time. And conversely, one's brain isn't particularly chatty when focused on a problem it can solve internally.
i.e. There's utility in thinking out the phrase "I want a burrito" because at some point I'm going to have to coordinate with a secondary association to achieve that goal.
Conversely: If I'm trying to visualize spacetime there's no utility in trying to communicate it, both because it'd be obscenely difficult and because the problem I'm trying to solve doesn't further draw on external associations.
Dal#7192: Does that seem sensible/nonsense to anyone?
jrowe#5371: grid cells. memory palaces. memory-prediction framework of cognition. near universal plasticity of the entire neocortex.
jrowe#5371: when you speak or act on thought, you're developing connections - those connections serve as "hooks" for concepts to build on, so talking, even to yourself, can help problem solving efforts
jrowe#5371: even horribly abstract things can benefit from repeated, varied explication, since it gives your brain more cross referencing and resources to work on it
Dal#7192: I agree, but I'm starting to think that's secondary. That insights like https://www.nature.com/news/2008/080411/full/news.2008.751.html are more fundamental
Dal#7192: Though I suppose those aren't exclusive to the question I was posing, so that's a good point
jrowe#5371: the whole memory prediction framework has had my noodle baked for over a decade. Whenever I think about how the human brain might accomplish something, there's at least a semi-lucid interpretation from that perspective
jrowe#5371: from navigating a room in the dark to psychedelic ego death to tribal politics |
Dal#7192: As opposed to a perspective that's more sensation/input-based?
jrowe#5371: more input based?
jrowe#5371: the internal model described by MPF is constructed by predictions elicited by inputs, so the state at any given time includes all the inputs
jrowe#5371: and all the processed inputs from t-n steps ago, etc
Dal#7192: Well, I don't see an alternative basis from memory. Even the MPF frames the process as sensation eliciting response based on pre-existing associations
jrowe#5371: right, you have to have a network of associations to contextualize an input, or it's just noise
Dal#7192: I think that doesn't apply to transduction, but for any eventual use, yes
jrowe#5371: in the case of people brains, noise gets filtered, or interpreted by some other context
Dal#7192: Yeah
Dal#7192: > there's at least a semi-lucid interpretation from that perspective
Are there any opposed hypotheses?
jrowe#5371: so part of what I was initially getting at, if you talk through an idea repeatedly, write it down, read about, etc, you're triggering a whole ton of different synaptic connections, increasing the surface exposed to noise and other ideas
jrowe#5371: and if other ideas are related, or if noise that fits a piece of another idea occurs, that can translate into brand new, e=mc^2 level thinking
Dal#7192: Yessss but we don't often instantiate (imagine) our thoughts that way as a matter of course
jrowe#5371: think of all the dream inspired chemistry discoveries
Dal#7192: It could simply be that the utility of doing so isn't worth it, but I'm still considering there's a distinction there
Dal#7192: Or at least, on the balance it makes sense to prime something to communicate with the overbrain
jrowe#5371: sure, but the brain works like that regardless of our subjective experiences
Dal#7192: Works like?
jrowe#5371: that was the big deal with the grid cell / memory palace paper |
jrowe#5371: it gave us a hook into the functioning of the human brain, with a very direct and real example in the memory loci skill
jrowe#5371: https://discourse.numenta.org/ - lots of good resources here
jrowe#5371: ymmv for their software, but theres a helluva lot of science that's on point
jrowe#5371: if you look in #art right now, those animations are eerily psychedelic - something those things are doing is similar to something our monkey brains do while on psychedelics.
Dal#7192: Yep, salient associations abstracting
Dal#7192: Thank you for pointing me to grid cells, those slot in nicely though the biology will take a while to comprehend
jrowe#5371: sure thing - the Thousand Brains idea is a good one as well, although I'm not as sold on that as the MPF foundation for intelligence
andyljones#7746: > I asked myself the question: Why do we think in language?
fwiw, lots of people don't
jrowe#5371: what would it have been like for the first few generations of genetically modern humans?
jrowe#5371: how radically different their concept of "I" must have been, lol
Dal#7192: That's part of why I thought there was a utility answer!
Dal#7192: I'm very bad at articulating the things I consider, and often enough I catch myself thinking through concepts without any use of language
Dal#7192: There's a distinction somewhere in the brain about when to "speak" internally, and I'd posit there's a utilitarian reason for it
Math ap Mathonwy#7453: I wonder if study of mammals that display language-like behaviours would elucidate
Dal#7192: I'd propose it follows a distribution relative to cooperation access
Dal#7192: A lion doesn't have to communicate much more than "stay the hell back"
Dal#7192: A wolf, though...
Math ap Mathonwy#7453: or a prairie dog |
Dal#7192: But without any studies on that metric I'll stick to figuring out whether there are obvious holes in the idea
Dal#7192: > Now imagine the same mug, but this time you grasp it with multiple fingers at the same time. Whereas before you had to move your finger to recognize the cup, now you might be able to recognize it with a single grasp. The columns associated with each finger don’t have enough information on their own to identify the cup, but connections between columns allow them to reach the correct answer more quickly. In effect, the columns “vote” as to what is the most likely object, and quickly settle on cup. The same process occurs across senses, so cortical columns that process visual input can communicate with columns processing touch. In fact, there are connections in the cortex between low level sensory regions that don’t make sense in the classic hierarchical model of the cortex but do make sense in the Thousand Brains Theory.
Dal#7192: This is multimodal efficiency like I was expecting, but I don't see how this result (or at least this summary) conflicts with a hierarchical model
Math ap Mathonwy#7453: yikes
Math ap Mathonwy#7453: https://www.nature.com/articles/s41586-019-1099-1
Dal#7192: Or based on the illustration here https://numenta.com/wp-content/uploads/blog/2019/01/16/images/classic-vs-thousand-brains.png, it's not so much that it conflicts but that they weren't expecting the process to involve consensus... which was unimaginative of them.
Math ap Mathonwy#7453: I'd read about that a long time ago, but they hadn't made it work yet
Dal#7192: Ooph. That's horrific
Dal#7192: I think we will have (actually already have) banished most of the soul in our lifetimes but we are nowhere near ready to reckon with it
andyljones#7746: https://www.youtube.com/watch?v=KDqh-r8TQgs
Math ap Mathonwy#7453: Yikes
Math ap Mathonwy#7453: well the nature article is going a step further
Math ap Mathonwy#7453: pure brain in a jar
Math ap Mathonwy#7453: take it out of the skull
Math ap Mathonwy#7453: hook it up
Dal#7192: Quickly finishing off my chain of thought. Is it really novel to consider that the brain uses a consensus based good-enough (salient) symbolic recognition system?
Dal#7192: I doubt I invented that in my bathrobe
Math ap Mathonwy#7453: I would guess not
Math ap Mathonwy#7453: but I couldn't point to any papers off hand
Dal#7192: I guess I'll poke at the paper that site linked and figure out where the field is from there https://www.frontiersin.org/articles/10.3389/fncir.2018.00121/full |
Math ap Mathonwy#7453: How on EARTH did the pig brain researchers get ethics review approval for that?
Math ap Mathonwy#7453: and since they apparently did, I question what the ethics reviews are even doing
Math ap Mathonwy#7453: The Yale University’s Institutional Animal Care and Use Committee decided NO OVERSIGHT was necessary for that work?
Math ap Mathonwy#7453: 🤯
Dal#7192: lol Yale ethics
andyljones#7746: their job, for once. it's a pig, you don't see abattoirs getting ethical approval
Math ap Mathonwy#7453: TBF there's at least nominal regulation on it not being needlessly cruel
Math ap Mathonwy#7453: now how well that's followed...
Math ap Mathonwy#7453: that's another issue
Math ap Mathonwy#7453: but I had thought standards for how research animals are treated was supposed to be more stringent than that
Math ap Mathonwy#7453: also resurrecting a brain outside its body would appear to me to present novel levels of ethical consideration. The way they were particularly blase about it is troubling. Would it be no big deal to do that to a human, just because they're legally "dead" already?
Math ap Mathonwy#7453: look I'm not vegan. (or Vegetarian) but I don't like the idea of subjecting people or animals to things that would be unnecessarily cruel.
bmk#1476: some day i need to run the math on how much animal suffering is indirectly inflicted by the average discussion about animal ethics
Math ap Mathonwy#7453: well fair
Math ap Mathonwy#7453: I was annoying one of my friends recently by citing papers that indicate evidence that plants appear to have pain like responses
bmk#1476: also im still searching for coauthors for my next paper "Energy and Policy Considerations for Human Learning in NLP"
bmk#1476: https://arxiv.org/pdf/1906.02243.pdf which will be basically a blow-by-blow parody of this paper
Math ap Mathonwy#7453: ok I'm fully on board with the idea that that paper is ridiculous
Math ap Mathonwy#7453: I would even suggest that journalists' obsession with that question is actually politically motivated.
Math ap Mathonwy#7453: oh sorry could I ask a MUCH more on topic question? |
Math ap Mathonwy#7453: I've looked to try and find one but have come up empty so far, does Eleuther have a set of code style guidelines?
bmk#1476: ```Recent progress in hardware and methodology for training humans has ushered in a new generation of humans trained on abundant educational material. These humans have obtained notable gains in life accomplishment across many tasks. However, these improvements depend on the availability of exceptionally large gastronomical resources that necessitate similarly substantial energy consumption. As a result these humans are costly to train and develop, both financially, due to the cost of schooling and electricity or classroom time, and environmentally, due to the carbon footprint required to fuel modern tensor processing wetware. In this paper we bring this issue to the attention of anthropologists by quantifying the approximate financial and environmental costs of training a variety of recently successful humans. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in educational research and practice.```
bmk#1476: nope, it's a free for all
bmk#1476: cc @StellaAthena @Daj what do you think of my proposed abstract for "Energy and Policy Considerations for Human Learning"
Math ap Mathonwy#7453: Surely that's exactly the kind of work GPT-3 could rapidly accelerate?
Math ap Mathonwy#7453: Thank you.
StellaAthena#3530: As a *reductio ad absurdum* its rather lacking, as I read that and go "yeah that's reasonable"
StellaAthena#3530: Or at least "I can imagine this paper continuing reasonably"
bmk#1476: oh
StellaAthena#3530: reducing cost, reducing energy consumption, and increasing equity are good things
bmk#1476: it's an exact play-by-play parody of their abstract, maybe to emphasize the absurdum i need to diverge from their thing
StellaAthena#3530: They are desirable outcomes to achieve
bmk#1476: the (well, *supposed*) absurdity is that the proposed solution to reducing cost, energy consumption, and inequity is to *reduce people*
Math ap Mathonwy#7453: a modest proposal for a new age
bmk#1476: maybe i'd stick in a sentence somewhere casually proposing "the prioritization of energy efficient humans for gastronomical purposes" or something that's basically a euphemism for "mass murder people through engineered famines"
cfoster0#4356: pls no
StellaAthena#3530: IDK, sounds like a euphemism for the green revolution part 2?
cfoster0#4356: Lol is this for submitting to a joke journal?
Math ap Mathonwy#7453: who can tell anymore?
bmk#1476: Well, i personally think ecofascist people are off their rocker, but yes i forgot there exist people who would take it entirely unironically |
bmk#1476: Or whatever the right term is
StellaAthena#3530: Ecofascist is either an *egregiously* wrong term to use or we have totally unrelated things in mind
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/807424719593930782/Screenshot_2021-02-05-18-37-20-027_com.android.chrome.png
bmk#1476: Anyways i don't really care about the terminology
StellaAthena#3530: Yeah that's not remotely what comes to mind when I read that
StellaAthena#3530: What I think of is genetic augmentation
StellaAthena#3530: Not genocide
bmk#1476: My point is that I don't think murdering people through planned famines is a good idea (hot take of the week, right), but also there are a scary number of people who do
StellaAthena#3530: Do you think that the authors of the Parrots paper believe mass murder through planned famines is a good idea?
bmk#1476: No
bmk#1476: Absolutely not
bmk#1476: This was supposed to be a reductio ad absurdum anyways
bmk#1476: In particular, mirroring the claim of "just train less models"
bmk#1476: It's definitely a massive strawman
Math ap Mathonwy#7453: you might hope a citation for (Swift, 1729) would help people pick up on that, but then...
StellaAthena#3530: The core problem is that comparing them to such people because they dared to be overly Luddite basically validates everything they say about the ML community. The only thing your attempt at satire does is validate them
bmk#1476: Ok, fair point, so maybe it's a bad idea
StellaAthena#3530: Also, weren't you saying last week that alignment arguments made you less bullish about LMs > 200B?
StellaAthena#3530: Even if your reasoning is wildly different, you do think that there are moral reasons to be concerned about the current pace and direction of NLP research
bmk#1476: Yeah i didn't really think it through |
bmk#1476: The half of my brain that wants to halt all capabilities research is constantly arguing with the half of my brain that wants 1 quadrillion parameters or bust
bmk#1476: So yes i do think halting capabilities is good for alignment still
StellaAthena#3530: I've always been more sympathetic to people like Timnit than most people here, but tbh the last two months have definitely made me far less sympathetic to their loudest critics.
bmk#1476: I guess the right way to think about stuff like emissions for big models is kolmogorov complicity? Like getting people to train less big models is a good thing for alignment even if the justification is kinda weak
StellaAthena#3530: I have no idea what you're gesturing at, tbh
bmk#1476: like, i think the emission stuff about big models is a really bad argument against big models
bmk#1476: but i would also want big models to slow down because i'm worried about capabilities
bmk#1476: but lots of people buy the carbon argument and not very many are into alignment
bmk#1476: so should i just play along with the carbon argument and not make a big deal out of it, because it's ultimately good for alignment?
Kazumi#1297: If the carbon argument worked, people wouldn't be mining bitcoin
bmk#1476: if it didnt work, *more* people would be mining bitcoin
Dal#7192: There's a pretty legitimate argument to be made in the "How much energy would it take an algo to learn and answer this question vs how much energy would it take a human to learn and answer this question" comparison
Math ap Mathonwy#7453: I genuinely think the people pushing the carbon argument are being disingenuous
Sahl#0630: The carbon argument won’t work because companies will just start a campaign where they plant a tree for each 100 parameters
Sahl#0630: And everyone will go “epic”
bmk#1476: there's still going to be *strictly less* capabilities research happening as a result, no?
Sahl#0630: That’s true
bmk#1476: though maybe it won't be significant
bmk#1476: hmm
bmk#1476: i have no idea tbh |
bmk#1476: i honestly have no clue
StellaAthena#3530: Daily reminder that deforestation was solved a decade ago and we've had net canopy growth year on year for a while
Sahl#0630: I know that planting trees doesn’t solve anything
Sahl#0630: People convinced by stupid reasons can be unconvinced by stupid reasons
jrowe#5371: or they'll "save" 100 trees by buying land with trees on it, then getting tax breaks for carbon credits
StellaAthena#3530: No, I mean we literally do not have a deforestation problem
StellaAthena#3530: We used to
StellaAthena#3530: Then we fixed it
Math ap Mathonwy#7453: I tend to suspect its journalists worried because they realize GPT-3 is very close to capable of doing their jobs.
StellaAthena#3530: Now we don't
Sahl#0630: I didn’t know that
jrowe#5371: Amazon does, globally though, you're right
bmk#1476: i think what jrowe is saying is that we have other problems but the general populace doesnt realize that deforestation isnt the only problem
Sahl#0630: But my point stands
Dal#7192: Clicks > Rigor
bmk#1476: which.. not sure i buy the argument?
Dal#7192: It doesn't really matter what the most-salient truth is, just so long as you achieve defensible interest
bmk#1476: but idk
jrowe#5371: ocean acidification, scary af
bmk#1476: i think we've gone off topic |
bmk#1476: back to capabilities and stuff
StellaAthena#3530: I don't disagree with y'all, I just wanted to point out that deforestation is a non-problem. We have real climate problems. Just not that one
StellaAthena#3530: Does anyone in the alignment community compare AI research to virology and microbiology?
StellaAthena#3530: They've been grappling with similar ethical and sociological issues for decades
bmk#1476: no clue
Dal#7192: I envision a lot of intersecting concepts but I don't see them as especially similar
Dal#7192: micro vs macro
StellaAthena#3530: They aren't mechanically similar
bmk#1476: the main problem is the technical challenges are very different imo
Math ap Mathonwy#7453: you meant risk?
jrowe#5371: I think it's too late to pump the brakes, if gpt models can actually generalize to agi, then it's a matter of engineering and the race is on
StellaAthena#3530: But I think we can learn a lot on a sociopolitcal and risk management level from looking at how they handle what's called Gain of Function research
bmk#1476: and if you buy into the idea that the technical challenges are most of the challenge then youd view the policy similarities as trivial
EricHallahan#1051: Carbon credits are the real problem.
StellaAthena#3530: Even if they do a terrible job handling it, analyzing it so we know to do something else is still worthwhile
bmk#1476: what's that?
Math ap Mathonwy#7453: you study what a virus will do if it has a capability, by... engineering a version of the virus with that capability
jrowe#5371: how do you make the leap between ostensibly well understood examples of disease and "this chatbot can kill us all"?
StellaAthena#3530: You can learn a lot about microbes by modifying their behaviors. Experimenting with transforming a contact-transmitted one into an aerosolized one, or increasing/decreasing its lethality
Dal#7192: GPT does not generalize to AGI, it's a tool that can be configured as an AGI |
StellaAthena#3530: This is also obviously Exceptionally Dangerous
Dal#7192: AGI is a structural designation not a processing one
Math ap Mathonwy#7453: bacterial research is, IMO, safer in principle
jrowe#5371: the first mover advantage goes beyond nuclear weapons, so there's no force on the planet that could stop the race
StellaAthena#3530: Like I said, I'm not speaking on a technical level at all. I'm solely talking about risk management and sociopolitical issues
Sahl#0630: alignment won’t be a science and is all or nothing, unlike diseases
Math ap Mathonwy#7453: virological research is trickier
Math ap Mathonwy#7453: you can limit a bacterium by crippling it metabolically (and then providing that in the lab)
Sahl#0630: plus people are convinced of the dangers of disease but aren’t of AI
Sahl#0630: many of the core assumptions vary wildly
StellaAthena#3530: I'm clearly not communicating well because nothing you guys are saying are even replies to what I said, let alone refutations
bmk#1476: stella i'm still listening
StellaAthena#3530: I'll try again in a bit
StellaAthena#3530: After thinking it through some more
bmk#1476: we can take the on topic talk to #off-topic and leave the off topic chat here in #on-topic
Dal#7192: Where's the ML field's thinking on long-term learning? My impression is an emphasis on datasets and few-shot training
Math ap Mathonwy#7453: I'm Sorry, I was thinking through it on the technical side
StellaAthena#3530: Assuming you mean "learning throughout the deployed use of the technology" this is typically called "lifelong learning." I don't know a whole lot about it, but it's been something people have studied extensively.
StellaAthena#3530: Not in the context of LMs so much
StellaAthena#3530: That's something we're interested in in #deleted-channel |
StellaAthena#3530: There's an interesting related phenomenon where you design technology actively aware of the fact that the design lifecycle you have planned will make at least some of your work obsolete
bmk#1476: snarky remark: this is the norm in ML, some people just dont seem to have noticed and are still surprised by it
StellaAthena#3530: Oh absolutely
AI_WAIFU#2844: No because that's how you summon horrible eldritch memes.
Math ap Mathonwy#7453: IMO, you're right about AGI having risk similarities to Virology research. But unless you're going to make the AI equivalent of "Level 5" CDC labs, with multiple layers of Airgap between the AI and the world. I'm not sure what would be transferable
bmk#1476: @Math ap Mathonwy but ai box experiment
AI_WAIFU#2844: One minute it's working in your favour, the next minute you've completely lost control of the situation.
StellaAthena#3530: I mean.... why not? Air gap networks are a thing
bmk#1476: ok so ill just continue the status quo of me doing usually-not-utterly-retarded meme spreading
Math ap Mathonwy#7453: that is a path that could be considered, you probably also want equivalents for sterilization protocols
bmk#1476: can we please not have this devolve into an ai box debate
StellaAthena#3530: Nobody's talking about AI boxes but you
bmk#1476: that's just because nobody else is using the literal term "ai box"
jrowe#5371: level 5 virus research lab
jrowe#5371: lol
bmk#1476: putting the ai on an airgapped network to study it is literally the definition of an ai box
Dal#7192: Is there a term for placing an AI in a series of more sophisticated simulations until it graduates to the real world?
Dal#7192: Under the premise of training it to never be certain it can safely betray you
AI_WAIFU#2844: Suicide
StellaAthena#3530: Isn't this basically the conclusion of (TV show spoilers) ||the Good Place||? |
AI_WAIFU#2844: You don't ever want any sort of adversarial relationship between you and the AI. That should never be a thing.
jrowe#5371: one of openai's training environments is(was?) literally a virtual machine with Ubuntu and an open internet connection
jrowe#5371: I think educating decision makers and creating policy towards beneficial ai is the best way, any sort of boxing is likely to fail
AI_WAIFU#2844: Like I think boxing is still a good idea. But you should just build your entire security stack around the assumption that the AI can leave the box whenever it wants.
AI_WAIFU#2844: Even if there are multiple layers of air gaps and security protocols
bmk#1476: whats the point of the box, then?
jrowe#5371: or design an ai that can be trusted
bmk#1476: this is square in the middle of "draw the rest of the fucking owl" territory
AI_WAIFU#2844: To keep morons away from your doomsday device.
Math ap Mathonwy#7453: It makes people who don't know any better feel safer?
jrowe#5371: haha, haven't heard that before
Dal#7192: We're all morons next to a SI
bmk#1476: so it's not to protect you from the AI but rather the AI from everyone else
AI_WAIFU#2844: Now you're getting it.
EricHallahan#1051: There is a joke about my hearing aid that goes along those lines.
Dal#7192: Bear with me for a sec: If we accept that any working AGI design is functionally similar to a brain, is there any reason to distinguish the control problem for AGI from the same problem applied to humanity?
Dal#7192: Broadly, do we expect that, because we coded a brain equivalent, we can solve control there even though we still can't in biology?
bmk#1476: > If we accept that any working AGI design is functionally similar to a brain
bmk#1476: i don't accept
jrowe#5371: even if it's limited to the speed of a regular human, it's still software and can be networked |
jrowe#5371: if it's not limited in speed, it can be scaled up
Dal#7192: Okay, can you give an example distinction between the two?
jrowe#5371: speed being shorthand for subjective perception of time between thoughts relative to people
Dal#7192: On a high level the structures look very similar to me in functionality and role
bmk#1476: way too much wiggle room in "functionally similar" for me to go at it
Dal#7192: fair
jrowe#5371: new neuroscience podcast from Sam Harris just dropped, I like these ones
gwern#1782: (turns out to be ~285lb right now, should anyone still care. I sprained my butt to find this out for you.)
tin481#8570: Is your question how similar modern DL systems are to the brain? Because they only correspond very loosely
Dal#7192: No, just poking at the current thinking in the field
Dal#7192: If we assume/figure we'll have an easier time controlling AGI than BGI
tin481#8570: Usually, we think the opposite. Humans have been crafted by, and are subject to, the constraints of evolution. We have a strong evolutionary incentive to get along.
tin481#8570: It's not clear that a more powerful system will have any such inclination
Dal#7192: Indeed. Though our constraints are very informal
Dal#7192: We deal with people acting outside our desires continuously and have sophisticated systems to handle when and how they do so
Dal#7192: We only get away with it because no psychopath is a singleton
tin481#8570: I guess I'm confused. What are you trying to say? Aligning an AI is different from "aligning humanity". AGI is not well thought of as "a new person"
Dal#7192: Does the nature of being programmed suggest that code with similar sophistication to a human brain would be easier to enforce control over than a biological brain?
tin481#8570: No one knows how to program a brain, even a simple one. In ML, we don't program, really. Instead, we take some large set of numbers and subject it to strong optimization pressure. If the set is big enough and the optimization is strong enough, we get something useful.
tin481#8570: It is highly non trivial to enforce a constraint in ML |
Dal#7192: So your answer would be no, neither problem looks immediately surmountable
tin481#8570: I think alignment is one of those areas where your thinking has to be precise, like math and physics. Analogies, natural language, can only get you so far.
Kazumi#1297: complex patterns can emerge from simple rulesets
Louis#0144: Hi nerds
EricHallahan#1051: Hello
EricHallahan#1051: `sudo make me a sandwich`
blackdaku#3072: Hello all
cv___#1146: Hi. Can you help? I didn't get it from the paper. Is there a way to use pretrained CLIP and then finetune on a custom dataset? I have outputs from different GANs, say 30, and the task is to pick the most relevant pic. Can I pass image embeddings to CLIP without finetuning? How much data and compute is required to train the released CLIP?
triggerhappygandi#0001: `sudo apt-get install solution to all my problems`
cv___#1146: Sorry if this is off-topic for this channel, I didn't find a relevant one yet
cfoster0#4356: Theoretically you could fine-tune from the weights OAI released. Idk if I've seen anyone do it yet, so hard to say how much data/compute is required
cfoster0#4356: And yes you can pass it image embeddings without fine tuning. There are guides floating around that show you how to do that
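(For illustration, a minimal zero-shot sketch of the approach cfoster0 describes, using OpenAI's released weights via the `clip` package from github.com/openai/CLIP to rank candidate GAN outputs against a text prompt; the file names and prompt are placeholders.)
```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder candidates, e.g. one output image per GAN
paths = ["gan_00.png", "gan_01.png", "gan_02.png"]
images = torch.stack([preprocess(Image.open(p)) for p in paths]).to(device)
text = clip.tokenize(["a photo of a red sports car"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(images)
    text_features = model.encode_text(text)
    # Cosine similarity: normalize both, then dot product
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    scores = (image_features @ text_features.T).squeeze(1)

best = scores.argmax().item()
print(f"Most relevant: {paths[best]} (score {scores[best]:.3f})")
```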
StellaAthena#3530: FYI: We didn't invent CLIP and are not affiliated with OAI in any way.
cv___#1146: Thanks @cfoster0 . Is it reasonable to take only a few layers from OAI CLIP and train on a 10k dataset? I see scalability discussed in the paper, but only for setups like ViT vs. WideResNet on 256 GPUs. What about a closer-to-the-ground solution, say a single V100 16GB? So I'm interested in how it scales down.
RazikMazilya#0001: Hello, everyone. Heard about Eleuther from some people who play AI Dungeon. Read up on it, color me interested, Let's just say I'm very vocal about "ClosedAI" on the AI Dungeon Discord server.
RazikMazilya#0001: While I was reading the goals of the GPT-Neo project, I noticed that one of them would be to release a distilled version of it. I'm curious if, when this is done, it would be possible to finetune a distilled model or if one would need to finetune the full model and then distill the result.
Daj#7482: I probably shouldn't answer because I don't know much about distillation, but my cop-out answer would be "we don't know until we try because distilling models at this scale has never been tried before"
tin481#8570: Usually, fine-tuning must be done first. Intuition here is that language modeling learns a vast array of latent features, some of which are useful for almost any task. For any specific task, though, most of them are unnecessary. So if you have a large model and a task in mind, distillation 'prunes' those unneeded features
tin481#8570: But, you have to know which features are necessary before you prune. If you distill first, you're throwing out most of what the model has learned.
bmk#1476: The entire point of distillation is to crunch a big model into a small model, so you do have to train the big model first. Again, nobody has tried distilling at this scale yet so nobody knows what's possible |
tin481#8570: People have tried distilling BERT, Google uses the distilled model for search. It is possible to distill BERT finetuned on sentiment analysis, part of speech tagging, QA, GLUE, 50 - 1000x without loss. The language modeling objective itself can only be distilled ~50%, and that with some loss.
triggerhappygandi#0001: Why the specific 50%? @tin481
triggerhappygandi#0001: How do you reduce it by half
bmk#1476: BERT isn't exactly at GPT3 scale
bmk#1476: about 3 orders of magnitude off
tin481#8570: From some paper. I'll dig it up. There are two coexisting effects here.
1) self-distillation: First, train a model. Then train another model of the same size and architecture on the pseudolabels generated by the first model. The second model will have lower loss!
2) "finetuning distillation". First, train a large, general model. Then, finetune it on a more specific task. A lot of the capacity of the model is still dedicated to other things, so a (much) smaller model can be distilled from the larger, usually several orders of magnitude
tin481#8570: You can reduce most models by some amount due to (1), but you only get very large (100x) gains from (2)
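(For concreteness, a minimal sketch of the usual distillation objective: temperature-scaled KL against the teacher's soft targets blended with hard-label cross-entropy, per Hinton et al. 2015. The temperature and weighting here are illustrative defaults, not anything specified above.)
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soften both distributions; the T**2 factor keeps the gradient
    # scale comparable across temperatures (Hinton et al., 2015).
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage over a 10-class problem
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)   # would come from the (frozen) big model
labels = torch.randint(0, 10, (4,))
distillation_loss(student, teacher, labels).backward()
```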
RazikMazilya#0001: Apparently one can fine tune distilled GPT2 directly
https://github.com/huggingface/transformers/issues/2141
tin481#8570: Sorry, maybe I wasn't clear. What I'm saying is that if you distill, then finetune, you'll suffer a large performance penalty vs finetune, then distill
tin481#8570: You can finetune any model
RazikMazilya#0001: Given GPT3 models size, it would be borderline impossible to fine tune them for cheap, if at all
RazikMazilya#0001: So maybe the performance loss is worth it, and in ~~some~~ most cases the only option
RazikMazilya#0001: Unless the performance penalty makes it literally untrainable on any hardware, but we’ll have to wait and see
cfoster0#4356: No one knows the performance penalty. Maybe it's small for the application. Who knows
tin481#8570: I guess I'd warn that you may not see gains vs existing models at that size.
RazikMazilya#0001: Honestly, I’m actually considering running some distributed training on an entire computer lab at my college if it releases before I graduate. |
RazikMazilya#0001: What do you mean?
tin481#8570: Have you tried GPT-2 for your use case? It may perform similarly to a distilled GPT-Neo of similar size
RazikMazilya#0001: Personally, after having seen GPT3 outputs, would rather use a similar model if it ends up possible. The application I’m looking at modifying already uses GPT2 as well
RazikMazilya#0001: By “perform” do you mean speed or quality of output?
tin481#8570: Quality
RazikMazilya#0001: I thought the point of distilling it was to maintain similar quality
RazikMazilya#0001: While reducing the complexity to run it
tin481#8570: That's the catch 22: fine-tune then distill has no loss, distill then fine-tune has large loss
tin481#8570: Distilling is mostly useful for decreasing the cost to serve a model. The training cost is actually higher
RazikMazilya#0001: So given that info, I guess it’s hopeless to do what I want to do unless I can convince someone to let me access a TPU pod, which will probably never happen
RazikMazilya#0001: Maybe running distributed training on a bunch of the gaming grade computers at the campus lab may not be a bad idea
RazikMazilya#0001: Too bad I graduate in a year/year and a half
tin481#8570: Hopefully the model will be out by then!
RazikMazilya#0001: True!
tin481#8570: And I didn't mean to discourage you. As others have said, this is at a new scale, and there's a lot of variation between tasks. Certainly worth pursuing.
RazikMazilya#0001: I wonder if, in distributed training, the RAM of all the GPUs is used together.
Like, do 2x8GB GPUs yield 16GB total for use, or is it just two 8GB GPUs training separately?
RazikMazilya#0001: Excuse me if it’s a stupid question, I’m actually new to actually doing all this
RazikMazilya#0001: I don’t even know if the university’s domain controller will let me log into multiple computers, so I might need to test that first
EricHallahan#1051: To give you the (slightly cop-out) answer: It depends upon your hardware and your model. There are a lot of variables that go into optimizing training on a single system, let alone distributed training. |
EricHallahan#1051: It is a lot of optimizing for the hardware you have or can afford.
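(To make the "it depends" concrete: under plain data parallelism every GPU holds a full copy of the model, so 2x8GB does not behave like one 16GB pool; only model/pipeline parallelism actually splits the weights across devices. A toy PyTorch sketch of a naive two-GPU split, assuming two visible CUDA devices and arbitrary layer sizes:)
```python
import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    """Naive model parallelism: each device holds only its own shard."""
    def __init__(self):
        super().__init__()
        self.part1 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Linear(4096, 1024).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        # Activations hop between devices; this transfer is the latency
        # cost that makes naive splits slow across anything but fast links.
        return self.part2(x.to("cuda:1"))

model = TwoGPUModel()
out = model(torch.randn(8, 1024))
print(out.shape, out.device)   # torch.Size([8, 1024]) cuda:1
```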
erin#5432: lol anyone know how to install opengl through docker because whenever i add it to my dockerfile, build & run, it still tells me "no module named opengl"
cfoster0#4356: This probably isn't the best place to get help with that
Deleted User#0000: hey everyone my name is abdul im a recent college cybersec grad, always been fascinated by the world of entrepreneurship and software... learned about gpt-3 recently and now EleutherAI... nice to meet everyone. my question is, is it possible to make similar projects in EleutherAI as people are doing with gpt-3?
Deleted User#0000: thanks in advance... 🙂
EricHallahan#1051: Welcome! I suggest looking at the #rules first, as it may have some answers to your questions.
Deleted User#0000: ok 😄
Deleted User#0000: and thank you @EricHallahan
Deleted User#0000: ok looks like it can!
Deleted User#0000: is anyone kind enough to drop some pointers as to where i even begin?
jrowe#5371: the code is on github, so setting up a colab on Google cloud and setting up an environment could be a good start
EricHallahan#1051: > a recent college cybersec grad
Considering that, you may be interested in the pins in #alignment-general.
Deleted User#0000: @jrowe appreciate it
Deleted User#0000: @EricHallahan thanks!!!!!
Deleted User#0000: AI alignment 😮
Sid#2121: Alright @-Archivist
Sid#2121: I’m downloading all of mapillary
Sid#2121: Fancy hosting/downloading the images for me when scraping the metadata is done?
Daj#7482: @Sid and anyone else interested, happening in one hour: https://www.youtube.com/watch?v=6c3DyhaIhD4
Daj#7482: audience gets to vote and ask questions too, so should be fun, also to see how these people think
Sid#2121: oh i thought it was now
Daj#7482: nope I had the time wrong
bmk#1476: i just googled this david kelley guy
bmk#1476: and oh boy
bmk#1476: https://hpluspedia.org/wiki/David_Kelley
bmk#1476: >David is best known for his work with the AGI Laboratory including developing [...] an ICOM theory of consciousnesses
bmk#1476: this is going to become a consciousness argument, isn't it?
Daj#7482: Me: Mom, can I have rationalists?
Mom: We have rationalists at home
Rationalists at home:
Daj#7482: That's a bit mean but lol
bmk#1476: reading this, i feel like he's the super ultra anthropomorphization of AI kind of person
thenightocean#6100: "Connor Leahy DESTROYS David J.Kelly with facts and logic on AGI and consciousness"
bmk#1476: >David is somewhat notorious for his position on Artificial General Intelligence (AGI), placing AGI ethically on par with humanity
bmk#1476: this sentence screams anthropomorphization to me
Daj#7482: yea this is basically what I expected
Daj#7482: I think productive discussions can still occur if everyone is respectful
Sphinx#2092: He used to post on the human-level ai server.
Sphinx#2092: Or at least briefly. |
bmk#1476: idk, i feel like the inferential distance here is kinda big
bmk#1476: anyways the advantage of being in the audience is i can pull out the popcorn
Daj#7482: I think I've had success with people with even higher distance
Daj#7482: Worst comes to worst it's good practice
CRG#8707: The moderator is interesting: <https://en.wikipedia.org/wiki/Gennady_Stolyarov_II> https://cdn.discordapp.com/attachments/729741769738158194/808067210320281660/5287b39ed60f0ea0b6d5adfda322f204.png
bmk#1476: im really bad at the whole communicating with people thing
Daj#7482: In his profile pic he wears a top hat
bmk#1476: >Stolyarov started a crowdfunding campaign to raise money to give his children's book, Death is Wrong, to 1000 children.[1] In the book, he argues that death is an enemy[2] and encourages readers to help overcome it using technology.[1]
based and longevitypilled
CRG#8707: ~~Crowdfund The Sequences in schools~~
nz#9710: wtf I love stolyarov now
Daj#7482: Remember our roots kids, before there were rationalists, there were transhumanists and extropians
bmk#1476: transhumanists are the outgroup, normies are the fargroup
Daj#7482: > At the same time his company Artificial General Intelligence Inc has gotten into blockchain-related AI engineering
Daj#7482: :ultrazucc:
Sid#2121: :guilty: https://cdn.discordapp.com/attachments/729741769738158194/808070890234314782/Screenshot_from_2021-02-07_21-24-48.png
RazikMazilya#0001: It is based, but what particularly is it based on? Hmm I wonder...
nz#9710: where is my blockchain AGI
RazikMazilya#0001: Also, does anyone know about Shortly Read? |
Daj#7482: Goertzel is on the case
nz#9710: thank you mr goertzel very cool
andyljones#7746: it's a trap 😬 https://cdn.discordapp.com/attachments/729741769738158194/808072385159757841/unknown.png
RazikMazilya#0001: Lol, deleting an entire article because of one editor. How petty
Sid#2121: It didn't actually get deleted fwiw, hence its continued presence
RazikMazilya#0001: Yeah, but the mere suggestion is petty
Sid#2121: it turns out wikipedia user 'AynRandsGloveToy' *actually isn't* Gennady Stolyarov despite the perfectly fitting anagram
RazikMazilya#0001: That’s actually funny
Daj#7482: What a name
RazikMazilya#0001: And even if it was the person himself, it doesn’t justify deleting the entire article, merely editing out any biased parts would be enough
Sid#2121: I think it depends if the person's actually notable or not
Daj#7482: Eh, wikipedia is pretty strict about notability criteria
Daj#7482: yea
Sid#2121: like if i went and created a wikipedia for myself, they should absolutely delete it lol
EricHallahan#1051: I beg to disagree.
RazikMazilya#0001: Who determines who and what is noteworthy?
Daj#7482: The Wikipedia™️
Sid#2121: wikipedia editors
EricHallahan#1051: Their guidelines.
RazikMazilya#0001: Anyone can edit Wikipedia |
RazikMazilya#0001: That’s the whole point of it
EricHallahan#1051: That is not true.
andyljones#7746: fun fact: the majority of wikipedia is concerning how wikipedia should be edited
EricHallahan#1051: Some people are far more powerful than others.
Sid#2121: yea but there's guidelines, hence why not every single bored teenager has their own wikipedia article
Daj#7482: Wikipedia is an amazing work of applied alignment
andyljones#7746: within five years, it will be overtaken by the fraction of wikipedia concerning how the pages about wikipedia should be edited are edited
andyljones#7746: iterated wikipedia amplification
nz#9710: wait really?
andyljones#7746: no
Sid#2121: i mean, maybe
RazikMazilya#0001: I once had a vandalism note for Wikipedia on an IP address I was assigned. This is why I recommend against IP based punishments and warnings.
andyljones#7746: but there is a lot of it
nz#9710: I would be curious about an estimate of just how much discussion goes into editing wikipedia pages
bmk#1476: the failure cases are fascinating too
RazikMazilya#0001: Someday, AI will edit Wikipedia for us
RazikMazilya#0001: Lol
Sid#2121: pls god no
RazikMazilya#0001: I’m going to use DALL-E to put furry artists out of a job.
andyljones#7746: dragging it back to connor's showdown, here's a paper his nemesis apparently put into AAAI last year |
https://cdn.discordapp.com/attachments/558163388811706383/709433921909293106/ITSC18_-_ICOM_Theory_of_Conciousness.v2.pdf
Daj#7482: GPT4 replaces wikipedia :bigbrain:
Daj#7482: Oh boy, surprisingly short given the abstract
RazikMazilya#0001: :KrisWoke:
bmk#1476: did it get *accepted*?
Daj#7482: why do furries and weebs always have like an entire arsenal of custom emotes
bmk#1476: also i know gatekeeping bad but also this causes me pain https://cdn.discordapp.com/attachments/729741769738158194/808075042700525588/unknown.png
bmk#1476: the equations arent latex
bmk#1476: it's a screenshot
bmk#1476: a low resolution one
Sphinx#2092: Lol its certainly a trap. Though I take the extreme approach of lumping anything involving agi in the same bucket.
nz#9710: oh god please no
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/808075223311581204/unknown.png
bmk#1476: actually wait there's a bit of it that's real text
bmk#1476: but the rest is an image
nz#9710: ";"
EricHallahan#1051: I was about to say that.
RazikMazilya#0001: Why does Unicode have an entire Armada of Custom Emotes?
Sphinx#2092: Also is it a real paper or a workshop paper?
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/808075443290505226/unknown.png
StellaAthena#3530: rotfl
StellaAthena#3530: What is this nonce
Sphinx#2092: I feel like workshops have 100% acceptance rates so...
Daj#7482: https://xkcd.com/1953/
Daj#7482: I mean, if true that'd be cool, but I'm a bit skeptical of a 7 page paper with a fucking _screenshot_ in it lmao
StellaAthena#3530: I've reviewed for a workshop twice and out of the 12 total reviews (3 per paper, 2 papers per workshop) I was the only one to ever vote for rejection
EricHallahan#1051: There's *always* a relevant XKCD.
StellaAthena#3530: On the other hand, based on submission numbers the NeurIPS workshop I published at last year rejected applicants (that or it had a crazy high withdrawal rate)
andyljones#7746: that's just what you asked *last* time!
https://discord.com/channels/443778471798243330/558163388811706383/709433922605285466
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/808076174495973456/Screen_Shot_2021-02-07_at_3.46.04_PM.png
StellaAthena#3530: Fine, keep your secrets
andyljones#7746: ah rats, here
https://discord.com/invite/tscXbYN
andyljones#7746: would've thought the link'd resolve to a invite
nz#9710: Wait, when people say you need a couple publications at top tier conferences to be competitive, do workshops count?
RazikMazilya#0001: Lol |
bmk#1476: what is this server?
andyljones#7746: human level ai server, sez it right on the invite
andyljones#7746: (idk sphinx mentioned it and i went poking about)
bmk#1476: but like
Sphinx#2092: I pride myself in my consistency.
bmk#1476: what *is* it
andyljones#7746: https://www.youtube.com/watch?v=Kh0Y2hVe_bw
bmk#1476: thanks, birds. thirds.
Isaac McHorse#2007: are you for real
Sid#2121: David's strategy: say big numbers
bmk#1476: i think the corporation is an instance of agi argument is an interesting one but it does not run in the opposite direction
bmk#1476: agi is not a type of corporation
bmk#1476: > self motivating
bmk#1476: this just in, i am not agi
Chlorokin#6581: Well Alamos Gold Inc is a corporation.
Chlorokin#6581: I think that is what got everyone confused on this point.
bmk#1476: godwin's law get
CRG#8707: > Take out all human biases :yud:
Chlorokin#6581: Question is not if the baby will be Hitler; the question is will the baby be a human.
Chlorokin#6581: Corporations are already AGI, check, human rights for code check, only humans have agency check. |
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/808084125944643594/c6ad6d8fe78bce4a16413cd9cbcc2e63.png
Chlorokin#6581: Did Connor claim this?
bmk#1476: > Who thinks us Americans won't have the first sentient AI? I do. I bet China or Japan will beat us Americans. Anybody disagrees?
你好
CRG#8707: Nah, it's from: <https://slatestarcodex.com/2015/12/17/should-ai-be-open/>
bmk#1476: > it's not like it can replicate out of control
Chlorokin#6581: It's not like a process in Azure can replicate itself. Totally impossible.
bmk#1476: :guilty:
Chlorokin#6581: Conflation of the finite with infinity, check.
bmk#1476: connor if youre reading the chat, i think you should steer away from the politrib about regulation
Chlorokin#6581: When among libertarians, use public-choice theory as a case of misalignment. When among leftish people, use corporations.
CRG#8707: > Unplug
bmk#1476: off-switch argument: check
bmk#1476: > Are the 3 rules of robotics used by all AI developers?
bmk#1476: the 3 laws of ai:
1. catgirls
2. furries
3. ponies
Chlorokin#6581: The first rule of AI: there are no rules. The second rule: no catgirls. |
Chlorokin#6581: The true catgirl is the real girl you met along the way.
bmk#1476: inferential distance thing: i think the other people in this debate dont have the intuition that sufficiently powerful optimization ends up in weird edge cases
Chlorokin#6581: Maybe a good approach would be to go up a level and transfom economic analogies to ecological ones, which are harder to argue with and less sacred to Libertarians.
bmk#1476: connor is using all the words that trigger cached thoughts in libertarians
CRG#8707: 🍿
mgostIH#0245: Ponies above furries now that I read the alignment literature
mgostIH#0245: But I am waiting for a catgirl themed AI alignment work of literature
Chlorokin#6581: She got to you.
mgostIH#0245: who
Chlorokin#6581: Celestia
mgostIH#0245: Oh lmao
mgostIH#0245: Space racist pony singularity here I come 😎
Chlorokin#6581: "It would write its own utility function": check.
mgostIH#0245: It still seemed quite consistent
bmk#1476: "humans arent rational" connor 2021
bmk#1476: "wouldn't you think agi would be similar to a human"
"no :chad: "
mgostIH#0245: Once the singularity happens will I be able to use it to beat the stock market? :hmm:
Chlorokin#6581: Can you hear Pavel or is it just me who cannot?
bmk#1476: i cant hear him either |
Chlorokin#6581: After the singularity, all stocks will be meme stonks.
mgostIH#0245: So I should invest in Bitcoin rn
Chlorokin#6581: Doge.
mgostIH#0245: Guys I have an idea for alignment
mgostIH#0245: Tell the AI to turn me into the singularity
mgostIH#0245: So I'll have control over stuff and make everyone happy
bmk#1476: :ultrazucc:
Chlorokin#6581: https://www.cbsnews.com/pictures/the-10-greatest-twilight-zone-episodes/5/
mgostIH#0245: I am not 6 years old 😎
mgostIH#0245: Or should I ask yo mama
mgostIH#0245: Because she's so fat if she eats another cake she turns into the singularity
mgostIH#0245: :viriglasses:
Chlorokin#6581: I mean, she indulges her vice to such an extent it almost becomes a virtue.
bmk#1476: >don't connect it to the internet
bmk#1476: >what about the internet of things tho
CRG#8707: Radio signals from RAM goes brrrr
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/808090377096659024/unknown.png
bmk#1476: hmm who is this "ben goertzel" person
CRG#8707: > Scaling
bmk#1476: :ultrazucc: |
bmk#1476: y e s time for a l i g n m e n t
bmk#1476: i look forward to ai with emotional issues just like me
Chlorokin#6581: This but with Zuck.
bmk#1476: plot twist: this has already happened
CRG#8707: > Hansonian EMs :guilty:
mgostIH#0245: :surferzucc:
bmk#1476: cause of death: a pretty crappy agi
Chlorokin#6581: "There is another theory which states that this has already happened."
bmk#1476: elon musk: check
Chlorokin#6581: The "Let's attach our brains to a system we would otherwise mistrust" idea always mystifies me.
bmk#1476: dan elton is sama confirmed
CRG#8707: Orthogonality ✅
bmk#1476: david kelley is sama confirmed
Chlorokin#6581: Connor is Yudkowsky confirmed.
bmk#1476: would not be surprised if it was
Chlorokin#6581: Have you ever seen them in the same room together?
bmk#1476: :yud:
Chlorokin#6581: Uhgg. Why did you remind me of that?
bmk#1476: audience question time
bmk#1476: let's flood the comments with questions from eleuther people |
Chlorokin#6581: From the Youtube comments: "I feel that people are jealous of AI and want it to fail."
Sid#2121: the entirety of this guys argument is just shilling his proprietary AI system
bmk#1476: lol
bmk#1476: i hope he asks this question https://cdn.discordapp.com/attachments/729741769738158194/808097883080622110/unknown.png
Chlorokin#6581: Our question should be: how is your proprietary AI system doing on common benchmarks?
bmk#1476: extremely unbiased question of course
bmk#1476: oh god the mother of all politrib
Chlorokin#6581: politrib?
bmk#1476: political tribalism
bmk#1476: politrib is the mind killer
bmk#1476: > how to verify agi
you'll know.
bmk#1476: [ominous music plays]
Chlorokin#6581: At least David's time efficiency is to be commended.
bmk#1476: keras: ✅
bmk#1476: nflt: ✅
Chlorokin#6581: Lol at this.
bmk#1476: > locality (probably)
bmk#1476: voting time: eleuther brigade |
bmk#1476: bamboozled
bmk#1476: an actual infohazard https://cdn.discordapp.com/attachments/729741769738158194/808108606201135155/unknown.png
Chlorokin#6581: https://tenor.com/view/tastes-like-victory-victory-success-coffee-gif-4512955
CRG#8707: :guilty: https://cdn.discordapp.com/attachments/729741769738158194/808109437746282556/1880a9f1e5b2014770b14a06d2b19d2e.png
Chlorokin#6581: Literally Bostrom's first example of an ineffective patch considered a good idea ✅
Louis#0144: that name sounds familiar
Louis#0144: i have no idea why
Sahl#0630: maybe it’s an antimeme
Louis#0144: no
Louis#0144: ive seen that name on reddit
Louis#0144: i dont remember when
bmk#1476: Matt Levine's alter ego
Louis#0144: OH
Louis#0144: no
Louis#0144: thats it
Louis#0144: youre right
Daj#7482: Well that was pretty fun, about what was to be expected
Daj#7482: Did no one appreciate my epic paperclip prop?
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/808122743266082886/Screenshot_from_2021-02-08_00-08-19.png
bmk#1476: am i blind? i dont think i noticed |
Chlorokin#6581: Blind here too
Daj#7482: Maybe the stream cut off my video or something, because I had a box of paperclips propped up in the lower left hand corner
bmk#1476: yeah it got cut out
Daj#7482: RIP
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/808124551883915307/unknown.png
Daj#7482: Fuck it looked fine on my end
Daj#7482: It was so cheeky
bmk#1476: next time wear it on your head
Daj#7482: Step 1: Cover yourself in paperclips
bmk#1476: inb4 The AI™ is intentionally trying to edit the paperclips out
triggerhappygandi#0001: How do you get 32/64 V100 instances on AWS? By contacting the sales personally?
triggerhappygandi#0001: The paperclip incident, XX/XX/20XX
zphang#7252: https://tenor.com/view/mg-mega-man-mega-man-robot-animated-shocked-gif-17556784
Sid#2121: a tonne of street view like pictures with geolocation and other metadata attached
Louis#0144: 🥖🏷
jrowe#5371: ... bag and tag?
Louis#0144: Gluten tag
Louis#0144: Yes
jrowe#5371: lol
Space#4359: where is this "info" page the #rules speak of? |
StellaAthena#3530: It is linked to by that sentence?
EricHallahan#1051: It's in the rules.
StellaAthena#3530: > If you have questions about what we do, take a look at our info page (https://github.com/EleutherAI/info). If you can't find the answer to your question there, feel free to just ask and the regulars will be happy to help.
Space#4359: oops
Space#4359: how did I miss that
StellaAthena#3530: tbh, not sure 😛
StellaAthena#3530: No worries. It happens
Hatter the mad#7424: Ppl I’ve got a question, how’s the project going right now? Any release date?
bmk#1476: it's going well, no promises about release dates for anything
Hatter the mad#7424: Any predictions? Generally interested
bmk#1476: we should have 1.2B and 2.7B models released soon™ but i dont want to promise anything wrt exact dates
bmk#1476: the best way to know exactly when things will be done is to join us and become the person that does them
bmk#1476: we're always looking for more members
bmk#1476: well, we dont really have a strict concept of membership, you can just start working on things whenever you feel like it
bmk#1476: but you know what i mean
nmkd#1425: what project are we talking about
Hatter the mad#7424: Yes obviously)) am in AI myself I get it)))
I mean like spring, summer, or by the end of the year?))
bmk#1476: i presume the various model releases we want to do?
nmkd#1425: i mean there's gpt-neo, dall-e etc |
nmkd#1425: are those gpt-neo models?
bmk#1476: 1.3B and 2.7B are basically done pending some QoL stuff like integration into hf, evaluation, etc
bmk#1476: yeah
Hatter the mad#7424: Well actually maybe I will... I have been here for a long time, but I’ve been sort of busy with my work. I have had more free time recently though
bmk#1476: perfect
bmk#1476: what is your skillset
Hatter the mad#7424: Well I have a degree in applied mathematics with specialization in data science. From the more practical side I have worked for a year developing military drones (machine vision and no transformers) and now I have spent the past year and a half working with NLP and even GPT-2 at times. Which is why I am asking, waiting to upgrade my company’s AI))) So Python, pytorch, pandas, SQL, some pipeline stuff, not too much. A bit of experience with tf but would not call myself an expert. I think I have quite a good theoretical understanding. The main thing I lack is experience developing models of this size. Although that sounds like something I would want to get some experience in))
bmk#1476: that sounds like it overlaps a lot with what we need
bmk#1476: stella is our resident mathematician if you want to help with something along those lines
bmk#1476: #gpt-neox-devs is where the scaling stuff happens and that's mostly led by sid so ask him if you wanna work on that, i'm not sure what the bottleneck is rn
bmk#1476: the pile (v2) is currently in hibernation because the first version is already done now
bmk#1476: i'm mostly working on model evaluation, so if you're looking for something that's low-hanging fruit to whet your appetite you can come help in #lm-thunderdome
Hatter the mad#7424: I like the humbleness of this message))
Space#4359: will there be a mass pinging whenever the next release comes out?
bmk#1476: i mean it's basically accurate, no?
Space#4359: Also, how much coding experience does it take to run the pretrained model?
bmk#1476: er, i'd lean towards "no" because i strongly dislike @ everyone but i will probably get overridden by everyone else who does want to ping
Space#4359: from a scale of 1 (what is a computer?) to 10 (single handedly coded all of google)
bmk#1476: ¯\_(ツ)_/¯
Space#4359: maybe make a role for "ping when shit happens" |
Space#4359: well, how do you run a pretrained model?
bmk#1476: do you know how to run a gpt2 model
bmk#1476: if so, you will know how to run our pretrained model
Space#4359: No, but that is reassuring nevertheless
Space#4359: also, what are the necessary steps between now and model release?
bmk#1476: we need to get the checkpoint hosted, write up whatever we decide to write up for it, run it through our eval suite, and probably other things that i'm forgetting
EricHallahan#1051: Licensing terms?
Space#4359: What is your super duper worst possible case scenario on time to completion?
Space#4359: lol I just turned this into an interview
bmk#1476: oh right
StellaAthena#3530: You may use it
bmk#1476: we need to figure out the licensing
bmk#1476: several billion years
StellaAthena#3530: This is an exaggeration. More like a decade.
Space#4359: Ok, what about worst possible scenario you find plausible that also completes it before a hundred years?
StellaAthena#3530: A decade
Space#4359: Best case?
bmk#1476: i mean, obviously
EricHallahan#1051: What if AGI extinguishes us in a time paradox?
EricHallahan#1051: It could be never. |
andyljones#7746: several billion years if bmk and sid find something better to do with their time and everything grinds to a halt without them
gwern#1782: `$ cat LICENSE.txt` `yes`
bmk#1476: i wonder if that would actually be legally binding
Space#4359: I think there is a chance it might
Deleted User#0000: btw i was just trying some SOTA QA LMs and was surprised how bad they were, when compared with Google's Snippet feature (e.g. https://twitter.com/guillefix/status/1358935997896159232 for what i tried with ELI5, but RAG wasnt much better either)
Deleted User#0000: why isnt Google Snippets used as a baseline/comparison in QA papers I wonder?
Deleted User#0000: coz it would just show how bad they are lol?
Deleted User#0000: I mean I guess its an unfair comparison because google is probably using much more resources, but its not like all comparisons are done at equal resources in papers either~
EricHallahan#1051: But that is when you point out that you're using 10x less resources or whatever.
Deleted User#0000: i guess a problem is that we donno how many resources google is using
ethan caballero#6044: @bmk does https://github.com/EleutherAI/lm-evaluation-harness have option to eval models at multiple parameter sizes (e.g. 1M, 10M, 100M, & 1B) so that one can plot the scaling laws for all the downstream tasks being evaluated like in all the plots of GPT-3 paper?
bmk#1476: not a top priority
bmk#1476: top priority rn is "implement everything we need to implement" and "make sure it actually works correctly"
ethan caballero#6044: Has eleuther opensourced the weights of some models trained at at multiple parameter sizes (e.g. 1M, 10M, 100M, & 1B) (on something like webtext or pile) somewhere so that I can manually plot the scaling laws for some downstream tasks?
bmk#1476: the eval harness doesnt work yet
bmk#1476: at least, not correctly
bmk#1476: kinda pointless to plot if the numbers arent even correct
EricHallahan#1051: BTW, I got the numbers back to that reasonable region in the 500-600 range.
bmk#1476: well, *almost* reasonable
bmk#1476: what did you change? |
EricHallahan#1051: I had done some preprocessing before I had the detokenizer which stripped the newline at the end. Not trying to strip it twice would obviously be a good place to start.
EricHallahan#1051: (i.e. the target was pretty much always a line feed)
EricHallahan#1051: Has LAMBADA been verified to be correct?
bmk#1476: yes
bmk#1476: could you push your ptb changes?
EricHallahan#1051: I pushed.
bmk#1476: currently looking it over
bmk#1476: well, the good news is i've figured out a way to make it even worse https://cdn.discordapp.com/attachments/729741769738158194/808519342563655690/unknown.png
bmk#1476: also unrelated but i found this gem in the data
bmk#1476: ```east germany's politburo met amid speculation that the ruling body would oust hard-line leader honecker whose rule has been challenged by mass emigration and calls for democratic freedoms```
EricHallahan#1051: It is from 1989.
EricHallahan#1051: Very much so.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/808521172642824223/good-bye-lenin--good-bye-lenin--d-2003-regie-wolfgang-becker-daniel-H8B71M.png
Louis#0144: pytorch question I cant find the answer to. I have two pytorch vectors of dim 600 but the last N components of the first vector are 0 and the first 600-N components of the second vector are 0
Louis#0144: is there like
Louis#0144: a masked concat?
Louis#0144: I want to keep my gradients
kindiana#1016: slice and then concat?
Louis#0144: does slice preserve gradients?
kindiana#1016: yeah |
EricHallahan#1051: It does.
Louis#0144: o
Louis#0144: damn
Louis#0144: ok
Louis#0144: didnt know that
Louis#0144: ty
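(A quick check of that claim, as a toy sketch using the dim-600 setup from the question above:)
```python
import torch

x = torch.randn(600, requires_grad=True)  # real data in x[:600-n], zeros after
y = torch.randn(600, requires_grad=True)  # zeros first, real data in y[600-n:]
n = 100

z = torch.cat([x[:600 - n], y[600 - n:]])  # slicing is differentiable
z.sum().backward()                          # gradients reach both inputs
print(x.grad[:3], y.grad[-3:])              # populated, as claimed
```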
bmk#1476: hm
bmk#1476: ok so ive updated the ptb code to be more accurate to my understanding of the task
bmk#1476: unfortunately, the resulting ppl is still an order of magnitude off
EricHallahan#1051: What are you using to test?
bmk#1476: it's possible to get a better score by only considering the last word, but that's *not* what the task is
bmk#1476: cushman
EricHallahan#1051: Does cushman have public numbers?
bmk#1476: we know it's >= curie
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/808522495828557844/unknown.png
EricHallahan#1051: True
bmk#1476: this is my gpt2 result
bmk#1476: this is cushman result https://cdn.discordapp.com/attachments/729741769738158194/808522565823234088/unknown.png
bmk#1476: still miles off the mark
EricHallahan#1051: That is exactly what I have been seeing. |
EricHallahan#1051: I've been running on CPU.
bmk#1476: im honestly stumped
bmk#1476: ill push my changes - it reflects my best understanding of the task, rather than strictly what got the best result
EricHallahan#1051: I think we need to reach out to someone who knows the task. My understanding of the task is that you're supposed to evaluate every word.
bmk#1476: same
bmk#1476: but evaluating every word gets *worse* results basically by definition
bmk#1476: which is indeed what i observe
EricHallahan#1051: You would expect it.
EricHallahan#1051: It looks like the vast majority of code for this task was based upon a TensorFlow LSTM example.
bmk#1476: @wuthefwasthat can you help us figure out PTB? we're looking at loglikelihood across the entire sentence (put through a rudimentary detokenizer) and normalizing by word count but we're nearly an order of magnitude off
guac#4716: if you output mean of the logits in gpt2 instead of sum then you get a bit closer like 128
guac#4716: (+ detokenization)
EricHallahan#1051: Needs to be half that.
guac#4716: yep
bmk#1476: that's just wrong though
bmk#1476: the only reason that happens to work is because tokens are smaller than words usually
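For reference, a minimal sketch of "sum token log-likelihoods, normalize by word count" with the HF `transformers` GPT-2 API (this illustrates the interpretation under discussion, not necessarily the harness's exact code):
```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def word_ppl(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss          # mean NLL per predicted token
    total_nll = loss.item() * (ids.shape[1] - 1)    # undo the per-token mean
    return math.exp(total_nll / len(text.split()))  # normalize by words, not tokens
```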
bmk#1476: @asacoopstick hey so i'm trying to run the conversion script and i'm getting this error `can't allocate memory: you tried to allocate 416578223421504 bytes. Error code 12 (Cannot allocate memory)`
bmk#1476: which is an absurd amount of memory (400TB) - this is 2 orders of magnitude more memory than even the amount of memory in a pod (4TB)
bmk#1476: full stack for reference
|
```Traceback (most recent call last):
File "convert_gpt.py", line 112, in <module>
model = GPT2Model(config=config)
File "/home/connor/.local/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 490, in __init__
self.h = nn.ModuleList([Block(config.n_ctx, config, scale=True) for _ in range(config.n_layer)])
File "/home/connor/.local/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 490, in <listcomp>
self.h = nn.ModuleList([Block(config.n_ctx, config, scale=True) for _ in range(config.n_layer)])
File "/home/connor/.local/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 270, in __init__
self.attn = Attention(hidden_size, n_ctx, config, scale)
File "/home/connor/.local/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 130, in __init__
"bias", torch.tril(torch.ones((n_ctx, n_ctx), dtype=torch.uint8)).view(1, 1, n_ctx, n_ctx)
RuntimeError: [enforce fail at CPUAllocator.cpp:65] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 416578223421504 bytes. Error code 12 (Cannot allocate memory)
```
bmk#1476: wait
bmk#1476: ohhhhhhhhh
bmk#1476: **false alarm** disregard the above i was being a fucking numpty
triggerhappygandi#0001: This monstrosity is perplexity? _how_?
bmk#1476: always has been
EricHallahan#1051: 🌍 🧑🚀 🔫 🧑🚀
StellaAthena#3530: FAANG has been a term for... a decade? |
StellaAthena#3530: The term came from business circles
StellaAthena#3530: It’s about stocks
jin kazama#3736: Is there an implementation of Linformer (linear transformer) and Longformer and Performer and Reformer combined into one thing?
bmk#1476: Why would you want that
bmk#1476: Some of those things are mutually completely incompatible, and moreover even ignoring the conflicts i see no reason why you'd want it
kindiana#1016: I think long range arena implemented all of them in one codebase if that's what you mean
kindiana#1016: but you can't turn them all on at the same time lol
bmk#1476: I assume he meant all at the same time
jin kazama#3736: Reformer showed up to a 60X time boost (the bigger the data, the bigger the training speedup) and Performer essentially does the same. To gain more speed (less time to train and handle longer sequences)
kindiana#1016: you can't combine all of those into one thing lol
jin kazama#3736: Ok, I suspected that
jin kazama#3736: Anyway, what is the difference between Memformer and feedback transformer?
jin kazama#3736: No one wants to talk?
Deleted User#0000: @jin kazama they can't be combined - re: reformer, linformer, longformer
bmk#1476: read the paper(s)
Deleted User#0000: @jin kazama memformer is for encoder
Deleted User#0000: feedback transformer is for decoder
Deleted User#0000: those two can be combined, since they offer ways to deal with memory across segments of sequences
cfoster0#4356: *printing out The Bitter Lesson on stationery*
jin kazama#3736: That is the plan. (But that will take time since I do not know much, yet, and will take enormous time to understand them). |
bmk#1476: if you're not at a point where you could understand the papers without too much effort, i don't suspect our answers to your question will be useful either
Deleted User#0000: yea, just ask me
Deleted User#0000: perhaps you even bring up a point that jars some idea for me to try
cfoster0#4356: I dunno about that. Feel like most of those papers weren't easy on first read
cfoster0#4356: But yes giving time to read them will help
Deleted User#0000: actually, linformer can be combined with the rest of the sparse attention @jin kazama
Deleted User#0000: i spoke too soon
Deleted User#0000: but linformer has a big deficiency
bmk#1476: ok tbf i havent read the papers, but mostly because im not interested in efficient attention
Deleted User#0000: it assumes fixed sequence length
Deleted User#0000: and masking does not work
Deleted User#0000: and also, no auto-regressive
Deleted User#0000: so it only works in a very limited scenario
kindiana#1016: just watch all the yannic kilcher videos :bigbrain:
cfoster0#4356: Why no autoregressive? 🤔
bmk#1476: you cant do masking im assuming
Deleted User#0000: because the keys and values are all mixed together
Deleted User#0000: by the projection matrix
cfoster0#4356: Can't you just keep recomputing on every new token?
kindiana#1016: sounds n^2 🤔 |
EricHallahan#1051: That would be expensive?
Deleted User#0000: i can't think of a way
cfoster0#4356: Yes and yes
Deleted User#0000: yet another new efficient attention from today https://twitter.com/ak92501/status/1358965767816040450?s=20
cfoster0#4356: You wouldn't need to keep the n^2 attention matrix in memory but you would need to do a buttload of extra compute in both training and inference. still, possible
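For concreteness, a toy single-head sketch of the Linformer projection being discussed (shapes illustrative only). The fixed-size projections mix every position into each of the k rows, which is why the layer is tied to one sequence length and why a causal mask over positions no longer applies:
```python
import torch
import torch.nn.functional as F

n, d, k = 512, 64, 128                        # sequence length, head dim, projected length
q, keys, vals = (torch.randn(n, d) for _ in range(3))
E, Fp = torch.randn(k, n), torch.randn(k, n)  # projections over the *position* axis

k_proj = E @ keys                             # (k, d): all positions leak into each row
v_proj = Fp @ vals                            # same mixing for values
attn = F.softmax(q @ k_proj.T / d ** 0.5, dim=-1)  # (n, k): no per-position mask possible
out = attn @ v_proj                           # (n, d)
```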
jin kazama#3736: Oh, Lucid rain is here, hi Lucid Rain, big fan. Saw your github repos.
Even if linformer can be combined with switch transformer, I am still not sure it would be worthwhile, because the majority of parameters in switch are sparse (I am new to this world, but how does that help? Can sparsity work just like CNN connections? Those are not dense either, but they work). Even if they can work like CNNs, combining them with anything else, I am not sure that could be helpful, but
bmk#1476: this diagram is wild
Deleted User#0000: yea it is lol
EricHallahan#1051: It hurts.
bmk#1476: i have no idea what the heck is going on in this diagram
Deleted User#0000: ohh, linformer can be combined with switch transformer
Deleted User#0000: because switch transformer is sparsity on the feed forwards
Deleted User#0000: not the attention
Deleted User#0000: and thanks, *blushes*
Deleted User#0000: @bmk yea, i think it only works for self-attention, and similarly has issues with masking
Deleted User#0000: probably won't pursue it
jin kazama#3736: I can't even read the name, let alone the paper. lol
Deleted User#0000: ö looks like a surprised face
Deleted User#0000: or maybe a ring |
bmk#1476: AEIÖU
jin kazama#3736: yea, that is a tough name to pronounce. (I will give it a try though, next week insha Allah, busy this week).
bmk#1476: Alles Erdreich Ist Österreich Untertan
jin kazama#3736: I wish I could hear the sound of that (to understand)
bmk#1476: ö = oe
jin kazama#3736: Thanks. 🙂
triggerhappygandi#0001: a microcontroller circuit
bmk#1476: At least microcontrollers usually make sense
triggerhappygandi#0001: You clearly haven't seen bad design on MATLAB Simulink
triggerhappygandi#0001: People create e-frankensteins there
triggerhappygandi#0001: btw how is this different from Linformer/Reformer
triggerhappygandi#0001: One of those looks similar to this
EricHallahan#1051: MATLAB is the worst place on earth.
StellaAthena#3530: Counterpoint: New Jersey
triggerhappygandi#0001: lmao
EricHallahan#1051: *"Everything is legal in New Jersey"*
EricHallahan#1051: It's boringly flat.
Hatter the mad#7424: With the help of God and a math degree I managed to understand what’s going on in the diagram, however I am still struggling to understand why it works lol
Hatter the mad#7424: Gonna have to read the paper
nz#9710: maybe one day we'll have a good efficient attention |
kindiana#1016: maybe d * n^2 is already efficient once you take into account the fact that parameters scale with d^2 :berk:
Ravna#1831: Even if your attention mechanism were so efficient that it took 0 cycles to finish, you would still need O(L * n * d^2) of compute just to do some token-wise data scrambling via the ff layers.
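A back-of-envelope version of that point (constants dropped; the 4x FFN expansion is the usual convention, not something stated above):
```python
def attn_flops(n, d):
    return n * n * d              # QK^T and attn @ V both scale like n^2 * d

def ffn_flops(n, d, expansion=4):
    return n * expansion * d * d  # token-wise MLPs scale like n * d^2

n, d = 2048, 2048
print(attn_flops(n, d) / ffn_flops(n, d))  # ~ n / (4d): the ff layers still dominate here
```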
Louis#0144: 🥖🏷
EricHallahan#1051: OH ITS GLUTEN TAG
Louis#0144: Yea
EricHallahan#1051: IT DAWNED ON ME SO HARD.
EricHallahan#1051: Okay, I'm good.
nz#9710: noooo you can't introduce yourself with gluten tag
bmk#1476: Not funny, didn't laugh
Louis#0144: @bmk did u get the pun immediately
bmk#1476: No
Louis#0144: lame
bmk#1476: Because the word isn't pronounced the same in German as in English
Louis#0144: gluten is p close to guten tho
triggerhappygandi#0001: I dont know if kek or cringe.
Louis#0144: yes
Louis#0144: i did this yesterday
Louis#0144: and no one noticed
Louis#0144: idk why u guys care now
EricHallahan#1051: I saw it yesterday but didn't get it until now. |
𝓒𝓵𝓪𝓻𝓪#0888: congrats on making OpenAI shake and cry about not getting to be the gatekeeper anymore
𝓒𝓵𝓪𝓻𝓪#0888: I just saw the articles on VB lol
bmk#1476: Has OA ever publicly acknowledged our existence
bmk#1476: Because I don't remember it ever happening
jrowe#5371: <https://venturebeat.com/2021/02/09/openai-and-stanford-researchers-call-for-urgent-action-to-address-harms-of-large-language-models-like-gpt-3/>
Daj#7482: lol, I only skimmed that paper but I am pretty sure that wasn't the message
EricHallahan#1051: Yeah, they are more concerned with China.
bmk#1476: Oh, there's a *new* VB article
bmk#1476: I thought you were talking about the old one
Daj#7482: We are the Hacker Known as Eleuther now
StellaAthena#3530: Neither "Eleuther" nor "Gao" nor "Neo" show up in the paper
Sid#2121: https://tenor.com/view/mega64-hacking-in-progress-hacker-hacked-hd-gif-16542434
jrowe#5371: first they ignore you
𝓒𝓵𝓪𝓻𝓪#0888: > ...researchers from OpenAI, the Stanford Institute for
> Human-Centered Artificial Intelligence, and other universities convened...
>
> ...
>
> Participants suggested that developers
> may only have a six- to nine-month advantage until others can reproduce |
> their results. It was widely agreed upon that those on the cutting edge should
> use their position on the frontier to responsibly set norms in the emerging field.
𝓒𝓵𝓪𝓻𝓪#0888: "others"
EricHallahan#1051: When was this published today?
EricHallahan#1051: The VB article.
Daj#7482: Yea VB just added that in their article
𝓒𝓵𝓪𝓻𝓪#0888: "widely agreed upon" by a group of people afraid of losing their gatekeeping power 🤣
Daj#7482: actually imo the core argument of the paper isn't that bad
Daj#7482: (iff taken at face value)
Daj#7482: (might of course just be rationalizing)
bmk#1476: Hacker, Eleuther. "GPTNeo," Proceedings of SIGBOVIK 2021
Daj#7482: Yes, we need to publish as a singular entity
Daj#7482: exclusively at joke conferences
bmk#1476: Hacker et al.
guac#4716: Satoshi Eleuthermoto?
EricHallahan#1051: We *are* going to write a SIGBOVIK paper. No excuses.
bmk#1476: Brainstorming time
bmk#1476: What do we do
StellaAthena#3530: Have fun. IMO it's a boring conference with unfunny jokes
𝓒𝓵𝓪𝓻𝓪#0888: Does a valid reason still count as an excuse? |
EricHallahan#1051: That is an excellent question.
EricHallahan#1051: By definition, no.
𝓒𝓵𝓪𝓻𝓪#0888: It's more of a philosophical question about how it should be defined at all.
bmk#1476: That depends on what the definition of the word is is
EricHallahan#1051: `reason` and `excuse` have a Hamming distance of 6.
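(Checks out: both words are six letters and differ at every position.)
```python
sum(a != b for a, b in zip("reason", "excuse"))  # -> 6
```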
shamoons#7147: Hi all - looking to get involved
EricHallahan#1051: Hello! First I will direct you to the #rules, and then we can discuss anything else on your mind.
shamoons#7147: Yup - read through it
EricHallahan#1051: Cool. What kind of experience do you have?
shamoons#7147: Software development for almost 20 years. Reasonably strong math background. Almost finishing my PhD in AI, with my research focused on speech reconstruction
fristiloverke#4159: what do you mean by reconstruction exactly
fristiloverke#4159: speech enhancement?
shamoons#7147: Ahhh - no. Enhancement is an interesting problem for sure
shamoons#7147: But reconstruction aims to REPLACE missing portions
jrowe#5371: he o!
EricHallahan#1051: That sounds very interesting to me especially, as I have been working on a personal project relating to speech synthesis.
shamoons#7147: Awesome! What sort of project?
fristiloverke#4159: ohhh that sounds interesting too
EricHallahan#1051: It involves low-compute vocoding via linear prediction using LPCNet. The goal is to eventually be able to use it for Voice Conversion and TTS. The idea is to pretty much design a system that can take in phonetic features and return with speech from arbitrary speakers in a controllable manner.
AI_WAIFU#2844: If you haven't already, take a skim through the Projects discussions and their pins, that'll give you an idea of the different things that people have been working on, along with the associated repos. |
shamoons#7147: Seems like only 5-6 open issues at the moment?
shamoons#7147: Would love to talk through this
cfc#2691: not really what you're talking about, but this TTS is pretty impressive https://r9y9.github.io/wavenet_vocoder/
EricHallahan#1051: You should be able to look at this conversation: https://discord.com/channels/729741769192767510/730090096287547444/804794453540732928
cfc#2691: @bmk what about numer.ai? (on the topic of NN trading), they seem to be doing well with market models and a buttload of data
shamoons#7147: Ahhh - interesting
shamoons#7147: What about learned embeddings?
shamoons#7147: Instead of STFT or LPC?
shamoons#7147: Similar to wav2vec
shamoons#7147: For my input, I'm actually using stacked raw audio
EricHallahan#1051: There was a paper published to arXiv at the beginning of the month which did pretty much exactly what I wanted.
shamoons#7147: Do you happen to remember the name?
EricHallahan#1051: I'm trying to find it...
EricHallahan#1051: Found it:
https://arxiv.org/pdf/2102.01991.pdf
shamoons#7147: I'll give it a read
EricHallahan#1051: It's a continuation of another paper presented at *INTERSPEECH* last year.
EricHallahan#1051: The difference comes down to using self-attention instead of bidirectional LSTMs.
shamoons#7147: I'm looking at transformers
shamoons#7147: Having some issues though |
EricHallahan#1051: The backbone of my work is using LSPs over LPC polynomial coefficients, MFCCs, STFTs, or learned embeddings. They are naturally very dense representations of the envelope, and have the advantage of being robust to noise and able to be interpolated in a way that is semantically meaningful.
EricHallahan#1051: Of course, MFCCs are much better than pure LPC when it comes to describing the envelope as they are not constrained to an all-pole filter.
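A tiny illustration of the all-pole envelope idea (assumes `librosa`; the order and the noise stand-in are arbitrary, and the LPC-to-LSP conversion isn't shown):
```python
import numpy as np
import librosa

y = np.random.randn(2048).astype(np.float32)     # stand-in for one frame of speech
a = librosa.lpc(y, order=16)                     # all-pole predictor coefficients, a[0] == 1

w = np.linspace(1e-3, np.pi, 512)
H = 1.0 / np.abs(np.polyval(a, np.exp(1j * w)))  # |1/A(e^jw)| traces the spectral envelope
```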
Louis#0144: Ehhhh
Louis#0144: Lots of super high pitched frequencies that should not be there
Louis#0144: Sounds extremely unnatural
Louis#0144: That’s my issue with the text to speech
Louis#0144: Humans are so good at analyzing other human voices
Louis#0144: That the barrier to entry for good TTS is extraordinarily high
Louis#0144: I think we will get there in 10 years
Louis#0144: Or maybe we will just descend further into the uncanny valley
Louis#0144: That is uncanny valley territory, unlike most of the time when people talk about uncanny valley
cfoster0#4356: Are you talking about the vocoding?
cfoster0#4356: Or the tacotron portion?
Louis#0144: This imho is literally the only example that I know of where the term uncanny valley is actually appropriate
Louis#0144: I meant like a sample being indistinguishable from human
cfoster0#4356: I think the SOTA vocoders are nearly indistinguishable
Louis#0144: Really? Sample?
cfoster0#4356: The text-to-spectrogram portion is not
EricHallahan#1051: WaveNet is *way* too slow.
Louis#0144: Like that’s the thing w uncanny valley is that consciously it’s indistinguishable but subconsciously it’s unnerving |
EricHallahan#1051: It requires a GPU because it is based on expensive convolutions unlike WaveRNN.
Louis#0144: I think getting past that point will be extraordinarily difficult
Louis#0144: How does it perform on mobile?
cfoster0#4356: Look, I'm just saying, I think if we did a double-blind comparing WaveNet samples conditioned on spectrograms to the actual audio behind the spectrograms, very few people will reliably pick correctly
Louis#0144: oh
Louis#0144: I see
Louis#0144: Perhaps
EricHallahan#1051: WaveRNN can't do it, but LPCNet can run real-time on a modern smartphone in wideband.
Louis#0144: Oh cool
triggerhappygandi#0001: What about hifigan?
Louis#0144: I went through a phase where I tried getting DNNs to run on mobile
Louis#0144: It’s very difficult
Louis#0144: lol
triggerhappygandi#0001: _who wouldve thought_
EricHallahan#1051: That's why LPCNet is based on sparse GRUs.
Louis#0144: I got distil Bert to fine tune on an ipad
Louis#0144: lol
Louis#0144: Took a week
Louis#0144: Which is why I was confused when Stella couldn’t get Bert to fine tune on a cpu cluster
Louis#0144: @StellaAthena did you ever get that working
Louis#0144: It was some weird logistical issue right?
StellaAthena#3530: @Louis I didn’t say I couldn’t. I asked how feasible it was
Louis#0144: Nothing about ur ability ofc
Louis#0144: Oh
Louis#0144: I misremembered
Louis#0144: Did it work tho
StellaAthena#3530: I figured I’d get a consult before I told my boss I would do something and then a week later be like “lol JK apparently this takes 5 months”
Louis#0144: Lol
Louis#0144: Been there
Louis#0144: Done that
StellaAthena#3530: And I haven’t done it yet
StellaAthena#3530: (And probably won’t for month for bullshit reasons)
Louis#0144: Rip
asara#0001: 10 years is a very long time, I think we are pretty close to human quality with the right settings, so I'm not sure what you think would take that long wrt TTS
EricHallahan#1051: We can get really close with HMMs already.
bmk#1476: 10 years is so long that 10 years ago, yud was confidently claiming that nns wouldnt work
asara#0001: samples from the best TTS we have sound almost perfect already, the best complaints you could give are details like nuance in reading speeds across long sentences, and a lot of finer details (emotions for most systems too)
asara#0001: but the actual sound quality is great
kurumuz#5695: yud?
bmk#1476: the One True Caliph, Eliezer Yudkowsky |
Louis#0144: It sounds indistinguishable but it’s still off putting
Louis#0144: That’s the issue
Louis#0144: It sounds fine at a glance
Louis#0144: But like the noise (not the concept) is unnerving because it sounds *almost* human
Louis#0144: I think that last bit especially as we scale to nontrivial sentences is going to be hard
Louis#0144: I don’t have that much experience with TTS, I’ve just read a handful of papers
EricHallahan#1051: Can you quantify this "noise"?
Louis#0144: I’ve never implemented it
Louis#0144: It’s like high pitched screeching
Louis#0144: It’s always there with computer generated voices
Louis#0144: It’s quiet ish but still noticeable
bmk#1476: what if you just add a low pass filter
EricHallahan#1051: Sounds like excitation noise.
EricHallahan#1051: Just add a low pass filter that should be there.
Louis#0144: Yet how come no one ever does that for demos....
Louis#0144: I always assume they already filter
EricHallahan#1051: Actually, you add a filter to shape the noise so that it is masked via psychoacoustics.
Louis#0144: You’ve lost me at this point
Louis#0144: I don’t really know what you’re referring to anymore
EricHallahan#1051: https://en.wikipedia.org/wiki/Auditory_masking |
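For what it's worth, the naive "just add a low-pass filter" version is a few lines of scipy (cutoff and order picked arbitrarily here, as opposed to the psychoacoustic noise shaping Eric is describing):
```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

sr = 22050
sos = butter(8, 8000, btype="lowpass", fs=sr, output="sos")  # 8 kHz cutoff, order 8
y = np.random.randn(sr)            # stand-in for a second of synthesized speech
y_filtered = sosfiltfilt(sos, y)   # zero-phase, so no added group delay
```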
asara#0001: actually a lot of demos do basically no DSP
asara#0001: ML people like to throw more neurons and networks and things at audio problems instead of refer to DSP
bmk#1476: says mr plotholes-as-endofunctors
EricHallahan#1051: That's why LPCNet works. BRB.
Louis#0144: Ten years was a safe estimate 🤷♂️ probably like 3-5 all in all... that’s how long it took GANs to become amazing right?
Louis#0144: They are
Louis#0144: smh
asara#0001: Well if you are asking me *personally* my answer is "We already have TTS that is human-quality in some contexts and domains"
asara#0001: but I think the easiest way to refute that is basically "Sure, but it is lacking enough human elements that it cannot generalize enough"
Louis#0144: I don’t think I can really pin down what the issue is
Louis#0144: It’s very close
Louis#0144: But it just sounds off...
Louis#0144: No idea why at the end of the day
asara#0001: The hard mode would be something like "Read an entire short story, make sure every pause, intonation, emphasis, emotional tone, and pronunciation is perfect" and if that is as good as a human, then you're set. But if you just want "Read this short sentence and make it sound good, maybe with some fine-tuning" I'd say we're there, so it depends on what you think TTS should entail
asara#0001: even so I imagine a few years for the hard version of that
Realmsmith#4506: Fantastic!
Louis#0144: Strong doubt tbh
Louis#0144: Most of those datasets have been found to have been faked
Louis#0144: Or at least have massive biases
Louis#0144: Like the one of recreating shapes from fMRI data was faked |
Louis#0144: I’ll find the rebuttal paper
Louis#0144: I worked in a lab that did fMRI stuff for two years
Louis#0144: There’s *incredible* amounts of noise on every brain scan technique that doesn’t kill the participant
Louis#0144: Even electrode based methods
Sahl#0630: sounds like the solution is to kill the patient
Sahl#0630: 👍
Louis#0144: https://www.eurekalert.org/pub_releases/2020-12/pu-pru121420.php
Louis#0144: https://ieeexplore.ieee.org/document/9264220
Louis#0144: Sorry it was EEG based
Louis#0144: There is a similar rebuttal for electrodes and fMRI though. I have them saved somewhere
Louis#0144: I refuse to believe methods like this do anything except show how overwhelmingly biased our datasets and models are
Louis#0144: Anyway these kinds of experiments are very very hotly debated, I just am extremely naive of any findings
Sahl#0630: are electrodes not very useful? from the little research I did they seem to be enough for eg. controlling cursors
Louis#0144: They’re super useful
Louis#0144: I did some work with them at Columbia briefly
Louis#0144: I was having personal issues while I worked there though
Louis#0144: So I didn’t get much done
Louis#0144: Paninski is great though
Sahl#0630: I’m very interested in where we’re going with electrodes, it doesn’t look like the other methods will be able to do much
Louis#0144: @Sahl I used to work with orchard and eliassmith |
Louis#0144: You should take their course
Louis#0144: It’s great
Louis#0144: It’s under the SYDE dept
Louis#0144: I think there exists better alternatives to electrodes that haven’t been fully explored
Louis#0144: But in the short term yes electrodes are good
Sahl#0630: ooh what are they
Louis#0144: There’s methods that can do individual neuron resolution
Sahl#0630: it sounds like they’d have to be intrusive though
Louis#0144: Some are
Louis#0144: Some aren’t
Louis#0144: Electrodes are already invasive tho
Louis#0144: A lot of the advances w electrodes are two fold: spike filtering and more densely packed sensors
Louis#0144: Better spike filtering alone could bring you to an incredibly high resolution
Sahl#0630: is spike filtering detecting neuron spikes instead of it all averaging together?
Sahl#0630: so like higher time resolution
Louis#0144: Yeah
Sahl#0630: That sounds very interesting
Sahl#0630: Once I have electives I’ll look into it
RazikMazilya#0001: So, I'm going to apply for GPT3 API access, wish me luck
StellaAthena#3530: See you in a decade |
RazikMazilya#0001: lol
IKEA#9631: To make what? A furry erotica generator? :mesh:
bmk#1476: welcome to "engineering is research too" land
bmk#1476: here, no engineering effort goes unappreciated
Louis#0144: My thoughts exactly
Louis#0144: It’s incredible
Louis#0144: I will literally never get one
Louis#0144: I refuse
Louis#0144: Id rather die
Louis#0144: Literally
Louis#0144: Like when people say that it’s an exaggeration. If it was a life or death situation and a neural implant would save me then I’d rather die
Louis#0144: I’ve thought about this a lot
Louis#0144: There is no conceivable way that we know the long term effects of a neural implant like that. Plus privacy issues
Louis#0144: Bullshit IoT neural implants
Louis#0144: No fuckin thanks
jrowe#5371: I'd get one to fix my hearing, if I were certain of security and graceful failure
jrowe#5371: being mostly deaf makes the proximity of a real fix very tempting
IKEA#9631: Apparently modern cochlear implants sound even better than normal healthy ears
Imperishable_NEET#1969: Wire me up baby :Wirehead:
Imperishable_NEET#1969: YOLO |
jrowe#5371: btw, we have long term cochlear implant data
jrowe#5371: neuralink just has a couple orders of magnitude more io
Imperishable_NEET#1969: Maybe when your brain dies with a neural prosthetic small bits of your consciousness continue on in the prosthetic. :Transhuman_think:
IKEA#9631: Something something black mirror
jrowe#5371: theseus shitty boat, to ferry your digital soul
Imperishable_NEET#1969: I mean, ship of Theseus and all that. IIRC prostheses can be integrated into the brain and become part of your mind's substrate.
bmk#1476: inb4 chalmers time
jrowe#5371: hippocampus replacement has been demonstrated in rats
Imperishable_NEET#1969: Fascinating https://en.m.wikipedia.org/wiki/Hippocampal_prosthesis
jrowe#5371: only problem with Chalmersian zombies is that there would be nothing which it is like to be one, yet the biological configuration would be more or less identical to your own, so unless you're willing to posit that there are many such zombies among us, the perception of consciousness in others has to be sufficient evidence of such, since no evidence to the contrary exists
jrowe#5371: fmri, eyesight, etc - lots of evidence for, none against. yet.
jrowe#5371: I think you think, therefore you are. lol
Imperishable_NEET#1969: I'm just gonna assume other minds than mine exist since the alternatives are Solipsism, Boltzmann Brains, or Panpsychism, and it really doesn't make much difference in my life. If you truly believed other people were P-zombies, it's probably still best to act as though they're not. Because what if you're wrong and you can't just treat people like NPCs in a video game?
jrowe#5371: plus, what would that do to your psyche?
jrowe#5371: Westworld, amusement park and psychopath factory :mesh:
Sahl#0630: as a P-zombie, I’d still want to be treated just like anyone else 😠
mgostIH#0245: Down with p-racism
Space#4359: https://www.lesswrong.com/posts/kYAuNJX2ecH2uFqZ9/the-generalized-anti-zombie-principle
Space#4359: you cannot have p-zombies
Space#4359: if you have a thing doing exactly what i would if i was conscious, then it is just me |
Space#4359: if everything is completely identical and indistinguishable, nothing has changed
Space#4359: and, consciousness isn't some magic thing
Space#4359: you can know you are conscious
Space#4359: and you know that it has to do with your brain state. if you flip a switch and say "yer not conscious", i'd take a second to check, and then say "umm yes i am"
Space#4359: if the change isn't big enough to even disrupt your chain of thought, surely it can't make you unconscious
Deleted User#0000: i just open discord and see this, and somehow im not surprised at all
Deleted User#0000: btw random MoE idea: The idea is to deal with the problem MoEs have where each expert doesnt get much training:
train a standard LM until it achieves a good performance for its parameter count.
Then duplicate some of the trained layers into experts, and then continue training in a MoE way after that.
This way you should probably at least ensure that you have as much performance as the non-MoEified model, and each expert has gotten at least as much training as the 'base model', before "specializing".
Deleted User#0000: @Aran Komatsuzaki has something like that ^ been tried?
Aran Komatsuzaki#5714: this method sacrifices the "expertness" of each expert in the sense that each expert starts from the identical model that has already learned quite a lot and they may lose the diversity required to imitate a large model.
Aran Komatsuzaki#5714: so, i'm not sure if it will outperform or underperform MoE
Aran Komatsuzaki#5714: but it's certainly an interesting direction
Aran Komatsuzaki#5714: so there's always expertise-diversity tradeoff
Deleted User#0000: The idea is that they specialize in the second phase
Deleted User#0000: but yeah not sure how much diversity they may lose by starting from the same initial condition
Deleted User#0000: another idea is to train an ensemble of N networks independently, and then maybe combine them somehow into one, except you dont combine the layers that will become the experts, and instead each expert's starting point for the second "specialization" phase is what it was for the ith net in the ensemble. The idea being that they could have more diversity that way
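A rough sketch of the first idea, warm-starting every expert from the trained FFN (all names here are made up; top-1 routing is just one choice):
```python
import copy
import torch
import torch.nn as nn

def moeify(trained_ffn: nn.Module, d_model: int, num_experts: int = 8):
    # duplicate the trained layer so each expert starts from the same warm weights
    experts = nn.ModuleList(copy.deepcopy(trained_ffn) for _ in range(num_experts))
    router = nn.Linear(d_model, num_experts)  # fresh router, learned in phase two
    return experts, router

def moe_forward(x, experts, router):
    scores = router(x).softmax(dim=-1)  # (batch, seq, E)
    idx = scores.argmax(dim=-1)         # top-1, switch-style routing
    out = torch.zeros_like(x)
    for e, expert in enumerate(experts):
        mask = idx == e
        if mask.any():
            out[mask] = expert(x[mask]) * scores[..., e][mask].unsqueeze(-1)
    return out
```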
EricHallahan#1051: I've been thinking a lot on ensemble and MoE lately. I need to do a lot more reading on it. |
Deleted User#0000: I need to actually try to implement my ideas lol
Deleted User#0000: i began trying one of them, but got tired trying to make MPI work, and havent come back to it
RazikMazilya#0001: Potentially, I told them I wanted to explore how it could be used in games that weren’t simply a text adventure format. X>
IKEA#9631: ..of course :zucc:
jrowe#5371: furry erotica card games
jrowe#5371: roguelike sext games
chirp#4545: Some cool results from the new GPT-3 “instruct-series” models:
chirp#4545: https://twitter.com/mattshumer_/status/1359339959585562625
chirp#4545: https://twitter.com/AndrewMayne/status/1359606445788987394
bmk#1476: Anyone on board to help build the OpenInstructDataset?
bmk#1476: Basically openai collected data for training the instruct models
StellaAthena#3530: (maybe provide some context for what that is?)
bmk#1476: What if we did that ourselves
bmk#1476: I just coined that on the spot, i assumed the name was self explanatory
StellaAthena#3530: I don't know what "the instruct models" are, for one
bmk#1476: Oh, that might be why
bmk#1476: I assume it's self explanatory given the prior information that the instruct models are basically gpt3 but fine tuned on [redacted] so that it's better at being used in a certain way, namely through being given imperative commands (this is all public knowledge)
bmk#1476: I'm proposing doing the [redacted] but, like, open
Louis#0144: Easy way to concat two vectors and remove padding in one go?
Louis#0144: In pytorch |
Louis#0144: I can’t find anything googling
bmk#1476: why not just do it in two gos
EricHallahan#1051: Can you slice the padding out?
Louis#0144: I thought so
Louis#0144: But it never seems to copy
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/809203859070255165/image0.jpg https://cdn.discordapp.com/attachments/729741769738158194/809203859268304936/image1.jpg
StellaAthena#3530: @bmk oh sorry I ~~can’t read~~ am having extreme brain fog and missed what was going on. You’re totally right.
Louis#0144: It’s so weird
Louis#0144: I think it might honestly be a pytorch bug
EricHallahan#1051: Could you `torch.split()`?
bmk#1476: @Louis can you make a minimum replicatable example
bmk#1476: because your code is complicated as heck rn
Louis#0144: Yeah I’ll do that later today
Louis#0144: Meetings n stuff rn
Louis#0144: I just wanted to know if it was a known issue
Louis#0144: They were split
Louis#0144: I’m reorganizing them
EricHallahan#1051: I mean to remove the padding.
Louis#0144: The issue is that I had to pad them at one point and now I want to recombine them with the padding removed
Louis#0144: Oh |
Louis#0144: Hm
Louis#0144: I’ve been stuck on this bug for days
Louis#0144: I don’t usually ask for help 🤷♂️
Louis#0144: Like 3 or 4 days now lmao
EricHallahan#1051: I used `torch.split()` to run separate activation functions on different parts of a tensor and then just `torch.cat()` them together again. So just split off the padding and concatenate the remaining parts together.
Louis#0144: ooo
Louis#0144: kk
Louis#0144: i will try
EricHallahan#1051: `torch.split()` returns a `torch.Tensor.view`, so you're not wasting any memory doing so.
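Concretely, something like (dims from the earlier description, N picked arbitrarily):
```python
import torch

d, n = 600, 100
a = torch.randn(d, requires_grad=True)  # real data in a[:d-n], zero padding after
b = torch.randn(d, requires_grad=True)  # zero padding first, real data in b[d-n:]

a_real, _ = torch.split(a, [d - n, n])  # views, so no extra memory
_, b_real = torch.split(b, [d - n, n])
combined = torch.cat([a_real, b_real])  # back to dim 600, gradients intact
```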
Louis#0144: @EricHallahan youre a life saver
Louis#0144: it works
Louis#0144: thank u
Louis#0144: fuckin RELIEF
Louis#0144: i wasnt gonna have a deliverable for my next meeting
Louis#0144: lmao
Deleted User#0000: any idea what kind of data thatd be?
bmk#1476: Not entirely sure yet, only some vague ideas
Deleted User#0000: could be like https://arxiv.org/abs/2011.08115 but less lame?
Deleted User#0000: but yeah im curious about what they useddd
Deleted User#0000: maybe they just paid lots of people to come up with tasks and solutions, which would be ooof
zphang#7252: My impression from that paper is that it's almost just QA
bmk#1476: my original idea was just to crowdsource it
bmk#1476: that's what OA did
bmk#1476: ask people to help write prompts by hand
Deleted User#0000: yeah. it's much less interesting than what the abstract sells. The only difference with standard QA they say is that their Qs apply to many texts rather than being specific to a passage, but meh
Deleted User#0000: i wonder how many prompts did oai get, vs how many we could get
Deleted User#0000: also u mean write good prompts for specific tasks or have people come up with tasks?
bmk#1476: we could solicit in the oa slack
bmk#1476: and get a reasonable number
bmk#1476: and i meant just coming up with tasks in general
zphang#7252: yep. I also ran into this when my lab was considering building an instruction-based dataset :p
Deleted User#0000: out of curiosity how many tags did shawwns tagging website get?
Deleted User#0000: if someone knows
EricHallahan#1051: Do I need to open a consultancy?
> HALLAHAN PYTORCH CONSULTANCY
> 13 ROVER AVENUE
> ERIC HALLAHAN, PRESIDENT
> No Case Too Small
> 25¢ Per Day Plus Expenses
Louis#0144: Expenses |