EricHallahan#1051: Have you seen Ghidra?
triggerhappygandi#0001: No. Whats that
EricHallahan#1051: Reverse engineering tools, by the ~~CIA~~ NSA.
andyljones#7746: *NSA
triggerhappygandi#0001: Reverse engineering what exactly
EricHallahan#1051: Software.
Thistle#2263: https://www.quantamagazine.org/a-new-thermodynamics-theory-of-the-origin-of-life-20140122/
This guy IMO has a pretty boring paper but the underlying idea was really enticing to me
triggerhappygandi#0001: And how do regular people know about it
jrowe#5371: https://ghidra-sre.org/
EricHallahan#1051: Because Internet https://ghidra-sre.org/
triggerhappygandi#0001: Ah so they made it public
EricHallahan#1051: https://github.com/NationalSecurityAgency/ghidra
jrowe#5371: hearts and minds and talent recruitment
Daj#7482: Java :ultrazucc:
EricHallahan#1051: It is *awesome* software. Not a fan that it is written in Java.
Thistle#2263: I drank the JS Kool-Aid and am annoyed at any other language now.
EricHallahan#1051: I actually started reverse engineering the ECU in my car.
Thistle#2263: which brand :)
EricHallahan#1051: I wanted to break the authentication to gain access to the secure parts of the CAN bus.
Thistle#2263: I come from automotive cyber sec 😩
EricHallahan#1051: '07 Acura RDX, hand-me-down from my father that used it for years after buying it used.
Thistle#2263: uff acura cars are so fluffy and cozy
឵Tomo#5259: The NSA only open-sourced the base of Ghidra; some stuff was never open-sourced IIRC, idk if that's right
឵Tomo#5259: :PokeDerp:
Thistle#2263: probably just wanted to get some youtube videos to give em ideas on how to use it well
EricHallahan#1051: First factory turbocharged Honda in North America IIRC.
EricHallahan#1051: It's a custom K-series motor that was only used on the first-generation RDX.
IKEA#9631: why do you have to ALWAYS end your sentences with full stops
IKEA#9631: smh gives me anxiety
EricHallahan#1051: The thing has no power until you punch the throttle because it's an I4. But you have a boost gauge and the sound of the turbo spinning, so it is still entertaining to turn off the stereo and just listen to that. `XD`
Almost every testimonial complains about how underdeveloped the K23A1 is. It's probably why that line of development never went anywhere and they switched to a normally-aspirated V6 after the first gen.
genai (Immortal Discoveries)#0601: Anyone here want to join my AGI group now? Note it's only for people that want to work on AGI as a team, willing to find out how to explain patterns.
rom1504#5008: What does it mean to work on AGI ? Is there anything that works even a little bit on this topic ? Didn't see anything
Louis#0144: lol
bmk#1476: what was that paper where they hook gpt2/3 up to a calculator
bmk#1476: where they let gpt2 ask the calculator questions
gwern#1782: _doesn't recall such a paper_
bmk#1476: maybe it was a blog post
Sphinx#2092: The one by andor?
bmk#1476: er.. link pls
bmk#1476: google isnt turning up anything
Sphinx#2092: https://arxiv.org/abs/1909.00109
bmk#1476: ah, this might be the one
3dprint_the_world#6486: welcome to the military industrial complex (and the broader defense superstructure)
StellaAthena#3530: Turns out that when you have a fuckton of funding and permission to ignore ethics, US law, and international law you can get a lot done without particularly impressive tech
Louis#0144: theres many papers by Riedl about hooking GPT2 up to a symbolic planner
Louis#0144: if thats of use
StellaAthena#3530: Put another way: the hardest part of tracking your neighbors’ every move is
1. the fact that Apple won’t back door their phones for you
2. the fact that the FBI or local equivalent are going to come knocking at your door
gwern#1782: true, true, I don't deny it works quite well for the NSA. 'hack the planet' is no idle boast. I'm just annoyed at people who go 'maybe the NSA has *already trained GPT-4 and GPT-5*???'
gwern#1782: well... no. I am pretty confident that they have not.
bmk#1476: i wonder if the NSA is aware that neural networks exist yet
gwern#1782: let's not make fun of them. in the snowden leaks, they had just learned that random forests existed
gwern#1782: so I bet they're at least up to VGG now
Enealor#6657: Don't need special tools to steal models. The standard tools still work
Enealor#6657: Which is to say, I don't imagine they got GPT-N
EricHallahan#1051: I think we can be almost certain that they don’t, because if we frame it like a conspiracy theory and bring it to the logical counterargument of "why would they want that," we get nothing in response.
zphang#7252: we discussed this in #off-topic
LaPapaya#4347: Lol gwern is here
gwern#1782: why wouldn't I be here? 🤔
LaPapaya#4347: https://tenor.com/view/super-smash-bros-ultimate-everyone-is-here-gif-17099121
gwern#1782: but do we have Amorphous Solid Snake and Non-Newtonian Liquid Snake here?
bmk#1476: Solid Snake Drive
LaPapaya#4347: I just realized now that you are also on the server. Dunno man, I always saw you as a prominent figure on the subject of testing AIs, because since gpt-2 came out I saw you appearing on the internet with new things. So it makes me feel excited to know that you are into EleutherAI stuff as well.
gwern#1782: alas, I'm pretty useless. I mostly just make jokes and idle on reddit
bmk#1476: ive been *trying* to drag gwern along for the ride
gwern#1782: what can I say, I've been busy fiddling with gwern.net infrastructure and various life stuff, like finances and stocks, and optional stuff like getting back into my preferred gym. I also just got a shipment of 20lbs of wildflower seeds I intend to turn an acre into a meadow with, for the sake of my cat and me not having to spend 20h a year mowing it
gwern#1782: _also got an oculus quest 2, but hasn't done anything but beat saber with it yet_
iwearapot#5464: beat saber alone is oodles of fun though so I can't blame you
EricHallahan#1051: My only good VR experience was maybe 10 minutes of playing *Longbow* on the Vive.
Sahl#0630: you spend 20 h meowing your cat? they’re supposed to do that by themselves
gwern#1782: I am a responsible owner, and it's critical to ensure an enriched environment
gwern#1782: and I'm going to enrich the bleep out of it with 20 pounds of seeds
gwern#1782: (it's like 40 flower species in addition to whatever's already in the soil)
AI_WAIFU#2844: The trick is to not have a lawn
gwern#1782: yes, that's the idea here...
bmk#1476: the trick is to live in a 2 sq m apartment
miguelos#7956: Has anyone ever found a use case for GPT?
Arbot360#5033: You can make an internet community to replicate it.
Arbot360#5033: People are having a lot of fun using it to write, @One for example.
One#5919: HECK YEAH
miguelos#7956: Ok, how about an actual use case that will improve my life in a tangible way?
Arbot360#5033: Having fun doesn't improve your life?
miguelos#7956: Will it help me lose weight or earn more money?
Arbot360#5033: I love The One's enthusiasm, I need some of that
One#5919: if you use it right
miguelos#7956: No. Fun isn’t part of my deal.
One#5919: thank you @Arbot360 omg
One#5919: it means a lot
One#5919: GPT is like a writing partner that can groove on anything you give it
Arbot360#5033: Me neither, but I'm not proud of it.
One#5919: then you edit and keep going
One#5919: that's extremely useful
Arbot360#5033: I'm trying to add some fun to my life. Not great timing though...
Arbot360#5033: Here's a major usecase: https://en.wikipedia.org/wiki/Automated_journalism
Arbot360#5033: Also the authors of GPT, OpenAI are renting it out, which is an actual use case that makes money
Arbot360#5033: GPT itself is not super useful yet, but the class of models it is part of has a ton of uses
Louis#0144: I really dislike this field
Louis#0144: lol
Louis#0144: I think that’s a bad use case
Louis#0144: I also think automated journalism is an awful idea
Arbot360#5033: Yeah, I agree
Arbot360#5033: It defeats the point of journalism
Louis#0144: Indeed
Arbot360#5033: Profitable though
Louis#0144: Sadly
Louis#0144: It’s just going to write tabloids
Louis#0144: lol
Louis#0144: $1 of GPT-n+1 time
Louis#0144: Thousands in profit
Arbot360#5033: Tabloids already read like a markov chain
Louis#0144: Yeah
Louis#0144: But Bc of that their reach is limited to mouth breathers
Louis#0144: Think tabloids for people with above single digit iq
Louis#0144: It would also literally be impossible to regulate
Louis#0144: That’s honestly partly why I think journalism as a whole needs massive reform
Louis#0144: Rights of the press was a mistake
Louis#0144: 🤷♂️
Louis#0144: (Partly)
Louis#0144: It’s good in theory but poorly implemented in the US
Louis#0144: Much prefer how the EU handles things
miguelos#7956: 99% of written stuff shouldn’t be read. People read way too much.
guac#4716: everyone's a journalist these days. My Nonna sends me videos of potholes in her town like she's running for mayor or something.
Arbot360#5033: Freedom of Speech sets a high bar, and I don't think its wise to lower it.
Louis#0144: Don’t touch freedom of speech
Louis#0144: Freedom of the press
Louis#0144: That’s what u touch
Arbot360#5033: There are no press licenses here.
miguelos#7956: Remove all regulations
Louis#0144: That’s the thing though. That’s why people want to consider social media as publishers
Louis#0144: That’s literally the reason
Arbot360#5033: The point is that Freedom of Speech and Freedom of Press are not separable when everyone is the press.
Louis#0144: Sure but if you go after freedom of the press you can focus on the publisher rather than the individual
Louis#0144: No?
miguelos#7956: Allow 100% free speech from anyone. Can’t be simpler. What’s the big deal?
Louis#0144: I’ve only taken a few seminars on fake news ethics, I’m not an expert in any stretch
bmk#1476: politrib warning
Arbot360#5033: I could imagine some regulations happening for newspapers and social media on the basis of the Commerce Clause, and the regulations already extant for advertisements. But this is touchy subject. Keep in mind that we just saw authoritarians puppet or dismantle the press worldwide over the last couple years.
Louis#0144: True
Louis#0144: You’re right
Louis#0144: Yeah
Louis#0144: I agree
Louis#0144: So we agree then I think
miguelos#7956: Politics not allowed here?
Louis#0144: No
bmk#1476: politics is allowed, politrib isnt
Louis#0144: Wendy’s politrib
Louis#0144: Whats*
miguelos#7956: No idea
guac#4716: political tribalism cannibalism etc
Arbot360#5033: politrib is one of the random rationalist words people throw around
Louis#0144: Oh
Louis#0144: Lmao
Louis#0144: Nah
Louis#0144: I’m centrist, I don’t do tribal stuff
guac#4716: you;'re missing out babe
miguelos#7956: Centrists are the worst
bmk#1476: it's not just a rationalist word, it's a word invented here in eleuther
Arbot360#5033: Oh, congrats.
bmk#1476: pretty sure nobody outside eleuther uses it
miguelos#7956: So, can we all agree that regulation is bad
miguelos#7956: Great
Louis#0144: Wtf
zphang#7252: damn that eleuther lore
bmk#1476: wha
miguelos#7956: Now that we accept that anyone should be allowed to say anything, let’s figure out the solution
Louis#0144: Since I can say anything I can give timelines for GPT neo now right?
Louis#0144: The secret timelines that BMK keeps locked in his drawer
guac#4716: you just strap a GPT-FactChecker3000 onto every sentence you spew out all good gg
Louis#0144: “Only open in case of emergency”
Arbot360#5033: GPT Neo will be done next week.
miguelos#7956: Is that a lie?
miguelos#7956: You shouldn’t lie
Louis#0144: The joke is that people entirely unrelated to Eleuther kept spewing out random Eleuther timelines
Arbot360#5033: I am an authorized representative of Eleuther AI, LLC
Arbot360#5033: For the benefit of posterity reading this chatroom: No, there is no estimate.
Louis#0144: 😉
miguelos#7956: So basically you don’t have a plan for dealing with the damage caused by GPT
Louis#0144: To find the precise delivery date of gpt neo, you must travel deep into this dungeon of neural layers and fetch the legendary infinity fabric. Return the fabric to soothsayer of CPUs and you will be rewarded handsomely
guac#4716: what is this the matrix
iwearapot#5464: my uncle is the ceo of eleuther ai and he told me gpt-neo will be ready in 2 weeks at most
bmk#1476: I'm going to have to ask y'all to move the shitposting to #off-topic
EricHallahan#1051: If anyone wanted to do damage with large language models, they would have done so by now if they had the resources. You don't need a 100B parameter model to do damage.
iwearapot#5464: what damage specifically?
miguelos#7956: Fake news damage
iwearapot#5464: yeah we have that already, gpt isn't a prerequisite for it
iwearapot#5464: worst kind of damage I can think of from mass text generation via ai is the dataset equivalent of radionuclide contamination of steel
iwearapot#5464: once the internet is full of gpt-generated text, we'll need to delineate pre-ai and post-ai datasets
iwearapot#5464: similarly to: https://en.wikipedia.org/wiki/Low-background_steel
miguelos#7956: We won’t
miguelos#7956: Nobody in the world is talking about truth management
bmk#1476: This analogy fails in quite a few respects
miguelos#7956: That’s a shame
bmk#1476: 1. The internet has been full of shit for decades
bmk#1476: Markov chain shit, template generated shit, mass translated shit
bmk#1476: Gpt is just adding to the mess that's been there forever
iwearapot#5464: good point
bmk#1476: 2. It will probably be possible to distinguish generated text from human text, and if not, it shouldn't matter, because if it's indistinguishable it might as well be human text
miguelos#7956: Distinguishing isn’t necessary
miguelos#7956: Endorsement is
miguelos#7956: Each piece of text should be endorsed by some agents
iwearapot#5464: I guess that holds if we suppose that human text doesn't have any more intrinsic value than AI text
iwearapot#5464: but if AI text can be indistinguishable from human text while having negative value (e.g. convincingly spreading falsehoods) then it might matter
miguelos#7956: There’s zero difference between AI and a dumb human. None.
iwearapot#5464: not that humans don't spread falsehoods already
cfoster0#4356: The answer to this isn't avoiding text AIs
EricHallahan#1051: The web became full of shit when it was commercialized.
iwearapot#5464: inb4 eternal september
miguelos#7956: Of course not. I believe in AI free speech.
bmk#1476: Strong disagree
iwearapot#5464: yeah obviously not, that'd be the equivalent of banning cryptography because terrorists exist
bmk#1476: Politicians: :guilty:
iwearapot#5464: _think of the children_
miguelos#7956: ?
miguelos#7956: Does anyone understand why politicians are so dumb?
miguelos#7956: How low of an IQ must they have to think of such ridiculous ideas.
EricHallahan#1051: High ranking politicians aren’t dumb.
Arbot360#5033: They pay a lot of money to make sure they appear dumb
EricHallahan#1051: They are really good at manipulating people.
EricHallahan#1051: That is their job.
Arbot360#5033: This is turning politrib
miguelos#7956: So they don’t believe their regulations are reasonable, they just don’t care about being evil?
Louis#0144: My politicians r smarter than ur politicians
Louis#0144: Idc who ur politicians are
Louis#0144: Mine r smarter
miguelos#7956: What’s the difference between human and AI output?
miguelos#7956: Justin Trudeau, 200IQ
miguelos#7956: Singaporean?
iwearapot#5464: I don't think they're generally evil per se
miguelos#7956: I feel this is getting off topic. I apologize.
Arbot360#5033: We have reached peak politrib
iwearapot#5464: they're just responding to incentives, same as the rest of us
Louis#0144: First of all I’m Canadian
miguelos#7956: Dumb or evil genius. No in between
EricHallahan#1051: If you try to look at it from an alignment-like perspective, the only thing you can guarantee is that they will act in their best interest when it comes to policymaking and their ramifications.
miguelos#7956: I don’t do that
iwearapot#5464: X to doubt
bmk#1476: Jfc i leave for 5 minutes and y'all go full politrib
jrowe#5371: I mean, this is a trope
jrowe#5371: like, every ai forum, teamspeak server, chat room, irc, website discussion thread, or YouTube comments since the internet was born seems to attract the same type of thinkers
bmk#1476: > like, every ~~ai~~ forum, teamspeak server, chat room, irc, website discussion thread, or YouTube comments since the internet was born seems to attract the same type of thinkers
bmk#1476: Ftfy
jrowe#5371: ehh, I dunno, I kinda lump him in with the "I have the intelligence master plan, I'll have it coded next week" demographic
bmk#1476: I sometimes wish we had a strict no crackpots rule
jrowe#5371: im not trying to be mean, just trying to point out reality
iwearapot#5464: _checks username_
:vikingblobsved:
bmk#1476: Right now, we just awkwardly try to ignore the crackpots
bmk#1476: Don't let it crack
Arbot360#5033: This happened just last week too.
jrowe#5371: maybe set up a crackpot jail channel with a gpt-2 bot wired in?
iwearapot#5464: so uh, slightly more on-topic but does anyone have an example of persisting latent and class coordinates between runs of a model, specifically I'd like to do so for big-sleep but an example of how to do it with any model would be a good starting point
Arbot360#5033: The relevant data is stored in `Latents` in big_sleep.py
jrowe#5371: if you search around biggan latents, there are some interesting github repos and articles
miguelos#7956: How do you explain the crackpot phenomenon?
cfoster0#4356: Like besides just storing it with `torch.save`?
miguelos#7956: How can all crackpots be wrong in exactly the same way?
iwearapot#5464: 👀 is it really that easy
Arbot360#5033: Yes
Arbot360#5033: Just don't change the code between now and when you need to load it
Arbot360#5033: Then it will break
iwearapot#5464: I'm 100% new to ML so I'm ignorant of simple answers like this
miguelos#7956: I’m a crockpot from the outside, but inside I’m a genius who figured out AGI a decade ago
iwearapot#5464: thanks everyone I'll give it a try
cfoster0#4356: If it's just a tensor or dict of tensors you should be fine I think?
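cfoster0#4356: A runnable sketch of what I mean, assuming the latents are (or can be pulled out as) a plain dict of tensors — the function names are made up, and for big-sleep specifically you'd grab the tensors from its `Latents` module (e.g. via `state_dict()`) before saving:

```python
# Hedged sketch: persist a dict of latent tensors with torch.save and
# restore it with torch.load. Helper names and dict layout are
# illustrative assumptions, not big-sleep's actual API.
import torch

def save_latents(latents: dict, path: str) -> None:
    # Detach and move to CPU so the checkpoint is device-independent
    torch.save({k: v.detach().cpu() for k, v in latents.items()}, path)

def load_latents(path: str) -> dict:
    # Returns the same dict of tensors; keep the surrounding code
    # unchanged between save and load, or loading may break
    return torch.load(path)
```

Loading back into a module is then `module.load_state_dict(load_latents(path))`, assuming the keys match.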
jrowe#5371: 1% of the general population is susceptible to schizophrenic tendencies, it's a fact of human life, not a matter of fault
Arbot360#5033: I own a crockpot, it works well for meatballs and beef with potatoes.
jrowe#5371: mmmm, a spicy meatball!
bmk#1476: I can't tell how many layers of sarcasm this is
iwearapot#5464: n+1
Arbot360#5033: Schmidhuber'ed
miguelos#7956: No sarcasm I’m afraid. Just a big dose of Dunning Kruger.
miguelos#7956: Nobody said crockpots couldn’t be self aware.
bmk#1476: New rule: no crockpots. Anyone caught using one will be sentenced to microwave-only
Arbot360#5033: New meme in #memes
miguelos#7956: There’s a stew inside my skull.
3dprint_the_world#6486: Politicians are probably smarter than you, on average (not intended as an attack).
They're just good at talking and telling people what they want to hear.
Surprise surprise, what people want to hear often sounds really dumb, unless you're part of the in-group, in which case it just sounds like someone is standing up for you.
3dprint_the_world#6486: I 100% support this.
3dprint_the_world#6486: after a sufficient amount of time, we won't be able to differentiate, and every LM will basically just be training on the output of its previous version. Thus closing the training loop and achieving the singularity.
triggerhappygandi#0001: How do I tag you in a deepspeed issue @StellaAthena
𝓒𝓵𝓪𝓻𝓪#0888: Hello all.
Louis#0144: Hi
𝓒𝓵𝓪𝓻𝓪#0888: Did you see that GPT-3 got B- marks from professors who weren't informed about the nature of their "student"?
mgostIH#0245: Brony: 😐
Brony in a machine learning server: :blobsweats:
mgostIH#0245: Oh haven't heard of that, any links?
𝓒𝓵𝓪𝓻𝓪#0888: https://www.eduref.net/features/what-grades-can-ai-get-in-college/
𝓒𝓵𝓪𝓻𝓪#0888: Their key insight, imo, is really that "prompt engineering" is already a task teachers have to master.
mgostIH#0245: Interesting! GPT-3 still goes on weird tangents sometimes, I wonder how it'll be like when improved further
Louis#0144: Bronys died like in 2018
Louis#0144: So clearly @𝓒𝓵𝓪𝓻𝓪 is a zombie
Sahl#0630: how do you report someone for unicode abuse 👀
Sahl#0630: poor unicode...
Louis#0144: Depends what Unicode symbol
Louis#0144: 👶?
Louis#0144: Or 🐶?
Louis#0144: Like if it’s 🥭 u can’t rly do much
mgostIH#0245: @Louis I was thinking of CelestAI
IKEA#9631: Oh man I'm getting flashbacks from brony cringe compilations from 2013
IKEA#9631: Good times, good times
Daj#7482: Pinned a message.
Daj#7482: can't believe this wasn't pinned before
Aran Komatsuzaki#5714: Schmidhuber isn't real. He's just a shadow of Hinton we anthropomorphized for memes.
Daj#7482: Schmidhuber is the Jungian shadow archetype of the ML Professor
gwern#1782: _remembers that: https://twitter.com/gwern/status/1315114641471700992 good times_
Daj#7482: Do you like have your twitter indexed for easy search?
bmk#1476: we need to make more eleuther memes
gwern#1782: oh, it's easy, just type `from:gwern schmidhuber`
Daj#7482: fair
Daj#7482: be the change you want to see in the world
Daj#7482: I have to say I am very proud of the scattered pin collection we have
Daj#7482: I feel this take never got the credit it deserves https://cdn.discordapp.com/attachments/729741769738158194/812781178213892126/Screenshot_2021-02-20_21-21-57.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/812781481910468669/4ytelm.png
Daj#7482: tbh better as an out of context text message
bmk#1476: hm yes, we appear to have optimized the gradient of funniness in text to a point in meme-space where there's no homomorphism to image space that preserves both meaning and funniness
EricHallahan#1051: Binary positional-encoded dot-product https://cdn.discordapp.com/attachments/729741769738158194/812795899683405844/qBgC3Q38vNAAAAAElFTkSuQmCC.png
StellaAthena#3530: Pretty
EricHallahan#1051: Gray positional-encoded dot-product https://cdn.discordapp.com/attachments/729741769738158194/812796118043459624/jEAiD03gLJAAAAABJRU5ErkJggg.png
EricHallahan#1051: They are both over 8-bit unsigned integers.
StellaAthena#3530: What is this a visualization of exactly
EricHallahan#1051: The count of set bits after XNORing, normalized.
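EricHallahan#1051: As a runnable sketch of what's being plotted (my own description above; the helper names are made up): XNOR two fixed-width codes, count the matching bits, and normalize by the bit width. The Gray-code variant first re-encodes each integer as n ^ (n >> 1).

```python
# Sketch of the normalized XNOR-popcount similarity behind the two plots.
def xnor_similarity(a: int, b: int, bits: int = 8) -> float:
    mask = (1 << bits) - 1
    matching = bin(~(a ^ b) & mask).count("1")  # set bits after XNOR
    return matching / bits

def gray(n: int) -> int:
    # Standard binary-reflected Gray code
    return n ^ (n >> 1)

# 256x256 similarity matrices over 8-bit unsigned integers, one per encoding
binary_matrix = [[xnor_similarity(i, j) for j in range(256)] for i in range(256)]
gray_matrix = [[xnor_similarity(gray(i), gray(j)) for j in range(256)] for i in range(256)]
```

Equal codes score 1.0 and complementary codes 0.0, so the structure in the two images comes purely from how neighboring integers differ in Hamming distance under each encoding.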
cfoster0#4356: Oo fun
Louis#0144: Considering transferring to university of Utah
Louis#0144: They have good funding and lots of super computer resources
Louis#0144: Idk tho
Louis#0144: I do need super computers for my work
Louis#0144: I’m hoping usc makes an offer tho
Louis#0144: I’d pick usc
CRG#8707: The lesson on positional encodings seems to be that just biasing the attention matrix (to break permutation invariance) works best. https://discordapp.com/channels/729741769192767510/747850033994662000/794278917740560426
EricHallahan#1051: T5 encodings do seem like the future.
Deleted User#0000: https://twitter.com/ak92501/status/1362614227618459648?s=20 they use T5 rel pos bias scheme to inject distances between visual elements in a document
MicPie#9427: Very interesting!
For the "contextualized vision" they use a UNet.
I wonder if that could be useful for vision transformers (e.g., for the input patches in ViT).
Sid#2121: Is github down for anyone else, or is it just me?
nz#9710: working for me
nz#9710: (at least, I can see public repos)
HypnoPump17#9322: hi there! i'm coming here to recruit someone who wants to give a helping hand with a training loop! (for alphafold2 project). Ideally it should be pytorch / pytorch lightning / pytorch geometric, and a single gpu would suffice
HypnoPump17#9322: the training run should last from 2 to 5 hours
HypnoPump17#9322: but we need to collect good metrics and so on
HypnoPump17#9322: okay update: got some people in to run the code and training loops, if i need more help i'll ask in the future!
chirp#4545: https://twitter.com/karpathy/status/1363643694205702149
𓅬 gabriel_syme 𓅬#3220: yay another great event that I can never join because I don't own a specific brand of hardware :/
𓅬 gabriel_syme 𓅬#3220: are they changing that yet or?
𓅬 gabriel_syme 𓅬#3220: if someone hears about a recording, let me know
cfoster0#4356: 🦜 *gee I hope no one broadcasts it in #voice* /s
𓅬 gabriel_syme 𓅬#3220: ohh we do that?
cfoster0#4356: No
𓅬 gabriel_syme 𓅬#3220: heh
𓅬 gabriel_syme 𓅬#3220: oh well, someone will write a blog about it or smth
cfoster0#4356: Probably won't be as interesting as the convos here tbh
𓅬 gabriel_syme 𓅬#3220: that is fair, this place is kind of nuts
cfoster0#4356: I listened to one of the other recordings and it was so-so
fristiloverke#4159: what makes clubhouse different from literally every other chat app
fristiloverke#4159: haven't been following the news in a while
𓅬 gabriel_syme 𓅬#3220: the fact that it only works with apple?
nay#9954: I can broadcast if people want
𓅬 gabriel_syme 𓅬#3220: before I say 'that would be cool' let me check time zones 😄 I might be asleep
𓅬 gabriel_syme 𓅬#3220: oh in 2h? cool I'm in
triggerhappygandi#0001: In #voice?
3dprint_the_world#6486: I thought we weren't supposed to talk about clubhouse
triggerhappygandi#0001: but why
bmk#1476: henceforth, the word "clubhouse" will be used to refer to #off-topic
bmk#1476: i love clubhouse
bmk#1476: the most relevant and on topic discussions happen there
bmk#1476: :smallbrain: getting elon musk to talk on clubhouse
:bigbrain: getting elon musk to talk on clubhouse
nay#9954: the clubhouse UI is terrible I have no idea if the room exists or not
3dprint_the_world#6486: #off-topic doesn't have a UI, wtf are you talking about
triggerhappygandi#0001: I thought Karpathy was in #off-topic
bmk#1476: why would someone need an unofficial android app for #off-topic when it's right there >.>
EricHallahan#1051: BTW, I'm going to be signing off shortly here because I have to get up early for an engineering exam tomorrow. Anything else anyone here might want me to know before I do that?
Maestro#7643: Hi all! Just joined, GPT-Neo looks very interesting.
EricHallahan#1051: Hello and welcome! Make sure to have a look at the resources in #rules if you haven't seen them yet.
Maestro#7643: Thanks! Didn't see that info repo before, so it was a good read :)
StellaAthena#3530: When you joined discord, did it not land you in #rules? It should have done so
StellaAthena#3530: Unfortunately note that we haven’t updated the info repo in close to a month... some project info is out of date. Sorry 😦
EricHallahan#1051: It doesn't, unless someone changed it.
StellaAthena#3530: I did like a week-ish ago?
StellaAthena#3530: (Or, I tried to?)
Maestro#7643: It placed me in #general
Maestro#7643: Though it's nearly 4am in Europe, so I might have absent mindedly skipped to general.
fazz#8459: Is GPT-3 still closed access ie. for our testing? I mean even if you are holding folding $ its still restricted to the Twitter chosen?
Maestro#7643: @fazz I believe there is some online waitlist
fazz#8459: @Maestro thanks - I'm on it multiple times 😂
triggerhappygandi#0001: Try mailing Greg Brockman directly
triggerhappygandi#0001: Worked for me
triggerhappygandi#0001: Also the closed beta is over iirc. This is the only way they will give access, rather than let people open something like a colab pro account.
RobinYuen#3504: does anyone know if deepspeed will speed up language model pretraining if there's only 1 GPU?
RobinYuen#3504: BERT for example
triggerhappygandi#0001: They have a `ds_zero_offload_10B.json`
fazz#8459: @triggerhappygandi I like your no shits given style haha. Email probably has a much better read prob than platform messaging. Failing that send him a handwritten letter in post. Failing that wait for him in person by a SF scooter recharge station
RobinYuen#3504: cuz im not seeing any
triggerhappygandi#0001: I filled the form like 5 times and no response, so I just said "Greg please give me access"
fazz#8459: @triggerhappygandi You are the G 👌
triggerhappygandi#0001: It's not my original idea :p
triggerhappygandi#0001: I saw it on twitter where he said to a guy that he could bump him up on the waitlist if he mailed him
RobinYuen#3504: Thanks, ill have a look, but i thought zero offload is for training large models? It’s not intended to give any speedup
RobinYuen#3504: Or i may be wrong about it
triggerhappygandi#0001: It won't work with large models lol
triggerhappygandi#0001: Else gpt-neox would be running on it
triggerhappygandi#0001: @RobinYuen Here https://github.com/microsoft/DeepSpeedExamples/blob/master/Megatron-LM/scripts/ds_zero-offload_10B_pretrain_gpt2_model_parallel.sh
triggerhappygandi#0001: It probably won't work because Deepspeed doesn't work as advertised, but this is designed for 1 GPU
Daj#7482: The performance of your final model is pretty closely linked to the number of FLOP you put into it, and getting close to 100% FLOP efficiency out of a single GPU is pretty easy, so there's not much shortcutting that can be done
fazz#8459: ...is FLOP efficiency the main metric by which multiprocessing inefficiency is measured
Daj#7482: I mean, I think so? What else would you measure?
triggerhappygandi#0001: Yeah pretty much. The percentage of the theoretical peak FLOPs you can get in a cluster is all anyone cares about.
triggerhappygandi#0001: And 50% seems to be the wall for now.
RobinYuen#3504: That's what I thought, but who knows; if deep learning worked as I thought, deep learning wouldn't work
triggerhappygandi#0001: _technically_, ZeRO-offload advertises that it can use CPU memory for checkpointing and some mild processing, which enables better performance.
triggerhappygandi#0001: But that's just advertising, with an asterisk
RobinYuen#3504: It could be that my CPU isnt that superb, zero offload trains slower for me previously
kindiana#1016: zero-offload usually is slower because pcie bandwidth
RobinYuen#3504: What about finetuning? Does any of you might know any tricks to get finetuning quicker on a single GPU?
RobinYuen#3504: Ah that could be it
kindiana#1016: fine tuning is the same as training
RobinYuen#3504: I thought there could be some algorithmic tricks for it
kindiana#1016: well if there were tricks you can use for fine tuning you'd just use it for training lol
RobinYuen#3504: I looked into few shot fine tuning like PET
RobinYuen#3504: Or prefix tuning
RobinYuen#3504: I was hoping if there is something like them that i missed
triggerhappygandi#0001: There was a discussion on Transformers issues about this. Apparently ZeRO-offload required like 200GB RAM lol
triggerhappygandi#0001: Literally not worth it.
kindiana#1016: those are pretty orthogonal to the types of techniques zero/ds does, but they certainly could work
kindiana#1016: the tradeoffs are quite different from traditional fine tuning, and I wouldn't consider them algorithmic tricks as much as different techniques
Teven#6831: it was on Deepspeed if this is the one I'm thinking about - https://github.com/huggingface/transformers/issues/9996
Teven#6831: doesn't strike as that absurd actually, it was 200GB CPU RAM + 40GB GPU RAM for the big T5
Teven#6831: with how cheap CPU RAM is on cloud service providers, I think it's worth keeping it in mind
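Teven#6831: For anyone trying to reproduce this: ZeRO-offload is switched on through DeepSpeed's JSON config. A minimal sketch, with field names as I understand the config schema of this era (treat them as assumptions and check the current docs, since the schema has changed across releases):

```json
{
  "train_micro_batch_size_per_gpu": 4,
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "cpu_offload": true
  }
}
```

`cpu_offload` moves the optimizer state into host RAM, which is what drives the large CPU-memory requirement mentioned above — and the PCIe traffic that makes it slower on a single GPU.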
Louis#0144: Back in my day the only FLOPs we needed were fish
gdawg16#0493: hello is it done yet
gdawg16#0493: https://images-ext-1.discordapp.net/external/44oKe_jDe2TEcvPRYxdlgYttYJ6jyCX6oWPq_8oAeuE/https/media.discordapp.net/attachments/804128580753686528/806640868713824286/image0.gif?width=400&height=98
cfoster0#4356: Yeah lemme fax the weights to you
JonathanFly#4262: 256GB of memory costs less than any 24GB GPU sold, I'm more likely to have that than to get a 48GB GPU
JonathanFly#4262: Wow lots of activity, glad to see the project is moving along
dmvaldman#4711: is the namesake of this channel from https://en.wikipedia.org/wiki/Eleutherae ?
triggerhappygandi#0001: nobody knows
triggerhappygandi#0001: Except @Isaac McHorse
𓅬 gabriel_syme 𓅬#3220: the name is a wordplay I imagine between ai and the word eleutheria which means freedom in greek
𓅬 gabriel_syme 𓅬#3220: my best guess
bmk#1476: And our glorious supreme leader Archibald Eleuther
triggerhappygandi#0001: ***who is he***
zphang#7252: do we have room for an archibald reaction
triggerhappygandi#0001: We do
triggerhappygandi#0001: We have 100 emote capacity now
mgostIH#0245: @triggerhappygandi probably Schmidhuber
triggerhappygandi#0001: Eleuther Schmidhuber.
Daj#7482: @-Archivist It's a complex thing that has been discussed both here publicly and privately many times. #alignment-general is the "official" ethics and adjacent topics channel but it's usually full of rather technical stuff. To make a long, complicated argument short, a lot of other groups are in the process of, or already have, created models like these and will release them, so we don't believe our doing so will make a huge difference in tech capabilities except in the sense that we try to make it more accessible for low resource academics and the like. I personally think there is a lot of really useful research that can be done using GPT3 to make it safer (I personally work on such things and am hampered by not having access to the internals of GPT3). As for the "let big orgs gate access because of safety", there definitely is some merit to that argument, but the unofficial Eleuther motto is basically "Anything safe enough to sell to Microsoft is safe enough for the public to have"
Daj#7482: This is of course just scratching the surface of the arguments, happy to go into more detail
mgostIH#0245: > "Anything safe enough to sell to Microsoft is safe enough for the public to have"
👏
Daj#7482: It's a "everyone is making their own knives, so we better study knife safety instead of not sharing our knives with people" argument
Daj#7482: The most outspoken anti-openness people are OA and various "but what if the LM is racist?" people
kindiana#1016: people on this server are pretty self selecting for "open for all" haha
Daj#7482: This can also be expanded to "we should try to build the most safe knives we can and distribute those before less safety conscious knife makers sell theirs", but that's a shakier argument
Daj#7482: I think there is a surprising amount of low-hanging fruit in the technical AI safety sphere (but that might be my bias). I think we can make our systems a lot safer, and more _useful_, with purely technical research |
Daj#7482: Given the choice, you would always buy the knife less likely to cut your finger
Daj#7482: So if we make technical progress on making safer AI, I expect people to naturally adopt them
mgostIH#0245: What if we just trust our governments to do the right thing? Last time it went well with nukes
mgostIH#0245: :viriglasses:
Daj#7482: Even if, I have just about 0 trust multiple governments could coordinate in any feasible way
Daj#7482: Climate Change was a nice fire drill for AI safety
Daj#7482: And we all caught fire lol
andyljones#7746: i am very confident that 75 years after the takeoff, everyone around will be convinced it went well
mgostIH#0245: Is this what 4chan meant with weaponized autism
Daj#7482: "AI Takeoff went amazing" - The Paperclip Maximizer
andyljones#7746: 'Without doomsday, there could never have been as many paperclips as this!' - The Paperclip Maximizer
Ravna#1831: There are two different groups of alignment advocates. The former is about "how to prevent a nuclear war". The latter is about "oh no, the workforce of nuclear weapon production isn't inclusive enough and leaves marginal minorities underrepresented!"
Daj#7482: Eh I'm more cynical than that, but that's a tangent (re: Archivist)
Ravna#1831: The second group is controlling the narratives now.
Daj#7482: I think there are two points I'd make:
1. I like to say it's like worrying about the effects of nuclear testing on local trout population. Yea, we're _definitely_ irradiating those trout, and that sucks, but we should worry about the, you know, literal end of the world scenario.
2. I have some deep technical disagreement with the "racist AI" people. Without getting into details, a lot (not all of them) advocate some kind of censorial "see no evil, speak no evil" type approach to these problems, such as by censoring datasets. I think the only way to avoid evil is to _understand evil_, so we should have AIs that we give as much evil shit as possible, they then understand this information, and use it to become good and more effectively counter evil. Put another way, I think any model that can be broken just by showing it some racist twitter posts is unacceptably dangerous and should not be deployed in the first place
Ravna#1831: it reminds me of how material science labs hijacked the narrative of "nanotech" in the early 00s
Daj#7482: there was also this good post from this morning: https://twitter.com/tante/status/1364125135121362947
mgostIH#0245: Uh? |
Daj#7482: Read "Where's My Flying Car?" for a good summary of the story
Daj#7482: fwiw I think the WMFC guy is _hilariously_ overoptimistic about nanotech and Waldos
Daj#7482: But the material science people also totally scammed those funders
Ravna#1831: That's a pretty good book. But I'm very skeptic to the supposed "causes" proposed by the author. I think they are still just symptoms. There has to be deeper causes.
mgostIH#0245: I want nanowaifus, on the scale of blood cells, inside my bloodstream
Daj#7482: Yea just saying it's a good retelling
Daj#7482: I need that pepe with folded hands emoji
Daj#7482: I'm thinking of a specific one, can't find it
Daj#7482: Let me put it into words:
Daj#7482: "bruh"
mgostIH#0245: this? https://cdn.discordapp.com/attachments/729741769738158194/813751269494751263/iu.png
Daj#7482: No, hard to put into words the exact expression
mgostIH#0245: You know how when you watch the sky there's this small thingies you can barely see the shadows of, those small impurities in your eyeballs fluids, I want that except small anime girls 😍
Daj#7482: That's a orthogonal problem. There is the problem of "what is good?" and there is the completely seperate, technical, problem of _"how do we get AIs to do anything good-adjacent at all and not destroy everything immediately?"_
Daj#7482: fwiw I put a lot of hope into instilling a useful metaphilosophy into the AI so it itself asks "what is good?" and uses its super intelligence to align closer to human values (this is related to the idea called "corrigibility")
Daj#7482: I'm sending you not to horny jail, but the horny closed psychiatric institution
andyljones#7746: it's a pascal's mugging of a sort, but: unless you're 100% confident that it's impossible, the possibility of alignment should occupy a disproportionate amount of our effort
Daj#7482: I _genuinely_ think that aligned AIs is _much_ easier than aligning humans
Daj#7482: This too
Daj#7482: But I genuinely think the best way to stop humans from fucking kids is to build an aligned AGI that will make them stop |
andyljones#7746: also as much as it's an unpopular opinion, we *have* got our affairs in order to a large extent? we've done a pretty impressive job at aligning people, really
andyljones#7746: and there ain't half as many dials for you to twiddle on a person
Daj#7482: again, also true. Not perfect, huge steps in the right direction
Daj#7482: This is another tangent lol
Daj#7482: But I'm a pretty strong suffering-focused utilitarian
Daj#7482: But even if you disagree with my specific meta-ethics, if you think humans can make _any_ progress on ethics at all, you should believe by transitivity that an even smarter AI should be able to make even more progress
andyljones#7746: which century'd you prefer to live in to this one as a random human being?
mgostIH#0245: Make an AI smart enough that can trick any individual into thinking that their own personal utopia is what is actually happening :pepecheers:
Daj#7482: ~~alt answer: "What else are we supposed to do? lmao"~~
Daj#7482: This is definitely a possible form of wireheading
Ravna#1831: I typed a rather long version of daj's alt answer
Ravna#1831: But it's more serious and boring so I decide not to post it
Ravna#1831: :zucc:
Daj#7482: I'd still read it :berk:
Ravna#1831: "I don't believe that we can, given that AI has infinite optimization power" is a fair stance. But there are a significant portion of possible futures in which AI only has finite power and we can nudge it somewhat. @Daj
mgostIH#0245: ffs I just wanted to make money with AI, not have an existential crisis
Daj#7482: You're in the wrong field, buddy :berk:
Daj#7482: You first :berk:
Daj#7482: Humanity sure made a lot of terrible shit
Daj#7482: But they also made roughly everything good I care about too |
Daj#7482: But that's just me
mgostIH#0245: No other parts of nature invented anime
Daj#7482: Nevermind, hand me the coolaid
andyljones#7746: there's a superb chart from OWID showing that child mortality's down tenfold from preindustrial times, and ten times higher than it could be if everyone had the west's standard of healthcare. i think this does a good job at characterising humanity's successes and failures. to deny that we've made *vast* progress is disingenuous, but to deny that there is *vast* progress to go is equally disingenuous.
Daj#7482: Sure, but have you _seen_ the alternative?
Daj#7482: Either Darwinian Evolution red in tooth and claw, where acts like that are just another Tuesday for every insect eating its own children
Daj#7482: Or the void of non existence
andyljones#7746: 'we learned how not to kill 90% of children' is *absolutely* something to raise your glass over
andyljones#7746: but if you're that deep into Nihilism Club, welp
mgostIH#0245: I think it's far easier nowadays to see how much wrong is in the world with 7 billion people each having the possibility of doing something fucked up and the internet spreading that news over the globe
Daj#7482: Remember: Thou Art Godshatter
The alien god of natural selection built the most uncaring, cold, brutal world conceivable, and somehow, from all that _something good emerged!_
Relevant writings:
https://www.readthesequences.com/Thou-Art-Godshatter
https://www.readthesequences.com/The-Gift-We-Give-To-Tomorrow
https://www.readthesequences.com/An-Alien-God
Daj#7482: We are the spark in an empty, cold universe
mgostIH#0245: But it's more of your sensitivity to it having increased, I'd agree that the world would be even worse without civilization
andyljones#7746: i take solace in this kind of opinion being self-eliminating
Daj#7482: We haven't yet lit the universe aflame with a blaze of beauty, and maybe we are too ugly to deserve this honor, but, as far as we can tell, we are the universes' only shot |
Daj#7482: Other relevant reading: https://slatestarcodex.com/2015/08/17/the-goddess-of-everything-else-2/
Ravna#1831: The fact is that none of us here really takes our opinions seriously. Otherwise we would be busy implementing our ideas instead of shitposting on discord.
Daj#7482: Speak for yourself
andyljones#7746: 's part of the implementation
Daj#7482: Recreation and debate are also important parts of my creative process
Daj#7482: I'd love if you get around to reading those links, maybe start with The Goddess of Everything else, and I could get your opinion on the writings
Ravna#1831: so you don't like the fact that a small percentage of us are still killing each other, so you want all of us to die, which implies you are fine if a larger percentage of us start to kill each other?
Ravna#1831: killing bad genocide good?
andyljones#7746: for the first time in human history we *are* doing the right thing, it's just the right thing is taking time. i could sympathise with your opinion in 1700; i can't today. https://cdn.discordapp.com/attachments/729741769738158194/813756847915925544/2880px-The_World_as_100_People.png
Ravna#1831: sounds more evil than someone who kills a few babies on camera, because the whole humankind includes these babies
fazz#8459: @andyljones Isaac Arthur is also a great antidote to short term despair - wildly big picture joined up thinking
andyljones#7746: frankly what you're pushing for is *collective punishment*. your position is far more evil than the people you're disgusted by
Daj#7482: This is why I think some form of utilitarianism/consequentialism is the only coherent meta ethical position (probbaly, maybe), it just seems like paradoxes like these are indefensible
Daj#7482: Is 2 people suffering equally bad as 1 person suffering + 1 person not suffering?
Ravna#1831: no
Ravna#1831: i'm banging my head against the wall
Daj#7482: there are coherent theories imo that lead to the conclusion "humanity shouldn't exist", specifically negative utilitarianism
andyljones#7746: and therefore everyone should die
Daj#7482: It doesn't feel fair to punish those trying to make things better for those that aren't
andyljones#7746: you're arguing the premises; i'm arguing the conclusion |
Daj#7482: Your theory bottoms out in "everything but instant 100% perfection, stable in perpetuity, has no right to exist or even strive towards becoming better"
Ravna#1831: if you are so fine with everyone dying why are you not fine with a few babies dying while the others aren't
andyljones#7746: what connor said. physician, heal thyself
Daj#7482: We were dealt the hand we were dealt
Daj#7482: I don't see the point in not striving for betterment, even if it fails
andyljones#7746: you're entitled to be miserable, but do better than this half-arsed attempt at a rational defence of it
Daj#7482: as my personal leitmotif goes, "I don't know how to save the world, but dammit I'm gonna try"
Daj#7482: "The Planet" is fucking terrible too, though
Daj#7482: Who do you think _made us?_
andyljones#7746: nature red in tooth and claw, etc
Daj#7482: https://web.archive.org/web/20200203215527/http://www.xenosystems.net/hell-baked/
Daj#7482: > everything of value has been built in Hell.
andyljones#7746: (i'm gonna bow out here, because i realise i'm just piling on now)
Daj#7482: Nature _is hell_
Daj#7482: I don't see how destroying the one thing that is partially not-hell would result in a better outcome
Daj#7482: For the record: I totally get you and don't think less of you in any way or anything, I just like debating meta philosophy :)
Ravna#1831: I think it eventually boils down to aesthetics. For you, hydrogen bombs wiping out humanity in a bang is aesthetically acceptable while one person starving in a whimper looks aesthetically displeasing.
Daj#7482: All good, just expect (respectful) pushback :)
Ravna#1831: It's not an ethics discussion.
Ravna#1831: It's an aesthetics one. |
Daj#7482: Hot take: NRx is just super gay masculine aesthetics
andyljones#7746: i'll roll the clock further back than that: i think it boils down to what ethics/aesthetics are selected for. nihilism can be phrased as an entirely self-consistent position, but it ain't particularly contagious
Daj#7482: Criticism _is_ easier than building
Daj#7482: Easy social capital
Daj#7482: anyways I should do some work too ttl
triggerhappygandi#0001: When I type in "man walks into [synagogue/mosque/church]" the autocomplete is mostly about a shooter.
triggerhappygandi#0001: So yeah, what you gonna do? Gatekeep not only the current internet landscape, but the past?
𓅬 gabriel_syme 𓅬#3220: did we do the Gaia yet, I was away
triggerhappygandi#0001: It didn't happen when I wrote "man walks into a temple"
triggerhappygandi#0001: That's my point
triggerhappygandi#0001: My point is that temples are a hindu thing, and even if some violence happens in them, the news isn't as widespread as someone vandalizing a synagogue in Germany or shooting up a mosque in NZ. So the model goes along with it. Of course it could be labeled as racist, but when you scrape that much data from the internet, you will make it learn stereotypes.
mgostIH#0245: @triggerhappygandi it might be that "temple" has a lot more different meanings compared to "Church" or "Mosque"
mgostIH#0245: When I think of a temple I imagine some ancient monument, not necessarily some religion or culture
mgostIH#0245: If I were to complete "Man walks into a temple" I'd start thinking of Indiana Jones
mgostIH#0245: The term is also very used in fiction
triggerhappygandi#0001: Yeah that could be the case
Daj#7482: This proves that Indians are way cooler than us
Daj#7482: "temple" is a much cooler word
triggerhappygandi#0001: English and by association US/UK dominate the internet
triggerhappygandi#0001: I was using temple as more like a hindu place of worship |
triggerhappygandi#0001: Rather than a Zelda ruins
triggerhappygandi#0001: But I guess the latter is more dominant lol
mgostIH#0245: @Daj Might also be that indians are polytheistic, thus they use some more general term
Daj#7482: Which is also objectively cooler
Daj#7482: The Bible doesn't have enough gods fighting each other
mgostIH#0245: Well aye, they even manage to make your imagination think of it well after having seen monkey temples
Daj#7482: The christian god is the Mary Sue of gods
Daj#7482: "My god beats your gods!"
Daj#7482: lame
Daj#7482: (but useful memetically)
mgostIH#0245: Back when memes would get you killed
mgostIH#0245: The good old times™️
mgostIH#0245: Now they can at most get you fired :viriglasses:
Daj#7482: They can definitely still do that if you're unlucky enough
StellaAthena#3530: @Daj Okay, but have you read the Jewish Bible?
StellaAthena#3530: It straight up features duels between deities
Daj#7482: There is plenty of hardcore stuff in there, true (though I have not read it, so second hand knowledge)
Daj#7482: The christians did have Daniel blow up a dragon with a bomb
Daj#7482: That was dope
triggerhappygandi#0001: This is one of the reasons, in my opinion, that superheroes are getting so much traction |
triggerhappygandi#0001: Monotheistic religion
Daj#7482: Marvel is just our Hindu gods :bigbrain:
triggerhappygandi#0001: lol
Daj#7482: But really, Marvel is our hero myths
triggerhappygandi#0001: It is
Daj#7482: New age Hercules, Odysseus etc
triggerhappygandi#0001: They literally have Thor
Daj#7482: yes lmao
triggerhappygandi#0001: And that thor is far more influential than the real one ever was
Daj#7482: I would have been a total nerd for Hercules stories if I was born in ancient greece
Daj#7482: though tbh I think Odysseus is cooler
mgostIH#0245: PoV:
AI alignment gets fucked up, AI destroys society but gets suddenly stopped by a huge solar flare
Society rebuilds with the few remaining, years pass, old pieces of culture resurface, questions are asked as to what kind of humans could dream of something so perfect
Society starts worshipping anime as its new religion, each character being a separate God
Daj#7482: In one SciFi RP I played with friends, we had a running joke that ancient Warhammer figurines were valuable currency and seen as holy idols
mgostIH#0245: :fatchu:
Daj#7482: One of my favorite anthropological stories is how e.g. Australian Aborigines have all these myths about legendary creatures that turned out to match up perfectly to real animals that existed literally thousands of years ago
Daj#7482: Dragons were real, we just ate them
Daj#7482: lol |
triggerhappygandi#0001: God Emperor of Mankind is the only viable candidate
triggerhappygandi#0001: This is unanimously agreed upon
triggerhappygandi#0001: If in doubt, call the Holy Inquisition to cleanse such heretic "doubts" from your head
triggerhappygandi#0001: Fellow loyal imperials
Daj#7482: Do Indians feel left out from having had their own 20th century dictator? 🤔
triggerhappygandi#0001: lmao
triggerhappygandi#0001: What can I say
triggerhappygandi#0001: I really like Emperor from 40k
mgostIH#0245: I downloaded "Where's my flying car", can I read chapters out of order?
triggerhappygandi#0001: He is Leto II from Dune but juiced up in every aspect
Daj#7482: The ultimate joke of the god emperor is that he is _legitimately an amazing ruler_
mgostIH#0245: Doubt I'll read 375 pages rn just to see the nanotechnology thingy
Daj#7482: And having him would be amazing
Daj#7482: But everyone completely misunderstands his enlightened plans lmao
triggerhappygandi#0001: A fellow intellectual
Daj#7482: You can pretty safely skip the ones on the flying cars I think, but they're also interesting
triggerhappygandi#0001: :smallbrain: chaos worshippers literally can't see the bigger picture
Daj#7482: If I could make the god emperor control all of Earth, I would in a heartbeat
mgostIH#0245: I'll still read them, but just out of order
Daj#7482: He _almost circumvented the warp entirely_ |
triggerhappygandi#0001: I would even faster
Daj#7482: _everything would have been fine_
triggerhappygandi#0001: Indeed
triggerhappygandi#0001: Fuck Magnus man
triggerhappygandi#0001: Worse than Horus
triggerhappygandi#0001: I cry every time
bmk#1476: Out of context this sounds like some nrx shit
mgostIH#0245: Like I assume I can just read Chapter 4 https://cdn.discordapp.com/attachments/729741769738158194/813789284960043018/unknown.png
Daj#7482: I know lol, 40k is like NRx pornography
triggerhappygandi#0001: whats nrx
Daj#7482: The god emperor is basically their wet dream
triggerhappygandi#0001: Uhh
Daj#7482: I don't remember tbh, but probably
Daj#7482: Neoreactionism
Daj#7482: The most edgy political ideology
triggerhappygandi#0001: I see.
Daj#7482: They're basically monarchists
triggerhappygandi#0001: Sounds like people who would follow God Emperor
Daj#7482: and are hilariously sexist/racist
Daj#7482: They're like smart alt-right |
triggerhappygandi#0001: And exactly the people Emperor thought were absolute trash
Daj#7482: Yep
triggerhappygandi#0001: *cough*Lorgar*cough*
Daj#7482: Write some dope horror fiction though
Daj#7482: (one of my favorite authors is NRx, his twitter is a hilarious dumpster fire)
Ravna#1831: I think it's the mainstream media's attention that makes them matter at all.
Ravna#1831: I wonder if the total population of them exceeds 1000.
triggerhappygandi#0001: who he
Daj#7482: Zero HP Lovecraft
Daj#7482: https://zerohplovecraft.wordpress.com/2018/05/11/the-gig-economy-2/
Daj#7482: This is a good story to start
Daj#7482: It's amazing
Daj#7482: @bmk the other day talked about how no authors get AI right
Daj#7482: 0hpl gets it
Daj#7482: I was only really reminded of Yarvin's existence by the NYT's shitty attempt at a hit piece on Scott lol
Ravna#1831: No sci-fi author even got MMORPGs right in the 00s, and MMORPGs were an actual thing then.
triggerhappygandi#0001: @Daj You sure do like reading 5000 word articles online
Daj#7482: :yes:
bmk#1476: The sequences: :guilty:
Daj#7482: I read all of Yarvin's recent thirst posts for Scott |
Daj#7482: One was 15k I think
Daj#7482: good read
triggerhappygandi#0001: Jesus
Ravna#1831: lol
triggerhappygandi#0001: Are you an exceptionally fast reader or what
Daj#7482: I have unlocked the yin and yang of ADHD
Daj#7482: Not really, I just read all day
triggerhappygandi#0001: How do I do that
triggerhappygandi#0001: Either of these
Daj#7482: Practice, read things you are excited about ~~and nootropics~~
triggerhappygandi#0001: yin-yanging ADHD or read all day
Daj#7482: But yea tbh it was mostly just forming a habit and knowing where to find good things to read
Daj#7482: It took years to curate my RSS feed lol
triggerhappygandi#0001: I _really_ want to read all 64 Horus Heresy books
triggerhappygandi#0001: But I literally can't
Daj#7482: can't help you there, I also can't read fiction
Ravna#1831: Just treat it as recreation. It's ADHD-ish but still much less ADHD-ish than reading smaller chunks of texts like tweets.
triggerhappygandi#0001: Even finishing the first 3 Dune books took 2 months for me
triggerhappygandi#0001: Since last christmas
Daj#7482: audiobooks works big time especially if you have a commute |
triggerhappygandi#0001: Yeah definitely
Daj#7482: I read like 40 books a year just with my daily commute that way
Daj#7482: But I kinda got bored of books
triggerhappygandi#0001: Audible is a blessing.
Daj#7482: but yeah tldr the secret to reading a lot is habit and practice
Daj#7482: what a surprise
mkualquiera#3484: ~~and attention~~
Daj#7482: I literally have ADHD
Daj#7482: though tbf ADHD is not really attention _deficiency_, at least for me
Daj#7482: It's like "extreme attention prioritization"
Daj#7482: ADHD people usually can fall into deep concentrated levels when they are interested
EricHallahan#1051: I think half of the people here have ADD or ADHD.
Daj#7482: seems to be a recurring theme, yes
Daj#7482: ~~Ritalin helps with coding lmao~~
Ravna#1831: The average brain capacity of humans doesn't matter at all. We need higher variance instead. Make more babies with extreme quirks in different mental dimensions and civilization progress would become faster. Instead of eugenics we need more mutation. Let's start by just dumping nuclear waste into the sea.:berk:
Daj#7482: Russia's way ahead of you :bigbrain:
bmk#1476: Attention is all you need
Chlorokin#6581: Reactionaries confuse the descriptive with the normative, progressives the normative with the descriptive. ElutherAI, the normative for catgirls?
xloem#0717: Hi, eleuther people. i started learning fastai. i have this expression: The thing to do with AI is to make sure nobody is harmed at all by the AI, and do the most wise, caring, and respectful stuff wherever you possibly can. agree/disagree? if you agree, seems like it makes sense to generate source code rather than text.
Daj#7482: This is not an EleutherAI™️ endorsed position |
Daj#7482: Unfortunately, making this work on a technical level is really, really hard and no one knows how to do it (yet)
bmk#1476: Link him the yud talk lol
xloem#0717: okay, it seems easy to me; what do i focus on learning to understand the issues?
Daj#7482: Yep, this is a good intro: https://www.youtube.com/watch?v=EUjc1WuyPT8
Daj#7482: I'd say start with this video and see if it makes sense to you
xloem#0717: "Video unavailable"
Daj#7482: I would most recommend if you are beginner is to read The Sequences. They aren't really technical, they are mostly philosophy, which I think is _crucial_ for working on this problem
Daj#7482: oops, fixed
Daj#7482: This is what I would read if you want to take this problem seriously: https://www.readthesequences.com/
Daj#7482: It's extremely long, but extremely valuable
mgostIH#0245: Once we fix alignment again we should remember to cite Schmidhuber
xloem#0717: I'm not sure that's a good investment of my time. Thanks for making your work GPL ❤️ [schizophrenic commentary: while you guys avoid hyperintelligence, those with fewer scruples will be building them in ways that harm.]
bmk#1476: Reply to schizo commentary: welcome to the domain of moloch
xloem#0717: you don't need to keep giving moloch blowjobs, let's bust this joint. out in eleutherai, they are doing cutting-edge GPL machine-learning!
Daj#7482: nice schizotypy answer
Daj#7482: It's not deference to Moloch, it's acknowledgement that he runs this joint
Daj#7482: And if we wanna defeat him, we're gonna need a bigger gun
gwern#1782: unfortunately, mutation gardens have way lower means and are only good if you need one or two brandnew mutations. even more unfortunately, human intelligence appears to be a matter of removing mutation load and not a matter of finding cool new IQ boosting mutations. (there is a weird little niche of 'reverse breeding' and other things, but nothing terribly useful, although it does connect to 'chromosome selection' as a way to boost embryo selection)
xloem#0717: everything is infinitely precious. if moloch isn't repsecting that, that's when we punch him in the face. so long as everything is infinitely precious and deserving of unimaginable respect, moloch can grow a little
Daj#7482: Ok I need you to dial down the schizotypy lol |
xloem#0717: when you say "moloch runs this joint" it sounds like what you mean is that this technology is scary, and we need to make sure it behaves responsibly; so you spread some fear so that people take care
Daj#7482: It's more saying that cooperation is hard in general on this planet at scale
Daj#7482: AGI being scary and needing to be aligned is a seperate issue
xloem#0717: well, i participated in occupy wall street, where we acted in world-wide self-organising groups. it wasn't hard at all given some advice from old activists, but it got pretty hard [when hired people took it out in a massively coordinated way]
Daj#7482: And I mean...did it work?
Daj#7482: Not really I think
jrowe#5371: the infinite growth paradigm is alive and kicking
Daj#7482: btw when I refer to "moloch", I'm referring to the idea from this essay: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
jrowe#5371: occupy was one of those cure the symptom movements, not through any lack of will or good intent, though
xloem#0717: it would be nice to acknowledge that the only reason it "didn't work" is because it was destroyed in coordinated attack; that the principles etc worked fine, we just didn't include the businesspeople.
Daj#7482: That's part of Moloch though
bmk#1476: Says Mr scc moloch talk schizo
Daj#7482: Moloch is the incentivies that made those business people want to destroy the movement
Daj#7482: I was complementing you!
bmk#1476: It's hard to tell sometimes
Daj#7482: ye sry
Daj#7482: I was saying you did a good job channeling the schizotypy lol
bmk#1476: Oh lol
xloem#0717: that's the page i was on too; these incentives create a wide variety of communities, efforts, etc etc, that act
jrowe#5371: dealing with problems at the level of corporate influences ties in with geopolitics - there's too much entrenched power for a (nonviolent) tribal / grassroots protest movement to have any significant impact |
bmk#1476: Eleuther has done surprisingly well at keeping moloch out
Daj#7482: Jokes on you, I'm Moloch
Daj#7482: nah jk, I just like eating babies recreationally
bmk#1476: As opposed to competitive baby eating, ofc
xloem#0717: [you might be speaking a little schizotypy by saying 'moloch' instead of 'evil' or 'crony-capitalism' or somesuch; it's probably just my triggers talking here] It's clear you're not harmful at heart or you wouldn't reveal it. It sounds like you have some understanding of how human behavior is dangerous, and are trying to warn us.
bmk#1476: My favorite sport
Daj#7482: "Moloch" is a common term around here in reference to that essay I linked
xloem#0717: oh maybe i can use your existing work to summarise it for me :-}
mgostIH#0245: Moloch can also sponsor your gaming career, say Smash Brothers
Daj#7482: I would really recommend the essay, it's really fun and interesting
Daj#7482: But basically Moloch is a fun word to refer to all the things that make the world suck, people unable to coordinate and stuff
Chlorokin#6581: Moloch is the god of defection.
xloem#0717: i kind of just want to assume that moloch is the ancient pattern of gaining from harm that runs our culture
mgostIH#0245: Moloch Is All You Need
Daj#7482: It's like a mathematicians version of the devil
Daj#7482: But the devil is _actively_ evil
Daj#7482: Moloch isn't agentic
Daj#7482: Yea that's pretty close
Chlorokin#6581: I mean, at least the devil gives you a gold violin.
mgostIH#0245: Is Moloch the game theory decision of both prisoners to defect? |
xloem#0717: moloch is the accumulation of errors our wise people made over the millennia
Daj#7482: Yea that's one classic example
bmk#1476: The generalization of that
Daj#7482: No, Moloch is when the only winning move is to be evil
Daj#7482: Forcing good people to do evil
xloem#0717: mm seems the same space to me. the way i see it, evil choices always end up losing due to things like blowback.
mkualquiera#3484: Not necessarily
mkualquiera#3484: Like the prisoners dilemma is a good example for that too
mgostIH#0245: Like The Tragedy of the Commons
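The prisoner's-dilemma version of this can be checked in a few lines (the payoff numbers are the textbook ones, assumed here purely for illustration):

```python
from itertools import product

# Sentences in years for (row, col) choices -- lower is better for each player.
C, D = "cooperate", "defect"
payoff = {
    (C, C): (1, 1),
    (C, D): (3, 0),
    (D, C): (0, 3),
    (D, D): (2, 2),
}

def is_nash(a, b):
    """Neither player can shorten their own sentence by unilaterally switching."""
    mine, theirs = payoff[(a, b)]
    a_stuck = all(payoff[(alt, b)][0] >= mine for alt in (C, D))
    b_stuck = all(payoff[(a, alt)][1] >= theirs for alt in (C, D))
    return a_stuck and b_stuck

equilibria = [(a, b) for a, b in product((C, D), repeat=2) if is_nash(a, b)]
```

Running this leaves mutual defection as the only equilibrium, even though mutual cooperation is strictly better for both — which is the "Moloch" structure in miniature.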
Daj#7482: Unfortunately, not _always_
mgostIH#0245: It's not like everyone acts thinking they are doing something evil
Chlorokin#6581: Genghis Khan
xloem#0717: if you want to believe in war, you need to act on the research you publicize to make things that are good, or the people scared of you will secretly make the things that are bad, to stop you.
jrowe#5371: tit for tat fails when one party has a gun to the other's head
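The one-shot prisoner's dilemma the chat keeps circling back to can be sketched in a few lines. The payoff numbers here are illustrative, not taken from anything in the conversation:

```python
# One-shot prisoner's dilemma, (row player's payoff, column player's payoff).
# "C" = cooperate, "D" = defect. Numbers are illustrative.
payoff = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move):
    # Pick the move that maximizes our own payoff against a fixed opponent move.
    return max("CD", key=lambda m: payoff[(m, opponent_move)][0])

# Defecting is the best response whatever the other player does, so (D, D)
# is the unique Nash equilibrium -- even though (C, C) pays both players more.
# That gap between the equilibrium and the good outcome is the "Moloch" here.
```

Note that neither player needs to be malicious: each is just best-responding, which is exactly the "good people forced to do evil" point above.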
Daj#7482: fwiw yes I try to build things to do good
Daj#7482: It's just really hard lol
xloem#0717: yeah 🙂
xloem#0717: can't hide murder. everyone has a family, a community, a history, a potential
jrowe#5371: leviathans supersede individuals. see: Chinese Communist Party
jrowe#5371: you can't play fair games with geopolitical entities unless they're constrained by design to play fair |
jrowe#5371: so Sweden and Canada can play tit for tat and both walk away better for it, but nobody can play tit for tat with China and not frequently get screwed
jrowe#5371: because it's designed to behave that way
xloem#0717: what you're saying relates to something i did
xloem#0717: but, yeah, gpl is already dangerous. i think we're kind of hitting a point where people are realising some things are way more important than others
mkualquiera#3484: I guess they key idea is that Moloch exists even if you don't have a concept of good and evil
mkualquiera#3484: (but you do need a concept of optimal and suboptimal)
mgostIH#0245: You start seeing things with a different light when you consider game theory
mkualquiera#3484: Speaking of which, I have a game theory exam tomorrow
mgostIH#0245: What is it, to convince your professor in giving you a high score?
StellaAthena#3530: No, it’s to get nobody to show up for the exam
xloem#0717: the illusion that somebody would attack the entire human race, even an automated process without specific goal to, is helping us approach hyperintelligence more slowly
mkualquiera#3484: yes
StellaAthena#3530: I had a math course where homework was graded out of the total number of points that the highest scoring student got (so at least one person is guaranteed to get a 100% on HW). We were the first class in the decade the prof taught that successfully organized a class-wide boycott of a PSet
mkualquiera#3484: Idea: Minimize bad grade
Solution: Can't get a bad grade if the professor is turned into 📎
xloem#0717: [i skimmed the essay; i'm guessing that the fear is around processes (human or automated) finding ways of meeting goals that are incredibly harmful to other people. i usually focus on how making it easy for people to meet their goals reduces that. like, if you have the best tools, you can guide what people do by giving them tools. [and most people just need a caring ear, not believing one could ever exist.]]
mgostIH#0245: My dad once pointed out to a colleague of his, who was quite angry over how his favourite soccer team was losing because of management:
"You all pay quite a lot of subscription fees to see the matches, if you all organised together you could buy the team and do whatever you wanted with it and would participate in all their matches free for the rest of your lives"
mgostIH#0245: It struck me too, when you see stuff from above it would be in the interests of all those fans to do that, and there's surely a lot more examples of this here and there
mkualquiera#3484: Hah, like Moloch would allow that |
xloem#0717: corporations, nonprofits like wikipedia, and worldwide networks of communities like occupy wall street, come from people resisting moloch; they bundle together, protect each other, and form alliances to stay growing
mgostIH#0245: Ye exactly, that's the perspective one should have, it's not that staying subscribed is "evil" vs organising to buy out the whole team
mgostIH#0245: It's the fact that defecting would be the best decision for everyone individually, like in Stella's case here, but at the same time it's the worst decision for yourself too
xloem#0717: [more people care about the professor than the grades of his students]
xloem#0717: [executive summary: i'm too crazy to finish an agi before others do, but we need to make the biggest agi be focused on harmless good. appeasing/mediating/stopping moloch, not supporting him in any way. this is easy with supervision by consensus.] [i think the article assumes it is hard for people to cooperate. this is only true if some influence is disrupting their communication and mediation attempts.]
Daj#7482: No, game theory shows there are many cases where perfect communication still leads to bad outcomes
xloem#0717: really? i haven't experienced that in real communities and haven't studied game theory. do you know an example offhand? do they include that decisions that defy human common sense become impossible in extensive repeated combination?
Daj#7482: I would recommend studying game theory, it's true that some become less bad in repeated scenarios but not always, e.g. the game of Chicken
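For reference, the game of Chicken mentioned here looks like this with illustrative payoffs. Unlike the prisoner's dilemma it has no dominant strategy, and even perfect communication leaves two asymmetric pure equilibria where one side must back down:

```python
# Game of Chicken, (row player's payoff, column player's payoff).
# "S" = swerve, "D" = drive straight. Numbers are illustrative.
payoff = {
    ("S", "S"): (0, 0),       # both swerve: mild embarrassment
    ("S", "D"): (-1, 1),      # swerver loses face, other wins
    ("D", "S"): (1, -1),
    ("D", "D"): (-10, -10),   # crash
}

def is_nash(a, b):
    # A profile is a Nash equilibrium if neither player can gain
    # by unilaterally switching their own move.
    return (payoff[(a, b)][0] >= max(payoff[(m, b)][0] for m in "SD")
            and payoff[(a, b)][1] >= max(payoff[(a, m)][1] for m in "SD"))

# The two pure equilibria are (D, S) and (S, D): someone has to yield,
# and talking beforehand doesn't settle who.
```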
Daj#7482: I also happened to read a good post on this earlier today
Daj#7482: one sec
Daj#7482: https://www.lesswrong.com/posts/BA5nbjtyCT3EJCxJi/the-prototypical-negotiation-game
Daj#7482: This isn't even getting into the really funky scenarios
Daj#7482: https://www.lesswrong.com/posts/KkwtLtroaNToWs2H6/most-prisoner-s-dilemmas-are-stag-hunts-most-stag-hunts-are
is also good
xloem#0717: these are long articles =S i would have an AI study game theory and ask it to bend situations so that people gaming them choose harmless avenues. i'll think on what you say here.
Daj#7482: Sorry yeah, I like long articles lol
Daj#7482: And the topics are just complex
xloem#0717: basically, i assume that people who know game theory have caring hearts, and will act on it to defend what is important. it sounds like you are saying that, to defend against moloch, it's important to generate text instead of code, keeping recursive code generation private rather than public?
Daj#7482: No that's a pretty big leap in logic. Surprisingly, I think I understand how you came to that conclusion
Daj#7482: My views on what should be done re GPT and alignment in general are long, complex, and change frequently, so I have only said small parts here |
xloem#0717: yeah, i'm not really sure what you were referring to when you said something was a technical challenge, relating to defending everything as precious and good to not be harmed, and producing code instead of text
xloem#0717: that's good, the frequent changing. i memorise my plans.
Daj#7482: I live by the saying "When i learn new facts I change my mind, what do you do?" lol
Daj#7482: So my plans are in frequent flux as I learn new things
Daj#7482: and my plans are usually pretty technical and not easy to explain without a lot of background knowledge
xloem#0717: here's what you need to know: all living behavior is driven by inherent shared needs of life. the only reason anybody is using a computer is because they have generational trauma.
Daj#7482: I don't think I agree with that personally
mkualquiera#3484: That which can be destroyed by the truth should be :jc:
Completly unrelated but cool
xloem#0717: people teach others when they are children, that life is "hard" and is always "hard". in the wilderness life is easy. food is everywhere, and nothing harms you.
xloem#0717: people struggle because they are grown without skills for survival. our generational trauma has built a moloch-driven culture that tells us working jobs is an efficient way to survive.
Daj#7482: I think life in the wilderness is _terrible_
Daj#7482: Wild animals starve all the time
Daj#7482: die of parasites
Daj#7482: etc
xloem#0717: nah they are usually quickly killed by a predator before they would suffer
xloem#0717: the parasites/starvation comes from human activity
Daj#7482: https://en.wikipedia.org/wiki/Wild_animal_suffering
Daj#7482: Nope |
Daj#7482: Nature is red in tooth and claw
Daj#7482: Being an animal is _terrible_
xloem#0717: I have lived in the wilderness with no tools, myself. The suffering you describe comes from human activity. The creature in that photo is already dead.
mkualquiera#3484: So there were no parasites or starvation before humans?
xloem#0717: Being an animal is only terrible because nobody told you not to eat metal.
Daj#7482: I wish I was as optimistic as you
Daj#7482: ¯\_(ツ)_/¯
xloem#0717: starvation happens when ecosystems go out of balance due to massive change. parasites are in symbiosis with their hosts, like the bacteria in our gut. they are only harmful if something is new; people who suffer move more slowly and less effectively, and quickly die off, in an evolutionary wilderness.
mkualquiera#3484: I agree with that statement in general, but you didn't answer my question
xloem#0717: sorry; i thought i said, "there were parasites and they were harmless, yes. there was starvation, yes, only during great change, but no, not usually."
Daj#7482: What about when trees wiped out like two thirds of all living creatures by introducing (toxic) oxygen into the environment?
Daj#7482: Or all the other mass extinction events pre human
xloem#0717: that happened over millions of years and before most complex life evolved. i don't consider that timescale to be painful.
Daj#7482: eh well, I think the difference between our worldviews is probably too great to be bridgable over a discord convo lol
jrowe#5371: and then for millions of years, because they had no predators, they died and piled up and piled up
Daj#7482: I think life by default is unimaginable suffering
Daj#7482: until made otherwise
xloem#0717: saying this, you speak your generational trauma. life is a balance, but you say you come from a place where you expect to see pain.
Daj#7482: and I don't think humans are special enough to have invented or have a monopoly on suffering
Daj#7482: Pain is just easier to make than happiness |
xloem#0717: no, suffering is a painful experience that discourages behavior. it is experienced by all systems that learn to act to survive.
Daj#7482: There are 1000 ways to destroy but only one to create
xloem#0717: think of an AI evolving: it must learn to quickly produce learning that causes or prevents behavior. these systems are happiness and sadness. there is only more sadness if there is more need to stop your habits, than to start them.
Daj#7482: this is a huge metaphilosophical argument I'm atm too tired to have
Daj#7482: sorry :)
xloem#0717: we disagree 🙂 you expressed that you are prone to learning. i tend to hold fast nowadays due to amnesia issues. i have something else to share, a short audio clip.
Daj#7482: Yea it's not like I'm ideologically pre committed to any of this
Daj#7482: It's just empirical
Daj#7482: I could hvae a long argument about this but as said, not today
xloem#0717: [i'm finding clip on ipfs] regarding where i come from, i always loved nature and eventually once took extensive wilderness survival training. i found so many needs of mine were met by living in the woods without anything; it was far better than anything i had ever experienced. it seemed to me that we were all tricked away from doing this by the forces of profit and education. [like the claim that it is hard to survive without money, this is not true, it takes far, far more training to meet all your survival needs with money than without]
xloem#0717: unrelated to nature, here is an audio clip [this clip works fine for me using wget, but is not working for me in chrome browser, unsure why]: https://gateway.ipfs.io/ipfs/QmdFVjYwgeuUpw83hBB74Wy4js8SrmmNxt8U2MkdRA2f7m/Audio/The%20Nonviolent%20Communication%20Training%20Course/7%20-%20Healing,%20Mediation,%20and%20Reconciliation/7-06%20Mediation%20between%20groups.mp3
xloem#0717: the war ended that same day
xloem#0717: those are the two experiences i have to share. mine of living in nature, and marshall rosenberg's mediation one day in nigeria. thank you for your work freeing text generation.
Daj#7482: Your perspective is interesting, but far from my own, that's for sure hah
xloem#0717: curious where you come from 🙂
Daj#7482: Geographically?
xloem#0717: perspective-wise [geographical can inform that]
Daj#7482: Well, that's hard to compress into a small discord convo haha
Daj#7482: Guess The Sequences is one of the fundamental works of my perspective
xloem#0717: keeps you safe to say that: important to stay safe |
Daj#7482: I'm not ashamed of my views, I've espoused them profusely across many podcasts lol
Daj#7482: I'm just tired and typing is slow
xloem#0717: nah, game-theory safety. others can act on knowledge.
Daj#7482: You're correct I guess but not the strategy I'm playing, at least not for this
Daj#7482: There is info I wouldn't share, obviously
xloem#0717: eh i don't mean to distract you from work, but people who look like they are at the top, that's where you send all the lobbyists and spies [not to focus on that; optimism is the path to safety and peace, simply because you find more options when you believe they are there]
Daj#7482: https://www.youtube.com/watch?v=9MZ6YH03RjE
This interview is probably the most indepth about my personality I think
Daj#7482: Eh I'm about to head out for the night to watch a movie anyways lol
xloem#0717: have a nice night
Daj#7482: you too!
Ravna#1831: :thonk:
voxs#0001: oh for fuck sake why does it take so long to load in datasets (gpt 2 simple)
voxs#0001: im pretty sure it will take longer to load in the dataset than to train it
triggerhappygandi#0001: Discord should have a feature to insert hyperlink [like this](link)
Daj#7482: I vaguely recall hearing Discord specifically encourages naked links for security reasons so you know what you're clicking on
triggerhappygandi#0001: Makes sense.
EricHallahan#1051: But it makes it impossible to rick-roll someone. `:(`
EricHallahan#1051: It is excellent security practice however.
Deleted User#0000: no it doesn't really. Here it explains a workaround http://bit.do/how-to-make-custom-links-on-discord |
Deleted User#0000: hecc
Deleted User#0000: xD
Daj#7482: nice try
Deleted User#0000: how do u delet embeds in mobile
EricHallahan#1051: IDK
Daj#7482: We're not the right place to be asking that lol
AerysS#5558: alright, I'll move there then
Daj#7482: I meant like, in general
Daj#7482: haha
Daj#7482: You don't seem to have high requirements, just buy whatever
triggerhappygandi#0001: This audio explains how to do it. https://cdn.discordapp.com/attachments/729741769738158194/814130689670643712/PLAY_THIS_TWICE.ogg
Daj#7482: what is this dark magic
EricHallahan#1051: IDK
triggerhappygandi#0001: I sacrificed 2 infant CPUs to invoke this dark art.
Sid#2121: wtf
Sid#2121: @triggerhappygandi no hackermen are allowed here, i'm afraid i'll have to ban you
triggerhappygandi#0001: But I did not hack anything. This is literally the result of voodoo with a pentagram and 2 CPUs.
EricHallahan#1051: Choose two: Z80, 6502, 6800, 8088
triggerhappygandi#0001: 8088
triggerhappygandi#0001: And 8088 |
triggerhappygandi#0001: Since I am familiar with it.
EricHallahan#1051: VooDoo or VooDoo 2?
triggerhappygandi#0001: VooDoo 2
EricHallahan#1051: or VooDoo 3
triggerhappygandi#0001: 3 is reserved for upper echelons
triggerhappygandi#0001: They can rickroll you by simply typing the word.
EricHallahan#1051: I don't think those parts are compatible `:\`
bmk#1476: You are not worthy of the acknowledgement of the most Unethical hacker Archibald Eleuther
EricHallahan#1051: Archibald Eleuther is the fusion of Dr. Breen, The Architect, and Alfred Lanning into a single character.
^,^#3572: \<link\>
^,^#3572: Oops wrong answer
^,^#3572: I meant ^
triggerhappygandi#0001: ***who is he***
Thistle#2263: too funny :)
bakztfuture#1979: hi everyone, nice to meet you, I like to make videos on GPT-3 on my YouTube channel. I joined this community a few months back, but am excited to participate more and learn more from everyone here. Thank you for having me
StellaAthena#3530: Welcome!
louis030195#2462: Hey guys! Have you considered setting up GitHub sponsor?
Daj#7482: We've considered a few such things but currently we don't really need more money
triggerhappygandi#0001: ***we need Summit itself***
Daj#7482: I mean if anyone wants to give us a cool 100M, I'm not saying no to that |
triggerhappygandi#0001: Elon Musk do OpenAI oops too open
𝓒𝓵𝓪𝓻𝓪#0888: https://www.engadget.com/amp/whats-going-on-at-google-ai-150001422.html
triggerhappygandi#0001: What _is_ going on at Google AI? Why is there no sequel to T5?
𓅬 gabriel_syme 𓅬#3220: what's going on with the embed? why is it loading for ever?
Louis#0144: Google is such a dumpster fire
𓅬 gabriel_syme 𓅬#3220: the way they handled their ethics team was insane. and the fact they never backed down even worse
EricHallahan#1051: I bet it is because it is an AMP link.
EricHallahan#1051: Kill AMP.
𓅬 gabriel_syme 𓅬#3220: It would be kind of cool if it was part of the article
StellaAthena#3530: Heck, they’re still escalating
𓅬 gabriel_syme 𓅬#3220: yeah exactly doubling down
EricHallahan#1051: I still haven't wrapped my head around the entire timeline here.
nz#9710: I mean I don't want to politrib but in my view google is on the right side on this one
StellaAthena#3530: Even with the firing of Mitchell?
nz#9710: I must admit I know less about that case (at least compared to Gebru's) but my understanding is that she was sharing internal documents against her contract. I would usually support such things for whistleblowing, but at the moment nothing has come out of it.
nz#9710: My understanding is that she did that in support of Gebru à la whisteblowing. Should she come up with internal documents that prove racism/harassment of Gebru I will change my mind, but at the moment I have seen nothing like that.
StellaAthena#3530: I think "attempting to" is more accurate (which also explains why nothing came of it - they locked her out too quickly).
StellaAthena#3530: The idea that she tried to and failed (or, wasn't able to get much) is consistent with both her and Google's public statements.
nz#9710: I read Gebru's interview where she accused Google of targeting her due to her being a black woman, but she provided no proof nor more concrete accusations.
nz#9710: I don't know, I just feel like there is currently a lack of evidence. As I said, I would quickly change my mind if anything concrete proving their points turned up. |
𓅬 gabriel_syme 𓅬#3220: They even fired the manager of the team who publicly supported them
nz#9710: Who?
StellaAthena#3530: If you mean Bengio, they didn't fire him. They reorged in a fashion that removed them from under his purview
𓅬 gabriel_syme 𓅬#3220: huh ok, my bad.
StellaAthena#3530: I've seen several news people claim that they did citing this tweet which just.... straight up doesn't say that?
https://twitter.com/alexhanna/status/1362476630304649218
𝓒𝓵𝓪𝓻𝓪#0888: Wow. Did my message get deleted?
EricHallahan#1051: No, it's an AMP link and Discord rightfully doesn't like it.
𝓒𝓵𝓪𝓻𝓪#0888: What?
EricHallahan#1051: *Accelerated Mobile Pages*
𝓒𝓵𝓪𝓻𝓪#0888: I honestly don't care about subterfuge and just want to know if the article is purposely being censored.
𝓒𝓵𝓪𝓻𝓪#0888: If showing concern for AI safety is a bannable offense, just tell me that up front. Don't delete messages and play games to avoid answering.
IKEA#9631: Its a AMP link lol
IKEA#9631: thats all there is to it
EricHallahan#1051: It's a bug on Discord's side.
EricHallahan#1051: Just delete the `/amp` from the link and I bet it will work.
𝓒𝓵𝓪𝓻𝓪#0888: Ok thanks
StellaAthena#3530: I'm not sure what you're talking about but your message shows up. Nobody deleted it
StellaAthena#3530: We even had a brief discussion about what's going on at Google https://cdn.discordapp.com/attachments/729741769738158194/814526404795301928/Capture.PNG
𝓒𝓵𝓪𝓻𝓪#0888: Weeeeird it's completely gone on my screen lol |
𝓒𝓵𝓪𝓻𝓪#0888: Refresh fixed it
𝓒𝓵𝓪𝓻𝓪#0888: Thanks for clearing that up
𝓒𝓵𝓪𝓻𝓪#0888: But yeah... so is this project doing anything different from Google on that front?
𝓒𝓵𝓪𝓻𝓪#0888: Because I've been thinking on the topic a lot and I'm coming to the conclusion that censoring their data is the biggest problem.
StellaAthena#3530: On what front? We haven't fired our ethics team leads, if that's what you're asking
𝓒𝓵𝓪𝓻𝓪#0888: Lol
StellaAthena#3530: That wasn't a joke. That was the closest to an answer I was able to figure out to your question
𝓒𝓵𝓪𝓻𝓪#0888: I mean on the technical side. Like has anyone come up with a theory about how to mitigate the damage caused by enforcing a bionic echo chamber tuned to the majority web user's biases?
StellaAthena#3530: We care about this and talk about it a lot but don't have The Answer
StellaAthena#3530: > Because I've been thinking on the topic a lot and I'm coming to the conclusion that censoring their data is the biggest problem.
Could you elaborate?
triggerhappygandi#0001: I want to be ethics team lead pls
𝓒𝓵𝓪𝓻𝓪#0888: Gladly. So my theory is that censoring toxic content is the wrong approach, and that instead the model should clearly understand what it is and why it's toxic.
𝓒𝓵𝓪𝓻𝓪#0888: The technical side of how that plays out is slightly involved but not difficult to work through.
𝓒𝓵𝓪𝓻𝓪#0888: And while this does explicitly introduce a bias in the form of describing what is considered toxic, by censoring the dataset that's already happening.
𓅬 gabriel_syme 𓅬#3220: The matter we were discussing, how Google treated their employees, hasn't got anything to do with a model or bias.
𝓒𝓵𝓪𝓻𝓪#0888: An intuition: We are exposed to toxic content all the time and it develops an important mechanism in our minds which allows us to predict the train of thoughts of others even when we wildly disagree, and having that ability pushes us to abstract all the possible ways we could respond along new axes that are highly instrumental to being able to understand the nuances of certain situations.
𝓒𝓵𝓪𝓻𝓪#0888: In short, censoring "toxic" content implements politrib.
𝓒𝓵𝓪𝓻𝓪#0888: And discourages understanding of people considered "out groupers" by the programmers doing the censoring
𝓒𝓵𝓪𝓻𝓪#0888: This is obviously false since the topic of discussion is literally interactions between specific ML models and biased data. Search for "dark epistemology" on Less Wrong and reevaluate your motivation for sending that message. |
𓅬 gabriel_syme 𓅬#3220: I don't think you fully understand the scope of Timnit's and the Ethical AI team's work at Google
𓅬 gabriel_syme 𓅬#3220: biased data was the least of their concerns, or rather one of many
𝓒𝓵𝓪𝓻𝓪#0888: Doesn't matter
𝓒𝓵𝓪𝓻𝓪#0888: We're not Google.
𝓒𝓵𝓪𝓻𝓪#0888: Some Google centric trivia is not the point.
𝓒𝓵𝓪𝓻𝓪#0888: AI safety is.
𓅬 gabriel_syme 𓅬#3220: you can have a perfectly balanced dataset and you can still use the model to harass a specific part of the population. Who uses it, for which reasons, who does it affect, etc. That was their focus
𓅬 gabriel_syme 𓅬#3220: Exactly my point, their work was wider than simple biased data
𝓒𝓵𝓪𝓻𝓪#0888: Anyways, so having a model that understands toxic content is essential for making applications that can cope with it.
StellaAthena#3530: Absolutely. Did you read our paper “The Pile: An 800GB Dataset of Diverse Text for Language Modeling”? We talk about this briefly, and how we decided to navigate inclusion in our dataset.
𝓒𝓵𝓪𝓻𝓪#0888: I read something by the title a while ago, which section should I open to for reference?
Louis#0144: Do we even have anyone to fire
Louis#0144: Lmao
Louis#0144: Everyone is here voluntarily
Louis#0144: Is firing volunteers a thing?
StellaAthena#3530: Section 6
Louis#0144: I guess we have fired someone before
𝓒𝓵𝓪𝓻𝓪#0888: Do y'all remember CTRL?
Louis#0144: Idk her
Louis#0144: Is she single |
𝓒𝓵𝓪𝓻𝓪#0888: Yes and no
Louis#0144: LMAO
StellaAthena#3530: The bad guys in Carmen Sandiego?
Louis#0144: OH LOL
Louis#0144: I had entirely forgotten about that
𝓒𝓵𝓪𝓻𝓪#0888: CTRL was a pre-trained transformer like GPT
Louis#0144: Ye
Louis#0144: Pplm but worse
𝓒𝓵𝓪𝓻𝓪#0888: Except they conditioned it on a keyword hacky nonsense thing lol
Louis#0144: Yeah
𝓒𝓵𝓪𝓻𝓪#0888: Where they tagged each subreddit
𝓒𝓵𝓪𝓻𝓪#0888: Anyways, that needs better engineering severely, but I think they were onto something usable for detox
𝓒𝓵𝓪𝓻𝓪#0888: Imagine a "toxicity enable/disable" flag.
𓅬 gabriel_syme 𓅬#3220: a scary proposition
cfoster0#4356: GEDI was also pretty good at that
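For reference, the CTRL/GeDi-style conditioning being discussed amounts to prepending a control tag to each training example so the model learns p(text | tag); at inference, choosing the tag steers generation. A toy sketch of the data-preparation side, with made-up tag syntax and placeholder text:

```python
# Hypothetical labeled examples; tag names and <|...|> syntax are
# illustrative, not the actual CTRL control codes.
examples = [
    ("toxic", "example of hostile text"),
    ("civil", "example of polite text"),
]

def tag(code, text):
    # Prepend the control tag so the model conditions its language
    # modeling on it. Flipping the tag at inference acts like the
    # "toxicity enable/disable" switch described above.
    return f"<|{code}|> {text}"

corpus = [tag(code, text) for code, text in examples]
```

The point of the approach is that the "toxic" data stays in the training set (so the model understands it) while the tag gives applications a handle to suppress or elicit that mode.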
StellaAthena#3530: Briefly, I think that there are two things one can reasonably believe:
1. Arbitrarily morally bad data has use *somewhere*
2. Datasets should be designed for particular applications
StellaAthena#3530: @𝓒𝓵𝓪𝓻𝓪 there was a good paper analyzing toxicity in Reddit dumps recently
𓅬 gabriel_syme 𓅬#3220: who controls that remains the question doesn't it? the boundary between the two. Or do we assume it's the model, smh neutral to us? I'm honestly asking, not a practitioner in this at all |
StellaAthena#3530: https://arxiv.org/abs/2009.11462
𝓒𝓵𝓪𝓻𝓪#0888: Designing a dataset for the "particular" task of general purpose use seems a bit ill formed.
StellaAthena#3530: General purpose use is a sufficiently particular task that we can figure out that graphic textual descriptions of rape, nazi propaganda, and large quantities of disinformation are harmful to our goals but censoring anything that talks about “sex” is a bad idea
StellaAthena#3530: I don’t like the term “general purpose” tbh. Usually that’s just an excuse to not worry about how your tech is going to be (mis)used
𝓒𝓵𝓪𝓻𝓪#0888: I disagree strongly.
𝓒𝓵𝓪𝓻𝓪#0888: This is seriously dismissing anyone who is "out-group" to you and your cohorts.
𝓒𝓵𝓪𝓻𝓪#0888: I know people personally who want those exact use cases you don't like.
𝓒𝓵𝓪𝓻𝓪#0888: You're directly excluding them by presuming what you have about all possible users.
StellaAthena#3530: I’m not saying that there aren’t usecases for data like that
StellaAthena#3530: You can’t include everyone
StellaAthena#3530: You have to design for particular applications
𝓒𝓵𝓪𝓻𝓪#0888: Sure but there is no technical issue here, you're just defending the enforcement of your opinion of "bad"
StellaAthena#3530: Im not defending any opinion of bad
StellaAthena#3530: And even if I was so what?
𝓒𝓵𝓪𝓻𝓪#0888: You don't realize it but you are
StellaAthena#3530: Yes, Nazis are bad. Fuck Nazis
𓅬 gabriel_syme 𓅬#3220: wait are we still talking about rape, nazi propaganda? Because that's not a matter of opinion lol
StellaAthena#3530: I’m defending a particular conception of how I expect and intend tools to be used
StellaAthena#3530: That’s wildly different from a moral judgement
StellaAthena#3530: I’m also defending the idea that nazi propaganda isn’t instrumentally useful in those applications
𝓒𝓵𝓪𝓻𝓪#0888: So what? So that makes EleutherAI just another resource hoarding club not actually any more concerned with inclusivity than Google...
StellaAthena#3530: Again, not a moral judgement
nz#9710: Clara can you clarify your points, I'm struggling to follow.
StellaAthena#3530: To be clear, this is a response to me saying “Yes, Nazis are bad. Fuck Nazis” right?
𓅬 gabriel_syme 𓅬#3220: exactly, being clearly against those concepts is fighting for inclusivity
𝓒𝓵𝓪𝓻𝓪#0888: Please let me talk to @nz for a moment without being dogpiled.
StellaAthena#3530: I feel like I’m really lost. @𝓒𝓵𝓪𝓻𝓪 it would probably be helpful if you quoted messages you are responding to. Unfortunately the asynchronous nature of discord can make multiparty convos get jumbled
𝓒𝓵𝓪𝓻𝓪#0888: And without all this profanity tbh
StellaAthena#3530: Ok
𝓒𝓵𝓪𝓻𝓪#0888: Hypothetical: Lets say you want an app that ingests apparent hate speech and extracts what the human behind it is actually upset about, minus the trigger words.
𝓒𝓵𝓪𝓻𝓪#0888: Now this app needs a high quality representation of the format of hate speech.
𝓒𝓵𝓪𝓻𝓪#0888: The "toxicity on/off switch" accommodates well, while directly censoring does not.
bmk#1476: clara cmiiw but your stance is that it's possible to both agree that nazis are bad and also have a model that is capable of understanding what it's like to be someone who thinks nazis are good
bmk#1476: Right?
𝓒𝓵𝓪𝓻𝓪#0888: Not exactly, but that's getting closer
StellaAthena#3530: (Just as an update, I do not disagree and do not think I have said anything contrary to what you are saying)
𝓒𝓵𝓪𝓻𝓪#0888: I hate using nazis as an example because it's so charged, makes it harder to think purely logically.
bmk#1476: The politrib effect
AI_WAIFU#2844: I think what @𝓒𝓵𝓪𝓻𝓪 is getting at is what I was going on about earlier with my criticisms of the filtering that went on in the pile. I'll see if I can find the thread.
𝓒𝓵𝓪𝓻𝓪#0888: Basically my point is that having as good an internal representation as possible for everything is desirable, regardless of if the thing is considered positive or negative. |
AI_WAIFU#2844: Found it: https://discord.com/channels/729741769192767510/730090075051786322/795152738970501131
𝓒𝓵𝓪𝓻𝓪#0888: And I'm trying to imply strongly that we have the technical means to rein in the model's "dark side" better than just censoring its experience of the world.
𝓒𝓵𝓪𝓻𝓪#0888: Actually I think Black Mirror has an episode of this lol
StellaAthena#3530: @𝓒𝓵𝓪𝓻𝓪 I do not disagree with anything you’ve said. We have collected plenty of dark data and talked about (but haven’t gotten to) exploring it
bmk#1476: (Sidenote on the topic of politrib: for lots of non european/american, nazism doesn't have the same magnitude of emotional valence - they vaguely know who the nazis were in the sense that most americans vaguely know who the mongols or the imperial japanese were)
𓅬 gabriel_syme 𓅬#3220: Partly to blame for that is that we never really developed a theory of what happened with fascism
StellaAthena#3530: I disagree with the idea that all models or all datasets should be expected to take that into account given the current limits of today’s technology
StellaAthena#3530: I don’t think there’s anything wrong with me saying “analyzing hate speech is a complex and difficult topic that is outside the intended application of *this specific dataset*”
𝓒𝓵𝓪𝓻𝓪#0888: @StellaAthena I'm very glad for the pile as it is, and I should say right away that I'm willing to dredge through labeling awful content for what "type of toxic" it is, for the sake of a project like this.
𝓒𝓵𝓪𝓻𝓪#0888: I'm not at all implying anything negative about progress so far
StellaAthena#3530: You’ve used very incendiary language for someone who doesn’t object to work we have done.
bmk#1476: For instance, the mongol conquests and the opium wars and the rape of nanjing are all A Big Deal for some chinese people
bmk#1476: But in western countries, the mongol conquests have become literally a meme
𝓒𝓵𝓪𝓻𝓪#0888: I didn't realize. Forgive me for coming across combative as that's not my position.
bmk#1476: Anyways I just wanted to share this information because it's interesting to note cultural differences
Louis#0144: ⚔️
Louis#0144: 🔫
AI_WAIFU#2844: It's a tragedy that we have to resort to this emoji
AI_WAIFU#2844: It's so lame
Louis#0144: I know |
StellaAthena#3530: There was a *huge* problem when the flag of the rising sun was (accidentally) used to represent Japan at some US college a couple years ago
Louis#0144: There used to be a gun but iOS removed it
Louis#0144: And then android copied
𝓒𝓵𝓪𝓻𝓪#0888: Ouch lol
bmk#1476: I've heard rumors that in (some parts of) se asia people literally know nothing about hitler and put him and nazi insignia on advertising because it looks cool, but i haven't been able to independently verify this
StellaAthena#3530: That seems very reasonable.
𝓒𝓵𝓪𝓻𝓪#0888: The insignia was considered highly positive before WW1
nz#9710: By nazi insigna do you also mean swastikas?
nz#9710: Because IIRC that was a common symbol way before nazism, especially in SEA.
𝓒𝓵𝓪𝓻𝓪#0888: Mostly, even
AI_WAIFU#2844: Yeah IIRC it's a symbol of peace in certain cultures.
bmk#1476: Yes displaying the imperial japanese flag in china will get you at best weird looks, like if you dressed up as a nazi here, and at worst a beating from especially nationalist folks
𝓒𝓵𝓪𝓻𝓪#0888: before the last topic slips away entirely, I want to rebut something
AI_WAIFU#2844: Which arguably is a pretty good argument to not hamfistedly block speech. Since speech is just symbol strings, and can have different meanings based on culture/context.
𝓒𝓵𝓪𝓻𝓪#0888: For uses of a big LM like GPT the definition of toxic can shift with the app severely. There are examples that seem totally benign but ruin apps all the time.
bmk#1476: Swastika set 45 degrees clockwise inside a white circle on a red field, SS runes, Reichsadler over swastika, etc
𓅬 gabriel_syme 𓅬#3220: Most people I know here know it and there's definitely no use of the icon (I live in Malaysia)
bmk#1476: Ah, that's good to know
nz#9710: Alright no that's not what I meant lol
Louis#0144: Wow what a productive conversation about AI |
StellaAthena#3530: The ADL estimates that 52% of people worldwide have heard of the Holocaust and 32% think that the claim that 11 million innocent people including 6m Jews were murdered is a myth or exaggeration
https://global100.adl.org/info/holocaust_info
𝓒𝓵𝓪𝓻𝓪#0888: For example, answering questions with made up facts can ruin applications (with nothing to do with Nazis)
AI_WAIFU#2844: We can only dodge the question of what to do with these tools and how to deal with them for so long.
bmk#1476: This sounds about right
Louis#0144: All roads lead to mein kampf
𝓒𝓵𝓪𝓻𝓪#0888: So the on/off switch for modes of output is a massively useful thing, unrelated to Nazis
𝓒𝓵𝓪𝓻𝓪#0888: In fact, I'm extremely disappointed that such a simple and promising topic already devolved into nazi trivia.
bmk#1476: Ok, I'll take the trivia to #off-topic
AI_WAIFU#2844: Yeah let's go back to the crux issue.
StellaAthena#3530: Models absolutely have dark sides, absolutely can be used to do things like detect hate speech, and absolutely could benefit in non-hate speech applications from having a mental model of people who engage in hate speech. I don’t think anyone here disagrees with any of that.
StellaAthena#3530: One thing I want to note is that my attitudes on this are at least somewhat contingent upon the current state of technology
𝓒𝓵𝓪𝓻𝓪#0888: I don't think anyone disagrees either, otherwise I'd like to address it first.
StellaAthena#3530: While a *sufficiently advanced* language model could benefit from virtually any factually correct data that doesn’t mean GPT-3-level models would.
gwern#1782: the current level of abuse of gpt-2/megatron/t5/gpt-3 in general seems to be borderline nonexistent, fwiw
AI_WAIFU#2844: I don't expect this to remain true for very long.
𝓒𝓵𝓪𝓻𝓪#0888: assuming no objections at that level, I think we can push further: We can use traditional NN techniques to condition the outputs and in the process provide a learning signal that is... Loosely correlated to alignment as far as I can tell?
StellaAthena#3530: Sure. This isn’t even necessarily about abuse though. I wouldn’t want to train GPT-3 on detailed textual descriptions of rape because I don’t trust it to not regurgitate that info in inappropriate circumstances
gwern#1782: _is unsure if AID is evidence for or against that concern_
𝓒𝓵𝓪𝓻𝓪#0888: we don't have to wait for GPT-n to grow so large that strong situational awareness emerges by chance |
𝓒𝓵𝓪𝓻𝓪#0888: We can directly encourage it by labeling the situation and adding auxiliary losses
AI_WAIFU#2844: The other thing is, I don't think it would be that hard to train an "appropriateness" filter
AI_WAIFU#2844: That way AID can still have kinky dungeon fantasies
𝓒𝓵𝓪𝓻𝓪#0888: The problem with a separate filter is that it'll cost as much as the entire big LM if you want it at similar quality.
𝓒𝓵𝓪𝓻𝓪#0888: The big LM can reuse a lot of mental machinery
StellaAthena#3530: Depending on if you are thinking of situation awareness as a general or a specific phenomenon this is either very false or true (and we are working on robustly demonstrating this)
StellaAthena#3530: You can definitely condition GPT-3 to be aware of the context you expect it to operate in
StellaAthena#3530: GPT-3 doesn’t have anything like a human-level ability to infer this without deliberate and detailed prompting
AI_WAIFU#2844: Not if you have the weights at your disposal
AI_WAIFU#2844: then it's just fine tuning
AI_WAIFU#2844: Image gpt style
𝓒𝓵𝓪𝓻𝓪#0888: Earlier you (stella) said you don't trust GPTs to avoid awful outputs.
AI_WAIFU#2844: The LM will give you a strong prior, and you can transfer learn from there.
AI_WAIFU#2844: Also, as we've seen with GANs, discriminators are far more powerful/effective than generators.
𝓒𝓵𝓪𝓻𝓪#0888: Conditioning on output modalities is a way to communicate what types of outputs would be appreciated instead of leaving it to chance.
AI_WAIFU#2844: I think you could get away with a GPT-2 sized filter.
AI_WAIFU#2844: Actually this is an experiment we could do
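A minimal sketch of the filter experiment being discussed: a small separate scorer rejection-samples the big LM's outputs, keeping only generations it judges appropriate. The bag-of-words scorer here is a toy stand-in for the GPT-2-sized classifier proposed above; all function names are hypothetical.

```python
def train_filter(labeled):
    """labeled: list of (text, is_ok) pairs. Returns per-word ok-rates,
    a toy stand-in for fine-tuning a small classifier on labeled data."""
    counts = {}
    for text, is_ok in labeled:
        for word in text.lower().split():
            pos, tot = counts.get(word, (0, 0))
            counts[word] = (pos + int(is_ok), tot + 1)
    return {w: pos / tot for w, (pos, tot) in counts.items()}

def appropriateness(scores, text, prior=0.5):
    """Score a candidate: mean ok-rate of its words (unknown words get the prior)."""
    vals = [scores.get(w, prior) for w in text.lower().split()]
    return sum(vals) / len(vals) if vals else prior

def filtered_generate(generate, scores, threshold=0.5, tries=10):
    """Rejection sampling: keep drawing from the generator until the
    filter passes, so the big LM needs no retraining at all."""
    for _ in range(tries):
        candidate = generate()
        if appropriateness(scores, candidate) >= threshold:
            return candidate
    return None  # caller decides what to do if nothing passes
```

In a real setup `generate` would sample from the LM and the scorer would be a fine-tuned discriminator, but the control flow is the same.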
StellaAthena#3530: @𝓒𝓵𝓪𝓻𝓪 you should read this: https://docs.google.com/document/d/1n8ALlG5F3EQ37-8j35YQSX1vhcj6jNOCp24pMXitlwo/edit?usp=sharing
StellaAthena#3530: We are building a framework that we hope would allow us to experiment with exactly what you are describing, and combining it with interactive human feedback.
StellaAthena#3530: (Assuming I understand you correctly) |
bmk#1476: we also see from GANs that learning from such a signal is super ultra unstable
AI_WAIFU#2844: Only if you want the generator/discriminator to be evenly matched
AI_WAIFU#2844: if you just want to beat the discriminator it shouldn't be too hard
𝓒𝓵𝓪𝓻𝓪#0888: The instability goes away when you mix it with bulk data not generated by the generator.
𝓒𝓵𝓪𝓻𝓪#0888: That makes it so mode collapse isn't rewarding anymore.
𝓒𝓵𝓪𝓻𝓪#0888: Look at ELECTRA, it's actually more efficient than traditional masked LM
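The ELECTRA setup mentioned here replaces a fraction of input tokens with samples from a small generator and trains a discriminator to flag which positions were replaced. A toy sketch of that corruption step (the trivial `proposal` callable stands in for the generator; names are hypothetical):

```python
import random

def corrupt(tokens, proposal, mask_rate=0.15, rng=None):
    """ELECTRA-style corruption: swap some tokens for generator samples
    and label each position 1 if replaced, else 0. The discriminator
    predicts these labels at *every* position, which is a denser
    learning signal than masked LM (where only ~15% of positions
    contribute loss), hence the efficiency gain."""
    rng = rng or random.Random(0)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            fake = proposal(tok)  # generator's replacement sample
            corrupted.append(fake)
            labels.append(int(fake != tok))  # identical resample counts as real
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels
```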
bmk#1476: weird take: building stable GANs is actually prosaic AI alignment
𝓒𝓵𝓪𝓻𝓪#0888: Does this explain why people go crazy when locked alone with no stimulation?
𝓒𝓵𝓪𝓻𝓪#0888: hehe
AI_WAIFU#2844: Like this is very similar to what OAI did, actually I think it's pretty much exactly what they did.
AI_WAIFU#2844: Because they used positive/negative examples to train their ~~discriminator~~ reward function
bmk#1476: GANs are a precursor to tuning from human preferences
bmk#1476: and i'd argue they're actually more informative despite being technically further from the task
bmk#1476: because GAN data is easier to get
𓅬 gabriel_syme 𓅬#3220: are the negatives playing the role of 'data not generated by the generator'?
𝓒𝓵𝓪𝓻𝓪#0888: I'm of the opinion that we can't be sure yet
𓅬 gabriel_syme 𓅬#3220: I loved that intuition btw, thx
𓅬 gabriel_syme 𓅬#3220: I totally lack intuition at this level so these discussions are wonderful
𝓒𝓵𝓪𝓻𝓪#0888: Other tricks also seem to stabilize GANs too
𝓒𝓵𝓪𝓻𝓪#0888: Oh! I read someone's post a few days ago and I think it was here. |
𝓒𝓵𝓪𝓻𝓪#0888: They said something about backpropping end to end through the reward model
𝓒𝓵𝓪𝓻𝓪#0888: I have no idea if it'll work, but you could imagine the reverse of that as one of the tricks that makes GAN/RL stable
𝓒𝓵𝓪𝓻𝓪#0888: When you generate two samples and train on the better one, you're basically implementing a ghetto PPO
𝓒𝓵𝓪𝓻𝓪#0888: My hunch is that's why it doesn't diverge or mode collapse, it's plainly not allowed to get that far away from the initial pre-trained point (insert technical handwaving here)
𝓒𝓵𝓪𝓻𝓪#0888: Something something, the pre-trained model only has to make tiny adjustments to prefer one of its own outputs
𝓒𝓵𝓪𝓻𝓪#0888: Seems just as well justified as "low KL divergence good"
𝓒𝓵𝓪𝓻𝓪#0888: (aka the reasoning behind PPO being a benefit)
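The "generate two samples and train on the better one" loop being described can be sketched as below. The `sample`, `reward`, and `update` callables are hypothetical stand-ins for the policy LM, a reward model, and a fine-tuning step respectively; this is an illustration of the control flow, not OpenAI's actual implementation.

```python
def best_of_two_step(sample, reward, update):
    """One step of the pairwise-preference loop: draw two candidates
    from the current policy, score both, and fine-tune on the winner.
    Because both candidates come from the model itself, each update
    only shifts probability mass between things the model already
    does, an implicit trust region in the spirit of PPO's KL penalty."""
    a, b = sample(), sample()
    winner = a if reward(a) >= reward(b) else b
    update(winner)  # e.g. one gradient step of ordinary LM loss on `winner`
    return winner
```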
𓅬 gabriel_syme 𓅬#3220: that's interesting, I'll give it a shot!
triggerhappygandi#0001: I don't think they are public access anyway?
triggerhappygandi#0001: I don't know _how_ descriptive they can get, but given that GPT-3 probably has read all of literotica.com and other fanfiction of the like, it can do so already, to an extent.
StellaAthena#3530: @triggerhappygandi You don't think what is public access
triggerhappygandi#0001: Graphic description of rape
bmk#1476: this is the internet, anything that can exist either does exist or will soon exist
jrowe#5371: Archibald Eleuther onlyfans
zphang#7252: get access to archibald's private discord
gwern#1782: you guys filtered out literotica 😦 and I'm sure OA's usual sex filters caught it too
gwern#1782: my https://www.gwern.net/GPT-2-preference-learning#optimization-by-backprop-not-blackbox presumably
triggerhappygandi#0001: Should I put it to test? Make it autocomplete a literotica fanfic
Louis#0144: https://twitter.com/marksaroufim/status/1365021509731774465?s=21
Louis#0144: Have we discussed this |
Spy#9778: has openAI said anything about their decision to not release the DALL-E transformer, or are they just assuming it's understood that they won't
Louis#0144: they released it
Louis#0144: wtf
Louis#0144: where have u been
Louis#0144: lmao
Louis#0144: @Spy
Spy#9778: wait what
Spy#9778: I thought they just released the encoder/decoder
Louis#0144: a few days ago
Louis#0144: and CLIP
Spy#9778: hmm
Daj#7482: It was just the VAE afaik
Spy#9778: I saw their github last night and only saw the VAE
Spy#9778: yeah
Daj#7482: Not DALL-E
Spy#9778: I'm asking about the 12b parameter transformer
Louis#0144: people are already reimplementing DALL-E from those
Louis#0144: OH
Louis#0144: ok
Louis#0144: no |
Spy#9778: that's sorta the meat of it as far as I can tell and they didn't release it
Spy#9778: and also don't seem to say that they won't in their paper
Spy#9778: so I'm wondering if it's forthcoming or if they're gonna hit us with another "too dangerous" claim
Sahl#0630: when will we finally get fully trained models... smh
Louis#0144: too dangerous being "we wanna make money but we dont know how yet"
Louis#0144: LMAO
Louis#0144: honestly DALL-E is way more dangerous than GPT3
Sahl#0630: if they release the code to create it but not the model itself then this is the only reason
Louis#0144: 🤷♂️
Spy#9778: yeah I imagine it's just gonna be part of their API
triggerhappygandi#0001: it was already assumed
triggerhappygandi#0001: And they didn't try to subvert the expectations
EricHallahan#1051: It absolutely destroys modern copyright law.
zphang#7252: I'm surprised that they released the dall-e weights
Spy#9778: they did not
triggerhappygandi#0001: They released the VAE weights iirc
zphang#7252: yea I'm referring to the VAE
triggerhappygandi#0001: A good thing
Louis#0144: if they just released it honestly i doubt there would be much that people could do
zphang#7252: I guess they wanted it to accompany the paper, just as a VAE project |
Louis#0144: it would require the entire copyright driven industry to reformulate itself
Dromarion#3383: Thinking about it though how would copyright et al apply if the features and concepts were applied emergently by something like AI Dungeon? Like imagine a game that develops new features as you play it and in one instance it creates a scenario that is essentially the Nemesis system or takes place in a certain universe like Warhammer
EricHallahan#1051: ^
Daj#7482: Game mechanics cannot be copyrighted, individual characters and similar property can, but then we just get into fanfic scenario
Daj#7482: What will totally annihilate copyright is image, video and voice cloning tech
Daj#7482: Soon, you will be able to write a movie script, pick any cast you want, and have AI make the entire thing
Daj#7482: Well...ok I only know it from table top games
Daj#7482: If it's not the case in video games that's fucked up
Dromarion#3383: I think patents are different but the effect they have as a deterrent is the same. It's the reason there aren't many minigames during loading screens: one company has a patent on it.
zphang#7252: new idea: patent troll bad game mechanics
triggerhappygandi#0001: > takes place in a certain universe like Warhammer
Talk more @Dromarion
cfoster0#4356: ~~Parents~~ Patents are moloch at play tbh
triggerhappygandi#0001: Imagine if Ocarina of Time did this dick move
triggerhappygandi#0001: Gaming industry would be dead
Dromarion#3383: Well WB in particular is known for recycling game mechanics, you see that in their fighting games and the fact that the Mordor combat is copy-pasted from the Batman games. They honestly should have used the money from filing the patent to innovate if they're actually afraid of other firms outdoing them.
jrowe#5371: fuck all software patents
bmk#1476: fuck all ~~software~~ patents
jrowe#5371: the idea isnt necessarily always bad, the implementations have been pretty horrible
jrowe#5371: if youre a clever engineer, you should be rewarded for it, instead of buried in imitations |
jrowe#5371: i like that as a social principle
bmk#1476: something about information wanting to be free
jrowe#5371: but maybe make patents last 10 years and anyone can use the tech, but they pay .01% of revenue from any product using it, or $100 a year, whichever is higher
bmk#1476: also, being first to market is already a big advantage
jrowe#5371: makes it really rewarding for actually big useful popular things, and a pat on the head for a good effort
bmk#1476: i dont like patents mostly because in practice theyre both completely unenforceable for legit purposes and also super abusable for non legit purposes
EricHallahan#1051: I still can't bring myself to say "Oh, let's throw away the concept of a patent entirely," but the system is entirely broken and gamed today.
bmk#1476: I'm not completely opposed to trying better patent systems but I'm a priori skeptical
EricHallahan#1051: I think that is a logical stance to take.
EricHallahan#1051: Any system that can be designed can be gamed.
Sahl#0630: in a lot of cases the alternative to patents is secrecy
Sahl#0630: which I’m not sure is better
Sahl#0630: at least with patents people in other jurisdictions benefit
bmk#1476: i think open source software is a really good case study
bmk#1476: theres no such thing as patents or secrecy, and yet it still manages to work somehow
jrowe#5371: any software can be decompiled, but licensing is the game to play
Sahl#0630: or just host the software so it’s never available to be decompiled 👀
jrowe#5371: roll your own version, but if you're just copypasta software dev, i think its fair to shut you down
jrowe#5371: the market should reward someone who copies the look and feel and functionality of netflix if its more performant
Sahl#0630: but the market should also reward people who put the initial effort to make the path |
Sahl#0630: it’s way harder to do something once than to do something twice
EricHallahan#1051: (I need to finish hacking my car. That project has been left sitting for at least six months or so.) `:|`
zphang#7252: isn't that plex
𓅬 gabriel_syme 𓅬#3220: yeah and that was the reason this amazing gameplay mechanic was forever gone...
Reality.hack();#9445: hi probably a dumb question but is there a demo of gpt-neo somewhere or any other of the models?
Reality.hack();#9445: pretrained model maybe
bmk#1476: not yet, but hopefully soon
Reality.hack();#9445: on what part of the timeline would you think you are like at the beginning or close to the end?
Reality.hack();#9445: maybe somewhere in the middle
Daj#7482: If you're asking about when the full sized model will be done, we can't give any estimate on final training time since we haven't yet evaluated performance on the final hardware we will use
Daj#7482: The code is doing pretty well, closer to finished than not
Reality.hack();#9445: cool
LaPapaya#4347: hear me out
what if you guys use gpt-image as a starting point to (re)creating Dall-e
LaPapaya#4347: :guilty:
bmk#1476: please elaborate
gwern#1782: you mean iGPT?
gwern#1782: (iGPT ~ DVAE, I'd expect. you want to pretrain on natural language since the GPT is going to be processing natural language text captions...)
leg0m4n#7262: greetings, fellow AGI fanatics, found this new interesting blogpost on fine tuning, anyone have any thoughts about it?
https://ruder.io/recent-advances-lm-fine-tuning/ |
Louis#0144: not many people here actually like AGI
Louis#0144: lol
Louis#0144: theres a vocal minority that does
Louis#0144: but most people here are quite skeptical
Louis#0144: (out of the people that actually contribute to the scientific discussions)
Daj#7482: Everyone I talk to here has short timelines just about
Daj#7482: And think GPT3 type tech and scaling are keys to general world modeling and AI
Louis#0144: eh
Daj#7482: That's kinda Eleuther's raisin d'etre
Daj#7482: Lol raisin d'etre
Daj#7482: Nice one phone
Louis#0144: Aran and I both agree retrievers are a really strong way to do that
Louis#0144: lucid and stella think group equivariance is the way to do that
Daj#7482: I thought retrievers went nowhere
Louis#0144: theres a lot of different trains of thought
Louis#0144: modern retrievers suck
Louis#0144: 100%
Louis#0144: theres a lot of work to do
Daj#7482: But that's all still pointing at AGI
jrowe#5371: no, YOUR MOMS A DIRTY RAISIN |
Daj#7482: Just different methods
Louis#0144: eventually yeah
Louis#0144: just really far away
bmk#1476: :smallbrain: raison d'être
:bigbrain: VERANLASSUNG DER EXISTENZ
Daj#7482: Most people I have asked have "in our lifetime" AGI timelines
Daj#7482: That's not a "vocal minority" lmao
Louis#0144: in our life times vs in the next ten years
Louis#0144: when most pop sci talks about agi
Louis#0144: its ten years
Louis#0144: I think in our life times is correct
Daj#7482: You're just doing a motte and bailey now
jrowe#5371: how many of the 638 lurkers do you think are in the 1-10 year camp?
Daj#7482: Probably way more than average
bmk#1476: how long is a rope
Daj#7482: Across ML
bmk#1476: my 10 year probability is something like 20-30% tbh
Daj#7482: Eleuther is a shelling point for scaling people
jrowe#5371: my 5 year is 33%
bmk#1476: the two biggest factors that it's that low is a) the probability of something catastrophic like war or solar storm or yellowstone eruption or financial collapse ~~or pandemic~~, etc and b) the probability that scaling isnt the solution |
bmk#1476: im desperately hoping that b is true
bmk#1476: and not a
jrowe#5371: closer to 90% within 10 years
Daj#7482: "vocal minority" lol
Daj#7482: Singularity is here it's already happening
Daj#7482: 10 or 20 years is no big difference
bmk#1476: this depends on the definition of singularity
Daj#7482: Singularity is a purely aesthetic word lol
jrowe#5371: this is where the people who've self selected come to observe and/or participate in the science (reproducibility) part of OpenAIs gpt and other experiments
bmk#1476: ok, emergent singularity consciousness time
bmk#1476: that's how the *lurkers* are self selected
jrowe#5371: yes
jrowe#5371: the OGs obviously shook out from some other gloriously weird confluence of events
bmk#1476: gloriously weird is the perfect term
jrowe#5371: unless you all are cousins or something?
gwern#1782: kids aren't allowed to have a raisin d'ether. that's why we send them to school all day as daycare
bmk#1476: \/me points to that one wbw post
bmk#1476: this is what i think a raisin d'ether is https://cdn.discordapp.com/attachments/729741769738158194/815020658630197258/DliZzVM.png
bmk#1476: im too lazy to ps the ethereum logo onto a raisin but if i wasnt id post that
jrowe#5371: thats the whole point of dall-e |
bmk#1476: anyways be back tomorrow im going on a binge of alignment papers and this message is my social pressure committment so if you see me send a message in here in the next few hours pls yell at me
jrowe#5371: easier meme 4 glorious lolz
Aran Komatsuzaki#5714: i think neither retriever nor equivariance is like the solution for AGI, and i guess lucidrains and stella think likewise.
they're just a promising possible add-on, and they may not work at all.
Louis#0144: Yeah
Louis#0144: I agree
Louis#0144: I don’t think it’s a solution, I think it gets us closer
Louis#0144: Thinking of a solution to AGI is a silly endeavor
Louis#0144: 🤷♂️
Deleted User#0000: I really only care about equivariance for alphafold2
bmk#1476: oa finetuning is soliciting applications now so i put one in for Pile and Alignment stuff
bmk#1476: >We do not have a concrete timeline, but will incrementally expand access in the coming months.
StellaAthena#3530: Cool
Ward#1738: https://twitter.com/RyanQasem/status/1365491294215172099
cfoster0#4356: 🔘: Elon is talking out his ass
🔘: We're actually close to solving physical world AI
me: 👈😰
bmk#1476: why not both
𓅬 gabriel_syme 𓅬#3220: what does solving physical world AI even mean?
bmk#1476: it's muskspeak for multimodal im assuming? |
𓅬 gabriel_syme 𓅬#3220: oh ok
bmk#1476: total guess
𓅬 gabriel_syme 𓅬#3220: every time I hear solving the real world with AI I just visualize the last construction site I was in
EricHallahan#1051: *WGAN would like to know your location.*
Enealor#6657: Lipschitz would like to ask you about your recent posts
bmk#1476: WGAN would like to shred your new years resolutions list
Louis#0144: Elon blocked me
Louis#0144: Idk why
Louis#0144: 🤷♂️
Louis#0144: I mean tbh I would block me too
Louis#0144: Watch ur lips boy
Louis#0144: Smh
Louis#0144: Bum bum tsk
gwern#1782: :rugrats flashbacks:
Louis#0144: Ah yes the famous rugrats lipschitz theorem
Louis#0144: How could I forget
AI_WAIFU#2844: It means his cars will stop crashing or bitching out when they're on autopilot.
gwern#1782: well you know, doctor lipschitz has the answer to all your problems
rom1504#5008: "solving AGI" is pretty boring compared to building an end to end system with ml that can actually do something useful like solving cancer, automating building (houses, factory) construction, automating transportation and augmenting people with search and recommendation systems.
If the "AGI" can do all that then fine, but trying to solve any of these problems (and more) as an intermediary goal seems like a shorter path. |
rom1504#5008: Writing that, I realize there is no expression like "generative system" where someone asks to watch a movie about X and the system generates it for them.
Might replace search and recommendation systems
nz#9710: how is AGI boring
rom1504#5008: It doesn't work and doesn't do anything
rom1504#5008: Right ?
nz#9710: I mean, right now no one seems to have it, but that's the goal. I really don't see how working with that goal can be boring
rom1504#5008: My point is it depends what you include in AGI. If it's about building a human-like mind that can do the same as a human but nothing more (not the problems I mentioned above), then it's not that interesting
nz#9710: really
andyljones#7746: this lightning-struck key is pretty boring - it doesn't work and it doesn't do anything!
andyljones#7746: why'd anyone want to work on this 'electricity' stuff
rom1504#5008: Electricity was kind of fun as a scientific experiment, but wouldn't you say that the light bulb was what made it actually interesting and useful ?
andyljones#7746: great now go say that in 1750
rom1504#5008: I would say the same about nuclear fusion in recent days. It's pretty cool but until we can actually generate electricity with it it's mostly a physics curiosity
andyljones#7746: closest you'll get is 'hey ol benny boy, why aren't you working on a faster candle'
andyljones#7746: i don't entirely disagree with your conclusions, but your justification is poor. by and large, any research direction that's got quantifiable, obvious, immediate benefits is hugely oversubscribed. if you want to do valuable work, it behooves you to look in places that for whatever reason *no-one else wants to*.
andyljones#7746: frankly i think 'working on human-like intelligence' is oversubscribed too, but my alternate isn't 'solving cancer' because you and i are the shoeshine boys when it comes to cancer. if even *we* think working on solving cancer is a good idea, it is very likely a bad idea
rom1504#5008: Do you think that if someone solves AGI it will automatically solve cancer and make everyone rich ?
ethan caballero#6044: If the politics (e.g. UBI) are good, yes.
andyljones#7746: lol there ain't an 'everyone' where we're going
rom1504#5008: I don't know if solving big problems like that and creating an AGI are not completely orthogonal |
andyljones#7746: instrumental_convergence.txt
rom1504#5008: Things that are done to reach any of the 2 might help the other, but it's not for sure that an AGI would automatically solve big problems
rom1504#5008: I don't disagree with most of your points btw.
Mostly my point is "AGI is not the most difficult and useful goal in AI, there are bigger things"
andyljones#7746: solving cancer and making everyone rich just... aren't that big of a deal. if your reasoning about AI is on a scale of less than galaxies, it's bit myopic
nz#9710: There are bigger things than AGI?
andyljones#7746: if cancer's solved and everyone's rich, it'll be a by-blow of the important stuff
andyljones#7746: like, 'cancer' just isn't a fundamental problem. 'programmable biology', that i'll take
andyljones#7746: though even then i'm skeptical of carbon-oxygen-hydrogen life's place in the future
andyljones#7746: similarly 'everyone's rich' is a by-blow of getting automated manufacturing down
andyljones#7746: and if you've got self-replicating machinery, lol, there's more going on than everyone being rich
rom1504#5008: Yeah about solving cancer I mostly mean solving biology / solving death, that kind of things. Things like alphafold but add 10 breakthrough
nz#9710: AlphaFold 2 proves that work towards AGI can and *will* have consequences on the kind of problems you consider most important.
rom1504#5008: Ah, alphafold 2 is work towards AGI? Seems more like "let's apply ml to hard problems" to me
andyljones#7746: no, alphafold2 is a byblow of work on text modelling
nz#9710: Yea, transformers were developed specifically for NLP.
andyljones#7746: people were trying to make better madlibs and *accidentally* made a serious stride towards curing cancer
mgostIH#0245: I find it quite cool how transformers can even tackle Travelling Salesman https://twitter.com/xbresson/status/1363928204344352770?s=19
rom1504#5008: Ok but were people that created and developed the transformer for nlp thinking about how to make an AGI or how to create better search or recommendation systems ?
Or just interested by solving nlp. |
rom1504#5008: Or translation systems
triggerhappygandi#0001: and how would you know that?
andyljones#7746: original authors were there for pushing the boundaries on translation, a lot of work since then's been by explicitly-AGI-oriented organisations
rom1504#5008: Ah yeah just realized that openai and deepmind explicitly put AGI in their mission statement. Interesting
triggerhappygandi#0001: You're trying to create a model that can cognitively challenge a human brain, with trillions of times more robust memory. It isn't comparable to our brains.
triggerhappygandi#0001: A mind that has all the human knowledge in store and enough computation to process it can do pretty much everything we could ever do.
rom1504#5008: Ok I guess I'm convinced that there are people working explicitly for that goal that are doing great things.
rom1504#5008: But still everything they release is pretty applied no ?. Are there some good papers / code releases about AGI and not applications ?
triggerhappygandi#0001: We are nowhere close
rom1504#5008: Are you claiming someone created an AGI in house and didn't say anything about it ?
triggerhappygandi#0001: No, I am saying how do you know it doesn't work, if we haven't even built it yet?
triggerhappygandi#0001: A language translation model just solved protein folding. Baby steps
rom1504#5008: Ah yeah I meant it doesn't work today. As far as I know there are no baby steps that are working on this
triggerhappygandi#0001: Every single piece of technology didn't work, until it did
rom1504#5008: But yeah I see that people working with the long term goal of AGI actually develop some useful stuff that they think may help with AGI
Daj#7482: AGI is and always has been the goal of the entire field of AI
Daj#7482: - Schmidhuber, 1991
Daj#7482: (this is ajoke)
Daj#7482: But AGI is the goal
triggerhappygandi#0001: No |
triggerhappygandi#0001: It is not
Daj#7482: Everything else is just nifty offshoots
Daj#7482: Ask McCarthy and Minsky
triggerhappygandi#0001: Schmidhuber is a precursor to superhuman AI
triggerhappygandi#0001: Is he trolling us? This is definitely trolling. Why else would it exist? https://cdn.discordapp.com/attachments/729741769738158194/815173860546707456/unknown.png
Daj#7482: The 1991 date is a meme for a reason lol
Daj#7482: My alma mater :ultrazucc:
triggerhappygandi#0001: He went out of his way and submitted this report on arxiv
triggerhappygandi#0001: Why
rom1504#5008: I guess the confusing part in AGI is "G". People don't only want to create any random human-like intelligence, they want to extract the skills from the smarter people and create skills that nobody has.
At what point do you reach general intelligence ? When you have all of an average human, all of the smarter humans or when you have half of possible skills that humans do not have ?
Daj#7482: At this point you're just arguing trivia
Daj#7482: There is no unified definition for AGI and it doesn't matter
Daj#7482: We build things that are as smart and smarter than humans and make them solve...everything
triggerhappygandi#0001: All you need to know is that attention is all you need
Daj#7482: Everything else is implementation details
triggerhappygandi#0001: Simple as
rom1504#5008: Ok yeah, I'm fine with that as a goal
rom1504#5008: If that's what people think AGI is then indeed it's a worthwhile goal
rom1504#5008: Also everyone in the AI field is pursuing that so it's pretty convenient |
triggerhappygandi#0001: All we're trying to do is solve hard problems
triggerhappygandi#0001: Everything follows
rom1504#5008: Yeah that's good
Sahl#0630: 3 billion devices...
Sahl#0630: where have I heard that cursed phrase...
Sahl#0630: ☕️
𝓒𝓵𝓪𝓻𝓪#0888: Not right! My coworkers use our question answering system for everything at this point and it's immensely useful. It's one of the big factors allowing such a small team to keep a strong lead over much larger teams bidding against us.
Daj#7482: What kind of QA system is this? I haven't heard of much productive use of QA
Daj#7482: Sounds interesting
𝓒𝓵𝓪𝓻𝓪#0888: It's completely open domain and basically unrelated to academic QA systems. There's no predefined formats or templates or anything like that, just freeform NLP and a command processor. The actual work that went into it was (and still is) training it to recognize when it needs to output console commands to complete an objective.
Daj#7482: That sounds extremely interesting, is there any info I can read about this (or you are willing to share) or is this all proprietary?
Daj#7482: From my experience freeform NLP is still a hassle to use, so curious to see someone getting it to work
𝓒𝓵𝓪𝓻𝓪#0888: My gut reaction is to think you might be trying to use Transformers directly? which basically aren't good enough at NLP, we use GNNs.
𝓒𝓵𝓪𝓻𝓪#0888: It's proprietary but our IP is employee owned and actually The Pile has been very useful so I'll get back to you about details hopefully lol
Daj#7482: Ohh yeah, I've not seen much non-Transformer NLP work at all, I'd be very curious to hear more about this
𝓒𝓵𝓪𝓻𝓪#0888: Someone posted Hinton's thing in the other channel. He's almost on the right track.
𝓒𝓵𝓪𝓻𝓪#0888: Except he's caught up in the weeds of avoiding dynamic allocation. Embrace it instead, the problem is training not architecture.
Daj#7482: Hah I was just watching that
Daj#7482: Please do elaborate (if possible)
Louis#0144: Where do you work |
Louis#0144: I’m surprised companies are still willing to hire bronies
Louis#0144: Jkjk
𝓒𝓵𝓪𝓻𝓪#0888: Wonderful.
𝓒𝓵𝓪𝓻𝓪#0888: YouTube app apparently can block screenshots lol.
Louis#0144: Yeet
Louis#0144: As do a lot of other apps
Louis#0144: It’s hardware level blocking too
Louis#0144: You can’t resolve it with rooting
Louis#0144: Or JB
𝓒𝓵𝓪𝓻𝓪#0888: 51:43 on Hinton's video
Daj#7482: I'm about to reach that point lol
𝓒𝓵𝓪𝓻𝓪#0888: "... without dynamic allocation of neurons to nodes in the parse tree"
𝓒𝓵𝓪𝓻𝓪#0888: Basically this criteria is arbitrary and unnecessary. Dynamic allocation makes it easy to implement in practice.
𝓒𝓵𝓪𝓻𝓪#0888: Also, "Nobody else is interested in solving this problem, but they should be." is just :3
Daj#7482: Where do you work? Sounds like cool stuff
𝓒𝓵𝓪𝓻𝓪#0888: It's highly frowned upon to admit we exist to people who haven't been selected as clients.
Daj#7482: Ah, fair enough
nz#9710: WTF I'm really intrigued now
𓅬 gabriel_syme 𓅬#3220: can someone share the video, I missed thato ne
Daj#7482: https://www.youtube.com/watch?v=eEXnJOHQ_Xw&feature=youtu.be |
𓅬 gabriel_syme 𓅬#3220: also that sounds like a spy novel )
𓅬 gabriel_syme 𓅬#3220: thanks connor!
janus#0150: I'd like to apply to be a client
Louis#0144: My dog loves blueberries
Louis#0144: Omg
Louis#0144: He’s a blueberry eating monster
Louis#0144: Literally just demolished an entire box
janus#0150: blueberries good grapes bad
janus#0150: thats what I always say
𓅬 gabriel_syme 𓅬#3220: both are super food almost so you can't miss
bmk#1476: Woah, the pile has found use, exciting!
bmk#1476: My top guess is some kind of trading firm or people who develop things soley for trading firms
bmk#1476: Runner up guess is govt/military intel
triggerhappygandi#0001: Well duh. Give it a year and it will be widespread
IKEA#9631: My top guess is LARP :berk:
IKEA#9631: "It's highly frowned upon to admit we exist to people who haven't been selected as clients."
Imagine non ironically saying that with a brony pfp
bmk#1476: Why would anyone lie on the internet
bmk#1476: One of our more alignment inclined members is literally AI_WAIFU with a to aru kagaku no railgun pfp
gwern#1782: you guys are curating a list of all uses of The Pile, right? for the social proof |
bmk#1476: i can just go on google scholar and see
bmk#1476: anyone using pile better cite us
StellaAthena#3530: Yeah, we have a list. I was going to put it on the Pile website, but I decided to wait a bit until it was longer.
StellaAthena#3530: Currently I’m aware of 4 citing papers and 1 group at MSFT that’s training models on it (nothing publicly facing yet).
rom1504#5008: That's great you got a working QA system, and if it's built for the purpose of one day using it in an AGI, even better.
But I wouldn't call this an AGI or even like 1/10 of an AGI
janus#0150: Did you guys hide any secret keys in the Pile?
rom1504#5008: I definitely agree there are tons of deep learning systems that work very well for practical applications
janus#0150: If not perhaps theres still time...?
janus#0150: Repeat a secret phrase in the non-CC data and then see if Microsoft's model can parrot it exactly
StellaAthena#3530: Also a group at Facebook that’s excited to train on the Pile when I finally put out the Ethically Sourced GMO-Free version
StellaAthena#3530: @janus No, we didn’t.
janus#0150: gotcha 😉
StellaAthena#3530: Any particular reason you’re interested? I can give you a list if you’re writing a blogpost or something.
bmk#1476: wen pile gwernpost
gwern#1782: no, just reminding you because it's the sort of important but easy to neglect thing dataset authors often neglect
gwern#1782: you need to be monitoring referrers too. lots of things don't show up in google scholar alerts. (weirdly, lots of things seem to show up in google scholar but *not* google scholar alerts. every year I go and check google scholar by hand for 'gwern' hits and I usually wind up with at least 6 downstream user papers to add to my dataset bibliographies which *should* have been in my GS alerts but either I didn't notice somehow or they weren't)
gwern#1782: plus of course you need to jailbreak the papers. not for their sake, but so people will see that your dataset is being used 🙂
gwern#1782: enlightened self-interest, as always
bmk#1476: Thankfully, anything that ends up on arXiv gets wrapped into semantic scholar pretty quickly |
gwern#1782: arxiv is fine, it's the IEEE papers which are behind crummy paywalls -_-
bmk#1476: Once your "cited by" number is big enough, enumerating every single one exhaustively becomes no longer necessary
bmk#1476: And given the speed of citations we're receiving even just now, i anticipate being there within a year
gwern#1782: _thinks every one helps and that it doesn't take much time to stick a link in a bibliography or upload a PDF_
StellaAthena#3530: Looks like GS has all the public papers I’m aware of currently (one more going online next week)
bmk#1476: This is the dataset quality thing?
StellaAthena#3530: Yup
bmk#1476: Exciting
bmk#1476: Citation count go brrr
𝓒𝓵𝓪𝓻𝓪#0888: ... citation excitation
dmvaldman#4711: i dunno if there's a more appropriate channel to take this, but this deep fake is cray https://www.tiktok.com/@deeptomcruise/video/6933305746130046214?sender_device=pc&sender_web_id=6894336900674373126&is_from_webapp=v1&is_copy_url=0
EricHallahan#1051: https://cdn.discordapp.com/attachments/729741769738158194/815317520914317352/800px-Cray_Inc.png
bmk#1476: daily reminder that tiktok/douyin bad
dmvaldman#4711: my discriminator brain is confused. if you look at the history of the account you can see the fake getting better and better
LaPapaya#4347: The image gpt. That one which completes (really small) images. I always thought it was a prototype for dall-e (thinking that way, maybe the next thing they will try to do is a gpt-3 musenet?)
LaPapaya#4347: iGPT...? :smallbrain:
bmk#1476: please elaborate on how you plan on using it. in detail and not just broad strokes, because i just dont see it
LaPapaya#4347: If I knew anything about this whole AI stuff I would...
LaPapaya#4347: Well, forget about that
triggerhappygandi#0001: Social media in general bad. |
triggerhappygandi#0001: The ones where you can't hold conversations
AI_WAIFU#2844: Hot take, TikTok is actually a net good. There comes a point where the information density of a platform becomes so low that even polititrib nonsense becomes impossible to carry out effectively, and it gets drowned out in an ocean of Belle Delphine wannabes and unfathomably cringe lip syncs.
Every person on TikTok is a person not getting into slap-fights on twitter/reddit.
bmk#1476: fair
triggerhappygandi#0001: Belle Delphine is a fucking genius. Never has a woman managed to sell water from her bathtub, let alone at $30/piece
StellaAthena#3530: You overestimate men
triggerhappygandi#0001: Damn
triggerhappygandi#0001: I really do
Aran Komatsuzaki#5714: People have been buying used clothes/hair/pee/poop etc, so I'd say Belle's contribution to kink economy lacks novelty.
triggerhappygandi#0001: Wait
triggerhappygandi#0001: Did they buy poop for the same reason people bought the bath water?
Aran Komatsuzaki#5714: That's my assumption.
triggerhappygandi#0001: I automatically assume they did it for medical purposes
Aran Komatsuzaki#5714: Well there are a variety of purposes.
triggerhappygandi#0001: this _can't_ be true. noooooooooooooo
Aran Komatsuzaki#5714: I have an autoimmune condition, so I considered buying one before.
triggerhappygandi#0001: I don't understand
Aran Komatsuzaki#5714: for medical purpose
Aran Komatsuzaki#5714: But sadly it went into remission spontaneously.
triggerhappygandi#0001: How is poop medical? The only reason to buy poop would be to do some research on its ingredients
triggerhappygandi#0001: It's not like it can cure anything
Aran Komatsuzaki#5714: oh it's for faecal transplant
Aran Komatsuzaki#5714: there's DIY guide for faecal transplant online
triggerhappygandi#0001: Human body scares me
triggerhappygandi#0001: I thank god of every religion that the only medical condition I have is poor eyesight.
Sid#2121: :berk: "sadly my autoimmune condition went away, so i no longer had an excuse to buy poop online"
triggerhappygandi#0001: Sad story
𓅬 gabriel_syme 𓅬#3220: I paused there for a second 😄
𓅬 gabriel_syme 𓅬#3220: good news that it went into remission!
Louis#0144: What a bold AI discussion
Louis#0144: Wow
triggerhappygandi#0001: Aran made me Google Fecal transplant
triggerhappygandi#0001: :sadge:
triggerhappygandi#0001: This is an actual thing
𝓒𝓵𝓪𝓻𝓪#0888: "Alignment by first hand understanding of the torture we put GPT through that one time."
Enealor#6657: People used to gather human blood and fat after executions for medical purposes. I wouldn't be surprised if "drink bathwater" was a medicine. https://www.ncbi.nlm.nih.gov/books/NBK464468/
Enealor#6657: Poop is interesting! And also weird. There are also poop pills. Thanks, biology!
triggerhappygandi#0001: Why
triggerhappygandi#0001: WHY |
triggerhappygandi#0001: Don't tell me human poop pills exist.
triggerhappygandi#0001: Like I guess frog poop or something has medicinal value
triggerhappygandi#0001: But human poo?
Enealor#6657: It's one path for doing a fecal transplant
Enealor#6657: As I understand it though, poop is full of bacteria, and that is what they are transplanting
Enealor#6657: Like, not just bad bacteria, but effectively neutral bacteria that either is part of your digestion cycle, or just crowds out things that are bad for us
Enealor#6657: Remember, we are all spaceships of bacteria!
Jonnathan#1234: I read once that It's suspected dogs eat poop for the healthy gut bacteria.
Sahl#0630: yup human poop has great value
Sahl#0630: 👍
Enealor#6657: Rabbits will eat certain types of poop in order to re-digest, and it's speculated that some animals pass on gut flora via eating poop.
Enealor#6657: Then you have plants and bugs that specialize in eating poop, but that is sort of a different thing
Sid#2121: #general has discussions about shit, #off-topic about tpus
Sid#2121: just as it should be
Enealor#6657: Mmm. So I guess my point is, some times you have to feed a network shit in order to improve it.
Enealor#6657: Yes. That was my point...
Ward#1738: lots of very interesting and impressive results with these type of transplants
triggerhappygandi#0001: Wen piss?
triggerhappygandi#0001: Don't try it lmao
triggerhappygandi#0001: PSA: Don't try injecting someone else's shit in without medical supervision. |
gwern#1782: yeah, people have actually died from fecal transplants iirc
gwern#1782: a shitty way to go
𓅬 gabriel_syme 𓅬#3220: it's way up there as one of the worst ways to go
bmk#1476: But paperclips tho
Ward#1738: I was referring to the scientific studies
gwern#1782: but aran was referring to buying sketchy poop off the internet and reverse-eating it lol
nz#9710: *reverse-eating*
bmk#1476: Thanks y'all for ruining my breakfast time today
nz#9710: always happy to help
triggerhappygandi#0001: badum tsssss
triggerhappygandi#0001: Lmao don't ever visit india
CRG#8707: @jin kazama Google just did a paper on how little transformer modifications actually matter: <https://arxiv.org/abs/2102.11972>
jin kazama#3736: Ah thanks, will be reading it now
CRG#8707: And looks like linear attention doesn't scale well for text: https://discord.com/channels/729741769192767510/729741769738158194/809815177670557756
jrowe#5371: bring your poo to the loo, then post it on koo!
𓅬 gabriel_syme 𓅬#3220: this reminds me of the cool contrastive learning paper where the initial triplet loss was competitive to all the fancy approaches. I love these shared experiments (I'm saying this early, after only reading the abstract)
Deleted User#0000: Is there a website compiling best LM/GPT3 prompts to accomplish different tasks?
Deleted User#0000: i guess like elicitIDE but are the under-the-hood prompts public?
EricHallahan#1051: If I knew, I would be interested. `:P`
It seems like that would be a very useful resource. |
kindiana#1016: http://gptprompts.wikidot.com/
jrowe#5371: seems like people are hoarding more often than not
Deleted User#0000: but we need a nice central repository:p
Deleted User#0000: cool. checking it out
jrowe#5371: that context stuffing seems apropos to #art
Louis#0144: https://twitter.com/arxiv_daily/status/1365423589462208512?s=21
Louis#0144: @bmk
bmk#1476: Interesting
Louis#0144: My friends wrote this
Louis#0144: Thought you’d want to see
bmk#1476: I mean, I don't really do storytelling so much as I do LM, with storytelling as a special case of that
bmk#1476: So most of this is inapplicable for me
Louis#0144: Yeah of course but a lot of the challenges raised here apply to knowledge based language modeling as a whole
Louis#0144: Which imho is pretty relevant to zero shot stuff
bmk#1476: The solution there is more parametrrs
Louis#0144: sometimes
Deleted User#0000: thats quite a nice resource. but it seems it hasn't been updated in many months?
kindiana#1016: I've never had gpt3 access lol, just something I've come across
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/815752223538544650/unknown.png
bmk#1476: do these people not realize that the slack messages disappear after a few days
Deleted User#0000: maybe they keep reposting it
bmk#1476: unfortunately, no
zphang#7252: this is slack's answer to instagram stories
EricHallahan#1051: Why does every social media platform have to copy every other one?
EricHallahan#1051: (the question is rhetorical)
zphang#7252: we peaked at yo
𓅬 gabriel_syme 𓅬#3220: since I joined this and the TPU Podcast discords I haven't used twitter at all. I get all the news I care about from here, and much much more on top of that. It's a very nice feature 🙂 not necessarily less time but time better spent
zphang#7252: that's the new frontier: social media platforms that just summarize other social media platforms
zphang#7252: like that one clubhouse room I saw about "let's discuss the top tweets of the week"
𓅬 gabriel_syme 𓅬#3220: lol
𓅬 gabriel_syme 𓅬#3220: it's more of a quality filter than a summary but yeh, it is really valuable (at least for me). And that's ofc excluding all the ideas I read about in here by all you smart people
andreas#5842: you can see the generated prompts in elicit by going to create tasks -> preview -> show prompt https://cdn.discordapp.com/attachments/729741769738158194/815801376407158844/elicit-prompts.mp4
Deleted User#0000: thanks for letting me know!
kurumuz#5695: I'm training a styleGAN2 model rn but not sure if this speed is normal for a P100 https://cdn.discordapp.com/attachments/729741769738158194/815966174721605713/unknown.png
EricHallahan#1051: IDK, I don't train StyleGAN models.
kurumuz#5695: oh, i should add that images are 512x512
cfoster0#4356: The TPU Podcast discord has a stylegan channel, which might be a better place to get an answer to your question
kurumuz#5695: thanks
neel#1028: Hi everyone! I am new to this community and am incredibly interested in the open source AI research work being done by this community.
I am someone who would be considered a beginner in this community. Most of my work has been implementing GANs in specific computer vision problems, and researching NLP debiasing methods. My major interest also lies in NLP.
While I might not be at the level required to contribute to projects here, I was curious to know what experienced individuals in this community would advise a beginner (i.e. me) who wants to contribute to projects here.
Thanks!
EricHallahan#1051: Welcome! If you haven't already, check out #rules, where we have resources that describe a lot of what we do here. If you continue to have questions, feel free to ask.
neel#1028: Thanks a lot @EricHallahan ! Will go through the resources available as well.
Louis#0144: @bmk did we get rid of the welcome channel
EricHallahan#1051: "Wait, we never had a welcome channel?"
"Always has been."
🌍 👨🚀 🔫 🧑🚀
StellaAthena#3530: A month ago, yes
mgostIH#0245: :sadge:
triggerhappygandi#0001: How will we ever see how many times Lucid jumps in and out
triggerhappygandi#0001: He can single handedly keep the welcome chat alive
Louis#0144: can we have a bot that announces every time lucid leaves or joins
Yukari Yakumo#0001: Hello - I am trying to evaluate DALL-E, just to get something generated to get ideas.
What would be the channel to discuss this?
https://github.com/EleutherAI/DALLE-mtf/tree/main
EricHallahan#1051: I'll direct you to #multimodal
StellaAthena#3530: Hey guys, gals, and non-binary pals!
|
Two months ago I took an informal poll about what people were getting out of this community and what they would like to get out of it. It’s been some time and we’ve grown, so I figured following up would be worthwhile. Feel free to DM me if you want to talk one-on-one.
https://forms.gle/UrimzxLdQYB4xkEB9
Deleted User#0000: > can we have a bot that announces every time lucid leaves or joins
@Louis well, apparently the people in level 5 have been spying on my coming in and out
Deleted User#0000: so I'm coming in less often
𓅬 gabriel_syme 𓅬#3220: Now I'm imagining Level 5 as a floor in the Ministry of Information full of people, type writers and eleutherai stamps
Sid#2121: oh no, lucid has an insider in level-5 :berk:
bmk#1476: Plot twist, this is actually a barium meal because only half of level-5 members have access to the spying channel
bmk#1476: One bit of anonymity lost, heh
LaPapaya#4347: Let's say, hypothetically, that Openai will make another experiment using gpt-3 as a base
And let's say, for the sake of argument, that this experiment is Musenet 2.0
How would it work? Text2music maybe?
LaPapaya#4347: "A very sad melody with the style of Beethoven"
EricHallahan#1051: The problem with something like that is a lack of parallel data.
EricHallahan#1051: Also copyright.
LaPapaya#4347: Isn't that also a problem with dall-e?
EricHallahan#1051: The music industry is very protective of their works.
EricHallahan#1051: I don't know. |
jrowe#5371: bizarrely so. they use magic to overcome logic and reason in their lawsuits against musicians
jrowe#5371: facebook and tiktok have realtime reactions for high quality video and I'll assume audio as well
jrowe#5371: maybe they'll be the ones with the large music models
cfoster0#4356: If anyone would jumpstart a big court case trying to restrict generative modeling it'll be one of the music megacorps
EricHallahan#1051: (Most likely NBCUniversal)
gwern#1782: the music industry didn't so much as peep about Jukebox, I noticed
gwern#1782: and OA *did* release Jukebox
gwern#1782: plus quite a set of samples, often of copyrighted/famous singers, to rub the fact in
bmk#1476: not the training data tho, obv
jrowe#5371: did jukebox have anything even remotely good, though?
jrowe#5371: everything I heard was interesting, but musically meh
EricHallahan#1051: It was impressive, but not *good*.
guac#4716: damn i liked the elvis tracks
gwern#1782: 😦 I liked some of them
EricHallahan#1051: Quality-wise
gwern#1782: (plus of course the mere fact that it was 'impressive' shows that human-level is damn close)
jrowe#5371: anything good that people tried to profit from represents a potential revenue stream, maybe they're just biding their time
Teemochu#8740: Jukebox stuff reminds me of music in dreams tbh
jrowe#5371: as parasites, they'll die if generative software gets better than humans, unless they're ready to sink their claws in
Teemochu#8740: At least the hand-picked examples I've heard on YouTube |
EricHallahan#1051: Doesn't all of generative modeling remind you of dreams?
Teemochu#8740: The fuzziness is exactly right in a way I haven't really heard from intentional attempts to replicate that feel.
Enealor#6657: They have the data, so they can just use generative model to write more pop music
cfoster0#4356: IIRC there are usually a million and one "rights holders" for a piece of music so I dunno if they can, above board
EricHallahan#1051: Well you have the rights holder of the lyrics, the composition, and the performance just to begin with.
𓅬 gabriel_syme 𓅬#3220: I think their biggest fear isn't software being more creative than the music industry, it's that the music industry itself has arguably become less creative over time. So yes, those types of songs will imo be easily created (soon) with generative models.
The same exact thinking happens in design (my domain) right now, where most people think automation will take away creativity but forget instead that creativity is almost drained out of the profession.
𓅬 gabriel_syme 𓅬#3220: What if generative models for music did not replicate but just create new songs that are as good as the ones we have? Isn't that a scary proposition for them?
Teemochu#8740: Watch as the first successful AI is trained off TLMC and thus mashcore and makina become the next big EDM genres (one can only dream)
gwern#1782: I think TLMC is a lot smaller than Jukebox's corpus
gwern#1782: TLMC only has like... 50k tracks? jukebox was more like a million
triggerhappygandi#0001: How did they manage to release jukebox then
triggerhappygandi#0001: Iirc youtube has a thing where you can do some commentary or something on a copyrighted music/video and it counts as fair use.
triggerhappygandi#0001: People could use that workaround
bmk#1476: i think this is only true for certain values of "huge"
bmk#1476: and for hugely huge datasets, the game changes
cfoster0#4356: I dunno. Maybe their lawyers thought they were safe enough. It'd probably be a different story if OpenAI started releasing music on Spotify or whatever based on their models
triggerhappygandi#0001: Ah definitely.
triggerhappygandi#0001: As long as you don't try to cash in, all music should be open sesame right?
triggerhappygandi#0001: Idk how SoundCloud works, but it has many jukebox samples |
cfoster0#4356: An interesting case. Didn't make it to any court, though https://futurism.com/bot-frank-sinatra-britney-spears-youtube-copyright
triggerhappygandi#0001: Shit like this is why creativity gets stifled.
mgostIH#0245: Let's automate the lawyers, put them out of job and then remove copyright from the AI dataset
triggerhappygandi#0001: :bigbrain:
triggerhappygandi#0001: @StellaAthena wen own Summit
EricHallahan#1051: That's just called fair use.
StellaAthena#3530: That’s not a loophole. That’s how fair use works. In fact, that’s one of the things fair use explicitly exists to protect
triggerhappygandi#0001: So all youtube videos are fair game right?
triggerhappygandi#0001: As long as you don't create an API :berk:
EricHallahan#1051: No.
kindiana#1016: its unclear if fair use applies to ai training
triggerhappygandi#0001: Training itself is harmless. Why would anyone complain about it?
triggerhappygandi#0001: Unless I'm trying to monetize
StellaAthena#3530: Because they can
triggerhappygandi#0001: Well shit |
triggerhappygandi#0001: How will a kid become Martin Scorsese like this
EricHallahan#1051: It isn't why, but can.
janus#0150: Who do you think will win in a fight? Our superintelligence? Or Andromeda's?
triggerhappygandi#0001: OURS
triggerhappygandi#0001: I am very species-ist
triggerhappygandi#0001: Not gonna lie
triggerhappygandi#0001: I guess theres a better word for it...
bmk#1476: Paperclips vs thumbtacks
bmk#1476: I am firmly team paperclip
bmk#1476: Clearly the best method for affixing documents together
triggerhappygandi#0001: Who would be misanthrope enough to support Andromeda?
bmk#1476: Fuck the andromeda thumbtacks, all my homies cheer for our paperclips
triggerhappygandi#0001: Thumbtacks are more hazardous
triggerhappygandi#0001: Virgo Supercluster AI vs (uhh name some other similarly sized supercluster)
bmk#1476: Google
triggerhappygandi#0001: lol
bmk#1476: This is a pun on "cluster"
triggerhappygandi#0001: I understood
triggerhappygandi#0001: Are we a part of the Hercules Corona Borealis Great Wall?
Daj#7482: I love human tribalism |
mgostIH#0245: I don't think other galaxies invented anime
triggerhappygandi#0001: Who doesn't
Daj#7482: Get immediately invested in any kind of competition, no matter how stupid
Daj#7482: Beautiful
triggerhappygandi#0001: OOOH AAAHH
Daj#7482: (unironic, it's fun)
triggerhappygandi#0001: MONKE TOGETHER STRONG
andyljones#7746: fun fact: unlike every other kind of object in space, galaxies really aren't that far apart compared to their size https://cdn.discordapp.com/attachments/729741769738158194/816386617010946108/t8k8tsqia3e21.png
mgostIH#0245: Now I am curious what an alien AGI would produce as animation style
Daj#7482: https://www.youtube.com/watch?v=7itZcNs_45w
This is one of my favorite videos in the world
andyljones#7746: in fact if it were brighter, you'd see that andromeda was a couple of times the diameter of the moon https://cdn.discordapp.com/attachments/729741769738158194/816386933495955463/1d61d1d7c6eb372cd5d91558e6125f5b.png
triggerhappygandi#0001: I wish it was
triggerhappygandi#0001: I have legit never seen the Milky Way, due to large population density and never having been to a remote rural area
triggerhappygandi#0001: No killing was done. Disappointed
triggerhappygandi#0001: I am desperately waiting for a supernova close to us
bmk#1476: *monkey's paw curls*
triggerhappygandi#0001: Lol
triggerhappygandi#0001: Sun can't supernova
triggerhappygandi#0001: So |
triggerhappygandi#0001: Fuck you monkey paw
triggerhappygandi#0001: Do your worst
mgostIH#0245: @triggerhappygandi you forgot another critically big object close to us
triggerhappygandi#0001: Which is?
bmk#1476: ~~Your mother?~~
mgostIH#0245: ur mum 😎
triggerhappygandi#0001: Damn
bmk#1476: Ha!
bmk#1476: Beat you to it
triggerhappygandi#0001: Walked right into it
Daj#7482: triggerhappygandi being banished to the fucking Shadow Realm, 2021
triggerhappygandi#0001: How the fuck did I fall for it
mgostIH#0245: KL(mgostIH || bmk) ~ 0
Daj#7482: I think I need to ban you now
Daj#7482: You've been destroyed
triggerhappygandi#0001: :zucc:
triggerhappygandi#0001: I have been banished to the land of bollywood
triggerhappygandi#0001: Thats already very dark. I even live in Mumbai
Daj#7482: Yea that's about as bad
triggerhappygandi#0001: If farts contain methane, has someone tried farting on a cigarette lighter? |
𓅬 gabriel_syme 𓅬#3220: oh man it is an amazing site. I used to see it every night in my island, now live in Kuala Lumpur and I can't even see stars
triggerhappygandi#0001: cities suck
triggerhappygandi#0001: especially because of this
EricHallahan#1051: > oh man it is an amazing site.
I assume you mean *sight*. Blame stupid English.
bmk#1476: Don't forget to cite schmidhuber while you're at it
bmk#1476: Did you know that schmidhuber is a pioneer in RL too? He's spoken at length about the credit assignment problem
𓅬 gabriel_syme 𓅬#3220: yup my bad 😦
𓅬 gabriel_syme 𓅬#3220: but yeh august night sky in the Cyclades can't be beat
𓅬 gabriel_syme 𓅬#3220: well maybe by snakes I guess but that's a contingency you accept
janus#0150: Depends who develops superai. Probably.
janus#0150: ^
janus#0150: This is our chance! Ditch existing alignment concerns and focus on war-like power.
triggerhappygandi#0001: lmao
janus#0150: Do you believe our anime is above average quality? If not, vote for thumbtack!
bmk#1476: Human anime is best anime
mgostIH#0245: 🤔
nz#9710: it do be like that though
triggerhappygandi#0001: How would you know that
Deleted User#0000: I previously asked some questions and got directed towards geometric deep learning; specifically I asked about whether one could have a GAN system for images where the generator creates 3D scenes that get rendered to 2D images, so that it learns the underlying 3D structure of reality; and I got pointed towards pi-GAN which does just that. |
But I was thinking... Do you even need the generative adversarial part? More specifically, there is some work on image probability modelling, like Image GPT, Flow-Based Generative Models, and such. (Admittedly, something like flow networks is probably less practical than the GAN approach. But if people are working on things like Image GPT, that's plausibly pretty practical. Though also from what I understand, even Image GPT is difficult, due to the sheer size of images?)
Deleted User#0000: What I'm thinking is, suppose:
* P is some model of the distribution of images
* S is some rendering function (e.g. something SIREN+NeRF-based, as in pi-GAN)
* S' is S, except rendering the scene from a randomly rotated camera angle
Then you could train S together with some embedding E to have S(E(x)) approximate x and P(S'(E(x))) be high
Essentially what this would do is "extract" whatever geometric knowledge P has, and puts it into S and E.
I'm not sure how well this would work. Part of the reason I'm interested in this stuff is that standard probabilistic models seem to struggle with even having geometric knowledge in the first place. But I think it could work; here's an analogy:
In a way, the geometric knowledge that images are projections of 3D scenes can be seen as a causal concept. That is, it's about the underlying reality that generates your data. In standard machine learning, you just learn a distribution of data, which is just correlational knowledge. And obviously correlation != causation, and that explains why they don't have knowledge of the geometric aspects.
But given correlations + causal assumptions, one can fit a causal model to get some likely accurate parameters for it. That could be understood as basically what pi-GAN does, but it's also what my proposed method would do. The difference is that pi-GAN has to discover the correlations in image structure too, while my proposed method can just extract them from another model. So basically, the way I see it is that this proposal "gives" P the causal knowledge that the images originate from projection, and uses this knowledge to make P into a richer model.
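The objective described above can be sketched in code. Everything here is a toy stand-in: `E`, `S`, and `log_p` are hypothetical placeholders (in practice they would be a learned encoder, a differentiable SIREN/NeRF-style renderer, and a pretrained image-density model such as Image GPT). The sketch only shows how the reconstruction term and the rotated-view likelihood term compose into one loss:

```python
import random

# Toy stand-ins for the proposal's components. A real system would use
# neural networks; these placeholders just make the loss composition concrete.

def E(image):
    # Encoder: image -> scene embedding (hypothetical placeholder).
    return [p * 0.5 for p in image]

def S(scene, angle=0.0):
    # Renderer: scene (+ camera angle) -> image (hypothetical placeholder).
    return [p * 2.0 + angle for p in scene]

def log_p(image):
    # Stand-in for the log-density of the image model P.
    return -sum(p * p for p in image)

def loss(image, beta=1.0):
    scene = E(image)
    recon = S(scene)  # render from the canonical camera angle
    recon_err = sum((a - b) ** 2 for a, b in zip(recon, image))
    angle = random.uniform(-1.0, 1.0)  # random camera rotation (this is S')
    rotated_view = S(scene, angle)
    # Want P(S'(E(x))) high, so minimize the negative log-density.
    return recon_err - beta * log_p(rotated_view)
```

The `beta` weight trading off reconstruction against rotated-view plausibility is an assumption, not something specified in the proposal.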
Deleted User#0000: .... I really should start working on experimenting with some of my ideas myself, but then at the same time, I'm behind on my work on my thesis, so I very much need to catch up on that too :V
I'm so unproductive 🙃
StellaAthena#3530: @Deleted User no it doesn’t have to be a GAN. GANs are often used as a crutch when we don’t understand how to do the whole model though and so are often the first implementation of generative stuff.
Deleted User#0000: I mean so the idea is that this would also drop the generative element of this model
StellaAthena#3530: Wait what
Deleted User#0000: Like it would learn to convert images into 3D scenes, but it wouldn't learn to generate 3D scenes independently of that
EricHallahan#1051: "End-to-end" is used as a crutch way too often too. Happens all the time on generative audio modeling.
Deleted User#0000: P(3D scene|2D image), not P(3D scene) |
while pi-GAN gives you P(3D scene)
EricHallahan#1051: It's just a model to convert from domain to domain?
Deleted User#0000: not that it's a benefit to reduce the features of your system, but I thought that this might possibly simplify things compared to pi-GAN
StellaAthena#3530: That sounds pretty easy?
Deleted User#0000: Yes, but trained solely on 2D images
Deleted User#0000: No 3D data
StellaAthena#3530: Do you have many 2D images of the same scene from different angles?
StellaAthena#3530: You need to have 3D data implicitly in some way or what you’re asking for is impossible
Deleted User#0000: Shouldn't need to be the same scene from different angles; pi-GAN doesn't use that per se
It does use similarish scenes from different angles, but it seems like that is just due to scale
EricHallahan#1051: You are removing the constraint of needing to independently sample, yes?
Deleted User#0000: From how I understand pi-GAN, it used the fact that the overall distribution of images contained scenes from various angles, without actually linking together different angles on the same scene
This should be possible in the more general case
Essentially, what you are using is the constraint that any 3D rotation of your scene should still yield a valid 2D image; this constrains your model to output coherent 3D scenes rather than just a mess
Deleted User#0000: Wdym?
EricHallahan#1051: You always input an image?
Deleted User#0000: Yes
cfoster0#4356: There's some prior that you *should* be able to learn from contiguous cuts from videos. Implicitly there's smooth translation and rotation of the camera and whatnot
cfoster0#4356: I dunno how to formulate the problem learning from 2D images alone
Deleted User#0000: Doesn't pi-GAN already solve the problem of learning from 2D images alone? Or did I misunderstand the paper? |
cfoster0#4356: Eh kind of? They restrict the learning problem to a really narrow distribution. Like centered cat faces
StellaAthena#3530: ^^
StellaAthena#3530: The general problem is very hard
StellaAthena#3530: Thus, GANs
Deleted User#0000: Yeah
My thought would be that the principle should generalize outside of this, but maybe I'm overly optimistic, idk
Like I can definitely see that the restricted distribution would make it much much easier for the model to learn, but... is it completely unrealistic to think that other images still have the information needed to do this, just requiring more training?
I guess people here have more experience with it than me 🤷 But my model of how pi-GAN solved it was essentially through the constraint that 3D rotations of a valid scene must also be a valid scene
Deleted User#0000: So the question I ended up thinking about was, how could one evaluate "valid scene" without GAN-style stuff?
cfoster0#4356: Idk. If you're inferring from a single image, there's always the option of explaining it as a flat colored wall directly in front of the camera
cfoster0#4356: There's gotta be some way to regularize away those possibilities
Deleted User#0000: No, because then P(S'(E(x))) won't be high
cfoster0#4356: Ah I see
Deleted User#0000: Possibly you'd also want some sort of
encode, render rotated, encode again, render rotated back
should yield the original image
So it can't just learn flat colored wall + most plausible scene behind that
But seems like those three should solve it? idk
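The encode / render-rotated / re-encode / render-back loop proposed above amounts to a cycle-consistency loss. A minimal sketch of the wiring, where `encode` and `render` are trivial placeholders (a real system would use a learned image-to-scene encoder and a differentiable renderer):

```python
import numpy as np

# Placeholder "encoder": lifts 2D points to a 3D "scene" at depth 1.
def encode(image):
    return np.concatenate([image, np.ones((image.shape[0], 1))], axis=1)

# Placeholder "renderer": rotates the scene and drops the depth axis.
def render(scene, rotation):
    return (scene @ rotation.T)[:, :2]

def rotation_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def cycle_loss(image, theta):
    """Encode, render rotated, re-encode, render rotated back; compare."""
    scene = encode(image)
    rotated_view = render(scene, rotation_y(theta))
    scene2 = encode(rotated_view)
    restored = render(scene2, rotation_y(-theta))
    return np.mean((restored - image) ** 2)
```

Minimizing this penalizes degenerate explanations like a flat colored wall, since such a scene would not survive the round trip through a rotated viewpoint.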
EricHallahan#1051: Comment: 3D scene reconstruction (photogrammetry) often needs to account for more than just rotation or translation, but also the parameters of the camera, such as focal length, lens distortion, and sensor size. I don't know how relevant this is to what you are trying to do, but I think it is worth considering if it applies.
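To illustrate the point about camera parameters: under a pinhole model, the same 3D point lands on different pixels depending on the focal length and principal point, so a model inferring 3D structure from one image must either estimate or assume these intrinsics. A toy example (ignoring lens distortion):

```python
import numpy as np

def project_pixel(point, focal, cx, cy):
    """Pinhole model: 3D camera-space point -> pixel coordinates."""
    x, y, z = point
    return (focal * x / z + cx, focal * y / z + cy)

p = np.array([0.5, 0.25, 2.0])
# Same point, two focal lengths, different pixel locations:
print(project_pixel(p, focal=500.0, cx=320.0, cy=240.0))  # (445.0, 302.5)
print(project_pixel(p, focal=800.0, cx=320.0, cy=240.0))  # (520.0, 340.0)
```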
Deleted User#0000: I wouldn't know enough about how much of an effect it would have. Possibly this needs to be tested in practice since my proposed method differs from other known methods? Possibly there's some theory or experience that would tell you whether it's important, but then I don't have that theory/experience. 🤷 |
Deleted User#0000: Basically, my proposal can be seen as suggesting that we use a model of the probability density of images to regularize away these possibilities.
cfoster0#4356: I like this idea
cfoster0#4356: I'll need to think more about it though
Deleted User#0000: 👍
My girlfriend is telling me to go to bed too, so I guess I'll cya later
sid#9193: I implemented pi-gan last month. It is absolutely possible to learn solely from (category specific) images
sid#9193: There are two difficulties: one, nerf has trouble with a lot of geometric variation. Pi-gan uses a dataset that has 10k images of 16 car models with different viewpoints and textures
sid#9193: Two, the discriminator has to implicitly match viewpoints. This sort of works but if an object has symmetries then you might start to see them exchange features
sid#9193: So it doesn’t solve the learning from 2D problem because the only reason pi-gan works is the extremely redundant dataset. I tried it on the 3D-r2n2 dataset (renderings of shapenet cars, high variation) and it inevitably collapses
sid#9193: (if anyone is interested in 3D generation feel free to ping me- this is one of my research focuses)
𓅬 gabriel_syme 𓅬#3220: I'll definitely do that in the near future! this discussion was amazing actually, I was just thinking of the 2d to 3d problem yesterday and whether there are opportunities for stitching new ideas together
EricHallahan#1051: Does that mean that it is little more than a fancy demo?
sid#9193: No, it introduces at least one good idea: the film conditioning for siren networks
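For context, FiLM conditioning of a SIREN layer means modulating the frequency and phase of the sine activation per layer: `sin(gamma * (Wx + b) + beta)`, with `gamma`/`beta` produced by a mapping network from a latent code. A minimal numpy sketch of one such layer (the weights and modulations here are just randomly sampled to show the shapes, not a trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

def film_siren_layer(x, W, b, gamma, beta):
    """One FiLM-conditioned SIREN layer: sin(gamma * (Wx + b) + beta)."""
    return np.sin(gamma * (x @ W.T + b) + beta)

dim_in, dim_hidden = 3, 16           # e.g. an xyz coordinate input
x = rng.normal(size=(8, dim_in))     # batch of 3D sample points
W = rng.normal(size=(dim_hidden, dim_in)) / dim_in
b = np.zeros(dim_hidden)
gamma = rng.normal(size=dim_hidden)  # frequency modulation (from latent z)
beta = rng.normal(size=dim_hidden)   # phase shift (from latent z)

h = film_siren_layer(x, W, b, gamma, beta)
assert h.shape == (8, dim_hidden)
```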
sid#9193: I think this line of research is probably a dead end though. The fully-connected generator simply does not have the capacity to represent a wide variety of shapes
sid#9193: Increasing the capacity slows the volume rendering process so that won’t work well
cfoster0#4356: Where would you place your bet, if you had to?
sid#9193: Some sort of compositional approach without explicit geometry representations
𓅬 gabriel_syme 𓅬#3220: is that..like CIPS?
sid#9193: Nerf sucks (relatively) because it uses global features
𓅬 gabriel_syme 𓅬#3220: *hides* |