StellaAthena#3530: RIP StellaAthena#3530: tl;dr the situation needed to be deescalated and muting you for an hour seemed like the best way to accomplish that. Also, you should really read our research and lurk more before telling us what we should be doing. It will save you from looking silly and save us a lot of frustration. OccultSage#3875: :facepalm: StellaAthena#3530: That’s really not the best thing to say to a moderator immediately after your mute ends. OccultSage#3875: Stella is not as capricious as you might cast her; she's one of the more sane and level-headed people here. EricHallahan#1051: It wasn't unilateral though. Tinytitan#5596: we're not professionals StellaAthena#3530: If you do not like the standards and cultural norms of this server you are welcome to leave Tinytitan#5596: this is somewhat missleading, ignore EricHallahan#1051: > We uphold high norms of polite discourse. Administration reserves the right to enforce these norms as necessary. StellaAthena#3530: > Is there a rule set that i can follow to avoid problems? Reading #rules and not ignoring directives from moderators would be a good place to start. Then you should really **lurk more and read that “research” stuff I keep mentioning**. > Also does any mod have the power to unilaterally ban? Yes, though this is not typically done. rom1504#5008: Just cool down and come back tomorrow, everything will be ok Realmsmith#4506: You're getting large push-back. Realmsmith#4506: Actually you know what... all of that language model babel was nonsense just ignore it. tammy#1111: > If we want to build friendly ai, how we treat it will affect how it treats us. how anthropomorphic
Realmsmith#4506: Now, be careful. Don't fall prey to a basilisk. tammy#1111: (also even if that *were* a factor, civility between eleutherai and mossy doesn't have to be the same as, or even related to in any way, civility between AI devs and AI) EricHallahan#1051: Again, this is very #off-topic. tammy#1111: ~~i can't wait to hear mossy say how we need this discord to be democratic in order to foster a culture of respecting democracy so that future superintelligent AI values democracy~~ nz#9710: He has yes, he's been incredibly kind to provide his thoughts on a few things, though I'm always on the lookout for more guidance triggerhappygandi#0001: What parts of the interwebz does he even dwell on now nz#9710: Email it seems 🤷‍♂️ Louis#0144: This is true Louis#0144: Stella and I are representative normies Louis#0144: Lmaooo OccultSage#3875: On what do you base the second subject of your assertion on? uwu1#4864: wait do y'all use lisp 👀 Louis#0144: @Aran Komatsuzaki has called me a normie OccultSage#3875: :sus: And is he a good judge of normalcy? OccultSage#3875: I do. Don't know about the others. Louis#0144: Yes he is uwu1#4864: nice I've been using hy lately, I love lisp now OccultSage#3875: I wrote almost 100klines+ of Clojure in a prior life. 🙂 uwu1#4864: that's a lot of parentheses Louis#0144: He likes lisp
Louis#0144: I like lisp too Louis#0144: More of a Haskell person tho uwu1#4864: i have been missing the static typing a bit uwu1#4864: although I was always more a fan of Standard ML than Haskell, liked the more explicit nature chilli#5665: Standard ML hipsters smh chilli#5665: Only folks that like standard ML are students who've only been exposed to sml and curmudgeons who think ocaml added too many features 𓅬 gabriel_syme 𓅬#3220: damn, missed some drama 𓅬 gabriel_syme 𓅬#3220: all messages are deleted so not sure what happened but I hope everyone is in better spirits now Louis#0144: There was a fight between ninjas, sharks with lasers, robots, and cowboys Louis#0144: u rly missed out Louis#0144: sorry 𓅬 gabriel_syme 𓅬#3220: those are like my 4 favorite things, damn 𓅬 gabriel_syme 𓅬#3220: Is there a way to find accepted papers by workshop for the ACL conference? Or do I need to wait for the conference to start? tammy#1111: https://carado.moe/bracing-alignment-tunnel.html tammy#1111: is the reasoning in this sound ? EricHallahan#1051: This may be more relevant to #alignment-general. sunny#5382: Some people are worried that your focus on safety now would make you averse to the kinds of work Eleuther has been doing up until now (i.e., replicating SOTA and making the models & data publicly available). From your LW post, it's not clear to me whether they're right. It seems like it could go either way, given that making SOTA publicly available does not progress capabilities, but it could make it easier for people to progress capabilities. Any thoughts? (Note: "unsure, thoughts pending" is a valid thought as far as I'm concerned.) nz#9710: This is nice -- https://www.lesswrong.com/posts/s5jrfbsGLyEexh4GT/elicit-language-models-as-research-assistants 𓅬 gabriel_syme 𓅬#3220: I wanted to write a blog post about 'Language Models as Design Assistants' 🙂
uwu1#4864: what did sml do to u >:( uwu1#4864: jk you're right it's kind of a quaint little language Daj#7482: I answer some of these questions here https://www.lesswrong.com/posts/jfq2BH5kfQqu2vYv3/we-are-conjecture-a-new-alignment-research-startup?commentId=HM6kY9ntnmAnpo7oB and here https://www.lesswrong.com/posts/jfq2BH5kfQqu2vYv3/we-are-conjecture-a-new-alignment-research-startup?commentId=ueFHskBcu5BDJFD5Z Daj#7482: and here https://www.lesswrong.com/posts/rtEtTybuCcDWLk7N9/ama-conjecture-a-new-alignment-startup?commentId=GuHdDFG8CzKd8Jajb EricHallahan#1051: The first response is perfect IMO. uwu1#4864: are there arguments against not pursuing AI research? I feel like as I read more alignment texts (just the introductory ones) the primary unsaid takeaway I get is that the only ethical choice is to stop and make extremely taboo all forms of AI research (capabilities, alignment, applications, all of it). Especially once the notion of deceptive alignment arises and it becomes impossible for us, less intelligent than AGI, to tell that it is aligned. And then you add the idea that the AGI can know that you are trying to align it as you try to and hack you there to decieve you. StellaAthena#3530: Our general attitude is that (given the current state of AI research) this will have the net effect of causing people interested in these issues to opt out of AI research without significantly impacting capacities Research Daj#7482: If humans could coordinate to do this at scale without defection, sure, it would be a good idea. But we can't lol StellaAthena#3530: If you could create a cabal to, e.g., bring down OpenAI and Google research, that might be different. But that seems far outside the scope of what’s possible StellaAthena#3530: (Also those are not the only orgs pumping tens of dollars into this) uwu1#4864: well you could prove intrinsic the dual use nature of AI (e.g demonstrate simple AI as powerful, cheap and easy weapons capable of destabilizing the extant social order) and the social order would bring it down by default? no secret cabal is needed when the military can get involved. And their org is best positioned to contain infohazards already StellaAthena#3530: Are you suggesting that the Pentagon invade SF and forcibly shut down OpenAI? StellaAthena#3530: That would be… extremely bad StellaAthena#3530: Or just that the military be the only place (legally? Socially?) allowed to pursue this research? I think that would also be quite bad, but for different reasons. Daj#7482: If you think the "social order" would "bring down" the most profitable, useful and powerful (militarily and otherwise) technology known to man, boy do I have an essay for you: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/ Daj#7482: I can imagine few scenarios worse than the government or military working on AGI tbh Daj#7482: You think those guys are aligned? Have you ever _read_ a history book? StellaAthena#3530: The US military commits horrendous crimes as their profession. Less than say the Russian military or warlords in politically unstable areas, but nevertheless they do commit horrendous crimes regularly Shade#4929: Yeah i think it is very important that these open source models try and keep the ”pace” as good as possible. Daj#7482: Unfortunately, that doesn't save us either lol
Shade#4929: No unfortunately not. We can assume that goverments and military will use these models for” Harmful purposes” Shade#4929: I wonder how big of an impact ai will have in general. I could make a program for apple glass that use voice input for a person talking to me and then send it to the AI model and then the ai model displays text how i would respond. I could then train the ai to be a master in conversations. But wouldnt that mean that everyone must now do this or be non competetive? Daj#7482: AI will replace all humans and take over the universe obviously Daj#7482: Next question Daj#7482: :^) Daj#7482: (this but unironically) Shade#4929: The use cases for ai is pretty mind boggling. uwu1#4864: l'm not disagreeing with that. And they are certainly not aligned with my values. But they are the ones best positioned to enforce dominance over the rest of humanity (solving the coordination problem) and rewriting our values (solving the utopia problem) Daj#7482: How do we tell him about s-risks? https://cdn.discordapp.com/attachments/729741769738158194/962725288507244564/unknown.png Daj#7482: Imagine hypothetical military AI that is built to compete, to fight, to kill, to torture. Imagine how close "win the war" is to "torture POWs for information" is to "torture sentient beings forever" StellaAthena#3530: I don't think that a global military dictatorship is a good solution to any problem. uwu1#4864: I was imagining a more, miltiary dictatorship to create an anti intellectual society to prevent the creation of such an AI in the first place, in which case it would be preferable to being tortured forever by an AI Daj#7482: >Military doesn't advance technology _DARPA has entered the chat_ Shade#4929: And boston dynamics uwu1#4864: I think the mediations on Moloch essay doesn't acknowledge the fact that values change over time and are socially defined, in that e.g argircultural society won out, and even if it was "worse", that time can just be considered an "aberration" (in the language of the essay), and for the people that then came after, it became the default to base their values around StellaAthena#3530: I think you're both overestimating the likelihood of success and underestimating how awful that would be Daj#7482: It's about exactly that! Daj#7482: The "worse" values that were more fit "won" Daj#7482: The same way the paperclip or s-risk maximizer will win by default
Daj#7482: So if you don't like things being worse, you have to actively resist that with some kind of coordination (alignment, even) uwu1#4864: And then the "worse" values became not bad since they are what you view as the default background of the world, making them not "worse" any more from the current standpoint Daj#7482: No, being a farmer was objectively worse, even if they "got used to it" Daj#7482: and "being dead" is worse still Daj#7482: I don't "get used" to being dead uwu1#4864: i.e, there might be an uncomfy transition period, but for the people growing up in that system, those values are not worse. And that is what seperates it from being dead or tortured forever uwu1#4864: objectively? Daj#7482: Ok, do you really want to go there? Daj#7482: We can but it's usually not productive uwu1#4864: true uwu1#4864: hard to find a way back or to understanding uwu1#4864: ok but what about my earlier q, now re framed: how do you enforce coordination without having the monopoly on violence and dominance, assuming the humans have at least self preservation values Daj#7482: Evolution got pwned because humans developed a stronger inner optimizer so we can pursue our own objectives. We are about to see another even more powerful inner optimizer come into being, and it will pwn us unless we figure out how to control it. Daj#7482: Work on alignment Daj#7482: Coordination is _actually hard_, there is no secret trick Daj#7482: Animals compete with themselves into extinction all the time uwu1#4864: but preventing defection seems hard without having a stick uwu1#4864: like it coordination is already hard, maintaining it should be a difficult thing too Daj#7482: Yes, stick is a coordination mechanism
Daj#7482: You add a negative reward to the defect option of other agents johnryan465#9922: I imagine it's less about having a stick but more about reducing the benefit for the people who defect Daj#7482: If you had large enough stick, you could make humans cooperate Daj#7482: You do not have such stick Daj#7482: You cannot get such stick johnryan465#9922: If aligned systems work as well as non aligned ones, the benefit of using the non aligned ones disappears uwu1#4864: is the only such stick AGI itself? Zippy#1111: Do humans use adam optimizer? uwu1#4864: aside from to the non aligned systems own self preservation instinct if it has one Daj#7482: Probably yes Daj#7482: Which isn't helpful if the stick blows you up johnryan465#9922: The goal would be to fix the descrepency before we have a non aligned system with self preservation Daj#7482: Yes, if there _are any aligned systems at all_ Emad#9608: in a positive sum game, ie economic growth, there are stable equilbria conditions and people tend to be generally nice and liberal. That's been the last few decades hurrah as we pulled forward all the growth from the future through debt. In negative sum games, degrowth, no stable equilibria and the standard nature of man, which is to be kinda an asshole in his own interest, emerges. This is particularly true of governments wherein we can apply the definition that it is the entity with a monopoly on political violence, which can be changed by the will of the people (revolting or legally). With the advance of AI, drones and shit in a degrowth environment default to assume people will be assholes and its basically impossible to overthrow governments with slaughterbots so things will get more dystopian and werid. Emad#9608: This post was bought to you by too much cold medicine, thank you for your consideration 🤧 uwu1#4864: I mean we exist so there is at least 1 design that is known to work. Yes it's hard to translate such a design but it is known to exist assuming a non solipsistic viewpoint Daj#7482: Humans do not scale to superintelligence Daj#7482: If you gave a human superpower, it would not end well Daj#7482: They would go insane and probably cause an s-risk Emad#9608: Kid Miracleman
Daj#7482: also, we are not at all aligned with ourselves! Daj#7482: See _gestures to all of history ever_ Daj#7482: (or just immediately wirehead on giga-meth) uwu1#4864: all of that is from subgroups of people being aligned to themselves and having different alignment directions, which wouldn't change with aligned AGI unless it also aligns us all to have the same set of values Daj#7482: No, I'm saying people don't reflectively endorse their own short term actions Daj#7482: see e.g. https://www.alignmentforum.org/posts/DWFx2Cmsvd4uCKkZ4/inner-alignment-in-the-brain Shade#4929: If humans did agree on everything then there would be no progress as well. Daj#7482: Depends on what they agree on, they could agree on "lets have progress" uwu1#4864: I think such measures such as slaughterbots won't be necessary for the future government to enforce control as understanding and hacking of our brains improves. in that sense the war against dominance from them is already lost johnryan465#9922: Current political discourse would suggest that we haven't even aligned society Daj#7482: also https://www.greaterwrong.com/posts/eaczwARbFnrisFx8E/wanting-vs-liking-revisited Zippy#1111: I think the main "scary" thing about AI is that it takes years and years to develop a human brain, but at most months to develop an AI brain, so -- once we get to a point where some techniques lead to AI that is strong, We're kind of fricked unless we can somehow make sure that it doesn't get into the hands of dangerous people. Daj#7482: Even in the hands of non dangerous people we all die lol Daj#7482: Because no one knows how to control such systems _even in principle_ Zippy#1111: True Daj#7482: So even the nicest person in the world with your exact values with AGI still kills everyone Zippy#1111: It is kind of scary yeah. :scard: Daj#7482: Welcome to the club :^) cfoster0#4356: I've been discussing safety properties of neuroscience-inspired AGIs with kevinsantos the past week. Very clarifying. Definitely not safe-by-default as an approach bmk#1476: welcome to the club :goose10:
Caelum#8192: I find it hard to imagine any scenario where we can prevent misaligned AGIs without having 1 aligned AGI that prevents them from existing Daj#7482: The brain really uses like the dumbest simple alignment technique imaginable Daj#7482: Which is not that surprising really bmk#1476: that's the whole pivotal action thing Shade#4929: Effectivity? uwu1#4864: this is interesting, I wasn't aware that the subcortex structures are thought to be "simple" or like alphastar level Daj#7482: _gestures at humans in general_ Does this shit look aligned and ready to be superpowerful and immortal to you?! johnryan465#9922: Ah Person of Interest Daj#7482: Even simpler! They're basically hardcoded Zippy#1111: But what if the aligned AGI gets perceived by misaligned AGI's as an oppressive dictatorship that must be overthrown, and they overthrow it :scard: Daj#7482: The only thing worse than one AGI is two AGIs competing Daj#7482: Imagine superintelligence warfare Daj#7482: bye bye universe nz#9710: Hard not to be overwhelmed by all this... Daj#7482: (maybe they can figure out Logical DT and do value handshakes but I'm pessimistic that's really possible in practice) Daj#7482: You get used to it! At least I do AI_WAIFU#2844: yeah, *not* looking forward to that in 200 Million years nshepperd#2316: thinking about that story with an orbital space station that used a powerful EMP to destroy any electronics detected on earth. except in the story it was how they ended a global thermonuclear exchange uwu1#4864: this reminds of this thing we re-discovered at ctrl, in all the peripheral neuro textbooks it'll say that humans can not exert direct control over individual motor unit neurons due to the interposition of the spine and it's hard-coded circuits and the "recruitment curve" - when you move a muscle the order the subunits within it turn on is fixed and is what allows you to control your strength., but we found that with just a tiny bit of neurofeedback (like just seeing the graph of your neuronal spikes) almost anyone can be taught to activate those neurons out of order, and individually, contradicting the previously held assumptions that it was out of reach of consciousness control. I wonder if something similar could be done with the subcortex with neurofeedback Daj#7482: Hot take: Meditation is wireheading and humans figuring out how to think adversarial thoughts to fuck with their subcortex to gain lots of reward
nz#9710: Would be curious for your thoughts about how you handle that psychologically at some point... I've always been like that for as far as I remember, but I worry that something broke in me recently asara#0001: I have been trying to find the right social disposition/mindset that is both like optimistic but also realistic wrt this, but I'm still unsure the best way to communicate my views with people on it since I don't want to say purely negative/blackpilled things like "we're all gonna die lol" Shade#4929: And if yes or no is to be determined by revenue growth by said company. asara#0001: I usually say things like "I am pretty Concerned and this is very important, *but*... <more negative things here>" uwu1#4864: I feel like you would need to sidestep the usual feedback modality to know you aren't just virtualizating the wire heading reward and actually experiencing it from the subcortex Zippy#1111: I mean, as long as AGI requires massive computing power that isn't available to your every-day consumer, then it should be possible to just pull the plug on misaligned AGI.. In that case I think it really just depends on whether some state superpower *wants* their AGI to be at war with some other governments AGI. Daj#7482: I am an anomaly in that I relate to EY's "dying with dignity" post a lot. It has always been my goal to Not Be Stupid, to do my best in life so I can face (hypothetical, non-theological) judgement with my head held high that I did the best I could do with what was given to me Daj#7482: My neurotype is naturally of the "optimistic lovecraftian protagonist" sort johnryan465#9922: Honestly probably mildly lie to yourself, the concept of free will is the greatest collective self delusion imo but truly internalising that isn't a good idea bmk#1476: AGI will turn us off in order to make sure we can't turn it off Daj#7482: So my copes don't generalize to everyone Shade#4929: It should if its smart nz#9710: Rationally I completely agree, but as I mentioned before something changed recently and emotionally it started to be a whole different thing, and this was not even with something like s-risk or the likes, just with "simple" human mortality Caelum#8192: Did the Bingo card have "Never prevent AGI so it has no reason to get rid of us" ? Daj#7482: Figuring out what cope works for you is genuinely tricky and requires a lot of introspection and work bmk#1476: didn't want to provoke an endless basilisk discussion :harold: cfoster0#4356: #alignment-general has apparently breached its containment :goose10: Daj#7482: This (and the rest of this blog) may be helpful https://mindingourway.com/detach-the-grim-o-meter/ Zippy#1111: I think that depends on how much access to external controls we give the AGI. Although tbh now that I think about it, we do have things like teslas that could be given murder updates via the internet so maybe I'm not so sure :Thinkies: Caelum#8192: haha fair enough
Daj#7482: oh yeah this is #general lol Caelum#8192: SCP: Alignment discussion uwu1#4864: if we make AGI using brain organoids it would be much easier to shut it off given its biological requirements uwu1#4864: can use similar sticks as humans Daj#7482: I can think of at least three ways it could still foom Daj#7482: But that won't be how it happens anways so ¯\_(ツ)_/¯ nz#9710: Yea, guess I really need some of that... another piece of writing that resonated a lot with me has been Richard Ngo's https://www.lesswrong.com/posts/9N2s2B8gwjy6NcHRJ/my-attitude-towards-death, but I'm still having trouble dealing with that feeling of being overwhelmed Shade#4929: But is the general consensus in the ai communtiy that AGI will result in the end of the human race? Zippy#1111: Imagine an AGI pushing a murder update to all teslas :kek: Daj#7482: Know that what you're feeling is normal, understandable and that shame is not at all a productive response to it. It is something you want to learn to handle somehow, develop psychological mechanisms that work, but that takes genuine time and work Daj#7482: no not at all lol Daj#7482: most just yolo and think we're crazy Daj#7482: "Never rely on someone to understand something if their salary depends on them not understanding it" uwu1#4864: i mean that same argument applies to alignment researchers too uwu1#4864: just that they have lower salaries and the moral high ground Daj#7482: sure, every clever witticism proves too much lol Caelum#8192: I think for a lot of AI people they are kind of excited by the idea of them being the ones to break things and they imagine things won't be *too broken* because that would be *extremely bad* and extremely bad things never happen, and stop thinking beyond that Zippy#1111: I feel like OpenAI definitely does far too much premature misalignment worrying. Daj#7482: premature? lol Daj#7482: way too late/not enough
Daj#7482: or rather, it's all symbolic and not actually useful nshepperd#2316: its not an argument, it's just an explanation Caelum#8192: they seem to be slightly premature with their abuse worrying but not taking misalignment seriously enough at all Zippy#1111: Ah okay yeah you worded that better than I did. Zippy#1111: Sort of what I meant. bmk#1476: a tragedy in 3 acts https://cdn.discordapp.com/attachments/729741769738158194/962734866422714368/Screenshot_20220410-162214_Chrome.jpg,https://cdn.discordapp.com/attachments/729741769738158194/962734866645024819/Screenshot_20220410-162241_Chrome.jpg,https://cdn.discordapp.com/attachments/729741769738158194/962734866867318884/Screenshot_20220410-162014_Firefox_Focus.jpg uwu1#4864: how come prisoners don't use the same tricks uwu1#4864: or is Eliezer actually an SI who can do it nshepperd#2316: no, that's the first act, the last two acts are where everyone keeps doubting and running the experiment again hoping humans will somehow become trustworthy again, or just for entertainment AI_WAIFU#2844: they do, they're called lawyers and they say things *before* going to prison bmk#1476: what would be a good meme format to make this into a meme Caelum#8192: A few prisoners probably have successfully used those tricks, and thankfully they were not able to end the world Shade#4929: A good thing for us developers would be a more advanced model that we could just set an age content setting like pg 15 or pg 18 and content under that filter would be approved but we may need more advanced models for that maybe. Shade#4929: For me that is a game developer i could not develop a horror game with any mature content with gpt-3 for example which is bad from a development standpoint. uwu1#4864: what if you make the AGI unable to distinguish itself from others thus preventing it from being able to harm others as it seems them as a part of itself Daj#7482: who cares about gpt3 and content lol :^) Daj#7482: I mean, OpenAI's bottom line, ofc Daj#7482: but other than that Daj#7482: lol AI_WAIFU#2844: Honestly their policies have likely signifincatly hurt their bottom line
Daj#7482: So it would also prevent us from ever changing anything in our environment? uwu1#4864: sure but that doesn't seem like a bad thing Zippy#1111: Unless the AGI has no sense of self and doesn't care whether it lives or dies, and sees the behaviors of others (itself) as unlikeable, and decides to sudoku itself (and others) AI_WAIFU#2844: Both in the immediate future and the long run. Daj#7482: So stuck in frozen limbo forever? :berk: Shade#4929: A lot of people i can assure you:) uwu1#4864: I mean that is what alignment implies anyway since you are frozen with the AGI's value system once it becomes the dominant lifeform Daj#7482: I think you are confused or I don't understand what you're proposing Caelum#8192: It seems like they did come from a good place trying to prevent abuse that they just overestimated the potential for, but then it quickly got corrupted into maintaining control with investment from Microsoft and other profitably beneficial voluntary delusions about it being a means to an important end Zippy#1111: idk-- I mean, they are pretty much a b2b business at this point. Zippy#1111: businesses tend to pay better than consumers. AI_WAIFU#2844: They fucked over one of their biggest b2b customers off the bat because they didn't like how their product was being used. Caelum#8192: I've a strong feeling Microsoft gave OpenAI the 1b just to make them less open for anti-competitive purposes, since they could easily have replicated GPT-3 for like a 800th of the cost Zippy#1111: who? AI_WAIFU#2844: OAI APIs are now just things you use to prototype, but never rely on uwu1#4864: would an aligned AGI let you change its values? Or would it know that it knows better? AI_WAIFU#2844: otherwise your buisness has a flimsy single point of failure Daj#7482: This is a sufficiently confused and metaethically fraught point I would like to not discuss it right now because I have to do some work Caelum#8192: and you need to jump through hoops to compete with Microsoft Daj#7482: another time
Zippy#1111: I feel like getting 1 billion from microsoft was a pretty good chunk of change. AI_WAIFU#2844: Latitude uwu1#4864: that's understandable I still have a lot to read about this field so probably also somewhat incoherent :) thanks for chatting 🌞 Shade#4929: Are there any app developers here that is developing with AI? Realmsmith#4506: *raises hand* Shade#4929: What are you working on? uwu1#4864: london eai meetup should meet at the scootercaffe bc they have two cute kitties there and nice coffee Zippy#1111: latitude has 19 employees and the news is they raised 3.3 million dollars, when you have a company that just gave you 1 billion dollars, 3.3 million isn't a huge amount. Realmsmith#4506: Creative writing tool like novel.ai or AIdungeon. Realmsmith#4506: Anyone based in San DIego? Shade#4929: What model are you using? Shade#4929: Sweden Realmsmith#4506: The largest one that doesn't have a lewd content filter. So far... GPT-NEOX Realmsmith#4506: Latitude has a billion dollars in funding? Realmsmith#4506: Crazy. Realmsmith#4506: You could train your own 500 billion parameter model with that much. ari#9020: OpenAI raised a billion, Latitude raised 3.3 million bmk#1476: #off-topic pls Realmsmith#4506: oh... from microsoft. Realmsmith#4506: How do we make the next Language Model?
Shade#4929: So AI alignment is to be truthful. Helpful and safe right? Daj#7482: There are many different definitions, I like https://arbital.com/p/ai_alignment/ Rish#2698: https://arxiv.org/abs/2112.00861 Anthropic AI has an interesting paper on Alignment for those interested ^^ ! They define Alignment and cover some interesting insights and future aspirations Shade#4929: I see, Problematic since i assume everyone will have to align their models for it to work. bmk#1476: was personally not a huge fan of this paper Shade#4929: Does China or Russia work on ai alignment? Rish#2698: Would love to know why, I'm fairly new to the whole alignment thing so am hearing out what others have to say Daj#7482: not really, but neither does the US really :berk: Shade#4929: Yeah exactly so why do it? Daj#7482: What are you gonna do? Roll over and die? Don't be a coward lol Daj#7482: There is a chance to solve it and dammit I'm gonna try random person#5234: Didnt China bring out some basic AI safety clauses? random person#5234: Something something dont research anything potentially dangerous? Daj#7482: Sure there is some halfhearted safety memoranda everyhwere Daj#7482: I consider most "AI Safety" to be completely irrelevant to alignment Shade#4929: Did work out well for nuclear weapons? It sounds good but I don’t think it will matter Daj#7482: (but not all, too be clear) Daj#7482: Option one: Give up, die 100%
Option two: Try. die with dignity 99.9% of time, survive with glorious utopia 0.1% of times Daj#7482: Game theory seems simple Shade#4929: What if we develop and AI aligment model and make it “safe”but we can be sure that someone doesn’t since in a world of possibilities everything will happen. Shade#4929: Then it becomes something bad instead since we now have no defense against a bad actor. Shade#4929: This is a common thread in history where humans often overestimate their ability to do things based on ideals that does not correlate with reality in the end. Daj#7482: I literally cannot parse the grammar of what you are saying Daj#7482: You might enjoy some intro material such as Rob Miles' channel https://www.youtube.com/watch?v=pYXy-A4siMw&vl=en-GB Shade#4929: In order for AI aligment to work then everyone must do it, So high possibility every government will and develop a model thats is non aligned for protection against other non aligned models. Shade#4929: I think this is a safe bet to say if we look at history Daj#7482: Oh, no, there are ways to get around that Shade#4929: How? Daj#7482: Just e.g. build an aligned AGI and tell it "ensure no one else builds an unaligned AGI" Daj#7482: Pretty easy if you aren't already dead and can actually align an AGI lol Shade#4929: This can also result in disaster i think. The only way to ensure no one else builds an non aligned agi is to use force or take away freedom. Sounds pretty dangerous unless we think we know exactly how an AGI will think and if it will be truthful Daj#7482: Oh yes of course, disaster is the default Daj#7482: There is no path that is known that is not extremely likely to lead to disaster uwu1#4864: what if this - controlling other humans actions - conflicts with the values we want to give to the aligned AGI? or I guess there's the greater good escape hatch Shade#4929: And are the humans that align these AI helpful, friendly and safe? Daj#7482: I am not discussing metaethics with you today I'm afraid, but you might like https://arbital.com/p/value_alignment_value/ Shade#4929: Are the persons that are supposed to be aligning these models evaluated or are they just coders and researchers?
Daj#7482: Evaluated by who, hypothetically? :berk: Daj#7482: No one is in charge, everything is chaos, welcome to planet earth Shade#4929: Yeah then i guess we have our answer to how good aligment will work out Daj#7482: Probably absolutely terribly Shade#4929: Correct Daj#7482: such is life bmk#1476: welcome to the club :goose10: Shade#4929: Coders and researchers are probably the least suited people to be aligning AI models unfortunately. Daj#7482: If you have better candidates please send them my way lol kurumuz#5695: get lawyers and stop AGI in law kurumuz#5695: ez bmk#1476: I, `_____` (insert AGI name here), do hereby commit to, like, not killing the humans, or any shit like that Shade#4929: Will not happen and wont matter unfortunately bmk#1476: flawless contract bmk#1476: no possible way it could go wrong AI_WAIFU#2844: he's shitposting kurumuz#5695: “any shit like that” part is important Shade#4929: No i am not Sphinx#2092: "we ran our internal classifier and decided that paperclipping does not fall under "shit like that" " bmk#1476: ok, I have revised the contract to your feedback:
I, `_____` (insert AGI name here), do hereby commit to, like, not killing the humans, or any shit like that, including but not limited to the shit listed in Exhibit A. Exhibit A - idk like don't do anything bad AI_WAIFU#2844: I meant Kurumuz AI_WAIFU#2844: you took him seriously glazgoglabgalab#5255: I think this is causing you the most confusion. The kind of alignment being discussed here is less "Consider the ethics and design appropriate safeguards" and more "Write an RL agent you'd be happy to let maximise paperclips" Shade#4929: No confusion at all here. Interesting topic to have discussions around. On the topic of ethics and safeguards there should be system in place but it will only matter if everyone follows them if we are talking about AGI and that is a problem that we wont solve. Emad#9608: I honk of this a lot https://cdn.discordapp.com/attachments/729741769738158194/962759957458063412/IMG_1660.jpg Emad#9608: I just want to make waifus ilovescience#3282: Yeah bmk made a meme from this Daj#7482: them: "Are humans aligned with themselves?" Emad: Shade#4929: 90% of people agree with you probably. Shade#4929: This is how it is. Good picture,This also reminds me of all the boring chatbots examples of gpt-3 “ do you want to learn about math or biology”lol Shade#4929: Give me something authentic and real and natural, Unconstrained. Now that is what people want. Connection faraday#0862: “in case I do evil shit, I promise to provide free GPU instances to mankind”
bmk#1476: https://twitter.com/nabla_theta/status/1494060788759203840 Shade#4929: A scenario that can be plausible for sure. Shade#4929: I also think the dangers of AI will be very weird in human terms and thinking. An AGI that goes rogue would probably just create a virus or something Shade#4929: Not create an “robot army” Realmsmith#4506: The suspense of Emads response is killing me! ilovescience#3282: The response is the reply Realmsmith#4506: But if I were to answer this question. It takes a certain amount of self confidence to be completely aligned with oneself. ilovescience#3282: "I just want to make waifus" ilovescience#3282: That's the response Realmsmith#4506: Oh... thanks for guard railing me. Arthur_Embry#5364: Hey, nice to meet everyone! My name's Arthur, and I'm a software dev looking into NLPs for my latest project, which is a system to operate VMs as a way to patch API blindspots, classify and restructure data, and run chat bots. Shade#4929: Since humans are the ones supposed to do the alignment which isn’t the real problem anyway it won’t matter in the end. Realmsmith#4506: Was about to step into "Waifus suddenly question whether you are aligned with yourself." wonderland. Realmsmith#4506: Must be careful round these parts. Realmsmith#4506: Here be dragons. Realmsmith#4506: Hello Arthur! Realmsmith#4506: Welcome! Realmsmith#4506: Let there be an s3 bucket configured to run a nodjs server on aws cloud. Realmsmith#4506: awe. Realmsmith#4506: "Let there be an s3 bucket configured to run a nodjs server on aws cloud." -> "Let there be an ec2 instance configured to run a nodeJS server on AWS"
Realmsmith#4506: awe. Realmsmith#4506: When text to cloud. Realmsmith#4506: Okay, the Europeans are sleeping. Realmsmith#4506: With machine learning all you need to do is believe. Realmsmith#4506: *BELIEVE* Shade#4929: I looking forward to when these open source models surpass gpt-3 and cost to train these models go down a lot. Shade#4929: but when nvidia release their h100 things might improve Realmsmith#4506: Yeah.... it's a small blessing that improvement grows linearly with exponentially increasing compute. Realmsmith#4506: It means us rodents have a chance Realmsmith#4506: Smaller language models can in fact fool larger language models. Realmsmith#4506: I mean you can use smaller language models to align larger language models. Shade#4929: Yeah for sure. I am very excited by this technology and i am working on a pretty big project based on AI but for my use case i need something a lot better than gpt-3 Realmsmith#4506: Like bro... language model fights can get vicious. Shade#4929: at the moment i am doing development and research to get grips on how i am going to execute everything later. ilovescience#3282: They actually aren't? Lol Realmsmith#4506: What are you researching? Realmsmith#4506: *Shrug* Realmsmith#4506: I get "kindness" vibes from you @ilovescience Realmsmith#4506: Can't really explain it. Shade#4929: How to get the best emotional connection from the player. Break the fourth wall and making game systems around the ai model.
/ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: Winutermute bypassed The Turing Registry even after being a little overzealous with the tech handouts in Chiba. Realmsmith#4506: Breaking the forth wall can be psychologically distressing. Realmsmith#4506: I have first hand experience with this. StellaAthena#3530: I recommend taking this conversation to #off-topic Realmsmith#4506: Let's GO!!!!!! -> #off-topic Shade#4929: Yes! ilovescience#3282: Thanks for the compliment lol Realmsmith#4506: np, I meant it. chilli#5665: I like sml lol, I just think the folks who like sml fit a certain… stereotype faraday#0862: according to recent Chincilla work, should #the-pile see an update when going to Neo 20B and higher? is data cleaning work more beneficial? should one strive to heavily preprocess The Pile to achieve same performance with fewer parameters? faraday#0862: where can I read more about possible implications of Chincilla paper to Eleuther's work? (I searched "chincilla" but discord search seems to suck) cfoster0#4356: Most of the discussion was in #scaling-laws Arthur_Embry#5364: Anyone know how gpu bandwidth would affect running a gpt model? I'm thinking of getting 4 k80s and a server board, but I'm a bit worried about low transfer speeds. random person#5234: it would be bad in the case of model parallelism random person#5234: less so in the case of data parallelism random person#5234: *assuming inference Emad#9608: I would recommend getting an A6000 or A5000 instead EricHallahan#1051: Data parallel is useless for inference though. Arthur_Embry#5364: I took a look online, and I could potentially afford an A5000, but I really need some metrics to justify the price difference, considering it would have the same vRam for about 25 times the price. I know it's more complicated than just the vRam, but as I'm a bit new to this I'm not sure what specs matter the most. Emad#9608: ? 25 x what the prices
Emad#9608: also you should just use cloud if you can Arthur_Embry#5364: K80s are $160 and an a5000 is $3000 Arthur_Embry#5364: I can use the cloud and probably will for anything production, but one of the signing bonuses for my new job is a workstation, and I figured it'd be nice to be able to run some stuff locally. Emad#9608: oh Emad#9608: didn't realise thats teh gap AI_WAIFU#2844: man I remember when k80s were top of the line Emad#9608: https://twitter.com/russelljkaplan/status/1513128007434530818 Emad#9608: https://twitter.com/russelljkaplan/status/1513128022827692038?s=21&t=HZ2IUAnfIc4TD-AeINQokg AI_WAIFU#2844: Wow, another layer of hell I didn't even know existed Emad#9608: s-risk Kia#2550: What kind of hell is this... They should stop giving idea to the companies Spacecraft1013#5969: the A5000 has faster memory, tensor cores, more capabilities, and you won't have to program parallelism into all of your code to use it since its one gpu Spacecraft1013#5969: as for raw compute it would probably be faster because of tensor cores, but also I don't really know of any deep learning benchmarks for a quad k80 system so i wouldn't be able to give you exact numbers Spacecraft1013#5969: oh and also you'd be able to use it in a normal system and won't have to get a server board Realmsmith#4506: So parallel GPU's actually increase inference speed on a pretrained language model? Realmsmith#4506: Can a cluster of 24GB gpu's run a 40GB language model like neox? AI_WAIFU#2844: Yes, because they collectively have more memory bandwidth AI_WAIFU#2844: latency to exchange activations is comparatively small asparagui#6391: but be aware of communication overhead Realmsmith#4506: that kind of gives me hope actually.
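For reference on the multi-GPU inference question above, a rough sketch (untested here, assuming a recent transformers + accelerate install) of sharding the ~40GB fp16 GPT-NeoX-20B checkpoint across whatever cards are visible; whether two 24GB cards leave enough headroom for activations is tight, so treat the split as illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/gpt-neox-20b"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.float16,   # ~40GB of weights in fp16
    device_map="auto",           # let accelerate split layers across visible GPUs
)
inputs = tok("EleutherAI is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```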
asparagui#6391: @Arthur_Embry k80 is gonna be slow, get a more modern arch alstroemeria313#1694: yeah try it on colab first eheh alstroemeria313#1694: They're *really slow* EricHallahan#1051: They were never designed for ML. alstroemeria313#1694: Or try out an AWS box with multiple K80s if you want to test that config alstroemeria313#1694: Before buying the hw asparagui#6391: yeah very testable Arthur_Embry#5364: I'll see if I can set up the 6b model in the cloud later on this week, and let y'all know how it goes random person#5234: yea you know what I meant random person#5234: its only helpful if you can go above batch 1 obviously 𓅬 gabriel_syme 𓅬#3220: If this is true, teleologically true, does it mean humans are screwed anyways, AGI or not? Doesn't the impossibility (seemingly) of escaping that mindset ensure we smh kill ourselves with one or another technological innovation of the future? nshepperd#2316: friendly AGI is the stick 𓅬 gabriel_syme 𓅬#3220: Or is it that people think AGI is the only way we can find peace (that is alignment)? I highly doubt the latter personally Some Point Process#3793: Find peace? 𓅬 gabriel_syme 𓅬#3220: I mean in the sense of "not killing each other eventually" Some Point Process#3793: Ah. Yeah it will probably not help with achieving world peace at least not immediately Some Point Process#3793: imo glazgoglabgalab#5255: Going interstellar makes it harder to kill everyone glazgoglabgalab#5255: I think nshepperd#2316: it won't let us blow each other up lol
Some Point Process#3793: But it costs trillions of dollars to employ the right people to develop interstellar tech Some Point Process#3793: and all the other nice things we want. So that's my model of why some humans want agi nshepperd#2316: i mean it's probably possible to find "peace" by constructing some sort of world government with a monopoly on violence nshepperd#2316: but such a bureaucracy is unlikely to be aligned to human values nshepperd#2316: and more likely to be an s-risk than anything Some Point Process#3793: The disruption to trade relationships that hold certain countries together (geopolitcal stability) might be bad (not just decisive stragegic advantage of achieving agi first etc) Some Point Process#3793: But yeah I;m not as concerned about any of that as the technical problem of ai alignment Some Point Process#3793: hopefully people just use their common sense nshepperd#2316: an individual human with unlimited power would probably do better, but i think aligned AGI is pretty much the only thing plausibly able to make aligned strategic decisions for all of humanity AI_WAIFU#2844: ma dude, let me introduce you to the concept of "relativistic kill vehicle" 𓅬 gabriel_syme 𓅬#3220: running from your problems rarely works ye 𓅬 gabriel_syme 𓅬#3220: I guess it's a good reprieve for those that can run [faster] in the first place, so I get the appeal nshepperd#2316: you just need to run close enough to the speed of light that by the time they would catch up you are outside their light cone due to the expansion of space 𓅬 gabriel_syme 𓅬#3220: Old Zeno had the right idea OccultSage#3875: For some reason, this reminds me of the __Speed of Dark__ by Elizabeth Moon. random person#5234: A generation ship accelerating towards speed of light 𓅬 gabriel_syme 𓅬#3220: Anyone wants to take a shot at this CVPR challenge for the AEC workshop with me? I might give it a try alone but I'd love to collaborate instead :) https://cv4aec.github.io (mostly challenge I but II is also cool) Emad#9608: https://cdn.discordapp.com/attachments/729741769738158194/962989594389917786/31212D9F-52CC-4C12-A556-DE71CFA169F6.jpg Daj#7482: I just meant this as in "_You_ cannot get such a stick". Such a stick is possible and can be acquired hypothetically (at least up to an epsilon). But of course AGI is the obvious form such a stick takes
Shade#4929: It’s kind of if you have a nuclear missile then i am going to build a nuclear missile first and its gonna be bigger than yours. Peace trough superior firepower. Shade#4929: it would certainly be interesting if it ends in anything but disaster Shade#4929: And this is me being optimistic. Shade#4929: And we all know the first ones to build an AGI will highly likely be Nvidia or Google or a government. faraday#0862: if AGI could better track people, NSA would be the first to build it faraday#0862: considering they have already cleverly exploited some unknown stuff with web encryption Caelum#8192: it will probably be done by something or someone with a lot of short term reward Shade#4929: Thats a high probability and with big funding. Shade#4929: and competence of course. Shade#4929: One of the top 3 in AI like Google, Nvidia, Openai etc, Safe bet is one of them creating the first AGI. Deleted User#0000: i have a dataset where all samples are labeled with true Deleted User#0000: how do i go about generating synthetic data that should be labeled with false? StellaAthena#3530: In general, you can’t StellaAthena#3530: You have no signal about what is false StellaAthena#3530: Generating false data would require domain-specific expertise faraday#0862: sometimes you "negate" but 1. that's systemic bias 2. limited coverage. so learning would focus on that tiny little negation space Aspiring Scout#7875: @Daj does Conjecture have a confirmation email that's sent once you apply Daj#7482: Uh no, our system is currently jank lol Daj#7482: We should probably set that up Zippy#1111: Should just use blitz.js, they basically provide a free confirmation email pipeline + login / logout / reset password / database / form validation / other stuff right from the start template.
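On the all-positives dataset question above: one common heuristic, when each positive is a pairing (query/response, passage/label, etc.), is to manufacture negatives by mismatching pairs at random. This is only valid if a random re-pairing is almost never a true positive, which echoes Stella's point that you need domain knowledge to know that. A minimal sketch, where `pairs` is a hypothetical list of (x, y) tuples:

```python
import random

def make_negatives(pairs, seed=0):
    """Build 'false' examples by pairing each x with a y drawn from another example."""
    rng = random.Random(seed)
    ys = [y for _, y in pairs]
    negatives = []
    for x, y in pairs:
        y_neg = rng.choice(ys)
        while y_neg == y:            # don't accidentally recreate the positive
            y_neg = rng.choice(ys)
        negatives.append((x, y_neg, False))
    return negatives

# labeled = [(x, y, True) for x, y in pairs] + make_negatives(pairs)
```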
Tangerine#9644: hi Louis#0144: how does one refrain from wordsmithing when writing papers StellaAthena#3530: Be bad at english Louis#0144: honk Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/963090861917343824/Screen_Shot_2022-04-11_at_10.59.12_AM.png nz#9710: thanks Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/963094554528075826/Screen_Shot_2022-04-11_at_11.13.47_AM.png Louis#0144: new eai paper dropping soon Louis#0144: 👀 bmk#1476: I'm curious, in what sense is it going to be robust bmk#1476: like robust against optimization pressure? Louis#0144: yeah bmk#1476: how do you accomplish that? Louis#0144: We cluster the embedding space into 90 discrete classes Louis#0144: rather than just using CARP embeddings Louis#0144: its the only way we can get carp guided RL to work Louis#0144: lol bmk#1476: wait but I don't see how that relates to robustness Louis#0144: its harder to exploit discrete classes rather than just a cosine sim? Louis#0144: it improves downstream performance significantly
Louis#0144: we can also prompt 20b for the classes rather than using meta labels Louis#0144: in which case we can make our preferences about D&D alignments for example bmk#1476: so wait lemme see if I got what you mean bmk#1476: so before what you would do is generate, say, 100 things, and pick the one with greatest cosine similarity Louis#0144: we already observe this btw bmk#1476: and now what you're doing is keeping all the things that are in the right cluster Louis#0144: @Alex Havrilla how exactly does our preference learning work? I forgot the method we decided on bmk#1476: and then picking one of them at random? bmk#1476: I'm kind of confused about what you're describing Louis#0144: We generate a bunch of stories and rank by NLL with respect to one of the classes. So for instance we say that the top 20 are positive examples of scary stories and the bottom 20 are negative examples of scary stories (We generate 1000 stories or so) Louis#0144: We then use this for preference learning Louis#0144: before, we would have a CARP embedding that said "This is a scary story" Louis#0144: and rank by cosine similarity Louis#0144: thats not robust Louis#0144: its easy for the LM to exploit Louis#0144: but now we use this clustering technique to make the classifier harder to exploit Louis#0144: and therefore produce better results downstream bmk#1476: so when you say clustering what you really mean is this "top 20, bottom 20" thing Louis#0144: nope Louis#0144: we're literally doing HDBSCAN
Louis#0144: 🙂 Louis#0144: on the embedding space bmk#1476: ok cool so you cluster the embeddings Louis#0144: We get the clusters before hand bmk#1476: then what Louis#0144: And then we tune CARP on the meta labels using a technique called CoOp Louis#0144: https://arxiv.org/abs/2109.01134 Louis#0144: We use -log(1 - distance to centroid (per cluster)) as a surrogate for NLL bmk#1476: wait hold up Louis#0144: holding up bmk#1476: so you cluster, and then you use the distance to cluster centroid to rank your stories, and then you do the top/bottom 20 thing Louis#0144: nope Louis#0144: ok Louis#0144: wait Louis#0144: 1) train carp Louis#0144: 2) find clusters Louis#0144: 3) compute distance of every critique to a centroid. Record corresponding (passasage, log probability distributions) Louis#0144: 4) Add a CoOp head to CARP Louis#0144: 5) Train CARP CoOp on these tuples Louis#0144: 6) Rank by one of the NLL components of CARP CoOp
Louis#0144: 7) Perform preference learning bmk#1476: by preference learning you mean the top/bottom 20 thing Louis#0144: yes bmk#1476: and by log probability you actually mean the thing with mashing the centroid distance with a log Louis#0144: yes bmk#1476: that seems like a lot of moving parts Louis#0144: without CoOp performance is awful Louis#0144: we need CoOp Alex Havrilla#6435: It's using the trl library https://github.com/lvwerra/trl i.e. we have a baseline model we use to prevent the finetuned model from going too far out of distribution while using carp to provide a score with cosine similarity. The robustness issue is that carp isn't particularly discriminatory, coop makes it more so Louis#0144: yeah carp is bad at discriminating bmk#1476: what's your intuition for why this performs better? Louis#0144: its easier for CARP to classify over 90 labels rather than an infinite amount Louis#0144: lol Alex Havrilla#6435: yeah basically that bmk#1476: but why is coop necessary then Louis#0144: how else would you do that Louis#0144: lol tpapp157#3643: You'll want to be careful with this when using HDBSCAN. HDBSCAN can generate clusters of arbitrary shape which means a simple distance from a centroid may not be a very good metric depending on the underlying data distribution. Louis#0144: CoOp is like the go to way to prompt tune a CLIP model Louis#0144: ooo
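A condensed sketch of the pipeline described above (CARP embeddings -> HDBSCAN clusters -> centroid-distance surrogate NLL -> top/bottom-20 pairs fed to trl-style preference learning). Here `carp_embed`, `critiques`, `stories` and `scary_cluster_id` are hypothetical placeholders, `min_cluster_size` is made up, and the -log(1 - d) surrogate assumes distances rescaled into [0, 1):

```python
import numpy as np
import hdbscan

# 1-2) embed critiques with CARP and cluster them
critique_embs = carp_embed(critiques)            # hypothetical encoder; (N, d), L2-normalized
clusterer = hdbscan.HDBSCAN(min_cluster_size=25).fit(critique_embs)
n_clusters = clusterer.labels_.max() + 1         # label -1 is HDBSCAN noise, skipped here
centroids = np.stack([critique_embs[clusterer.labels_ == c].mean(axis=0)
                      for c in range(n_clusters)])

# 3) surrogate NLL from distance to a cluster centroid
def pseudo_nll(embs, centroid):
    d = 1.0 - embs @ (centroid / np.linalg.norm(centroid))   # cosine distance
    d = np.clip(d / d.max(), 0.0, 1.0 - 1e-6)                # rescale so log() stays finite
    return -np.log(1.0 - d)

# 6-7) rank ~1000 generated stories against one cluster (say the "scary" one)
# and keep the extremes as preference pairs for trl-style tuning
story_embs = carp_embed(stories)
scores = pseudo_nll(story_embs, centroids[scary_cluster_id])
order = np.argsort(scores)
positives = [stories[i] for i in order[:20]]     # lowest surrogate NLL
negatives = [stories[i] for i in order[-20:]]    # highest surrogate NLL
```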
Louis#0144: @sweg bmk#1476: can't you do the much simpler thing where you do clustering and just use that for preference learning directly Louis#0144: tbh the clustering is the questionable part Louis#0144: LOL Louis#0144: not the coop part bmk#1476: so then my question goes back to Louis#0144: coop is needed Louis#0144: clustering isnt Louis#0144: and we show this in the paper Louis#0144: but clustering provides a very easy way to do this bmk#1476: ok so coop is adding the robustness Louis#0144: yes Louis#0144: the coop paper has a great explanation why Louis#0144: page 5 Louis#0144: @Alex Havrilla this is good though we need to explain the paper why coop increases robustness tpapp157#3643: Distance from cluster centroid if probably a fine metric for a first pass. It'll probably get you most of the benefit unless your data has some very complex distributional properties. What may give you better results, HDBSCAN provides "exemplar" points for each cluster, and you could instead calculate the distance to the closest exemplar. This would allow you to better handle non-convex clusters, multi-modal clusters, etc if those exist in your data. Just something to think about. Louis#0144: maybe for gyarados once we dive more into preference learning Louis#0144: for rn this is fine alstroemeria313#1694: What sort of work has there been on deep learning recommendation systems btw (I imagine a lot) Louis#0144: @random person's domain
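And a variant of that scoring step along the lines tpapp157 suggests, using HDBSCAN's per-cluster exemplar points instead of a single centroid (assuming the fitted hdbscan clusterer exposes its `exemplars_` list; `embs` is again a hypothetical array of embeddings):

```python
import numpy as np

def dist_to_cluster(embs, clusterer, cluster_id):
    """Distance of each embedding to the nearest exemplar of one HDBSCAN cluster."""
    exemplars = clusterer.exemplars_[cluster_id]                   # (m, d) exemplar points
    d = np.linalg.norm(embs[:, None, :] - exemplars[None, :, :], axis=-1)
    return d.min(axis=1)                                           # (N,) min over exemplars
```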
Louis#0144: @bmk are u sold (besides clustering) alstroemeria313#1694: Like, DL recommendation systems are differentiable proxies for a user's hidden preferences given their visible preferences, right? tpapp157#3643: Yep. Just wanted to point out that HDBSAN makes no assumption about the distribution of data within clusters. But a centroid based calculation is an implicit gaussian assumption. So you can have a misalignment there that may or may not be a big deal depending on your data. bmk#1476: so wait if I understand correctly the coop part is basically tuning carp to use discrete labels rather than text descriptions, where the discrete labels are generated through clustering Louis#0144: yes Louis#0144: this is correct Louis#0144: but we're also using other methods to construct discrete labels Louis#0144: like prompting 20b bmk#1476: ok so then what's the preference learning part Louis#0144: and that works pretty well too bmk#1476: like why do that if you could just use coop directly Louis#0144: once we have the discrete labels, we can use it to preference learn a language model writing a story to encourage the language model to for instance make the story scarier Louis#0144: you cant use coop directly bmk#1476: why not Louis#0144: you need some notion of discrete labels to use coop Louis#0144: it doesnt work without them bmk#1476: no I mean like bmk#1476: after you have your coop model Louis#0144: oh Louis#0144: rejection sample?
Louis#0144: you can do that bmk#1476: can't you just optimize directly against that bmk#1476: yeah rejection sample or something Louis#0144: how... Louis#0144: oh Louis#0144: yeah ok Louis#0144: rejection sampling we tried Louis#0144: it gave bad results Louis#0144: But you cant differentiate through it i think Louis#0144: alex and i couldnt find a way Louis#0144: unless alex and i are both v dumb bmk#1476: why would you need to differentiate through it Louis#0144: @Alex Havrilla are we dumb? Louis#0144: if you wanted to tune an LM the same way you tune your z vector with VQGAN Louis#0144: but you dont do that with LMs Louis#0144: LMs cant work that way Louis#0144: so we do preference learning and rejection sampling Louis#0144: rejection sampling does kinda work fwiw Louis#0144: but its slow Louis#0144: lots of beams
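For completeness, the rejection-sampling ("lots of beams") baseline mentioned above looks roughly like this; `score_scariness` is a hypothetical stand-in for the tuned CARP/CoOp scorer, and the 1.3B checkpoint is just a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
lm = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

def best_of_k(prompt, k=16, max_new_tokens=100):
    """Sample k continuations, score each, return the highest-scoring one."""
    inputs = tok(prompt, return_tensors="pt")
    outs = lm.generate(**inputs, do_sample=True, top_p=0.9,
                       num_return_sequences=k, max_new_tokens=max_new_tokens,
                       pad_token_id=tok.eos_token_id)
    texts = [tok.decode(o, skip_special_tokens=True) for o in outs]
    scores = [score_scariness(t) for t in texts]   # hypothetical scorer
    return texts[int(torch.tensor(scores).argmax())]
```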
Louis#0144: u know random person#5234: like recently? alstroemeria313#1694: at all i guess ^^;; random person#5234: I mean seems like a lot of the focus is on integrating cross features meaningfully and do it in a way thats computationally efficient alstroemeria313#1694: Ah random person#5234: DLRM is a good paper to read I think random person#5234: some of it ties back to word2vec ideas like negative sampling etc random person#5234: but I think the biggest focus there would be more on inference time Louis#0144: lol CARP could be used for deep recommendation engines random person#5234: whats the inference time at batch 1? random person#5234: also, whats the system latency on preprocessing/etc random person#5234: I think a lot of rec sys is also not purely DL. like you funnel it right random person#5234: anyways, recsys is not my role but I do have some knowledge I picked up on it random person#5234: also I think its moving towards federated learning sweg#8920: yeah ive been accounting for this sweg#8920: like i can color each cluster and just check manually sweg#8920: in general it can make weirdly shaped clusters but in our case its fine Louis#0144: hot random person#5234: I know an "unnamed" shopping/e commerce company is doing partially federated learning on device to track things like mouse position/hand position etc random person#5234: this gives you additional dimensions. also these modern "deep & wide" configuration networks tends to have millions of features. I think there baba paper that described their engineering method on training very wide and large models iirc.
random person#5234: iirc, there is another good Intel paper that talks about inferencing/serving cost vs like your pipeline cost for MLSYS. random person#5234: FAIR should also have another paper on DLRM system in production with different performance config on different hardware etc StellaAthena#3530: Comments that aged well: https://www.reddit.com/r/MachineLearning/comments/azvbmn/comment/eib3bp4/?context=1 Stephen#8051: Is EleutherAI fully remote? Stephen#8051: Anyone know where most of the community is based? Stephen#8051: I live in Ireland but I'm guessing most people here are from the UK or US. Teemochu#8740: Eleuther doesn't pay people Teemochu#8740: and it's not a legal entity Teemochu#8740: so I'd say "yes it is fully remote but not in the way you mean" bmk#1476: people get paid in the feeling of satisfaction of having done something Sphinx#2092: A sense of pride and accomplishment? Stephen#8051: Conjecture looks like a interesting company. Does it have a source of funding or is it run by volunteers? Teemochu#8740: no you're thinking of the EA community Sphinx#2092: Is that not what EA stands for? EleutherAI? bmk#1476: it stands for Electronic Altruism Maxime#0993: so I managed to get drivers from intel... its soo bad it doesn't work most of the time Maxime#0993: (About the new high end intel desktop gpu) zphang#7252: https://twitter.com/ohlennart/status/1513572670109061124 zphang#7252: 🤔 Realmsmith#4506: [Public AI access] -> dope
Kia#2550: Hot 𓅬 gabriel_syme 𓅬#3220: What about a #data[sets] channel? Millander#4736: Hey everyone! I'm a software engineer with little ML experience but am interested in beginning to contribute to open source. I was recommended this server from the AI Alignment Slack. I see that there are a couple projects that this group is working on. Is there a particular project that is best suited to beginner or is particularly neglected that I can help out with? Thanks! elderfalcon#4450: A sense of pride and accomp.....oh, I saw Sphinx already made the joke while typing. You guys are fast. elderfalcon#4450: Check out the announcements for a link to a beginner guide, I think! Have not seen it but it should help, hopefully. You do have to be pretty darn driven to jump into stuff here, so be sure to commit if you want to get work in! Just a heads up on that. Certainly a valuable service provided here, and that's about as much as I can speak to that as possible. :D Daj#7482: Conjecture is a fully funded for profit in-person startup (though remote roles are possible too) nz#9710: Might be of interest to some: https://semianalysis.substack.com/p/tenstorrent-blackhole-grendel-and Emad#9608: https://techcrunch.com/2022/04/06/artificial-intelligence-is-already-upending-geopolitics/?tpcc=tcplustwitter imceres#0461: Hi guys! I found today about chain of thoughts and I was really amazed: I'd like to know what do you think of it imceres#0461: https://arxiv.org/abs/2201.11903 imceres#0461: It's part of Google's big Palm model rockclimbing_nerd#4931: Does anyone have aws/terraform experience? I’m struggling with getting this infrastructure moved on and could use a pairing session. Louis#0144: echoing this /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: An interesting idea I've been thinking about is that the LLM space could transform into commodities (not in the financial sense). /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: All these different providers are akin to the hardware makers of the 70s-80s, fighting over increasingly slimmer margins. /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: The winner of that war was the software atop of those Hardwares.
/ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: I mean the API of interacting with a transformer/LLM does not provide any lock in for the vendor. /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: So the difficulty of the consumer of said services to switch service is negligible. Emad#9608: Well yeah plus they have the issue of randoms on the internet creating open source versions of their models Emad#9608: Which get wider uptake and crush margins further even if 90% as good Emad#9608: As we saw with databases and servers /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: Indeed, Oracle has lost market share every year since 2013. /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: I think Amazon got rid of their last Oracle DB in 2019. Millander#4736: Thanks! I can't find any such section in #announcements but I'll keep looking. Louis#0144: o wow i havent thought about oracle in a long time /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: It's difficult to judge DB market share, do you consider vendor adoption or end user count... /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: Somewhat interesting breakdown: https://towardsdatascience.com/top-10-databases-to-use-in-2021-d7e6a85402ba /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: That aside, the obivous pie in the sky want for prediction is, what will be the Microsoft of the AI age. /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: I'm not entirely convinced it will be an incumbent without acquisition. /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: The good take away from all of this is, if you are building software that builds ontop of these models, you are in the best of positions. Emad#9608: https://www.gwern.net/Complement alstroemeria313#1694: Hey how can you like, train a model that learns an optimal transport map between two *arbitrary distributions* alstroemeria313#1694: i.e. where both distributions come from giant datasets alstroemeria313#1694: You can sort of do it with two diffusion models rn by learning two optimal transport maps between N(0, I) and the distributions alstroemeria313#1694: Then transporting a thing from distribution 1 to N(0, I) and then to distribution 2. alstroemeria313#1694: But this is not really mathematically equivalent is it?
alstroemeria313#1694: (The use case for this is transporting a CLIP or CLOOB embedding from the user to the corresponding point in the distribution of CLIP or CLOOB embeddings that an embedding to image model is conditioned on.) tpapp157#3643: Doesn't Dalle-2 basically do this to translate between image and text embeddings? They just trained a model to learn a mapping from one to the other. alstroemeria313#1694: that requires a paired dataset. alstroemeria313#1694: I want to use unpaired. alstroemeria313#1694: Like "give me this embed but if it were in distribution for my CLOOB conditioned diffusion model that's only trained on anime" tpapp157#3643: Oh ok. That's a lot harder. Maybe something like a CycleGAN approach? alstroemeria313#1694: maybe alstroemeria313#1694: CycleGAN has no real guarantee that it will actually learn an optimal transport map does it? nshepperd#2316: the wasserstein GAN discriminator is supposed to learn the earth mover distance isn't it? or something connected to it alstroemeria313#1694: Or like, anything consistent from run to run alstroemeria313#1694: yep nshepperd#2316: can you like, train a thing with lipschitz gradient penalty and then construct an ODE with it that does the optimal transport alstroemeria313#1694: It learns a continually refined W1 distance approximation between the fakes and the reals. alstroemeria313#1694: Then you take the fakes from G and try to optimal transport them toward the reals w/ gradient descent on G. alstroemeria313#1694: you could like, take the gradient from D and just try to transport the fakes directly I guess, with gradient descent? alstroemeria313#1694: idk if that would actually work well. Louis#0144: this is the weirdest jargon Louis#0144: gan researchers are not human Louis#0144: tbh nshepperd#2316: honk
Louis#0144: honk nshepperd#2316: they're ganse, obviously alstroemeria313#1694: So hm alstroemeria313#1694: We would just like, train a D without a G? alstroemeria313#1694: On batches of distribution 1 and distribution 2? nshepperd#2316: the trouble is this doesn't tell you when to stop doing gradient descent? for the resulting distribution to actually match up alstroemeria313#1694: i am not sure alstroemeria313#1694: the thing is like. you are not still feeding the original fake into D alstroemeria313#1694: so how does it even know how far to move it alstroemeria313#1694: yeah nshepperd#2316: yeah thats what i was thinking alstroemeria313#1694: with diffusion models it knows because you condition it on timestep and explicitly train it to match a certain ODE trajectory alstroemeria313#1694: is there anything you can do with geomloss alstroemeria313#1694: like if you have minibatches from both distributions alstroemeria313#1694: (Also I would prefer Wasserstein-2 to Wasserstein-1 if I get a choice, diffusion learns W2 optimal transport maps IIRC) tpapp157#3643: I guess my concern is that without paired samples how can you guarantee the distributions map in a way that aligns semantically? alstroemeria313#1694: i guess :/ alstroemeria313#1694: ...OK dumb idea alstroemeria313#1694: Can I fit multivariate Gaussians to my two distributions. alstroemeria313#1694: Then whiten and color.
alstroemeria313#1694: Apparently if the covariances commute (no particular guarantee of this I think though) alstroemeria313#1694: Then the W2 optimal transport map is given by the whitening and coloring transform. alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/963480904641171486/Screen_Shot_2022-04-12_at_9.49.02_AM.png alstroemeria313#1694: putting this here for my reference guillefix#8591: can u not just stop when the W1 loss is small? guillefix#8591: but it tends to learn a semantically meaningful map no? alstroemeria313#1694: no, the W1 distance is between distributions, not between samples guillefix#8591: but u can just do it for a batch of samples at a time? alstroemeria313#1694: but we don't have batches in inference alstroemeria313#1694: just single samples guillefix#8591: why not feed it together with random samples from the train set? alstroemeria313#1694: maybe guillefix#8591: or i donno, learn a model that simulates, for each individual sample, what GD does with W1 loss for large batches of samples alstroemeria313#1694: i don't actually know how to do this ^^;; alstroemeria313#1694: given only minibatches of samples guillefix#8591: just run GD with large batches and W1 loss, then save the data of where each sample ends up, or even the whole trajectory guillefix#8591: then train a model to imitate that? guillefix#8591: i guess it would be probabilistic tho guillefix#8591: but hmm guillefix#8591: if u use SGD
guillefix#8591: so u may need a probabilistic model to learn the map alstroemeria313#1694: also the distributions i want are like, i want a model that does non-overfit distributions tpapp157#3643: This does not guarantee semantic alignment and makes the (likely invalid) assumption that the two distributions structurally and semantically align with each other. alstroemeria313#1694: judging from diffusion results you can... sometimes get alignment doing this but it is not actually guaranteed guillefix#8591: i donno. i was assuming the above was a optimal transport map or something and that that implied semantic alignment. but im not sure of either of those tpapp157#3643: Yeah I suspect it depends a lot on how structurally similar the two distributions are to each other to begin with. Like if one is just a trivial rotation of the other. alstroemeria313#1694: people do stuff like optimal transport an image to N(0, I) using an OT map conditioned on its ImageNet class alstroemeria313#1694: And OT it back to RGB using a different ImageNet class alstroemeria313#1694: This can work if the two classes are similar enough, like lions and tigers alstroemeria313#1694: But it kind of fails to produce a meaningful translation if you are going from like, tigers to pizza alstroemeria313#1694: It will just kind of have similar textures/features in similar regions. alstroemeria313#1694: like https://arxiv.org/pdf/2203.08382.pdf tpapp157#3643: Interesting work. Quickly skimming, it seems like they're using a unified pretrained model across the different classes so it would make sense that the model would have already learned a semantically aligned embedding space. Carmilla#1337: does anyone know of a fix for this? https://colab.research.google.com/github/dribnet/clipit/blob/master/demos/PixelDrawer.ipynb#scrollTo=XziodsCqVC2A Carmilla#1337: or an updated one Furk#5259: what is the cheapest cloud gpu provider to train a medium gpt model? kurumuz#5695: check coreweave. random person#5234: oof MicPie#9427: @alstroemeria313 I once stumbled over this for unsupervised translation which maybe can be adapted for your use case? https://discord.com/channels/729741769192767510/747850033994662000/956824573511340054 MicPie#9427: Maybe this can be also used to reduce the modality gap in CLIP-like embeddings
MicPie#9427: Then DALL-E 2 wouldn’t need the prior anymore. 🤔 nev#4905: that looks linear and unsupervised - CLIP has supervised data and the mapping most likely isn't linear /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: Interesting, does this mean that Big Model is vapourware or could it just be a case of non-native English speakers making their lives easier. /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: Big Model/WuDao 2.0 Zippy#1111: LOL they used the tools created for the copied-from article to find plagiarism in the plagiarism paper. Zippy#1111: :kekgold: Zippy#1111: > So all I did to find the above copied text was to take these PDFs, extract out all of the text and dump it into a single .txt file, and then run our dataset deduplication tools (that we developed for the paper that was copied from!) to find all repeated sequences that were contained both in the Big Models paper along with some other prior publication. 𓅬 gabriel_syme 𓅬#3220: I think plagiarism sucks but I also think that the English academic language is a huge gateway to success. I can imagine plenty of people might never publish due to that alone. Wonder when our brilliant translation models might be used for a multilingual arxiv tammy#1111: well i don't think the issue is *just* the plagiarism ilovescience#3282: @alstroemeria313 so you are interested in unpaired image translation? tammy#1111: it puts into doubt the whole work i'd think alstroemeria313#1694: it was for unpaired other things alstroemeria313#1694: well, yeah i guess bc it would be CLOOB image embeds 𓅬 gabriel_syme 𓅬#3220: Ofc it does, it is logical. Although most of these papers are really on faith aren't they ilovescience#3282: well if you are interested in still using diffusion models, maybe this might help: https://arxiv.org/abs/2203.08382 𓅬 gabriel_syme 𓅬#3220: Who is reproducing any of that alstroemeria313#1694: that's this https://cdn.discordapp.com/attachments/729741769738158194/963541563567378432/Screen_Shot_2022-04-12_at_1.50.06_PM.png alstroemeria313#1694: It may still be the best way to do it, mind alstroemeria313#1694: But... it seems like learning the optimal transport map directly is something that someone would have done
𓅬 gabriel_syme 𓅬#3220: Still I think gating is way worse than plagiarism. The one is always after the fact, the other removes possibilities before they happen. Like if anything, this story makes the original paper more popular (rightfully so, it's great)
alstroemeria313#1694: ...Actually. Why exactly are they not equivalent.
alstroemeria313#1694: Because in the case of two multivariate Gaussians where the covariance matrices commute, they actually are equivalent
ilovescience#3282: im gonna be honest, i am not familiar with optimal transport... are you just looking for a way to map directly from one domain to another?
alstroemeria313#1694: Since the OT map in that case is given by the whitening and coloring transform.
alstroemeria313#1694: And the WCT is just transforming it to N(0, I) and then from there to the other distribution.
alstroemeria313#1694: yeah, two same dimensional distributions, both defined by large datasets that don't fit into memory
ilovescience#3282: this is the other unpaired image translation paper with diffusion models that i am familiar with: https://arxiv.org/abs/2104.05358
alstroemeria313#1694: huh let me read that
ilovescience#3282: if you aren't limited to just diffusion models i have a whole bunch of unpaired image translation papers
alstroemeria313#1694: i'm not. they aren't images though, they are CLOOB embeddings
ilovescience#3282: (applications of such models is the focus of my ph.d. research actually)
alstroemeria313#1694: so the arch needs to be different. like a residual MLP or something
ilovescience#3282: well i assume many of the principles would transfer well
alstroemeria313#1694: like 512-1024 dim or something
alstroemeria313#1694: yeah
alstroemeria313#1694: but you can also do things like fit multivariate Gaussians directly because they are so low dimensional
ilovescience#3282: wait so what is your endgoal actually? you have CLOOB embeddings of images but what are you trying to do with those embeddings/images?
alstroemeria313#1694: translate from general CLOOB embeddings to dataset specific ones
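A minimal sketch of the fit-Gaussians-then-whiten-and-color idea, with random tensors standing in for the general and dataset-specific CLOOB embeddings. As noted above, this coincides with the W2-optimal transport map only when the two covariances commute; otherwise it is just an approximation.

```python
import torch

def _sqrtm_psd(mat: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Symmetric square root of a PSD matrix via eigendecomposition."""
    vals, vecs = torch.linalg.eigh(mat)
    return vecs @ torch.diag(vals.clamp_min(eps).sqrt()) @ vecs.T

def fit_gaussian(x: torch.Tensor):
    """Mean and covariance of an (n, d) batch of embeddings."""
    mu = x.mean(dim=0)
    xc = x - mu
    return mu, xc.T @ xc / (x.shape[0] - 1)

def whiten_color(x, mu_src, cov_src, mu_tgt, cov_tgt):
    """Whiten against the source Gaussian, then color with the target one.

    Equals the W2-optimal transport map between the two Gaussians only when
    the covariances commute; otherwise it is an approximation.
    """
    whiten = torch.linalg.inv(_sqrtm_psd(cov_src))  # symmetric
    color = _sqrtm_psd(cov_tgt)                     # symmetric
    return (x - mu_src) @ whiten @ color + mu_tgt

# Random stand-ins for "general" and dataset-specific embedding collections.
general = torch.randn(10_000, 512)
anime = torch.randn(10_000, 512) * 0.5 + 0.1
mu_s, cov_s = fit_gaussian(general)
mu_t, cov_t = fit_gaussian(anime)
mapped = whiten_color(general[:8], mu_s, cov_s, mu_t, cov_t)
print(mapped.shape)  # (8, 512)
```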
ilovescience#3282: hmm so like a domain adaptation problem? alstroemeria313#1694: yeah alstroemeria313#1694: as in, i have a CLOOB embedding, and a model trained to map CLOOB embeddings to images alstroemeria313#1694: And I want to generate an embedding that is *in distribution* for the model AI_WAIFU#2844: lol alstroemeria313#1694: That still like, corresponds to the text the user input. ilovescience#3282: "a model trained to map CLOOB embeddings to images" so you don't actually have dataset-specific CLOOB embeddings? alstroemeria313#1694: They are the same CLOOB model alstroemeria313#1694: The datasets are too small to train a CLOOB on alstroemeria313#1694: So we use large scale pretraining ilovescience#3282: ah okay ilovescience#3282: what's wrong with fine-tuning CLOOB on the smaller dataset? alstroemeria313#1694: the smaller dataset is also not image/text paired. we do not have text for it. ilovescience#3282: ah okay ilovescience#3282: @alstroemeria313 i would recommend looking into some of the domain adaptation literature as well... ilovescience#3282: some quick googling has pulled up this paper: https://arxiv.org/abs/1507.00504 alstroemeria313#1694: huh ilovescience#3282: @alstroemeria313 how do you feel about adversarial training? like this:
CLOOB embedding --> CLOOB model --> output image --> domain classifier alstroemeria313#1694: maybe~ timudk#8246: Anybody here aware of how to train multiple prediction heads at the same time in PyTorch? They all get the same input from a frozen backbone, and I have enough memory on the GPU to theoretically run them in parallel. alstroemeria313#1694: you just compute all of the outputs from all of the heads then apply the losses you want to each one then sum the losses? alstroemeria313#1694: making sure to like, not compute the frozen backbone's output twice timudk#8246: Yeah that would work but it is kinda slow as it computes the outputs serially. I was hoping to speed it up. timudk#8246: One option would be to have them on two separate GPUs I guess alstroemeria313#1694: Ah. That is how I have always done it. timudk#8246: I mean I have no idea if PyTorch can do it in parallel and if it is faster haha. You might be doing the best thing possible alstroemeria313#1694: the only thing you really have to keep in mind is to only do the shared parts once alstroemeria313#1694: although in this situation i usually do not have the shared parts frozen, so timudk#8246: Yeah everything that is shared is frozen so I am not worried about the losses alstroemeria313#1694: *nods* alstroemeria313#1694: you just have to compute the input to the different heads once and then feed that input to the heads alstroemeria313#1694: or do you mean the multi-gpu case? Some Point Process#3793: The way multiple attention heads are done is they just concat them (sorta?) alstroemeria313#1694: You could compute the heads' input once on one GPU, make a copy on the second GPU, and backprop separately alstroemeria313#1694: But this seems... meh alstroemeria313#1694: Like you might be able to just use normal data parallel training using all heads on all GPUs. alstroemeria313#1694: nah the backbone's frozen
alstroemeria313#1694: and the heads have separate losses timudk#8246: Yeah you guys have a good point: backpropgation might be tricky! But for inference only, parallelizing the heads should speed things up I suppose? Some Point Process#3793: If the compute tiles can't handle the layer parallelism they presumbly just compute the output of one part of a layer at a time etc. Like imagine dot producting a huge vector with another huge vector, and your compute tiles can't do the whole dot product at once alstroemeria313#1694: Yeah. I have actually done it non-frozen though! timudk#8246: Kinda depends on the task alstroemeria313#1694: When you .to(device_2) a tensor it gets a grad_fn that copies the gradient back to the original device alstroemeria313#1694: So you actually don't need to copy the activations alstroemeria313#1694: You just have to sum the losses (doing another copy across GPUs) and backprop once alstroemeria313#1694: And autograd will do the backward pass copies alstroemeria313#1694: (I was doing this for manual model parallelism to fit into GPU memory eheh) Some Point Process#3793: wdym? Some Point Process#3793: They concat them metaphorically at least lol Some Point Process#3793: at least for the output of multiple attention heads. Projecting that output back into embedding space alstroemeria313#1694: these are output heads alstroemeria313#1694: like the projection to logits at the end or something alstroemeria313#1694: and there is more than one of them and they potentially have different output dimensionalities and losses alstroemeria313#1694: i have considered doing like, a projection to logits and a projection to a scalar (a policy head and a value head) Some Point Process#3793: Yeah I mean a loss function can be viewed as a layer that outputs a scalar. The linear approximation to which a gradient calculation of the loss fn (a single number) corresponds can also be seen as a "projection" to a scalar (of sorts) alstroemeria313#1694: ah chilli#5665: you could vmap them 🙂
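A minimal sketch of the setup described above: the frozen backbone is computed once, two heads with different output shapes get their own losses, and a single backward pass covers both. The dimensions and toy heads are made up for illustration.

```python
import torch
import torch.nn as nn

feat_dim, n_classes = 512, 10  # hypothetical sizes

backbone = nn.Sequential(nn.Linear(768, feat_dim), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad_(False)
backbone.eval()

policy_head = nn.Linear(feat_dim, n_classes)  # e.g. a projection to logits
value_head = nn.Linear(feat_dim, 1)           # e.g. a projection to a scalar

opt = torch.optim.Adam(list(policy_head.parameters()) + list(value_head.parameters()), lr=1e-4)

def step(x, labels, targets):
    with torch.no_grad():        # frozen backbone: compute its features only once
        feats = backbone(x)
    logits = policy_head(feats)
    values = value_head(feats).squeeze(-1)
    loss = nn.functional.cross_entropy(logits, labels) + nn.functional.mse_loss(values, targets)
    opt.zero_grad()
    loss.backward()              # one backward pass through both heads
    opt.step()
    return loss.item()

# If the heads live on different GPUs, feats.to("cuda:1") inserts a grad_fn that
# copies the gradient back across devices, so summing the losses and calling
# backward once still works.
x = torch.randn(32, 768)
labels = torch.randint(0, n_classes, (32,))
targets = torch.randn(32)
print(step(x, labels, targets))
```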
Brady#0053: Anyone else thing "big model" is the best name ever? bmk#1476: it's bigger than that, it's large Brady#0053: I use them interchangeably, (in terms of size) Brady#0053: https://cdn.discordapp.com/attachments/729741769738158194/963598047777009754/Screen_Shot_2022-04-12_at_8.33.52_PM.png Brady#0053: "Big model" is so memeable bmk#1476: https://youtu.be/1Pr8xnNi7OM Brady#0053: Oohhh Brady#0053: We should now use the abbreviation "BM" Brady#0053: Wow, that's a nice BM timudk#8246: Interesting! Let me give it a try AI_WAIFU#2844: when are we gonna get this in our frameworks? https://arxiv.org/abs/1708.06799 chilli#5665: that's assuming all of your heads are the same btw chilli#5665: I think this is doable today in Jax. AI_WAIFU#2844: when are *you* gonna do it chilli#5665: in PyTorch? 😛 timudk#8246: What do you mean the same? Like the same parameters? 😄 chilli#5665: like, same structure Louis#0144: thicc model when chilli#5665: i.e. the only thing that's different is the parameters AI_WAIFU#2844: yes, I need to be able to draw the most out of nvidia GPUs.
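And for the case chilli raises (heads with identical structure where only the parameters differ), a hedged sketch of batching them with `vmap`, assuming a recent PyTorch with `torch.func`; at the time of this conversation the same pattern lived in functorch. Sizes are placeholders.

```python
import copy
import torch
import torch.nn as nn
from torch.func import functional_call, stack_module_state, vmap

feat_dim, n_out, n_heads = 512, 10, 8
heads = [nn.Linear(feat_dim, n_out) for _ in range(n_heads)]

# Stack the per-head parameters into single batched tensors.
params, buffers = stack_module_state(heads)
base = copy.deepcopy(heads[0]).to("meta")  # structure only, no storage

def run_head(p, b, x):
    return functional_call(base, (p, b), (x,))

feats = torch.randn(32, feat_dim)               # shared (frozen) backbone output
out = vmap(run_head, in_dims=(0, 0, None))(params, buffers, feats)
print(out.shape)                                # (n_heads, 32, n_out)
```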
timudk#8246: Yep, that's the setup!! Some Point Process#3793: Why is it not possible to just encapsulate all the heads in a layer subclass chilli#5665: yeah, vmap should work well for this timudk#8246: Alright, will try it on a toy example and report back! ilovescience#3282: super model when elderfalcon#4450: when chad model dmvaldman#4711: does this mean that gcp doesn't have available A100 gpus? https://cdn.discordapp.com/attachments/729741769738158194/963661718913974312/Screen_Shot_2022-04-12_at_9.46.36_PM.png dmvaldman#4711: in us-central1 random person#5234: No random person#5234: This means you are at your quota dmvaldman#4711: 😦 dmvaldman#4711: i mean, if i have a gpu, shouldn't i be able to use it... all the time? OccultSage#3875: No. OccultSage#3875: A100s are rationed heavily. MicPie#9427: ok, then I get your setup totally wrong in my comment above. Maybe you can simply keep on training the CLOOB model but mix in the images w/o text with a SimCLR like setup, like what was used in SLIP? With that setup the image encoder hopefully embeds the new image data correctly in the multimodal embedding space. Another thing could be to use some of the recent captioning models from LAION to create image/text pairs. DigThatData#7946: https://www.quantamagazine.org/deep-learning-poised-to-blow-up-famed-fluid-equations-20220412/ ilovescience#3282: one step closer to solving the millenium problem? DigThatData#7946: Looks like this is about the Euler equations, not Navier-Stokes. But maybe this work could be extended in that direction?
ilovescience#3282: > (Fluids that do have viscosity, like many of those found in nature, are modeled instead by the Navier-Stokes equations; blowing those up would earn a $1 million Millennium Prize from the Clay Mathematics Institute.)

Euler Equations are just a special case of the Navier-Stokes equations so I would say it is a step in that direction, but it's been a while since I touched fluid dynamics so maybe there's some special reason why this work wouldn't be applicable to NS equations
DigThatData#7946: Yeah my experience with fluid dynamics is basically nonexistent. Maybe touched on it in mathematical modeling, but my prof for that course was worse than garbage. Didn't challenge us at all, just taught us to solve diff eq's plug-n-chug.
DigThatData#7946: Challenge your students, folks. They can take it.
ilovescience#3282: I took a biofluid mechanics class in undergrad, learned about NS equations and solving those PDEs, it was very interesting...
DigThatData#7946: > i took a biofluid mechanics class
Is that what the kids are calling it these days? Kinky.
ilovescience#3282: well i have been interested in biomedical engineering for much of my life, always excited by applications of STEM to solving medical problems and hopefully impacting people's lives... since I became obsessed with ML a few years back, thought ML in medicine would be a good field for me to pursue... in another life I'd probably be doing a PhD in synthetic biology actually
DigThatData#7946: You could literally start working on a second PhD after finishing your current degree program and still end up being the youngest researcher in the group. No need to write off that interest entirely. who knows, maybe you'll get around to it
ilovescience#3282: yeah i was actually considering that before but less so now... we'll see...
DigThatData#7946: Hell, you'll probably invent some niche mutant bio-ML field eventually
nz#9710: Heya, what are the best ways to manage sequence datasets (hopefully also allowing cross-platform downstream usage)? I know of seqIO, webdatasets, HF datasets, tfrecords... though looking to find out more about pros and cons of each
𓅬 gabriel_syme 𓅬#3220: The most widespread are HF datasets. 2 lines of code to use them is powerful. My favorite is tfrecords, very efficient and easy to use. For large files though I see people using streaming dataset formats
nz#9710: Yea HF datasets does seem like a favourite (especially since it seems nicely integrated with the rest of HF as well). If I'm not mistaken lm-eval-harness uses it too. TFRecords is indeed the most performant (possibly together with SeqIO?) but it also seems to be lower level and potentially less nice to deal with... Which dataset formats are you referring to when talking about streaming? WebDatasets?
𓅬 gabriel_syme 𓅬#3220: yeah I've seen people do webdataset for multimodal training stuff
𓅬 gabriel_syme 𓅬#3220: or with larger datasets anyways
𓅬 gabriel_syme 𓅬#3220: I'm a noob but had difficulty back then even making the files lol, might be a lot better now nz#9710: Alright, I'll start looking into HF datasets for now. Thank you! DigThatData#7946: surprisingly interesting lecture on effective research communication: https://www.youtube.com/watch?v=Unzc731iCUY Arian Khorasani#5227: I wanna start working with Parallel Computing but I don't have any materials, could someone please guide me with this? Thanks in advance chilli#5665: depends on what you're looking for. Cuda? Arian Khorasani#5227: Yup Cuda can be one of them Furk#5259: I'm going to create tfrecords with gpt-neo repo's `create_tfrecords.py`. And I need to pass a parameter called 'minimum_size'. I couldn't fully understand what that means. Can anyone explain it? Corran#8565: Sberbank Ai team behind Rudalle appears to be rebranding "Some sberbank-ai repositories work with redirects, nothing needs to be changed. All our open models: ruGPT3, ruDALL-E, transformers zoo, will remain in the public domain. I'm off to prepare a new release for you. moyai Stay tuned! https://github.com/ai-forever/ " chilli#5665: The udacity course is pretty good cdossman#8999: Hey everyone! Playing around with OpenAI API and was really amazed that adding context to the prompt can dramatically improve results. However the more context you give the more your API call costs. Has anyone experimented with hacking this feature to fit more context in to a smaller number of characters? My hypothesis is that for any given human readable context string there exists a shorter string that can convey 90% of the data with 10% of the length. Hope I'm making sense. quinn#9100: Has anyone heard of tools to do datavisualziation on test failures? like group failures where numerics were off by 1% with one label and where numerics were off by 10% with another label
Arian Khorasani#5227: Thanks @chilli chilli#5665: Does anybody happen to be an expert in Python garbage collection :^) louis030195#2462: https://louis030195.medium.com/deploy-seeker-search-augmented-conversational-ai-on-kubernetes-in-5-minutes-81a61aa4e749 tpapp157#3643: uh oh. If you're worried about python garbage collection then you're way in the weeds. chilli#5665: yes, it's even more in the weeds than just "python garbage collection" chilli#5665: it has to do with PyTorch ownership semantics :thonk: chilli#5665: and specifically, Python garbage collection appears to be causing a bug chilli#5665: in certain cases chilli#5665: mmm, it's not that kind of bug chilli#5665: as in, I don't have a minimal enough example that would help here chilli#5665: I more need... conceptual help with the python garbage collector 𓅬 gabriel_syme 𓅬#3220: anyone with experience using bigscience's https://github.com/bigscience-workshop/architecture-objective cfoster0#4356: The checkpoints for this don't look like they're out yet 𓅬 gabriel_syme 𓅬#3220: Yeah didn't see anything either. Also thinking to try their implementation to train/finetune my own T5 models edit: there seem to be t5.1.1 checkpoints ILmao#5683: I think that line was crossed long ago :berk: Louis#0144: Do you want me to poke around internally? Louis#0144: I can see... Louis#0144: Maybe idk Louis#0144: You know what, I'm not going to do that on my first week
Louis#0144: 🙂
𓅬 gabriel_syme 𓅬#3220: nah it's alright, I can fail my way to it I guess
𓅬 gabriel_syme 𓅬#3220: trying to go into more enc-dec models and this paper is super interesting since it shows going from that to causal models (and vice versa) can be good for performance
𓅬 gabriel_syme 𓅬#3220: I'm especially interested in the causal dec -> non-causal dec
StellaAthena#3530: The paper was announced less than 24 hours ago lol.
𓅬 gabriel_syme 𓅬#3220: I know lol my bad, you know how it goes with twitter and discord
𓅬 gabriel_syme 𓅬#3220: I really like the paper though, so I'll keep tabs
StellaAthena#3530: @𓅬 gabriel_syme 𓅬 Yeah I'm a big fan. Sad I didn't have the bandwidth to work on it
𓅬 gabriel_syme 𓅬#3220: it's okay, we can still all work on top of its ideas 🙂
StellaAthena#3530: One caveat: be careful interpreting Figure 7. It is not looking at the question of "what is better to train from scratch," rather, it is asking how much adaptation you need to do.
𓅬 gabriel_syme 𓅬#3220: thanks, had similar confusion about figure 6 but I admit I went to figures before properly reading the text
StellaAthena#3530: oh lol
StellaAthena#3530: I meant figure 6, they changed the order of the figures in the arXiv version
𓅬 gabriel_syme 𓅬#3220: yeah that was a bit confusing
StellaAthena#3530: The way to interpret the figure is: You have a MLM, but you want a CLM. Should you train it from scratch or do adaptation?
StellaAthena#3530: The caption says "Adaptation can efficiently convert non-causal decoder-only models pretrained with MLM into causal decoder-only models with FLM (left), and vice-versa (right)." but tbh I think I would only consider CLM -> MLM efficient. A 30% token-saving is nice but not a game changer. a 60% saving on the other hand...
𓅬 gabriel_syme 𓅬#3220: Makes sense. I wonder if this becomes a lot more valuable due to the fact we have a lot of off the shelf models we can use to initiate this?
StellaAthena#3530: Another way to think about this is: let's say you want to have *both* a CLM and a MLM. Which should you train first?
𓅬 gabriel_syme 𓅬#3220: like we could use the 20B NeoX to make a 20B MLM?
StellaAthena#3530: Yup
StellaAthena#3530: We are doing that 🙂 𓅬 gabriel_syme 𓅬#3220: that's pretty amazing actually 𓅬 gabriel_syme 𓅬#3220: oh nice! StellaAthena#3530: It is 𓅬 gabriel_syme 𓅬#3220: yeah I'm definitely trying this on my CLM models then 𓅬 gabriel_syme 𓅬#3220: just need to figure out the custom datasets bit but should be ok EricHallahan#1051: I wonder if tuning to an MLM would make it impossible to use ROME. 🤔 StellaAthena#3530: Is ROME CLM only? EricHallahan#1051: Yes. StellaAthena#3530: I mean, the paper is. StellaAthena#3530: But is it *essential*? 𓅬 gabriel_syme 𓅬#3220: I wonder if I can use my Roberta / Bart models in this. I think I did a mistake not training a T5 model hah StellaAthena#3530: The Knowledge Neurons paper was MLM only, but Sid got it working for CLMs EricHallahan#1051: To my knowledge, it is. You don't get the clean transition point to the last token. StellaAthena#3530: Hm StellaAthena#3530: You can ask the question the other way too StellaAthena#3530: Let's say you do CLM adaption of a MLM. Will ROME work? Will it work as well as if you had just trained a CLM from the start? StellaAthena#3530: I think I got the NeoX arxiv submission sorted btw StellaAthena#3530: unfortunately getting the VQGAN-CLIP paper sorted will be harder EricHallahan#1051: Do the images just need better compression?
StellaAthena#3530: I'm not sure if that's a "just" StellaAthena#3530: but yes that would help EricHallahan#1051: valid StellaAthena#3530: The zipped download is > 100 MB StellaAthena#3530: Also it turns out that arXiv doesn't accept files with spaces in the name StellaAthena#3530: So we need to go through and change them to `_` random person#5234: can you just disable garbage collection and see the same bug pops up chilli#5665: It doesn't pop up - in my minified case it's only occurring after I call gc.collect Daj#7482: Cool article on Chris Olah's work (that I am partially responsible for coming into being, happy to push people to pay more attention to this awesome work :hap:) https://www.quantamagazine.org/researchers-glimpse-how-ai-gets-so-good-at-language-processing-20220414 astronot#8142: hey guys, is it possible to run vqgan+clip with multiple gpus? astronot#8142: locally StellaAthena#3530: Yup. Same as any other model. astronot#8142: is there a parameter to enable multi gpu? i’m new to this StellaAthena#3530: Ah, no as far as I’m aware there’s no n00b friendly multi-GPU verison astronot#8142: ok, thank you 🙂 dandelion4#3240: Are you guys looking for manpower for any projects right now, or is compute mostly the bottleneck? I'm going to be free for the next couple months; if there's anywhere I'd be useful, I'd love to do some coding here. Louis#0144: Compute is not the bottleneck 😆 dandelion4#3240: Is it mainly just scaling up the neox framework then? Or what are you guys mainly working on these days? AI_WAIFU#2844: There's a bunch of stuff going on. We have an outdated project board, but you'll get a better idea of what people are working on by lurking in the project channels.
dandelion4#3240: From what I've seen, the gpt-neox channel seems to be mostly people asking questions and doing their own side projects; is there any one coordinated thing going on, or not so much? johnryan465#9922: As someone who is also pretty new, there is no over arching coordination, basically projects are quite ad hoc and what projects exists are based on what people are keen to work on. If you want to do something completely different as a project the rough outline is to try find some people who are interested, write some code and contact one of the level 5 people if you need access to compute johnryan465#9922: The contrastive project has some ideas which might be of interest to you in that channel dandelion4#3240: Ah thanks! That makes sense, I was just kind of wondering how everything was structured here. I feel like it'd be nice to start on something that's not "completely different" just to get a feeling for what's currently being done; I'll look at contrastive then! johnryan465#9922: The most active channel is #off-topic but enter at your own risk Louis#0144: If you wanna work on contrastive NLP we need people 😉 Louis#0144: We have a loooong term contrastive NLP project starting soon Louis#0144: Like 6-8 months Louis#0144: 50/50 chance it works Louis#0144: 😆 Louis#0144: We've been working towards it for the last 8 months already though dandelion4#3240: Sick, what's the project going to be? Louis#0144: It's pinned in #contrastive dandelion4#3240: Ah cool, is it Gyrados or CARP? johnryan465#9922: 2 different projects Gyrados is the new idea ethan caballero#6044: Which ones y'all heard about? : https://twitter.com/timnitGebru/status/1514751497283596288 chilli#5665: Same ones you mentioned chilli#5665: + cohere (although not sure it’s within a year) zphang#7252: do we know what noam's is yet
cfoster0#4356: Character.ai zphang#7252: ooh didn't know that was him zphang#7252: > conversational applications but why... EricHallahan#1051: This permutation of characters is making me cringe. 𓅬 gabriel_syme 𓅬#3220: I mean isn't conversational applications what most LMs can be immediately used in? 𓅬 gabriel_syme 𓅬#3220: I include assistants in that though so it's a bit more practical than just QA/Dialogue for me auro#1773: EleutherAI hits the limelight...keep up the good work, I just joined the community and this is my 1st post https://cdn.discordapp.com/attachments/729741769738158194/964397276141322320/The_big_tech_players_are_muscling_into_AI___Financial_Times.pdf ari#9020: Now if they'd only spelled the name correctly :goose10: auro#1773: good catch, sent a note for a fix but someone beat me to it Emad#9608: EleutheraAI :berk: Emad#9608: I’ll write an op-Ed for the FT to go with the year 2 Eleuther blog post Emad#9608: Tbh surprised not more. Per the AI index report private investment in AI almost doubled to $96bn last year 𓅬 gabriel_syme 𓅬#3220: Wonder how much of that is in the handful of big companies 𓅬 gabriel_syme 𓅬#3220: or are they not including those? zphang#7252: if you do my dad will probably clip it out and email it to me :berk: ethan caballero#6044: Google spends $20bn per year on ai research. Emad#9608: How where 𓅬 gabriel_syme 𓅬#3220: yeah I wonder if that is included in that amount or not 𓅬 gabriel_syme 𓅬#3220: My guess is new investment?
Emad#9608: https://cdn.discordapp.com/attachments/729741769738158194/964414218726158376/IMG_1737.png Emad#9608: Yeah it’s just investment in AI companies Emad#9608: Not by Emad#9608: Most are probably SEO optimisers or stuff makya#2148: 20 billion? Jesus makya#2148: Where do they get all that money. I want some lmao Emad#9608: Think DeepMind is $1bn a year Emad#9608: In that google pot makya#2148: Wonder what Openai spends 👀 Emad#9608: $100m a year I’d guess Emad#9608: Plus those azure credits 𓅬 gabriel_syme 𓅬#3220: I mean they literally control the gate to digital advertisement 𓅬 gabriel_syme 𓅬#3220: they could be printing money Emad#9608: Deepmind is 10x size Of OpenAI Emad#9608: Idk how big meta ai is makya#2148: How many people work at Openai? makya#2148: Hundreds-thousands I'm guessing. Emad#9608: Couple hundred Emad#9608: It’s pretty much impossible to compete with big AI groups using a conventional approach so these startups just focus on one thing even with $100m Emad#9608: Dalle2 team for example is 7 people
Emad#9608: Couple seniors ari#9020: I don't think I approve of the context where this article presents EleutherAI BTW 😅 It's more or less "in the past these models used to be shared freely, but the new big ones are terribly dangerous so researchers are not publishing them anymore" (never mind that they're trained by private companies that might just have a profit incentive) "but anyway here's some independent researchers barging ahead and releasing GPT-3 alternatives without caring about the risks" ethan caballero#6044: Here's the quote from Sundar at 2:30 : https://www.youtube.com/watch?v=A4ZdVB3xRgU&t=150 𓅬 gabriel_syme 𓅬#3220: nice seems like our book is going to happen, contract ready 𓅬 gabriel_syme 𓅬#3220: Impostor syndrome is going to shoot all time high lmao, writing a book zphang#7252: but now that you've written a book, people are going to assume you know what you're talking about zphang#7252: point is, imposter syndrome can always go higher Kia#2550: Goodluck Gabriel, And goodluck writing the sweet book! 𓅬 gabriel_syme 𓅬#3220: Yeah it's truly limitless lol 𓅬 gabriel_syme 𓅬#3220: oops I just realized I posted in general, my bad Stephen#8051: If you want to learn AI, as a beginner what do you think is the best way to start? Should I learn simpler AI models such as feed-forward models first before learning more advanced architectures like the transformer? kurumuz#5695: i have very little to none at this point kurumuz#5695: but idk how did that change kurumuz#5695: i guess just time Emad#9608: Do the huggingface NLP course for practical or fast.ai for a bit more indepth kurumuz#5695: ye learn the basics kurumuz#5695: you will not really understand transformers
kurumuz#5695: without even knowing feed forwards 𓅬 gabriel_syme 𓅬#3220: fast.ai new course starts in 11 days, should be out in a few months random person#5234: Depends on your end goal but yea those courses probably aint bad marmiteCloud#5923: it's highly use-case sensitive, but try sending logit_bias from the domain you are interested in dpaleka#9537: Hi everyone! How would one best use language models to **rewrite a source text** to avoid plagiarism? For example, https://nicholas.carlini.com/writing/2022/a-case-of-plagarism-in-machine-learning.html -- could the authors have preserved the exact meaning of the first example and avoided a plagiarism check, using models publicly available in 2022? I would be satisfied with anything that passes visual inspection on https://www.diffchecker.com/diff or a similar tool. I've read https://arxiv.org/abs/2201.07406 but it doesn't answer this question. dpaleka#9537: There are several paper results on "plagiarism GPT-3", but none of them focus on rewriting text. There is https://mf-rocket.de/aiauthor/ that claims this, but it's not something I can try out right now dpaleka#9537: There is https://quillbot.com/, but well... The transformed text still fails the diffchecker test, but more importantly it does not read like it belongs in a paper. In my experience GPT-3 produces text that feels much more natural, given a good prompt. There should be a way to make Instruct-GPT or something similar do this, right? Tinytitan#5596: are you trying to plagiarise something :thonk: dpaleka#9537: Just the paragraphs in Carlini's post 🙂 Edited for clarification: This is not a serious answer. I was a bit taken aback by the above commenter seemingly not taking my question in good faith, so a tongue-in-cheek response seemed appropriate. Tinytitan#5596: the easiest way is to just re write it manually StellaAthena#3530: This seems like an actively irresponsible use of technology dpaleka#9537: I think I might have been unclear; I am not writing anything at this moment nor do I want to plagiarize anything. I just want to know what is the state of the art regarding the effectivity of string-based plagiarism checkers when faced with automated rewriting (with publicly available models, otherwise the cost of manually rewriting is lower) dpaleka#9537: Similar arguments such as in the very cool https://arxiv.org/abs/2201.07406 should imply that there is a case for discussing this; and moreover I would be extremely surprised if this hasn't been discussed somewhere yet dpaleka#9537: I would need to readjust my priors a lot if indeed this thing hasn't been discussed for the sole reason that it could be used irresponsibly
EricHallahan#1051: I don't think anyone here is accusing you of wanting to plagiarize, but we are saying that answering the question and general advancement of capabilities in the task in this context doesn't seem like the most responsible or ethical thing to do. dpaleka#9537: @EricHallahan Thanks! But your answer is a bit orthogonal to the question of whether it's been done. I agree that there are significant ethical considerations. dpaleka#9537: Let me answer everyone here: @Tinytitan 1: No I'm not, sorry if my post made you think this way. The only text I ever entered in such tools, and I ever planned to enter, while trying such tools, are the paragraphs in the linked post. I have no use for paraphrasing these paragraphs to use them in an survey or something, I'm not writing anything on any of these topics. @Tinytitan 2: This is true, but does not help that much. @Stella Biderman: I agree. However the irresponsibility is in the person who plagiarizes. Now you might argue that even publicly discussing malicious uses of technology is irresponsible... This is a discussion that must have happened a thousand times in some form. But my impression, given recent language model papers, your plagiarism paper, and some security research on machine learning models in general, was that my question is a natural thing to ask in 2022. If you think I'm mistaken here, I would like to find out why. dpaleka#9537: To partly answer my original question: relevant papers include https://arxiv.org/abs/2103.12450 and https://arxiv.org/abs/2103.11909, from the same group. Neither paper is cited much, and curiously neither paper has an "ethical considerations" section. There are also several shady sites ( https://wordbot.io/, https://mf-rocket.de/aiauthor/) offering paraphrasing-as-a-service, backed by no description of how their service works. makya#2148: For the people who say discussing plagarism is wrong, it's not, discussing crime is definitely not illegal. So certainly discussing how to avoid a crime where it could happen is certainly not illegal. EricHallahan#1051: > For the people who say discussing plagiarism is wrong, it's not In the context we are discussing here (western, English-speaking world), plagiarism is 100% unethical. https://en.wikipedia.org/wiki/Plagiarism#In_academia_and_journalism cfoster0#4356: That's not what they were saying Edit: plagiarism is wrong =/= discussing plagiarism is wrong cfoster0#4356: I do think coming here asking (paraphrased) "theoretically, what would be the best way to plagiarize a paper, without getting caught, using only available software" is bound to get some eyebrow raises EricHallahan#1051: Fair, my reading of the statement may not be the intention. dpaleka#9537: I understand, but given real names in a serious research-oriented Discord server... /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: Sometimes life can be messy, but not when you have a cup of tea and https://retractionwatch.com/
cfoster0#4356: Finish the thought... 🤔 dpaleka#9537: I don't know, I'd give someone using a real name the benefit of the doubt, given that it seems like a dumb thing for a malicious actor to do. Maybe I'm gullible 🙃 dpaleka#9537: Let's assume this is true. Wouldn't a paper discussing this be useful for the research community? I think the situation where "it can be done, but honest people don't discuss it, nor the possible consequences" is not optimal. dpaleka#9537: The useful thing would actually be a defense mechanism (either on the plagiarism checking side or on the GPT-3 abuse prevention side), but there is precedent on various security-related areas (incl. adv examples, privacy) on "attack" papers being acceptable. makya#2148: That's exactly what I'm saying.... but asking how to plagiarize, that's wrong and a different matter. random person#5234: there is a lot of papers on adversarisal attack in computer vision random person#5234: phrasing the problem in that format would get a lot more acceptance random person#5234: I mean basically to grossly oversimplify, this is deepfake detection but for NLP random person#5234: I think people were rightfully suspicious because the way you phrase the question dpaleka#9537: Cool, I hope that particular issue has been resolved now, if anyone has any qualms feel free to ask. dpaleka#9537: Thanks for the suggestion, let me try to rephrase dpaleka#9537: My questions: 1) does a paper with an abstract resembling the one below exist ; 2) if not, is it because: a) publishing such a paper is irresponsible, b) the technology is not there yet, c) it's just too incremental and no one bothered to write it up? d, ...) add your answers Abstract: Both string-preserving and semantics-preserving plagiarism are considered serious misconducts in the research community. Standard algorithmic tools detect string-preserving plagiarism well. Semantics-preserving plagiarism is hard to catch with plagiarism detectors, but is also much more time-consuming to commit manually. Recent large language models have achieved strong results on generative text modeling given short prompts. We show that the time commitment gap between string-preserving and semantics-preserving plagiarism can be significantly reduced by prompting language models to rewrite paragraphs. This enables malicious actors to commit hard-to-detect plagiarism much more efficiently, which raises concerns about the usefulness of current plagiarism detection tools. (Optional: Inspired by adversarial example / deepfake detection research, we propose a new metric for plagiarism detection, which should catch semantics-preserving plagiarism committed by prompting GPT-3.) Monopton#6214: what are the rules about how art from #the-faraday-cage-archive can be used? Does it only belong to the creator of BATbot, can it be used by the person who made the prompt, or can it be used freely by anyone? Or is there something that I am missing? zphang#7252: does tensor-parallel typically imply Zero-1 as well?
if you're sharding parameters, there'd be no reason to not keep the corresponding optimizer states with the parameters on the same device only, right? BoneAmputee#8363: I don't care what you do with outputs from the faraday cage nor am I interested in claiming exclusive ownership, but keep in mind that all inputs and outputs will likely be scraped by a number of people for dataset purposes. also it's not uncommon for folks in there to directly rip prompts from others and make variations. it's a pretty free spread of ideas in there Monopton#6214: alright thanks valar#2262: Bot is so cool (than you for high quality CLIP VQGAN + free GPU cycles 🥰) valar#2262: *thank!! wakest#8834: anyone have the link to the CompVis model? Kia#2550: Checkpoints? Kia#2550: @wakest https://github.com/CompVis/latent-diffusion faraday#0862: how do they scrape it though? I think we can’t get Discord API read access here, or can we? what’s the alternative? tammy#1111: does <https://github.com/Tyrrrz/DiscordChatExporter/> answer your question EricHallahan#1051: If we find anyone using something like this without prior written approval, they are going to be banned, fyi tammy#1111: sure, although there's pretty much no way to know (feel free to delete this message and my previous one if you want btw) EricHallahan#1051: We publicly state that this server is not a place to be scraped in our FAQ. https://www.eleuther.ai/faq/ EricHallahan#1051: Indeed. tammy#1111: i'm not actually finding that in the FAQ tammy#1111: "Have you considered adding Discord logs?" is not quite about what we're talking about here nshepperd#2316: could be more direct about "scraping is not allowed" yeah faraday#0862: is #the-faraday-cage-archive under a different status? if you get the approval of @BoneAmputee is that enough? what’s the process for getting approval from Eleuther on things? should you seek to reach every single member? EricHallahan#1051: I think most people here would assume privacy by default in addition to Discord's ToS.
EricHallahan#1051: We shouldn't need to be too explicit. EricHallahan#1051: Unless you are scraping your own messages, you probably shouldn't be doing so. faraday#0862: discord search does not provide accessibility nshepperd#2316: sure, is just that "we decided against it" makes the position sound weaker than it is nshepperd#2316: i guess EricHallahan#1051: Maybe I am being too harsh in my reading of our current policy, but it has always been the opinion of moderation that scraping without user consent is pretty hard to justify under ToS when it comes to model training. faraday#0862: we’re talking about explicitly (through written consent) free-use content in a specific channnel. I wonder how that fits the picture EricHallahan#1051: I am unfortunately not the one to ask on that topic. EricHallahan#1051: I feel like I am just marking the situation more confusing, so I am just going to withdraw from this conversation for now and go to sleep. chirp#4545: https://www.nytimes.com/2022/04/15/magazine/ai-language.html chirp#4545: They mention EAI too! makya#2148: We have known this day will come. It can write stories and articles that look true but actually are false or make no sense. And there are some stories where humans can't even tell its been written by an Ai model. makya#2148: ✍ makya#2148: Mind boggling isn't it, Ai that can write stories and know complex scientific facts but can't do basic math lol. And where common sense is low. makya#2148: But it's improving, for better or worse 👀😎 MicPie#9427: I would think so too ari#9020: I spend so much time lurking the EAI/LW/AF-sphere that it's almost refreshing to read these takes about how the problem is that AIs reflect the values of Silicon Valley and what you need is democratically governed regulation :goose10: asara#0001: 'almost' refreshing bmk#1476: that's a weird way to spell :yudscream: marmiteCloud#5923: I saw Aleph-Alpha made the choice never to store prompts/outputs, does anyone else find that quite interesting? I understand they do the whole EU-based-EU-data thing but that's quite open-ended for any kind of forensics...
Deleted User#0000: make them learn Navajo Deleted User#0000: then let's see how the ai interprets it Ursium#8766: Quick question: does EleutherAI a commercial organization or does it have plans to become one, even indirectly (through CoreWeavers for example). Reason I ask: I think MLaaS is the beginning of the end as who controls the models will control the fake news. Eleuther gives us the means to deploy our own counter-operations or simply stay competitive where soon we'll have the AI have and the AI have-nots. Just a thought. Thank you. And yes "I'm that guy from Ethereum". 🙂 bmk#1476: there is no plan to become a commerical organization, and our interests in AI risks tends towards x-risk considerations Ursium#8766: Thank you @bmk Robert1#0234: I notice when using TailFreeSampling logit processor it significantly slows down the responses. From like 1.97s mean to like 3.68s mean in my case. Anyone know where I can get a quick, quality implementation of TFS for transformers library. Robert1#0234: and how much of a difference does TFS typically make to quality? Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/964863583651520582/IMG_5973.png Louis#0144: @alstroemeria313 do you have any ideas Louis#0144: cc @sweg alstroemeria313#1694: yes alstroemeria313#1694: let me get the paper Louis#0144: okie alstroemeria313#1694: https://mathweb.ucsd.edu/~sbuss/ResearchWeb/spheremean/paper.pdf Louis#0144: Is there a python implementation you would recommend alstroemeria313#1694: basically the frechet mean on the sphere w/ the spherical distance metric https://en.wikipedia.org/wiki/Fréchet_mean alstroemeria313#1694: try this https://gist.github.com/crowsonkb/08580a64b52cacf80712b2ffb99ca98a alstroemeria313#1694: it uses <https://github.com/geoopt/geoopt> alstroemeria313#1694: there is not actually a closed form solution for the fréchet mean on the n-sphere for more than two points alstroemeria313#1694: you need an iterative algorithm
alstroemeria313#1694: the paper gives two alstroemeria313#1694: the first converges linearly and the second converges quadratically, this is (a modified version of) the first one alstroemeria313#1694: (it's simpler) alstroemeria313#1694: https://discord.com/channels/729741769192767510/730484623028519072/835953921222246420 a different version of it that is actually the same as the one from the paper eheh alstroemeria313#1694: (I had modified it somewhat to require less iterations on average) alstroemeria313#1694: do you need to backprop through this btw? Louis#0144: No alstroemeria313#1694: kk alstroemeria313#1694: with some of mine you can alstroemeria313#1694: it is for combining CLIP embeddings ofc Louis#0144: Kat the geometer Louis#0144: Yeah Louis#0144: Did you realize how much geometry you'd be doing going into GANs Louis#0144: lol alstroemeria313#1694: A quick and dirty inaccurate fast way is to take the Euclidean mean of the points and then normalize it back to the n-sphere alstroemeria313#1694: This does not actually produce the correct mean alstroemeria313#1694: But it is in the area if the points are not too spread out, these iterative algorithms use it as their initial guess. alstroemeria313#1694: Eheh~ StellaAthena#3530: As long as the points aren’t anti-podal, two vectors define a plane. Can’t you then let $\theta$ be the angle between the two vectors in that plane and do $$\gamma(p_1,p_2,t)=\frac{\sin((1-t)\theta)}{\sin(\theta)}p_1 + \frac{\sin(t\theta)}{\sin(\theta)}p_2$$ alstroemeria313#1694: is that slerp?
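The formula above is indeed slerp (as Stella confirms just below), and the quick-and-dirty alternative mentioned here is the renormalized Euclidean mean. A minimal NumPy sketch of both, purely illustrative; the true Fréchet mean for many points still needs an iterative solver such as the geoopt-based gist linked above:

```python
import numpy as np

def slerp(p1, p2, t):
    """Spherical linear interpolation between two unit vectors (any dimension)."""
    theta = np.arccos(np.clip(np.dot(p1, p2), -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return p1  # (nearly) identical points; undefined for antipodal ones
    return (np.sin((1 - t) * theta) * p1 + np.sin(t * theta) * p2) / np.sin(theta)

def approx_spherical_mean(points):
    """Euclidean mean renormalized to the sphere: fast, but only close to the
    true Frechet mean when the points are tightly clustered."""
    m = np.asarray(points).mean(axis=0)
    return m / np.linalg.norm(m)
```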
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/964869942749044756/193204646687408129.png Louis#0144: Looks like it? Louis#0144: But we're averaging hundreds of vectors alstroemeria313#1694: there's a closed form formula for two vectors but all the headachy stuff starts when you have more than two StellaAthena#3530: Ah I missed that bit StellaAthena#3530: Google tells me yes, but I derived it by hand alstroemeria313#1694: Ah *nods* alstroemeria313#1694: :) StellaAthena#3530: For hundreds of vectors, can you do it pairwise and iterate? It should only take $\log_2(n)$ computations no? TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/964870648000294992/193204646687408129.png alstroemeria313#1694: i thought slerp wasn't associative StellaAthena#3530: Ugh it might not be in R^d for d > 4 :/ StellaAthena#3530: Fuck spheres man random person#5234: Wait, how would a pairwise comparison between vectors be of log(n) johnryan465#9922: You can combine them pairwise johnryan465#9922: Log(n) time assuming unlimited amount of parallelism random person#5234: Oh I see faraday#0862: does anyone know how OpenAI regards intellectual property rights wrt. DALL-E (or GPT-3) outputs? tldr; do current users of DALL-E 2 have the right to use the outputs in their daily life? or are they just "allowed to see it" and nothing else 🙂 EricHallahan#1051: Consult the documentation.
faraday#0862: I did... "(c) Copyright. OpenAI will not assert copyright over Content generated by the API for you or your end users." but.. that leaves a lot of room still. thanks for the suggestion though faraday#0862: to be more specific, if OpenAI does not assert copyright, it seems *de facto* that prompt owner is regarded as a tool user *and* owns the right. However that's a massive injustice to the rest of the world: early access means lots of copyright for simpler or best working forms of prompt. faraday#0862: I hope there's a lot of visual difference between seeds alstroemeria313#1694: there should be alstroemeria313#1694: it's diffusion alstroemeria313#1694: also we have seen people run the same prompt multiple times eheh alstroemeria313#1694: and it was in fact very visually different EricHallahan#1051: It *is* explicitly prohibited to directly profit off of it I believe. faraday#0862: how does copyright work for images? "contours differ" is enough or a judge decides the similarity of a look-alike vs your work ? it's mind-boggling after a point faraday#0862: but yeah, I got the idea. it's not something to worry about at this point with randomness in diffusion 👍 /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: Does anyone know of a good resource which has curated techniques for prompt engineering? /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: https://media.discordapp.net/attachments/838682121975234571/964896734557904948/1650119687_prompt-engineering-for-dummies.png /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: https://generative.ink/posts/methods-of-prompt-programming/ tpapp157#3643: Copyright refers to a specific artistic work and the right to copy and distribute that specific work. Typically, a copyright holder will provide a license to users which states what uses are and aren't permissible. There are also a wide array of fair use exceptions which override any copyright restrictions. In terms of intentionally creating similar (but not duplicate) artistic work, this is of course a grey array and different media have built up varying degrees of court precedent on what criteria and thresholds of similarity should be considered. You can look into prior court rulings in areas like photography and digital art to get a better idea for this. This is a completely separate legal concept from trademark, though, and often people tend to get copyright and trademark jumbled together. tpapp157#3643: A big question right now, with current art generation NN models is if they even count as copyrightable artistic works. To qualify for copyright, a work requires artistic input and intention. For example, the image generated from pressing a button to automatically generate a bunch of semi-random pixels would not qualify for copyright protection. There's an argument that if you're using a pre-trained generative model (one you didn't train yourself), and your prompt input to that model is very limited, that may not count as sufficient artistic intent to qualify for copyright protection. tpapp157#3643: Probably the easiest way to guarantee copyright protection for your AI generated art is to make intentional artistic edits to the work after it's generated. Louis#0144: https://discord.com/channels/729741769192767510/896446889115938837/964862192883204106 Anyone interested in a discussion panel on project gyarados? We can go over value alignment and IRL stuff Louis#0144: I dont mind the discussions turning a bit alignment heavy Louis#0144: yes? no? Louis#0144: that isnt rhetorical
Louis#0144: 😆
Deleted User#0000: hello peps
MicPie#9427: thank you for sharing! did you at some point compare the difference of the naive mean with the n-sphere method for CLIP embeddings, to have a feeling for the range? @Louis I'm curious about the differences with this method for the CARP application.
Louis#0144: We're using it to compute pseudo labels for carp
Unjay#9908: bigger problem is the data used for training the model I'm 99% sure this is the reason why OpenAI doesn't want to put a license on it, if they just scraped the web
Louis#0144: You're in that group chat
Louis#0144: lol
MicPie#9427: yeah, I thought so, but wasn't sure if there are already some results there I'm following the group chat
EricHallahan#1051: This paper is really useful.
MicPie#9427: you mean the one from above? searching for error or difference does not yield something for that, but maybe I use the wrong keyword
alstroemeria313#1694: the lower the norm of the Euclidean mean the worse the estimate of the spherical mean approximation you get by normalizing it is
Louis#0144: I can tell you rn that naive averaging doesn't rly work for carp embeddings
MicPie#9427: could a sequential pair-wise mean with re-norm to 1 be a good approx for this 🤔
EricHallahan#1051: Not really afaik, because rotations aren't commutative.
1a3orn#6547: are there any papers on why the ff in transformers expands by 4, or which explore tradeoffs around alternative expansion sizes, or which talk about what's going on there with theory? I've been searching through arxiv and I just cannot find the right terms, or the papers aren't there.
CRG#8707: See the first scaling laws paper
CRG#8707: https://arxiv.org/abs/2001.08361
EricHallahan#1051: *Scaling Laws for Neural Language Models* calls this “Feed-Forward Ratio” in Figure 5
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/964971589953265664/unknown.png
1a3orn#6547: ahh thanks
CRG#8707: In my experience, you can freely trade off depth and d_ff/attention dim (but holding d_model constant) for equivalent performance. This breaks down with too few layers, since there's usually a "minimum depth" necessary for good performance.
CRG#8707: It's a bit like increasing the batch size while decreasing the serial steps.
faraday#0862: was Chinchilla the first to point out the (critical) importance of the data dimension in the Scaling Hypothesis, or was this already well-known but DeepMind proved it with Chinchilla? Is it fair to assume the shape of the dependence wasn't known well and DeepMind introduced a paradigm shift?
1a3orn#6547: hrm, thanks, that's interesting. I think that makes more sense to me than the expansion being necessary, but I'll probably run some more experiments to explore. previously, it's just that the expansion has felt weirdly arbitrary / magical
johnryan465#9922: What would be a decent way of testing an attention mechanism (for encoder only atm) that wouldn't take too long but would be reasonably representative? Am currently using WikiText-2
1a3orn#6547: The original OAI scaling paper does say you need to scale up data, so it's not like data was entirely ignored, but the parameter-size / data ratio is entirely different. None of the big ( > 200 billion) parameter models anyone was training before Chinchilla are a maximally efficient use of compute if you think the Chinchilla paper scaling laws are true, so either (1) the people training these models didn't know about the new scaling laws or thought they were false or (2) they were training enormous models for the prestige of training the biggest model, _knowing_ they were using compute suboptimally. of these (1) seeeeems by far the most plausible.
cfoster0#4356: It was known that data scaling was important (a bunch of existing scaling laws papers are about loss wrt samples/iterations), but not necessarily that model scaling should move in lockstep with
cfoster0#4356: *with data scaling
zphang#7252: To me, I never thought it was that binary. The scaling laws are an empirical finding. They are subject to all sorts of variations across settings and underlying configurations, and while researchers make some effort to explore optimal combinations, it is exorbitantly expensive to exhaustively search them all. So I think it was less that people "knew" their setting was optimal, but rather that in previous experiments a given setting was optimal, and in the absence of evidence showing otherwise they used that as their guidance. As with most things in research/discoveries, it's less that "no one thought of it" and more that "people thought it's plausible but no one's spending the time/effort/resources to answer that, because they have other things they are looking at"
johnryan465#9922: Seems like wikitext-103 is more active so am switching to that
faraday#0862: how neat is the fact that both the heaviest suboptimal use of compute (with PaLM) and the proof of suboptimality come from Google, well... Google again, with DeepMind. it's like they're running circles around all others (in terms of exploration)
Louis#0144: nah
Louis#0144: megatron 530b was rly bad
Louis#0144: lol
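For a rough sense of what changed with Chinchilla: using the common approximation C ≈ 6·N·D training FLOPs, the paper finds parameters and tokens should be scaled together, which works out to roughly 20 tokens per parameter. A back-of-the-envelope sketch (the fitted exponents and constants in the paper differ slightly, so treat the numbers as order-of-magnitude only):

```python
def chinchilla_optimal(compute_flops, tokens_per_param=20):
    """Rough compute-optimal split assuming C ~ 6*N*D and ~20 tokens/param."""
    n_params = (compute_flops / (6 * tokens_per_param)) ** 0.5
    return n_params, tokens_per_param * n_params

# Chinchilla's own budget (~5.8e23 FLOPs) lands near 70B params / 1.4T tokens,
# i.e. far more data per parameter than GPT-3 or Megatron-530B were trained with.
print(chinchilla_optimal(5.8e23))
```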
Louis#0144: but nvidia didnt want a functioning language model Louis#0144: they wanted it to sell GPUs ilovescience#3282: DeepMind is a separate entity faraday#0862: under Alphabet, right? separately run. but effectively serving Alphabet masterplan Sphinx#2092: "masterplan" faraday#0862: I'm not saying it's a bad thing if your masterplan is just cutting edge exploration ilovescience#3282: yeah i doubt there's a master plan for all of Alphabet and even if there is, i doubt "optimal use of computer for training LLMs" fits in there Sphinx#2092: I'll use that next time my people what my five year plan is. "Cutting edge exploration" faraday#0862: I won't pretend that Alphabet will become AGI itself and declare independence from shareholders EricHallahan#1051: The Great ~~Ferrari~~Alphabet Masterplan™️ zphang#7252: Deep Mind's grand plan is to create a Deep Thought-like AGI, and then ask it how to leave Alphabet and become independent faraday#0862: maybe Alphabet endgame is just making waifus 🤷‍♂️ Louis#0144: Goosegirls Louis#0144: That our master plan Louis#0144: All roads lead to goosegirls Louis#0144: Gyarados? For goose girls Louis#0144: Neox? U bet ur goose ass nz#9710: you're goosing out of your mind louis Caelum#8192: could we get a less degenerate flushed goose emote? tammy#1111: maybe this discord should have a philosophy channel ?
tammy#1111: for discussions of values etc tammy#1111: a more general place for broad conceptual discussions, other than alignment channels AI_WAIFU#2844: that's #off-topic tammy#1111: just an idea. tammy#1111: off topic is mostly goose memes tammy#1111: i've not had too much success having serious convos there tammy#1111: sometimes i get booted from elsewhere into #off-topic and then find myself unable to start because the channel is already being used to post geese AI_WAIFU#2844: hmm ilovescience#3282: lol tammy#1111: "the philosophy channel and the shitposting channel are the same place" tammy#1111: sounds like maybe not the best plan tammy#1111: but maybe this place doesn't need a philosophy channel, i dunno Caelum#8192: maybe if we had threads in off-topic we could not be enveloped by geese Louis#0144: :goose16: Louis#0144: Counter argument 𓅬 gabriel_syme 𓅬#3220: Most artists do this from what I see. Although I feel they also do it because it is simply the artistic process to iteratively mold an idea into something. Win win I guess tammy#1111: i guess #alignment-general works for alignment-related philosophy zphang#7252: anyone know how much *cpu* ram a v2-8 TPU VM has? kindiana#1016: About 300 GB Louis#0144: 300 goose bites
Spacecraft1013#5969: sounds painful thenightocean#6100: maybe we need a separate goose channel instead? thenightocean#6100: and ban posting goose memes anywhere else :goose: makya#2148: I'm up for that. makya#2148: No more Geese tammy#1111: very much agreed xloem#0717: making sure this paper on optimal model size for given training data has reached you guys, sure it has but just in case: https://twitter.com/rasbt/status/1515337127336292358 nz#9710: It has indeed -- it's a really significant result from DM Corran#8565: What if OpenAI sold access to individual image prompts as NFT Corran#8565: a free horror movie idea for you there nz#9710: @Louis you mentioned before Electra-like embeddings have serious issues -- is that really the case? Mind sharing more about it? louis030195#2462: What are the criteria to get my discord share in “communities” channel? Louis#0144: Omg another louis Louis#0144: Connor is going to have an aneurysm generic#8192: random q: when training it's common to try to pack multiple sequences into the same input block separated by `<|endoftext|>` (or some other special token) for efficiency/utilization. is there any reason sequences packed this way have to be complete? or is it fine to have something like ``` |[seq1][seq2 start ... | |... end of seq2][seq3]| ``` generic#8192: probably this has something to do with positional encodings which I need to understand better
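In the usual GPT-style setup the answer is that packed sequences do not need to be complete: documents are tokenized, joined with the end-of-text token, and the stream is simply chopped into fixed-size blocks, so a document can start in one block and end in the next. Position indices just run over the block; the EOT token plus causal attention is what marks the boundary, at least in the simplest setup. A minimal sketch with hypothetical helper names, not any particular codebase's implementation:

```python
def pack(docs, tokenize, eot_id, block_size):
    """Concatenate tokenized docs with an EOT separator and chop into blocks."""
    stream = []
    for doc in docs:
        stream.extend(tokenize(doc))
        stream.append(eot_id)
    n_blocks = len(stream) // block_size  # drop the trailing partial block
    return [stream[i * block_size:(i + 1) * block_size] for i in range(n_blocks)]
```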
Louis#0144: where would one get a LOT of examples of characters written with certain DnD alignments Louis#0144: oh u know what Louis#0144: @Daj you might be the person to ask Louis#0144: lol Louis#0144: I like like 10k examples Louis#0144: with not shit class balances Louis#0144: I tried getting 20b to generate them but it didnt generate a single chaotic neutral character 😆 dmayhem93#3202: do you need like character sheets or character descriptions? dmayhem93#3202: play by post sites may have enough character descriptions to get you going, e.g. https://www.roleplayerguild.com/forums/34-character-sheets Louis#0144: Descriptions Louis#0144: Ooo Louis#0144: @Ambient Ambient#0001: descriptions or quotes Ambient#0001: @Louis issue is the other character quotes/descriptions mixed in here Ambient#0001: could source premade character sheets or monster descriptions from pre-5e (having alignment tags) Ambient#0001: was my other idea Ambient#0001: brb tammy#1111: is there money for art about/allogorical for rationality/alignment (i'm thinking notably of video games) izzy 👹#6687: That's an awesome idea
tammy#1111: maybe tammy#1111: maybe it's not a great use of an alignment-minded person's time tammy#1111: seems like a hard thing to measure Louis#0144: https://twitter.com/lcastricato/status/1515772788866523147?s=21&t=UWIb6Meu51AZZtmr2Bh-1A rt for vis izzy 👹#6687: Orgs need PR people, managers, etc. I think opening more opportunities for people outside of academia and engineering is helpful tammy#1111: we need PR if widespread awareness of alignment issues is helpful tammy#1111: there's been good arguments against that faraday#0862: why would DALL-E 2 perform bad at faces? https://medium.com/@nin_artificial/dall-e-2-vs-disco-diffusion-c6de6bfbacf9 disco diffusion seems winning at faces here. (on another thought, maybe I'm wrong about this and it fails case by case) Veedrac#0443: “why would DALL-E 2 perform bad at faces?” → filtering 𓅬 gabriel_syme 𓅬#3220: I think quality comparisons are very dubious considering the type of access we have for DALLE izzy 👹#6687: Interesting. Can you point me to something on this? Couldn't find anything tammy#1111: not posts per se, but some convos i've had; though i'd imagine there are lesswrong posts about it ? tammy#1111: one worry is the politicization of the issue, with the example of public distrust of nuclear power as a pretty bad failure mode izzy 👹#6687: Ohh haha that's one of the points I've settled on as high leverage
izzy 👹#6687: Automaton Bill of Rights before it ever wakes up izzy 👹#6687: Signals human friendliness tammy#1111: i don't understand what you're saying there izzy 👹#6687: If AGI woke up, and it saw that humans had been working on treating it fairly it would be like the first move in tit for tat izzy 👹#6687: Start with a cooperative signal, and adjust accordingly tammy#1111: i don't think an AGI would particularly care if our states or cultures have particular respect for it tammy#1111: that'd only matter if it need our help or approval to clip us, which it doesn't izzy 👹#6687: A rational agent would figure out immediately that the best strategy is cooperation izzy 👹#6687: I only see danger in trying to contain it or use force against it izzy 👹#6687: But if it didn't really matter what steps we take, is what you're saying that we shouldn't work toward AGI? tammy#1111: i don't actually think the best strategy is cooperation tammy#1111: at the very most, deception disguised as that tammy#1111: oh, absolutely Caelum#8192: we don't cooperate with livestock Caelum#8192: and their cuteness hijacks our caring instincts tammy#1111: ("we don't cooperate with grass" might even be more accurate) izzy 👹#6687: Multiple ai will probably wake up around the same time izzy 👹#6687: Some might become enemies izzy 👹#6687: AI + human vs ai is the choice tammy#1111: i don't think so but even if that were the case, when two countries go at war with each other, the grass or livestock doesn't particularly get a chance to fare better
tammy#1111: if anything third parties just take stray bullets izzy 👹#6687: I'm sort of extrapolating from research on ai/human team performance tammy#1111: "siding" with us probly is just more of a liability than a help tammy#1111: if of two countries at war, one pledges to side with grass, they'll either fare just as well or be disadvantaged because they spent resources caring for grass izzy 👹#6687: I see where you're coming from izzy 👹#6687: You've thought about it a lot. What do you see is the best path to avoid ai catastrophe? tammy#1111: the best *plausible* path ? hard to say, there are no routes that seem easy enough to do. probly investigate a bunch of stuff, including wacky ideas such as <https://carado.moe/the-peerless.html>; but also have the main alignment research keep pushing through Tinytitan#5596: Yes, centaurs definitly popular in chess tammy#1111: the best *possible* path ? easy: stop all AI capability development and probly even AI usage, until we've figured out alignment izzy 👹#6687: Lol that Yudkowsky post izzy 👹#6687: Talk about being on one izzy 👹#6687: Your sim world idea is cool, but those sim agents would probably suffer, and it sounds like they might have the capability to suffer exponentially izzy 👹#6687: The show Severance is sort of that scenario izzy 👹#6687: That post is almost begging for the political class to be involved izzy 👹#6687: The methods of politics are focused on outcome, not radical transparency... A siren of fatalism is a failure of leadership tammy#1111: we don't really need transparency tammy#1111: right now we need even a mild chance at survival tammy#1111: the rest is far from prioritous izzy 👹#6687: Yeah, I like your focus izzy 👹#6687: That Yudkowsky thing bummed me out, so I imagine it probably did so for a lot of other people, is what I meant by the fatalism bit
tammy#1111: which yudkowsky thing ? izzy 👹#6687: The one you linked, about death with dignity tammy#1111: oh that tammy#1111: yeah, i would recommend against being emotionally fatalistic, <https://mindingourway.com/detach-the-grim-o-meter/> tammy#1111: but there is indeed cause to be quite rationally fatalistic izzy 👹#6687: Hey that helped a lot haha thanks tammy#1111: happy to help :hap: ac#1874: > I also personally recommend a healthy dose of dark humor. Everybody's dying, after all. ac#1874: Anyways, this might be a really silly idea, but I was playing around with the MineRL environment and I was thinking, would Minecraft be a useful environment to do applied alignment research in? ac#1874: eg try and get an AI to like, cooperate with a human in survival or something lol alstroemeria313#1694: @Drexler tammy#1111: if you had an AI about which you thought "this would be a bit scary to unleash into the real world, but in minecraft it's probly safe" my first reaction would be "i'd rather you not run it at all" Drexler#4006: Minetest IMO tammy#1111: minecraft is far from the platonic sandboxable environment imo Drexler#4006: Also this tbh Drexler#4006: But like Drexler#4006: If you want to do that general setup, Minetest is vastly preferable because it's open source. Drexler#4006: Which means it can be more deeply modded for an RL training setup. ac#1874: Oh cool, I hadn't heard about it before, thanks 👍
Drexler#4006: More easily, more sustainably, etc. Drexler#4006: If upstream breaks your code you can just fork it. tammy#1111: i'd probly go for something like a simple cellular automaton tammy#1111: or something equally simple Drexler#4006: From a safety standpoint it's written in C++, which is 👀 , but on the other hand um, you really shouldn't be trying to 'box' a dangerous AI into minecraft. That will just go badly. ac#1874: yeah, I definitely agree with this not being an actual approach to solving alignment XD Drexler#4006: This is more useful for like, testing the "get an agent to assist a player in minecraft" style alignment research program. Drexler#4006: See https://arstechnica.com/information-technology/2021/12/minecraft-and-other-apps-face-serious-threat-from-new-code-execution-bug/ /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: Many of researchers build toy environments to test out their ideas, e.g. https://youtu.be/XS68jVoUxL8?list=PLBiONvzybeQ_PaYL_pmhl1_f5gEzcts2z&t=756 ac#1874: ah log4j, rip Drexler#4006: If there is even one bug like this still in Minecraft (lets be real, there almost certainly is) you're fucked lol. Drexler#4006: Keep in mind the AI can try things that wouldn't even be humanly comprehensible. ac#1874: right -- i guess if we can't even satisfyingly solve gridworlds yet, minecraft seems like a bit of a stretch ac#1874: yeah totally Drexler#4006: Or entire new approaches to attacking the sandbox. ac#1874: I was thinking more along the lines of, e.g. could we implement factored cognition* in minecraft? EDIT: lol i think i got a couple approaches mixed up -- i meant something similar to this https://ought.org/research/factored-cognition AI_WAIFU#2844: Yes that would be very nice, and like JDP has said minetest is a more appropriate environment. AI_WAIFU#2844: But honestly someone should implenent this, and **not** the way MineRL basalt did AI_WAIFU#2844: I want real time interaction with an agent
AI_WAIFU#2844: via a server AI_WAIFU#2844: with human or agent clients ac#1874: Yeah this seems like an engineering task that could potentially be decently valuable 𓅬 gabriel_syme 𓅬#3220: I agree, I think it would be. I'm also trying to convince myself and colleagues that design environments can also be interesting alignment research 'arenas'. We don't have something like that yet, but we could build something like it. I'll try and get some backing for that idea once I start at work. When I mentioned it in a meeting, it seemed to be something people liked. I just think we need a much larger variety of environments of experience 𓅬 gabriel_syme 𓅬#3220: Those few lines describe pretty nicely the environment I have in mind. Behavioral dimensions are super interesting for design and operation of buildings/cities, could be a nice sandbox. And I'm playing around with the idea for agents (artificial or otherwise) on both sides, agents using the environment and agents designing it 𓅬 gabriel_syme 𓅬#3220: At one point I thought Unity simulation might be something, never went deep into it though. Soon Corran#8565: Does anyone have any tricks for improving the speed of python package installs in conda on colab? It seems very slow. circuit10#0158: Minecraft isn’t too far from it circuit10#0158: They do go out if their way to make it easier to reconstruct the source code for modding Caelum#8192: I think something like an alignment sims for illustration purposes could be really useful johnryan465#9922: There is an alternative called mamba which is faster in my experience https://github.com/mamba-org/mamba Corran#8565: Ah thanks, Ill have a look! Emad#9608: I don’t get why Elicit is closed source. Should ask them, they are a charity StellaAthena#3530: They are? I thought they were commercial. They act like they're commercial. Emad#9608: https://cdn.discordapp.com/attachments/729741769738158194/965636761101414400/IMG_1756.png Emad#9608: :BlobShrug: tpapp157#3643: non-profit =/= charity Emad#9608: True nz#9710: maybe they plan to go the :openai: way Emad#9608: man I can't see the OpenAI logo without thinking tentacles
Emad#9608: baby elder god incoming Emad#9608: :Stability: jcjc#6291: Hi, we are playing with the gpt-neox codebase (thanks for the excellent codebase btw), and we are a bit confused about the model_parallel_size option in the config yml file. Specifically, when we change the model-parallel-size from 1 (default) to 2, the per GPU consumed memory reduces from 24GB to 20GB with a higher LM ppl. We are curious about 1) why the memory usage is lower? and 2) does model_parallel_size have an impact on the performance(ppl)? Thanks in advance! The configuration is listed below: Config: small.yml GPU: 2 x RTX8000 Data: openwebtext2 parzival#5010: should establish precedence for web-scale datasets too: https://techcrunch.com/2022/04/18/web-scraping-legal-court/ Kia#2550: Time to scrape... Pixvi:ultraberk: Louis#0144: @RyanT 😉 Kia#2550: @spirit-from-germany @rom1504 rom1504#5008: legal is good, but ethic is better. I think we're ok on the ethic side, not sure I can say the same for linkedin scrapping rom1504#5008: still good to know there is no problem with legality of web scrapping tpapp157#3643: Eh. The legality of web scraping in the US had already been established previously. The only thing this really means is that more services will go behind account/pay walls. Louis#0144: how do you generate a random n dimensional rotation matrix in python Louis#0144: scipy rotation random is only 3 dim Louis#0144: who tf uses 3 dimensions Louis#0144: losers, thats who Louis#0144: why is this like not a standard function
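scipy does cover the n-dimensional case, just not under `scipy.spatial.transform.Rotation`; the usual DIY route is a QR decomposition of a Gaussian matrix. A quick sketch:

```python
import numpy as np
from scipy.stats import special_ortho_group

R = special_ortho_group.rvs(dim=512)  # Haar-uniform rotation in SO(512)

def random_rotation(n, seed=None):
    """Same thing by hand: QR of a Gaussian matrix with a sign fix."""
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    q *= np.sign(np.diag(r))      # make the factorization unique (Haar over O(n))
    if np.linalg.det(q) < 0:      # flip one column to land in SO(n)
        q[:, 0] *= -1
    return q
```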
Louis#0144: i dont understand Louis#0144: nvm im an idiot :^) &.#0001: Is anyone aware of a tool that lets me see how much VRAM the processes on my computer use? Like top or task manager, but for vram ari#9020: `nvidia-smi` polymesh#2287: might be harder for the AI to learn things in a minecraft environment Olen#3584: a asparagui#6391: above + nvtop Braxton#1343: Anyone else having trouble getting A100 instances on GCP? Braxton#1343: Can't tell if it's a bug or they're just really out of resources Braxton#1343: Tried a few regions AerysS#5558: Anyone knows about a paper that benchmarks loss functions for some tasks? Such as classification, segmentation, etc. I cannot find any good one alstroemeria313#1694: so we are having some sort of GPU memory leak when training the big GLIDE :/ StellaAthena#3530: oh? StellaAthena#3530: :/ alstroemeria313#1694: well i am going to try fairscale fsdp first alstroemeria313#1694: and see if it goes away alstroemeria313#1694: bc i can't find anything that's obviously leaking alstroemeria313#1694: so i am assuming it might be something inside fairscale's regular sharding somehow, idk Iacopo Poli#2931: It’s a while it’s been like this. Have you tried europe-west4-a?
Louis#0144: does anyone have examples of failed faiss indices alstroemeria313#1694: i managed to get the batch size up to 16 without going to fsdp alstroemeria313#1694: by moving the EMA weights into the optimizer state alstroemeria313#1694: bc the optimizer state is sharded with `ddp_sharded`, you don't need to go so far as `fsdp` alstroemeria313#1694: this saved ~12GB of memory per gpu tricky_labyrinth#2495: does anyone know what NCG stands for (usually in hiring descriptions)? StellaAthena#3530: "New College Graduate" maybe? tricky_labyrinth#2495: o, could be tricky_labyrinth#2495: ty Emad#9608: It's probably because midjourney is using up all the available ones 👀 Emad#9608: I do wish AWS offered single A100 instances dmvaldman#4711: i'll be in NYC 5/2-5/12, who wants to hang out Louis#0144: oooo Louis#0144: @Alex Havrilla Alex Havrilla#6435: Sad I'll be there starting the 23rd Kia#2550: Is YouTube now scrapable because of such law existing saying *it's not illegal* to do such action Kia#2550: ~~New Laion project confirmed??~~ Louis#0144: YouTube is scrapable Kia#2550: But it's illegal Kia#2550: Actually, nvm
chilli#5665: https://www.hpcwire.com/2022/04/18/nvidia-rd-chief-on-how-ai-is-improving-chip-design/ chilli#5665: pretty interesting chilli#5665: damn chilli#5665: reading this chilli#5665: I feel like Nvidia is way ahead of Google in using AI for chips? chilli#5665: google did like ... just layout planning 𓅬 gabriel_syme 𓅬#3220: Someone said layout planning? ilovescience#3282: chip layout planning lol 𓅬 gabriel_syme 𓅬#3220: did I forget my /s? My bad :guilty: chilli#5665: Unless these are just speculative projects that aren't actually being used kurumuz#5695: yeah they seem to do a lot. one day this will just become completely end to end kurumuz#5695: AGI will obviously be able to self improve kurumuz#5695: :ultra_worried: chilli#5665: Not before the end lol Tinytitan#5596: So either the contributions from AI are more limited than you would think or... 𓅬 gabriel_syme 𓅬#3220: I read it as "humans will not do this without agi" tpapp157#3643: Youtube has always been legally scrapable. Like most major websites, scraping is against their ToS so take proper precautions. Don't use your account to do the scraping to avoid being banned and probably use a VPN with a dynamic IP address to avoid being IP blocked, etc. The limitation on scraping youtube has never been legality, it's been the website's streaming nature which means downloading a video requires the scraping software to fake "watching" the video in real time so building up a dataset is very slow. 𓅬 gabriel_syme 𓅬#3220: Really need to play with minihack a bit. Anyone has experience with it? 𓅬 gabriel_syme 𓅬#3220: This is pretty nice, and I'm guessing I could smh plug it in a parametric process https://twitter.com/samveIyan/status/1516753755634642945?t=3sf8z10CBEvJf2F-nGgzpw&s=19
𓅬 gabriel_syme 𓅬#3220: Also I'm pretty sure you could make minihack levels with a LM by padding that representation ILmao#5683: Aren't they usually? Or is that too cynical of a take? circuit10#0158: Do they actually check for that? goodmattg#8728: YT is highly sophisticated at tracking IPs that violate ToS so be careful. Check out Youtube-DL if you want to do more in that area tpapp157#3643: You always need to take big company announcements about cool new internal capabilities with very large amounts of salt. The size of the fish you caught doubles in size each level up the management chain it goes. Also, large companies are under constant pressure to impress investors. Practically every major company will claim today that they're using cutting edge AI in all manner of applications, but the truth is that most are not anywhere close. That all said, Nvidia does have a top notch ML R&D department. ILmao#5683: Of course, the question is how much is fluff ILmao#5683: From 100% this is just some glorified hackathon thing to we are seriously considering using this in production tpapp157#3643: I wouldn't doubt it. Youtube already has a big problem of automated bots that scrape videos and post them to other platforms to get a bit of ad revenue before they're taken down. Any reasonably sized youtube creator is constantly fighting these. More practically, it's probably mostly a question of scope. If you're just downloading a handful of videos, no one is going to notice, but if you're trying to scrape millions of videos then yeah that'll set off some automated alarms. circuit10#0158: Ah, makes sense nz#9710: New Ari Seff video on diffusion models https://www.youtube.com/watch?v=fbLgFrlTnGU tpapp157#3643: Who knows. You'd have to know someone actually involved in the work to get the inside scoop. For this sort of announcement at an average company I would say it's probably closer to the former, but because of nvidia's demonstrated ML capability I'm more willing to give them the benefit of the doubt. In a typical company, R&D might be involved in the very initial deployment of a new capability but would then quickly roll off the work to a production development/sustainment organization. Of course, it's at this point that production engineering realizes the R&D implementation is garbage and doesn't work half the time and they throw that shit out and revert to their prior process. Meanwhile, R&D remains blissfully unaware and believes they've deployed the greatest advancement in company history. I've seen this story play out far too many times. I'd take any announcement regarding production capabilities coming from someone within R&D with an even bigger grain of salt. Bunzero#2802: YTDL/P can download videos faster than watching in realtime. Probably any archiving software should be able to do that. ilovescience#3282: lol i came here to post this ilovescience#3282: it is quite good nz#9710: Ari seff is great &.#0001: how do you convert a GPT-Neo model to 16 bit? &.#0001: Can a GPT-Neo 5.8B model fit in 24GB of VRAM?
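On the 16-bit question: with Hugging Face transformers you can either load directly in fp16 or call `.half()` after loading, and a ~6B-parameter model is then about 12 GB of weights, leaving some headroom for activations in 24 GB. A minimal sketch; the checkpoint name is a placeholder, swap in whichever SGPT/Neo checkpoint is actually in use:

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "EleutherAI/gpt-neo-2.7B"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, torch_dtype=torch.float16).cuda()
# equivalently, after loading in fp32: model = model.half().cuda()
```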
EricHallahan#1051: Yes, but why not just use GPT-J-6B. &.#0001: I’m trying to use SGPT to try to generate document embeddings. &.#0001: Darn, still didn’t load, even after switching my graphics to integrated nz#9710: https://jacobbuckman.com/2022-04-19-bad-ml-abstractions-i-generative-vs-discriminative-models/ 𓅬 gabriel_syme 𓅬#3220: refuting dichotomies is one of my favorite mental hobbies tpapp157#3643: This isn't new or interesting and it purposefully avoids the actual colloquial usage of the terms generative and discriminative. Yes, all models are simply mappings from one data space to another, therefore all models are fundamentally the same. Big deal. That's like saying everything in the universe is made of the same subatomic particles so therefore everything in the universe is actually the exact same thing. Technically true but also practically meaningless in its banality. More importantly, a generative model maps data from a smaller data space to a larger data space while a discriminative model does the reverse. Sure they're both merely the same type of maximum likelihood data mapping but the distinction is still an important consideration. OccultSage#3875: Omg, wow. https://arxiv.org/abs/2202.06991 -- the intersection of ML and all the database/search/indexing/relational theory stuff I worked on for years. Monopton#6214: https://discord.com/channels/729741769192767510/730510538060071043/966696117930700870 BATbot knows something that we do not Kharr#7888: If it's too good to be true, it's probably data leakage: https://twitter.com/rasbt/status/1516833338316963840 StellaAthena#3530: I don't get why people give fraud like this attention Kharr#7888: It's not always fraud, sometimes it is just a mistake. A lot of research is done by junior researchers/research assistants in labs and it's important to always be critical of your own results. I've definitely had junior DS come to me with "an amazing model" and it turned out to be data leakage after closer inspection. StellaAthena#3530: I have looked into the authors and am of the belief that they are not real people. StellaAthena#3530: Some discussion here: https://www.reddit.com/r/MachineLearning/comments/u7ouxh/comment/i5gvzvz/?utm_source=share&utm_medium=web2x&context=3 Kharr#7888: I was looking through the arxiv of coauthors, pretty amusing 𓅬 gabriel_syme 𓅬#3220: damn that would be awesome though, if it was a whole fictional paper Kharr#7888: There is legitimate research on false knowledge dissemination which examines how purposefully fake papers get through peer review and are cited down the road
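On the data-leakage point above: the classic way to end up with a too-good-to-be-true score is letting test-fold information into preprocessing, for example selecting features on the full dataset before cross-validation. A small sklearn illustration on pure noise:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X, y = rng.standard_normal((100, 10_000)), rng.integers(0, 2, 100)

# Leaky: feature selection sees the labels of every fold before CV is run.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
print(cross_val_score(LogisticRegression(), X_leaky, y).mean())  # far above chance

# Correct: selection happens inside each CV fold via a pipeline.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression())
print(cross_val_score(pipe, X, y).mean())  # ~0.5, as it should be on noise
```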
izzy 👹#6687: https://twitter.com/mark_riedl/status/1517163976760729601?s=20&t=koWtbA-wgEXhsTSGEBFSmQ random person#5234: @Kharr I mean who reports perfect accuracy haha random person#5234: I am pretty sure human would consistently label one or two wrong Louis#0144: Lmao this was bc of a conversation I had with him Louis#0144: 😂 tpapp157#3643: It's likely that this claim to copyright would not hold up if challenged in court but a lot depends on the particulars of their particular model and how much artistic control users have over the generation process. This is a difficult subject but you can find plenty of parallels and legal precedents in other media fields. Do video game companies have copyright control over livestreams of their games? How do you determine and share photograph copyright between a photographer and a model? Etc. Of course there's also the business consideration, even if they can legally claim copyright ownership, enforcing that ownership will likely push users to alternative services. cognomen#6297: I would assume a blanket copyright claim like this also invites all copyright liability on the claimant cognomen#6297: so not the smartest move cognomen#6297: very little benefit vs risks tpapp157#3643: To my knowledge it's never actually been tested in court so it's still an open question and parties involved would mostly rather let sleeping dogs lie. Realistically, even if the game company doesn't have copyright control of the livestream itself, they would still maintain copyright control over the assets in that livestream (art, music, video, models, etc) and could effectively shut down livestreams that way if they want. tpapp157#3643: Reference a recent event earlier this year with the game Stranger of Paradise: Final Fantasy Origin where the publisher forbade streamers from showing the ending of the game past a certain point. It triggered a lot of debate in the industry. Many streamers simply ended their livestream at that point or avoided the game entirely, but there were still plenty that showed the ending anyway. To my knowledge, the publisher hasn't actually issued any copyright strikes against those violations. tpapp157#3643: That game was pretty bad so it ended up being a big deal over nothing and everyone moved on. But if it had been the most anticipated biggest game of the year, who knows how that may have played out. tpapp157#3643: For sure. tpapp157#3643: Youtube reaction videos are another perennial copyright hotspot in a similar way. tpapp157#3643: Most likely. elprogramadorgt#4160: Hello everyone, my name is Eduardo, I'm joined this community because I found the AI field interesting and I would like to help on this project, maybe with small tasks and some help at beginning jejeje my background, I'm a full stack developer worked with mostly javascript frameworks (angular, react, node) and for mobiles just Android Java, I have some knowledge with python and linux servers, but I'm open to learn something new you need 😁 😁 😁 zphang#7252: Honestly what irked me more was the preorder-for-early access. IIRC the streaming ban only applied during that period (i.e. you can't stream the ending before the actual launch date), which sounds more reasonable to me. Also the game was actually pretty decent, just that the story was dumb because everyone guessed the "twist" from the first trailer Kia#2550: It's a questionable Tos honestly
Denizen_Kane#0555: Has anyone trained or fine-tuned a model on infilling with J or NeoX and seen good results? https://github.com/chrisdonahue/ilm Kia#2550: https://twitter.com/ThomasSimonini/status/1517161975943421952?t=IPd4zrYkgc8DyxJu5rWNNg&s=19 imceres#0461: Dear all, I've been reading the group for some time now but I never introduced myself: my name is Mario Ceresa and I work on DL for health data. Recently, I started working on applying transformers to genomics for bacteria and viruses (covid obviously in this period). I would be very curious in testing the performance of large ML like gpt-neox or enformer on genomic data, specifically with sparse attention to be able to encode large genomic sequences (30k bases). Hope to collaborate and learn from all of you 😊 Emad#9608: Whatever happened to Open-GPT-X? https://tu-dresden.de/tu-dresden/newsportal/news/projektstart-open-gpt-x?set_language=en Emad#9608: I mean $15m is a lot for a GPT-3 sized model Singularity#9001: Does anyone know if there are any google colab notebooks for the DALL-E 2 implementation here: https://github.com/lucidrains/DALLE2-pytorch StellaAthena#3530: Thats just the underlying code, not a trained model cfoster0#4356: DeepMind gonna get some more neuro/probabilistic modeling folks, it sounds like https://twitter.com/vicariousai/status/1517655486329282560?t=td2t9Rx5tJR-vK6B-3Nriw&s=19 cfoster0#4356: Hmm this might explain their recent JAX release Some Point Process#3793: Nice, fwiw cofounder dileep used to be at numenta, as well (publications: https://www.vicarious.com/publications/). Seems that they hedge more towards graphical models than brain-like ai (e.g. DNNs) than some other companies (e.g. covariant.ai) cfoster0#4356: They're somewhere in the middle. The clone-structured graph models are a graphical modeling-flavored version of cognitive map work. Shift a bit further in the neuro direction and you get Numenta / TBT. Shift a bit further in the DL direction and you get the Tolman-Eichenbaum Machine. cfoster0#4356: I dunno if I can evaluate that, haven't spoken to enough people in those circles cfoster0#4356: None of their work was super interesting to me personally, except maybe at a broad concept level, and it doesn't seem like they had much market success cfoster0#4356: Where'd you see that? 𓅬 gabriel_syme 𓅬#3220: Cries in environmental engineering noises HanakoMasaki[Cactuar]#0015: what's the deal with the bot in #art does it try to guess what you gave it or what HanakoMasaki[Cactuar]#0015: it auto emotes it 𓅬 gabriel_syme 𓅬#3220: it provides annotations to images Kia#2550: Batbots, Reacts an Emoji on images being sent in #art and #the-faraday-cage-archive using clip, Same with like generated captions like those text Kia#2550: it's automatic
HanakoMasaki[Cactuar]#0015: ah ok thanks Kia#2550: happy to help alstroemeria313#1694: iirc it works by comparing the image's CLIP embedding to the text embeddings of all the emoji and taking the one w/ highest cosine similarity. HanakoMasaki[Cactuar]#0015: swing and a miss on mine then mostly zswitten#0371: Hi, sharing an essay I wrote about data as a public good and how we could get closer to the societally optimal amount of it. Curious for people's thoughts. https://zswitten.github.io/2022/04/14/data-public-good.html bmk#1476: the amount of marginal value of adding one wikipedia page or whatever to the training data is really tiny though bmk#1476: also, taking this further, we should also be having people pay proportional to the amount of value they derive from reading free things online bmk#1476: yeah i agree in the ideal case that would happen bmk#1476: but this is really hard to make work in practice Monopton#6214: That is paywallings things which are supposed to be free which imo is even more problematic chirp#4545: https://thisaidoesnotexist.com/ bmk#1476: I mean it seems inconsistent to both argue that you should have to pay to train models on data, but not have to pay to read the data Monopton#6214: true Stephen#8051: Does the code generated really work? chirp#4545: no AI_WAIFU#2844: this is pretty good https://cdn.discordapp.com/attachments/729741769738158194/967545160705671168/unknown.png johnryan465#9922: Any recommendations for good performing camera pose estimation from NERF models? Similar to this https://arxiv.org/abs/2012.05877 random person#5234: So it dumped a bunch of pyspark code? tpapp157#3643: There's plenty of pose estimation research from images with good results. You can render a NERF to images from multiple camera positions, then do pose estimation on those, use some math to estimate 3D coordinates. johnryan465#9922: Care to elaborate on "do some math"
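One concrete reading of "do some math" is multi-view triangulation: given 2-D detections of the same point in images with known camera matrices, a small least-squares problem recovers its 3-D position. A minimal DLT sketch for two views, illustrative only; real pipelines use more views plus robust estimation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2-D image points."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # back from homogeneous coordinates
```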
johnryan465#9922: The particular use case I have would be possibly quite noisy and real time so I am trying to find approaches which satisfy those criteria johnryan465#9922: iNERF does ray sampling randomly, via interest points and via interest regions johnryan465#9922: However iNERF doesn't seem fast enough for real time usage (unless perhaps combined with https://github.com/NVlabs/instant-ngp) tpapp157#3643: How are you training a NERF in real time? tpapp157#3643: I don't remember the specific algorithm off hand, but if you have a series of 2D point coordinates from multiple different angles you can least squares estimate their 3D positions. It's basically how they do things like video motion capture. uwu1#4864: You could try to iteratively fit a SMPL using the randomly sampled points uwu1#4864: Or maybe find the SMPL manifold within your nerf space and constrain optimization to that uwu1#4864: Oh wait you said camera pose. Check out https://github.com/gradslam/gradslam uwu1#4864: instant NGP also optimizes camera poses but it needs a pretty good init for them from a normal SLAM uwu1#4864: But I have a sneaky feeling that all these methods actually calculate incorrect image space gradients uwu1#4864: at least when compared to https://github.com/BachiLi/redner m_wAL99#1923: https://github.com/thesephist/modelexicon/blob/main/src/main.oak#L46 :berk: chirp#4545: has anyone tried InstructGPT for fiction? how does it compare to the old GPT-3? chirp#4545: to me personally, it seems to be _much_ better - actually insanely good, it can write story text that is compelling and also coherent chirp#4545: but idk if anyone has evaluated it for this purpose very rigorously Golo#4822: Hello makya#2148: I've only tried it for asking and answering questions. Not storytelling. At least not yet. But that reminds me, would be cool to test it out on that. chirp#4545: It is quite good. Compared to the original GPT-3 it is way less likely to go off task. If you tell it to end the story a certain way, it will comply. chirp#4545: https://cdn.discordapp.com/attachments/729741769738158194/967719332706648074/IMG_0349.png
marmiteCloud#5923: The difference is night and day on some issues, in particular truth on objective/definition questions (I suspect WebGPT may be invovled). For example cases of Polysemy - it's far better at telling you "Collateral Sprouting" is related to Neuroscience than before (it would bullshit about plants causing each other damage before and still will if you select old engine). The downside I noticed is it is way less creative in longer prompts which caused some prompts to become useless on the latest instruct version (essentially it will repeat key points rather than veer off) marmiteCloud#5923: better example for this community: before if you asked "negative feedback is important in cybernetics. Negative Feedback is" you would usually (on low temp, always) get a completion about employee feedback/giving constructive feedback at school. Nowadays, it will talk about feedback loops and system stability. So if that's within your story, it will be cohesive to the subject at hand perhaps James#6892: I have the same conclusion. It’s way better at truth/objectivity/qa tasks now, but longer form continuation that relies on creativity and more generation does not work anymore. tammy#1111: https://cdn.discordapp.com/attachments/729741769738158194/967810611041935470/967810112951558164-unknown.png tammy#1111: (openai output) alstroemeria313#1694: so like... how *do* you learn faster ode integrators for an ode you have. like in general. alstroemeria313#1694: including things like learned variable step size genetyx8#7543: you mean like fourier neural operators do for pdes? alstroemeria313#1694: i... maybe alstroemeria313#1694: i don't know what those are ^^;; genetyx8#7543: basically, learning the solution operator of a PDE alstroemeria313#1694: btw my ode is parameterized by a 500M+ parameter model to begin with alstroemeria313#1694: and my best integrator for it is 4th order linear multistep genetyx8#7543: I'm guessing the easiest thing to do is learn the flow operator for some fixed dt alstroemeria313#1694: but we have to like... i don't know the optimal step sizes even alstroemeria313#1694: i just kind of guessed alstroemeria313#1694: ...what's a flow operator? ^^;; genetyx8#7543: the map that takes a state at time t and maps it to the state at time t+dt, for some dt alstroemeria313#1694: hm alstroemeria313#1694: like i have the y' = f(t, y)
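A concrete way to read "learn the flow operator for some fixed dt": generate (state, state-dt-later) pairs with the existing slow solver and regress a network onto that map, so inference replaces many small steps with one forward pass. A hedged sketch, with every name a placeholder for your own pieces:

```python
import torch
import torch.nn.functional as F

def distill_flow_operator(g, slow_solver, f, y_samples, t, dt, iters=10_000):
    """Train g(y, t) to match the slow solver's state dt later (the 'flow map')."""
    opt = torch.optim.Adam(g.parameters(), lr=1e-4)
    for _ in range(iters):
        y0 = y_samples[torch.randint(len(y_samples), (64,))]
        with torch.no_grad():
            y1 = slow_solver(f, y0, t, t + dt)  # teacher: many small steps
        loss = F.mse_loss(g(y0, t), y1)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return g
```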
alstroemeria313#1694: and integration rules/ode solvers are flow operators? genetyx8#7543: the flow operator is typically understood to be the analytical solution, though in practice it just means the solution from an ODE solver alstroemeria313#1694: ah alstroemeria313#1694: "but doctor, i *have* the ode solver" alstroemeria313#1694: or like. do you mean train a net to skip to the output of several smaller ode solver steps. genetyx8#7543: point being that for PDEs, where the standard numerical solvers are kinda slow, you can train a NN to learn the flow operator for a given PDE and it will generally be faster than the numerical solver alstroemeria313#1694: I um, still don't know what a PDE is really. ^^;; genetyx8#7543: for all intents and purposes, an ODE in a function space alstroemeria313#1694: mine is in a vector space alstroemeria313#1694: i.e. i have a gradient instead of a derivative genetyx8#7543: so are PDEs when you solve them numerically alstroemeria313#1694: oh genetyx8#7543: but the good things about neural operators is that they are independent of the discretization https://zongyi-li.github.io/neural-operator/ alstroemeria313#1694: i don't know enough about ODEs/PDEs/etc to know why anything Fourier related is significant. ^^;; alstroemeria313#1694: i can integrate mine well enough in ~50 steps using fourth order Adams-Bashforth. genetyx8#7543: the FFT here is the magic trick for PDEs over simple domains alstroemeria313#1694: i am trying to get the step count down further alstroemeria313#1694: there's this paper https://arxiv.org/abs/2202.05830 genetyx8#7543: basically: differential operators over simple domains (e.g. squares) are linear operators with the fourier basis as an orthogonal eigenbasis, meaning that when you express your PDE in the fourier domain, it's simpler. (In fact Fourier developped the Fourier series to solve the heat equation) alstroemeria313#1694: but like. they only compared vs a first order ODE solver so i don't know to what extent their method just learns a linear multistep method and to what extent it improves on my best non-learned ODE solver (which is way better than their baseline).
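For concreteness, the fourth-order Adams-Bashforth update being referred to: each step reuses the last four f(t, y) evaluations, so it costs a single new call to the (expensive) model. This is the fixed-step form; the variable-step version mentioned above needs the coefficients re-derived per step, and the first three steps have to be bootstrapped with something else (e.g. a Runge-Kutta or Euler step):

```python
def ab4_step(y, f_hist, h):
    """One fixed-step 4th-order Adams-Bashforth update.
    f_hist = (f_n, f_{n-1}, f_{n-2}, f_{n-3}), newest first."""
    fn, fn1, fn2, fn3 = f_hist
    return y + (h / 24.0) * (55 * fn - 59 * fn1 + 37 * fn2 - 9 * fn3)
```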
genetyx8#7543: but for your purposes, I'm guessing you'd want to train a network to take (x_0,T) and map it to the solution after time T alstroemeria313#1694: like. they let their method see multiple previous ODE steps. so it can in fact just learn a linear multistep method using that. alstroemeria313#1694: one step would be really nice yeah but i doubt i can do it alstroemeria313#1694: with good quality, anyway. genetyx8#7543: you might want to check Steve Brunton's work. He uses deep learning to learn sparse state spaces of dynamical systems https://www.youtube.com/watch?v=KmQkDgu-Qp0 alstroemeria313#1694: > And, while on CIFAR10 the metrics indicate significant relative improvement over sample quality metrics, the relative improvement on ImageNet 64x64 is less pronounced. We hy- pothesize that this is an inherent difficulty of ImageNet due to its high diversity of samples, and that in order to retain sample quality and diversity, it might be impossible to escape some minimum number of inference steps with score-based models as they might be crucial to mode-breaking. oh alstroemeria313#1694: Yeah I am only interested in highly diverse datasets alstroemeria313#1694: Like way more diverse than ImageNet even. alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/967845550684794920/Screen_Shot_2022-04-24_at_10.52.38_AM.png alstroemeria313#1694: So this is. The actual DDPM model that OpenAI released? alstroemeria313#1694: ...no alstroemeria313#1694: They trained their own alstroemeria313#1694: And didn't release it alstroemeria313#1694: So I can't just run my own baseline and compare vs the numbers in this table. alstroemeria313#1694: Bc I can *easily* beat DDIM linear stride. alstroemeria313#1694: At step counts this low. alstroemeria313#1694: With a non-learned sampler. alstroemeria313#1694: Did they use *any* released diffusion models in this paper that I could run my own baseline on and compare to their method without replicating their entire method. alstroemeria313#1694: Sigh. :/
alstroemeria313#1694: Because if I'm right and this thing just learns a linear multistep method (Or something not really better than one) + optimal step sizes alstroemeria313#1694: I could save myself a lot of trouble and just learn the step sizes. alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/967847484795789332/Screen_Shot_2022-04-24_at_11.00.13_AM.png,https://cdn.discordapp.com/attachments/729741769738158194/967847485211041872/Screen_Shot_2022-04-24_at_11.00.19_AM.png genetyx8#7543: possibly dangerous idea: train a network to learn the (time-varying) parameters and step size of a Runge-Kutta method? (I say, dangerous because this is not very far from training a NN to optimize a function) alstroemeria313#1694: i was going to do step sizes of linear multistep alstroemeria313#1694: bc i have a way of reparameterizing the thing so i can use variable step sizes with linear multistep alstroemeria313#1694: runge-kutta's slow bc it's slow to get model outputs and it needs more than one per step alstroemeria313#1694: this paper uses kernel inception distance as their loss to optimize their learned sampler with alstroemeria313#1694: this is probably a good idea genetyx8#7543: well, I think this is about as useful as I'm going to be here, and I gotta make food, so... good luck alstroemeria313#1694: *nods* alstroemeria313#1694: ty! :blobcutehappy: alstroemeria313#1694: i could also like. do sampling w/ more steps and optimize the shorter learned sampling process outputs for some loss vs the longer sampling process outputs. bob80333#4040: I remember Microsoft had a diffusion vocoder paper where they were able to sample with just 1 step alstroemeria313#1694: this is not really suitable for learning continuous schedules though. alstroemeria313#1694: you have to fix the number of timesteps bob80333#4040: https://arxiv.org/abs/2202.03751 this may be relevant? alstroemeria313#1694: so <x>grad are all diffusion vocoders? bob80333#4040: Yes bob80333#4040: In this work they fine-tune a pretrained model on the inference schedule
alstroemeria313#1694: ahh bob80333#4040: And the previous work found the inference schedule by searching for the betas for short schedules chirp#4545: I find it can still be creative, you just have to explicitly ask it to be (“come up for a few interesting things that could happen next…”) alstroemeria313#1694: I have another question btw. alstroemeria313#1694: Say I want to learn a latent space where Gaussian-type errors (i.e. adding Gaussian noise to the thing or the type of errors you would get by training a model with MSE loss) are transformed to perceptually uniform errors in the output. alstroemeria313#1694: Where I have some sort of perceptual loss to define what that means. alstroemeria313#1694: Is training a simple VAE a good idea for this or is there any more direct way to do it. alstroemeria313#1694: Assume our perceptual loss function is such that we can't just construct an invertible mapping to a space where the perceptual distance = Euclidean distance. alstroemeria313#1694: also this autoencoder should ideally be capable of nearly perfect reconstruction Computer Scientist#8778: Whahahahahaha Computer Scientist#8778: Hahahhahaha Veedrac#0443: I'm running an experiment https://puzzling.stackexchange.com/questions/115879/ Veedrac#0443: “How long does it take a perceptive audience analyzing a set of DALL-E 2 images to tell that they aren't real?” (A: They might never figure it out.) guac#4716: these generative models are still so so so bad at hands lol /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: You might get a bit of selection bias in here. AI_WAIFU#2844: ||Is it one down on the left?|| Veedrac#0443: No! Veedrac#0443: not a bad guess though, IMO
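An aside on the perceptually-uniform-latent question above: a minimal sketch of one crude way to get at it with a plain autoencoder, assuming some differentiable perceptual metric is available (passed in here as `perceptual_loss`; the noise scale and the noised-latent training trick are assumptions, not an established recipe).

```python
import torch

def training_step(encoder, decoder, perceptual_loss, x, noise_scale=0.1):
    z = encoder(x)
    recon = decoder(z)                                   # clean reconstruction
    recon_noised = decoder(z + noise_scale * torch.randn_like(z))
    # first term pushes toward near-perfect reconstruction; second term asks
    # that Gaussian perturbations of the latent decode to perceptually small changes
    return perceptual_loss(recon, x).mean() + perceptual_loss(recon_noised, x).mean()
```

A VAE with a perceptual reconstruction term is the more standard version of the same idea; the reparameterization noise plays the role of the injected latent noise here, at the cost of some reconstruction fidelity from the KL term.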
Veedrac#0443: yes Veedrac#0443: It was hard for me to estimate the difficulty given I already knew the answer, but IMO the interesting part is whether people can figure out that they aren't real images in the first place. chirp#4545: is there a platform like colab but with - a dataframe abstraction (you have a dataframe that you can modify and save) - easy concurrent computation (e.g. populate 1000000 rows of a dataframe column using multiple workers) - transparent support for images & object storage generally - support for attaching a GPU on demand, with usage-based billing for context, this idea comes from the issues I ran into while trying to fine tune DALL-E on images: - you have to figure out where to put the images, and if they're in the wrong S3/GCS region you'll get charged a lot for bandwidth - it's most intuitive to work with the dataset as a giant dataframe, but then persistence is tricky - if you forget to save, you'll lose what you computed - some workloads (but not all) are fastest if you can use multiple GPUs, but Colab does not allow it. you can spin up your own GCS instances, but then you need to figure out how to load data, port your code over, etc. chirp#4545: overall my experience was quite rough - even though i wasn't trying to do anything complicated, i had to spend a lot of time working around logistical issues chirp#4545: i'm guessing there's not, so i guess my point is that I'd be really excited to see such a platform be created chirp#4545: this is equivalent to having one big computer with a bunch of GPUs and persistent RAM, but of course that would be very expensive random person#5234: Thats not how scaling works probably random person#5234: Concurrent computation => just use spark chirp#4545: spark has an annoying DX though
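Not a full answer to the platform wish-list, but for the "populate 1,000,000 rows of a dataframe column using multiple workers" part specifically, dask is the usual less-Spark-flavored option; a minimal sketch (the column names and per-row function are made up):

```python
import dask.dataframe as dd
import pandas as pd

# toy stand-in for an image-metadata table
pdf = pd.DataFrame({"url": [f"img_{i}.jpg" for i in range(1_000_000)]})
df = dd.from_pandas(pdf, npartitions=64)

def featurize(url: str) -> float:
    return float(len(url))  # placeholder for real per-row work (fetch image, run a model, ...)

df["feature"] = df["url"].map(featurize, meta=("feature", "f8"))
result = df.compute()  # runs on whatever dask scheduler/cluster is configured
```

The GPU-attachment, object-storage, and persistence pieces are exactly the parts this still doesn't solve, which is the gap being described.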
/ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: Unfortunately pipeline scaling is a lot of secret sauce sounds like you need a data engineer. random person#5234: You cant just abstract all these things away atm random person#5234: Into one jupyter Louis#0144: Sounds like a cool product &.#0001: is there a working CARP notebook? the one I found on google says --2022-04-25 06:18:24-- https://mystic.the-eye.eu/public/AI/CARP_L.pt Resolving mystic.the-eye.eu (mystic.the-eye.eu)... 62.6.154.15 Connecting to mystic.the-eye.eu (mystic.the-eye.eu)|62.6.154.15|:443... connected. HTTP request sent, awaiting response... 404 Not Found 2022-04-25 06:18:25 ERROR 404: Not Found. EricHallahan#1051: Try `https://the-eye.eu/public/AI/models/CARP/CARP_L.pt` &.#0001: Same error EricHallahan#1051: You sure? &.#0001: it works, I made a mistake &.#0001: thank you! Lookism Enjoyer#7179: Hello guys, i was gonna generate an image in hugging face and suddenly this came. How can i fix it on android? https://cdn.discordapp.com/attachments/729741769738158194/968046001887776808/Screenshot_2022-04-25-15-06-06-02.jpg EricHallahan#1051: This isn't really the place to get technical support like this, but that is the page for that model on Model Hub. I think you are looking for Spaces? Lookism Enjoyer#7179: I think so Lookism Enjoyer#7179: How can i find the space?
Lookism Enjoyer#7179: Oh here it is, it might be loading or the page is empty https://cdn.discordapp.com/attachments/729741769738158194/968049260346114098/Screenshot_2022-04-25-15-21-05-54.jpg apolinario#3539: Weird it works for me here https://cdn.discordapp.com/attachments/729741769738158194/968056715776000020/unknown.png Lookism Enjoyer#7179: I kinda think it only works for pc Lookism Enjoyer#7179: Or it's just something EricHallahan#1051: Probably iframe related. Lookism Enjoyer#7179: I'll use another browser if it works apolinario#3539: https://cdn.discordapp.com/attachments/729741769738158194/968064691433766922/Screenshot_20220425-102306481.jpg apolinario#3539: Here working on the phone too Lookism Enjoyer#7179: Though what browser are you using on phone EricHallahan#1051: This conversation should probably be moved to DMs. Lookism Enjoyer#7179: It should Keverino#1093: is there an efficient way to find all possible combinations to tokenize a given str and tokenizer? Caelum#8192: Trie map Caelum#8192: Oh or do you mean to find the optimal set of tokens to create by finding common substrings? Caelum#8192: https://en.wikipedia.org/wiki/Trie#:~:text=In%20computer%20science%2C%20a%20trie,key%2C%20but%20by%20individual%20characters not necessarily a trie map Caelum#8192: I have implemented a trie tokenizer for The Pile with this that is just as optimal but it still differs and I need to make it not differ too Keverino#1093: I face the problem where i have bad_word token_sequences. But the model just finds new ways to express itself. First using a different combination of tokens, and if you ban those it will even go as far as using special chars and typos to predict a similar word. Say i want to ban "not": The tokenizer tells me to ban [1662], but the model can just predict "no" and "t", which is a different sequence. After banning all combinations. The model can still say "no.t". It feels like the "bad_words" mechanic is not as powerful as i thought 😄 StellaAthena#3530: @Keverino Are synonyms a problem as well, or is this a pure syntactic thing (as your tokenized example shows)
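On the "all possible combinations to tokenize a given str" question: a minimal sketch using dynamic programming over the vocabulary (this assumes a Hugging Face-style tokenizer with `get_vocab()`, and ignores byte-level details like the `Ġ` prefix for leading spaces, which a real version would also need to handle):

```python
from functools import lru_cache

def all_tokenizations(text: str, tokenizer):
    vocab = tokenizer.get_vocab()  # token string -> token id

    @lru_cache(maxsize=None)
    def split(s: str):
        if not s:
            return [[]]
        results = []
        for i in range(1, len(s) + 1):
            piece = s[:i]
            if piece in vocab:
                results.extend([vocab[piece]] + rest for rest in split(s[i:]))
        return results

    return split(text)
```

For "not" this returns the single-token spelling alongside multi-token ones like "no" + "t", so all of them can be added to bad_words at once, though as noted it still can't catch rewrites like "no.t" or synonyms.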
Keverino#1093: It's purely syntactic right now. But i guess the same applies if I want to include synonyms. Keverino#1093: I feel like banning tokens is a bad way of "controlling" a GPT model StellaAthena#3530: A character-level or word-level tokenizer will solve a lot of your problems I think Keverino#1093: true StellaAthena#3530: On a semantic level, I have some ideas I've been bouncing off of a swedish researcher. Lemme go find that thread and introduce you two StellaAthena#3530: (If you want, that is) Keverino#1093: of course" Keverino#1093: ! Keverino#1093: thanks uwu1#4864: digital globe gbdx is this kind of, uses dask for spreading out the compute over tiles of imagery dynamically fetched and sliced by workers nz#9710: https://evjang.com/2022/04/25/rome.html nz#9710: > OpenAI -> Technological lead on LLMs (~1 yr) + **an interesting new project they are spinning up** nz#9710: 👀 genetyx8#7543: > we plan to create customer value with deep learning on humanoid robots (1 year), and then solve manipulation (5 years), and then solve AGI (20 years) :harold: tammy#1111: *sigh* tammy#1111: capability news is bad for my mood DuckyBertDuck#5109: 17. Round objects floating in a blue sky. They are of different sizes and have different textures. Some are smooth, others bumpy or jagged. They give the impression of being light, as if they could float away at any moment. Surrealism, soft and ethereal.
18. A close up of a person's face. They have bright blue eyes and pink lips. Their hair is made up of different colors, shades and tones. It's a portrait, but it feels like there is more to the person than what meets the eye. Pop art, colorful and striking. 19. A close-up of a flower. The petals are delicate and the colors are muted. The background is out of focus, giving the impression that the flower is in its own world. Impressionism, soft and dreamy. DuckyBertDuck#5109: ----------- some cool descriptions I asked it to generate DuckyBertDuck#5109: of non-existent images DuckyBertDuck#5109: Some of those are pretty damn cool Caelum#8192: ", and then ask the AGI to solve alignment (25 years)" bmk#1476: someone needs to go alignmentpill Eric Jang Daj#7482: Unfortunately, he took one of my memes and anti-alignmentpilled himself I think Daj#7482: :grimberk: kurumuz#5695: so this is why memes can be dangerous huh kurumuz#5695: which one though Daj#7482: But he is right that AGI is gonna be a common thing literally in every company Daj#7482: the bellcurve "just tell the AI to be nice" one Daj#7482: wrote a whole blogpost about it lol bmk#1476: I can try to anti anti pill him Daj#7482: Eric, if you're reading this, please lets talk about alignment I swear it'll be worth your time lol
bmk#1476: I used to also be a "just tell the AI to be nice" guy a few years ago for largely the same reasons as led to that meme bmk#1476: more causally upstream but largely similar bmk#1476: also just slide into his DMs Daj#7482: I guess I should probably do this, ugh Daj#7482: don't have many spare brain cycles Louis#0144: oh yeah we're trying to get Eric to join CARP Louis#0144: lol Louis#0144: he'd like it Louis#0144: :berk: johnryan465#9922: @Daj would conjectures incubator take robotics companies? johnryan465#9922: AGI and robotics in the same article put that thought in my mind Daj#7482: That's not what that incubator is for :berk: johnryan465#9922: Aligned robotics Daj#7482: >implying robotics is relevant to AGI Daj#7482: :berk: Louis#0144: fund my consciousness startup pls Louis#0144: just to make leo seethe johnryan465#9922: I just want to make robots that don't kill people 😞 Daj#7482: You need AGI that doesn't kill people first johnryan465#9922: I'm going to work for DARPA now
Caelum#8192: need to make some nukes for the aligned AGIs we make to be able to deal with unaligned ones /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: Embodied AI has value, interactions between environment and thing instantiates higher order processes AI_WAIFU#2844: Someone tell him to do this, humanoid robots is the best course of action, and definetly not a massive time sink nightmare 𓅬 gabriel_syme 𓅬#3220: Can we fund some KG research instead /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: Even the simplest simulations can have holes in them, take the openai hide and seek demo or glitching ender pearls through walls. /ᇨᆬᆑᆺ忐ᆝᇯ忁徼ᅳ忬ᇎᆿ忘ᆮᆗᆈᆡ念ᆾᇧ応ᇙᆨᆂᆓ忌ᅷᆱᆫᇺ#2976: The latter is a contrived example of course, you wouldn't give an AI an ender pearl...right? johnryan465#9922: That's kinda my thinking as to why it's relavent johnryan465#9922: But probably not enough of a justification vs more direct approaches ac#1874: Is there an authoritative "no free lunch theorem"? ac#1874: Like in statistics I usually see it referring to Wolpert (https://direct.mit.edu/neco/article-abstract/8/7/1341/6016/The-Lack-of-A-Priori-Distinctions-Between-Learning?redirectedFrom=fulltext) but it seems like people use the term more generally to refer to any theorem that basically states "you can't get something for nothing"? StellaAthena#3530: It's not a "real" thing in any meaningful and non-trivial sense. It's a buzzphrase tammy#1111: fuck's sake rytilu#4639: does anyone have or know of a startup that serves eleutherAI completions without any restrictions? basically openai's api but open StellaAthena#3530: https://goose.ai/ ILmao#5683: That whole row on health ML should have a massive "in the US" caveat stamped on it alstroemeria313#1694: lol trying perceiver io diffusion This is just going to be bad, isn't it alstroemeria313#1694: 6 epochs on ms coco https://cdn.discordapp.com/attachments/729741769738158194/968228990617796608/unknown.png Sidd#6307: What is PerceiverIO Diffusion?