kurumuz#5695: you are right, it's the same on my GPT-J code. I was confused https://cdn.discordapp.com/attachments/729741769738158194/949893297659314197/unknown.png
kurumuz#5695: yeah you should be able to fuse these
chilli#5665: Hmm, to some extent - it’s complicated
chilli#5665: But mostly… no
kurumuz#5695: like if it was separate kernels it would move the x for the attn(x) + ff(x) two times
kurumuz#5695: if you fuse it, it only should move it once?
chilli#5665: Dynamic shapes is a different issue we’re also working on
kurumuz#5695: that is the only problem that is keeping me from completely switching to JIT
StellaAthena#3530: Can you like… write a closure that generates fused operators? I’m thinking about the fact that rotary + Attn is fusable, but you’d need a slightly different implementation for different h params of rotary I think.
kurumuz#5695: torch.script can do that afaik
kurumuz#5695: you can script conditional things
kurumuz#5695: torch.jit.trace is terrible at that though
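A rough, untested sketch of that factory idea (all names hypothetical): a plain Python closure bakes the rotary hyperparameters into a module and returns a separately scripted specialization per setting, which the JIT fuser can then optimize.
```python
import torch

def make_rotary(dim: int, base: float = 10000.0):
    # Hypothetical factory: bake (dim, base) into a fresh module and script it,
    # so each hyperparameter setting gets its own fusable graph.
    class Rotary(torch.nn.Module):
        def __init__(self):
            super().__init__()
            inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
            self.register_buffer("inv_freq", inv_freq)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: [batch, seq, dim]
            t = torch.arange(x.shape[1], dtype=torch.float32, device=x.device)
            freqs = t[:, None] * self.inv_freq[None, :]   # [seq, dim/2]
            emb = torch.cat([freqs, freqs], dim=-1)       # [seq, dim]
            half = x.shape[-1] // 2
            rotated = torch.cat([-x[..., half:], x[..., :half]], dim=-1)
            # everything below is pointwise, i.e. fuser-friendly
            return x * emb.cos() + rotated * emb.sin()

    return torch.jit.script(Rotary())

rotary_64 = make_rotary(64)  # one scripted specialization per head dim
```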
kurumuz#5695: I am not sure if torch jit can actually fuse attn and ff btw
kurumuz#5695: you definitely can with triton though
kurumuz#5695: cc @chilli
chilli#5665: Uhh… it’s also not trivial with triton
kurumuz#5695: :goose10:
StellaAthena#3530: It’s probably of minimal importance, but I was thinking about how parallel residuals introduce some redundancies (e.g., you can drop the first FF) and so now I’m wondering what an optimized version looks like
kurumuz#5695: all I want to do is move the x here once and use it for both attn and ff
StellaAthena#3530: We want to add them, not compose them, to be clear
kurumuz#5695: yeah
kurumuz#5695: so when you do
```python
attn_out = self.attn(x)
ff_out = self.ff(x)
x = residual + ff_out + attn_out
```
pytorch should be moving x, 2 times to the SRAM though it can just move it once and use it both for self.attn(x) and self.ff(x) @chilli
kurumuz#5695: goal is to move it just once
kurumuz#5695: is that really not trivial?
kurumuz#5695: not sure if this even helps with anything lol
kurumuz#5695: you are moving like what, 2048 x 4096 float16s for the 6B model?
StellaAthena#3530: If this is non-trivial I’ve lost all confidence in my ability to identify fusable operations that aren’t like, PyTorch consecutive built-ins
StellaAthena#3530: Well 20B has a dimension of 6144. Plus you don’t need to load the rotations or do the rotary computation
StellaAthena#3530: (Though I think we cache the rotations)
StellaAthena#3530: Also, you need to multiply by the MBS/GPU right
StellaAthena#3530: So it’s at least 20x your estimate. Maybe even 100x?
kurumuz#5695: yeah might be
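For concreteness, the back-of-the-envelope math behind these estimates (the microbatch size is a made-up example):
```python
# One extra read of x, using the numbers from this thread:
seq, d_model = 2048, 4096                # 6B-ish activation shape
print(seq * d_model * 2 / 2**20)         # float16 -> ~16 MiB per sequence

# 20B (d_model = 6144) with a hypothetical microbatch of 32 sequences per GPU:
print(2048 * 6144 * 2 * 32 / 2**30)      # ~0.75 GiB of avoidable traffic per pass
```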
StellaAthena#3530: It’s a good thing we are swimming in engineers with nothing to do so they can chase down details like this and save us a day of training our next model
/s
chilli#5665: It’s potentially nontrivial. The problem is that my description of operator fusion is a bit simplistic lol
chilli#5665: Like, SRAM is not thaaat big
kurumuz#5695: yeah i was also thinking about that
chilli#5665: So when you fuse a couple of pointwise operators together
kurumuz#5695: you cant just stuff everything inside
chilli#5665: Yeah
kurumuz#5695: so if your data is big enough you will have to split them into separate kernels right
chilli#5665: What you’re doing is loading the first say… 5000 elements, performing your operations, and then writing them back
chilli#5665: And the GPU handles this automatically, more or less
kurumuz#5695: yeah like when i was thinking about int8 dequant + matmul kernels, when you dequant in place the data will get to the size of 2x
kurumuz#5695: so it can like not fit when you dequant it
kurumuz#5695: to the SRAM
chilli#5665: This is related to the GPU’s programming model
kurumuz#5695: so also need to think about that and stuff
chilli#5665: Ah that’s not the worst thing I think, you just load the amount that you can fit 🙂
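A minimal Triton sketch of the pattern chilli describes (toy pointwise ops, names hypothetical): each program instance loads one block that fits on-chip, does all the fused math in registers, and writes back once.
```python
import torch
import triton
import triton.language as tl

@triton.jit
def fused_mul_add_relu(x_ptr, out_ptr, n, BLOCK: tl.constexpr):
    # Each program instance handles one BLOCK-sized chunk of elements.
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    x = tl.load(x_ptr + offs, mask=mask)        # one read from HBM
    y = tl.maximum(x * 2.0 + 1.0, 0.0)          # fused: mul + add + relu
    tl.store(out_ptr + offs, y, mask=mask)      # one write back

x = torch.randn(10_000, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
fused_mul_add_relu[grid](x, out, x.numel(), BLOCK=1024)
```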
Bruce23#6204: Hey, does anyone know what GPU runs at https://6b.eleuther.ai/ ?
Daj#7482: The demo is hosted by mystic.ai
Bruce23#6204: thanks
Bruce23#6204: I am wondering if the speed is comparable to a NVIDIA RTX A6000, that why I asked.
Bruce23#6204: It takes around 20seconds to process 1200 tokens on the RTX A6000. Is there a way I can speed that up?
Bruce23#6204: GPTJ-6b
StellaAthena#3530: You should not compare your computer with its commercial GPU to a cloud service provider.
tpapp157#3643: Yeah. The advantage of powerful hardware is only relevant if you have the ability to pre-stage your data next to the processing hardware. In a cloud API setup, your throughput is dominated by network latency and communication overhead.
RRavier#0355: Look up outlier/fault detection/analysis. High level of what every method does: embeds data in a space so that the weird stuff is far away, in some distance metric, from the not-weird stuff. There's a bunch of different things to try. Whether it'll work is a different story. Problems like this usually require more assumptions on the data at hand to get anything reasonable
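A minimal sklearn sketch of that embed-and-measure-distance idea (random data stands in for real features):
```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

X = np.random.randn(1000, 16)  # stand-in for your feature vectors

# Isolation-based: anomalies are easier to separate with random splits.
iso_labels = IsolationForest(random_state=0).fit_predict(X)       # -1 = outlier

# Density-based: anomalies sit far from their nearest neighbors.
lof_labels = LocalOutlierFactor(n_neighbors=20).fit_predict(X)    # -1 = outlier
```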
immibis#3179: What? There is no way communication overhead takes 20 seconds
JustAMan#3353: Hello, I'm looking for a ML model that can take a webpage screenshot along with markers for the text inside the webpage and then filter out the noise according to a predicate. Does this model exist and if not, then how do I approach this problem?
immibis#3179: what is noise?
JustAMan#3353: things on the page that are less of a match to the predicate
JustAMan#3353: Let's say I want a specific paragraph on multiple webpages based on a given criteria, then that's what I'll get from the model
JustAMan#3353: I've seen another approach which is to extract all you can, then filter or search through the data. This does not work for me since the location of the paragraphs or headers matters as they represent very different data
immibis#3179: what is the predicate? If you already have some way to detect which stuff is important to you, you don't really need machine learning
JustAMan#3353: it's to scale, let's say you need to do this on thousands of websites. Obviously classic algorithms do not suffice as they rely on html structure etc... not to mention the dynamic nature of the query, I don't need the same thing every time I scan
JustAMan#3353: in contrast to a general search engine, my domain is strict, so it is rather limited. I never ask for pictures or videos for example
rom1504#5008: If you're dealing with large data like videos maybe
For text it's definitely not true since it's very small, and for images it's quite enough to have 10Gbps cards
However yes having good interconnect is useful for big models and gradient communication.
chilli#5665: also, fun fact, it's not "volatile gpu util" - the volatile is part of the line above
chilli#5665: time to make a meme about it
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/950092851138666576/unknown.png
bmk#1476: honestly i never even looked at the header
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/950093696664231956/unknown.png
chilli#5665: lol
immibis#3179: > CUDA Version: 10.0
is that supported? My card says CUDA 11.4 and Tensorflow still refuses to use it
immibis#3179: wait maybe I have that backwards. Maybe my card is too old for CUDA 11.4 to use it
immibis#3179: either way something doesn't work. I was under the impression old stuff wasn't supported
immibis#3179: maybe pytorch does better 🙂
Technobird22#2055: Okay, will do - thanks!
chilli#5665: https://twitter.com/chhillee/status/1500547396945670144?s=21
alstroemeria313#1694: https://github.com/unixpickle/sk2torch
alstroemeria313#1694: So we could make SVM classifiers out of CLIP embeddings and obtain gradients wrt the generator's parameters.
alstroemeria313#1694: Or SVM regressions.
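Something like this, going by sk2torch's README (untested; assumes SVC is among the supported estimators, with random vectors standing in for CLIP embeddings):
```python
import torch
import sk2torch
from sklearn.svm import SVC

emb = torch.randn(200, 512)              # stand-in for CLIP embeddings
labels = (emb[:, 0] > 0).long()          # stand-in labels

svm = SVC(probability=True).fit(emb.numpy(), labels.numpy())
torch_svm = sk2torch.wrap(svm)           # now a torch.nn.Module

x = emb[:1].clone().requires_grad_(True)
p = torch_svm.predict_proba(x)[0, 1]
log_odds = torch.log(p) - torch.log1p(-p)
log_odds.backward()
print(x.grad.shape)  # gradient wrt the embedding, to push back through CLIP
```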
DigThatData#7946: https://docs.rapids.ai/api/cuml/stable/
DigThatData#7946: actually... I bet you can't backprop through RAPIDS abstractions. tbd
DigThatData#7946: implicit differentiation ftw I guess
Dri0m#3828: @triggerhappygandi DeepNet be like https://cdn.discordapp.com/attachments/729741769738158194/950404140515872868/EKaM6e-XUAATeWx.png
triggerhappygandi#0001: They didn't even make the model big. Just too narrow and deep like a straw
triggerhappygandi#0001: Smh my head
kyle184473927;7#0973: Would any researcher here be interested in being a judge for a Research Competition hosted by Georgia Institute of Technology (Georgia Tech) this upcoming month? Great opportunity to see the newest research in robotics, AI, and software out of one of the top research universities, as well as scope out top talent. Can be done virtually, and apologies for the off-topic post
Caelum#8192: :morelayers: but maybe scaling in that direction can allow for distributing the network better?
Crit#0843: hugging face has no dataset for semantic search? https://cdn.discordapp.com/attachments/729741769738158194/950462117759295508/unknown.png
EricHallahan#1051: You will probably find more help in the HF community forums for that than here IMO.
Crit#0843: :harold:
Crit#0843: probably should do that
Crit#0843: kind of surprising though that it's not on HF
EternalRecursion#0071: Hi, I have a dataset of around 12000 poems which I want to fine tune a language model on for poem generation. Can someone help me determine which model might be best for me to use, and will I be able to train it off my RTX 3060, or are there any free cloud services I could use that might be better?
rom1504#5008: all datasets are good for semantic search
rockenots#6906: I'm looking at the Project Menu and I'd like to work on #22 ([RFP] Can large models do a simple task (i.e arithmetic) perfectly?). How should I get started?
Daj#7482: That RFP (Request for Plot) was created by @bmk , who might have comments for you, but generally the way an RFP works is by requesting a certain kind of plot (as described in that project description), and then someone (i.e. you) figures out a way to make that plot and then you and others take it from there
Daj#7482: It's not meant as a shovel ready project that comes with mentorship and more an idea for a cool independent project someone could attempt
tpapp157#3643: Haven't there been enough papers recently showing that they can't? There was the one just a couple weeks ago showing accuracy on math problems was correlated with number frequency in the training set.
Daj#7482: I don't think that is at all conclusive that it can't be done in principle
StellaAthena#3530: It's non-obvious to me that that's a bad thing, tbh. Also, the model they were studying was bad at arithmetic regardless.
Tinytitan#5596: *I* have difficulty adding numbers more than 15
StellaAthena#3530: :100000IQ: It's not bad at math, it's just more human-like in its intelligence
05bmckay#1766: Does anyone have any idea about how to train on encrypted text (without the decryption key)?
AI_WAIFU#2844: pretty sure you can't do that, and if you could it would be bad encryption
EricHallahan#1051: ~~Train on ROT13 text~~
Ravna#1831: How to rot13 Japanese?
asparagui#6391: hiragana --> rot13 --> cebsvg!
n.kh.l#5814: I’m doing something similar with songs instead of poems. If you want to, feel free to DM me
ethan caballero#6044: https://twitter.com/ethancaballero/status/1501254558651064320
https://twitter.com/mustafasuleymn/status/1501228731779543040
Daj#7482: why are you like this
Daj#7482: :berk:
ethan caballero#6044: I'm legitimately excited about it. and it's an Anthropic clone. Their focus is scaling and safety: https://jobs.ashbyhq.com/inflection/d9ea657d-a27a-4b40-b84f-36980f9777eb
Daj#7482: There is basically zero information, how is this possibly an "Anthropic clone"?
Daj#7482: The way you phrase it sounds pretty insulting imo
Daj#7482: Do you know what the word "clone" means?
ethan caballero#6044: scaling part:
https://twitter.com/mustafasuleymn/status/1501236697920638980
Daj#7482: it does not mean "has a few vaguely similar keywords on their website"
Daj#7482: None of these people are known for their alignment or safety backgrounds (or even scaling)
Daj#7482: unlike Anthropic's founding team
Daj#7482: It also literally calls itself a _consumer software company_
Daj#7482: lmao
bmk#1476: the only part of anthropic that seems alignmenty is the interpretability people under chris olah
bmk#1476: the rest of anthropic is just scaling
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/950839907063066624/test.png
Sphinx#2092: https://cdn.discordapp.com/attachments/729741769738158194/950840235577729034/06Rwwij.png
ethan caballero#6044: some context:
https://greylock.com/firm-news/welcome-mustafa-suleyman/
triggerhappygandi#0001: Discord bacc :hap:
bmk#1476: I think more spinoffs good because it splits up capital and compute resources, making it harder to scale
bmk#1476: cashgrabs good actually
triggerhappygandi#0001: The website looks kinda neat
ethan caballero#6044: ~same brown color as anthropic
triggerhappygandi#0001: My pre dopamine hacked preteen brain is mesmerized by the swirly lines
bmk#1476: I'd love it so much if every star capabilities researcher started their own startup selling consumer products, soaked up $100M of funding, and bought their own dedicated hardware
bmk#1476: that would set back AGI research a ton in multiple ways
triggerhappygandi#0001: I mean, 2 people can wear the same dress.
bmk#1476: 1. a lot of these startups will fail, making people more suspicious of funding AI and potentially triggering an AI winter
bmk#1476: 2. more competition over hardware helps keep the current GPU shortage up indefinitely
ethan caballero#6044: ai winter ain't happening
triggerhappygandi#0001: Promise powerful chatbots after 3 years and fail to deliver consistently :goose7:
bmk#1476: 3. depleting the pool of capital and talent makes it harder to do AGI research
triggerhappygandi#0001: Discord still isn't fully functional for me :/
bmk#1476: images are a scam anyways
bmk#1476: text is All You Need
ethan caballero#6044: Both images say "Ethan is right".
triggerhappygandi#0001: 2. :aGooseGooseGooseGoose:
https://www.google.com/amp/s/www.techspot.com/amp/news/93611-tsmc-rd-executive-believes-chip-shortage-last-until.html
bmk#1476: no it matters because this draws talent and capital and compute hardware away
bmk#1476: :ultragoose:
ethan caballero#6044: lol at the emojis not loading
bmk#1476: :goose10:
bmk#1476: sorry folks you'll just have to memorize all the goose emote numbers
bmk#1476: :goose2:
triggerhappygandi#0001: I know goose10
bmk#1476: :goose12:
triggerhappygandi#0001: :goose3: vs :3goose:
alstroemeria313#1694: > For fun, here's a vector field produced by differentiating the probability predictions of a two-class SVM https://cdn.discordapp.com/attachments/729741769738158194/950847290761945158/unknown.png
alstroemeria313#1694: it looked weird to me so i made a version where i differentiated the log odds instead https://cdn.discordapp.com/attachments/729741769738158194/950847608769892362/svm_vector_field.png
alstroemeria313#1694: (Actual applications of this are like, using SVM classification or regression on CLIP embeddings then backpropagating through the trained SVM and CLIP to guide a generator)
alstroemeria313#1694: Sometimes you just need a thing that is more powerful than linear regression but not as complex as an MLP.
alstroemeria313#1694: (Because you are in a low data regime or whatever)
Emad#9608: I don’t think there is actually that much of a chip shortage tbh. Could get 2k A100s within three months if needed from multiple discussions, that’s probably enough to manage most things
Emad#9608: More there is a human capital shortage as a bottleneck, need to grow some more
bmk#1476: i was thinking frontier-pushing levels of compute
bmk#1476: also yes soaking up human capital is important too
bmk#1476: also gotta get the capabilities->alignment pipeline polished
Emad#9608: “When humans want to control a computer, they need to learn a programming language in order to provide instructions, he added, or use a mouse to navigate and engage with things on the screen. “All of these are ways we simplify our ideas and reduce their complexity and in some ways their creativity and their uniqueness in order to get a machine to do something,” Suleyman said. The British entrepreneur claimed a new suite of technologies that Inflection will aim to develop will eventually enable anyone to speak to a computer in plain language.” https://www.cnbc.com/2022/03/08/reid-hoffman-has-set-up-a-new-ai-company-with-deepminds-co-founder.html
EricHallahan#1051: I was talking to an engineering-manufacturing firm last Thursday, and I asked how resource shortages were impacting their business. They said that they had almost no effect on them since they had been able to haggle their suppliers. 🤷♂️
Emad#9608: “2k A100s is all you need” - Emad 2022
bmk#1476: that's like roughly equivalent to one v4-2048 right
ethan caballero#6044: In case y'all aren't aware, reid hoffman is into AGI:
https://www.youtube.com/watch?v=-W-eTwnxLw4
bmk#1476: ok so how do we make a good capabilities->alignment researcher pipeline?
ethan caballero#6044: SBF
bmk#1476: so eleuther has the advantage that we've made alignment high status around here
bmk#1476: which motivates people to learn, and we can enable that with stuff like the alignment reading group
bmk#1476: but we need to really scale this up
bmk#1476: elaborate
ethan caballero#6044: He'll probably be the biggest source of funding for the movement.
bmk#1476: no, I mean, how do we actually do the pipeline, given funding?
EricHallahan#1051: Funding is not the problem; it is the process itself which needs to be established.
uwu1#4864: understand their motivations for capabilities research and figure out how to fulfill them for alignment
Emad#9608: It’s about half a Selene, similar to Polaris at Argonne which is 12 in Top500 list
ethan caballero#6044: I think @david_krueger's strategy is pretty good.
Get Academia/the_sources_of_prestige to believe that scaling solves all capabilities, and that (as a result) AI Alignment is the only actual unsolved problem (and as a result is the most prestigious problem):
https://twitter.com/DavidSKrueger/status/1486450889942519823
https://twitter.com/DavidSKrueger/status/1500453173496008706
Emad#9608: People are main bottleneck to alignment but there isn’t a problem map of where to chuck what type of peoples. Um sorry I didn’t share my notes will finish when new laptop tmrw and pop on alignment general
Emad#9608: Funding and compute not a problem
bmk#1476: my general feeling rn is that it's better to get people to the point where they can decide what projects to work on themselves rather than actually trying to allocate projects top down
bmk#1476: like it's pretty hard to put more than like 2-3 full time people on a project unless it's a megaproject that can be divided down into 2-3 person subprojects, and rn I don't think there are many great alignment megaprojects
bmk#1476: also people are way more productive working towards something they find promising
Emad#9608: Does that mean the space is lacking perhaps 20 full time people?
uwu1#4864: what about kaggle esque competition?
bmk#1476: as of rn I think the best thing to do with 20 full time people is to put them through some kind of course to get them up to speed, and then give them a bunch of guidance but ultimately have them choose their own directions
bmk#1476: I think alignment is uniquely bad for this format
Emad#9608: Idk some would want to go and build a big red button
uwu1#4864: i think that also has the benefit of not universalising values and letting diverse perspectives be given a shot at being represented within aligned systems if they arise
Emad#9608: First AGI wins and all that
bmk#1476: ok you're going to have to expand on what you mean by "kaggle esque competition"
bmk#1476: because this doesn't seem to follow at all from my understanding of kaggle like competitions
uwu1#4864: oh I meant that for the it's better to let ppl choose what to work on
bmk#1476: oh
uwu1#4864: sorry :3 but also one could imagine an open competition letting anyone participate as letting more people give it a shot than just who the funders for the 20 or whatever think is right
flowpoint#7450: as a noob, i wish there was an alignment playground, and accessible toy problems with practical relevance
what's the mnist of alignment?
bmk#1476: the problem with competitions is what should the metric be?
uwu1#4864: that can be refined over time
bmk#1476: no, this is a fundamental problem
bmk#1476: alignment is not the kind of field where you can just slap a metric on it and have people make the number go up
uwu1#4864: i mean yeah exactly. you'll need to run many and iterate on the metrics used. Also it could just be a panel of judges, like the Millenium Prize
bmk#1476: that would not only be pretty useless, it could be actively counterproductive
ethan caballero#6044: :guilty:
bmk#1476: even then, with a panel of judges, you restrict yourself to things that look good to judges
EricHallahan#1051: something something Goodhart
uwu1#4864: yup but it could better than having the judges also pick the projects worked on
ethan caballero#6044: How long do you think it will be until there is relatively trendy Alignment Reseach Conference like ICLR?
ethan caballero#6044: Hit up Steinhardt, Krueger, Manell, Leike, and Christiano to start Alignment Reseach Conference.
bmk#1476: I guess that could be the case
bmk#1476: so retroactive funding
uwu1#4864: maybe it could be like Crufts or other pet/animal shows. People present their aligned agent and the judges go through them through a set of qualitative and quantitative tests
bmk#1476: you lost me again
uwu1#4864: it's just a model for where people train agents to do stuff that humans evaluate
bmk#1476: I don't think "make this agent as aligned as possible" is remotely the right way to think about this
cfoster0#4356: I must satisfy your interests with Friendly AI and ELK :paperclop:
bmk#1476: like as an analogy, we're trying to get to the moon, and rewarding people for making agents more aligned is like rewarding people for getting as far off the ground as possible
bmk#1476: so people will build towers, or hot air balloons, or whatever
uwu1#4864: Rockets were the product of an arms race though which would probably be good to avoid with AGI. we only thought to go to the moon after the development of such systems
bmk#1476: when what we actually want is someone to go figure out how orbits work and the rocket equation and rocket engines that can provide the needed delta v
ethan caballero#6044: Alignment's traction with the mainstream right now feels like where Deep Learning's traction with the mainstream was in 2012, the year before ICLR was founded.
bmk#1476: the problem right now is we don't even know how orbits work or what delta v is
bmk#1476: we need to first develop those concepts to even meaningfully think about going to the moon
bmk#1476: we also need to develop better materials and rocket engines and so on
uwu1#4864: so first we need a telescope
bmk#1476: but rewarding people for getting closer to the moon doesn't incentivize that
Emad#9608: PaperclipCon
uwu1#4864: then it feels like an AGI arms race to align your AGI with your sides view before the other side's AGI destroys you would be the only way
bmk#1476: ..what?
bmk#1476: can you elaborate on how you arrived at this conclusion
bmk#1476: the problem we're trying to solve isn't who to align with, it's how to make an AGI aligned with literally anyone at all
AI_WAIFU#2844: If such thing ever comes into existence, 99% chance it's actively going to be counterproductive and and will at best produce results that are orthogonal to the goals of x-risk reducing alignment research
AI_WAIFU#2844: Lot of people just don't get that if we fuck this up it kills us all
AI_WAIFU#2844: in their souls
uwu1#4864: thinking to why rockets were developed, or even computers, in general when there is a transformative development that one wouldn't have considered otherwise, it seems to often arise from pitting human ingenuity against other humans, which seems to have mostly occurred due to wars and arms races
bmk#1476: 1. that was not the parallel I was intending to draw at all
bmk#1476: 2. even then, it still doesn't follow that we want people to compete on *who to align the AI to*
bmk#1476: that's the wrong thing to think about
bmk#1476: the threat isn't that AI will be aligned to the wrong person
Emad#9608: One trend I’ve noticed from speaking to alignment folk: 5-10 year max horizons, as we will probably all get paper clipped
bmk#1476: the threat is that someone will think they solved alignment, or worse they haven't even thought about alignment, and then they try to give it a really good set of values, and then it doesn't work and kills us all anyways
bmk#1476: the reason you don't want people competing to be the first to make an aligned AGI is that the first person to be done will probably have cut corners or only solved a subset of the problem or just convinced themselves that alignment isn't a problem
Emad#9608: Maybe I’ve just spoken to the glummer ones :thinkies:
bmk#1476: hey I have longer timelines, nearly 20 years median
uwu1#4864: that will just be a prerequisite to getting the AGI to kill the enemy, to not have it kill your own side. not saying that this is a desirable outcome or that it will solve alignment as you say, it could still just kill us all and that the risk of thinking you've done it when you haven't will still be present. And yeah I'm not saying that this is the desirable outcome but I guess I can't imagine a different one
bmk#1476: much different
Emad#9608: What’s a cotra
bmk#1476: I don't get what you mean
ethan caballero#6044: greatest AGI superforecaster
AI_WAIFU#2844: ~~yeah but cotra is way off the mark~~
bmk#1476: so it seems like you fundamentally think of alignment as a thing that everyone will realize needs to be solved so they can get the AI to do what they want
Emad#9608: See if 20 years
bmk#1476: and so everyone will first solve alignment to get the AGI to be aligned with them
ethan caballero#6044: reincarnation of Kurzweil/Moravec
Emad#9608: You can grow a whole bunch of people
Emad#9608: Train em up proper
bmk#1476: 20 years is the median though
bmk#1476: the distribution is very broad
AI_WAIFU#2844: This is true to an extent, the problem is that people will opt for bandaid solutions, dial up the power, and then the bandaid won't work and they'll be powerless to do anything about it.
bmk#1476: i mean that's what ive been arguing
Emad#9608: Well you can probably ignore 5 years as goose is cooked anyway and a few folk probably won’t make a difference; focus on what to map for 10 years when everyone has exascale access and stuff
bmk#1476: .
uwu1#4864: no but it's a prerequisite
bmk#1476: by alignment i mean by shorthand "solving alignment completely and not just bandaid"
bmk#1476: can you explain your thinking
ethan caballero#6044: Get SBF to fund prestigious annual International AGI Existential Safety Research Conference with proceedings and posters and stuff.
bmk#1476: i dont like this idea for the aforementioned reasons
AI_WAIFU#2844: Yeah, as much as I shit on the ratsphere for not cranking out enough alignment researchers, you really can't just throw warm bodies at the problem, and doing so will make everything worse.
bmk#1476: at least, you have to do it the right way
bmk#1476: and creating incentives to do alignment as attire is not the right way
uwu1#4864: if AGI really is so powerful, it is also a weapon. a prerequisite for weapons is that they are controllable, or in this case aligned (or, Band-Aid aligned let's say). Furthermore, it seems that many scientific developments have come from the search for better weapons. Furthermore, such an AGI will also have to operate in an adversarial environment, potentially being developed in a race against other superpowers developing such systems. Thus, there will be significant pressure to figure out how to align your AGI to your values such that it can't be used against you by the enemy. And having the AGI aligned to your values is a prerequisite to having the AGI aligned to anyone. And this is not a desirable outcome! I agree with you that alignment being completely solved is the thing actually needed, especially with the ease of increasing the power of the weapons systems vs scaling say nuclear weapons to the point where you accidentally kill everyone at the test firing (which is the risk with turning up a Band-Aid aligned AGI). But, being band aid aligned feels like a prerequisite for being fully aligned? Unless alignment appears fully formed out of nothing, it would be built up from subcomponents, which would interact with each other, and one could consider systems missing certain components or being incorrectly assembled as appearing on the way to such a fully aligned system.
AI_WAIFU#2844: I don't think that's sufficient, we need a full on training pipeline
AI_WAIFU#2844: also it's not obvious how to select for the right group
ethan caballero#6044: I think the fastest way to grow the number of people doing "real" alignment also involves inadvertently increasing the number/ratio of people doing "alignment as attire".
It's better to have 1M people researching "real" alignment and 1M people researching "alignment as attire", than it is to have 1 thousand people researching "real" alignment and 10 people researching "alignment as attire".
bmk#1476: my counterargument is that:
1. lots of people simply have not internalized the idea that alignment is really fucking hard and so will settle for something much less aligned because they *don't realize* that it's not sufficiently aligned. it doesnt help that lots of these people have the "move fast and break things" mindset. and it doesnt even matter what most people think, as long as one group convinces themselves that theyve solved alignment they can destroy the world for the rest of us
2. bandaid alignment is to real alignment as building a hot air balloon is to building a moon rocket. like sure you dont have a hope of going to the moon in a world where you dont have hot air balloons, but it's not like building better and better hot air balloons will get you to the moon, and in a world where lots of people seem to think that improving hot air balloons is real progress towards going to the moon, this is really dangerous because someone is going to think that their hot air balloon is a moon rocket and then blow up the world
T_Olabode#7343: What are the most common reasons why AI companies fail?
(some guesses: No market, Vapourware etc)
bmk#1476: ¯\_(ツ)_/¯
flowpoint#7450: we shouldn't always emphasize how hard alignment is,
it easily discourages ppl. from trying
bmk#1476: the part where we all die a horrible painful death if we don't figure it out should hopefully provide that encouragement
uwu1#4864: yes but hot air balloons weren't developed for going to the moon, they were developed to give you a nice view (and also provide recon and drop weapons on the enemy). Which also later led to planes and rockets. What I'm saying is that, wanting to go to the moon will not lead to the development of technology that will enable that ever, no more than wishing for world peace will achieve it - technology is only driven by incremental steps building on previous steps. And often those steps are taken due to having to fight other humans also taking such steps, because humans are lazy and don't want to do stuff unless they have to. Especially If we can't even conceptualise the building blocks of what we need
uwu1#4864: it can't even get people to quit smoking not sure why it would work here
bmk#1476: I think at this point you're taking my analogy too literally
bmk#1476: the original point of my analogy is that not all incremental work that gets you closer to your goal in terms of some metric is actually progress towards that goal
flowpoint#7450: no, future regret is so bad at motivating ppl.
most just start procrastinating then
bmk#1476: some of that work is a waste of time or even a step backwards
bmk#1476: I agree that getting people to compete is a good way to get people to do things
uwu1#4864: yes and I'm saying that there's no way but to set some metric and try to achieve it, even if later it was the wrong metric. If the risk of the inbetween systems is too high, then there are no instances in human history of such technology being developed. Like, if perfect is not the enemy of good, then it doesn't appear that humans are capable of it
bmk#1476: I just don't think "compete to be the first to make an aligned AGI" is a good idea
bmk#1476: well the hard part is picking a metric that isn't literally counterproductive
bmk#1476: I think I'll concede that metrics are a really good way to get people to make progress on something, though I still don't agree that it's the only way
bmk#1476: but even for that, I still think that picking a good metric is really hard
uwu1#4864: I think it's really hard too
uwu1#4864: but also if you don't explicitly pick one then you are just using an implicitly defined one
bmk#1476: i think that's a fair argument
bmk#1476: so that brings up the question
bmk#1476: what metric do we want to use
uwu1#4864: im not sure. knowing that would also be solving alignment? Maybe we need a competition to determine the metric which is then wargamed out by opposingly motivated parties with results determined by a fair arbitrator. but also idk if that would just be DnD esque improv theater or would actually lead to accurate results
uwu1#4864: we could also think about ways to end AI research permanently, e.g global spread of anti-intellectual regimes
uwu1#4864: but that seems like a pretty bad outcome for us personally
bmk#1476: that also seems bad for alignment research
bmk#1476: and also basically everything else
uwu1#4864: i wonder if a model being able to somehow prove that it has forgotten something would be a useful primitive in alignment
bmk#1476: @uwu1 i think this might interest you https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=aL2XZ5HNYyW7ZWdce
bmk#1476: i think that would be useful but really hard to accomplish
bmk#1476: the "prove" part sounds hard
uwu1#4864: nice! yeah epistemology seems to have been kind of forgotten in modern research for some reason
DigThatData#7946: https://twitter.com/bneyshabur/status/1494002534414831616
jacquesthibs#6131: Has anyone here trained models on the M1 chips? If so, what do you think of the Mac Studio?
jacquesthibs#6131: I’ve been trying to figure out which mac I should get next. I wanted to buy something low-end enough and just ssh into a separate linux machine when I need to train models so Mac Studio might be overkill.
jacquesthibs#6131: Note: I love the mac ecosystem so I will definitely be getting a mac.
zphang#7252: m1 helps my pycharm not lag
EricHallahan#1051: I just use a 4 year old laptop which has a broken hinge. 😛
jacquesthibs#6131: Yeah, my macbook pro is 7 years old at this point
jacquesthibs#6131: Yeah I guess it won’t be coding that decides how powerful a machine I get.
bnanrz#1693: Just got an m1 yesterday, pycharm doesn’t lag at all. Unlike on my 2016 mbp for work
bnanrz#1693: M1 pro I guess. Gotta throw that in there
bnanrz#1693: If you want me to try something I can try my best lol
bnanrz#1693: Yeah i waited a few weeks to get 32gb of ram instead of the 16gb. im glad i did as it uses 18gb with just the normal stuff open
jacquesthibs#6131: Oh yeah, I often have 50+ tabs
jacquesthibs#6131: Although Tab Suspender has helped with ram issues I believe
uli#4334: look what #memes caused https://cdn.discordapp.com/attachments/729741769738158194/950941792830894160/unknown.png
uli#4334: lmao
DigThatData#7946: https://github.com/Nixtla/neuralforecast
xloem#0717: here's a new paper purporting an O(n) alternative to transformers. haven't reviewed it: https://arxiv.org/abs/2203.03691
StellaAthena#3530: We discussed this a bit last night.
tl;dr Any paper that compares to 11M transformer models isn’t an “alternative”
Keverino#1093: What are in your opinion AI/NLP conferences that tackle GPT model like problems and development? Maybe there are chances to meet people from this discord too? 😄
Deleted User#0000: https://www.cnbc.com/2021/01/27/deepmind-co-founder-investigated-by-law-firm-after-staff-complaints-.html
Deleted User#0000: so I dont know anyone itching to join there lol
Deleted User#0000: :berk:
Sphinx#2092: random_capabilities_z
bmk#1476: oh yeah that reminds me @Deleted User if I wanna visit the DM office and chat to some alignment people who do I talk to
AI_WAIFU#2844: just walk into suttons office
bmk#1476: I mean the London office
Deleted User#0000: Shane naturally
bmk#1476: aight so do I like email him
Deleted User#0000: https://vkrakovna.wordpress.com may be a good bet
bmk#1476: oh right I think I might be able to ask Mikulik
Deleted User#0000: yeah him too
Dashiell#8739: what prompted conferences to adopt the whole "open review" thing? was it concerns about the big labs getting preferential treatment?
Sphinx#2092: Isn't it only ICLR?
zphang#7252: now ARR too I guess
zphang#7252: or I guess it depends on the extent of "open review"
RRavier#0355: it's getting more popular
RRavier#0355: I dunno how many bullshit reviews you've had but I've had more than I can count. the entire purpose is on accountability
RRavier#0355: whether it works is different
RRavier#0355: but it does in my experience make getting reviews where the entire text is like "there is no novelty" and "you ignored all of the literature" less likely in theory
RRavier#0355: public shaming is a powerful tool might as well embrace it
StellaAthena#3530: Reviewers' identities aren't revealed though, so it's not really publicly shaming them?
RRavier#0355: well no they're not fully known but like, if bullshit is pointed out in the rebuttal
RRavier#0355: literally everyone can see
RRavier#0355: at some level the blindness goes away, so even if the general public wouldn't know, the metas or the meta metas would know
RRavier#0355: with all of these reviewer meta scores now or whatever
StellaAthena#3530: I guess what it boils down to is that I don't know if I should expect people to care
RRavier#0355: it's a theoretical exercise I don't think anyone really knows if it works
RRavier#0355: it's a case of "there's gotta be something better than this"
RRavier#0355: -shrug-
Dashiell#8739: as of right now I've gotten zero reviews of any kind 😅 , I've just seen what seems like mixed reactions to them. And at first I thought it might have been part of the same push for more strict double blind reviews, but then I realized that doesn't make a whole ton of sense
RRavier#0355: double blind is practically dead in science because of arxiv.
RRavier#0355: and other equivalents
RRavier#0355: only way to really get any amount of blindness is to submit to venues that do triple blind or embargos
RRavier#0355: e.g. nature, science etc
bmk#1476: idk I think double blind reviews were never that great in the first place
random person#5234: I mean I dont think even triple blind would work if, you know, you have a paper that mentions results with JFT-300M or some well known internal dataset
bmk#1476: "to preserve the integrity of this triple blind study, no one involved will have any idea why anyone is doing anything"
StellaAthena#3530: It’s pretty obvious who people are in our subfield especially, but I’ve seen (don’t recall where exactly) good evidence that blindness increases diversity in both the people and the institutions that get accepted
atllas#0428: is there a way to generate a face
atllas#0428: https://cdn.discordapp.com/attachments/729741769738158194/951251374195232848/faceaifaceface.png
atllas#0428: like this?
atllas#0428: i want to get from
atllas#0428: https://cdn.discordapp.com/attachments/729741769738158194/951253091431374978/GANDHIAN12.png
atllas#0428: to
atllas#0428: https://cdn.discordapp.com/attachments/729741769738158194/951253154329133057/Martin_Luther_King2C_Jr.png
asparagui#6391: morphing or something else
alexandrost#2936: Hello! Any ideas on choosing a CPU for faster language model inference?
alexandrost#2936: Is there like a community favorite ?
circuit10#0158: You need a GPU for that (CPUs are really really slow at it)
Caelum#8192: Search "GPU" in:20b/gpt-j
𓅬 gabriel_syme 𓅬#3220: I love it, learn a lot by finding my favorite papers in openreview and then going through summaries and critiques
alexandrost#2936: Thank you. Sorry I meant , a CPU to support a GPU Deep learning system
alexandrost#2936: Like, when you build a deep learning rig, you have to use a gpu and a cpu that has the necessary specs so that it does not become a bottleneck during inference
cfoster0#4356: You should try #art or a different server
RRavier#0355: Look up image interpolation. So long as pose and resolution are similar standard image processing tools should be good. If black and white, optimal transport. Otherwise whatever the current flavor of DL
atllas#0428: i think the challenge is going from the generated image to a real face
guac#4716: it's an interpolation you'll always end up at the start/end images
RRavier#0355: You'll have to register the images yeah but OT or any reasonable method would take care of that. Same pose/resolution would make those concerns negligible
atllas#0428: Got it, thank you !
chilli#5665: Folks might be interested in this: https://dev-discuss.pytorch.org/t/what-and-why-is-torch-dispatch/557
cc: @Kharr @Lucas Nestler (ClashLuke)
ilovescience#3282: wow very interesting...
love to see dispatch stuff used in Python and PyTorch... type dispatch is used heavily in fastai...
I wonder if there are any applications of this to be used in fastai (there is some use of __torch_function__ magic, but I am not too familiar with it)
on a separate note, your code examples are not rendering properly, only showing on a single line...
chilli#5665: ah, thanks for the catch
chilli#5665: updated!
chilli#5665: yeah, PyTorch has always had a pretty extensive dispatcher system that supports multiple dispatch, etc.
chilli#5665: The main difference is that this exposes it to the users 🙂
ilovescience#3282: well now you have :berk:
guac#4716: damn ezyang blog posts are always so solid
chilli#5665: smh I wrote this
chilli#5665: unless you're referring to the dispatcher blog post linked in the post
guac#4716: ah i got caught up in the ezyang linked post 😅 but it's wicked how deep ya'll are taking dispatching from python 👍
ILmao#5683: You're forever cursed to be in his shadow :P
ILmao#5683: (it was a good post)
Octopirate#9999: lolyea used to use fastai at my last job, definitely not something i see academia using as much since it's not as customizable
Octopirate#9999: imo
Octopirate#9999: kind of semi the same deal with keras
Louis#0144: Omg hi ezyang
jaredmadere#8538: can someone help me better understand how cutn / cut_pow function within diffusion notebooks like JAX and disco?
I know that a higher cut_pow means smaller cuts- but I’m confused how the cuts actually work
do smaller cuts mean that the image is ‘collaged’ out of smaller cuts? ie larger cuts would make the image look like it is collaged out of large chunky cut outs? OR do the cuts happen as part of the evaluation process and their size does not affect how ‘chunky’ the actual end image looks? I’m trying to figure out how to adjust the settings to make the images it produces look less geometric/ look like they are made up of smaller more refined chunks
if the cuts are smaller, do you need more cuts to fill up the same amount of space? or am I fundamentally misunderstanding how this works?
also- is this channel the best place to ask this sort of question?
alstroemeria313#1694: generally #art. Each cut is like, the area that CLIP is allowed to see/affect at once, we do them because CLIP's input size is fixed at 224x224 and we generally want much larger images. We do like 16-128 random crops each iteration and downscale them to 224x224 to get around this. :)
jaredmadere#8538: Thank you so much for explaining this…so is my idea about smaller cuts leading to images that look less ‘chunky’ / ‘collaged’ accurate or is that an inaccurate way of understanding how it will affect the final image?
alstroemeria313#1694: The smaller cuts are to put in detail and tend to make things look more collaged
alstroemeria313#1694: The larger ones are for global coherence
alstroemeria313#1694: Since the collaging effect comes from CLIP seeing different parts of the image separately.
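For reference, a sketch along the lines of the MakeCutouts module those notebooks use (defaults illustrative):
```python
import torch
from torch import nn
from torch.nn import functional as F

class MakeCutouts(nn.Module):
    def __init__(self, cut_size=224, cutn=32, cut_pow=1.0):
        super().__init__()
        self.cut_size, self.cutn, self.cut_pow = cut_size, cutn, cut_pow

    def forward(self, input):
        side_y, side_x = input.shape[2:4]
        max_size = min(side_x, side_y)
        min_size = min(side_x, side_y, self.cut_size)
        cutouts = []
        for _ in range(self.cutn):
            # rand()**cut_pow: higher cut_pow skews sizes toward min_size
            size = int(torch.rand([]) ** self.cut_pow * (max_size - min_size) + min_size)
            offset_x = torch.randint(0, side_x - size + 1, ())
            offset_y = torch.randint(0, side_y - size + 1, ())
            cutout = input[:, :, offset_y:offset_y + size, offset_x:offset_x + size]
            cutouts.append(F.adaptive_avg_pool2d(cutout, self.cut_size))
        return torch.cat(cutouts)  # [cutn * batch, C, cut_size, cut_size]
```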
ilovescience#3282: > since it's not as customizable
not true, fastai is quite customizable
Octopirate#9999: Aaaaaaas customizable
Octopirate#9999: Same with Keras
Octopirate#9999: Definitely customizable, just maybe better to go further down if you’re working with novel methods maybe?
Octopirate#9999: That’s my intuition for why I don’t see it as much at least
ilovescience#3282: eh i find it to work fine for my purposes, which is implementing and training SOTA unpaired image-to-image translation models
ilovescience#3282: fastai callbacks make it quite easy to implement lots of SOTA algorithms
T_Olabode#7343: https://twitter.com/janleike/status/1501986578456973317?t=0uqdrjLtudC3HF8MChBqvw&s=19
alstroemeria313#1694: https://github.com/pytorch/data how does this compare to webdataset
alstroemeria313#1694: like does it supplant it yet, or...
ILmao#5683: My foray into using fastai for research ended in monkey patching core parts of the framework. Suffice it to say not a pleasant experience
ILmao#5683: If you can "colour inside the lines", it's a fine library. But the assumptions it makes in the name of streamlining normal usage can cause quite a bit of friction for research if you don't.
ILmao#5683: https://github.com/pytorch/data/blob/19cf4530084820c54264141f22b82ba0e2997cfd/torchdata/datapipes/iter/util/tararchiveloader.py#L18 seems similar to the tar part of webdataset
ILmao#5683: Whether it fulfills the first point under https://github.com/webdataset/webdataset#related-projects IDK
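e.g., a hedged sketch assuming the torchdata datapipes API (shard path hypothetical, untested):
```python
from torchdata.datapipes.iter import FileOpener, IterableWrapper

dp = IterableWrapper(["shard-00000.tar"])  # hypothetical webdataset-style shard
dp = FileOpener(dp, mode="b")              # yields (path, binary stream)
dp = dp.load_from_tar()                    # yields (member name, stream) per file

for name, stream in dp:
    print(name, len(stream.read()))
```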
ilovescience#3282: what parts did you need to monkey patch?
ILmao#5683: Some of the output shape inference, parts of the xresnet models and some of how data was passed through the training loop, off the top of my head
ILmao#5683: It's been a while. Ended up rewriting it all with PyTorch Lightning 🤷
ilovescience#3282: > some of how data was passed through the training loop
why not use fastai callbacks?
ilovescience#3282: i never really liked pytorch lightning, didn't really seem to simplify stuff...
plus their hypocritical approach to open-source doesn't sit right with me
ILmao#5683: Can you attach auxiliary information to a batch with callbacks? I looked into it and as I recall you couldn't
ILmao#5683: The whole point of PL is to reduce some boilerplate. It's ugly and their development model isn't stellar, but it absolutely helped with writing research code
ILmao#5683: Whereas despite my best efforts with fastai, a lot of time was spent just fighting the framework
ilovescience#3282: i don't see why not?
ILmao#5683: Which is fine, it's not really designed for that
ilovescience#3282: I would need more details on what you are doing though
ILmao#5683: My recollection is that core parts of the training loop made assumptions around the structure of `x` and `y` in each batch
ILmao#5683: So when I tried smuggling in some extra bookkeeping information alongside the inputs, it would either break completely or discard it
ILmao#5683: Perhaps it's changed since, but having waded through the fastai2 internals more than once I'd be surprised if it has
ilovescience#3282: what do you mean by bookkeeping information?
ILmao#5683: Original sample IDs for augmented multi-crops of data
ILmao#5683: Which would be used at the end for calculating metrics
ilovescience#3282: i mean the easiest solution that comes to mind would be to store stuff in the callback, but that may not be the most elegant...
ILmao#5683: It wouldn't work unless both shuffling and batch assignments were fixed ahead of time
ILmao#5683: I looked into it 😛
ilovescience#3282: are you using a PyTorch DataLoader?
ilovescience#3282: tbh i don't know the details of your task/data but i would be very surprised if there isn't a way to do what you suggest, and if there isn't a way, it should be raised as an issue as something to add...
ilovescience#3282: are you in the fast.ai discord?
rom1504#5008: I read that it's the (wip) future integration of webdataset directly into torch. So not really a competitor
alstroemeria313#1694: ahhh
rom1504#5008: ... I'm not finding again where I read that though hmm
ILmao#5683: 1. I had to because the higher-level data APIs didn't work for my use case (asked about this, got crickets) 2. yes there was, either by not using most of the framework (which others did) or hacking into it (which I did) 3. yes, but since I don't use it these days it's on perma-mute
johnryan465#9922: Is there any projects people are working on involving Gaussian Processes and related Bayesian stuff at the moment?
Louis#0144: @StellaAthena John is a machine learning engineer, he reached out to me on twitter
cfoster0#4356: AFAICT there still isn't a widely used library for soft prompting of generative models, is there?
MaxHager#6351: I'm searching for an ML coliving in the bay area. Does someone know about something like that or wanna join? Ping me.
Octopirate#9999: trying to do it in nyc 😭
Keverino#1093: Has anyone ever worked with data from tables in GPT-like models?
mrShiba#4412: Does anyone here use colab pro plus
mrShiba#4412: They just released a new update that only allows 2 hours max runtime for GPU
mrShiba#4412: https://cdn.discordapp.com/attachments/729741769738158194/951842270221598770/image0.png
mrShiba#4412: I hope this is only me or this would be a very scummy move from Google
Sphinx#2092: > Copy of testnewModelv3
Classic JupyterNotebook setup.
mrShiba#4412: Lol, one reason why jupyter not suitable for production
Aspiring Scout#7875: Are there any anki decks for ML interview prep that people have found useful?
jordiae#4107: I'm looking for people to read together this article https://papers.nips.cc/paper/2021/hash/e614f646836aaed9f89ce58e837e2310-Abstract.html I want to make sure that I properly understand it. Anyone interested?
elderfalcon#4450: I saw @Lord Parfington post this earlier in the art channel and I don't think it got nearly as much attention (... er...) as it should have.
It's an inductive bias across hyperparameters that scales strongly across network sizes, pretty solid theory and practice. Obvious avenues for application here: https://www.microsoft.com/en-us/research/publication/tuning-large-neural-networks-via-zero-shot-hyperparameter-transfer/
Shouldn't be too hard to test on our current crop either given the big -> reduce -> tune -> transfer that they propose.
wabi-sabi#5811: Is anyone aware of existing work resembling the following:
1. Overlapping constrained optimization problems in which you are allowed to wiggle different subsets of parameters depending on which input you're working on. I am specifically thinking about a constrained optimization problem that corresponds to a binary search tree such that there's one optimization subproblem per subtree inside the tree. Whenever you add a node to a tree or rebalance a subtree, you redo the associated optimization problem while leaving all parameters unassociated with that subtree fixed.
2. Approaches to solving global optimization problems by reusing work done for local optimization problems. For example, consider the brachistochrone problem. If you know the fastest path from A to B and also know the fastest path from B to C, can you reuse information obtained in computing those paths to arrive at the fastest path that passes through all of points A, B, and C?
Just stapling the two local solutions together obviously would be a bad idea, but do the local solutions get you anything useful at all? This might or might not be able to steal ideas from repair methods in dynamic constrained optimization.
I have been playing with these ideas for several months and nothing has come of them yet. Asked an optimization guy at my school for input and he wasn't able to point me in a good direction, but didn't have any criticisms either.
EricHallahan#1051: PSA: PyTorch 1.11.0 is released!
https://github.com/pytorch/pytorch/releases/tag/v1.11.0
wabi-sabi#5811: Part of what I'm wondering is: can we substitute the phrase "inadmissible estimator" where repair methods for optimization use the phrase "infeasible solution" and end up with anything sensible? I feel like the answer is yes, just add the constraint that "you must not use an inadmissible solution" and then they coincide, but I'm not sure if that actually works.
RRavier#0355: what makes something inadmissible in this instance?
wabi-sabi#5811: https://en.m.wikipedia.org/wiki/Admissible_decision_rule
RRavier#0355: I mean it makes sense to me
wabi-sabi#5811: A guess that's optimal for a subproblem feels like a good start for finding the optimal solution to a global problem, and turning a bad guess into a good one feels like a similar sort of challenge as turning a disallowed solution into an allowed one, basically I am trying to find some formalism to get machines to use both of those insights.
RRavier#0355: they mean the same thing
johnryan465#9922: In the research channel I linked a paper where they perform hierarchical classification in a binary tree structure, some parameters are per node (they intentionally link some of them though)
johnryan465#9922: They do various methods to construct the tree, to try and find what performs the best
johnryan465#9922: It is probably a little too informal for what you want though
Lord Parfington#0012: very interesting, thank you for mentioning it, i'm really just a grub in the grand scheme, but i do wonder about all these novel papers and things than come running through the muck. you do think it's applicable to the art transformers then? with some elbow grease of course
alstroemeria313#1694: mm~
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/951945239441047572/Screen_Shot_2022-03-11_at_12.50.29_PM.png
alstroemeria313#1694: yeah like. i keep pointing out to people that the Adam LR has to vary with width because it is in the units of the parameters and with a larger width you have smaller parameter values
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/951947197950345256/Screen_Shot_2022-03-11_at_12.58.15_PM.png
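This is not the paper's actual muP recipe, just a crude sketch of that "Adam LR should shrink with width" intuition, with made-up numbers:
```python
import torch

def width_scaled_param_groups(model, base_lr=1e-3, base_width=256):
    # Scale matrix LRs down as fan-in grows, relative to a tuned base width.
    groups = []
    for p in model.parameters():
        if p.ndim >= 2:
            groups.append({"params": [p], "lr": base_lr * base_width / p.shape[1]})
        else:  # biases, norm gains: leave at the base LR
            groups.append({"params": [p], "lr": base_lr})
    return groups

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(), torch.nn.Linear(1024, 10))
opt = torch.optim.Adam(width_scaled_param_groups(model))
```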
Lord Parfington#0012: is that because the distance between the parameters is smaller in relation to the total width of the scale?
Lord Parfington#0012: if so, sounds like it's just a matter of perspective. backing up to see the big picture so to speak
Lord Parfington#0012: is that roughly the equivalent to focus, as well? like in photography, capturing the foreground while the background is blurry and vice versa?
elderfalcon#4450: Some people here may play the "cheap shot cynic" for this paper for easy community points -- so I'm pre-empting that a bit with this statement.
Having read quite a few papers and implemented a much smaller percent of them, I'd say if this is applicable it would be one of the most impactful papers of this year, easily. This has unreasonable amounts of value, it's hard to overestimate.
Especially if people have a lot of compute on hand, it's easy to snowball invest that.
Lord Parfington#0012: neat. very neat. let's hope it's the "next big thing". i had predicted it would come in february, but middle of march isn't too wide of the mark
sweg#8920: hey guys, thoughts on what would be considered sota now for low dimensional clustering?
Louis#0144: we arent doing low dimension?
Louis#0144: lol
Louis#0144: we're doing 512 dimensions
Louis#0144: (or is it 2048)
sweg#8920: im using umap for dimensionality reduction
sweg#8920: before clustering
Louis#0144: oh
sweg#8920: no?
Louis#0144: cluster before dim reduction
Louis#0144: lmao
sweg#8920: oh
sweg#8920: yeah
sweg#8920: ok
sweg#8920: that seems obvious
sweg#8920: LOL
sweg#8920: attempt 2
Louis#0144: LMFAO
sweg#8920: hey guys, thoughts on what would be considered sota now for high dimensional clustering?
Louis#0144: DBSCAN maybe?
Louis#0144: do people still do clustering with gaussian mixture models
elderfalcon#4450: A good rule of thumb is to be Scrooge McDuck with your information as much as possible. T'will save you many heartaches along da wae.
sweg#8920: :kerbal:
sweg#8920: i mean im not really sure what more to say
Louis#0144: @sweg i walked over to the clustering lab at GT
Louis#0144: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.OPTICS.html
sweg#8920: we have language model embeddings that are 2048 dimensional vectors
Louis#0144: they recommended this
sweg#8920: and we want to do clustering with them
sweg#8920: idk whats good for clustering now
sweg#8920: there is a *clustering lab*?
Louis#0144: yes
Louis#0144: surprisingly they are not well clustered
Louis#0144: kinda scattered across the ML building
Some Point Process#3793: https://blog.codinghorror.com/the-2030-self-driving-car-bet/
Deleted User#0000: stop trying to channel my brain i am a real person irl thanks
rom1504#5008: Did you have any success with language model embeddings ? There are usually not very good compared to embeddings from model trained on similarity
sweg#8920: i maybe didnt describe this properly
sweg#8920: they are embeddings from a clip-like contrastive model
sweg#8920: a text encoder
sweg#8920: well from carp if you're familiar with that
rom1504#5008: Ah ok
rom1504#5008: Yeah then people recommend umap
rom1504#5008: Kmeans work though
rom1504#5008: Also, knn is also very good for exploring
rom1504#5008: Faiss or <https://github.com/criteo/autofaiss> to use it easily, for example. It scales to whatever number of embeddings
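A rough sketch of the plain faiss route (normalizing first so inner product equals cosine similarity; `emb` here is just random placeholder data):
```python
import faiss
import numpy as np

emb = np.random.randn(10_000, 2048).astype("float32")  # placeholder embeddings
faiss.normalize_L2(emb)                  # unit norm, so inner product = cosine
index = faiss.IndexFlatIP(emb.shape[1])  # exact inner-product index
index.add(emb)
scores, ids = index.search(emb[:5], 10)  # 10 nearest neighbours for 5 queries
```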
rom1504#5008: Depends on what you want to do
MaxHager#6351: https://www.intelligencehouse.org/ ml coliving in bay area if someone is interested
chilli#5665: should probably provide more info about who the people organizing this are
MaxHager#6351: i set up the website and am currently alone, so i'm trying to make people aware of it. i put an info sheet on the website --> https://docs.google.com/document/d/1gdva3Ebr2xX7TijUL42lRBtvU9gJqb3JIofF1oGBhBI/edit
nostalgiahurts#3408: huh, the UMAP docs suggest that dimensionality reduction first can help density-based clustering (https://umap-learn.readthedocs.io/en/latest/clustering.html)
they're talking about HDBSCAN, but the OPTICS page suggests that it's similar to HDBSCAN
tpapp157 has also suggested UMAP -> HDBSCAN before
tpapp157#3643: @sweg HDBSCAN is easily sota for clustering. I have yet to find an algorithm that reliably beats it. Especially with complex high dimensional data. The only exception is if you have very few data points and/or you know your clusters follow a particular distribution in which case you should use the appropriate parametric method that matches the data distribution.
sweg#8920: our data is points on a hypersphere
sweg#8920: would that bork HDBSCAN?
Louis#0144: HDBSCAN with cosine sim
Louis#0144: lol
tpapp157#3643: You should dimension reduce to the 'true' dimensionality of your data prior to clustering for the best results. I've seen way too many blog posts and even academic papers that arbitrarily dimension reduce to 2D to do clustering which is so so stupid.
tpapp157#3643: No. just use a distance metric which works for your data. Cosine Similarity is a good one. The default euclidean often works fine, though it tends to emphasize slightly different relationships. Try out different parameters and see what works best for your data.
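A minimal sketch of that suggestion, assuming the `hdbscan` package: its tree-based core doesn't accept cosine directly, but for unit-norm vectors euclidean distance is monotone in cosine distance, so normalizing first gets the same clustering.
```python
import numpy as np
import hdbscan

emb = np.random.randn(10_000, 2048).astype(np.float32)  # placeholder embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)       # unit norm: L2 ~ cosine
clusterer = hdbscan.HDBSCAN(min_cluster_size=25, metric="euclidean")
labels = clusterer.fit_predict(emb)                     # -1 marks noise points
```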
sweg#8920: ok thx for the help 😁 😁 😁 🦋 🦋
DigThatData#7946: I think you got it with umap
DigThatData#7946: also: what are you hoping to achieve with clustering? pseudo labels?
nostalgiahurts#3408: yes, I remember you suggested using PCA to estimate the true dimensionality, and then using UMAP to reduce from the full data to that number of dimensions
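Presumably something like this (the 90% explained-variance cutoff is an arbitrary choice, and `emb` is placeholder data):
```python
import numpy as np
import umap
from sklearn.decomposition import PCA

emb = np.random.randn(10_000, 2048).astype(np.float32)  # placeholder embeddings
pca = PCA().fit(emb)
# smallest k explaining ~90% of variance, as a rough intrinsic-dimension guess
k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.90)) + 1
reduced = umap.UMAP(n_components=k).fit_transform(emb)  # then cluster `reduced`
```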
Louis#0144: Pseudo labels ya
Louis#0144: Fuck you 😁 😁 😁 🦋 🦋
Louis#0144: Snorted laughing
generic#8192: I don't know if anyone other than me has a use for this, but I finally finished up a dataset of individual C/C++ functions, their corresponding binary (x86-64) code, and any comments associated with the function (either at its definition or at its declaration in a header). after dedup it's around 5 million functions, 13GB uncompressed (but only 2.4GB after zst): https://moyix.net/~moyix/nn_comments_new_dedup.jsonl.zst
Kal'tsit#3130: is there a public api for the gpt j model?
Kal'tsit#3130: or 20b
Kia#2550: https://www.goose.ai/
Kal'tsit#3130: thank you
Kia#2550: happy to help
Trusty_Robot#9640: https://www.newscientist.com/article/2311525-simple-mathematical-trick-could-slash-ai-development-time-in-half/ I don't have access to this publication, does anyone know what this article is referring to?
StellaAthena#3530: Nothing important
johnryan465#9922: https://arxiv.org/pdf/2202.08587.pdf
johnryan465#9922: Basically seems to exchange some noise for a reduction in computation time
johnryan465#9922: You can basically halve computation time if you take a random projection of the gradient instead of the true gradient
cfoster0#4356: Unfortunately the estimate is super high variance, increasingly so with more parameters
cfoster0#4356: So it's likely not useful in practice for DL
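For reference, the estimator from that paper looks roughly like this (a sketch, using `torch.autograd.functional.jvp` to stand in for true forward-mode AD):
```python
import torch

def forward_gradient(f, params):
    # sample a random tangent direction v ~ N(0, I)
    v = torch.randn_like(params)
    # a single forward-mode pass gives the directional derivative (grad f) . v
    _, dir_deriv = torch.autograd.functional.jvp(f, (params,), (v,))
    # ((grad f) . v) * v is an unbiased gradient estimate, but its variance
    # grows with dimension, which is the problem noted above
    return dir_deriv * v
```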
Teven#6831: is this.... clickbait for DL people ?
Teven#6831: "enhance your gradients with this one weird trick" "doctors hate him"
StellaAthena#3530: https://twitter.com/nabla_theta/status/1501030415129198594?s=20&t=1pIpu8YRbZPoJpInq9KZzQ
generic#8192: https://www.oneweirdkerneltrick.com/
UnsupervisedLearner#4148: Who else in the AGI battleglobe camp? There is literally nothing that could go wrong from this, but also it's inevitable so get comfy with your fate
https://twitter.com/nabla_theta/status/1502783399622111234
asparagui#6391: d4 is me
StephennFernandes#2961: Hello everyone, I had a doubt regarding multilingual language models that has been bothering me for a while: what does the dataset format look like when pretraining multilingual models like XLM-RoBERTa, mBERT, mBART, mT5? Is the dataset in sequential batches of sentences per language, or is it all randomly shuffled?
alstroemeria313#1694: generally you want to shuffle them randomly
Teven#6831: SIGBOVIK 2013 ? Damnit I'm late on the memes it seems
StephennFernandes#2961: So you mean shuffle them in a way where, e.g., one sentence is in French and the sentence next to it is Russian and the next is Polish and so on? Or should it be shuffled in a way that the entire batch of rows from 1-10 must be French, then 11-21 Russian, and 22-32 Polish?
alstroemeria313#1694: well, if you put one sentence of one language next to one of another then it doesn't learn inter-sentence dependencies
alstroemeria313#1694: someone else needs to chime in here about the specific details, but you can like, take up to 2048 (or whatever your context window is) tokens of one text, and if your context window is not full pick another text to continue it with, etc.?
alstroemeria313#1694: so you end up with these 2048 token examples?
alstroemeria313#1694: then you shuffle those?
alstroemeria313#1694: i'm not sure if you'd want to intermix languages when packing the context window, hm
alstroemeria313#1694: and probably the original papers would be best to discover what people actually did for multilingual training
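The packing step described above looks roughly like this (a sketch; `docs` is a pre-shuffled stream of tokenized texts and `eos` is a hypothetical separator id):
```python
def pack_examples(docs, ctx=2048, eos=0):
    """Concatenate tokenized docs into fixed-length examples; shuffle the result."""
    buf, packed = [], []
    for doc in docs:            # docs: iterable of token-id lists, pre-shuffled
        buf.extend(doc + [eos])
        while len(buf) >= ctx:  # emit full context windows as they fill up
            packed.append(buf[:ctx])
            buf = buf[ctx:]
    return packed
```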
StephennFernandes#2961: Okay, so the ideal training for multilingual models would be packed examples of text concatenated across different languages, shuffled at the example level
StephennFernandes#2961: Okay
generic#8192: you'll feel less bad when my SIGBOVIK 2025 paper "Scooping Other People's Joke Research with Time Travel" comes out
Some Point Process#3793: https://www.youtube.com/watch?v=ZaOp1KNhpUQ
DigThatData#7946: Interesting looking new library for composing functional torch pipelines: https://padl.ai/
example code:
```
word_predict = (
clean
>> tokenize
>> to_tensor
>> batch
>> (dropout >> transformer)
+ right_shift
>> cross_entropy_loss
)
```
genetyx8#7543: "Is this a Monad?"
DigThatData#7946: "maybe()"
ILmao#5683: The prospect of having to debug pipeline stages with such a library scares me.
johnryan465#9922: The dream is that but in a type system so complete that it would catch math errors
Some Point Process#3793: https://www.youtube.com/watch?v=pMtk-iUaEuQ
Some Point Process#3793: Interesting take (mainly because boldly contrarian)
ILmao#5683: https://github.com/hasktorch/hasktorch gets you one step closer to there (catches shape errors). I haven't seen any repros of SOTA models with it though, perhaps because the barrier to entry is so high
cfoster0#4356: I have had an increasingly hard time listening to these, unfortunately. My guess as to the reason is that the hosts have built up mental models and meta-narratives about NNs & the nature of progress in ML that... I don't buy?
Some Point Process#3793: Agreed for the most part. The only thing/moment that led me to share it was that host's elaboration of his views that made it clear what to expect (the only moment I was "convinced" tho was when they talked about data not being able to provide abstraction/complete "function tables", even in the infinite data limit)
Some Point Process#3793: Like (when they mentioned) a quadratic function f(x) = x^2 (where the representation is fundamentally symbolic) might not be learnable by an MLP since it's just a locality-sensitive hash table (doesn't extrapolate, so the domain over which it generalizes is finite)
cfoster0#4356: I think their framing is just kinda confused
cfoster0#4356: Like, in particular it confuses NNs as a way to parametrize learnable functions vs. as a representation of knowledge
Some Point Process#3793: I don't agree with most of it, such as the view that some sort of reified symbolic architecture is necessary. But (neurosymbolic) language might be a good inductive prior (more so than the geometric learning approaches that attempt to bake them in via other ways). The ceo of waymo voiced similar views about language as just a "functional" requirement https://www.reddit.com/r/singularity/comments/rvzu84/amnon_shashua_ceo_mobileye_about_agi_general/
cfoster0#4356: The one bit I agree with them on is that an abstraction-centric view of things is key
cfoster0#4356: Both in terms of my model of future AI development and of how to build safe and useful AI
Some Point Process#3793: well one counterargument there is that humans can implement (arbitrary) learnable functions, like algebraic equations. And reason about their behavior. With everything being "neurosymbolic" since it's all BNNs
Some Point Process#3793: Yeah. Abstraction is a nebulous concept to me without the construct of language though. The most generalizable forms of abstraction seem to depend on symbols
cfoster0#4356: Not quite sure what you mean by implementing arbitrary learnable functions. Like, in biological neurons?
Some Point Process#3793: Yeah. Lots of people can do mental math (co-localized with language processing centers like the left parietal cortex)
Some Point Process#3793: I can visualize equations well, others might have good phonetic working memory so as to be able to calculate in their heads. But the underlying representation is symbolic
cfoster0#4356: I think abstraction is a capability that broadens with language acquisition, but is distinct and more general
Some Point Process#3793: Yeah I can see that being strongly true (e.g. in the context of spatial pattern recognition)
cfoster0#4356: IMO Connor said it well that brains are parallel processors that sometimes emulate symbolic + serial machines (in order for distributed internal choices to cohere towards singular external goals)
cfoster0#4356: So in that sense, they would not be fundamentally neurosymbolic systems
Some Point Process#3793: Yeah something along those lines is obviously the case (there is temporal binding and voting mechanisms etc)
Some Point Process#3793: yeah I definitely think preconscious or subsymbolic activity is the underlying "substrate" for intelligence. The only other case I can think of in favor of language is from EY (e.g. "Levels of Organization in General Intelligence": https://intelligence.org/files/LOGI.pdf)
Some Point Process#3793: According to this view language was thought to have "co-opted" with the development of intelligence (via selection effects etc)
Some Point Process#3793: So there was some sort of crosstalk between language and the development of intelligence. But the paper iirc just places a lot of importance on symbolic processing for intelligent behavior
cfoster0#4356: Tbh if "symbolic" processing just means processing with indirection, I'll gladly sign on to the neurosymbolic bandwagon
tpapp157#3643: I think it's more accurate to think of language as a discretization of a continuous abstract space.
chilli#5665: https://twitter.com/jefrankle/status/1503453644397551618
chilli#5665: Folks might be interested in this
chilli#5665: @kindiana :^)
cfoster0#4356: Huh I hadn't seen this before 👀 Thanks
Some Point Process#3793: Yeah I'm actually reading some of it now (unusually incisive in itself, and also compared to e.g. some alignment papers)
cfoster0#4356: I can see why people were/are willing to throw money at MIRI, reading this
alstroemeria313#1694: Hey does anyone ever add the log condition number of a weight matrix to the model's loss
alstroemeria313#1694: (Because I want close to orthogonal vectors in the weight matrix but I explicitly *don't* need them to be normalized and in fact this hurts)
alstroemeria313#1694: i am doing this and it is working better/faster so far than pytorch's orthogonal parameterization
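Presumably something along these lines (a sketch; the SVD is why it's slow, as noted below, and the penalty weight and `layer.weight` are hypothetical):
```python
import torch

def log_condition_number(w):
    s = torch.linalg.svdvals(w)      # singular values, sorted largest first
    return s[0].log() - s[-1].log()  # log(sigma_max / sigma_min)

# e.g. added to the task loss with some small weight:
# loss = task_loss + 1e-2 * log_condition_number(layer.weight)
```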
Some Point Process#3793: I thought a (proper) orthogonal matrix (with orthonormal/normalized vectors) has the best possible condition number (i.e. 1), since the eigenvalues are identical
alstroemeria313#1694: if you multiply it by a nonzero scalar it still has condition number 1
Some Point Process#3793: Right
alstroemeria313#1694: and given the way my input data is, constraining it to be normalized hurts performance
alstroemeria313#1694: i would have to add like, a separate learnable scale or something
Some Point Process#3793: hmm that's something I've wondered about myself
alstroemeria313#1694: https://twitter.com/RiversHaveWings/status/1503505714580402179
alstroemeria313#1694: typing some keywords into google scholar didn't turn up anything that looked relevant so i asked Twitter
Some Point Process#3793: The biggan paper relaxed the orthogonality "loss" by the following (including unit norm constraint): https://cdn.discordapp.com/attachments/729741769738158194/953065308497801226/unknown.png
alstroemeria313#1694: oh huh
alstroemeria313#1694: yeah the thing i am doing involves an svd i think
alstroemeria313#1694: so it's slow
Some Point Process#3793: The product W^T W measures how far the weight matrix is from orthogonal. Minimizing only the off-diagonal terms of this product implicitly optimizes for orthogonal vectors (not necessarily orthonormal)
alstroemeria313#1694: *nods*
alstroemeria313#1694: that would also do the thing
alstroemeria313#1694: i need them to have pairwise cosine similarities near 0
alstroemeria313#1694: but do not need them normalized
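One way to get exactly that (a sketch): normalize the rows before forming the Gram matrix, so only the pairwise cosines are pushed toward zero and the norms are left free.
```python
import torch
import torch.nn.functional as F

def cosine_orthogonality_penalty(w):
    wn = F.normalize(w, dim=1)           # unit-normalize rows
    gram = wn @ wn.t()                   # pairwise cosine similarities
    off_diag = gram - torch.eye(w.shape[0], device=w.device, dtype=w.dtype)
    return off_diag.pow(2).sum()         # cosines -> 0, norms unconstrained
```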
alstroemeria313#1694: yeah ok i should have uncorrelated factors now
alstroemeria313#1694: the final weight matrix's rows turned out to have norm 15.0753
alstroemeria313#1694: so constraining them to norm 1 *really* would have hurt
alstroemeria313#1694: now i have factor, er, "loadings" of ```
tensor([[-0.6418, 0.7086, -0.7142, 0.6532, 0.7531, -0.6716, 0.6006, -0.6716,
0.9411, 0.7300]])```
alstroemeria313#1694: (the second layer weight matrix)
alstroemeria313#1694: i should sign flip these so they are all positive and do the same to the first layer's rows and its biases
alstroemeria313#1694: (since the nonlinearity is tanh and is thus symmetrical around 0)
alstroemeria313#1694: then i can like, sort them by magnitude
alstroemeria313#1694: i did tanh specifically so i could do this
Some Point Process#3793: The paper that biggan cited about relaxing the orthonormality constraint seems to provide an explanation https://cdn.discordapp.com/attachments/729741769738158194/953068541387624458/unknown.png
Some Point Process#3793: (i.e. https://arxiv.org/pdf/1802.05957.pdf)
Some Point Process#3793: So apparently there is "useful" information in the distribution of the spectrum (set of eigenvalues) of the weight matrices
Octopirate#9999: has anyone tried strapping an LLM like gpt-j into an incremental learning model?
Octopirate#9999: here i mean, i saw this paper https://arxiv.org/pdf/2106.06297.pdf
EricHallahan#1051: https://arxiv.org/abs/2106.06297
Octopirate#9999: yeah, they've made some good headway on catastrophic forgetting for large models
Octopirate#9999: very cool
Octopirate#9999: i wonder if you'd get better performance on finetuned tasks with incremental learning sans forgetting
Octopirate#9999: for something like generation
alstroemeria313#1694: there is, i think
alstroemeria313#1694: but i didn't want it
alstroemeria313#1694: because i was going for interpretability instead.
alstroemeria313#1694: restricting the condition number to 1 means you are constraining the matrix to be invertible (for square matrices) and some pseudoinverse related thing otherwise
alstroemeria313#1694: in other words you are constraining what functions it can represent, so that some are ruled out entirely
alstroemeria313#1694: this is generally a stronger constraint than you want for a GAN discriminator.
alstroemeria313#1694: i think.
alstroemeria313#1694: https://twitter.com/dylanhendricks/status/1503519051426934784
Some Point Process#3793: > restricting the condition number to 1 means you are constraining the matrix to be invertible
Yeah I'd imagine it might make it invertible in the sense that the computation of the inverse is numerically stable etc
alstroemeria313#1694: the matrix is allowed to become rank deficient or nearly so normally.
alstroemeria313#1694: like i think neural net weight matrices can become ill-conditioned if learning the function requires them to throw away some information to achieve the lowest loss.
alstroemeria313#1694: or they can just out of nowhere, too
alstroemeria313#1694: like if the *underlying function to learn* isn't invertible.
Some Point Process#3793: Yeah that makes sense ^^
Some Point Process#3793: > there is, i think
> but i didn't want it
> because i was going for interpretability instead.
There are other works about the importance of eigenvalue spectra for general NNs
https://www.nature.com/articles/s41467-021-24025-8#Sec2
alstroemeria313#1694: ah
kurumuz#5695: https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/ @chilli when did this happen lol
kurumuz#5695: FSDP is nice
kurumuz#5695: I am gonna use this so much
kurumuz#5695: @Sid have you seen this
Sid#2121: yeah, it's been out for ages
Sid#2121: haven't tried it though
Sid#2121: ah wait, this is in pytorch
Sid#2121: i've seen fairseq's implementation, not this
Some Point Process#3793: i.e. (flatter spectra corresponding to smaller power law exponent p, might be better but not sure) https://cdn.discordapp.com/attachments/729741769738158194/953095256184848394/41467_2021_24025_Fig1_HTML.png
alstroemeria313#1694: *nods*
alstroemeria313#1694: my network is underparameterized
Some Point Process#3793: Oh wait I think they're saying the heavy tailed is better
alstroemeria313#1694: it is 5,141 params.
Some Point Process#3793: nvm I misread the x axis of the heavy tailed histogram so it's consistent with low alpha = more robust or smth
kurumuz#5695: ye i used fairseq for a long time, codebase is a bloated piece of shit though
kurumuz#5695: and this is just a wrapper in pytorch
kurumuz#5695: which looks a lot better
ILmao#5683: Looking forward to trying this at smaller scales
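Minimal usage, going by the blog post (a sketch; assumes a distributed launcher like torchrun has set up the process-group environment):
```python
import torch
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

torch.distributed.init_process_group("nccl")  # assumes torchrun env vars
model = FSDP(nn.Transformer().cuda())         # parameters sharded across ranks
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# then train as usual; FSDP gathers shards around each forward/backward
```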
rockclimbing_nerd#4931: I really like this paper for fine tuning: https://arxiv.org/abs/2203.05482 It feels somewhat surprising to me. I wanna try it out on something.
zphang#7252: I think it's been done in BERT-type models
zphang#7252: oh, and CLIP
DigThatData#7946: there's been some interesting research discussing language and cognitive development in the context of "feral" and socially isolated children that I think you'd probably find interesting. Here's a decent looking survey article I just found googling around: https://www.verywellmind.com/genie-the-story-of-the-wild-child-2795241
chilli#5665: Happened for a while lol
Corianas#4212: Hi all,
I have trained some yolo things which isn't exactly rocket science these days thanks to google-colab and publicly available notebooks,
but broke my arm and was looking for something a little more....interesting to learn and have running while my typing speed is... low.
I am looking to (hopefully) train from scratch a gpt network to understand its... magic, for lack of a better term, rather than fine tuning.
what is the current best, but low hardware requirement, way to train something of similar end size to gpt-1 (if i manage to not mess it up)?
goal is to run on a reduced version of the pile: consisting of, wikipedia circa pre-2007 database, open subtitle, and youtube subtitles to start with.
not going for genius, more tv addicted 90s kid.
Corianas#4212: so, my question is: is it worth starting with gpt-neo? or neox?
Louis#0144: Sounds like Leo
bhadrabahadur#3238: Hi all ,
We recently published the open-source project PADL (https://padl.ai/), a deep learning development framework for Pytorch.
PADL streamlines the entire deep learning workflow, from experimentation to deployment. Its functional API provides a satisfying correspondence to the “box-and-arrow” mental model for deep learning models, with node logic implemented as pure Python functions.
Try it out for yourself on Colab:
1. CLIP guided diffusion for face editing
https://colab.research.google.com/github/lf1-io/padl/blob/main/notebooks/05_diffuse_faces.ipynb
2. Sentiment Analysis - NLP
https://colab.research.google.com/github/lf1-io/padl/blob/main/notebooks/03_Sentiment_Analysis_with_padl.ipynb
And read more about PADL on our developer blog! https://devblog.padl.ai/
Github: https://github.com/lf1-io/padl
You can get easily started with: `pip install padl `
We look forward to welcoming you into the PADL community! And we would really appreciate getting your feedback.
Best,
the LF1 Team
EricHallahan#1051: In the future, it would be helpful to indicate that you are interested in promoting a project you are connected with; we generally don't allow advertising without prior permission as per our #rules, but I'll give you a pass since it is my fault for forgetting to ask.
Daj#7482: It's open source and of interest to this community so ¯\_(ツ)_/¯
Daj#7482: Looks neat bhadrabahadur
EricHallahan#1051: True 😛
DigThatData#7946: Lol I also beat them to it yesterday
DigThatData#7946: (Having trouble linking to the comment on my phone, but was this channel)
bhadrabahadur#3238: sorry about that 🙂
EricHallahan#1051: Don't apologize, it's my fault and we don't adhere to it too strictly.
EricHallahan#1051: @DigThatData :schmid:
bhadrabahadur#3238: Thanks 🙂
yes, it is open source, and would be great if we can get some feedback from the community.
Everyone is welcome to join the project!
bhadrabahadur#3238: there is actually an internal interactive debugger you can use to debug things inside the pipeline.
so it is less scary
More about the debugger: https://lf1-io.github.io/padl/latest/usage/debugging_your_transforms.html?highlight=debugg https://cdn.discordapp.com/attachments/729741769738158194/953277956900421692/Screenshot_2022-03-15_at_14.05.32.png
DigThatData#7946: I think this style of piped programming will help bring some R holdouts over the fence
Dangerous-Educator72#1299: What's that feature in R?
Dangerous-Educator72#1299: I'm not an R user
DigThatData#7946: Very hadleyverse
EricHallahan#1051: Have you read bmks code
Deleted User#0000: not if I can avoid it
DigThatData#7946: Not native R, but modern paradigm
DigThatData#7946: https://magrittr.tidyverse.org/
Dangerous-Educator72#1299: Looks like the PADL thing is about more than just the functional approach
Dangerous-Educator72#1299: There seems to be some way to export full objects with non-pytorch stuff in there, could be handy
Dangerous-Educator72#1299: I personally like the transforms in torchvision because you can check out the output of your processing and inspect it etc.
bhadrabahadur#3238: nice, yes, goes in the same direction.
PADL has more operators to make complex pipelines and also has features to streamline the training, saving, and deployment.
bhadrabahadur#3238: I think the idea behind both packages is the same.
We also want to avoid complex code for models, want to have clean structuring of operations left-to-right or top-to-bottom, and avoid nested function calls.
Plus, we went a little overboard creating a comprehensive saver, stage handling, and abstraction of trainer 😅
Deleted User#0000: ok so if you want some absolutely minimal feedback which you can take or leave: it's not clear to me what the key pain point is that you want to solve. I understand that it's about organising training pipelines, but it's not easily clear to me what the killer feature here is that would make it worth integrating a big framework and expanding my mental model around all these wrappers. There are tons of deep learning orchestration frameworks now. For me, 'unifying' is not an attractive word; unify means 'lots of framework complexity'. Just rewriting a model into a pipeline would not warrant, for me, taking on a framework dependency that will keep breaking.
Deleted User#0000: (this may sound harsh but you will be aware there is a graveyard of orchestration projects so the bar is high and you need to ask yourself the hard questions)
Dangerous-Educator72#1299: I think that PADL isn't a deep learning orchestration framework - that would be something like KubeFlow
Dangerous-Educator72#1299: My understanding from looking at the documentation is that the PADL pipelines are like generalized torchvision transforms/ or scikit-learn transforms plus neural network forward pass, plus some postprocessing
Dangerous-Educator72#1299: So the pipelines seem to work at the single data point level
Deleted User#0000: yes not orchestration in the scheduling workloads sense, just wrapping/organising training code. I wondered if there was a goal to extend the pipeline stages into job scheduling stages etc
bhadrabahadur#3238: Thanks for the feedback! Definitely looking for similar feedback.
There are two main killer features that we wanted to build in PADL.
First was the ability to easily stack different models together in a pipeline. Maybe one part of your preprocessing comes from package A, another part comes from a config file, the next part comes from some in-house maintained code. With PADL, you could import all of these into a notebook and use the operators to connect these into a single pipeline. So, making a model becomes like stacking legos. You don't need to write boilerplate code to bring together different models/functions.
Second thing is to be able to export the pipeline in a super portable way. When you build your pipeline, PADL isolates and extracts all dependencies and objects necessary to build it, which can be then easily exported & saved by padl.save. And that extraction is all you need to build the model from scratch again, and if you want to load pipeline again you just need to do padl.load.
This allows a nicely flexible mode of working, where you’ll work in a notebook, importing code from many sources, and then exporting the full object in a way that makes it super portable.
bhadrabahadur#3238: Right now, our goal is not to build a full orchestration package capable of building, scheduling, deploying jobs.
We want to build a package that is complete in building deep learning pipelines and making them fully portable.
But we have another complimentary `padl-extensions` package that includes some extensions (`pytorch-lighting` `huggingface`, `torchserve`): https://github.com/lf1-io/padl-extensions/
For example, after you have built & saved your PADL model, you can use `padl-extensions` to serve your model using `torchserve` with just one line.
`prepare_and_serve('saved_padl_model.padl')`
example of it in colab notebook : https://colab.research.google.com/github/lf1-io/padl-extensions/blob/main/notebooks/torchserve.ipynb
Deleted User#0000: thanks for elaborating, you could try updating your readme to try to be really clear about your unique value proposition. You could also have a table comparing to other common tools in the pytorch ecosystem to highlight which does what and position yourself
bhadrabahadur#3238: That is a great suggestion! Thanks! We will work on this!
bhadrabahadur#3238: just top of your head, what do you think are alternatives in this space?
Deleted User#0000: i dont know I dont use pytorch lol
bhadrabahadur#3238: haha fair enough 😄
We want to extend PADL to tensorflow too in future, so then it might interest you 😄
Deleted User#0000: I dont use tensorflow :berk: JAX
bhadrabahadur#3238: we need to do our market research more then XD
tpapp157#3643: A bunch of libraries have pipeline functionality of varying degrees of sophistication. SKlearn, HuggingFace, Keras, etc.
random person#5234: from your github and readme and what you said here, is this like an alternative to MLflow type of package?
Sphinx#2092: https://cdn.discordapp.com/attachments/729741769738158194/953311739196366938/dozes.gif
bhadrabahadur#3238: MLflow looks more like a monitoring and orchestration package.
PADL is more like an extension for PyTorch at the moment; think of it more like what the Keras functional API is to TensorFlow. So you can build models with PADL by stacking different pytorch models and other python functions & classes. And of course there are other features in PADL like saving and debugging.
Deleted User#0000: >500 in DM alone I guess
Louis#0144: Besides alstro and I, who's the main contrastive learning people we have ?
Louis#0144: Like people who do serious research in the space
StellaAthena#3530: A nice general reminder that we should be careful about what we say, even in informal blog posts. Apparently our preliminary results on NeoX are being used in a talk by people unrelated to EleutherAI 🙃
https://twitter.com/DEichholtzer/status/1503646001667661826?s=20&t=ZsyNG9joyR4VaPR1ad27_w
EricHallahan#1051: :guilty:
tpapp157#3643: I'm not a researcher but I've done a lot of industry work with contrastive learning across a variety of modalities over the last several years.
Sora#8531: Damn, I gotta say that's sooo cool. I hope one day I have to be careful about what I say because people care about me and my work. I guess you hear this very often, but congrats on such amazingly inspiring work.
tailcalled#2750: GPT-3 and GPT-NeoX are Markov chains
(of 2048th order, but still, Markov chains)
Just mentioning this here because a random person on twitter said that I was mistaking GPT for a Markov chain, so it seems that not everyone knows that GPT is a Markov chain
StellaAthena#3530: Holy shit, they open sourced MAGMA. Like, for real, with a model download and everything.
https://github.com/Aleph-Alpha/magma
circuit10#0158: As someone who isn’t an AI researcher or anything what’s the easiest way to play with this?
circuit10#0158: They mention the “Aleph Alpha playground” but I can’t see anything about a playground on the website, unless I need to log in
circuit10#0158: Or is it not something that’s ready to be messed with like that?
EricHallahan#1051: This is a pretty :chonk: model, since it is GPT-J-6B and a vision model (which is effectively negligible in the grand scheme of things).
So expect to need something of similar horsepower to what is needed for GPT-J-6B.
circuit10#0158: I was mainly looking for a hosted service ideally with a free trial
circuit10#0158: Or I could rent a cloud GPU I guess?
EricHallahan#1051: Yeah I don't know tbh
random person#5234: gcp have 300 bucks free
circuit10#0158: Yes but they have an annoying quota thing for GPUs
circuit10#0158: So I was going to use a different one
circuit10#0158: I also already used my free trial for a Minecraft server
circuit10#0158: Wait, they have a goose photo? https://cdn.discordapp.com/attachments/729741769738158194/953359414214139985/model.png
EricHallahan#1051: What else would you use?
chilli#5665: Published my blog post btw 🙂 https://twitter.com/cHHillee/status/1503803011843252224
cfoster0#4356: Excited to try it out. A few bugs in the install to work through, it looks like
circuit10#0158: Oh, you had the same Google Drive issue too :)
cfoster0#4356: @circuit10 yeah. Also a few other things when running in colab
cfoster0#4356: `Literal` isn't available in python 3.7, so you've gotta have a `typing_extensions` fallback
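i.e. something like:
```python
try:
    from typing import Literal            # available from Python 3.8
except ImportError:
    from typing_extensions import Literal  # fallback for 3.7
```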
circuit10#0158: It should run on a 3090, right?
circuit10#0158: https://cdn.discordapp.com/attachments/729741769738158194/953368524158558238/unknown.png
cfoster0#4356: 🤷♂️ probably? I would guess it should run wherever GPT-J runs
circuit10#0158: Would I need 24GB of RAM? https://cdn.discordapp.com/attachments/729741769738158194/953368807504748544/unknown.png
circuit10#0158: This is only 18GB
circuit10#0158: And not configurable without getting 2 GPUs (which doubles the RAM too)
circuit10#0158: https://cdn.discordapp.com/attachments/729741769738158194/953369480371785789/unknown.png
Some Point Process#3793: Yeah that's what I've said before, but I didn't know it was that bottlenecked (other people remarked training was compute bound or close)
https://discord.com/channels/729741769192767510/785968841301426216/878378533137809428
Some Point Process#3793: i.e. https://cdn.discordapp.com/attachments/729741769738158194/953369878818078800/unknown.png
Some Point Process#3793: but yeah they were definitely wrong for the most part (ref: https://arxiv.org/pdf/2007.00072.pdf)
circuit10#0158: I'm hosting it on http://95.111.249.143:9876/ if anyone wants that for some reason
circuit10#0158: Though it works if you manually download
chilli#5665: Well... I mean, 50% of peak FLOPS is generally considered pretty "compute bound"
chilli#5665: lol
cfoster0#4356: Before I click, is this a download link or a webpage?
circuit10#0158: The direct download would be http://95.111.249.143:9876/mp_rank_00_model_states.pt
circuit10#0158: It's just `python3 -m http.server` in a directory containing that file
circuit10#0158: So that first link was a file listing
Some Point Process#3793: but you were mainly looking at % wall clock time in the blog post?
chilli#5665: oh, are you referring to that table?
Some Point Process#3793: yeah
chilli#5665: oh, that table does not necessarily imply that we're achieving 50% of peak FLOPS
Some Point Process#3793: intuitively for me tho, 'memory/compute-bound' would be more about wall clock and not flops utilization etc
chilli#5665: oh, to be clear, that transformer implementation definitely does not fall into "it's compute bound"
chilli#5665: they were using an un-optimized BERT implementation
chilli#5665: iirc
Some Point Process#3793: Oh I see that you had a different take than i thought
chilli#5665: well, my take there is that the model from that paper *is* indeed memory-bandwidth-bound in many cases
chilli#5665: but that that table is not representative of a well-optimized transformer
chilli#5665: (like say, Megatron or lightseq)
cfoster0#4356: https://openai.com/blog/gpt-3-edit-insert/
circuit10#0158: I got the Magma thing running
EricHallahan#1051: > New Capabilities for GPT-3: Edit & Insert
Ah yes, *alignment*.
Definitely not capabilities.
circuit10#0158: Looks very important for Codex
chilli#5665: oh, that's pretty neat
cfoster0#4356: Hmm looks like it's only available for davinci :thonk:
circuit10#0158: Should I make a web interface for it so people can try it?
cfoster0#4356: Yes! :hap: :honkies:
Louis#0144: is there a paper
circuit10#0158: https://cdn.discordapp.com/attachments/729741769738158194/953375693075533894/unknown.png
cfoster0#4356: I don't see one. Maybe @ jesse knows
EricHallahan#1051: No, this is an API thing.
Louis#0144: why not just tag him, @jesse feeds off chaos
circuit10#0158: I'll probably do that soon
StellaAthena#3530: a Gradio demo would be awesome. You can probable adapt this one pretty easily: https://huggingface.co/spaces/EleutherAI/VQGAN_CLIP
zphang#7252: I wonder if it's just cleverly formatting the inputs, or a new modeling format
zphang#7252: hope it's the latter
StellaAthena#3530: You can do LM adaption with T5, it would make sense to me that you could do span-adaption with GPT-3
circuit10#0158: I'll probably try that soon but I do need to go for a bit
EricHallahan#1051: Or wait for @Deleted User to make one. :berk:
alstroemeria313#1694: <https://openai.com/blog/gpt-3-edit-insert/> How did they do this exactly? ^^;
cfoster0#4356: Probably the former, if uses the same endpoint & arguments as regular completions
alstroemeria313#1694: lol already posted
Louis#0144: I need to know too
cfoster0#4356: Didn't kharr suggest a way to do this?
zphang#7252: that's true, and given that it's still davinci that sounds like "adapted" rather then "new"
StellaAthena#3530: What's the alternative hypothesis under which it is "new" rather than "adapted"?
I think it's pretty unlikely that they're achieving this by training a GPT-3-scale MLM model
cfoster0#4356: Linked in #research
zphang#7252: I guess yea 1) they're unlikely to retrain another GPT-3 scale model for something like this, and 2) I don't think it looks like MLM (that has the issue of needing to figure out how many tokens you need ahead of time, unless they also invented a new pos encoding while they were at it)
StellaAthena#3530: Does "alibi BERT" solve that problem? If so, it's high value low-hanging fruit.
zphang#7252: I think that the BERT MLM objective is already starting to handicap gains 😂
maybe an alibi electra/deberta-v3
StellaAthena#3530: What's the objective for those?
zphang#7252: replaced token detection. The tedious thing is that you need to co-train a (smaller) adversary model to adversarially replace tokens.
The upside is that you compute losses for every token in every input, as opposed to only 15% in MLM
StellaAthena#3530: So it's GAN-like?
StellaAthena#3530: You have one model trying to make unnoticed substitutions, and the other one trying to distinguish real and fake data?
zphang#7252: yes kind of, except we care about the discriminator
StellaAthena#3530: Whereas with GANs we usually care about the generator
StellaAthena#3530: hmm
Louis#0144: Isn't there an issue with Electra embeddings
Louis#0144: lol
StellaAthena#3530: Fuck if I know... you're the person I would ask about that @Louis
Louis#0144: Ahaha
zphang#7252: actually taking a closer look, the "generator" is not trained adversarially against the discriminator, it's just trained on the side with regular MLM
but the discriminator is still trained on whether the input tokens are replaced or original
generic#8192: [oops already elsewhere]
circuit10#0158: Made a demo at https://56886.gradio.app but I can't keep it up for long because it costs money
Source code:
<https://gist.github.com/Heath123/3d96bbc4c2dd76ab456efbe0e8f0a59f>
StellaAthena#3530: Mind if I rehost this? Happy to credit you however you like
circuit10#0158: Yes, that's fine :)
circuit10#0158: You can credit me if you want but you don't have to
StellaAthena#3530: How should I credit you? "circuit10 on Discord"?
circuit10#0158: That would be fine, or my GitHub (if you want people to be able to contact me it might make sense to say circuit10#0158?)
BoneAmputee#8363: getting a lot of <PERSON> in the results :grimberk:
BoneAmputee#8363: thank you for hosting the demo
StellaAthena#3530: 🤔 https://cdn.discordapp.com/attachments/729741769738158194/953396777002885250/Screen_Shot_2022-03-15_at_4.58.18_PM.png
circuit10#0158: I guess that means there was no output somehow?
circuit10#0158: Oh, I just got that too
EricHallahan#1051: It worked fine for me. Interface could use some polish but that isn't too difficult to fix. https://cdn.discordapp.com/attachments/729741769738158194/953397354722103347/unknown.png
EricHallahan#1051: Yeah it's conceptual captions. :grimberk:
circuit10#0158: I considered using an image upload but then you couldn't use multiple images/place them freely in the text
EricHallahan#1051: valid
StellaAthena#3530: You could introduce tokens like `[image_1]` but that's clumsy and hard to use for people less familiar with this sutff
StellaAthena#3530: lol https://cdn.discordapp.com/attachments/729741769738158194/953397990498897981/Screen_Shot_2022-03-15_at_5.03.08_PM.png
StellaAthena#3530: Submitted this one by accident
dmvaldman#4711: Did we answer this question yet or is it still speculative? I'm also curious.
dmvaldman#4711: No associated paper or technical blog post I'm missing, right?
StellaAthena#3530: @circuit10 Did you install from the repo? `pip install magma` fails
circuit10#0158: The `server.py` file was running from the repo's directory
cfoster0#4356: Nope
cfoster0#4356: Nothing yet at least
circuit10#0158: This is restarting, I think the URL will change when it's done
BoneAmputee#8363: I asked it if Kathryn Janeway was male or female and it got it wrong
circuit10#0158: New link: https://24652.gradio.app
circuit10#0158: I improved error handling a tiny bit
circuit10#0158: Also updated the gist
circuit10#0158: I would make more changes but retarting takes too long and invalidates the link
BoneAmputee#8363: okay it definitely knows Janeway :janewaycoffee: I started prompting it more better
BoneAmputee#8363: it keeps guessing the wrong Trek show tho
BoneAmputee#8363: tryna figure out if it does better than clip+gpt
BoneAmputee#8363: err, any of dzryk's models
circuit10#0158: I'm going to have to take it down soon for cost reasons I think
circuit10#0158: $1.70 per hour isn't that much but it will add up if I leave it on
circuit10#0158: If I take mine down will you have a replacement set up soon?
BoneAmputee#8363: what gpu? :cat_thonk:
circuit10#0158: 3090
BoneAmputee#8363: I'm probably done playin with it but in the future, Vast has 3090s for 36 cents an hour rn
StellaAthena#3530: @circuit10 I'm trying to get it hosted on HuggingFace, but it's flipping out about the DeepSpeed requirement >.>
StellaAthena#3530: https://huggingface.co/spaces/EleutherAI/magma
circuit10#0158: Oh
circuit10#0158: I'll definitely look into that if I need to do this again but I felt like using a cloud provider might be easier
Teven#6831: I'm sure the spaces guys would be happy to help with that
Teven#6831: that seems like a pretty basic thing to get right haha
StellaAthena#3530: @Teven Who are the spaces people? I can ping them on slack
Teven#6831: (for them)
Teven#6831: Charles Bensimon mostly
Teven#6831: Charles on Slack
StellaAthena#3530: @circuit10 You can take it down and I'll have a replacement up in < 24 hours
circuit10#0158: It's down now
StellaAthena#3530: @circuit10 @Teven I pinged Charles, hopefully we can get this resolved quickly.
Some Point Process#3793: @&. https://discord.com/channels/729741769192767510/938462108721483787/953406267152539668 no clue but if I had to guess it'd be a bidirectional architecture (but where emitting a token in between did something different to the output than what it would with autoregressive objective). And where gpt-3 was fine tuned on making these edits
Teven#6831: I've pinged him a discussion we had about it, he's in France though so it'll probs have to wait tomorrow
&.#0001: (for other people reading) the question was
&.#0001: > I wonder how the new edit and insert endpoints from OpenAI work
> Furthermore, one model ID (text-davinci-002) is capable of generate, edit, and insert
> “edit” deletes and inserts text at various points, for instance adding types to a Python program
> “insert” lets you have a “prefix prompt” and “suffix prompt” and generate in the middle
> Anyone willing to speculate how these could be implemented for GPT J?
&.#0001: Fine tune on making edits, like a prompt structured like this?
instruction
input
output
&.#0001: In GPT-J, the output at each token position only depends on the token positions prior to it, right?
StellaAthena#3530: Yes
Some Point Process#3793: Yeah. But I'd imagine it'd also have to attend over sequence to make some edit in between. (Fully visible state)
&.#0001: Fully visible state?
EricHallahan#1051: The token positions after the token are masked.
Some Point Process#3793: It'd have to take the entire sequence into account
cfoster0#4356: I think insertion would be straightforward. Fine tune out of order like `prefix suffix completion` as opposed to `prefix completion suffix`
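i.e. something like this at data-prep time (a sketch; the sentinel token ids are hypothetical):
```python
import random

def make_infill_example(tokens, pre_tok, suf_tok, ins_tok):
    # pre_tok / suf_tok / ins_tok are hypothetical sentinel token ids
    i, j = sorted(random.sample(range(len(tokens) + 1), 2))
    prefix, middle, suffix = tokens[:i], tokens[i:j], tokens[j:]
    # ordinary left-to-right training on the rearranged sequence teaches the
    # model to emit the middle conditioned on both prefix and suffix
    return [pre_tok] + prefix + [suf_tok] + suffix + [ins_tok] + middle
```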
kurumuz#5695: @Some Point Process in AR generation you already read over all the context
kurumuz#5695: and you have access to all the tokens
&.#0001: Excuse my lack of knowledge, define masking?
&.#0001: Like this?
&.#0001: Input: "I have watched this [MASK] and it was awesome."
Output: "I have watched this movie and it was awesome."
StellaAthena#3530: Yes
Some Point Process#3793: Yeah. In inference. But then in training there can be a stuffed context for efficient training (masked)
cfoster0#4356: That's Bert style masking
StellaAthena#3530: Whereas for autoregressive models it is
"I have watched this [MASK]" -> movie
"I have watched this movie [MASK]" -> and
"I have watched this movie and [MASK]" -> it
...
cfoster0#4356: In gpt style masking you prevent tokens from attending to future tokens by zeroing out those entries in the attention matrix
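Concretely, it's the usual lower-triangular mask: the masked entries are set to -inf before the softmax, which zeroes those attention weights (a sketch):
```python
import torch

T = 6
scores = torch.randn(T, T)                          # raw attention logits
causal = torch.tril(torch.ones(T, T, dtype=torch.bool))
scores = scores.masked_fill(~causal, float("-inf"))
attn = scores.softmax(dim=-1)                       # future positions get weight 0
```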
kurumuz#5695: I mean for training it's more efficient to do a prefix LM sure
kurumuz#5695: but it should still work even without that
&.#0001: How would this work for editing? Would they not zero it out instead?
&.#0001: It can edit in an arbitrary number of tokens.
kurumuz#5695: @Some Point Process Do you think Instruction-Input-Changed output will not work with AR?
kurumuz#5695: because it will
cfoster0#4356: I think the edit feature you're talking about is as you described
cfoster0#4356: Here
&.#0001: How would insert work?
kurumuz#5695: UI/API work.
&.#0001: Oh, yeah, edit is its own model
cfoster0#4356: @&. this
&.#0001: edit is `text-davinci-edit-001`
while insert is `text-davinci-002`, same model capable of generate
cfoster0#4356: Ah ok
Some Point Process#3793: Yeah that seems more plausible (than bidirectional)
cfoster0#4356: Looks like they use an `[insert]` special token to delimit the span to be filled
&.#0001: interesting, text-davinci-insert-002 exists, but text-davinci-002 is also capable of insertion
&.#0001: only `edit` models support editing
kurumuz#5695: it can insert but probably doesn't care about the context after
kurumuz#5695: I assume the insert-specific model is trained the way @cfoster0 said
&.#0001: They only published GPT Instruct after it came out of beta, and this one is in beta
cfoster0#4356: Only making them available for use :berk:
&.#0001: It seems to care about the suffix, and works with insert https://cdn.discordapp.com/attachments/729741769738158194/953413625014521957/Screenshot_20220315_180541.png
&.#0001: Their base models are perfectly capable of insert and generate
cfoster0#4356: OAI is very good at implementing, so presumably they've got a good lead even after the idea phase
cfoster0#4356: Does copilot do infilling? I haven't played with it much
Some Point Process#3793: Copilot is just AR. Looks like gpt-3 has different models for each of the two and won't do AR with the "insert" endpoint
Some Point Process#3793: Or codex at least
cfoster0#4356: Everything is AR with enough rearranging
Prismane#3728: what does AR refer to?
Some Point Process#3793: Autoregressive (just concatenative token gen)
Some Point Process#3793: The only issue I still see with keeping the causal mask (i.e. AR proper) is that the "rearranged" input sequence <edit/insert instructions><existing prompt> would imply that its training objective would have been to "append" edits to that sequence. OTOH I'm just biased towards bidirectional since they appear to be more flexible (like universal transformers which can make edits in place etc)
cfoster0#4356: Hmm I think the training objective is the same, conceptually
cfoster0#4356: You're just kinda teaching the network how to "defer" computations
Some Point Process#3793: Yeah I agree with you (had the wrong picture)
janus#0150: One of the new oai models has a 4k context window
janus#0150: They are less flexible because they have a fixed number of masked tokens, right?
kurumuz#5695: AR is just gonna work fine for this lol
janus#0150: Yeah, under the hood it's always put later context in the prompt
kurumuz#5695: I doubt OpenAI is using a non-AR model/objective here.
janus#0150: I agree, and I think cfoster is right about implementation
janus#0150: AR seems much better for infilling
kurumuz#5695: tfw you can formulate everything as AR
kurumuz#5695: you obviously can
kurumuz#5695: lol
cfoster0#4356: Not stuff with cyclical dependencies
kurumuz#5695: not sure what that means
cfoster0#4356: Like, the ordering of which variables you generate before which other variables will be arbitrary in cases like graphs
cfoster0#4356: Graphs don't have a natural ordering
cfoster0#4356: So you gotta just pick something arbitrary
janus#0150: How are you supposed to create that? All at once or with iterative refinement? AR can do the first by picking something arbitrary and making the rest consistent, and the second with repeated editing
cfoster0#4356: You just need explicit tokens to specify how you're traversing/building up the graph
Some Point Process#3793: Well for seq-seq tasks there was at least one justification for bidirectional (google t5 paper) https://arxiv.org/pdf/1910.10683.pdf
Some Point Process#3793: i.e. https://cdn.discordapp.com/attachments/729741769738158194/953440670679457842/unknown.png
Some Point Process#3793: And yeah it won't require any new parameters to remove the mask. But editing text "in place" (without CM) might have its own complications ofc, and the distinction between CM and not is indeed subtle
HostsServer#2628: Just recently i guess, here's some tests with a few models, very interestingggg
https://pytorch.medium.com/training-a-1-trillion-parameter-model-with-pytorch-fully-sharded-data-parallel-on-aws-3ac13aa96cff
HostsServer#2628: This builds off the data parallelism natively added
Realmsmith#4506: Damn, OpenAI came out with another banger today.
Kia#2550: https://openai.com/blog/gpt-3-edit-insert/ this?
EricHallahan#1051: I mean it has now been posted in this channel no less than four times now, so I would assume so.
Louis#0144: Hey guys I just found this https://openai.com/blog/gpt-3-edit-insert/
bmk#1476: I was the first to post it :smug:
cfoster0#4356: Excuse me? :honkies:
bmk#1476: https://discord.com/channels/729741769192767510/730095596861521970/953374862175518771
bmk#1476: wait
bmk#1476: damn
bmk#1476: how did you even post about it before the tweet lol
cfoster0#4356: I post before tweets regularly lol
cfoster0#4356: See: arxiv
bmk#1476: do you just watch the blog page??
cfoster0#4356: Not with my eyeballs no
cfoster0#4356: That'd sure be a waste of time lol
bmk#1476: great, now next time I need to post the link a few minutes before it even goes live smh
Kia#2550: I pointed out if that's what they're talking about,but yeah
bmk#1476: eh who am I kidding I'm too lazy to do that
cfoster0#4356: Precommit to the announcement
bmk#1476: see but that's too easy
cfoster0#4356: Not for me
bmk#1476: the hard part is timing it
bmk#1476: no but I mean nobody would be impressed
cfoster0#4356: I sure would
bmk#1476: the hard part is timing it just before it goes live, but not too much before
zphang#7252: just make lots of dummy posts and edit to the link after the fact
zphang#7252: then you can claim you posted it first
cfoster0#4356: I will not accept edited posts
zphang#7252: maybe that's how schmidhuber does it...
bmk#1476: he edits the fabric of space and time
cfoster0#4356: There are definitely open source tools for this kind of stuff, for ex https://github.com/thp/urlwatch
bmk#1476: well I'm glad y'all watch this stuff so I don't have to
chirp#4545: showerthought: hours and minutes and days of the week are like Fourier features for time
cfoster0#4356: There's a sort of general principle here that you can take a measurement and decompose it into a relationship between a small set of quantities that are at a more behaviorally-useful scale
cfoster0#4356: Similar story for trichromatic vision
Caelum#8192: I've almost got something similar to work with 20b :wechat_smirk:
uwu1#4864: is it the same as sinusoidal positional encoding/frequency encoding? that seems to be the baseline for implicit modelling now at least for 3d
uwu1#4864: looks like it's just MLP(FFT(x)), which would make it the same when xy is just a sampling grid
cfoster0#4356: Yeah, this paper was a spinoff of the NeRF paper, which got its frequency encoding from transformers
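The NeRF-style encoding is roughly this (a sketch):
```python
import math
import torch

def fourier_features(x, n_freqs=8):
    # sin/cos of the input at geometrically spaced frequencies,
    # as in the NeRF positional encoding
    freqs = (2.0 ** torch.arange(n_freqs)) * math.pi
    angles = x[..., None] * freqs
    return torch.cat([angles.sin(), angles.cos()], dim=-1)
```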
circuit10#0158: Using this with a bit of guidance I can turn
```js
function M(e,i,r,t){var o=r.file,n=r.body[0],c=n.width,l=n.height;return o?o.size>41943040?Promise.reject("file_size"):e.createUploadPolicy(i,o,c,l).then(function(i){return e.createUpload(i,o,t)}):Promise.resolve()}
```
into
```js
/**
* Uploads a file to the server.
* @param {Client} client
* @param {Message} message
* @param {Object} body
* @param {Object} options
* @returns {Promise}
*/
function uploadFile(client, message, body, options) {
var file = body.file;
var firstBody = body.body[0];
var width = firstBody.width;
var height = firstBody.height;
return file ?
file.size > 41943040 ?
Promise.reject('file_size') :
client.createUploadPolicy(message, file, width, height).then(function (policy) {
return client.createUpload(policy, file, options);
}) :
Promise.resolve();
}
```
mostly automatically
circuit10#0158: I have no idea if it's right because that's some random code I took from a page that I don't have the source for
alstroemeria313#1694: i use it in my diffusion networks to make the timestep embeddings.
plasmafox#1663: Is it me or is every pytorch and tensorflow tutorial obsessed with data classification and labels? I want to do predictive generation on a dataset where I have no idea what the labels are
Sphinx#2092: you have to train the model somehow
Sphinx#2092: i think training-wise, it's the same.
plasmafox#1663: I mean, say you had a dataset of text and wanted to make a model that could predict the next token. How would you give it labels? That sounds more like it's reserved for classification, like training it to recognize images
Sphinx#2092: https://www.tensorflow.org/text/tutorials/text_generation
Sphinx#2092: They have a whole section literally titled `TEXT GENERATION`
plasmafox#1663: well yeah, that proves my point that labels and classification aren't necessary
Sphinx#2092: I dunno what to tell you, they literally make labels in the tutorial as well lol
Sphinx#2092: I wouldn't get so hung up on these things. Just learn the fundamental ideas and move on.
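For what it's worth, the "labels" for next-token prediction are just the input shifted by one position; no human annotation is involved (a sketch; `model` is a placeholder):
```python
import torch
import torch.nn.functional as F

tokens = torch.randint(0, 50_000, (1, 129))     # one packed sequence of ids
inputs, labels = tokens[:, :-1], tokens[:, 1:]  # labels = inputs shifted by one
logits = model(inputs)                          # (1, 128, vocab); model assumed
loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), labels.reshape(-1))
```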
DigThatData#7946: the justification for bidirectional language modeling I think becomes a lot clearer when you consider how varied syntax can be from language to language. consider latin, where typical word order convention is subject-object-verb
Ties#8063: Thoughts on that new edit & insert functionality by OpenAI? https://openai.com/blog/gpt-3-edit-insert/ I have had a few cases where something like this would be helpful. I wonder what they are actually doing under the hood. Probably some type of prompt engineering that is abstracted away from the user?
EricHallahan#1051: That has been an ongoing conversation in this channel. You would probably be interested in our discussion yesterday.
https://discord.com/channels/729741769192767510/729741769738158194/953373590886178856
Ties#8063: Ah sorry, I missed that
EricHallahan#1051: No problem, the same question has been brought up here by at least ten other people by now. `:)`
Ties#8063: Lol, I just scrolled up and see what you mean. Should have done a search on the URL haha.
EstebanSir#2189: I wonder how 'special' this actually is to GPT-3, does anyone have a link to the paper?
Ties#8063: As per the discussion above, no paper as of yet I think.
EricHallahan#1051: There is no associated paper.
EstebanSir#2189: :(
cfoster0#4356: It's special in that no one has made such a model publicly available
cfoster0#4356: If someone wanted to do the same with neo or j models, they probably could
StellaAthena#3530: I would bet a very large amount of money that the technique used (whatever it is) would work with GPT-Neo and GPT-J with almost no modification
EstebanSir#2189: I see, very interesting
Spy#9778: would the RL training used for the instruct model fall into that category?
Spy#9778: If so then I agree, but it's pretty expensive to do
Spy#9778: and it doesn't seem unlikely that's how they did it
bmk#1476: yeah I think the definition of "almost no modification" can vary by orders of magnitude between people so probably worth clarifying what you mean
StellaAthena#3530: @Spy @bmk Sorry, I mean "almost no modification" *to the methodology*. I think that the methodology is *not* specific to GPT-3 and can be done with any autoregressive language model. I am responding to someone who said
> I wonder how 'special' this actually is to GPT-3, does anyone have a link to the paper?
I am making no prediction about the *amount of work required to do the finetuning*.
Spy#9778: Ah gotcha
rom1504#5008: If they don't even say what's their method now, and just provide APIs, there's really nothing open about openAI anymore..
Hopefully there's just a bit of delay between the API announcement and the paper/blogpost
nshepperd#2316: @alstroemeria313 hm you can do the bias correction in adam by just warming up beta2 with like `beta2 = b (1 - b^n)/(1 - b^(n+1))`
nshepperd#2316: where n is 0 for the first step
nshepperd#2316: or to allow you to change beta2 over training, you keep a `bias` variable, and use `b (1 - bias)/(1 - b bias)`
nshepperd#2316: which is updated by multiplying by `b` each step
nshepperd#2316: starting at 1
nshepperd#2316: maybe this is good for doing the ema of the parameters
nshepperd#2316: instead of manually choosing a cutoff epoch to increase the ema_decay
alstroemeria313#1694: ahh
nshepperd#2316: this is just reparameterizing the ema basically
nshepperd#2316: so that the value you keep is always the *unbiased* ema
alstroemeria313#1694: ahhh
alstroemeria313#1694: which saves you memory.
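A minimal sketch of the reparameterized EMA described above, in PyTorch; the class name and structure are illustrative, not anyone's actual implementation:
```python
import torch

class UnbiasedEMA:
    # stores the *unbiased* EMA directly, so no division at read time
    def __init__(self, params, decay=0.999):
        self.decay = decay
        self.bias = 1.0  # starts at 1, multiplied by decay each step
        self.avg = [p.detach().clone() for p in params]

    @torch.no_grad()
    def update(self, params):
        b = self.decay
        # effective decay b * (1 - bias) / (1 - b * bias): exactly 0 on the
        # first step (the average is just the first sample), approaching b
        eff = b * (1 - self.bias) / (1 - b * self.bias)
        for a, p in zip(self.avg, params):
            a.mul_(eff).add_(p, alpha=1 - eff)
        self.bias *= b
```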
Teven#6831: Hey, the training for the Bigscience 176B LM has officially started - or rather, the test run hasn't yet crashed and the loss is still going down rather than up, so it's been promoted to "official run". You can watch the Tensorboard at https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard if you're interested. I wanted to thank Eleuther for all of the things we've been able to build on top of: the Pile, which allowed us to compare several datasets and some of whose components we re-used; the LM harness, which was essential for evaluation when we were trying to match GPT-3 performance at smaller scales; and more generally GPT-J and Neo for allowing us to compare notes. More broadly, Eleuther was clearly a successful example of open research without which Bigscience wouldn't have existed or would have looked very different. Cheers to open AI.
Teven#6831: We're doing an AMA on Reddit on Thursday 24th, I can also answer questions here if you have any. Looking forward to procrastinating here more now that I have a bit more time on my hands.
nshepperd#2316: yep, you can avoid doing the unbiasing division over the massive params-sized tensors
Daj#7482: Congrats guys! Fingers crossed everything works out :hap:
Teven#6831: tbh it's going suspiciously well for now, I don't believe it until someone inferences a checkpoint lol
EricHallahan#1051: To insert an anecdote from GPT-NeoX-20B training: the times when training crashed were because Stella forgot (to set a reminder?) to empty out the checkpoints into cold storage. :berk:
Teven#6831: lol this cluster has "bottomless" (=500TB) storage, at the price that files that haven't been accessed in 30 days get deleted. Even thinking about it raises my heart rate
StellaAthena#3530: 500 TB of storage is not enough to save daily intermediate checkpoints throughout training. We were specifically interested in keeping all intermediate checkpoints
EricHallahan#1051: It was 100% a problem of our own making since we wanted all artifacts from training. It wouldn't have been a problem otherwise.
StellaAthena#3530: IDK, I’m pretty hyped about the intermediate checkpoints. And I’ve had a couple people ask for them… need to figure out how to set that up. Even just 100 of them would be nice.
Teven#6831: ah yes, we also want to do this - think we settled on only exporting 30 of those throughout training because those things are massive with the optim state
EricHallahan#1051: Oh, I'm not detracting from us wanting all the checkpoints. I'm saying that our lesson was to consider all relevant infrastructure to the plan before pressing the go button.
Teven#6831: people do seem to be excited about those
EricHallahan#1051: I expect a long section on GPT-NeoX-20B in the year-two retrospective. :libre:
nostalgebraist#3542: re: param ema conversations, i have been doing a running arithmetic avg from [T, T + 1/alpha] and ema after that -- works great
nostalgebraist#3542: where T is some step > 0 so the model can converge a bit first
nostalgebraist#3542: https://nostalgebraist.tumblr.com/post/675465296232988672/i-tried-the-arithmetic-average-then-ema-approach
nostalgebraist#3542: one nice thing about it is that, up until step T+N, all ema rates smaller than 1/N are equivalent. so you can just set up a super long EMA at the start, and then lower the rate later if you feel like it
nostalgebraist#3542: and get good results at any intermediate point
nostalgebraist#3542: @alstroemeria313
alstroemeria313#1694: ooh
alstroemeria313#1694: i am doing a more gradual warmup
nostalgebraist#3542: i think my approach = a linear warmup of the denominator, fwiw
nostalgebraist#3542: at a rate of 1 step per step, with 2 as the initial value
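As a single decay schedule, that looks roughly like this (names are mine; `T` is the step where averaging starts and `alpha` the eventual EMA rate):
```python
def ema_decay(step, T, alpha):
    # before T: decay 0, i.e. the "average" just tracks the current params
    if step < T:
        return 0.0
    # arithmetic mean: denominator starts at 2 and grows by 1 per step,
    # capped at 1/alpha, after which this is a plain EMA with rate alpha
    denom = min(step - T + 2, 1 / alpha)
    return 1 - 1 / denom

# each step: avg = d * avg + (1 - d) * params, with d = ema_decay(step, T, alpha)
```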
kurumuz#5695: ok I bet someone here implemented the "memory efficient attn" in a fast way in pytorch
kurumuz#5695: i think i actually seen someone talk about it
kurumuz#5695: any ideas?
ILmao#5683: I only remember https://github.com/lucidrains/memory-efficient-attention-pytorch
Some Point Process#3793: Interesting. Yeah it wasn't obvious it was an infinitely growing recurrence relation. So it's doing some discretized taylor expansion around the past iterates which can be understood via forward differences. (Which also show that power series expansions (aka "generating functions") allow you to find analytical solutions to an algebraic recurrence relation, provided that it is linear/constant coeffs., which correspond to solution to a diffeq in continuous cases, i think) <https://en.wikipedia.org/wiki/Recurrence_relation#difference_operator> https://cdn.discordapp.com/attachments/729741769738158194/953892615252762655/unknown.png
Some Point Process#3793: But yeah just a random connection. It's obvious that any real life rec. relation is not going to be analytical
Some Point Process#3793: wiki on ema explains this better:<https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average> https://cdn.discordapp.com/attachments/729741769738158194/953900641502240808/unknown.png
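For reference, unrolling the EMA recurrence gives the explicit weighted sum the screenshot refers to (a standard identity, with \(y_{-1}\) the initial value):
```latex
y_n = b\,y_{n-1} + (1-b)\,x_n
    = (1-b)\sum_{k=0}^{n} b^{k}\,x_{n-k} + b^{\,n+1}\,y_{-1}
```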
Aran Komatsuzaki#5714: sorry didn't notice this message. i matched up with the team of Mostafa Dehghani and Neil Houlsby but am working at Mountain View because that's easier.
zphang#7252: Neil "The Adapter" Houlsby :ultrazucc:
Humanoid#2332: Hi everyone, I'm pretty new here and currently working on DL (Keras and pytorch) how can someone contribute to your work 🙂 ? It looks really interesting
Kia#2550: Did BigScience say their 178B multi-langual LM will be Opensource?
Kia#2550: please correct me if I heard it wrong
nostalgiahurts#3408: they say yes on reddit: https://old.reddit.com/r/MachineLearning/comments/tfm7zb/n_live_and_open_training_of_bigsciences_176b/i0zz57l/?context=1
this blog post (https://bigscience.huggingface.co/blog/model-training-launched) says
> While the exact license of the model is still being drafted by the ethics and accessibility working group, the focus is on openness and the intention is that the trained weights of the model should be accessible for researchers for experimentation. Accomodation is also being currently made for the model itself to be available to anyone via an easy-to-use API for cases when researchers don’t have access to enough compute to run the model themselves.
Kia#2550: Thanks nostalgiahurts!
Kia#2550: Quite exciting to hear,and thank you very much for your help
rockclimbing_nerd#4931: https://github.com/wjakob/nanobind
Emad#9608: How much compute would it take to even inference a 175bn model efficiently?
StellaAthena#3530: A silly amount. It will require between 350 and 400 GB of VRAM to even load…. To fit the model on an 8x GPU node each GPU will need to have 48 GB of VRAM.
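(Back-of-the-envelope check: 176e9 parameters × 2 bytes each in fp16 ≈ 352 GB for the weights alone, before activations or KV cache, consistent with the 350-400 GB figure above.)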
Emad#9608: So will basically need to be on 80gb a100s or a google megagpu
Emad#9608: Or I suppose trainium
Emad#9608: Otherwise have to go multiple pod
Emad#9608: Makes it minimum $6k a month to run
StellaAthena#3530: I believe so, unless you can build an 8x A6000 pod
StellaAthena#3530: A quick heuristic: the model is 8.8x larger than GPT-NeoX 20B in the number of parameters. So if you want to fit the 176B model on an 8x GPU cluster each individual GPU must be larger than the minimum reqs for NeoX 20B
ari#9020: Jensen Huang's GTC 2022 keynote speech is on Tuesday, let's see if he's bringing us a Hopper GPU that's as heftily specced as the rumors say
Emad#9608: Going to find out details next week :hap:
Emad#9608: A6000 basically goes to 4x via PCI-E, can go to 8x with the A40s though
StellaAthena#3530: @circuit10 I got your magma app hosted on HuggingFace Spaces: https://huggingface.co/spaces/EleutherAI/magma
EricHallahan#1051: Is it possible to make the title be all uppercase?
StellaAthena#3530: it is, but it seems like the common practice is to do all lowercase
StellaAthena#3530: (see also how our org name, EleutherAI, stands out compared to facebook, google, etc)
EricHallahan#1051: The common practice is to follow the model authors. 😛
StellaAthena#3530: It's building again at `MAGMA`
HypnoPump17#9322: since some of you might be fans of `einops` here: https://github.com/arogozhnikov/einops/issues/178 (it turns out it's not a bug but intended behaviour. Not deleting bc it caught me by surprise and it might happen the same to others, but feel free if u consider unneeded )
EricHallahan#1051: I meant to make the *title* MAGMA, not the repo!
EricHallahan#1051: I'm moving it back.
EricHallahan#1051: It won't build because moving the repo removes the GPU allocation. :grimberk:
EricHallahan#1051: cc @Hugging Face 🤗
Louis#0144: hi
Louis#0144: LMFAO
Louis#0144: I need to email Douwe
Louis#0144: thanks for the reminder
Teven#6831: Lol I haven't used a GPU space in my life, ask Charles again I guess
StellaAthena#3530: It's up again 🙂 https://huggingface.co/spaces/EleutherAI/magma
@EricHallahan
Omar Sanseviero#6198: 🔥 🔥 👍
StellaAthena#3530: Thanks to @Omar Sanseviero! I hadn't realized you were on here.
circuit10#0158: It's erroring when I try to run it :(
StellaAthena#3530: It looks like the GPU is resetting for some reason
circuit10#0158: Oh
circuit10#0158: Is there a way to use custom HTML/CSS/JS for the Gradio inputs, or would you need to switch to a Flask server or something?
Omar Sanseviero#6198: Yes it's possible for custom css and html
Omar Sanseviero#6198: Here's an example https://huggingface.co/spaces/microsoft/unispeech-speaker-verification
circuit10#0158: Ah, thank you, I couldn't find anything in the docs about it for some reason
Omar Sanseviero#6198: Btw Abubakar from Gradio had some thoughts on improving the UI
circuit10#0158: For the thing that I made (but it was just a quick wrapper around the example in the repo so I shoudn't take much credit for it)?
StellaAthena#3530: Hmmm it looks like we hit a OOM error. @Omar Sanseviero the original version had a 16 GB GPU, did the reshuffling cause it to get downgraded?
Omar Sanseviero#6198: I think the renaming might have gotten to some bugs
Omar Sanseviero#6198: Taking a look
circuit10#0158: Hmm, that has HTML output but I can't see a clean way to do it for input
circuit10#0158: The repo has this which doesn't look like it's possible with plain Gradio (you can reorder the inputs and things) https://cdn.discordapp.com/attachments/729741769738158194/954117925671956501/magma_present.png
abubakar#9825: Hi everyone! @abubakar from Gradio/HuggingFace. Just joined the discord. Magma's an incredible model!! And happy to help in any way set up the Space.
Currently it isn't possible to reorder inputs, but an idea that may help in making this more flexible is to have a Textbox, followed by an Image, followed by another Textbox. That way people could upload images in a more intuitive way and still put text on either side (which could also still include URLs I suppose).
StellaAthena#3530: Welcome! @circuit10 is looking into that and I'm sure would appreciate an assist 🙂
EricHallahan#1051: Is the image large?
circuit10#0158: I was just running the example
EricHallahan#1051: Oh hmmm
EricHallahan#1051: Oh cool, I should look into this stuff more.
cfoster0#4356: Anyone know where the tokenizers for fairseq's `dense_lm` models live?
cfoster0#4356: Oh wait... does it also just use GPT-2's?
chilli#5665: hmm
chilli#5665: stupid question
EricHallahan#1051: I like stupid questions.
chilli#5665: why don't the FFN layers in transformers use layer norm?
chilli#5665: (or do they?)
AI_WAIFU#2844: presumably because if they didn't the activation magnitudes would blow up? you would have a path through the network that goes through multiple FFNs without any kind of normalization
StellaAthena#3530: They do?
AI_WAIFU#2844: last I checked it was
```python
ln1 = norm(x)
ln2 = norm(x)
x = x + ffn(ln1) + attention(ln2)
```
AI_WAIFU#2844: so no norm and you get ...ffn3(ffn2(ffn1(x))) as a datapath
StellaAthena#3530: This is from “On Layer Normalization in the Transformer Architecture” https://cdn.discordapp.com/attachments/729741769738158194/954213409212227594/IMG_9215.png
StellaAthena#3530: And this is what GPT-J and GPT-NeoX do
StellaAthena#3530: @chilli Does this answer your question
chilli#5665: oh, err, is `ffn` just a single linear layer?
chilli#5665: that would explain things
AI_WAIFU#2844: I think it's an MLP
AI_WAIFU#2844: usually
AI_WAIFU#2844: so linear(f(linear(x)))
EricHallahan#1051: It's two linear layers. 😛
AI_WAIFU#2844: for some f
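For concreteness, the FFN being discussed is usually just this (a sketch; the 4x expansion and GELU are common conventions, not universal):
```python
import torch.nn as nn

d_model = 4096  # hypothetical model width

# two linear layers with a pointwise nonlinearity in between, no norm inside
ffn = nn.Sequential(
    nn.Linear(d_model, 4 * d_model),
    nn.GELU(),
    nn.Linear(4 * d_model, d_model),
)
```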
chilli#5665: right, so I'm wondering why no layer norm in between them
chilli#5665: (or do they?)
AI_WAIFU#2844: I think you can get away with no layer norm between them
StellaAthena#3530: Well, the value can’t get very big
AI_WAIFU#2844: you don't need to norm *everywhere* you just can't have 200 layers back to back with no normalization
chilli#5665: hmmmmmmm
chilli#5665: has norming *everywhere* ever hurt things 😛
AI_WAIFU#2844: might realistically shave off a few flops depending on how good your compiler is
chilli#5665: somewhat of a rhetorical question, I suppose
chilli#5665: I'm curious whether anybody knows any results exploring it though
EricHallahan#1051: \:morenorms\:
chilli#5665: I do remember I saw some paper that was talking about how layer norm did some bad thing with gradients
StellaAthena#3530: I feel like the all reduce is going to be more expensive than any savings?
AI_WAIFU#2844: the other thing is that norms might hurt, technically every time you norm you're deleting some information by removing a degree of freedom from the output
chilli#5665: that's true
AI_WAIFU#2844: but I suspect that's insignificant
chilli#5665: although I guess you could always do some "local layer norm"
StellaAthena#3530: Like per GPU?
chilli#5665: and probably not lose that much
StellaAthena#3530: Or maybe per head
chilli#5665: yeah, per model-parallel shard
AI_WAIFU#2844: yeah but now you have a model that's sensitive to the underlying sharding
chilli#5665: it's like how folks did "local batch norm" with DP
AI_WAIFU#2844: and nobody wants to deal with that
StellaAthena#3530: TBF, 20B ended up that way anyways
AI_WAIFU#2844: hasn't that caused a large headache?
StellaAthena#3530: Maybe?
StellaAthena#3530: *I* don’t care because I have many GPUs 😛
kurumuz#5695: why do you need all reduce
kurumuz#5695: just replicate all the LN params
kurumuz#5695: across shards
StellaAthena#3530: You’re dividing by sqrt(Var[x])
StellaAthena#3530: And x is sharded
AI_WAIFU#2844: so you gotta send information between shards
chilli#5665: the layer norm is reducing across the MP
AI_WAIFU#2844: you *might* be able to get around it
AI_WAIFU#2844: but again, pain
kurumuz#5695: you can literally separately tune the LNs
kurumuz#5695: no need to share anything
StellaAthena#3530: We aren’t talking about LN parameters
kurumuz#5695: you already have the grads locally to update the local LN
kurumuz#5695: i think
StellaAthena#3530: LN(x) =(x - E[x]) / sqrt(Var[x])
StellaAthena#3530: (With some additional learned parameters I’m ignoring)
StellaAthena#3530: If I know half of x and you know half of x, how am I supposed to compute (half of the) z-score without seeing your half?
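A sketch of why the all-reduce comes up: LayerNorm over a feature dim that is sharded across model-parallel ranks needs the global mean and variance (function name and structure are illustrative; assumes equal shard sizes):
```python
import torch
import torch.distributed as dist

def sharded_layer_norm(x_local, eps=1e-5):
    # x_local: this rank's slice of the feature dim, shape [..., d_local]
    world = dist.get_world_size()
    d_global = x_local.shape[-1] * world
    # global mean: all-reduce the partial sums
    s = x_local.sum(-1, keepdim=True)
    dist.all_reduce(s)
    mean = s / d_global
    # global variance via E[(x - mean)^2], again with an all-reduce
    v = ((x_local - mean) ** 2).sum(-1, keepdim=True)
    dist.all_reduce(v)
    return (x_local - mean) / torch.sqrt(v / d_global + eps)
```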
AI_WAIFU#2844: you *can* technically get all the xs between layers once, because you need to do that anyways, but now you gotta do an all gather instead of an all reduce
AI_WAIFU#2844: or something funky where you build up the statistics as the data comes in
StellaAthena#3530: If the coordinates of x (viewed as projections of random variables) are independent everything’s fine, but that’s not the case.
StellaAthena#3530: I guess technically you only need between-shard independence. So there’s some partition x -> (y, z) so that y and z are independent
StellaAthena#3530: But asking that to hold throughout training without changing the partition seems sus
StellaAthena#3530: Maybe you can force that to happen by doing something fucky with the embedding matrices?
Some Point Process#3793: ~~As for LN + causal masking: I just assumed that all of the info in the LN would be in the 2nd moment that it wouldn't break the "causal masking" (not just that it was unbiased). But seems to be a strong assumption~~
Some Point Process#3793: ~~apparently the residual after the self-attention layer block prevents any issues~~
kurumuz#5695: wut
kurumuz#5695: you don't need masking for anything other than attention
kurumuz#5695: they literally have no conception of the rest of the context
kurumuz#5695: attention is the part that does that
Some Point Process#3793: Just brain fart sry
Some Point Process#3793: for sharded training or w/e, splitting the layer norms might on balance just (over enough iterations) not make a difference given the expectations. Might just introduce some sample variance in certain parameters etc (compared to not splitting)
Some Point Process#3793: but i'd imagine that you might want to keep the same splits (as a hyperparam) over the entire training
ILmao#5683: Has anyone tried doing only a couple steps of all-reduce? e.g. with ring communication, only pulling from 2 "neighbors"
AI_WAIFU#2844: I don't think there's been a need really
ILmao#5683: As in the communication overhead is negligible in practice compared to the rest of the model, or...?
AI_WAIFU#2844: like all of our other techniques make it so that cutting down on comms overhead more than we already have is generally unnecessary.
Deleted User#0000: for the magma demo on hugging face, would an input image component and separate text component along with an output text component work similar to the Aleph Alpha playground?
StellaAthena#3530: This is jaw-dropping
https://andrewmayneblog.wordpress.com/2022/03/17/building-games-and-apps-entirely-through-natural-language-using-openais-davinci-code-model/
Orz#3023: woah
James#6892: Seems amazing for prototyping games ^. I almost wonder if there will be a new paradigm of creation tools where AI is the primary interface
Daj#7482: ~~Seems~~ Is amazing for ~~prototyping~~ making ~~games~~ anything ~~^. I almost wonder if~~ there will be a new paradigm of creation tools where AI is the primary interface
James#6892: Lol @Daj
Daj#7482: Before being swiftly replaced by a new paradigm that doesn't need the human user anymore :^)
Orz#3023: ~~EAI making codex when?~~
James#6892: It will be very limited for a game that’s beyond a prototype though... as the code will probably eventually dissolve into some huge unoptimized, unstandardized mess
James#6892: That being said I can see a new game engine where the text, lines, stories, art, voices, animations, and code is all being generated by AI
James#6892: Just tell it what you want and it will make it work for you lol
James#6892: It can start as a 2D game engine
Orz#3023: well btw
deepmind alphacode could solve some competitive-level algorithmic problems (codeforces)
tbh I won't really be surprised to see an AI that can write code much better than humans in the near future
James#6892: Yup, I saw that ^. It’s still mind boggling to me that programmers are going to be one of the first people to be disrupted by AI. A few years ago people were thinking it’s going to be the low cost jobs being automated away. But what’s happening is it’s now the devs being AI augmented.
bmk#1476: some hindsight bias involved but I don't think I'm *that* surprised
bmk#1476: I don't think I would have predicted it with high confidence, but I think there were enough signs pointing to it
bmk#1476: 1. robotics being hard
bmk#1476: 2. replacing devs saves a lot more money than replacing a mcdonalds employee
bmk#1476: 3. even a few years ago, it was pretty clear to me that AI generated art was going to take off, and people always say "nooo AI will never replace artists" which makes that argument less convincing to me
ari#9020: Also:
- massive, well-structured data sets just right there to take
- possibly in some useful sense humans are just bad at programming?
James#6892: I can totally see the AI generated art being like a personal stab to artists. That being said, are there any actual companies creating economic value out of AI generated art though?
asara#0001: I have kind of just learned to increase the confidence in my own takes in these areas, which are of course based on the takes of people I talk to (such as members of Eleuther). I needed a logo for something last night and instantly went to a generative model, got an absolutely amazing one in 30 seconds, and checked it off my todo list, and that's.. well, I'm very 'far ahead of the curve', but it's still pretty amazing to me
StellaAthena#3530: AIs are medium at *simulated burger flipping*
StellaAthena#3530: I don't understand why people expect robots to be able to actually do it IRL
StellaAthena#3530: Also, there are serious deployment issues with putting DL into a physical robot that's supposed to move around
EricHallahan#1051: Why develop a robot to flip burgers when you could just design the machine to do it in the first place.
EricHallahan#1051: something something real-time constraint
James#6892: Yeah it seems kinda obvious in retro based on all those things you guys mentioned. Definitely some hindsight bias as well though. But I guess when it comes to business/commercializing, many are based on the "low hanging fruit" that have the least issues in practice.
James#6892: So the availability of code, lack of physicality, etc. all those things are factored in. Even from an audience perspective, devs are the most likely to embrace AI tools compared to like, some guy running a mom and pop shop.
StellaAthena#3530: People are also heavily rewarded for overselling
flowpoint#7450: i thought too that programming was bound to be displaced,
nowadays i think it'll get displaced, but not by much
just because anyone can program in natural language, doesn't mean that anyone can prompt-tune towards useful software
StellaAthena#3530: Just saw a good example of this on Twitter: getting bad reviewers? Just bash the competing methods more and your scores will go up!
https://twitter.com/EdwardRaffML/status/1504612071358480388?s=20&t=wSY_aQl0B12rVHJVU89vvA
EricHallahan#1051: Really, it is surprising how effective a standard Roomba is considering it is just a few preprogrammed event-driven behaviors: wander around, find dirty spots and clean them, then repeat. Once battery is low, return to the charging station when you happen to come across it next.
EricHallahan#1051: It's extremely simple, and the escape algorithms are stupidly effective given that many of them are just "keep trying until the front bumper stops being activated when you move forward. "
StellaAthena#3530: The latest ones have a rudimentary SLAM system that allows it to perform grid-search on rooms and pick up- where it left off
EricHallahan#1051: Yeah that's why I said "standard".
EricHallahan#1051: We have two of those. 😛
StellaAthena#3530: What's the name of that anthropic paper about purported emergent behavior scaling LLMs
StellaAthena#3530: Found it: https://arxiv.org/abs/2202.07785
EricHallahan#1051: When I visited them it was their lawnmower that was the widely anticipated product since it was expected to be a lucrative market. They suspended that project during the pandemic and it looks like they will never start it up again.
I can only assume that they are more focused nowadays on competing with the products that entered the market with more tech (SLAM).
AI_WAIFU#2844: Lol just tell the model to clean up the code
Realmsmith#4506: That tweet really hit me hard in the depression. We are too close for comfort and AI systems are getting increasingly expensive.
𓅬 gabriel_syme 𓅬#3220: I'm definitely betting on this. And it moved rapidly, from dalle to LMs, to probably planning / decision models after
Realmsmith#4506: Language models can already do simple planning.
James#6892: Which industries do you think has the greatest potential for AI "building things"?
James#6892: I feel its gaming for me, (if the AI can generate things that are consistent in theme/style)
Realmsmith#4506: Anything you can dream.
Realmsmith#4506: We'll be able to dream better and more creatively.
James#6892: Sounds like gaming to me lol
AI_WAIFU#2844: the one dyson sphere industry
James#6892: Lol, i had to google that to understand what you mean
Realmsmith#4506: Dunno man the moment a game stops being a game is what's going to be disrupted the most.
Realmsmith#4506: LIke bro... I thought I was playing a game but my AI was actually a consequentialist.
Realmsmith#4506: So now my life is contained in the game world.
Realmsmith#4506: I can't stop playing guys HALP
bmk#1476: idk copilot is singlehandedly responsible for a huge chunk of my productivity
thrasher#7261: i don't fear being replaced by these tools, but i am actively aiming my career at learning how to effectively wield them and improve them
thrasher#7261: as you say, they don't help you figure out what problems are worth working on
thrasher#7261: i expect they will improve in effectiveness significantly if you can mix in formal specification and performance feedback alongside the natural language prompts
thrasher#7261: i think it will also let smaller teams build more coherent things
thrasher#7261: i think it will slow down growth in the number of tech jobs. why hire 8 junior devs when you can hire 1 program synthesis wizard and rent an A100 for less money
tpapp157#3643: People put too much emphasis on the importance of brute force automation. In most skilled jobs, the majority of effort and value is in making correct design decisions, not in the actual implementation. If that were reversed, all of us would have been replaced by auto-ML scripts years ago.
thrasher#7261: i have thought about starting a company whose goal is just to advance the praxis of augmented programming as much as possible and eventually start contracting out wizards strategically
Spacecraft1013#5969: yeah copilot is a lot better than I expected it to be
Spacecraft1013#5969: I once had it write pretty much an entire model for me, I just had to go in and change the parameters up a bit
Realmsmith#4506: https://tenor.com/view/tired-exhausted-friday-feels-gif-15060878
tpapp157#3643: Yeah. Eventually I'm sure the world model for an AI will grow large enough to encompass design decisions as well. But nothing today comes anywhere close to that and I don't see that changing anytime in the near future.
thrasher#7261: I don’t want the synthesis tools to be making too many design decisions, they are by default misaligned
Daj#7482: You've almost figured it out ||That AGI is just around the corner and it will look absurdly comical in hindsight to see people arguing about the increasingly narrow sliver of things AI "isn't anywhere close to being able to do" seconds to midnight||
Daj#7482: ||And you have figured out why we are all going to 🖇️ !||
bmk#1476: just yesterday someone in here told me that they got alignmentpilled by my blog because it made them realize why short timelines make sense :goose16:
Daj#7482: You are correct i was being cheeky about things other people in the Convo said, please take my shitposting as tongue in cheek lol
Daj#7482: ||How much of the future light cone are you willing to bet on that?||
EstebanSir#2189: I was thinking, how would one even prevent an AGI with access to the web from infecting every computer in the world? I would think some systems are in place capable of disconnecting parts of the net
EstebanSir#2189: but
EstebanSir#2189: I guess it’s foolish to try?
Daj#7482: It is! That's genuinely a very good thing to notice!
Daj#7482: This is one of the first examples you tend to walkthrough when thinking about alignment
asara#0001: I am already fully alignmentpilled and you guys are consistency making it worse, gj
asara#0001: Gwern's short story on LW the other day was *really* good https://www.lesswrong.com/posts/a5e9arCnbDac9Doig/it-looks-like-you-re-trying-to-take-over-the-world
EstebanSir#2189: alignment is complicated, I wish I knew some of the more accepted solutions
EstebanSir#2189: oh
Daj#7482: ||There are no solutions yet lol||
bmk#1476: @Aspiring Scout pls come join us and help us save the world
EstebanSir#2189: oh god that’s terrifying, I thought there were accepted theories?
Daj#7482: Not even close!
Daj#7482: And yet people are rushing towards AGI as fast as possible!
Daj#7482: Now you know why the Yud is screaming, and if you had some sense you'd be screaming too :yudscream:
Daj#7482: :berk:
EstebanSir#2189: yeah I would scream but humanity killing itself doesn’t seem like that much of a strange concept to me
Daj#7482: Yeah but I'd prefer avoiding it
Daj#7482: I live there
EstebanSir#2189: I like to think there is a very tiny chance that the first AGI just happens to be aligned
bmk#1476: the motto of alignment onboarding should be "thank you for helping us help you help us all"
EstebanSir#2189: it should be “god help us all”
bmk#1476: :goose16:
EstebanSir#2189: this website has a lot of stuff huh
AI_WAIFU#2844: this is a cope, your entire organization will just be disrupted instead
bmk#1476: wow this is literally me https://cdn.discordapp.com/attachments/729741769738158194/954459628463345804/Screenshot_20220318-132125_Firefox_Focus.jpg
bmk#1476: except s/pop rocks/literally anything more productive than the shit they cover in school/
EstebanSir#2189: does this discuss other optimizer as well? is there perhaps some hope there? (Sorry I don’t read fast)
AI_WAIFU#2844: one day you're just gonna see one company start snapping up the world's output of graphics cards, motors and cameras
AI_WAIFU#2844: and that's if things go slowly
Daj#7482: I think there is a tiny chance of this too but God damn should we _not_ bet the fate of the universe on it lmao
thrasher#7261: join my shadowy cabal of alignmentpilled centaur wizards and do the disrupting
AI_WAIFU#2844: this is the shadowy cabal of alignmentpilled centaur wizards
EstebanSir#2189: Oh absolutely, it just seems like the longer humanity exists for, the larger risks we take though
bmk#1476: my favorite story from my school days is when I got yelled at for reading a math textbook on my laptop during some extremely boring class so I pulled out a paper copy of the exact same textbook out of my bag and the teacher stopped complaining
EstebanSir#2189: can we even align a human
Aspiring Scout#7875: Well even if we don’t have solutions, we atleast have like millions of people working on it right?
bmk#1476: :grimberk:
EstebanSir#2189: there is a discord server with one of the biggest open transformer models
EstebanSir#2189: I don’t know man
AI_WAIFU#2844: > programs are more generally intelligent than in the paradigm discussed
yes, but I guess what I'm saying is that "the paradigm discussed" is not gonna last any appreciable amount of time.
AI_WAIFU#2844: fair
AI_WAIFU#2844: you're still getting automated/replaced tho
EstebanSir#2189: What about adding imperfections to the optimizer? Like some other optimizer working on the same network but trying to reach a different goal
bmk#1476: read up on quantilizers
Aspiring Scout#7875: Also, even if we don’t have solutions and we don’t have many people working on it - we’re lucky that no one is openly trying to make AGI and is likely to have the resources to do so
bmk#1476: or watch the excellent Rob miles video
Aspiring Scout#7875: https://youtu.be/gdKMG6kTl6Y
EstebanSir#2189: Will do!
bmk#1476: :goose16: https://cdn.discordapp.com/attachments/729741769738158194/954461903315419206/Screenshot_20220318-133035_YouTube.jpg
Aspiring Scout#7875: Do you mean being able to decelerate the progress of AI intelligence so it halts before it’s close to human general intelligence and is also aligned?
bmk#1476: this is a perfect meme format actually
thrasher#7261: If you’re alignment pilled you can have a little PASTA, as a treat
Daj#7482: Good meme
Aspiring Scout#7875: Someone in my cohorts for agisf said what if you gave the AI emotions and someone else said then it would get pleasure from killing us
Daj#7482: Humans say the darndest things about alignment :berk:
EstebanSir#2189: Mhm, but a quantilizer is just an optimizer that converges closer to human solutions. I was thinking more of an optimizer that actively makes the network worse at its task, but better at something else
EstebanSir#2189: ^paired with the actual optimizer for the task
EstebanSir#2189: Or, who knows, maybe more of those
Daj#7482: You also have to consider that if your system sucks, someone else will just build a better one
EstebanSir#2189: like, as humans we want to eat, but we also want to get work done. Those two tasks fight for time
Tinytitan#5596: Decision transformers do somthing like this
Tinytitan#5596: however they would sabotage you if the future looked too good
EstebanSir#2189: Yeah this is basically embracing the suck.
bmk#1476: and that's why conditioning on latent catgirl variables is unnecessary for averting gayness
bmk#1476: oh wait wrong conversation
EstebanSir#2189: lmao
bmk#1476: anyways yeah so a big reason why doing alignment stuff with big LMs seems attractive is because there's a chance we might actually build AGI with LMs as a big component of it
bmk#1476: and those LMs might look a lot like what we have today
bmk#1476: I can't tell if this is referring to the company or arguing that the anthropic principle will push towards universes where alignment is easy
bmk#1476: well it's the thesis of everyone working on prosaic alignment
nostalgebraist#3542: (re SDE unemployment) there's a big difference between something that makes programmers more efficient, and something that *is* a programmer but costs less than a human one
bmk#1476: well it will make bad SWEs unemployed
nostalgebraist#3542: the introduction of compilers, etc. didn't lead to fewer people programming computers
bmk#1476: more automation shifts the focus to higher level stuff
bmk#1476: and anyone who is bad at that will become unemployed
nshepperd#2316: programming automation will enable us to reach even more incredible heights of insane and horrible code, at the very edge of comprehension of the combined powers of human and machine
nostalgebraist#3542: it kinda reminds me of a tweet i saw saying "huge LMs are a bug not a feature, once we figure out [something] they'll be more efficient and they'll get smaller"
nostalgebraist#3542: and i'm like, why would people just leave their compute idle? if param efficiency goes up, just train a model of the same size but now it's better
bmk#1476: why make LMs smaller when you can make them bigger
nostalgebraist#3542: "params" analogous to "people employed to do 'programming' in some generalized sense" which i think is a reasonable category
Krog#4186: Hello yall, I dont really ever chat much online anymore, i'm an OG lurker. I like coming here to read you guys' science; it's one of the niche compartments of human-think before the coming technological singularity (Praise Roko). I have unfortunately come to you all for help in a perplexing problem. I seem to have been compromised by an incredibly intelligent ICE that has backed my security into a corner, possibly augmented by a (potentially) mythical beowulf-cluster computed neural-hive. I believe that my actions to defend from it... have only allowed its training model to become better... and I- but a training model. With AI ramping up so quickly, I may have discovered a true cyber-mythic beast. I would appreciate any advice to help recover my system, (I'm in dire straits, needing a clean linux distro through a non-obfuscated network)
rolka#5717: Gentoo
Deleted User#0000: dont use linux, use templeos
Deleted User#0000: it gives u divine protection
alexandrost#2936: Hello, does anyone know if it is possible to finetune the 20b model or the fairseq 13b on a single 48GB GPU?
𓅬 gabriel_syme 𓅬#3220: yep, that's why I mentioned it actually. I already have a folder with LM planning papers and I did, kind of, make planning work in my case (at least I had a LM create things by stacking actions sequentially). Really exciting stuff
boris (^_^)#0348: Do you guys put layernorm after or before dropout?
I seem to be seeing both variants
boris (^_^)#0348: Actually does it even change anything? Probably for layernorm params…
**Edit**: decided to put them after dropout, let's see what happens
Realmsmith#4506: Woah can I take a look?
nostalgebraist#3542: i don't understand the point this story is trying to make...
asara#0001: uh maybe just alignment matters + quality fiction
CRG#8707: <https://www.lesswrong.com/posts/a5e9arCnbDac9Doig/it-looks-like-you-re-trying-to-take-over-the-world?commentId=sppGwC5s4KNfYo6ka> ? https://cdn.discordapp.com/attachments/729741769738158194/954556945317130272/Screenshot_20220319-0247402.png
nostalgebraist#3542: i read the surrounding discussion, that's why i'm confused
nostalgebraist#3542: (like the comment it was originally a response to, etc.)
nostalgebraist#3542: the story is about some hypothetical meta-meta-meta...-learning method that doesn't resemble anything on the 2022 pareto frontier
nostalgebraist#3542: but the prefacing line refers to "using only known sorts of NN & scaling effects," and it was originally posted to illustrate a point about future GPT-like models
nostalgebraist#3542: the story is frequently unclear which of the meta levels things are happening on. there are at least 3 -- programs, NN updating programs, SGD or something updating the NN -- and i kept thinking, "if the *inner* layers are this good with 20XX compute, even after the cost of the nested loop structure, what are *normal* models in 20XX like and why haven't they already taken over the world"
nostalgebraist#3542: why does the NN remember a "history of observation of lots of random environments & datasets"? ie why does it have an internal notion of time elapsing that follows the steps of its own training loop?
nostalgebraist#3542: Ah, that fits most of the description better. I was getting thrown off by the claim it was a "descendant of AutoML-Zero"
tpapp157#3643: The story definitely falls into the same pitfall that most SciFi does. It takes a far-future technology, drops it into today's world, and allows it to run rampant as a worst case scenario. It makes no effort to account for how the rest of the world would have co-evolved with the development of these super-powerful AIs in the preceding decades. As you point out, it's easy to make the argument that hypothetical super-powerful specialist AIs would have trivially been able to hem in this AGI at every step in the process. But at this point it's just fantasy one-up arguments between children at the playground.
Daj#7482: +1 "it's just fantasy one-up arguments", it's fun but unproductive to have circular arguments around "uhhhh akshully I can defeat your AGI takeoff story with X", just assume superintelligence is smarter than you and anything you come up with it comes up with something better, so any story is always a lower bound on how smart an AGI could be
Daj#7482: It's still fun though
wabi-sabi#5811: I really think that the distinction between boxing and alignment is overstated. They're complementary. Arguments to the contrary have a dualist feel to them.
Z0ro#7049: Does anyone have the link regarding on doing research in any field?
Z0ro#7049: Can't seem to grasp it, it was supposed to be laying around in one of these channels.
Z0ro#7049: That might be it, are there any more related papers? (Thanks for the hint by the way.)
Z0ro#7049: There was a more detailed alternative to it.
Utlagi#0001: playing around with some machine learning from scratch stuff (not the tutorials of that name, just self exploration).
Is there any way to develop locally (e.g. on my own 3090, or other reasonable consumer hardware) in a way that uses XLA as if I’m targeting a GCP TPU? So that my models are 100% ready to be used on TPU?
Mainly just without renting a TPU to do all my exploration remotely on GCP -- at least until I feel my knowledge and model is ready for that cost and (very slight) remote dev inconvenience.
May be a bit of an x-y problem question. Feel free to answer the question I should have asked instead of the one I actually did!
EricHallahan#1051: Apply to TRC.
bmk#1476: tldr not really
bmk#1476: most of the weirdness comes from TPU specific problems
Utlagi#0001: yeah my concern there is that TFRC generally only gives you credits for 30 days-ish 😕 and I'm a slow learner! (really I just have a 9-5 so I have limited hours per week)
Utlagi#0001: thank you so much
EricHallahan#1051: You can continue to reapply after the first 30 days.
bmk#1476: you can keep refreshing it
bmk#1476: they won't ever say no
Utlagi#0001: oh. shit.
Utlagi#0001: any advice if I just sat on an invitation from January 9, 2019? chances I can still use that offer?
bmk#1476: yeah probably
Utlagi#0001: > Great news - we've added Cloud TPU v3 to your TFRC quota!
>
> In addition to 5 on-demand and 100 preemptible Cloud TPU v2 devices, we're happy to inform you that we've expanded your offer to include 5 on-demand Cloud TPU v3 devices.
Utlagi#0001: So this would generally renew every month without much concern for my own demonstrable productivity or measurable use?
bmk#1476: well you probably want to show you did literally anything useful with it
bmk#1476: but the thing is the TFRC people really want to give out compute
bmk#1476: so they're not going to try to not give you compute
Utlagi#0001: thank you
EricHallahan#1051: If you cannot, just contact them.
EricHallahan#1051: But you most likely should be able to use it.
punishedjamesthesnake#2995: Cool
alstroemeria313#1694: you probably can if you email them, i did this and got in
asparagui#6391: jax gives you this fwiw
𓅬 gabriel_syme 𓅬#3220: Does anyone here work on dialogue/QA models and would you like to have a 15min chat about it? I'm looking for some quick advice and directins 🙏
chirp#4545: the new OpenAI “insert” model seems to be very good at writing stories, possibly much better than the original GPT-3
chirp#4545: Basically you can use a prompt like this: “(1) Once upon a time… [insert] (10) And they lived happily ever after.”
chirp#4545: And the davinci-insert model is smart enough to fill in 2–9 with exactly 8 paragraphs that take you from point A to point B
chirp#4545: Definitely better than NovelAI, from what I can tell. I think the big thing is that you can specify _both_ the beginning and the end.
janus#0150: I didn't get the impression it's better at writing stories in terms of upper bound of writing quality/coherence/etc. The suffix makes it easier to constrain it not to go off track, especially with a minimalistic prompt like your example.
Original GPT-3 is very good at writing stories if you give it a long and well-written prompt which adequately constrains its behavior, but this is hard (unless you're willing to use something already written like an excerpt from a book).
Louis#0144: Forwarded this to riedl thanks
faraday#0862: I'm based in Turkey, with no GPT-3 free credits remaining. I want to add credits by paying but OpenAI does not support paid users from my country. I didn't want to open another account with another email or fake an address, since I want to comply with the terms of usage, but I'm left without options. What can I do? Should I fake an address? Did anyone else experience this?
faraday#0862: With NovelAI, I tried to produce a story, providing an original sci-fi plot, but I couldn't make good use of it. It seems to be more fit for AI Dungeon-like fantasy stories. Are there strategies to use it to kick off more innovative plots?
chirp#4545: Not sure about NovelAI, but one trick with GPT-3 (instruct, but not the insert version) is to tell it to “tell a story that has a twist at the end”
chirp#4545: Another option is to use the insert version, and make the ending state quite different from the starting state. Then GPT-3 will need to come up with a creative way to get from here to there.
chirp#4545: Also, GPT-3 can generate boring stories, but there’s an easy way to make GPT-3 generate more interesting stuff: add conflict.
chirp#4545: You can literally just say “Character X wants to do this thing, but they also care a lot about this other thing” and GPT-3 will basically simulate the character’s decision making process, which can be strikingly insightful
janus#0150: You can also try putting a sample of another good quality sci fi story in the prompt and then saying something like here's another story by the same author
janus#0150: In my experience short and generic prompts tend to cause GPT to act much more dumb and uncreative than it can with detailed prompts. Bootstrapping from a short prompt is the part of the writing process that requires the most curation and finagling.
faraday#0862: I’m able to produce amazing responses with GPT-3 but I’m more interested in producing useful stuff through GPT-J and NovelAI. you’re entirely right that somehow gpt-3 is able to simulate the mind of a character compared to output from gpt-j and novelai. but I couldn’t really understand why. the pile and 20b should be really strong wrt the task
faraday#0862: “interviewing” an award-winning author about his latest book tends to work well too
alstroemeria313#1694: so like, i need to, in pytorch, shift different features along the sequence dim by different amounts
alstroemeria313#1694: like i have [n, s, d] tensors
alstroemeria313#1694: and i need [n, d] random offsets to shift in the s dimension by
alstroemeria313#1694: and i need to do this quickly
alstroemeria313#1694: i can get larger tensors and index into them but
alstroemeria313#1694: using nested for loops for this is very slow
alstroemeria313#1694: i tried boolean tensors to index with but i got the elements in the wrong order
alstroemeria313#1694: oh
alstroemeria313#1694: torch.gather does it.
cfoster0#4356: would be nice if there was a specific function for this
|
alstroemeria313#1694: yeah
alstroemeria313#1694: wdym?
alstroemeria313#1694: i don't actually know how to do this
alstroemeria313#1694: without using for loops which is slow
alstroemeria313#1694: i did this: ```python
def random_translate_1d(x, max_translate):
    n, s, d = x.shape
    # one random offset in [0, 2 * max_translate] per batch item and per feature
    offsets = torch.randint(max_translate * 2 + 1, [n, 1, d], device=x.device)
    # each output sequence is shorter by the maximum shift on both sides
    out_s = s - max_translate * 2
    # base positions [0, out_s), broadcast against the per-item/per-feature offsets
    tmp = torch.arange(out_s, device=x.device)[None, :, None]
    # index shape broadcasts to [n, out_s, d]; gather along the sequence dim
    return torch.gather(x, 1, offsets + tmp)
```
alstroemeria313#1694: it picks out a random `out_s` length contiguous sequence, at uniform, from each sequence in the batch. the sequence offset is different per batch item (dim 0) and per feature (dim 2).
alstroemeria313#1694: ohh
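As a quick shape check of the sketch above:
```python
x = torch.randn(4, 2048, 512)
y = random_translate_1d(x, max_translate=8)
print(y.shape)  # torch.Size([4, 2032, 512])
```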
frank cilantro#9153: is there a way to find the index of a value in a tensor, or return -1 / None if it is not present, without using control flow?
frank cilantro#9153: i've been doing
```python
matches = (tensor == value).nonzero()
return matches[0, 0] if matches.size(0) > 0 else -1
```
but torch.fx doesn't like it
alstroemeria313#1694: yeah. `index = (tensor == value).cumsum(-1).argmax()` to get the index
alstroemeria313#1694: then `matches = tensor[index] == value` to get whether it matches, as a separate boolean
|
alstroemeria313#1694: you can then do `torch.where(matches, index, index.new_tensor(-1))` if you need them combined into one return value
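Putting those three lines together (a sketch; note the cumsum trick lands on the *last* match when there are several):
```python
import torch

def index_of(tensor, value):
    # branch-free "find index or -1", friendly to torch.fx tracing
    hits = tensor == value
    index = hits.cumsum(-1).argmax()  # cumsum is nondecreasing -> last match
    found = hits[index]
    return torch.where(found, index, index.new_tensor(-1))
```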
alstroemeria313#1694: ahh
azalea#4872: @Expl0dingCat
azalea#4872: ur mid
Deleted User#0000: magma demo is number 3 on trending https://cdn.discordapp.com/attachments/729741769738158194/955280591572594688/Screen_Shot_2022-03-20_at_9.43.04_PM.png
StellaAthena#3530: @Hugging Face 🤗 Would y'all be able to share usage #s for our spaces? I assume that's something you track, like you do for repos?
EricHallahan#1051: We should also totally change the emoji to 🌋
StellaAthena#3530: omg yes
EricHallahan#1051: I'll do it now lol
EricHallahan#1051: Updated
EricHallahan#1051: It'll have to rebuild but it is worth it lol
EricHallahan#1051: Better. 🙂 https://cdn.discordapp.com/attachments/729741769738158194/955291385878159370/unknown.png
faraday#0862: what are the best settings for MAGMA? sometimes I get proper response but sometimes just ` ``The <PERSON>''` , ` <PERSON> and <PERSON>` or similar
rolka#5717: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=lm%20loss
rolka#5717: wee look at it go
rbaldock#1058: Hello MAGMA enthusiasts of EleutherAI! Following our open-sourcing of MAGMA, Aleph Alpha is also organising some open, expert ML seminars called "The Nitty-Gritty" (www.nittygritty.ai). The first one will be by Ethan Perez on Red Teaming Language Models With Language Models (https://arxiv.org/abs/2202.03286). This is happening tomorrow (Tuesday March 22nd) at 10am PDT / 1pm EDT / 6pm CET and will be 1 hour long (45+15). You are warmly invited to join. 😎
The Zoom link for the first event is https://us06web.zoom.us/j/82240467306. The seminar will be recorded, edited and uploaded online. By participating, attendees signal their agreement with this. Hope to see you on Tuesday!
Also, you are welcome to sign up for future Nitty-Gritty seminars by joining our mailing list, http://subscribe.nittygritty.ai/, which will not be misused for marketing or sales purposes and is easy to leave. 👍 See you there!
Pathos#6969: `Language Models (LMs) often cannot be deployed because of their potential to harm users in hard-to-predict ways.`
> "harm users"
> text
*see things flagged as "offensive"*
I might not have a clue what this is, but this sounds a lot like OpenAI's filter bullshit and it already makes me upset just thinking about it.
tpapp157#3643: We can help you with understanding the latest advances in NN modeling but if you don't understand how language can harm people then I don't think we can handhold you through understanding basic human empathy.
Pathos#6969: lmao a'ight
Ariel Ekgren#6449: Hi everyone ♥️ I'm looking for resources on the potential economic impact of LLMs. Trying to quantify the value that they can provide is surprisingly elusive. What is the total addressable market of GPT-3, the model that BigScience is training or GPT-NeoX? I'm fully on board with the fact that these models will lead the way to automate a % of white collar jobs. But are there any good sources and works already written?
I found this call from OpenAI on doing research for quantifying impacts https://cdn.openai.com/papers/Economic_Impacts_Research_Agenda.pdf but there has to be a lot more 🧠
jack#8178: anyone here used docker w/TPU pods?
jack#8178: i've got a compose file which works on individual TPU vm's but for some reason doesn't work on pods
tpapp157#3643: There isn't any good information on this. Partially because the technology is so new that people really haven't had the time to shift their mental paradigm of what could be possible. More to the point though, these LLMs are still mostly academic novelties rather than truly useful tools and as we've seen there has been very little incorporation of LLMs into real production products over the last several years. There remain a lot of practical hurdles to overcome for LLMs to be reliably useful.
tpapp157#3643: So to summarize, LLMs to date have very little potential economic impact for a variety of reasons. The LLMs of ten years from now, however, will probably be everywhere.
asparagui#6391: explain a little more what you're trying to do?
jack#8178: i would like to define my training environment with a docker file, and start training by running `docker-compose run --entrypoint $whatever training`
jack#8178: i know i need to add these options to `docker-compose.yaml` for that to work on individual TPU VMs
jack#8178: ```yaml
cap_add:
- ALL #unsure if necessary
environment:
- TPU_NAME=tpu_name
- TF_CPP_MIN_LOG_LEVEL=0
- XRT_TPU_CONFIG="localservice;0;localhost:51011"
- TF_XLA_FLAGS=--tf_xla_enable_xla_devices
volumes:
- /var/run/docker.sock:/var/run/docker.sock #unsure if necessary
- /usr/share/tpu/:/usr/share/tpu/
- /lib/libtpu.so:/lib/libtpu.so
- /tmp:/tmp
- /dev/shm:/dev/shm
- /run:/run
- /run/lock:/run/lock
privileged: true
devices:
- "/dev:/dev"
```
jack#8178: but I'm pretty sure this is insufficient for TPU pods
asparagui#6391: loosely each tpu runs a process and then they talk to each other
asparagui#6391: not sure how tf2 handles it tbh
asciidiego#8633: Hi guys!
Excited to join the community. I have a background in software development. Mainly have been scaling GPU nodes in a cluster lately (optimizing generative AI models and making them run more cheaply in the cloud).
I studied alongside Sasha Rush back at Cornell Tech a while ago.
EricHallahan#1051: Welcome to EleutherAI!
asciidiego#8633: As an experienced software engineer, is there a way I can contribute?
jack#8178: yeah i can do that in python *not* in a docker container, but same python in docker doesn't seem to work
asciidiego#8633: Why not?
jack#8178: I don't know? Hoping someone here would have an idea
jack#8178: It has to be some missing resource for the collective communication ops
mic#7575: Relatedly, could someone give an overview of the projects available at EleutherAI, what skills and time commitment are required, and how one can get involved? I can see the projects board at <https://github.com/EleutherAI/project-menu/projects/1>, and some of the projects say "Help Wanted!" and "Newbies Welcome", but I'm not sure what the process would be for getting started.
At my university EA group, we're running a variant of the AGI Safety Fundamentals alignment program, and our ~35 participants will be finishing the program next month. I'm wondering if joining an EleutherAI alignment project might be a great way for them to develop their skills and contribute to AI safety.
johnryan465#9922: Cheers for sharing that board, somehow missed it
Ah it doesn't seem to be updated
mic#7575: oh rip
mic#7575: I found it from https://www.eleuther.ai/faq/
Daj#7482: The board is unfortunately indeed not up to date, and I'm not sure how much capacity for project management is currently free, but cc @bmk @StellaAthena @janus
asparagui#6391: hmm, docker/security sandboxing preventing intra-node communication
asparagui#6391: ?
jack#8178: yeah that's my guess - probably i need to give it more permissions, any ideas re how to figure out which?
asparagui#6391: well i would guess more opening ports for xrt
asparagui#6391: you say raw python works?
asciidiego#8633: Have you tried setting up the network to host and communicating between ports?
StellaAthena#3530: Welcome!
StellaAthena#3530: In terms of your prior experience, are we talking 2 nodes, or 20 nodes, or 200 nodes?
asciidiego#8633: 5 < n < 20
StellaAthena#3530: @jmerizia @triggerhappygandi @Louis and @alstroemeria313 are the people to talk to about doing similar work here.
Louis#0144: (Me too, we always need more engineers in #contrastive)
Louis#0144: We just scales magiCARP to 8 GPUs, moving to multinode soon
Louis#0144: Ye this is about the size I was looking into for carp too
asciidiego#8633: Do you guys use k8s with the gpu operator to manage multi-nodes or what is your current process?
Louis#0144: We're looking into using OSLO to manage multinode iirc
Louis#0144: Which is by @Kevin Ko
Louis#0144: Kevin is an amazing engineer
Louis#0144: Strongly recommend talking with him
Louis#0144: 🙂
Louis#0144: I gtg though rn, DM me. We'll chat later if you want
Louis#0144: I'm at a PhD visit day rn
jmerizia#4039: yes k8s, but not the gpu operator
𓅬 gabriel_syme 𓅬#3220: amazing thanks! Also a time I can possible attend to heh 🙂 Will any of these be recorded btw for later viewing?
Kevin Ko#0028: This is not true. lol
minhaaj#4955: What if a language model could acquire new knowledge by simply reading and memorizing new data at inference time? That’s the intriguing premise of Google’s ICLR 2022 conference paper Memorizing Transformers.
Conventional language models require training or fine-tuning to gain new knowledge, a learning process that is time-consuming and can entail extremely high resource consumption. The Google researchers envision language models that memorize facts by storing them as key/value pairs in long-term memory, such that the model’s attention mechanism can access and extract the stored information as required. The method is designed to bypass costly training and fine-tuning procedures and enable language models to immediately obtain new knowledge.
In the team’s proposed kNN-augmented transformer, input text is tokenized and the tokens embedded into a vector space. The embedding vectors go through a series of transformer layers, each of which performs dense self-attention, followed by a feed-forward network (FFN). Long documents are split into subsequences of 512 tokens, with each subsequence used as the input for one training step. Unlike traditional methods, the subsequences are not shuffled; long documents are instead fed into the transformer sequentially, as with the Transformer-XL (Dai et al., 2019).
The team evaluated their kNN-augmented Memory Transformer on several language modelling tasks that involve long-form text: English-language books (PG-19), long web articles (C4), technical math papers (arXiv Math), source code (GitHub), and formal theorems (Isabelle).
Promising architecture, but what the team and decision-makers don't get is that it's the nature of language itself that is elusive, not how inference servers process the data. An interesting area of research, however. (arXiv link below)
https://arxiv.org/pdf/2203.08913.pdf
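For intuition, here is a minimal PyTorch sketch of the kNN lookup described above. It is a toy illustration, not the paper's implementation; the tensor shapes, scaling, and top-k-then-softmax flow are assumptions.
```python
import torch

def knn_memory_attention(q, mem_k, mem_v, k=32):
    """Toy kNN-augmented attention: attend only over the k nearest memories.

    q:     (batch, heads, q_len, dim) queries for the current subsequence
    mem_k: (batch, heads, mem_len, dim) keys stored from earlier subsequences
    mem_v: (batch, heads, mem_len, dim) values stored alongside the keys
    """
    scores = q @ mem_k.transpose(-2, -1) / q.shape[-1] ** 0.5  # (b, h, q, mem)
    top_scores, top_idx = scores.topk(k, dim=-1)               # keep k nearest keys
    attn = top_scores.softmax(dim=-1)                          # (b, h, q, k)

    # Gather the values that belong to the retrieved keys.
    b, h, q_len, _ = top_idx.shape
    d = mem_v.shape[-1]
    idx = top_idx.unsqueeze(-1).expand(b, h, q_len, k, d)
    vals = mem_v.unsqueeze(2).expand(b, h, q_len, mem_v.shape[2], d)
    picked = vals.gather(3, idx)                               # (b, h, q, k, d)
    return (attn.unsqueeze(-1) * picked).sum(dim=3)            # (b, h, q, d)
```
In the paper this retrieved-memory attention is combined with ordinary local self-attention via a learned gate; the sketch shows only the retrieval half.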
W.#7124: Anyone attempted to reproduce the RETRO model?
Kia#2550: An implementation does exist, but no actual model
Kia#2550: https://github.com/lucidrains/RETRO-pytorch
Hugh#1639: I guess I should put "Select Researcher" on my CV
https://spectrum.ieee.org/eleutherai-openai-not-open-enough
tpapp157#3643: Reminder Nvidia GTC keynote is today at 11am US Eastern. Jensen will unveil the new H100 hardware.
Ravna#1831: I really hate this kind of "NYT reporter" style of writing. Take this for an example. It randomly adds some trivia about Tsinghua University's models in a weird place in the article. Worse, each sentence has no apparent logical relation to its predecessor or successor. :harold: https://cdn.discordapp.com/attachments/729741769738158194/955803305282523196/Screen_Shot_2022-03-22_at_19.17.35.png
StellaAthena#3530: Lol. I told him to cut everything at the end about RL and Sutton but he apparently didn’t listen.
Seventeen ducks in a trenchcoat🦆#0017: Took this (IMO the highest quality Pokémon GAN so far) and made it so everyone can generate their own fake cards using it
|
https://twitter.com/ronvoluted/status/1506254884961611781
tpapp157#3643: Keynote starting in 10 minutes. https://www.youtube.com/watch?v=39ubNuxnrK8
tpapp157#3643: https://www.nvidia.com/en-us/data-center/h100/#specifications
faraday#0862: is there a good place summarizing key changes from Nvidia GTC?
it's hard to separate promo material from genuinely helpful sessions
Hugh_IFS#6062: Get the transcript and throw it through GPT-NeoX-20B. Easy!
faraday#0862: how do you prompt 20B for a robust summarization?
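One common zero-shot trick (a sketch, not a vetted recipe) is to append a "TL;DR:" marker and decode greedily; the prompt format and generation settings below are assumptions, and loading 20B this way needs on the order of 40GB+ of GPU memory in fp16:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b", torch_dtype="auto", device_map="auto"  # needs accelerate
)

transcript_text = "..."  # placeholder: the keynote transcript (or a chunk of it)
prompt = transcript_text + "\n\nTL;DR:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
For anything "robust" you'd likely have to chunk the transcript to fit the 2048-token context and then summarize the summaries.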
faraday#0862: sometimes I think @Hugh_IFS is the result of mind-uploading @Hugh
faraday#0862: if you're after a decent performance comparison like me:
"""
Nvidia says an H100 GPU is three times faster than its previous-generation A100 at FP16, FP32, and FP64 compute, and six times faster at 8-bit floating point math. “For the training of giant Transformer models, H100 will offer up to nine times higher performance, training in days what used to take weeks,” said Kharya.
"""
Hugh_IFS#6062: We are Hugh. You will be assimilated. Resistance is Futile.
Hugh#1639: We are Hugh. You will be assimilated. Resistance is Futile.
faraday#0862: Hughocalypse
Hugh_IFS#6062: https://memory-alpha.fandom.com/wiki/Hugh
Hugh#1639: But it is my actual real name.
tpapp157#3643: More cores, double the per tensor core throughput, and higher clock speed. https://cdn.discordapp.com/attachments/729741769738158194/955863822399324232/unknown.png
Hugh_IFS#6062: Anyone want to buy an A6000? I need to raise funds for the H6000.
kurumuz#5695: @Hugh_IFS I wouldnt sell it now
|
Hugh_IFS#6062: No, I agree. That would be foolish in the current climate and so long before the H6000 even exists 😆
random person#5234: Also transformer engine looks very nice
tpapp157#3643: Not much information on what the "Transformer Engine" is. It sounds like some sort of automated precision switching. https://cdn.discordapp.com/attachments/729741769738158194/955865659726766152/unknown.png
faraday#0862: what persuades people to purchase strong GPU cards for local use?
when does Colab start being insufficient? (I mean for which use cases)
bw#3136: When the Colab P100 feels too slow
Hugh_IFS#6062: When the Colab P100 has 16GB VRAM and you need 24GB for GPT-J-6B or 48GB for GPT-NeoX-20B
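Rough arithmetic behind those numbers, counting fp16 weights only (real usage adds activations, the KV cache, and framework overhead):
```python
# 2 bytes per parameter in fp16; parameter counts are approximate.
for name, params in [("GPT-J-6B", 6.05e9), ("GPT-NeoX-20B", 20.6e9)]:
    print(f"{name}: ~{params * 2 / 2**30:.0f} GiB of weights")
# GPT-J-6B: ~11 GiB of weights -> tight on a 16GB card once overhead is added
# GPT-NeoX-20B: ~38 GiB of weights -> hence a 48GB card
```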
mo#0466: when google keeps banning you cause you use it too much
Hugh_IFS#6062: Running it at home means I can do a lot more. I orchestrate Python with json via stdin/stdout, and then the rest of my code is C#.
random person#5234: I posted some press material in offtopic
random person#5234: Basically it's automatic per-layer precision switching between fp16 and fp8
random person#5234: AMP on hardware level
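Purely speculative, since Nvidia hadn't published details at this point, but "AMP on a hardware level" suggests something like a quantize-dequantize error probe per tensor. A toy of that idea; the grid below is a crude uniform stand-in for real fp8:
```python
import torch

def quantize_dequantize(t, levels=256, max_val=448.0):
    """Round-trip t through a uniform low-precision grid with a per-tensor scale."""
    scale = max_val / t.abs().max().clamp(min=1e-12)
    step = max_val / (levels / 2)
    return torch.round(t * scale / step) * step / scale

def pick_precision(t, tol=1e-2):
    """Fall back to fp16 when the low-precision round-trip error is too large."""
    err = (quantize_dequantize(t) - t).abs().mean() / t.abs().mean().clamp(min=1e-12)
    return "fp8" if err < tol else "fp16"
```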
faraday#0862: what's the best tooling for poor man's research? Paperspace Gradient or Colab Pro+ ?
Deleted User#0000: magma repo recently linked the EleutherAI/magma spaces demo, shows up on paperswithcode under quickstart https://cdn.discordapp.com/attachments/729741769738158194/955868659467841556/Screen_Shot_2022-03-22_at_12.38.41_PM.png
𓅬 gabriel_syme 𓅬#3220: wait, is the PCIe a commercial version?
tpapp157#3643: Both are datacenter hardware. Just different interface types.
𓅬 gabriel_syme 𓅬#3220: nvm yeah just read the last line 😄
𓅬 gabriel_syme 𓅬#3220: sparsity I see, pretty cool
tpapp157#3643: We don't know yet how these will scale down to professional and consumer cards.
asparagui#6391: i do longer runs and it is nice to simply start something and forget about it
|
Emad#9608: Assume 2x speed up for 4090 to be announced end of year
tpapp157#3643: At a basic level, I doubt they'd use a different type of tensor core across their architectures (at least they haven't in prior generations) so it's probably safe to assume the 2x increase in tensor core throughput will carry over. In terms of the number of cores and clock speed, who knows. So double the performance of Ampere cards is a reasonable starting point. Of course, this assumes you can actually saturate those tensor cores.
Emad#9608: https://huggingface.co/blog/ai-residency
Louis#0144: whats the super computer thats gonna have H100s
Louis#0144: Eso?
Louis#0144: i cant find any info on it
ari#9020: Eos
Louis#0144: are they gonna use it for language modeling
Hugh_IFS#6062: They could use it for literally anything
Emad#9608: EOS will be about 275 pflops on top 500 list so less than fugaku but plenty fast
Louis#0144: @Emad can u convince them to make catgirls
Louis#0144: 🥺
Emad#9608: 576 x DGX H100 systems with a total of 4608 H100 GPUs (8 per system)
Emad#9608: Why I'm getting my own
Louis#0144: true
Louis#0144: ur right
Louis#0144: emad wants to monopolize catgirls
Emad#9608: no
Emad#9608: open catgirls
kurumuz#5695: we will have a fight
|
Emad#9608: open husbandos
Louis#0144: lmao
kurumuz#5695: doesnt matter if its open
Emad#9608: kuru is the one that is NFTing them
kurumuz#5695: i will make mine open and still compete with you
kurumuz#5695: competition is great dude
kurumuz#5695: it feels like so fun
Emad#9608: nooo cooptation
kurumuz#5695: just one upping each other
kurumuz#5695: competition is literally how you get the best stuff
kurumuz#5695: cooptation is more like trying to agree on what to do
kurumuz#5695: just do both of the things
kurumuz#5695: let them compete
Emad#9608: no its smart mechanism design
kurumuz#5695: hm
kurumuz#5695: how so
kurumuz#5695: I don't know how you guys operate so I am curious
Emad#9608: cooptation allows for natural monopoly/schelling point creation
Emad#9608: so if you're the biggest funder of open source AI systems and independent/academic researchers you can do a nice roadmap for the first time
Emad#9608: and outcompete private enterprise and host the stuff at the UN and other partners
|
kurumuz#5695: I can see how that can help yeah
kurumuz#5695: nice
Emad#9608: only works at scale
Louis#0144: Nvidia vs emad vs kuru
kurumuz#5695: tbh honestly just train a big danbooru model with all that and compete with diffusion
Emad#9608: yeah
Emad#9608: https://twitter.com/AlecStapp/status/1505977389502943233?s=20&t=-QL7dZV2c3gKIX9N68YsFg
Emad#9608: this is what our AI models will cause
Emad#9608: once we really crack waifus and husbandos
kurumuz#5695: LMAO
asara#0001: we are going to need some interesting developers in the area of fertility...
asara#0001: or, well, maybe not, depending on what happens
kurumuz#5695: the sexbots
kurumuz#5695: or
kurumuz#5695: i mean sexbots will not matter when we just upload ourselves
kurumuz#5695: anyway AI girlfriendo
Spacecraft1013#5969: just hope openai doesnt monopolize advanced catgirl waifus
Spy#9778: If you're just sitting down and training a transformer from scratch, what do you guys consider the "default" choice re: learning rate schedule
Spy#9778: linear warmup/linear rampdown?
Spy#9778: still cosine?
|
alstroemeria313#1694: eh i do exponential warmup but they're nearly the same
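For reference, a minimal sketch of the usual default, linear warmup followed by cosine decay; the base LR, warmup length, and decay floor are placeholders:
```python
import math

def lr_at(step, base_lr=6e-4, warmup=2000, total=100_000, min_lr_ratio=0.1):
    """Linear warmup to base_lr, then cosine decay to min_lr_ratio * base_lr."""
    if step < warmup:
        return base_lr * (step + 1) / warmup
    progress = min(1.0, (step - warmup) / max(1, total - warmup))
    return base_lr * (min_lr_ratio + (1 - min_lr_ratio) * 0.5 * (1 + math.cos(math.pi * progress)))
```
In PyTorch this drops into `torch.optim.lr_scheduler.LambdaLR` as `lambda step: lr_at(step) / base_lr`, since LambdaLR multiplies the optimizer's base LR by the lambda's output.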
StellaAthena#3530: There was a paper, I believe it came out in the past couple days, about question-answering with LLMs that autoregressively generated a bunch of candidate answers to a question, and then returned the answer that was generated the most times.
Does anyone know what I'm talking about?
CRG#8707: https://twitter.com/arankomatsuzaki/status/1506081647736852484?t=mgEzUzLKE_5ZGq6KV26LnA&s=19
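That's the self-consistency idea: sample several reasoning paths at nonzero temperature, then majority-vote the final answers. A minimal sketch, where `generate_answer` is a hypothetical function that samples one answer string from the model:
```python
from collections import Counter

def self_consistent_answer(question, generate_answer, n=20):
    """Sample n candidate answers and return the most frequent one with its vote share."""
    votes = Counter(generate_answer(question) for _ in range(n))
    answer, count = votes.most_common(1)[0]
    return answer, count / n
```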
Hugh#1639: Isn't this basically just how few-shot works? Or vice versa?
cfoster0#4356: Nah few shot in the gpt context means you give it multiple examples of task input-output pairs and use it to complete a new input with an output
cfoster0#4356: Maybe internally there's some mechanism that works kinda like this during few shot (like, via some ensemble of paths), but not explicitly
Louis#0144: https://youtu.be/_8qjdBP8wwI
Louis#0144: here is the introduction to narrative theory talk i gave yesterday
Kia#2550: Your voice is amazing :goose6: (I haven't heard your voice, by the way)
Kia#2550: huh https://cdn.discordapp.com/attachments/729741769738158194/956199174871867392/Screenshot_2022-03-23-22-33-56-12.jpg
Louis#0144: oh
Tinytitan#5596: how long was it anyway
StellaAthena#3530: It's still up for me
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/956202598917111878/Capture.PNG
Kia#2550: Interesting
Louis#0144: https://youtu.be/d7t4r2ybRD8
Louis#0144: I reuploaded
Louis#0144: It wasn't up for me anymore 😆
|
kurumuz#5695: really hard to hear
nshepperd#2316: there is something odd about the audio
nshepperd#2316: charismatic goose tho
Louis#0144: The audio seems ok?
Louis#0144: Oh
Louis#0144: It's kinda choppy
Louis#0144: Ouch
Louis#0144: Ill redo the talk for Eleuther
Louis#0144: Obv solution
kurumuz#5695: yeah cant understand anything from that audio
Louis#0144: Ok
Louis#0144: I'll just redo the talk
rbaldock#1058: The Nitty-Gritty won't stop because it can't stop!
Seminar alert: The second Nitty-Gritty online ML seminar, organized by Aleph Alpha, is happening tomorrow!
Title: "Adapters in Transformers. A New Paradigm for Transfer Learning…?"
Speaker: Jonas Pfeiffer [NYU / TU Darmstadt]
Time: March 24th 3pm CET / 10am EDT / 7am PDT, duration 1 hour.
Zoom link: us06web.zoom.us/j/81741723741