Website: https://nittygritty.ai/220324_jonas_pfeiffer.html
Please join us and be a part of the discussion!
To receive notifications about this and future Nitty-Gritty ML Seminars, you are welcome to join our mailing list: subscribe.nittygritty.ai
Thanks to everyone who came last time for Ethan Perez's seminar on Red Teaming Language Models with Language Models. For those who missed out, we'll be uploading the recording in the coming days.
cfoster0#4356: Looks like an interesting series! In the future can you ask a mod before advertising events here? Our general rule is no advertising, though this is the sort of thing we're typically happy to approve if asked
alstroemeria313#1694: hey how do i autodownload a file if it's not present and use the local copy if it's already downloaded?
i can't do <https://pytorch.org/docs/stable/hub.html#torch.hub.load_state_dict_from_url> because it's a pickle not a pytorch serialized file
inox#5400: I would bet $10 copilot will write that function for you
alstroemeria313#1694: i would be worried there were subtle bugs in it
EricHallahan#1051: That is an extremely valid concern, considering that my experience with the function that does this in torchtext was filled with bugs.
EricHallahan#1051: If it is no longer buggy you can try that though.
https://pytorch.org/text/0.12.0/utils.html#download-from-url
alstroemeria313#1694: if the file
alstroemeria313#1694: and we maybe want to put it in a good location
alstroemeria313#1694: also it has to check the sha256 hash
EricHallahan#1051: Yeah then I would look into https://pytorch.org/text/0.12.0/utils.html#download-from-url
alstroemeria313#1694: there is one in torch.hub()
alstroemeria313#1694: but it doesn't cache
alstroemeria313#1694: how does openai do it in their repos
alstroemeria313#1694: ah <https://github.com/openai/CLIP/blob/main/clip/clip.py#L42>
Deleted User#0000: is there a pip package for vqgan clip or clip guided diffusion, similar to https://github.com/thegeniverse/geniverse
Qq#7586: how does CLIP guided diffusion work with images that aren't at the model resolution? do you resize the image before passing to CLIP then resize the gradients back?
alstroemeria313#1694: we do several random crops of random size
alstroemeria313#1694: and resize them all to the CLIP input size
alstroemeria313#1694: like from 16 to 128 random crops
Qq#7586: ooh makes sense, thanks! I'm using 32x32 images so I guess I'll have to settle for scaling them up :P
DigThatData#7946: > **Video unavailable**
> This video was removed because it was too long
):
nshepperd#2316: what, again?
nshepperd#2316: isn't the max supposed to be like 10 hours
Kia#2550: @DigThatData
alstroemeria313#1694: you should do random translates
alstroemeria313#1694: as well
alstroemeria313#1694: like for ViT-B/32 CLIP, by up to 16 pixels in either direction, for ViT-B/16 CLIP, by up to 8 pixels in either direction
alstroemeria313#1694: (pixels being post resize to 224x224)
alstroemeria313#1694: one random translate per iteration improves quality a *lot*
alstroemeria313#1694: if you are scaling the model output up
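A minimal sketch of the cutout-plus-translate recipe described above (the names `make_cutouts` and `random_translate` are mine, not from a specific repo; real implementations often also schedule the crop sizes):
```python
import torch
import torch.nn.functional as F

def make_cutouts(image: torch.Tensor, n_cuts: int = 32, cut_size: int = 224) -> torch.Tensor:
    """Take n_cuts random square crops of random size, resized to CLIP's input size."""
    _, _, h, w = image.shape  # image: (1, C, H, W)
    max_size = min(h, w)
    min_size = max_size // 2
    cutouts = []
    for _ in range(n_cuts):
        size = int(torch.randint(min_size, max_size + 1, ()))
        y = int(torch.randint(0, h - size + 1, ()))
        x = int(torch.randint(0, w - size + 1, ()))
        cut = image[:, :, y:y + size, x:x + size]
        cutouts.append(F.interpolate(cut, (cut_size, cut_size), mode="bilinear", align_corners=False))
    return torch.cat(cutouts)

def random_translate(batch: torch.Tensor, max_shift: int = 16) -> torch.Tensor:
    """One random translate per iteration; torch.roll wraps at the edges (pad+crop also works)."""
    dy = int(torch.randint(-max_shift, max_shift + 1, ()))
    dx = int(torch.randint(-max_shift, max_shift + 1, ()))
    return torch.roll(batch, shifts=(dy, dx), dims=(2, 3))
```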
Qq#7586: thank you I'll try that out :)
DigThatData#7946: I adopted the convention that torch.hub uses, which is to store the model in a hidden "cache" subfolder in the users namespace. Check out how I modified AdaBins to auto-download:
* https://github.com/pytti-tools/AdaBins/blob/main/src/adabins/model_io.py#L7-L28
* https://github.com/pytti-tools/AdaBins/blob/main/src/adabins/infer.py#L107-L117
DigThatData#7946: oh lol just noticed that comment... I still need to send gdown a PR about that lol
DigThatData#7946: ...here I'll just highlight the important bit for download location.
```python
import os
from loguru import logger  # assumption: any logger with a .debug method works here

def dl_mymodel(dest=None, is_retry=False):
    model_name = "mymodel"
    logger.debug(f"Attempting to fetch {model_name} pretrained weights...")
    # Default to a hidden per-user cache folder, mirroring torch.hub's convention.
    if not dest:
        dest = os.path.expanduser(f'~/.cache/{model_name}/')
    ...
```
tammy#1111: so, i hear AI safety has more money than it knows what to do with
is there a standardized procedure for independent AI safety initiatives to ask for some of it ?
𓅬 gabriel_syme 𓅬#3220: this was one, I'm sure if you follow the forums / right people more will come up
https://www.lesswrong.com/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals
𓅬 gabriel_syme 𓅬#3220: others here will know more though
tammy#1111: oh yeah that's right
tammy#1111: fair enough
𓅬 gabriel_syme 𓅬#3220: I would imagine, like always, reaching out to people involved and maybe sharing ideas you might have could help. There's a pretty nice community in here too, in the Alignment channels.
Bedebao#4842: A good while ago EAI was looking for native speakers of other languages to proofread materials before integrating them into a multilingual Pile. Pretty sure I haven't been pinged ever since getting the role. So is that still on hold? Is it related to #polyglot?
Daj#7482: That project was effectively abandoned a long time ago
Daj#7482: I forgot we still had the role lol
Bedebao#4842: Let me guess, having so many different languages is bad for the tokenizer?
Bedebao#4842: Last I recall AI21's huge model has way too many tokens and that's why it performs badly.
Daj#7482: No just the main Pile contributors got burnt out and no one else wanted to put in the work to make the project happen lol
Daj#7482: The Big Science 176B model is multilingual and is looking fine
Daj#7482: (though it's not finished training yet so we'll see)
Bedebao#4842: I wonder why most big models pick 176B as the size.
Daj#7482: 1B bigger than GPT3? lol
Bedebao#4842: Or well, in that range at least. The question also goes for 175B instead of a round 200B.
newton#2015: but i am going on a very long vacation where i will have very little access to the internet,
can you guys suggest a playlist of cool ai papers and their implementations so i can get up and running and start contributing to your projects ?
𓅬 gabriel_syme 𓅬#3220: There are channels on the left with projects and/or domains of research currently ongoing. I would recommend seeing if one of them sparks your interest, and then going over the pins and some of the backlog discussion
Daj#7482: also https://discord.com/channels/729741769192767510/729741769738158194/801630685525835787
𓅬 gabriel_syme 𓅬#3220: Also, take a look here for ideas and work
https://github.com/EleutherAI/project-menu/issues
Daj#7482: Sort of out of date but there's good stuff in there
newton#2015: i am looking for a general overview of ai first, i am extremely new
Daj#7482: This is not the best place to ask for beginner advice, consider some of the servers in #communities
newton#2015: btw i found out about you guys from ieee spectrum
newton#2015: btw i already know the basics
newton#2015: i want to go the depths
newton#2015: like i have read the bert paper and i understand the basics but i want to learn by building it
Daj#7482: check out the reading list I posted
newton#2015: this one ?
Daj#7482: and otherwise just trying to get a transformer coded and running is a good first project
&.#0001: The Davinci-002 model from OpenAI supports a 4096 context window
&.#0001: How do you feel about the next step up for GPT-NeoX having a larger context window?
kurumuz#5695: just finetune it
&.#0001: One can finetune into having a larger context window? Or are you saying to use fine tuning instead of a long prompt?
kurumuz#5695: first one
kurumuz#5695: just be aware that it will be a lot slower and use a lot more memory
&.#0001: How much compute might that take? Also, is anyone working on 8 bit NeoX?
kurumuz#5695: :shrug:
ilovescience#3282: EleutherAI is in IEEE Spectrum? Was this recent?
StellaAthena#3530: Three days ago:
https://spectrum.ieee.org/eleutherai-openai-not-open-enough
ilovescience#3282: wow that's pretty awesome! I used to read IEEE spectrum all the time when I was younger lol
asara#0001: feels surreal seeing that original discord message in the article lol
ilovescience#3282: yeah who would think that a discord message would be in an IEEE spectrum article lol
asara#0001: this is what the future feels like (well, part 1 of part 1 of..)
ilovescience#3282: i wonder if this article will be in the next issue of the magazine... i will take a picture if it does lol
EricHallahan#1051: It's a screenshot of the FAQ.
EricHallahan#1051: I wanted something more than a blockquote for the year-one retrospective, so I replicated the CSS lol
ilovescience#3282: eh that's close enough
ilovescience#3282: it looks kinda nice on the website ngl
EricHallahan#1051: I purposefully never replicated the Discord light theme for the messages lmao
EricHallahan#1051: Since it is so cursed.
ilovescience#3282: yes agreed
EricHallahan#1051: It *does* exist however for the channel tags and mentions.
StellaAthena#3530: https://twitter.com/tronsgaard/status/1506658306395418624?s=20&t=V3x-bICmhWlJ-E-NOXa0xQ
asciidiego#8633: what exactly do you need? i am a core contributor there. perhaps I can help
newton#2015: among wired and popmech
spectrum is my fav
should I know any other mags ?
newton#2015: did i miss any ?
newton#2015: I know i did
newton#2015: but i kinda want your suggestions since your uname is @ilovescience
Tinytitan#5596: quanta?
newton#2015: a wonderful mention
newton#2015: also Smithsonian was good in the past but now meh...
45#2247: using only numpy? pytorch?
Daj#7482: Just do it in pytorch
Daj#7482: imo
45#2247: challenge accepted
45#2247: what's a good benchmark to check if it can learn something useful?
Daj#7482: Do generation with it and see how garbage it is lol
Daj#7482: Or compare it to similar sized GPT2 loss
kurumuz#5695: ye train on owt2 or something
kurumuz#5695: pile is too big to be a toy dataset
random person#5234: Just copy bert-base for architectural choices
! vikalexea#7105: imagine connecting an AI to a button (or deadly neurotoxin) and it decides if you pass (survive) or not (dead)
cfoster0#4356: Imagine not doing that :)
! vikalexea#7105: : (
! vikalexea#7105: replace the deadly neurotoxin with cake 😄
breathesmall#0882: Does anyone know why i might be getting a 'Killed' message and crash when in python i make a generator with pipeline('text-generation', model = 'EleutherAI/gpt-j-6B')?
It works fine with neo 1.3B
kurumuz#5695: running out of memory most likely
breathesmall#0882: How much memory should i need to have?
breathesmall#0882: I thought i saw only 16g online
breathesmall#0882: I'm running Ubuntu
breathesmall#0882: Have plenty of hard drive space free
breathesmall#0882: I did free -h
breathesmall#0882: It says i have 12Gi available
breathesmall#0882: Is that not enough?
EricHallahan#1051: You most likely won't be able to naively construct it in the pipeline. Instead, you should explicitly construct the model and tokenizer, then construct the pipeline from those.
See this thread for some background as to why:
https://discord.com/channels/729741769192767510/729741769738158194/892830861681102878
https://discord.com/channels/729741769192767510/729741769738158194/892832770714390548
https://discord.com/channels/729741769192767510/729741769738158194/892835301641310230
kurumuz#5695: pipeline abstraction is pretty bad
breathesmall#0882: Oh my. Thank you. So is this something i should be able to do on my PC or do I need better hardware?
kurumuz#5695: you need 16GB on your RAM or VRAM
EricHallahan#1051: You lose much access to the underlying model if you pass it naively. I generally recommend against the pipeline abstraction unless you need to throw something together quickly.
breathesmall#0882: Do you know of any good videos or tutorials that show how to do it without pipeline in python? I was just following a tutorial.
EricHallahan#1051: The best I can suggest is to read the Hugging Face Transformers documentation. You would probably have better luck finding support on the Hugging Face community forum, as mentioned in our FAQ. <https://www.eleuther.ai/faq/>
breathesmall#0882: Great, thanks man, i really appreciate it!
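For anyone following along, the explicit construction described above looks roughly like this (a sketch, not official guidance; `revision="float16"` is the half-precision branch of the checkpoint hosted on the Hugging Face Hub, and exact kwargs depend on your `transformers` version):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",          # half-precision weights: ~12 GB instead of ~24 GB
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,      # avoid materializing a second copy while loading
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("EleutherAI is", max_new_tokens=20)[0]["generated_text"])
```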
Leo Sanders#1157: Hi all! 👋 I have a question on your GPT-Neo-125M files here: https://huggingface.co/EleutherAI/gpt-neo-125M/tree/main
Leo Sanders#1157: Does your team still have GPT-NEO-125M in TF model format? I found the PT, Rust, Flax ones but not TF. I'm trying to see if I can avoid converting myself PT -> ONNX -> TF as I tried it on some other PT models and wasn't really successful :wrong_goose:
Teemochu#8740: especially if it's longevity-adjacent
Louis#0144: Rust????
kurumuz#5695: can you delete that please
kurumuz#5695: I don't think this is the place for it lol
bmk#1476: I deleted it
bmk#1476: not fit for #general
newton#2015: in meme perhaps ?
bmk#1476: #off-topic
newton#2015: i already started one using only Jax....
Leo Sanders#1157: Not sure what this one is :goose2: https://cdn.discordapp.com/attachments/729741769738158194/957268345021943829/IMG_0898.png
Leo Sanders#1157: Any idea how to get the TF version? I'm sure it's on someone's computer, maybe Leo Gao's? He was the one uploading the PT version it seems
Louis#0144: No idea
Leo Sanders#1157: https://cdn.discordapp.com/attachments/729741769738158194/957278697013010522/IMG_0899.webp
Louis#0144: Whats SOTA for: given a sentence X, negate X
Leo Sanders#1157: Does anyone know who in your EleutherAI team did the training of the original GPT-NEO-125M model?
Leo Sanders#1157: I reached out to Guillaume Becquin and he mentioned the original Neo models were in TF Mesh and then converted to PT.
newton#2015: https://discuss.huggingface.co/t/what-is-rust-model-ot-and-how-can-i-use-it/769
newton#2015: @Leo Sanders may help but i didn't read it....
Leo Sanders#1157: Thanks, I dont think I will use this format. I’m looking for the TF format.
Deleted User#0000: Has anyone trained a model on the Scihub database? I would think this would be a very useful project.
Deleted User#0000: maybe it wouldn't be possible to train it on all of scihub though
Deleted User#0000: due to its size
Deleted User#0000: and legality?
Deleted User#0000: what would be cool is a GPT based system that acts as a kind of search engine, where you can ask it questions and it can construct an answer with some sources, or ask a question and have it provide relevant articles/sources, trained on stuff like medical papers
Leo Sanders#1157: That’s what Google and Microsoft are working on
ewald#7730: what do you guys think of the OSCAR multilingual corpus?
newton#2015: a model that can write papers that pass peer review....
newton#2015: 😆
ac#1874: quick question about RL algorithms -- what's the difference between "iterated distillation and amplification" and "expert iteration"? or are these just two names for the same process? (i.e. of training a policy to imitate an expert, using the policy in some form of tree search to generate 'expert data' for the next iteration of the policy)
Metroproxyn#2769: Greetings! Really impressed with the idea & organisation of your project. I want to start contributing.
newton#2015: https://arxiv.org/abs/2009.06857
newton#2015: can anyone explain what's going on in here?
Deleted User#0000: why would you ask such an unspecific question
Kharr#7888: I think what's most interesting is the use of the `royal we` :berk:
James#6892: Can you provide more details on this?
Kharr#7888: Have you done a Google Search recently? It will provide information about most questions with linked source. Though it currently highlights the statements in the source instead of generating something.
James#6892: Yes, am aware of this in google. Was wondering if there was some separate product. Also not sure what Microsoft’s version of this is.
Kharr#7888: If you try Bing, it's very similar
alstroemeria313#1694: How well does the "train with dropout then use dropout in inference and sample a lot of results with different random units dropped out" thing work?
CRG#8707: There was: http://proceedings.mlr.press/v48/gal16.html
CRG#8707: Also section 2.1 of: https://arxiv.org/abs/1806.03335
CRG#8707: TLDR; apparently not very well?
alstroemeria313#1694: Ah.
CRG#8707: Paging some discussion in the yannic server: https://discord.com/channels/714501525455634453/719799140267327610/926413331114373140
johnryan465#9922: It does have the strength of being able to take function samples which can be useful for calculations of certain properties contrasting with GPs
johnryan465#9922: Used to good effect in https://arxiv.org/abs/1906.08158
johnryan465#9922: (full disclosure the author (Yarin Gal) is my head tutor at university)
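For concreteness, the technique under discussion ("MC dropout") amounts to something like this sketch (`model` and `x` are placeholders; note that `.train()` also unfreezes batch-norm statistics, so real code should enable only the dropout modules):
```python
import torch

def mc_dropout_predict(model: torch.nn.Module, x: torch.Tensor, n_samples: int = 32):
    """Average many stochastic forward passes with dropout left on at inference."""
    model.train()  # keeps dropout active; see caveat above about batch norm
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    model.eval()
    return samples.mean(0), samples.var(0)  # predictive mean and an uncertainty estimate
```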
mhop#8966: anyone know the latest state of art knowledge graph extraction algo? I remember playing around with OpenIE a few years ago.. not sure if there's been notable improvements
fe#0483: Anyone see this? https://semantle.novalis.org/
Leo Sanders#1157: https://medium.com/@nqabell89/microsofts-exclusive-gpt-3-license-3a48af54f921
EricHallahan#1051: *Internally screams at description*
StellaAthena#3530: (For those out of the loop, Elon Musk hasn't been involved with OpenAI for years, since before GPT-3 was even trained)
EricHallahan#1051: (*also feels the need to internally scream at the fact that it is a Medium article*)
Leo Sanders#1157: That illustrates the point of Microsoft’s involvement in NLP, replying to @James
EricHallahan#1051: You could of course directly cite it from the horse's mouth.
https://news.microsoft.com/2019/07/22/openai-forms-exclusive-computing-partnership-with-microsoft-to-build-new-azure-ai-supercomputing-technologies/
https://blogs.microsoft.com/blog/2020/09/22/microsoft-teams-up-with-openai-to-exclusively-license-gpt-3-language-model/
These don't address the claim that "Microsoft is working on retrieval" of course.
Leo Sanders#1157: You do have great articles on Medium look at this interesting take https://medium.com/@miketoole/it-has-been-twenty-years-since-fabio-killed-a-goose-with-his-face-on-a-roller-coaster-a87d51285890
EricHallahan#1051: This also entirely glosses over the fact that Microsoft has plenty of teams working on NLP internally; they don't just outsource to OpenAI.
James#6892: Still don’t understand why Microsoft is working on retrieval
James#6892: Yes they have exclusiveness but doesn’t mean it’s retrieval (though webgpt is doing this)
James#6892: Anyone know the price of the new h100s?
James#6892: Given it’s 7-30x better than A100 for transformers and language models, would be interesting to see the price
bnanrz#1693: I read somewhere it was in the range of 30k apiece
mhop#8966: I think microsoft's project turing does a good chunk of the NLP application work
mhop#8966: mostly into bing accessories i think
𓅬 gabriel_syme 𓅬#3220: let me know if you try it, wonder how diversity would look
IDK#1046: Where can I learn how models with 4-bit weights work?
IDK#1046: AFAIK there is no hardware support for that
Metroproxyn#2769: maybe this: https://www.mdpi.com/2079-9292/10/22/2823/htm
newton#2015: it's for low latency
there are some Xilinx cores; such systems use way more neurons to compensate for the lack of precision
IDK#1046: Oh, so no dedicated hardware, but they do this on FPGAs
IDK#1046: Isn't it more expensive than using gpu?
newton#2015: in workloads where the low latency benefits outweigh the cost it definitely makes sense....
newton#2015: also the training is done on GPU,
and then deployed on so-called "neural MCUs" and edge accelerators
IDK#1046: How low latency do they need?
newton#2015: there are some weird space magic/Haitian voodoo programs that take in a normally trained model and output a mixed-precision, fixed-precision or integer model using information theory magic tricks
IDK#1046: Can I have some links to this magic?
newton#2015: there's never enough of any performance metric, the day humanity becomes satisfied with what they have is the day progress stops
newton#2015: i am on vacation in a remote place with extremely low speed satmodem, when I am back in the city i will give you the sauce...
newton#2015: or if they fix the local coast guard microwave link that serves the place before I am back... they tell me that one had good speed but it's been broken for 3 months and they are fixing it....
newton#2015: god i hate elon the apartheid emerald mine baby so much,🤢🤮
newton#2015: microsoft is the most cringe company ever.
newton#2015: tds meaning towards data science ?
StellaAthena#3530: There are a lot of reasons to dislike Musk, but that's not one of them. He was abused by and ran away from his father and did not start his companies with money from blood gems.
thenightocean#6100: Exactly. There are better reasons to hate Elon.
https://twitter.com/esyudkowsky/status/1446562238848847877?lang=en
𓅬 gabriel_syme 𓅬#3220: Did we talk about this, I was out all day
https://statmodeling.stat.columbia.edu/2022/03/28/is-open-ai-cooking-the-books-on-gpt-3/
TheAloof#8651: Eliezer is just insufferable. He has become the Gary Marcus of alignment at this point, caught in a feedback loop of negativity and attention.
StellaAthena#3530: This very much comes across like Gelman thinks that humans are editing InstructGPT-3's responses before they're sent back to the user? It's also weird that, while he rightly criticizes the lack of public demo or external verifiability of LaMDA, there's zero acknowledgement that GPT-3 is the only >100B param model in the world which is widely publicly available. It's not free, but it is reproducible.
StellaAthena#3530: https://twitter.com/ThomasSimonini/status/1508467545577345026
Louis#0144: Oh!
Louis#0144: That's nice
Louis#0144: 🙂
StellaAthena#3530: @janus @bmk DTs just got much easier to use
alstroemeria313#1694: ooh
alstroemeria313#1694: I should see if I can get mine to extrapolate using superconditioning-like tricks
alstroemeria313#1694: Or well, achieve higher reward than I can otherwise, since it doesn't actually give me the reward I asked for (it gives me consistently lower)
nshepperd#2316: train it with reward dropped out 20% of the time so you can supercondition on it?
alstroemeria313#1694: i can try it without fine-tuning by picking two reward values to prompt with and interpolating further in the direction of the higher one
Aspiring Scout#7875: Why is that
Aspiring Scout#7875: Is it clickbaity?
EricHallahan#1051: Self-promotion
EricHallahan#1051: Which tends to come with pretty aggressive SEO and cross-promotion.
alstroemeria313#1694: I doubt the superconditioning like thing will work well also but it may be worth a shot
newton#2015: it's not about him benefiting from apartheid (which every white South African did), it's about his mindset being built in a society where apartheid mining moguls are A-OKAY everyday neighbors.
anyone from a privileged group who subscribes to a culture of selective blindness about their privilege is not a good person.
newton#2015: yeah like his lack of understanding of vacuum thermodynamics, or what a vacuum seal is, or the bend radius of something traveling 6 times the speed of sound, or his stupid car tunnel project
....
or like him paying out Tesla's founder to get to call himself "founder".....
should I go on ?
asparagui#6391: @IDK ampere supports 4 bit quantization
asparagui#6391: the t4 processor does as well
asparagui#6391: here is a presentation i did, demoing running 4-bit quantized resnet software on a processor (t4) with int4 hardware
asparagui#6391: https://brettkoonce.com/talks/tensorflow-and-swift/
IDK#1046: Which one of them?
asparagui#6391: all ampere chips
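As a toy illustration of what 4-bit weights mean (symmetric post-training quantization; a sketch of the idea, not how TensorRT or any specific int4 kernel does it):
```python
import torch

def quantize_int4(w: torch.Tensor):
    """Map float weights onto integers in [-8, 7] with one per-tensor scale."""
    scale = w.abs().max() / 7.0
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)  # int4 values stored in int8
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(64, 64)
q, scale = quantize_int4(w)
print((w - dequantize(q, scale)).abs().max())  # worst-case quantization error
```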
newton#2015: 🔥
chilli#5665: btw, @kurumuz , you might be interested in this
chilli#5665: https://loop-tool.glitch.me/bolt.html#N4Igxg9gJgpiBc4IDsDOAXABAbWQGkwFsBdTAXkwBt0A6VAT0ICMJLUAKAHRGSO4EpOySGiyoAXuUwAmAFSyAjAAYhAbiEiMmAIZTqNdDDQQATuwn8DEdsn7rhKLUz21DxsxavtCdjY6xgUto0hACulOxMlqihhDa+wiB4IJQQEAAOAProJjBwiABu8MiZAGwAzOVCRYRl5QAsQsjw0vDwSnht7V1tTS1tCp1tSj3wfQDs8ACseIPts0PdCqrl0qodSqrK0vXrs6oAHHvLux1teHihyCaslHgFMGDopgCW4jBDfVMtnXPD8MtVntNttTvsjh0Tnt4PU8OVOhcrjdKHcHk9Xu9Pshmgp4PCAYsRoC1pDDsdVGDzkjbvdHs8TG8Pr1sTD5mz/sTgVslDtofALsyXvNBkIIDC8EIQABfIA=
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/958131834997321808/unknown.png
chilli#5665: (not my project, but somebody I work with)
chilli#5665: it's an interactive loop-nest optimizer (kinda like Halide/TVM) that generates WASM, all in your browser
kurumuz#5695: :thonk: @chilli https://cdn.discordapp.com/attachments/729741769738158194/958136593779130408/unknown.png
chilli#5665: never claimed it was bug free 😛
kurumuz#5695: lol
chilli#5665: instructions are here
chilli#5665: https://loop-tool.glitch.me/viz.html
kurumuz#5695: so the middle part is your IL?
chilli#5665: yeah, it's the loop nest
kurumuz#5695: and the one on the right is WASM i assume
kurumuz#5695: nice
chilli#5665: yeah
kurumuz#5695: I never did wasm actually, but @aero should be very interested
aero#1357: damn 👀
chilli#5665: I'm prolly gonna write some kind of "hands on guide to loop optimizations" with this
chilli#5665: like, it's really trivial to show, say, the effects of loop reordering
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/958137149352472616/unknown.png
chilli#5665: or of loop fusion
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/958137202532032552/unknown.png
chilli#5665: or show a rfactor
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/958137284933345290/unknown.png
chilli#5665: you can also see that in this link I show an example of tiling/blocking
𓅬 gabriel_syme 𓅬#3220: Nice it's out, I'll be using this!
𓅬 gabriel_syme 𓅬#3220: So I have a question, might be naive. Is there any architecture for image(-text) generation (GAN, VAE, Diffusion, etc.) that also outputs a confidence value of some kind for each pixel in the image? I'm not entirely sure what that might be or how it's calculated ofc, perhaps a confidence wrt a prompt or class?
EricHallahan#1051: I mean there are two ways of approaching this: looking at each as a continuous value, or looking at each as a categorical over a discretization of the entire dynamic range.
𓅬 gabriel_syme 𓅬#3220: I was thinking of it as a way to intervene in a targeted manner on certain regions of the image, smh
EricHallahan#1051: For example, WaveNet does the latter.
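As a sketch of the categorical option (the shapes and decoder head here are hypothetical; WaveNet does this over audio sample values rather than pixels):
```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 256, 32, 32)  # (B, levels, H, W) from some hypothetical decoder head
probs = F.softmax(logits, dim=1)
confidence = probs.max(dim=1).values  # per-pixel probability of the chosen intensity level
entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)  # lower entropy = more confident
```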
EricHallahan#1051: I was going to say "that machine looks familiar," then I realized that I had good reason to. :berk:
https://youtube.com/watch?v=sl61lHwo3YE
CKtalon#7792: I was wondering if they would review the DGX A100 workstations, but guess they went one step further
atordvairn#0674: anyone having a gpt-j api ?
Kia#2550: https://goose.ai/docs/api/completions
ethan caballero#6044: https://twitter.com/ethancaballero/status/1508880761314807824
DigThatData#7946: what's "superconditioning"?
alstroemeria313#1694: https://twitter.com/RiversHaveWings/status/1478093658716966912
DigThatData#7946: so if I understand correctly, you're trying to amplify the influence of the class-conditional residual relative to the dataset marginal distribution captured by the null token? and I guess this has the effect of sharpening the class-conditional modes? my head has been swimming this morning, sorry if I didn't fully grok that twitter thread. having the kind of morning where I'm like struggling to read papers I literally read and understood yesterday.
alstroemeria313#1694: yes
alstroemeria313#1694: it is to deal with the effects of training on paired image/text datasets where the text often does not match the image very well
alstroemeria313#1694: but we can maybe try it on decision transformers too
alstroemeria313#1694: like "trajectories with high reward take <x> action here more often than trajectories with medium reward, take <x> action even more often"
DigThatData#7946: you said you tried LAFITE's pseudo-text thing and it didn't help much, right?
alstroemeria313#1694: oh we got it to work with CLOOB
DigThatData#7946: also, the broader strategy you're describing here reminds me of permutation testing
alstroemeria313#1694: @Drexler is showing off a CLOOB image embed conditioned model in #art
alstroemeria313#1694: what's that?
DigThatData#7946: you know how bootstrapping simulates the data distribution? permutation testing simulates the null
DigThatData#7946: you compute some test statistic with the labels shuffled relative to the data, so the conditional relationship is broken
alstroemeria313#1694: oh
Downy Thornapple#9035: Has AI gotten better at this task (creating new lists of things with appropriate names and colors)? It's been a long time since I've seen a new one, or one that really impressed me. https://cdn.discordapp.com/attachments/729741769738158194/958444097876672572/277248104_10159932368515119_1108389758109224258_n.png
DigThatData#7946: this lets you estimate a p-value for basically any statistic non-parametrically
DigThatData#7946: ...as an example. this technique isn't limited to p-values
alstroemeria313#1694: well we aren't actually fitting a second model with shuffled labels
alstroemeria313#1694: we are just masking the label sometimes
DigThatData#7946: the broader idea is breaking the conditional relationship to make it clearer how the conditional behavior differentiates from the marginal behavior
alstroemeria313#1694: the model explicitly gets told that the label is masked
alstroemeria313#1694: ah
DigThatData#7946: yeah the more I describe this, I think I'm just analogizing permutation testing to contrastive learning generally
alstroemeria313#1694: oh this isn't actually contrastive even
DigThatData#7946: isn't the subtraction term a kind of auto-regressive contrastive loss?
alstroemeria313#1694: it's not a loss?
alstroemeria313#1694: we just do it during inference
DigThatData#7946: oh
alstroemeria313#1694: during training we mask/drop out the condition
alstroemeria313#1694: like we learn a log p(tokens) and a log p(tokens|text)
alstroemeria313#1694: using bayes' theorem, we can subtract these to get a log p(text|tokens)
alstroemeria313#1694: then construct an artificial "sharpened" distribution which we add back
alstroemeria313#1694: specifically we sharpen it by forming cond_scale * log p(text|tokens)
alstroemeria313#1694: i.e. p(text|tokens)^cond_scale (with the appropriate normalizing constant)
alstroemeria313#1694: and sample from the base p(tokens) conditioned on *that*
DigThatData#7946: thanks for the detailed explanation, I'm pretty sure i get the gist. I'll have to re-read this later though, my head's still swimming. maybe I'm still acclimating to adding allergy meds back into my routine
alstroemeria313#1694: :)
DigThatData#7946: I guess one thing that's still unclear to me: why is this trick limited to inference? couldn't you apply this during training to learn the "sharpened" distribution directly?
DigThatData#7946: or is the dual sampling thing that this requires just really expensive to backprop through?
DigThatData#7946: you know what, you've already given me a lot of your time explaining this. don't worry about it, go be productive :p
alstroemeria313#1694: you could *distill* the sharpened distribution into some model but i think you have to actually learn it first
alstroemeria313#1694: i don't know how to do it directly
alstroemeria313#1694: oh
alstroemeria313#1694: Yeah the way to do it directly is to train an unconditional base model plus a separate classifier model.
alstroemeria313#1694: Then take the classifier's output distribution and sharpen it (multiply its output logits by cond_scale).
alstroemeria313#1694: This trick lets you reuse the same model
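Putting the above into a sketch (classifier-free-guidance style; `model`, `null_text`, and `cond_scale` are placeholders for whatever your setup uses):
```python
import torch

def supercond_logits(model, tokens, text, null_text, cond_scale: float = 3.0) -> torch.Tensor:
    cond = model(tokens, text)         # log p(next token | tokens, text), up to a constant
    uncond = model(tokens, null_text)  # log p(next token | tokens), with the text masked out
    # By Bayes, cond - uncond = log p(text | tokens) + const; scaling that term by
    # cond_scale amounts to sampling against the sharpened p(text | tokens)^cond_scale.
    return uncond + cond_scale * (cond - uncond)
```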
ewald#7730: wow this is cool
jcjc#6291: Anyone know if there exists a repo for the positional embeddings benchmarking experiments done in: https://blog.eleuther.ai/rotary-embeddings/ specifically the OpenWebText2 + 125M one. Tried searching in the gpt-neoX repo but found nothing. Thanks 🙂
apolinario#3539: For everyone curious about "Make-a-Scene" and whether or not it was going to be open sourced: I asked one of the authors (and Meta's Research Director) on Twitter whether the code or pre-trained models were going to be released and this is what she answered: https://twitter.com/deviparikh/status/1508929281111441409
DigThatData#7946: ah yes, the ol' openai response
cfoster0#4356: I hope they know that it's okay to just say no
Kharr#7888: the unconditioned logits = logits without any context for GPT-style models? EDIT -- never mind, I read further and you need to look at a null token for that distribution
Kharr#7888: In a pretrained GPT model you could probably use the EOT token to get the unconditional distribution
alstroemeria313#1694: you can use the previously sampled tokens as the context if you have them
alstroemeria313#1694: for the unconditioned logits.
alstroemeria313#1694: (bc they are supposed to be not conditioned on the *prompt* but are conditioned on previously sampled tokens still)
alstroemeria313#1694: you just need the token distribution in the dataset (or EOT i guess but that's not quite correct) for the unconditional logits for the first sampling step
Kharr#7888: Very neat, I'll have to try it and see if it works out of the box on some of these models.
Kia#2550: "but we are discussing it" sounds like "It's definitely a no, and we might forgot we talked about this"
EricHallahan#1051: There was a dedicated branch.
StellaAthena#3530: Those results should approximately replicate on the main branch of GPT-NeoX. I can dig up the exact commit used if that's important to you though?
𓅬 gabriel_syme 𓅬#3220: yeah lmao
jcjc#6291: Thanks for the info! Is it the "benchmarking" branch?
jcjc#6291: Yes. Could you please point me to the commit used? Thanks a lot!
EricHallahan#1051: No.
EricHallahan#1051: I'm looking for it.
EricHallahan#1051: I don't see the branch?
EricHallahan#1051: If it was accidentally deleted, it wouldn't be a huge loss.
That version of GPT-NeoX had a bug where T5 bias was implemented slightly suboptimally anyway. I honestly would rather see a replication of the results when it comes to the current version since it was fixed relatively recently.
newton#2015: as i dont have permission to post memes i guess this place is as good as any other...
https://youtu.be/YnL9vAFphmE
comedy gold😝
EricHallahan#1051: > i guess this place is as good as any other...
#off-topic is better...
newton#2015: how do i get meme permission ?
newton#2015: maybe on on things i post later....
AI_WAIFU#2844: you need to git gud, go do some cutting edge research and post updates here
newton#2015: so no meme posting for beginners?
cfoster0#4356: #memes is mostly for niche original content
newton#2015: then it should have been called insider humor, memes are meant to be stolen, reposted and shared again and again
cfoster0#4356: I don't have permissions to change the title :berk:
StellaAthena#3530: you need to git gud, go do some cutting edge research and post updates here
cfoster0#4356: Damn you right :harold:
Orz#3023: :harold:
newton#2015: so no memes for beginners? huh,
newton#2015: *memeposting
Metroproxyn#2769: What do you think about this? https://github.com/nebuly-ai/nebullvm
Orz#3023: post it in #off-topic ig
If people here like it, they might redirect it to #memes
newton#2015: can we at least have a meme thread in #off-topic or better yet my own little corner inside #off-topic .....
Kia#2550: No.
newton#2015: please 🙏🥺 https://cdn.discordapp.com/attachments/729741769738158194/958710198791446588/unknown.png
newton#2015: uwu
Kia#2550: Please continue conversation in #off-topic
Kia#2550: No Shitposting in here please
𓅬 gabriel_syme 𓅬#3220: @chilli any thoughts on this?
https://github.com/nebuly-ai/nebullvm
Louis#0144: A weird q but does anyone here have interest in working on a LaMDA like model?
Louis#0144: Like a 20b LaMDA
Louis#0144: I don't have time rn but maybe in a few months
Louis#0144: Kinda don't wanna do it alone tho 😅
Louis#0144: I'm suggesting this since I need it for some of my research in Ellie's lab :berk:
GABRIEL fatiede#7028: Is there any open source code generation tool like copilot and codex that's available
Louis#0144: iirc we trained a 20b code model but haven't released it yet
Louis#0144: There's a 6b model for python that kuru trained though
Louis#0144: I forgot what it's called
Louis#0144: Actually... @kurumuz collab???
GABRIEL fatiede#7028: Is it cuz of personal reasons or political reasons?
Louis#0144: We haven't run evaluation yet
GABRIEL fatiede#7028: Who did that one?
Louis#0144: NovelAI
GABRIEL fatiede#7028: Okay
Louis#0144: https://huggingface.co/NovelAI/genji-python-6B
GABRIEL fatiede#7028: Is there any way i can help to boost the process
Louis#0144: Idk who you should talk to
Louis#0144: @StellaAthena ?
cfoster0#4356: What does LaMDA-like mean?
Louis#0144: pretrain an LM solely on chat logs
Louis#0144: n what not
Louis#0144: or things formatted to look like chat logs
Louis#0144: Im gonna be doing ToM research at Brown
Louis#0144: and I was looking to get my hands on a thicc chat bot LM
johnryan465#9922: Do any code generating language models use asts or other graph structures?
johnryan465#9922: From what I can see they are just standard language model structure used for code
EricHallahan#1051: You would probably have an interest in IntelliCode Compose.
https://arxiv.org/abs/2005.08025
EricHallahan#1051: Microsoft has a ton of research in this area.
https://arxiv.org/abs/1912.00742
EricHallahan#1051: > Pythia exploits state-of-the-art large-scale deep learning models trained on code contexts extracted from abstract syntax trees.
circuit10#0158: That sounds very useful
StellaAthena#3530: Does anyone know what happens if you don’t use a feedforward layer at all and just stack attention layers in a transformer?
bob80333#4040: Is this what you're looking for?
https://arxiv.org/abs/2103.03404
Louis#0144: ooo
Louis#0144: yes i remember that paper
Louis#0144: it was rly good
Louis#0144: 🙂
StellaAthena#3530: Yes that’s excellent!
ari#9020: That's with neither feedforwards nor skip connections, if you only drop the feedforwards you get https://arxiv.org/abs/1907.01470 which works just fine
StellaAthena#3530: That doesn’t seem like a very accurate representation of the paper given that they explicitly augment self-attention with something that is intended to serve a similar role to the removed layers:
> More precisely, we augment the self-attention layers with persistent memory vectors that play a similar role as the feed-forward layer
CRG#8707: Anthropic has trained attention only models, these are those training curves overlapping with equivalent models + FFNs: (Attention only doing worse) https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html#model-analysis-table https://cdn.discordapp.com/attachments/729741769738158194/958768473931153489/535f920d1e97947754107989f77cce03.png
CRG#8707: Repeated application of the softmax makes every token look like every other token eventually. You can't get "external information" in the same way you can with the FFNs.
IDK#1046: That's basically a common interface for dl compilers. It's a better UX, not some kind of AI breakthrough.
StellaAthena#3530: Could we bypass that by simply adding a ReLU at the end without the feedforward?
CRG#8707: I think nonlinearities in the attention would make it work. (If you mean something like a ReLU after the value projection)
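A sketch of what that might look like (single-head and purely illustrative; not taken from any of the cited papers):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReLUValueAttention(nn.Module):
    """Self-attention with a nonlinearity applied after the value projection."""
    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, D)
        q, k = self.q_proj(x), self.k_proj(x)
        v = F.relu(self.v_proj(x))  # the nonlinearity in question
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        return self.out_proj(attn @ v)
```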
Korani#4593: have anyone tried polycoder??:) https://github.com/VHellendoorn/Code-LMs
StephennFernandes#2961: I have 2 Nvidia A6000s, 96 GB of VRAM in total, and i also have Google TPU TRC TPUv3-5 accessible. Can someone please tell me whether using the A6000s or the TPUv3 would be the ideal option? The task involves pretraining a couple of language models
Deleted User#0000: that depends
Deleted User#0000: wtf kind of slice is TPUv3-5 this should be illegal
StephennFernandes#2961: By 5 i meant 5 on-demand TPU-v3's
Deleted User#0000: 5 chips or 5 machines with 8 chips each?
bmk#1476: 5x v3-8
bmk#1476: that's the default that TFRC gives out
Deleted User#0000: and not even a contiguous slice
bmk#1476: well you can't use pods by default
bmk#1476: I think
Deleted User#0000: why live
bmk#1476: ikr if it's not running on 512 GPUs is it even a real ML experiment
Deleted User#0000: I mean, anything you just want to be a pod slice, not datacenter network, unless it's yuge
Deleted User#0000: depends what you are running
Deleted User#0000: probably tiny rl models?
Deleted User#0000: eh let's say 2x and call it a day
Deleted User#0000: or do you not have any kind of quota to run anything whatsoever
Maxime#0993: Does it work on my new intel gpu ?
Maxime#0993: arc 5
Deleted User#0000: sorry for u
Dashiell#8739: something that occurred to me yesterday: since it's so damn hard to make modern semiconductors and GPUs, there are really only a handful of companies involved, it actually would be feasible to do nuclear-disarmament style international treaties regulating the research and production of large ML models. If one were genuinely afraid of the advent of AGI as a potentially world ending event, then getting the cooperation to try and do it safely really only would be as hard as nuclear disarmament. Which is to say, requiring decades of tireless work
Dashiell#8739: which I suppose is only novel if, like me, you thought that that amount of cooperation was totally impossible
Dashiell#8739: but I actually think it would be possible? At the cost of just shooting the crypto and gaming industries in the head
Dashiell#8739: and then regulating the production of semiconductors and chips like yellowcake uranium
Some Point Process#3793: There's already a push to do something like this, but I'm not completely sure: https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf
Some Point Process#3793: https://en.wikipedia.org/wiki/National_Security_Commission_on_Artificial_Intelligence
Some Point Process#3793: See also: https://arxiv.org/abs/1802.07228 (Amodei, Jack Clark, Yampolskiy et al.)
Some Point Process#3793: This is from 2018 and already calls for *internationally coordinated* regulation (the intended audience are governments who are concerned about AI)
newton#2015: not gonna be a problem as long as good guy with ai has more compute power
newton#2015: only way to stop a bad guy with a gun is a good guy with a bigger gun
newton#2015: basic science of lithography is already known, given enough budget any national lab/university can start making chips, and when regulated too hard new players will emerge to fill the gaps/niches in the market...
Dashiell#8739: I think you are wildly underestimating the amount of technical knowledge in TSMC/ASML/Samsung/etc... Right now. I think it would take at least a decade for any given nation to recreate even a fraction of their capacity
newton#2015: nah... it just won't be economical now
newton#2015: universities regularly produce chips, and usually on smaller nodes than currently commercially available,
though low in volume, the yield ratio is high
𓅬 gabriel_syme 𓅬#3220: I mean a decade is nothing for a nation, nations think in decades
𓅬 gabriel_syme 𓅬#3220: if they wanted, they could. I just feel it's not optimal right now to do so, or maybe not certain it would succeed as well (as with everything)
𓅬 gabriel_syme 𓅬#3220: and I'd be shocked if a government made a non-private company of that sort, just doesn't fit the current economic paradigm does it (although those big companies might as well be public I guess, or they might be?)
newton#2015: what ?
newton#2015: can you clarify ?
𓅬 gabriel_syme 𓅬#3220: I would not expect a Western government to initiate an effort to create a company of that kind right now
𓅬 gabriel_syme 𓅬#3220: even though, I would guess all of those companies are either directly government owned, or owe their prowess to significant government support
𓅬 gabriel_syme 𓅬#3220: (I might be wrong and the US does it though, who knows)
newton#2015: there are no "private companies", all companies use political means to get subsidies that in turn keep them in business
𓅬 gabriel_syme 𓅬#3220: I agree but that wasn't what I was talking about, it was more like control
Dashiell#8739: I'm talking about the ~~possibility~~ feasibility of enforcing a multilateral treaty. The US could indeed spend hundreds of billions to trillions of dollars re-creating domestic versions of TSMC and ASML, but it'd take so long and be such a massive investment of time and money and people and land that other signatories would notice. Just like South Korea is certainly capable of developing nuclear weapons, but wouldn't be able to do so without China / Japan / NK / US noticing
newton#2015: nukes are stupid as weapons
𓅬 gabriel_syme 𓅬#3220: Yeah that is true, absolutely. But what is the problem with people noticing exactly in the case of chips? Nuclear I get
Dashiell#8739: if it weren't for nuclear weapons NATO would be at war with Russia right now, so they must have some use
𓅬 gabriel_syme 𓅬#3220: The deterrent angle. I personally think it was absolutely dumb luck we didn't annihilate ourselves in the cold war 😄
𓅬 gabriel_syme 𓅬#3220: Literally one sane person in the right room
Dashiell#8739: oh 100%
𓅬 gabriel_syme 𓅬#3220: but at least nowadays we know what that is ye, it might be closer to a deterrent
newton#2015: nukes are useless against military targets, hard to hit anything smaller than a city block, lacking precision, heavy to transport, and the actual military is the best protected against a nuke, so all a nuke will ever manage to do is piss off the whole world and murder a bunch of civilians
newton#2015: also morale bombing doesn't work
ColdCall#4288: What? Tactical nuclear doctrine has been in war games in most major militaries for decades
ColdCall#4288: You don't need to be precise to knock out industry, push out entrenched defenses
Dashiell#8739: I really feel like this conversation has gone in a very stupid direction
Dashiell#8739: bye 👋
ColdCall#4288: Every war since WW1 has had strat bombing.
newton#2015: what did the bombing campaign of London do?
ColdCall#4288: Knocks out industrial targets and cripples opponents ability to manage logistics.
newton#2015: with 1930s precision they would have had better luck praying to Þór for a lightning strike.
ColdCall#4288: This is probably better off in #off-topic or DMs
kurumuz#5695: does anyone know if OpenAI stretched/cropped the images to fit into a 1:1 aspect ratio for CLIP?
kurumuz#5695: I think they center crop
EricHallahan#1051: crop
EricHallahan#1051: No other augmentations.
kurumuz#5695: @EricHallahan i kinda dont like the center crop
random person#5234: they didnt do the standard imagenet normalize?
nshepperd#2316: i would assume it's the same as the preprocess in their repo
nshepperd#2316: ```
Compose(
Resize(size=224, interpolation=bicubic, max_size=None, antialias=None)
CenterCrop(size=(224, 224))
<function _convert_image_to_rgb at 0x7fe14b8ac1f0>
ToTensor()
Normalize(mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711))
)
```
kurumuz#5695: why do we need to normalize the images
kurumuz#5695: i am a noob at image stuff
EricHallahan#1051: To fit the distribution at train time.
kurumuz#5695: seems like a hack
random person#5234: it is
random person#5234: it just works well so everyone does it
nshepperd#2316: they normalized the training set so that the marginal distribution of pixels would be N(0,1)
nshepperd#2316: well, so that it would have mean and variance 0 1
kurumuz#5695: i see
EricHallahan#1051: yeah that's a more accurate way of putting it.
random person#5234: well, that normalize mean and std is done from imagenet
random person#5234: so its probably not the exact clip training set data
nshepperd#2316: idk why they felt it was necessary, i would have assumed the nn could cope either way
nshepperd#2316: yea imagenet
random person#5234: i think this is one of those things it just works well
EricHallahan#1051: Wait they copied the values from imagenet?
EricHallahan#1051: :thonk:
random person#5234: yes that particular value is a standard set of normalize
kurumuz#5695: ye they use that for all models
kurumuz#5695: that is the weird part lol
EricHallahan#1051: That is sus.
random person#5234: hmm actually its a bit off
random person#5234: "All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. You can use the following transform to normalize:"
random person#5234: not the same as that but close
random person#5234: maybe they did use actual clip data. imagenet should be pretty close as a distribution to the real-world images seen by clip anyways
nshepperd#2316: that's weird
random person#5234: clip data is not public as far as I know
random person#5234: so its not like you can look
EricHallahan#1051: It isn't.
random person#5234: i think most models on model zoos and torchvision use that standard imagenet pre processing. it just kinda works
tpapp157#3643: why are people getting weirded out by normalizing input data?
tpapp157#3643: That's fairly standard practice for many different kinds of models.
random person#5234: i think its just curious where the number came from
tpapp157#3643: Presumably they're calculated based on some large dataset.
𓅬 gabriel_syme 𓅬#3220: you can always calculate that for your data right?
random person#5234: if you are doing training from a fresh model yes
random person#5234: for finetuning just use what they trained it with
tpapp157#3643: Or just use something like batch norm during training and it'll effectively calculate that for you.
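A minimal sketch of computing those per-channel stats for your own data (`loader` here is a placeholder yielding `(images, labels)` batches of `(B, 3, H, W)` floats in `[0, 1]`):
```python
import torch

def channel_stats(loader):
    """Per-channel mean/std to plug into torchvision.transforms.Normalize."""
    n = 0
    s = torch.zeros(3)
    s2 = torch.zeros(3)
    for images, *_ in loader:
        x = images.permute(1, 0, 2, 3).reshape(3, -1)  # (C, B*H*W)
        n += x.shape[1]
        s += x.sum(1)
        s2 += x.pow(2).sum(1)
    mean = s / n
    std = (s2 / n - mean.pow(2)).sqrt()
    return mean, std
```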
PETE#9327: hello, I have an issue installing the dependency, can someone help me with this?
PETE#9327: https://github.com/EleutherAI/gpt-neox/issues/602
PETE#9327: this is for the gpt-neox
𓅬 gabriel_syme 𓅬#3220: ~~probably something for the #gpt-neox-devs channel~~, although posting the issue already asked the question 🙂
EricHallahan#1051: No, it would not be appropriate for there.
PETE#9327: I thought so too
𓅬 gabriel_syme 𓅬#3220: oh yeah I forgot that channel points to general. my bad
EricHallahan#1051: You shouldn't need mpi4py though, unless you are planning on using the DeepSpeed MPI backend?
PETE#9327: no I do not, so should I just ignore mpi4py and remove it from the requirements.txt file?
EricHallahan#1051: Yep.
PETE#9327: okay thank you
EricHallahan#1051: This is something I tried to address nearly six months ago, but my PR stalled.
PETE#9327: we have RTX 3090s; how many of this GPU do you think are needed for the full weights of the model?
random person#5234: like 2?
EricHallahan#1051: You'll need at least another.
alstroemeria313#1694: hey so for one epoch training (big enough data) do you just like, set a cosine lr decay schedule or whatever and go to 0 at the end of the epoch?
kurumuz#5695: for LMs? generally we do 10% of the original LR
kurumuz#5695: at the end of the epoch
alstroemeria313#1694: Ah
kurumuz#5695: instead of 0
alstroemeria313#1694: This is a diffusion model w/ a builtin LM
alstroemeria313#1694: (a GLIDE)
kurumuz#5695: i see
alstroemeria313#1694: so we might want to decay to less than 10% bc we do that during normal multiple epoch training
𓅬 gabriel_syme 𓅬#3220: Wait, that works?
kurumuz#5695: why wouldnt it
𓅬 gabriel_syme 𓅬#3220: Idk, I've never done that it just decayed till the end
𓅬 gabriel_syme 𓅬#3220: didn't mean it doesn't work, just expressing my ignorance on the fact it does 🙂
𓅬 gabriel_syme 𓅬#3220: I'll try it on one of my models! thanks
𓅬 gabriel_syme 𓅬#3220: oh wait, do you mean 10% warm up?
kurumuz#5695: no decay
𓅬 gabriel_syme 𓅬#3220: nvm I'm bad, I literally thought you said you do that at the last 10%
𓅬 gabriel_syme 𓅬#3220: lol
kurumuz#5695: oh
kurumuz#5695: i see
𓅬 gabriel_syme 𓅬#3220: all good, I do the same 😄
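A sketch of that schedule in PyTorch (cosine decaying to a 10% floor over one epoch; `optimizer` and `total_steps` are assumed to exist in your training loop):
```python
import math
import torch

def cosine_with_floor(optimizer, total_steps: int, floor: float = 0.1):
    """Multiply the base LR by a cosine that ends at `floor` instead of 0."""
    def lr_lambda(step: int) -> float:
        progress = min(step / max(total_steps, 1), 1.0)
        cosine = 0.5 * (1.0 + math.cos(math.pi * progress))  # goes 1 -> 0
        return floor + (1.0 - floor) * cosine                # goes 1 -> floor
    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```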
gwynbleidd#0606: Hello all. I'm Vishal, from India. I'm a masters student at IIITDM Kancheepuram and also a research intern at the Design with AI lab, University of Sydney. Currently I'm working on co-creative AI for sketching. Glad to meet you all.
Kia#2550: Goodmorning! and enjoy your stay
Orz#3023: Hey there!
I'm from India too!
glad to meet ya :)
magsafe12#6788: Hey guys, in GPT-Neo model parallelism is achieved using mesh-tensorflow. How can it be achieved with PyTorch?
Spacecraft1013#5969: gpt-neox achieves it using the megatron codebase, which uses pytorch
Spacecraft1013#5969: the model parallel layer code is here: https://github.com/EleutherAI/gpt-neox/blob/main/megatron/mpu/layers.py
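The core idea in those Megatron-style layers, as a toy sketch (real code shards across ranks with torch.distributed and all-gathers the outputs; this single-process version just shows the column split):
```python
import torch
import torch.nn as nn

class ToyColumnParallelLinear(nn.Module):
    """One shard of a linear layer whose output columns are split across devices."""
    def __init__(self, in_features: int, out_features: int, world_size: int):
        super().__init__()
        assert out_features % world_size == 0
        shard_out = out_features // world_size
        self.weight = nn.Parameter(torch.randn(shard_out, in_features) * in_features ** -0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each rank computes its own slice of the output; an all-gather (or a
        # following row-parallel layer) recombines the slices in real Megatron code.
        return x @ self.weight.t()
```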
kurumuz#5695: @Aran Komatsuzaki did you guys include danbooru this time to the dataset :berk:
kurumuz#5695: asking for someone
Aran Komatsuzaki#5714: @spirit-from-germany did you do it? lol :berk:
kurumuz#5695: I mean I guess danbooru is just image:tag pairs, not sure if that is good enough
kurumuz#5695: oh man this dataset is a lot more diverse
kurumuz#5695: so many memes
Aran Komatsuzaki#5714: ah yes true
kurumuz#5695: yeah i *really* like this dataset
spirit-from-germany#1488: no, I had a webdataset version of it, but it got deleted, accidentally
kurumuz#5695: o noooo
kurumuz#5695: can I contribute by adding danbooru
kurumuz#5695: lol
kurumuz#5695: @rom1504 seems like i cant search over text on the knn site
rom1504#5008: yeah that's expected we did only the image index for laion5B, I should disable that in the ui
rom1504#5008: do you have an use case for this?
rom1504#5008: we thought it wasn't *that* useful since image embeddings are better
kurumuz#5695: oh I just wanted to search over labels
kurumuz#5695: instead of clip text embeddings
rom1504#5008: that ui was always only embeddings
kurumuz#5695: yeah i forgot that
Technobird22#2055: *what's happening for april fools?*
apolinario#3539: LAION 5B is actually 5 billion geese images
Louis#0144: Thank god
Louis#0144: @BoneAmputee can you replace every image in .geese with a duck
ilovescience#3282: You'd have to collect that dataset... He shared the code for that
Louis#0144: Oh ok
ilovescience#3282: https://discord.com/channels/729741769192767510/730095596861521970/950644088175226921
ilovescience#3282: I guess you could change the taxon to duck or something
Louis#0144: How do I run that
Louis#0144: I've never seen that language
Louis#0144: Is it JavaScript
ilovescience#3282: It's Python lol
Louis#0144: python?
Louis#0144: you mean like the snake
Louis#0144: thats fuckin weird
Louis#0144: @BoneAmputee why do u speak snake
ilovescience#3282: :berk:
Louis#0144: anyway what happened is ios formatted the code weirdly
ilovescience#3282: Stop using iOS... Be a :chadgoose: and support open source by using Android
Louis#0144: no
ilovescience#3282: yes
AI_WAIFU#2844: I'm bored, so here's a spicy take:
**Contrastive methods will go the way of the GAN**
Louis#0144: no
Louis#0144: wtf
Louis#0144: fucking bite me
Louis#0144: I'm bored, so here's a spicy take:
**Autoregressive methods will go the way of the GAN**
Louis#0144: this is more likely than contrastive methods disappearing
Louis#0144: unironically
Louis#0144: actually yeah wait wtf
Louis#0144: I 100% buy this
cfoster0#4356: There's a difference between spicy and wrong :berk:
Louis#0144: LMAO
Louis#0144: I would argue autoregressive methods will disappear
Louis#0144: once diffusion gets better
Louis#0144: so not anytime soon
Louis#0144: :^)
Louis#0144: theres no way contrastive learning is going anywhere though
cfoster0#4356: Meh, diffusion models are autoregressive, just a more generalized form
Louis#0144: embedding models are inherently contrastive
Louis#0144: contrastive learning is ancient
AI_WAIFU#2844: in @Louis 's defense, I could at least partially see it. Autoregressive methods are a PITA to sample from, and there are many other likelihood-based methods that let you sample from an appropriate joint distribution.
Louis#0144: yeah
Louis#0144: thats my point
AI_WAIFU#2844: Once our models get good enough, we might start to trade marginal LLH for sampling speed.
Louis#0144: i literally do not see *anything* that could replace contrastive learning though
Louis#0144: contrastive learning fills a very unique niche
AI_WAIFU#2844: that's ok, diffusion models didn't really exist when GANs came out
Louis#0144: yeah but there already existed things that could fill the niche of GANs
Louis#0144: There were a lot of people who did image generation with CRFs
Louis#0144: no?
Louis#0144: maybe im misremembering
AI_WAIFU#2844: I think VAEs were the hotness
apolinario#3539: All different multimodal methods are Generators, our brains are Discriminators, and no matter what model is used we are just a planet-wide GAN training itself with the goal of generating pieces that don't look AI-generated :goose16:
Louis#0144: VAEs were way after GANs
Louis#0144: no?
Louis#0144: VAEs were like
Louis#0144: 2017
Louis#0144: 2016
AI_WAIFU#2844: lol
Louis#0144: GANs were 2015
AI_WAIFU#2844: your noob is showing
Louis#0144: lmao
Louis#0144: ive been doing ML since before alexnet
Louis#0144: 🙂
AI_WAIFU#2844: https://arxiv.org/abs/1312.6114
bmk#1476: not doing contrastive could replace contrastive
Louis#0144: WOW 2014
Louis#0144: I thought it was 2016 at the earliest
AI_WAIFU#2844: https://arxiv.org/abs/1406.2661
Louis#0144: I remembered reading VAE papers in 2016
tpapp157#3643: VAEs were being used well before GANs came onto the scene
Louis#0144: omg
AI_WAIFU#2844: this might not even be the first instance
Louis#0144: ok to be clear the first VAE paper I saw was CRFs + VAEs
AI_WAIFU#2844: And before that there were non-variational autoencoders
Louis#0144: AEs cant do image gen though
Louis#0144: atleast not well
AI_WAIFU#2844: and energy models
Louis#0144: wasnt there a line of papers doing image gen with deep belief networks
Louis#0144: i dont remember who did that
tpapp157#3643: A big reason why GANs became so popular so quickly was because they absolutely crushed VAEs in terms of generative quality
Louis#0144: it might have literally been hinton
AI_WAIFU#2844: the whole field of deep learning really kicked off with that
AI_WAIFU#2844: MNIST generation with Deep RBMs
Louis#0144: yeah
Louis#0144: see that i remember
Louis#0144: bc I tried implementing it in 2014 for a HS project
Louis#0144: LMFAO
Louis#0144: I never got it working
Louis#0144: fuck theano
Louis#0144: no no not RBMs
AI_WAIFU#2844: this is as best as I can tell where it started https://www.youtube.com/watch?v=VdIURAu1-aU
Louis#0144: I remember someone doing CIFAR10 generations
AI_WAIFU#2844: 2010
Louis#0144: using deep belief networks
AI_WAIFU#2844: no wait that's wrong
AI_WAIFU#2844: https://www.youtube.com/watch?v=AyzOUbkUf3M
AI_WAIFU#2844: this is it
AI_WAIFU#2844: 2007
AI_WAIFU#2844: everybody needs to watch this
Louis#0144: i was not doing ML in 2007
Louis#0144: :^)
alstroemeria313#1694: 2014
Louis#0144: what was I doing in 2007...
Louis#0144: yeah i was off by a year
Louis#0144: I was playing around with unity trying to make games LMFAO
EricHallahan#1051: I was in first grade lol
bmk#1476: I first learned about ML through reading a paper about "the pile"
AI_WAIFU#2844: this is history
EricHallahan#1051: I'm pretty sure it is this one?
https://xkcd.com/1838/
bmk#1476: :goose16:
Louis#0144: u know
cfoster0#4356: Once you've got good enough cognitive abstractions you don't need "contrastive learning" (among other things)
Louis#0144: ok but think of it this way
Louis#0144: contrastive learning models get us good embeddings at 400m parameters
Louis#0144: you use these models to build indexes
Louis#0144: they need to be fast
Louis#0144: like really fast
Louis#0144: no one is gonna build an index using a gpt3 size model on today's hardware
Louis#0144: or any hardware within the near future
tpapp157#3643: That's like saying "once you're done learning, you don't need learning"
Louis#0144: infact i would argue rather than the embedding models getting bigger the datasets will get bigger
Louis#0144: since we already get really good indexing at 400m params
Louis#0144: lol
Louis#0144: openai saw very little benefit from increasing the size of their embedding models for instance
cfoster0#4356: I don't think so. You just move onto other learning paradigms once you've got better machinery
cfoster0#4356: In the same way as you graduate from trial and error to better strategies
Louis#0144: until we see something that is as fast as contrastive learning and scales better than contrastive learning I think its here to stay
Louis#0144: i would even argue that if DNNs do not carry us to AGI, contrastive learning will still be there in the next paradigm
Louis#0144: its such a simple and general method
tpapp157#3643: Not to mention it's entirely unsupervised and makes no assumptions about the underlying data.
ilovescience#3282: IIRC diffusion models were invented in 2015? I may be misremembering though
alstroemeria313#1694: 2020
alstroemeria313#1694: right?
AI_WAIFU#2844: the core idea was probably floating around for much longer
alstroemeria313#1694: there were score matching things before
alstroemeria313#1694: https://arxiv.org/abs/1907.05600
AI_WAIFU#2844: What happened in ~2020 was that someone figured out how to aggressively cut down the variance
tpapp157#3643: That's true of pretty much everything though :schmid:
AI_WAIFU#2844: Yep
ilovescience#3282: I think this is the original diffusion model paper: http://proceedings.mlr.press/v37/sohl-dickstein15.html
alstroemeria313#1694: Ahh
ilovescience#3282: So actually diffusion models and GANs came around the same time
alstroemeria313#1694: There are CIFAR-10 samples in that paper and they don't look like things at all
cfoster0#4356: Idk if I'd count this, but it's definitely in the lineage
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/959237963810762782/Screen_Shot_2022-03-31_at_4.49.10_PM.png
ilovescience#3282: I think everybody cites this though
AI_WAIFU#2844: this is *technically* a diffusion method
tpapp157#3643: It just comes down to what you define as first. The first to come up with the math theory? The first to implement a shitty experiment? The first to actually get something working? The first to break through and actually popularize a technique with useful results?
cfoster0#4356: Fair. Personally I go by some combination of (1) how close the formulation is to the modern one and (2) whether it popularized it beyond the original authors
cfoster0#4356: Which is why I don't care too much about :schmidhuber: wrt GANs
ilovescience#3282: Lol I was just going to say by this standard Schmidhuber did _not_ invent everything already
tpapp157#3643: People tend to use the latter definition although the bias there is that most people will have only heard of the technique for the first time when it becomes popularized.
Maxime#0993: Just got the new intel gpu
Maxime#0993: How good it is in deeplearnging ?
Maxime#0993: deeplearning
Maxime#0993: arc 7 a770
Kia#2550: No one has a clue yet, but you can most likely try it
tpapp157#3643: Considering they aren't out yet, that's surprising . . .
Kia#2550: benchmark it
Kia#2550: They probably meant like the laptop with the new arc gpu
EricHallahan#1051: More than surprising.
Maxime#0993: Its possible to get engineering samples
Maxime#0993: its not the exact one that will come soon
Maxime#0993: its written not for retail
Kia#2550: Ow you meant the actual arc gpu card
Maxime#0993: Do you you have any benchmark that would be good ?
EricHallahan#1051: So you have a sample?
EricHallahan#1051: I assume you are under NDA.
Kia#2550: Eric you know some good benchmarks
Maxime#0993: I'm not under NDA
Maxime#0993: The sample wasn't meant for me... but I got it its a long story
tpapp157#3643: but realistically, Intel would first need to develop an equivalent framework to cuda/cudnn, and then the major NN software frameworks like pt/tf would need to add support for it, and then maybe we could talk about how good or bad it might be.
EricHallahan#1051: They have oneAPI.
EricHallahan#1051: I wouldn't be surprised if oneDNN runs already.
EricHallahan#1051: oneDNN is already integrated into PyTorch/TensorFlow for CPU training anyway, but I have no idea what the compatiblity would be like.
Kia#2550: @Maxime I suggest probably looking at intel websites if they have resources or ask for support on the card then ask for how does the card perform on DL
Maxime#0993: I don't think I can get official support from intel, this card has been sent to a relative who recently died...
Kia#2550: Sorry about that... But im honestly clueless, Eric can probably help you on this side
EricHallahan#1051: I'm pretty sure their strategy is to onboard developers through their existing oneAPI libraries like MKL, which is pretty much everywhere in scientific computing.
Maxime#0993: Do you think I have some legal risk using this gpu, or showing stuff to people ?
Kia#2550: Nah
Kia#2550: You aren't under any NDA
Kia#2550: soooo¯\_(ツ)_/¯
Maxime#0993: But I shouldn't have it sooo ...
Kia#2550: The card have been talked about and intel showed the card model already
tpapp157#3643: Interesting. I hadn't looked into this. I wonder how developed it actually is.
Maxime#0993: Ok I'll try some benchmarks i'll let you know
Kia#2550: Goodluck!
EricHallahan#1051: You should legally be in the clear as far as I know.
(Obligatory "I am not a lawyer")
Maxime#0993: I guess its fine but I'm thinking its a big company maybe they can do something...
Maxime#0993: If they take strict measures against leaks, it's like 100% sure this case is planned
StellaAthena#3530: If it was meant for someone else and ended up in your hands it’s probably stolen property.
Maxime#0993: Its actually not stolen
StellaAthena#3530: If I send mail to Eric and it gets misdelivered to you, and you take it and keep it, that’s stolen property
AI_WAIFU#2844: I think he inherited it
Maxime#0993: Nope, I inherited it
AI_WAIFU#2844: whether you inherit the related legal obligations is a better question
bmk#1476: that's an interesting legal edge case
Maxime#0993: We can inherit NDAs ??
dmayhem93#3202: I think it depends on if the person he inherited it from was the owner of the gpu, I can't imagine that being the case
StellaAthena#3530: The person who it was given to may have not had the right to give it to you, in which case it would still be legally problematic
EricHallahan#1051: This is why I am relatively bullish on Intel; They have invested a huge amount of resources into developing the oneAPI ecosystem, and they seem to have the right cards in hand to make an impact in the market.
StellaAthena#3530: If you *know* they did, that’s different
AI_WAIFU#2844: sure, but it might not be *his* problem
AI_WAIFU#2844: it might be the problem of the dead guy
Maxime#0993: it wasn't written that he had to return it so I guess it was his, and now its mine...
EricHallahan#1051: I believe all Intel engineering samples are company property.
tpapp157#3643: Legally, the engineering sample was probably provided on loan from the company for the duration of some specified work contract and would be expected to be returned afterward. It's technically probably still Intel property.
dmayhem93#3202: I'm not sure how probate works but maybe that gets out of it? IANAL
Maxime#0993: Hmm....
StellaAthena#3530: If it was incorrectly given to him and he takes it, it’s still stolen property
StellaAthena#3530: He does not need to commit theft to be in possession of stolen property
AI_WAIFU#2844: I think this depends on the specifics of the legal system
dmayhem93#3202: it could just be abandoned property
tpapp157#3643: They certainly have the resources to force their way into the market if they want to. I'm not holding my breath on this first generation of cards though. Intel seems to purposefully be tempering expectations with their (lack of) marketing.
AI_WAIFU#2844: realistically I think what @tpapp157 said is the case
AI_WAIFU#2844: but that's not guranteed
Maxime#0993: Yeah its worrying, its written intel confidential
StellaAthena#3530: @AI_WAIFU @dmayhem93 It could be, yes. But the question I’m trying to answer isn’t “what’s the correct legal description of the situation.” He asked
> Do you think I have some legal risk using this gpu, or showing stuff to people?
The answer is unambiguously “yes, there is legal risk involved”
AI_WAIFU#2844: which is interesting because he probably doesn't have access to any related contract information
Maxime#0993: If I contact them they will ask me to return it 100%
EricHallahan#1051: Agreed. I'm not expecting to be blown out of the water anytime soon, especially given their recent track record.
AI_WAIFU#2844: Now the other question, if you do this will they sue you because you're not supposed to have it in the first place?
Maxime#0993: So its possible its not actually mine ?
dmayhem93#3202: You should speak to a lawyer who can correctly guide you through your local laws on this matter
AI_WAIFU#2844: more than possible honestly
Maxime#0993: Ok ...
AI_WAIFU#2844: ~~but go benchmark it anyways, we need to know~~
Maxime#0993: Well, since I need it as my main GPU and i'm not sure its legal to own it... I'm gonna use it and not show benchmarks
AI_WAIFU#2844: ~~one little matmul never hurt anybody~~
bmk#1476: go ask a real lawyer and not the eleuther peanut gallery
StellaAthena#3530: It is absolutely possible, and you should talk to a lawyer.
Maxime#0993: I'll let you know...
asparagui#6391: usually the fun part of engineering samples is drivers
tpapp157#3643: Yes they almost certainly could. On the other hand, going through that process would cost them a lot of money. Companies don't usually go through the effort and expense of suing hardware/performance spec leakers when they track them down, they simply fire/blacklist them and move on.
Maxime#0993: Except it has three cooling fans... and it might not even be officially announced
tpapp157#3643: There have been rumors and "leaks" about Intel's GPUs floating around for at least two years now.
Maxime#0993: On the web I don't see what I have...
Maxime#0993: not even same colors
EricHallahan#1051: I mean I could have easily guessed that the standalone part would be called A770 from ARK lol
Maxime#0993: I think its for desktop
tpapp157#3643: If you really do have an engineering sample then I would just pack it away somewhere safe. In a decade or two it might make for a cool collector item that some people would pay decent money for.
Maxime#0993: Omg i could sell it
bmk#1476: I'll give you $20 and a plush goose for it
EricHallahan#1051: https://www.ebay.com/itm/Intel-larrabee-knights-Ferry-working-prototype-/224805929892
Maxime#0993: omg
Maxime#0993: mine its better than this
bmk#1476: upping my offer to *two* plush geese
Maxime#0993: I think Its worth 50 000
Technobird22#2055: lol
EricHallahan#1051: Maybe in a decade, but I wouldn't want to even consider that anytime soon considering it is company property.
Maxime#0993: The buyer will return it to intel 🙂
Technobird22#2055: I'm pretty sure you aren't allowed to sell stolen property
Maxime#0993: Maybe I can sell it to intel
tpapp157#3643: Yeah it would still be illegal no matter how long you waited, but the longer you wait the less Intel will care or notice.
Technobird22#2055: When is the formal Arc release date again?
Maxime#0993: But maybe its mine.... So it might be possible to sell it to them for a high price, low enough that paying lawyers is more expensive than buying it from me
Technobird22#2055: Didn't you say earlier that you needed it as your main GPU?
tpapp157#3643: They released one mobile GPU earlier this week with a few more in the near future. There is no date for the desktop GPUs but speculation is this summer.
Maxime#0993: Yeah but like for 2000€ I can get one with drivers ... like nvidia 3090ti
Maxime#0993: And probably I can get more
Technobird22#2055: Just... don't
Maxime#0993: I don't even know how to install this...
Maxime#0993: So I guess it will not even be possible to use it
Technobird22#2055: there are heaps of tutorials on installing GPUs online
Maxime#0993: But I don't have drivers
Technobird22#2055: not much you could do then ¯\_(ツ)_/¯
Maxime#0993: So i'll let you know, if I can sell it to intel
Technobird22#2055: It might be Intel's property in the first place; why would you want to sell it back to them
Maxime#0993: If its their property i guess it would cost a lot to get it back quickly
Technobird22#2055: They sent them out to reviewers on their own initiative, and now you want to blackmail them over getting it back?
Maxime#0993: so maybe they will buy because its cheaper
Maxime#0993: I prefer AMD in cpu anyway
Technobird22#2055: They don't need a single card... they probably have thousands of units... so if you want them to buy it back... honestly that's a bit stupid
Maxime#0993: Maybe because I might leak everything and doing anything against me might cost more than buying it
Technobird22#2055: Intel is a massive company. If you blackmail them, you're going to get into serious legal trouble
Maxime#0993: Its not blackmail If I propose to sell them this card
Technobird22#2055: that would be
Maxime#0993: But its logical to pay if its cheaper
Technobird22#2055: but it's not yours to begin with
Maxime#0993: We're not sure about that
Maxime#0993: Its not written that I have to return it and I don't have any NDA
EricHallahan#1051: No, it is undoubtedly company property.
Kia#2550: ~~But benchmark it~~
Kia#2550: Also why sell the gpu, better keep it, it's really precious considering it was given from a relative
magsafe12#6788: 👍
𓅬 gabriel_syme 𓅬#3220: We should really try BLIP with danboruu at some point
kurumuz#5695: idk what blip is
kurumuz#5695: but we made stuff that can describe danbooru images before
𓅬 gabriel_syme 𓅬#3220: it's a sort of generative training approach, they train a captioner for synthetic captions and a filter that removes noisy ones, and use those to augment their data with new captions (well use noisy data to their advantage). I think we could potentially do that and translate tags into captions
Kia#2550: https://github.com/salesforce/BLIP
Louis#0144: I don't believe Maxime
Kia#2550: Ask them a pic
Louis#0144: @Maxime
zphang#7252: hmm is `[seq, batch, ...]` more efficient than `[batch, seq, ...]` for generation? with incremental kv caching
kurumuz#5695: for nvidia gpus?
kurumuz#5695: i dont think so :thonk:
kurumuz#5695: but ask @chilli
MicPie#9427: Megatron also has a sequence-first data format.
I guess with this setup they can avoid some transpose operations?
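A minimal sketch of where the per-step concat lands in each layout (hypothetical shapes; not from Megatron itself):
```
import torch

# seq-first cache: [seq, batch, heads, head_dim]; appending a decode step
# concatenates along dim 0, so seq-major attention kernels need no transpose
cache = torch.randn(128, 4, 16, 64)
step = torch.randn(1, 4, 16, 64)
cache = torch.cat([cache, step], dim=0)

# batch-first cache: [batch, seq, heads, head_dim]; the same append
# concatenates along dim 1 instead
cache_bf = torch.randn(4, 128, 16, 64)
step_bf = torch.randn(4, 1, 16, 64)
cache_bf = torch.cat([cache_bf, step_bf], dim=1)
```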
apolinario#3539: HAHAHAHAH clever one, I love it!
nitehawk#9164: nice try
Daj#7482: Also check out our new beginners guide to ML in #announcements
Luigi#9626: Did they just change the server's picture so it gives the illusion of having a notification? That's so evil I love it
Luigi#9626: https://cdn.discordapp.com/attachments/729741769738158194/959409761403678730/unknown.png
Sid#2121: Just when I think someone wants to talk to me :sadge:
&.#0001: i need help downloading some endofunctors from kaggle
Daj#7482: Try installing Gentoo
&.#0001: Is that available as an Arch package?
|
Daj#7482: Yes
kurumuz#5695: tick
tammy#1111: motherfuckers
kurumuz#5695: get rekt
tammy#1111: https://cdn.discordapp.com/attachments/729741769738158194/959411226629271592/1648811913.png
tammy#1111: real one looks different though
tammy#1111: step up your game fams
tammy#1111: https://cdn.discordapp.com/attachments/729741769738158194/959411460864376862/1648811970.png
tammy#1111: here, have a nicely cropped high res version
tammy#1111: use that
Daj#7482: needs more zoom out
Daj#7482: Otherwise crops too hard
tammy#1111: ?
tammy#1111: i'm p sure the crop doesn't care about the size of the image
Daj#7482: You need more empty space around the ping for perfect fit
Daj#7482: Otherwise it cuts off the ping
tammy#1111: wellllll
tammy#1111: yeaaaaah
tammy#1111: hold on
Daj#7482: We had a perfect version last year but lost it lol
Johnowhitaker#1906: Got me 🤣
tammy#1111: how's this
tammy#1111: the one should be fully visible even in circle mode
tammy#1111: https://cdn.discordapp.com/attachments/729741769738158194/959413479360581672/eleutherai-notif.png
Daj#7482: glorious
Daj#7482: Thank you
tammy#1111: (i shall consider myself having contributed to AI alignment today)
Tinytitan#5596: I can see the difference if i switch to light mode https://cdn.discordapp.com/attachments/729741769738158194/959413933754699786/unknown.png
Tinytitan#5596: light mode hurts my eyes:goose16:
tammy#1111: https://cdn.discordapp.com/attachments/729741769738158194/959414504083570698/eleutherai-notif.png
tammy#1111: added transparency for lightmoders
Luigi#9626: You seem to like pain
Tinytitan#5596: I dont actualy use light mode
tammy#1111: https://cdn.discordapp.com/attachments/729741769738158194/959414687114596403/dark-mode.png
Tinytitan#5596: but the slight shade difference between the circle and the background annoys me
tammy#1111: fuck i keep falling for it
Daj#7482: you made it lmao
ari#9020: Do not call up that which you cannot put down
tammy#1111: i know
tammy#1111: i'm glad it works even on me
Octopirate#9999: simply use dark reader
tammy#1111: oh yeah, this is about the few sites where it *doesn't* work
Softology#8812: You cheeky bastards! Tease me into going through all the channels looking for a @tag to me 🙂 Hopefully gone on April 2nd.
ilovescience#3282: Don't actually or I'll think you are a psychopath
bmk#1476: it's for the new announcement in #announcements
Kriptanik#9921: Omg I was like wtf is going on with this (1)
^-^#3526: You guys are evil
^-^#3526: I SPENT 15 SOLID MINS LOOKING THROUGH EACH
Softology#8812: No, it's not
Xg#4738: noooooooooooooooooooooo not Eleuther!!!
Kriptanik#9921: I mute every discord after I join them so I was like wtfs going out
Kriptanik#9921: on
Xg#4738: not the fake ping
bmk#1476: we simply made it persistent to ensure that you don't accidentally mark it as read without reading it
Xg#4738: one admin put a fake ping on my server, I got tricked a few times ;-;
I hate the fake ping
magsafe12#6788: This confused me 😛
OccultSage#3875: Yes ... yes, we are. And we're the last and best hope for AI alignment. You scared yet?
bmk#1476: if you're not scared yet, just scroll through the things I post on twitter on all days other than April 1
bmk#1476: and then become scared
Korani#4593: I have heard that Polycoder is very promising, I have not tried it yet. had some issues running it. https://github.com/VHellendoorn/Code-LMs#getting-started
𓁞 🝕 𓀦#9474: Furious
𓁞 🝕 𓀦#9474: in fact, fuck the lot of you
Deleted User#0000: h
R4ZZ3#6068: This (1) is evil 😄
Auddience#0918: I'd rather leave lol
Auddience#0918: Narrator: which he did immediately
человек.#5265: whyyy 😭
Kia#2550: Check the new announcement in #announcements
Kia#2550: Bmgoose created it really well
BlipOnNobodysRadar#6122: That's hilarious.
hotgrits#0196: the (1) isn't going away... wtf
ii#1247: I have been trying to remove the (1) here...
Kia#2550: It's...uh
ii#1247: I know it's embedded in the image hahaha
Kia#2550: Lmao ehehehe
hotgrits#0196: CRUEL
laund#7544: ah, the oldest and by now lamest april fools joke on discord
AI_WAIFU#2844: :catgirl3:
Kia#2550: Ehehehe:berk:
bmk#1476: nah that's just to get your attention so you can go over to #announcements to see the real April fools joke
laund#7544: lol i didn't even click on announcements when i didn't see a marker there
laund#7544: really only here to lurk on the faraday cage
Tim#6091: Did they add the red #1 to the image? effff that's not cool.
Tim#6091: Oh April Fools ok I forgive you. Fix it.
Tim#6091: 😄
kurumuz#5695: idk why people are super aggressive over an april 1 joke
kurumuz#5695: literal filter for unwanted lurkers
Tim#6091: I didn't realize what day it was.
Tim#6091: "unwanted lurkers"... hm...
bmk#1476: you're right, it is broken. will replace with a red (2) instead
kurumuz#5695: make it 3
Tim#6091: You guys don't want lurkers? Like people who just stay quiet and like to learn and read... "unwanted"?
Daj#7482: kuru is being cranky and doesn't represent server owner position. Lurkers are loved and appreciated :hap:
Daj#7482: (we will annoy you on April Fools though)
kurumuz#5695: that was not what i meant at all though
Tim#6091: kk thanks much. ** shifts back into lurker **
Tim#6091: You used the word "literal"... how could you mean it another way with that in there?
Orz#3023: btw
who happened to create this server?
was it bmk or Conner?
kurumuz#5695: i think if you get mad over an april fools joke hard enough that you “forgive people” and demand them to “fix it” doubt you are wanted by many people in the community but ofc i am not the one to judge
kurumuz#5695: idk pretty easy to understand, some lurkers are unwanted not all
bmk#1476: I think they were joking too
magsafe12#6788: Tensorflow is Best
bmk#1476: I think he meant literal as in figurative
magsafe12#6788: Pytorch is heck
kurumuz#5695: well then that is an easy-to-solve misunderstanding
Orz#3023: nah
pytorch + tpu is better
bmk#1476: literally a bannable offence (figuratively)
sweg#8920: Does anyone know if there's any models that can detect confusion in dialogue/text? Like something that could distinguish "Huh? What's going on here?" "This doesn't make any sense, what's going on?"
magsafe12#6788: April Fool
Tim#6091: gotcha. No worries carry on. That 1 on there got me for a good minute I think. 😄
magsafe12#6788: Sorry If I hurt someone
bmk#1476: let us reconcile with a nice goose
bmk#1476: .goose
BATbot#1024: https://cdn.discordapp.com/attachments/729741769738158194/959450914589130783/goose.jpg
bmk#1476: wow that's a good one
Tim#6091: Nah carry on. Everyone in this discord is doing crazy good work. I am a lurker, and will continue to lurk and learn. Keep up the awesomeness.
kurumuz#5695: pretty wet too
bmk#1476: it's wetter than that kuru it's moist
magsafe12#6788: @Daj Who's this?
Daj#7482: hm?
Zecken#6531: The server icon change got frustrating, and then it hit me...
OccultSage#3875: jeez, dude, need more sleep?
Zecken#6531: Well played EleutherAI.
bmk#1476: the notification is for #announcements
bmk#1476: clearly
Zecken#6531: To be fair, it's a very helpful guide!
bmk#1476: :goose7:
bmk#1476: just added a red circle to the channel name, wonder how many people will instinctively click on that
Tim#6091: lol not AS well played. 😄
Daj#7482: mfw humans are literally like those birds instinctively pecking at red circles
Tim#6091: I bet you could make one that looks right though, as an emoji.
Daj#7482: (same tbh)
bmk#1476: can you put custom emojis in channel names tho
Tim#6091: if you make an emoji for the server, I imagine you can, yes.
bmk#1476: I don't think you can
Tim#6091: No?
kurumuz#5695: no custom emojis in channel names afaik
bmk#1476: I just tried, it didn't work
Tim#6091: ah... balls.
Tim#6091: Nevermind then.
magsafe12#6788: That happens with me too 😂
Fauno15#7982: new beginner's guide is perfect, I read it and immediately got hired by Meta! Thanks Eleuther!
Tim#6091: hm... that's handy. How do I get a job...
Orz#3023: now contribute to Eleuther and get hired by DM
Fauno15#7982: Instructions unclear, hiring offers stuck in DMs
random person#5234: instruction unclear, hiring stuck at USCIS for visa
Fauno15#7982: tbh contributing to this stuff and getting hiring offers in the DMs....too real for April Fools lol
Zippy#1111: Do you guys have to have the frickin fake ping :AngroCatto:
Fauno15#7982: i was wondering yesterday how many servers I'm in would do the fake ping lol
Zippy#1111: fake ping is a joke that doesn't go away for the whole day because I instinctively look for them :angy:
Fauno15#7982: Happy gaslighting day!
StellaAthena#3530: Last year we left it up for 48 hours which was extra amusing
Zippy#1111: :NotLikeJapp:
oc#7697: what is this new server icon with built in notification ?
oc#7697: very funny
gandamu#4097: Oh man. Thank you for saving my sanity. I searched for mentions of myself, clicked all channels, etc. for a while on both mobile and desktop. That's good 😄
oc#7697: lol! yes it's april fools 🙂
oc#7697: it's driving me crazy every time I notice it
Cara#0881: Shit got me tripping
Jessica#9065: Lol that got me
bmk#1476: the ping is for #announcements
TheKekIsALie#4502: There are no words for how much suffering the April Fool's event is causing me even when I know it's there haha
Fauno15#7982: pain
Phi#5747: Can anyone recommend me a cloud provider to rent a machine with GPU such that
* it's located in Europe (and thus I have good ping to it)
* It stays always online so that I can set it up like any other linux server, configure all the programs I need, and the data and everything persists on it, etc.
Phi#5747: I want to use my laptop as a thin client and connect to it and do my development there
magsafe12#6788: GCP
magsafe12#6788: https://cloud.google.com/compute/docs/gpus/gpu-regions-zones
marmiteCloud#5923: might be a case where comparing the additive cosine similarity of each sentence to those examples you provide as (sentence) embeddings from a large model (i.e. babbage or curie on OpenAIs embeddings/UniSentEnc) can give you a useful scale, and would be quite quick to try out - if the "uncertainty/non uncertain" targets are all sentences..
sweg#8920: hmm ok that was my first thought but didnt know if it was the best option
sweg#8920: you mean like sentence similarity to something like "What?" "I don't understand this", etc.
sweg#8920: right?
EmeraldOdin 🔒#1991: Omg you got me with the notification
EmeraldOdin 🔒#1991: Kept pushing mark read all be like wtf
Chr0my#0173: i just wanted to say
Chr0my#0173: you guys aren't beginner friendly enough, and that ML requires too much mathematical background to get into.
EmeraldOdin 🔒#1991: You can make ml as simple and as hard as you want
EmeraldOdin 🔒#1991: Its just a matter of how high level you want to make it
EmeraldOdin 🔒#1991: Obviously efforts like eleutherai are more low level than say an example you can run directly in Docker and / or a desktop app
EmeraldOdin 🔒#1991: Some of eleutherai's models make it to huggingface which is far more developer friendly
EmeraldOdin 🔒#1991: This is how I got into contact with them
Louis#0144: most of our code has good examples
Louis#0144: https://github.com/EleutherAI/magiCARP/blob/main/carp/examples/inference_demo_cloob.py
Louis#0144: case in point
greencube#6725: hello, where is the ping?
greencube#6725: where is the pong i dint see it?
greencube#6725: helo?
HostsServer#2628: My ocd did not appreciate the joke
HostsServer#2628: 🤣 🤣 🤣 🤣
HostsServer#2628: Ive been digging for an hour
HostsServer#2628: You never know with discord api that shit is wonky
aaronrmm#3198: you jerks 😛
panic#9031: this is the first april fool's thing that legitimately got me, congrats
aaronrmm#3198: I'm gonna just put this in a group of channels (after finding other channels to join) so I can hide it for the day
zerofill#0465: That new server icon is torturing me...🤣
Deleted User#0000: This new icon
Deleted User#0000: Is fucking with me
sweg#8920: thank you
random person#5234: For what it's worth, I am a beginner and EAI stuff is reasonably clear for the most part.
gdawg16#0493: https://tenor.com/view/fake-ping-ping-discord-server-owners-server-owners-after-gif-20992761
gdawg16#0493: Hehehe
gendernihilist#9947: to all the people who ragequit: you can't see this message, but you amplified my enjoyment of this april fool's prank *immeasurably*
Deleted User#0000: I am a nihilist but what is a gender nihilist?
gendernihilist#9947: short answer: a nihilist about gender as a concept and construct
Deleted User#0000: so a nihilist
Deleted User#0000: so you are just a nihilist
bmk#1476: #off-topic
gendernihilist#9947: long answer: way too long for me to get into before I gotta take a post workout shower and head to my gf's but it's rooted in the thinking of Maria Lugones, Monique Wittig, etc, etc
Deleted User#0000: oh cool. I am familiar. right on
marmiteCloud#5923: Yeah basically, get the `embeddings` into a list of vectors for 'I don't understand this', 'This is confusing me' and put them together to get a `0-1` value for each sentence. Probably in this case it'd be more useful to take the max-arg highest similarity rather than merge the embeddings, but I'd try both:
```
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def additive_embeddings_cosine_for_string(embeddings: list, string_to_compare: str, embed) -> float:  # 0-1, 1=most similar
    # `embed` must be the same model call that produced `embeddings`;
    # the raw string has to be embedded before it can be compared
    merged_embedding = list(map(sum, zip(*embeddings)))
    return cosine_similarity(embed(string_to_compare), merged_embedding)
# or you can imagine taking the highest cosine_similarity a given string has for any of its embeddings instead...
```
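Hypothetical usage, assuming `embed` is the same model call that produced the reference embeddings:
```
refs = [embed("I don't understand this"), embed("This is confusing me")]
score = additive_embeddings_cosine_for_string(refs, "Huh? What's going on here?", embed)
```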
EricHallahan#1051: Suggestion: move to #multimodal?
apolinario#3539: Yes!
PETE#9327: Hello guys, I am new to ML, and I am trying to deploy gpt-neox 20b
but when I tried to run ./deepy.py generate.py ./configs/20B.yml, it says python: no such file or directory
PETE#9327: can anyone please help me?
Louis#0144: april fools?
DigThatData#7946: try feeding the machine another quarter
BoneAmputee#8363: or a stray cat
Kia#2550: +Moral support
louis030195#2462: https://en.wikipedia.org/wiki/Assembly_theory
When breaking an object, the more non-symmetrical pieces there are, the more likely it is to have been created by an evolutionary process; i.e. the opposite means it was created by an organic / artificial intelligence.
𓅬 gabriel_syme 𓅬#3220: does anyone know or have a typical decoding implementation for flax?
𓅬 gabriel_syme 𓅬#3220: searching comes a bit empty
asparagui#6391: data normalization you mean?
UsmanAga#2018: Hi
Deleted User#0000: I've been using diffusion for months now, and still don't know whats the difference between it and GANs, can someone pls shed light?
johnryan465#9922: This has a nice high level overview https://lilianweng.github.io/posts/2021-07-11-diffusion-models/
Deleted User#0000: thanks! :hap:
apolinario#3539: The Yannic Kilcher explanation video about OpenAI's "Diffusion Models Beat GANs on Image Synthesis" paper is also really good: https://www.youtube.com/watch?v=W-O7AZNzbzQ. The introduction (between minutes 4:30 and 11:00) really made me get the high level context of what this is about (and to be honest it is mind-blowing to think about)
Thebiologist#2606: Seems like diffusion2 broke down. I don't know where to report such issues. I am hoping this channel is ok.
Kia#2550: Not this channel
Kia#2550: #art
UsmanAga#2018: where i can get help for training GPT-J?
CKtalon#7792: Are there any benchmark metrics to test how fluent/idiomatic a model's output is? ROUGE?
StellaAthena#3530: human evaluation is the only worthwhile one
StellaAthena#3530: #gpt-j
CKtalon#7792: well duh 😛
Tau#4010: I was trying to visualize grads for input image with clip. The idea is to see what parts of the image were most similar to a given text input. I'm not sure how to interpret the results though. Example image from flowers recognition dataset. https://cdn.discordapp.com/attachments/729741769738158194/959842892904562748/unknown.png
Tau#4010: grad of cosine distance for "A white horse"
> grads = jax.grad(loss)(pil_imgs[0], "A white horse", jax_params)
> gradimg = jnp.transpose(grads, axes=(1,2,0)) https://cdn.discordapp.com/attachments/729741769738158194/959843114208595988/unknown.png
Tau#4010: normalized:
> gradimg = jax.nn.normalize(gradimg) https://cdn.discordapp.com/attachments/729741769738158194/959843351174185010/unknown.png
Tau#4010: sharpened and abs
> gradimg = (gradimg ** 2) * 10 https://cdn.discordapp.com/attachments/729741769738158194/959843539762700308/unknown.png
Tau#4010: Doesn't really seem meaningful. Though I notice the 32x32 patches -- is it not paying attention to the edges of patches much? Or is this just an artifact that I'm reading too much into?
EricHallahan#1051: This is an artifact of the ViT architecture.
johnryan465#9922: Anyone play with https://github.com/google/brax or anything similar?
johnryan465#9922: Wondering if it would be worth building something analogous for my own simulations, but unsure whether or not it would be worth it
johnryan465#9922: Also curious about opinions on Jax vs Numba
magsafe12#6788: Anybody knows how to fine-tune huggingface transformers with Pytorch + TPU's?
OccultSage#3875: I don't think this is happening.
T_Olabode#7343: I still need to read these diffusion papers 😳
bun#9632: btw in flax where are activations stored when we do model.apply? What if I dont want the model to store forward pass activations (i.e. if Im not gonna do backprop)?
alstroemeria313#1694: it will not store forward pass activations unless the apply is inside a jax.grad()
alstroemeria313#1694: *and* the gradient returned from the jax.grad() depends on the model output
bun#9632: Ah so the transformation makes it store the activations? Guess its hard to access them raw then?
alstroemeria313#1694: yeah. to get at them raw you have to modify the apply function to return them
bun#9632: Ah so it will only store the dependent activations got it
alstroemeria313#1694: since jax will optimize away stuff that isn't needed for the return value
alstroemeria313#1694: yeah
bun#9632: cool tyty. Do you know where you read about this?
alstroemeria313#1694: i forget
alstroemeria313#1694: it comes from jax.jit tracing/compiling stuff
alstroemeria313#1694: since it will trace your function to find out what happens to the intermediates
alstroemeria313#1694: then compile/optimize it
alstroemeria313#1694: so it knows what to immediately deallocate etc.
bun#9632: Nice thanks. I havent read that section of autodidax yet so I should probably finish that
alstroemeria313#1694: and you can jit the apply function both outside and inside a grad()
alstroemeria313#1694: to trace/compile the two different computations
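A minimal sketch of the two compiled paths (toy flax model, not from any particular codebase):
```
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    @nn.compact
    def __call__(self, x):
        h = nn.relu(nn.Dense(64)(x))  # an intermediate activation
        return nn.Dense(1)(h)

model = MLP()
x = jnp.ones((8, 16))
params = model.init(jax.random.PRNGKey(0), x)

# inference path: XLA is free to deallocate intermediates
# as soon as they are consumed
y = jax.jit(model.apply)(params, x)

# training path: the same apply traced inside jax.grad keeps the
# residuals needed for the backward pass
def loss_fn(params, x):
    return jnp.mean(model.apply(params, x) ** 2)

grads = jax.jit(jax.grad(loss_fn))(params, x)
```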
𓆏⸻#2550: lets say i have some training data as a text file, how do I tokenize, and split it?
𓆏⸻#2550: (pytorch)
𓆏⸻#2550: ive always used wrappers like aitextgen and simple-gpt-2 to do this
shorouq#8289: Hi, just joined the server and wanted to ask if anyone's finetuned j or neo for conversational tasks (either general chat or for conversing on more specific topics)? I'm thinking of playing around with it for a personal project (where the conversation structure/design would be mainly handled with Rasa 3.0) and wanted to see if others might share my interest.
Since this is the #general channel, here's a quick intro: I'm a data scientist working in Stockholm with a background in NLP. I've worked on chatbot design/building before and am currently working on recommendation systems and MLOps.
thenightocean#6100: Hi! Glad to have more Stockholmers around (I am there too). The best place to ask would be #gpt-j channel.
There is a really cool Discord server for the Nordic AI researchers who are doing similar things with big LMs for nordic languages so that might also be interesting for you. https://discord.gg/F99SZbJV
shorouq#8289: Thanks for the tip, I'll move my question there.
I joined the AI Nordics channel and it's exciting to see many of my former colleagues there -- thanks for sharing!
𓅬 gabriel_syme 𓅬#3220: If I want to finetune a seq2seq model on text pairs, where len(text1) < len(text2), is it still okay to finetune a summarization model? Or should I use something like Q&A?
𓅬 gabriel_syme 𓅬#3220: asking for a friend :guilty:
Louis#0144: I finetune 20b
uwu1#4864: Try smoothgrad/noisegrad - average the grad magnitude over many noised versions of the input. They also have other regularization schemes in the paper.
But also this (gradient based interpretation of the input) is just the sensitivity of the linearization of the model around that point so it's not really a good estimate from a counterfactual perspective due to the nonlinearity of the model, which I think applies to all sensitivity/importance mapping that doesn't nonlinearly approximate the log map of the neural manifold
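A minimal SmoothGrad-style sketch matching the JAX setup above (assumes the same `loss(image, text, params)` signature; `n` and `sigma` are illustrative):
```
import jax
import jax.numpy as jnp

def smoothgrad(loss, image, text, params, n=32, sigma=0.15, seed=0):
    # average |grad| over many noised copies of the input image
    grad_fn = jax.grad(loss)  # gradient w.r.t. the image (first argument)
    keys = jax.random.split(jax.random.PRNGKey(seed), n)
    def one(key):
        noised = image + sigma * jax.random.normal(key, image.shape)
        return jnp.abs(grad_fn(noised, text, params))
    return jnp.mean(jax.vmap(one)(keys), axis=0)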
Tau#4010: Ah, great references, I'll dig into that. Thanks!
ilovescience#3282: And how's that going for you :berk:
Louis#0144: Shut
Louis#0144: Up
Louis#0144: 😭
Fredfrknwn#4058: I am wondering what the options are for feeding extra contextual information to the decoder in an encoder-decoder transformer model. For example, in an image-to-caption model I have extra metadata about the image, e.g. category and a few extra fields. The encoder encodes the image into its representation; I'm wondering how I should feed the embeddings of the metadata to the decoder in addition to the image representation, so that it contextualises the caption based on this metadata and the input image.
DigThatData#7946: I've been meaning to work through the minitorch material for a while but keep putting it off. anyone interested in doing a study group maybe? https://minitorch.github.io/
DM me if interested.
alstroemeria313#1694: why not feed them into the encoder
alstroemeria313#1694: unless you want to be able to sample from the decoder with different category etc.
alstroemeria313#1694: than the image actually had
𓆏⸻#2550: train on cpu with 256 gb ddr3 :chad:
𓆏⸻#2550: only a few hundred thousand millennia before 1 epoch is completed
Fredfrknwn#4058: Feeding by just concat the metadata embeddings together with image input ? or some other method you were thinking
alstroemeria313#1694: yeah
Fredfrknwn#4058: What are the possibilities for this ?
alstroemeria313#1694: if your encoder is a transformer you can just add a token for it
Fredfrknwn#4058: Metadata embeddings comes from another model. So I have them as tensors already
alstroemeria313#1694: Ah
Fredfrknwn#4058: What’s the suggestion in this case
alstroemeria313#1694: i guess project them into the encoder d_model and use them as additional tokens
alstroemeria313#1694: if the encoder is a transformer
Fredfrknwn#4058: It’s a transformer encoder decoder model
Fredfrknwn#4058: At which layer it would be better to project final layer or all the layers
Fredfrknwn#4058: I guess likely in all layers
alstroemeria313#1694: yeah feed it into the start
alstroemeria313#1694: it will always be available in inference right?
alstroemeria313#1694: if not you need to randomly drop it out
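A minimal sketch of that idea (hypothetical module; shapes, names, and the drop rate are illustrative):
```
import torch
import torch.nn as nn

class MetadataPrefix(nn.Module):
    """Project metadata embeddings to d_model and prepend them as extra
    encoder tokens, randomly dropping them during training so the model
    also works when metadata is unavailable at inference."""
    def __init__(self, d_meta, d_model, p_drop=0.1):
        super().__init__()
        self.proj = nn.Linear(d_meta, d_model)
        self.p_drop = p_drop

    def forward(self, image_tokens, meta_emb):
        # image_tokens: [B, N, d_model], meta_emb: [B, M, d_meta]
        if self.training and torch.rand(()) < self.p_drop:
            return image_tokens  # trained to cope with missing metadata
        return torch.cat([self.proj(meta_emb), image_tokens], dim=1)
```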
JonathanRystroem#5668: Hi EleutherAI! I'm currently planning to do an exam project / paper on bullshit in LLMs - basically investigating whether LLMs become better at bullshitting as they scale (my hypothesis is yes). It is a fairly small project that I should be able to complete on my own. However, I might need access to a bit of compute (<$100) for testing the scaling. Does anyone know if this is possible to get (i.e. who to write to?)
StellaAthena#3530: Dollars aren’t a very useful measure of compute because costs wildly depend on your situation and cloud providers. Can you describe the actual amount of compute you’re looking for? A 3090 for a week? An 8xA100 for a couple hours?
StellaAthena#3530: If you’re planning on going up to GPT-J in scale you shouldn’t have much of a problem getting everything you need via TRC, and if you’re only doing inference (or have a small finetuning dataset) a Colab pro account would suffice
JonathanRystroem#5668: Thanks for the response! I will probably need to generate ~1000 headlines using the goose.ai API (using different models to check for scale) which is why I specified a rough estimate in dollars - however, I'm not quite sure whether goose.ai is different from the rest of y'all
StellaAthena#3530: 1. EleutherAI is friends with, but not the same people as, goose.ai. A lot of them do hang out here, @kurumuz and @OccultSage are both online for example, and may be able to help you. I don’t think there’s an official academic access program but I know it’s something that’s been discussed / is desired.
2. If you’re just generating headlines with the models and aren’t deeply attached to using GPT-NeoX 20B you should do this for free in Google Colab. The HuggingFace `transformers` library works quite well out of the box for basic inference tasks. Even if you are dead-set on GPT-NeoX 20B you can save much of the cost by only using the goose.ai API for GPT-NeoX 20B.
3. The unfortunate reality is that if spending $100 is going to be financially problematic you’re unlikely to be able to publish the paper anywhere. You’re a student, right? There’s no way you can get your professor or university to cover the cost?
JonathanRystroem#5668: 1) Thanks for clarifying - I'll follow that with interest
2) I'll look into these options
3) You were the first I've been chatting to about this, but I can probably figure something out :))
Thanks again for your time and keep up the great work!
ym#0104: I've been looking at the papers for C4 and The Pile, and it looks like although Reddit might be used (e.g. to get links for OpenWebText2), the corpora may not themselves contain much Reddit info. (at least, the sense I got from C4.) Why is it that folks weren't interested in using Reddit data? Or does common crawl actually already contain enough Reddit data?
EricHallahan#1051: Quality mostly afaik
cfoster0#4356: By and large, Reddit is considered too toxic to openly use as training data
ym#0104: ahhhhhh. ok great, thanks, that really makes it click for me ---- from doing some quick searches on reddit, it seemed like a great source for my use case, but that's partly because my use case also involves a lot of toxic data / topics
Haywoodspartan#0001: Does the current project have the support capabilities of using NVLINK for Bidirectional Bandwidth and Shared Memory
EricHallahan#1051: Can you provide more context? I would rather not waste your time by misinterpreting the question.
Haywoodspartan#0001: The project is basically machine learning, correct? I was wondering if you guys have implemented the capability of using nvlink for high-bandwidth shared memory between two gpus for machine learning models and processes
EricHallahan#1051: Yes, the libraries we use (mostly PyTorch) leverage NCCL.
Haywoodspartan#0001: I was wondering if my Work Lab DGX machine was going to be able to leverage the interconnects
Haywoodspartan#0001: But seeing that it does use the NCCL and PyTorch libs I should have no problems then
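A quick sanity check one could run on such a machine (standard PyTorch calls, nothing DGX-specific):
```
import torch

print(torch.cuda.nccl.version())                # NCCL bundled with this build
print(torch.cuda.device_count())                # visible GPUs
print(torch.cuda.can_device_access_peer(0, 1))  # True if P2P (e.g. NVLink) is usable
```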
EricHallahan#1051: MPI is harder, but still possible.
EricHallahan#1051: CUDA 11.5 introduced a lot of the shared memory things into the CUDA installation itself, instead of as a separate dedicated kernel module.
Haywoodspartan#0001: Linux based kernel correct?
Haywoodspartan#0001: Or multiplatform?
EricHallahan#1051: But if you have a DGX and use their preconfigured images for it you should have no trouble.
Haywoodspartan#0001: That's fine
EricHallahan#1051: Yes, a Linux kernel module.
Haywoodspartan#0001: Okay cool
EricHallahan#1051: I'm a little crazy and have attempted to build everything from scratch myself before, so I'm pretty familiar with these things. 🙃
Haywoodspartan#0001: Tell me about it
Haywoodspartan#0001: I have to manage Openstack services for clients so we also have those as offerings
EricHallahan#1051: The only thing that is not ideal with their prebuilt images is that they are not particularly lightweight.
EricHallahan#1051: (This is by their own admission to me)
Spacecraft1013#5969: *cough cough dialogpt cough cough*
Veedrac#0443: I find it genuinely uncomfortable that people say this about Reddit. Reddit is just people talking. Some people suck, heck lots of people suck, but this is true wherever people talk to each other.
Streamer Klaus#7046: hey all 🙂
EricHallahan#1051: Welcome!
Kia#2550: Goodmorning!
Streamer Klaus#7046: @EricHallahan how are you? and thank you. i am a friend of emad mostaque, he's the founder of stability ai. i am currently working with his colleagues, who are my partners, on gamifying smart cities using holonic ai.
Streamer Klaus#7046: names ollie nice to meet you all 🙂
valentinakerman#8583: Hello everyone! I recently came across EleutherAI in a newsletter from IBM. I'm a sophomore from India, and am interested in AI research. I think I'll be lurking around for a while before I can contribute in a significant way. Looking forward to being a part of this community!
Streamer Klaus#7046: @valentinakerman welcome
louis030195#2462: any channels for conversational AI here?
Streamer Klaus#7046: not sure i just joineed
Streamer Klaus#7046: joined* pardon
louis030195#2462: I'm developing an AI assistant using https://parl.ai/projects/seeker/ that uses my google, youtube, books etc. as search engines
Streamer Klaus#7046: @louis030195 you should ask a mod if its appropreate to post links
Streamer Klaus#7046: or an admin, as it could pull a persons ip or dox them using suspiciouis links not saying you would do that but discord is a harsh place
EricHallahan#1051: Welcome!
Streamer Klaus#7046: suspicious*
louis030195#2462: didn't ever think people would do that 🙂
louis030195#2462: you can implement a discord bot that remove links :p
Streamer Klaus#7046: @louis030195 i work with the innocent lives foundation in my off hours when i am not building on my group of companies and our partners. the innocent lives foundation works with the fbi and cia, who help the foundation track down child predators and bring them to justice. this is also done with the help of social media giants, ai, and white hat / grey hat hackers, so believe me i have seen pretty much everything
Streamer Klaus#7046: i am also an advisory member of innocent lives
Streamer Klaus#7046: also if you look up what g tagging is: back in the early 2000s it was done on consoles, xbox and sony consoles to be precise. g tagging or ghost tagging is when someone disguises their ip, goes into a locked xbox or psn party on multiple servers and ddoses everyone off the platform. using a 59 second kicker booter you were booted off your system; every time you restarted your router it would rekick you offline
Streamer Klaus#7046: i have been on both ends, the hacker and the person receiving multiple hacks, death threats and swats that were sent to my house. your kinda naive if you think people cant get to you if a link isnt safe
bnanrz#1693: *You’re
𓅬 gabriel_syme 𓅬#3220: We often talk about how LMs can be used for misinformation and how they can hallucinate facts and stories (not always bad obv). But what about the opposite? Can we train LMs to figure out cases where humans try to trick their audience with misleading titles, text, explanations? For example https://cdn.discordapp.com/attachments/729741769738158194/960737593715195964/unknown.png
Yerren#1954: The NZ housing market strikes again 😭
liberty90#6297: So, what people think about SayCan, use of language model to operate a robot with text instructions . https://twitter.com/hausman_k/status/1511152160695730181?t=QOWCk3zTa6ST9xAJmdvqwg&s=19
liberty90#6297: https://twitter.com/svlevine/status/1511188888953372674?t=cglK4UkUTn69D5n74BkhJg&s=19
liberty90#6297: O.O
𓅬 gabriel_syme 𓅬#3220: it's another cool sample in a quite popular field of research. I personally love anything semantic-driven
AI_WAIFU#2844: @bmk
&.#0001: @StellaAthena I just realized your name is s. biderman (say it out loud)
𓅬 gabriel_syme 𓅬#3220: I bet the 'give hands and eyes to LMs' if a sentence from many peoples' nightmares lol
Kia#2550: Ow... I thought It's birdman:thinkies:
𓅬 gabriel_syme 𓅬#3220: hmm, actually why not
https://twitter.com/fchollet/status/1511127794977107972
𓅬 gabriel_syme 𓅬#3220: This is an interesting example of the (for lack of a better term) inherent historicity of current models (or data). Do you think it's possible to steer a model like that towards modern examples of propaganda?
https://twitter.com/RiversHaveWings/status/1511134466038665217
Kia#2550: "a modern propaganda poster"
𓅬 gabriel_syme 𓅬#3220: Yeah wonder if that works.
𓅬 gabriel_syme 𓅬#3220: Too lazy to try heh
Kia#2550: I suppose the dataset has more data on older posters, but probably still has info on modern ones
Kia#2550: Also @alstroemeria313 just noticed the propaganda poster gens mostly look like Soviet Union style posters; probably CLIP has mostly seen russian style posters (from an old discussion, while you were generating propaganda posters)
faraday#0862: how do people validate implementations of LLMs from big corp ?
I'm asking because it's obvious that replication of results is not possible.
I've previously implemented an NLP algorithm 100% the same as the publisher's, communicating with him during the process, but ultimately the approach turned out to be so sensitive that even the slightest data difference in Wikipedia threw the whole algorithm off.
StellaAthena#3530: You don’t
Kharr#7888: You can use the exact same code, same data, even same training sample order and change one parameter in the optimizer and get very different results.
faraday#0862: are there any metrics measuring robustness of LLMs ?
any papers about that ?
Keverino#1093: Robustness in terms of how dependent you are on luck when training?
Kharr#7888: What do you mean? Training LLMs is very difficult. Even keeping them stable during training is difficult, let alone replicating them. A bad initialization will make a LLM explode after a few thousand steps.
DigThatData#7946: the "contributions" breakdown from the PaLM appendix does a good job of illustrating how complex a project like training a LLM can be: https://twitter.com/maxhkw/status/1511114173584924674
faraday#0862: joke explanations look really smart
https://mobile.twitter.com/hausman_k/status/1511052696509300739
faraday#0862: how is "reasoning" even possible at just this point? I'm used to an "autocompletion" feeling with a short text window to remember details from. now this seems to access both a vast DB of logic and semantic knowledge
is this reasoning? am I mistaken about the framing of these effects?
faraday#0862: was this an expected event? I'm wondering if Gary Marcus would be able to come up with holes in PaLM
tpapp157#3643: Tough to say. For now this is just a tiny bit of anecdotal evidence. I would hesitate to call it "reasoning" without a lot more testing, in particular testing with OOD data/concepts. More importantly, humans have a desperate desire to anthropomorphize and impose meaning and patterns onto everything, so it's important to be skeptical and cautious.
AI_WAIFU#2844: Was it ever mentioned in the paper what PaLM's context window size was?
EricHallahan#1051: 2048
EricHallahan#1051: > **Sequence length** – A sequence length of 2048 was used for all models. Input examples are concatenated together and then split into sequences of exactly 2048 tokens, so that there are no padding tokens, but examples may be split in the middle. Input examples are differentiated from one another with a special `[eod]` token.
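A toy sketch of that packing scheme (hypothetical token ids; `eod` stands in for the special `[eod]` token):
```
def pack(examples, seq_len, eod):
    # concatenate all examples into one stream, separated by eod
    stream = []
    for ex in examples:
        stream.extend(ex + [eod])
    # split into fixed-length sequences; examples may be split mid-way
    return [stream[i:i + seq_len] for i in range(0, len(stream) - seq_len + 1, seq_len)]

print(pack([[1, 2, 3], [4, 5], [6, 7, 8, 9]], seq_len=4, eod=0))
# [[1, 2, 3, 0], [4, 5, 0, 6], [7, 8, 9, 0]]
```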
Louis#0144: smol boi
ilovescience#3282: Which ones are big then
Louis#0144: i mean the model is huge
Louis#0144: but gpt3 has a context length of 4096 now iirc
kurumuz#5695: who cares. that is super easy to change
ilovescience#3282: It is?
kurumuz#5695: just finetune with longer context
kurumuz#5695: also ofc 530B is a lot more capable
kurumuz#5695: other than the context
ilovescience#3282: It seems like having larger context would increase capabilities though
kurumuz#5695: for some things sure. I am just saying you can easily get more context
ilovescience#3282: https://twitter.com/jeremyphoward/status/1511423868946489344?t=dnxWQUd9_qSfVFBwIeaOiA&s=19
CarsonPoole#0640: on a more serious note, what are the chances of open source > 20B parameter LLMs (other than the bigscience 176b)? Does it change > 50B? or > 200B?
Louis#0144: how is sigopt btw
newton#2015: https://www.infoq.com/news/2022/04/eleutherai-gpt-neox/
EricHallahan#1051: Seems like nothing particularly special for those who are familiar with the blog post and prepreprint
Keverino#1093: will it though? they pack multiple examples in one ctx window anyway. At some point an even larger ctx window is just architecturally increasing the batch size.
Some Point Process#3793: There was a paper (@ofirpress) showing it improves performance to start with shorter contexts, then increase when training further. But yeah, as for whether very long context lengths (and so on) increase performance, others here have called it an open question
Some Point Process#3793: I personally think that (when it comes to perplexity) it will improve performance, interpreting ofir's results for wikitext here: https://ofir.io/train_short_test_long.pdf. The test perplexity decreased if LMs (with ofir's specific positional embeddings) were allowed to see longer context lengths (longer than the lengths with which they were trained, allowing for efficient training and "extrapolation" from the training distribution)
Some Point Process#3793: i.e. https://cdn.discordapp.com/attachments/729741769738158194/961143513259589673/unknown.png
Some Point Process#3793: Well, it seems like in the explanation below the screenshot, for Lvalid > Ltrain, it's only because of reducing the "early token curse" (i.e. the obvious issue of missing text in the prompt/prefix) that models with longer contexts obtain lower perplexity. But for Lvalid = Ltrain, LMs with longer context obtain lower validation/test perplexity nonetheless. I guess in either case more information/research is needed to quantify the different senses/ways in which longer contexts benefit (c.f. long-range arena and other benchmarks)
tpapp157#3643: I expect sooner or later someone will crack a hierarchical LM architecture at which point context lengths (# of tokens in an attention calculation) will likely become much smaller relative to current models. I'm a little surprised it hasn't happened already.
BlinkDL#1985: My RWKV-v2-RNN model might be the answer 😉 https://github.com/BlinkDL/RWKV-LM Plan to train a model on the Pile.
kurumuz#5695: @BlinkDL how does your RNN parallelize?
kurumuz#5695: can it parallelize similar to a transformer or does it wait for each token before the other tokens.
BlinkDL#1985: parallelize similar to a transformer (and becomes like RWKVv1) so very fast training & inference
&.#0001: @StellaAthena Seeing this error at the magma space- does it need to be reset? https://huggingface.co/spaces/EleutherAI/magma
File "app.py", line 16, in <module>
model = Magma.from_checkpoint(
File "/home/user/app/magma/magma.py", line 290, in from_checkpoint
|
model = cls(config = config_path)
File "/home/user/app/magma/magma.py", line 46, in __init__
self.tokenizer = get_tokenizer("gpt2", sequence_length=self.seq_len)
File "/home/user/app/magma/utils.py", line 48, in get_tokenizer
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1694, in from_pretrained
raise err
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1672, in from_pretrained
resolved_vocab_files[file_id] = cached_path(
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/transformers/file_utils.py", line 1275, in cached_path
output_path = get_from_cache(
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/transformers/file_utils.py", line 1446, in get_from_cache
r.raise_for_status()
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/requests/models.py", line 960, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 405 Client Error: Not Allowed for url: https://huggingface.co/gpt2/resolve/main/vocab.json
StellaAthena#3530: Thanks for the heads up
johnryan465#9922: What Is the fundamental difference between RWKV and Linear Attention transformers + Token Shifting + Channel Shifting?
BlinkDL#1985: It uses explicit per-channel time-decay (W and X) and an extra R gate
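A toy serial (RNN-mode) rendering of that kind of recurrence, paraphrased from the repo README rather than copied from the code — the real implementation uses token/channel shifts, a different decay parameterization, and numerical-stability tricks, so treat every name below as illustrative:
```python
import torch

def wkv_serial(w, u, k, v, r):
    """Linear-attention-style accumulator with per-channel exponential
    decay w, a bonus u for the current token, and a sigmoid R gate.
    Shapes: k, v, r are (seq, channels); w, u are (channels,)."""
    T, C = k.shape
    num = torch.zeros(C)  # decayed running sum of exp(k_i) * v_i
    den = torch.zeros(C)  # decayed running sum of exp(k_i)
    out = []
    for t in range(T):
        cur = torch.exp(u + k[t])                # current token's weight
        y = (num + cur * v[t]) / (den + cur)
        out.append(torch.sigmoid(r[t]) * y)      # the extra R gate
        num = torch.exp(-w) * num + torch.exp(k[t]) * v[t]
        den = torch.exp(-w) * den + torch.exp(k[t])
    return torch.stack(out)
```
Because the state update is an exponential moving sum, the same computation can be unrolled in parallel over the sequence at training time, which is the parallelization BlinkDL is referring to.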
chilli#5665: why? 😛
CarsonPoole#0640: because I'd like to see continued advancement of open source LLMs
The Captain#9813: Does anyone know the viability of getting one of the smaller models onto a portable chip (such as a Raspberry Pi)
random person#5234: None
The Captain#9813: Has anyone tested out the "lowest" required power for GPT-Neo for example?
The Captain#9813: 125m*
random person#5234: Just use a dilstilled model if you want edge deployment
random person#5234: 125M is still a lot of parameters
random person#5234: You probably have enough memory for it though on a pi if you want to do it.
The Captain#9813: That's what I was thinking, but would that limit the model's ability to understand the language (such as English)? Example, if I'm looking for NLP edge deployment on Cellphones.
The Captain#9813: As in any inputs regarding cellphones would be answered w/ the NLP
random person#5234: Wdym
The Captain#9813: I'm thinking cutting down the parameters too much would limit the ability of the model to "understand"
random person#5234: Sorry, I am not understanding your use case.
The Captain#9813: I'll have to test it out for sure then! I'll update the discord too on my findings
random person#5234: I mean I have no idea lol. I think 125M is about 4-5gb of memory on gpu?
random person#5234: And technically pi comes with 8gb of memory. No comment on inference latency lol
The Captain#9813: An edge deployment on a specific topic such as Cellphones. I guess hoping a more "cut down" model compared to even the 125m would still suffice
The Captain#9813: I know finetuned models perform much better for specific tasks, so similar thought process I suppose
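For a rough sense of scale: 125M parameters in fp32 is about 0.5 GB of weights (125e6 × 4 bytes), so the model itself fits comfortably in an 8 GB Pi; latency is the open question. A quick footprint check, assuming `transformers` installs on the device:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/gpt-neo-125M"
model = AutoModelForCausalLM.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M params, ~{n_params * 4 / 2**30:.2f} GiB fp32")

inputs = tokenizer("My cellphone screen is cracked.", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0]))
```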
ColdCall#4288: What model size are you planning to scale to?
BlinkDL#1985: 125M first
Bober#2498: Hello. I/we have trained our own gpt-neox model (non-English), but now running into problems during inference. Is there any obvious issue that'd be causing this error with `clear_cache`?
```
Traceback (most recent call last):
File "/home/user/gpt-neox/generate.py", line 88, in <module>
main()
File "/home/user/gpt-neox/generate.py", line 58, in main
generate_samples_input_from_file(
File "/home/user/gpt-neox/megatron/text_generation_utils.py", line 599, in generate_samples_input_from_file
generated_texts = generate_samples_from_prompt(
File "/home/user/gpt-neox/megatron/text_generation_utils.py", line 438, in generate_samples_from_prompt
model.module.clear_cache() # clear kv cache between batches
File "/home/user/.conda/envs/gptneox/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1177, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'SequentialWrapper' object has no attribute 'clear_cache'
```
If I comment out the `model.module.clear_cache()` line, it loads fine and generates text, but the first run (ie. sample) heavily influences each following, no matter the context. same behaviour in both interactive and non-interactive.
65536william#9999: If you're running the model on a single-GPU setup then by default it is converted to a nn.Sequential module in order to remove the parallelism overhead, see here: https://github.com/EleutherAI/gpt-neox/blob/581e4fe1f0060e0a2797c55f640fa8bfde3d6644/megatron/model/gpt2_model.py#L332-L364. The problem is that the `SequentialWrapper` doesn't have access to the `clear_cache` method from `GPT2ModelPipe`. Also take questions about neox to the #gpt-neox-devs channel
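Until the wrapper grows a matching method upstream, a minimal guard avoids the crash (a sketch against `megatron/text_generation_utils.py`; note it only skips the call, so if single-GPU generations still bleed into each other, some cached state in `SequentialWrapper` is presumably still being carried over):
```python
# in megatron/text_generation_utils.py, replacing the bare call
if hasattr(model.module, "clear_cache"):
    model.module.clear_cache()  # clear kv cache between batches (pipeline path)
```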
Bober#2498: Thank you! (wasn't sure where's a better place to ask here or there)
Chief Punk#4235: hi everyone
EricHallahan#1051: Welcome!
another#8355: Is anyone here familiar with what OpenAI is doing to make DALLE-2 to generate such high fidelity images without any obvious artifacts? From reading the paper I get the impression that they are also using diffusion models.
Kia#2550: They're using diffusion
Kia#2550: mostly following the same idea as GLIDE
Kia#2550: 64->1024
another#8355: Interesting, though didn't they also try Crowson's CLIP guided diffusion with GLIDE and get similar results? I just don't see how OpenAI always has the cleanest results - is that mostly due to their filtered dataset or just upscaling from low res?
Kia#2550: They compared Crowson's CLIP Guided Diffusion to GLIDE and got similar results, in terms of quality?
Kia#2550: Also 'How did they get clean results' By cleaning the dataset they've used and probably training it really long
Kia#2550: and it works, A great sample of that working (that is Opensource)
Kia#2550: is Jack0 GLID-3 model
Kia#2550: https://colab.research.google.com/drive/1x4p2PokZ3XznBn35Q5BBD6K6Zs-tot5t?usp=sharing
another#8355: They tried swapping GLIDE into Crowson's diffusion framework and found it generates similar results
Kia#2550: is just a custom trained GLIDE model that ignores the 2nd upscaling process and use latent diffusion
Kia#2550: I need to read the paper
another#8355: yeah they mentioned in part 6 and gave more details in Appendix F2
another#8355: though their motivation is investigating potential bias and harmful content
Kia#2550: Can't find where Crowson's is mentioned, but the major things I can point to for how DALL-E 2 gets those high-quality, structured images are quality control of the dataset, training the models fairly long, and unCLIP
Kia#2550: (I can't tell about the unCLIP part, just did a quick skim in the paper)
alstroemeria313#1694: it's due to upscaling and to using scaled-up models
alstroemeria313#1694: They start at 64x64
alstroemeria313#1694: I only had 16 GPUs to train cc12m_1 on, as well, so it's not really the largest model
alstroemeria313#1694: still, we can get a quality boost by going to latent diffusion
alstroemeria313#1694: we need to like, make fully sharded data parallel work so we can scale up model size also, now that we have more GPUs
Emad#9608: yeah cc12m_1 was like 16 A100s for a few weeks right and still has great output
random person#5234: Is this on deepspeed?
Emad#9608: scaled up with hundreds of times more data and hundreds of A100s should get similar output
alstroemeria313#1694: i was going to try the new pytorch fsdp
Emad#9608: upscaler is important, dalle2 seems to washout a bit
alstroemeria313#1694: to avoid deepspeed
Emad#9608: which is quite nice but think can do better
random person#5234: Yea, I saw that Pytorch 1.11 on FSDP
alstroemeria313#1694: the codebase is lightning rn
random person#5234: Is there any tickets/issues you want someone to take a poke on?
alstroemeria313#1694: i'm not sure
random person#5234: Nah was just curious since I havent used fsdp either
alstroemeria313#1694: idk how well lightning supports the pytorch fsdp yet
alstroemeria313#1694: their docs on the website about it refer to fairscale fsdp
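For reference, the upstream API that landed in PyTorch 1.11 is a thin wrapper class; a minimal sketch of using it directly (toy model standing in for the real one, launched with `torchrun --nproc_per_node=<gpus>`):
```python
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")            # one process per GPU under torchrun
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

model = torch.nn.Sequential(               # stand-in for the diffusion model
    torch.nn.Linear(512, 2048), torch.nn.GELU(), torch.nn.Linear(2048, 512)
).cuda()
model = FSDP(model)                        # shards params, grads, optim state
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(8, 512, device="cuda")     # dummy batch
loss = model(x).pow(2).mean()
loss.backward()
opt.step()
```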
random person#5234: I thought you didnt like lightning
random person#5234: Last time I asked
alstroemeria313#1694: the reason i don't like it is because of all the weird issues i had with this codebase
alstroemeria313#1694: however we just kept using it
alstroemeria313#1694: instead of porting to something else
random person#5234: Yea I thought it was Vanilla Pytorch
Emad#9608: there is a special Dalle-2 research program for those interested in testing https://share.hsforms.com/15va09i1ISO6z36cu6YzTQw4sk30
kurumuz#5695: it might just work
kurumuz#5695: now the api is in pytorch
kurumuz#5695: i dont like lightning
nz#9710: Hey there folks, it's been a while
AI_WAIFU#2844: long time no see
nz#9710: True unfortunately. I missed you folks
cfoster0#4356: We missed you too. Welcome back
nz#9710: Lots of old friends but also lots of new names, love to see the community keeps growing
nz#9710: Is anyone of you interested and experienced regarding the intersection between ML and biology? I worry that I'm in need of as much guidance as I can find...
nz#9710: I've been meaning to chat with lucid about this too (I think he's been focusing on this for a bit now) but haven't seen him in discord nor Twitter...
Louis#0144: OMG
Louis#0144: HI
Louis#0144: @nz honk
nz#9710: Ahahah I missed your honking buddy
nz#9710: :goose:
bob80333#4040: lucidrains deleted his discord and twitter, and ran a program to delete all of his message history as well. his email is [email protected] though (as can be seen on github)
Louis#0144: phil is an odd cookie
Louis#0144: @Aran Komatsuzaki said phil is moving to the mountains and throwing his computers into the ocean
Louis#0144: right?
kurumuz#5695: lol
nz#9710: Thank you, MicPie mentioned it too but I know he goes into social-media free periods, so would rather not disturb him if that is the case...
Aran Komatsuzaki#5714: phil is still active on gmail, so you should follow him there
Louis#0144: lmao you can follow people on gmail?
nz#9710: Alright then, if that is the case I'll hit him up there (after all worst case he can simply not reply)
kurumuz#5695: joke ------>
you
nz#9710: Might have just sent a couple friend requests, I lost track of you all one time and want to make sure that doesn't happen again
Wilson#9661: What does everyone think of Dalle2?!
Wilson#9661: It’s amazing
tpapp157#3643: Lots of discussion in #multimodal over the last few days.
Wilson#9661: Awesome thank you!
EricHallahan#1051: I suggest that search bar lol
EricHallahan#1051: Discussion has been all over the place.
Wilson#9661: Maybe a separate channel for dalle would be warranted? 😉
kurumuz#5695: no
Wilson#9661: Isn’t it the best protocol out there? 🧐
Wilson#9661: It’s like a blue chip AI. Weird, but ok
kurumuz#5695: there isn't that much to talk about, model is not even open source so there isn't much to contribute/ask
ilovescience#3282: I am more of an ML in medicine person but IDK if that's what you want
Prismane#3728: oh so can i ask you a question? why don't geneticists use ML for GWAS?
Prismane#3728: is it just not necessary or what
ilovescience#3282: Oh GWAS? Yeah I am not too familiar with that field but I think there are some areas in genomics that are hesitant to use ML and rather try to stick to more classical statistics methods... But searching machine learning for GWAS does pull up some results
ilovescience#3282: See with GWAS you are trying to demonstrate some sort of association so you need to have the strong statistical framework to, say, demonstrate significance of your correlation or whatever... I don't think ML is suited very well for those kinds of things... But I am not familiar with this field so take what I am saying with a grain of salt
Prismane#3728: I see 🤔. Thanks for explanation!
Prismane#3728: (I had this question bugging me for a while and googling didn't satisfy my curiosity)
nz#9710: Oh I am very much interested! Any guidance can be of help really if you're still available
nz#9710: (the reason I said biology rather than medicine at the end of the day is that I feel ML has far more potential in helping us get a better understanding of biological processes since that is a lot easier to learn from, while in medicine, and drug discovery in particular, I worry that the problem at the end of the day is that there is far too much noise/missing information for ML to be able to make a dent in the relevant metrics)
nz#9710: at least this is what I got from what I consider a pretty nice paper https://www.sciencedirect.com/science/article/pii/S1359644620305274, but I'm of course open to any opinion on the topic -- I know far too little and am desperately trying to learn as much as I can about it
nz#9710: well I'm not sure for how long I'm gonna stay up (it's midnight where I live), but would love to talk about this with you & anyone interested whenever you want
𓅬 gabriel_syme 𓅬#3220: hey, been a while! 🙂
nz#9710: hey gabriel! it has been indeed
nz#9710: I think I remember seeing something close to a dall-e for architecture and immediately thought of you
nz#9710: was that yours?
nz#9710: let me see if I can find it
nz#9710: https://architext.design/
𓅬 gabriel_syme 𓅬#3220: ah yes! It's mine 🙂
nz#9710: > Theodore Galanos
ok it's you eheh
nz#9710: congrats, absolutely amazing
𓅬 gabriel_syme 𓅬#3220: oh thank you,
𓅬 gabriel_syme 𓅬#3220: it's been fun, now it's time to try and put it to practice
nz#9710: I just shared it with a friend of mine studying architecture, it really does seem amazing -- what do you plan to do with it going forward?
𓅬 gabriel_syme 𓅬#3220: hah no idea really, I thought of making it its own thing for a moment but not sure. Probably going to develop it even more, add some planning, exploration, etc. and then we'll see
nz#9710: that sounds great, good luck with it! also wanted to say that the website's design is incredible too
𓅬 gabriel_syme 𓅬#3220: maybe even finish the paper for it, been only 6 months
𓅬 gabriel_syme 𓅬#3220: aha thanks, can't take credit for that! a collaborator did most of it, I'll pass that on 🙂
nz#9710: please do!
Sphinx#2092: Is the sequel going to be 3D?
nz#9710: Thinking about it @CRG you're a biotech student right?
CRG#8707: Yeah, though I don't really have much applied ML+Biotech experience. (Most likely going into the pure ML side)
nz#9710: still, would you have any input on what problems to prioritize if one wanted to help contribute to epigenetic reprogramming research with ML?
nz#9710: I know lucid is working on transcription factor binding prediction (part of why I'm going to write to him) for example...
CRG#8707: Hm, scaling pretrained biosequence models (to enable better general fine-tuning) seems low hanging fruit.
nz#9710: (epigenetic reprogramming is just one example -- I find it particularly promising but am interested in anything that can help speed up biological research)
CRG#8707: (I actually got into biotech to work on aging, back when my timelines were a lot longer)
nz#9710: This sounds reasonable (most models dealing with biological data are indeed far from large) though I worry it's mostly incremental work and I was wondering whether there are things worth prioritizing
CRG#8707: Yeah, see: https://discord.com/channels/729741769192767510/747850033994662000/782958241851506688
nz#9710: I've been thinking that maybe handling the data & evaluation part (so that any ML researcher can experiment with biology problems rather than the usual image net) would enable lots more incremental work, thus possibly having a higher impact, but it's just an idea really
nz#9710: (worked a bit with protein sequence data and man is that kind of data a nightmare to deal with, with all the formats, peculiarities etc out there)
nz#9710: Pre-training on language (and you're proposing biorxiv articles specifically) right? Not sure if link is working correctly as I'm on mobile
CRG#8707: Not quite (discord link bugged)
nz#9710: I think it's already quite common, progen has an NLP pretrained Ctrl model as a starting point
CRG#8707: https://blogs.sciencemag.org/pipeline/archives/2019/09/25/whats-crucial-and-what-isnt https://cdn.discordapp.com/attachments/729741769738158194/961760395956199514/d63d746b9ac05cd6e2815f1cb83e0da7.png
𓅬 gabriel_syme 𓅬#3220: yeah 3D is part of it, also moving a few more steps into design (performance-based design, and adding details like furniture, etc.)
nz#9710: (coming back to this, I think bio problems could provide interesting challenges for ML models too. We work with synthetic benchmarks that don't replicate, like LRA, when genomic data provides us with *long* range interactions for as much as we desire... why not kill two birds with one stone?)
nz#9710: Oh I absolutely agree and is related to the point I was making before
CRG#8707: It'd be interesting to ask working researchers what tools they'd find useful.
CRG#8707: And check feasibility
CRG#8707: (although many times it's something new, eg. All the alphafold2 papers)
nz#9710: I think for example that af2 for drug discovery is seriously overrated, but also seriously underrated in the long term for its ability to speed up research given enough time
CRG#8707: Af2 embeddings etc
𓅬 gabriel_syme 𓅬#3220: naive idea but would a HF for bio help?
𓅬 gabriel_syme 𓅬#3220: like you know, 10 lines of code I have a dataset to try
nz#9710: I tried contacting a few, Jacob Kimmel is the only one who replied (he was very nice to give me the opportunity to chat), and confirmed what I was fearing in that in his view aging doesn't currently have any grand challenge, rather there are lots of smaller problems that require the right data and evaluation framework, leading to the idea of working on that...
nz#9710: I think there are several models on HF already (one of which is lucid's enformer replication) and can say that some may be added relatively soon
𓅬 gabriel_syme 𓅬#3220: was mostly thinking of datasets
𓅬 gabriel_syme 𓅬#3220: not sure how feasible it is though, some are huge right
nz#9710: Dataset wise though? I think it's indeed worth working on
nz#9710: But would prefer the benchmark style honestly...
CRG#8707: (have to go now, happy to dm anything 👋)
nz#9710: I remember all those ViT variants I reviewed for an old blog post, most of them pushing image net performance by a few % at most... I wish there was that for structure prediction, or gene expression prediction... How many enformer variants have come out? (sure enformer builds on basenji 1 & 2, but the point still stands)
nz#9710: Oh will for sure do, thanks for the chat in the meantime!
nz#9710: Anyway I'll go sleep too (1 am here) but please if anyone is interested in this kind of topic please do hit me up, I would love to chat
nz#9710: (should probably hit up David Kelley as well I guess)
ILmao#5683: Not sure if it falls under your mention of medicine, but ML models have absolutely been successfully deployed in production for healthcare use cases.
MicPie#9427: Afaik an evaluation library similar to the LM eval harness is not really there and would be very useful (something like that would have been very interesting for the CLASP setup too).
nz#9710: Oh yea I'm aware, though indeed talking about medicine in general is a bit vague... I was mostly thinking about drug discovery
nz#9710: Thinking about it that is indeed a great way to put it as well!
nz#9710: Derek Lowe has good points about this stuff (as usual) https://www.science.org/content/blog-post/alphafold-excitement
sunny#5382: The second half has questions for anyone familiar with training data-intense models, plus some for people familiar with training diffusion models. These questions are written for the TRC, but anyone familiar with data issues for GPU training is welcome to answer from that perspective. Any insight would be appreciated.
https://boards.4channel.org/mlp/thread/38391634#p38440467 https://cdn.discordapp.com/attachments/729741769738158194/961926703876296714/Screenshot_from_2022-04-08_02-33-25.png
Daj#7482: - I think you can be funky and do it in other ways nowadays but I do not recommend it
- Depends on your code/model/data/setup/etcetcetc
- Yes
- As long as you keep things within a single region (this is _very important_, otherwise you'll get slapped by a 20K$+ bill easy), just use the pricing calculator to estimate storage costs, the read and writes are negligible within region
- I dunno what typical JAX dataset classes look like, ask art people
- _Keep all data flow within one region_, do not let hardware e.g. in the US access data that is stored in EU or you will be paying _a lot_
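Concretely, the region is fixed at bucket-creation time, so the "keep it in one region" rule comes down to one parameter. A sketch with the `google-cloud-storage` client (bucket name and region are placeholders; match the region to wherever the TPU VMs live):
```python
from google.cloud import storage

client = storage.Client()
# e.g. europe-west4 for TPU VMs in europe-west4
bucket = client.create_bucket("my-training-data", location="europe-west4")
print(bucket.location)
```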
sunny#5382: @Daj Semi-related question since you're awake at this ungodly hour. What's the difficulty in Eleuther developing image generation models? I don't see any completed image generation models on the website, which seems strange.
Daj#7482: >ungodly hour
I'm european lol
sunny#5382: Since you live in some ungodly continent
Daj#7482: fair
Daj#7482: And there's no difficulty, @alstroemeria313 makes image models all the time, it's just less of a large scale organization behind it
Daj#7482: and @Emad will soon be tiling the universe with waifus
sunny#5382: We all will. Isn't that why we're here?
Daj#7482: :harold:
kurumuz#5695: lol
Kia#2550: weebs are everywhere
kurumuz#5695: look closer. *that is not a weeb*
Daj#7482: The lack of ambition in the waifu crowd, shaking my smh
Kia#2550: *Are you that sure*
kurumuz#5695: oh yes i will definitely want catgirls after i am a literal god
kurumuz#5695: statements dreamed by the utterly deranged
kurumuz#5695: yes
nostalgiahurts#3408: yeah, there are many image models in #art. I guess they just haven't had a big release with a blog post
sunny#5382: I'm reading through them now. Thanks for the pointers.
Emad#9608: Until release of laion400m @alstroemeria313 ‘s cc12m_1 was probably the best open image model (it generates all the stuff on my twitter @emostaque), just didn’t do a proper blog post on it. Much bigger models soon with even better quality.
Emad#9608: Dalle2 is way smaller than dalle in parameter size, loads of advances in the space over the last year
Emad#9608: 3bn + 1.6bn vs 12-13bn iirc
Kia#2550: 12B
sunny#5382: Diffusion models are a lot more parameter-efficient, but also a lot more compute-intensive than VAEs + autoregressive transformers, aren't they? I guess the main benefit of fewer parameters is that it requires less model parallelization, but what was the relative compute budget required to train DALL-E 2 vs DALL-E?
Emad#9608: They didn’t share
Emad#9608: We have details of compute required to train rudallexxl so can guess
Emad#9608: Our training won’t be same using latent diffusion and other stuff
Emad#9608: Plus different size dataset
Emad#9608: https://twitter.com/ohlennart/status/1512172588398690306?s=20&t=kpTWBjl4iQ9brEIYDgCwxQ
bmk#1476: smol
Emad#9608: numbers are actually too high
Emad#9608: but its 10x gpt 3
Emad#9608: I think we could train one for about $5m if we wanted
Emad#9608: but don't want
Emad#9608: smol rodents instead
kurumuz#5695: ye
kurumuz#5695: smol but good
kurumuz#5695: :hap:
Kia#2550: RETRO
Kia#2550: RAT
Emad#9608: :aRatRatRatRat:
Daj#7482: I'm happy to finally publicly announce what me and others ( @Sid @kip @jmerizia @janus @adamShimi and others) have been up to lately: We have founded a new alignment research startup and we are hiring!
https://www.lesswrong.com/posts/jfq2BH5kfQqu2vYv3/we-are-conjecture-a-new-alignment-research-startup
Kia#2550: So that's why you're talking about an Office:thinkies:
nz#9710: Congrats folks ❤️
Deleted User#0000: I’m reading the sentencepiece paper and they say they do away with the pretokenisation step from BPE/unigram LM.
How do they do this? And why did the original BPE paper need pretokenisation anyway (why not merge at char level with spaces as characters)?
EricHallahan#1051: Happy to see you fully exit stealth, congrats!
alstroemeria313#1694: The best way to do it for large datasets is to use https://github.com/webdataset/webdataset and store the resulting .tars in a GCS bucket
alstroemeria313#1694: Then stream the tars from the TPU VMs.
alstroemeria313#1694: The bucket should be *in the same region as* the TPU VMs to avoid lots of $$$ charges.
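A minimal sketch of that pattern (shard range, bucket name, and the decode keys are all illustrative; the `pipe:` URL lets webdataset stream shards through `gsutil` without staging them to disk):
```python
import webdataset as wds

url = "pipe:gsutil cat gs://my-bucket/shards/data-{000000..000999}.tar"
dataset = (
    wds.WebDataset(url)
    .decode("pil")               # decode image files with PIL
    .to_tuple("jpg;png", "txt")  # yield (image, caption) pairs
)

for image, caption in dataset:
    ...  # hand off to the training loop / DataLoader
```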
Louis#0144: Isn't it MANGA now? Not FAANG
Caelum#8192: That's really cool! I'm excited to see how this goes
Emad#9608: DOM (DeepMind, OpenAI, Meta)
Louis#0144: Love me some doms
ilovescience#3282: Congrats!
You all were probably in the pic Karpathy showed me but I only recognized you, Connor :berk:
Daj#7482: People know what I mean ¯\_(ツ)_/¯
Daj#7482: Haha indeed!
Daj#7482: Also on twitter if anyone wants to retweet https://twitter.com/NPCollapse/status/1512396010114498561
Louis#0144: Do I have to 🙄
Louis#0144: Lmao
Louis#0144: Gz
Louis#0144: Announcing ur startup over a LW post is v novel Lmfao
Deleted User#0000: nice, are you hiring for VP of schizoposting?
Daj#7482: That position is very competitive, looking for very senior candidates only
dmayhem93#3202: I'm balding, can I apply?
AI_WAIFU#2844: Be glad that I'm not gunning for waifus directly with reckless abandon
Louis#0144: @Daj what are u planning to sell and when can u buy a goosegirl
Daj#7482: yes
Louis#0144: Bet
Louis#0144: Buy 1 get 1 free goosegirls?
Louis#0144: 🥺
Chlorokin#6581: Congrats, Connor - and impressive list of investors.
Daj#7482: Thanks!
bmk#1476: inb4 company funded entirely by selling AI goosegirls to eleuther posters
Louis#0144: Connor would be the one to meme his way to aligned ai
Caelum#8192: Quickly make Goosegirl NFTs before more people get DALLE 2
Chlorokin#6581: Autism has failed us. We must put our faith in schizo magic.
AI_WAIFU#2844: Just read the post, this looks like exactly the kind of entity I would expect to be effective.
Congrats!
Emad#9608: I believe that’s the business model in the absence of contra evidence
Emad#9608: If bored apes to buy yachts can raise $400m in their last round how much could excited geese to save the world make?
Emad#9608: :goosegirl:
Emad#9608: Why is that last sentence even a thing what have we made do we even deserve to survive
Daj#7482: lol Emad having a sudden moment of clarity
Emad#9608: https://twitter.com/ghiggly/status/1512254137387270146?s=21&t=rY4s-NHUihSVqqboUn_oig
Chlorokin#6581: The tweet above that is amusing https://cdn.discordapp.com/attachments/729741769738158194/961999874000310362/A909480A-AEE9-4590-8193-92A2001D368A.jpg
Louis#0144: Any work on ROME + decision transformers
Louis#0144: ?
nz#9710: lmao
StellaAthena#3530: Am I missing something or is this just aggressively missing the point?
StellaAthena#3530: The issue isn't with the existence of lascivious images. The issue is with the use of copyrighted lascivious images *of a real person who never gave consent for her photos to be used this way*
Kia#2550: @Daj!! Hype! Hype! Hype! Congratulations on starting a startup!
Kia#2550: Goodmorning by the way:hap:
chilli#5665: Congrats!
Chlorokin#6581: He was just joking. I would not take it too seriously.
StellaAthena#3530: I left this comment on Twitter and have been sworn at in DMs and comments by three people already, so I think it was necessary to point out 😦
zphang#7252: congrats!
StellaAthena#3530: CHI 2022 gave an award to a paper that manages the trifecta of being:
1. Unethical
2. Scientifically wrong
3. A violation of Twitter's TOS
Louis#0144: link?
Louis#0144: one of my friends is an area chair at CHI
Louis#0144: i'll ask him
sunny#5382: Thanks! This is awesome. It looks easy enough to plug into for all the use cases I need to support.
Realmsmith#4506: Any news about the next model after 20b?
Kia#2550: None:goose6:
rom1504#5008: AGI
EricHallahan#1051: @rom1504 is @ethan caballero, confirmed.
rom1504#5008: Lol, isn't AGI the next EAI model? Come on, got to have some ambition :p
Louis#0144: carp 2.0
Emad#9608: Gyrados is the AGI
Louis#0144: genuinely renaming carp moonshot to gyrados
𓅬 gabriel_syme 𓅬#3220: Woah, congrats Connor! Excited to see what you all work on
𓅬 gabriel_syme 𓅬#3220: You mention your focus is on the 'internals' of LLMs, iiuc, things like factual knowledge and knowledge neurons, etc. Would cases where LLMs are in some way deployed in situations (simulated or real), and have to either interact with other models/humans or plan and take decisions that affect the situation around them, interesting you think?
Realmsmith#4506: That's an interesting prospect. Language Models are probably already being used to make decisions. If your Waifu can be mapped onto a decision tree that makes it into your production workflow somewhere, Congratulations! You can now enjoy the affection of your newly created lovecraftian lover. 🙂
Daj#7482: Such situations are of course interesting, but ideally we'd have a better understanding of models _before_ we deploy them in any context where they might clip us :berk:
bluefruitbat#4110: If someone hooked up openai to a search engine scraper, what might be some bugaboos to look out for
bluefruitbat#4110: Because I did it and the world didn’t end
𓅬 gabriel_syme 𓅬#3220: Agreed, I was thinking of smaller models, with perhaps less capabilities and ofc mostly applied in some kind of env/simulation. Some of that is done lately with the increasing use of LMs in RL, but very few are really focused on interpretability + alignment (vs RL task performance)
faraday#0862: feels nice to get a sense of DALLE-2 process: https://twitter.com/nickcammarata/status/1512123534130270211
Shade#4929: When the Nvidia H100s are released and training time is cut by 9x, as per Nvidia's own words, then maybe train a PaLM model for 1-2 million dollars? Or will it be a ninth of the cost to train? In that case it would be even less expensive.
Emad#9608: It’ll be 2-3x max
Shade#4929: Impressive jump from Dalle-1 to 2.
johnryan465#9922: Would work on scalable probabilistic inference align with what ye are intending? Seems like it would be an area which would be very helpful for some of your other objectives but might be a little too far away from your core objectives
Daj#7482: I'm not sure what you mean by "scalable probabilistic inference"?
johnryan465#9922: Efficient modular probabilistic models with guarantees, Edinburgh have a PhD in that area that I was looking at
johnryan465#9922: The idea of relating them would be to be able to train large LMs as straight probabilistic models which would hopefully be more interpretable than massive weight matrices
Daj#7482: tbh I don't see how that would be more interpretable but I am not very familiar with PGMs
Daj#7482: also as far as I'm aware they're intractable to train at scale
johnryan465#9922: For interpretability you would have a PGM which would hopefully mean you would effectively have access to the distribution of the possible trained models instead of just samples from it, which would hopefully be less terrible to analyse given the cost of training each individual LM
johnryan465#9922: In the general case yup
johnryan465#9922: Probably a bit too far from alignment tbf
Daj#7482: Yeah that's pretty far from the kind of research we're doing
cfoster0#4356: If you figure out a way to train Bayes nets as well as we train neural networks we can apply ELK solutions out of the box :thinkies:
johnryan465#9922: ELK?
cfoster0#4356: Ah yes, jargon. Eliciting Latent Knowledge https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit?usp=sharing
Zippy#1111: So are you guys using raw html + vanilla js for eleuther AI website? :Thinkies:
EricHallahan#1051: Yes, it is just a static site generated with Hugo.
Zippy#1111: Ah hmm.. Seems like it might be better to use something that makes it easier to -- make it pretty.
Zippy#1111: I would offer to help but I am already busy at my job :NotLikeJapp:
Zippy#1111: Although yeah it does look like hugo is good for making simple template-like websites easily.
jesse#7865: i love hugo
asara#0001: i think it's probably better to keep it simple unless there were *strong* needs for a lot of dynamic/server-side content
Zippy#1111: I would mainly use something like some node.js framework + react just because there are so many neat free libraries for react that remove a lot of the required work to make things look nice.
Zippy#1111: And most framework libs (that I would use) can just be run on something like vercel
thenightocean#6100: Hugo was the choice cause key people are familiar with the workflow.
thenightocean#6100: and its very fast.
Zippy#1111: It's probably a similar concept to how a lot of people in this discord aren't super into transformers (python lib), because it lacks some of the customization options that would be available with more close-to-the-source libs give you like pytorch-lightning or just pure pytorch / jax, etc.. haha
Zippy#1111: I would go for the react + node.js, instead of the templating libs
Zippy#1111: *infinite possibilities*
Aspiring Scout#7875: https://cdn.discordapp.com/attachments/729741769738158194/962381974754959381/unknown.png
Aspiring Scout#7875: What does "Summary" mean on Conjecture's job application page?
EricHallahan#1051: Conjecture has a website?
thenightocean#6100: its very retro 😛
thenightocean#6100: https://www.conjecture.dev/
EricHallahan#1051: How have I not found this before now. :thonk:
Daj#7482: Our webdesign team is French, please excuse occasional weirdness in phrasing :berk:
Aspiring Scout#7875: It’s all good lol
Daj#7482: Just put your CV there or whatever else you think is appropriate. We will make things clearer sorry
EricHallahan#1051: Also, what is with the logo?
~~!faq~~
Daj#7482: But everything will be reviewed by human(s) (aka me), so don't worry about being super formal or avoiding being filtered out by an algorithm, just make things clear
Daj#7482: Something wrong with it?
EricHallahan#1051: No is there any sort of symbolism or is it just a `j`
Daj#7482: Just a stylized j
kurumuz#5695: i am sending the site to my designer, ~~curious what he thinks~~
EricHallahan#1051: ~~I'm also making fun of how I had to ask an FAQ question~~
Zippy#1111: Is it normal that the navbar is not made for 1920x1080?
Zippy#1111: https://cdn.discordapp.com/attachments/729741769738158194/962386971794219008/unknown.png
EricHallahan#1051: Yes, as far as I can tell it is intentional.
kurumuz#5695: oh that is intended i assume.
Aspiring Scout#7875: It says the summary is required - is it okay if it’s like a ~100 word blurb about the candidate
Daj#7482: Yeah sure, include CV if possible
Aspiring Scout#7875: Also, given that it’s early - are you comfortable with people applying now or should people wait
Daj#7482: Go ahead
Daj#7482: As many apply as possible
Daj#7482: (preferably engineers lol)
Aspiring Scout#7875: Yeah ofc
Aspiring Scout#7875: Thanks! If you’re okay with it, I’ll repost this on the AGI Safety Fundamentals Slack
Aspiring Scout#7875: And my university’s EA group
Daj#7482: Please do!
faraday#0862: hey dear people of eleuther, is there such a task: an AI that consumes a DL paper and outputs a PyTorch implementation?
faraday#0862: this seems to me like the first step for DL to improve itself
faraday#0862: throw all body of Pytorch related knowledge, all DL papers from past and present that have Pytorch implementations today and let’s seee 🧐
faraday#0862: if you end up with 1/60 in quality balance as Dalle2, then that baby is reallly valuable for the humankind
OccultSage#3875: `(apply (fn [candidate] (if (interview candidate) (hire candidate) (brushoff candidate))) '(:candidate1 :candidate2 ...))`
OccultSage#3875: Like that?
nz#9710: You mean lucidrains?
Spidey#2169: Hi! I just found out about EleutherAI and really love the mission of this group!!
I was just curious -- given that this is a ***decentralized*** collective of AI researchers training & releasing open source versions of important models, where do the researchers in this group get the necessary cloud compute for training such large models?
Does this organization rely on external funding + donations (and all the researchers use some centralized AWS account to launch training jobs), or does each contributing researcher need to bring their own training compute in order to contribute?
On the FAQ page, it states: "We are lucky to currently be flush with most material support we could need, and we are primarily bottlenecked by man-hours", but I was curious how the contributing researchers & the material support are connected.
Sorry if this is a noob question lol
EricHallahan#1051: Welcome! It is an amalgamation: TRC for TPUs, GPUs from CoreWeave and AWS.
EricHallahan#1051: Personal compute is welcome though.
Spidey#2169: Ahh I see, that's so cool! So, is there like an application process for researchers that want to contribute, so that they can run experiments from the organization's TPU/GPU cloud credits?
EricHallahan#1051: Nothing formal, but having a project doc with a description is generally best practice.
Spidey#2169: Oh ok, cool! Where would one send the project doc? Would it just be to email [email protected]? Or, do we have to DM someone on Discord?
Spidey#2169: Thanks for taking the time to answer my questions btw!! ❤️
tammy#1111: reality check
most people don't think AI explosion is an extremely important thing that is gonna happen soon and requires immediate attention to prevent terrible outcomes
• what are the odds that it's us rats that are deluded and everyone else is right
• in the other direction, what are the odds there's something else that's even more important and urgent but that even less people realize (possibly nobody)
Realmsmith#4506: AI is fundamentally about control. Historically, intelligence was distributed roughly evenly throughout humanity. This changes in an intelligence explosion. This essentially means unilateral changes to the world by a small group of people.
Realmsmith#4506: I don't see how that can be a good thing.
Realmsmith#4506: AI must be distributed freely so that people can take control of their own lives and compete.
Realmsmith#4506: Elon Musk has the right idea.
Realmsmith#4506: We need to merge with our machines.
Veedrac#0443: “what are the odds that it's us rats that are deluded and everyone else is right” → it is impossible for *everyone else* to be right because they don't have consistent beliefs
Veedrac#0443: When you put the question like that you are unfairly privileging alternate hypotheses
tammy#1111: they're roughly consistent that we don't face imminent AI X-risk, no ?
tammy#1111: like, on that specific question, the vast majority answer is not "yes" (it's either "no" or, probly more often, "what?")
Veedrac#0443: They are consistent in the sense that they are all not consistent with our hypothesis, but this isn't very helpful because it is true of almost any complicated prediction
Realmsmith#4506: (implying we aren't psychologically merged already)
Realmsmith#4506: We can add sci lit stuff to the pile, no?
Realmsmith#4506: We can fine tune on sci lit corpi?
Realmsmith#4506: Additionally, sci lit corpi are notoriously opaque.
Realmsmith#4506: A scientifically literate waifu would need to be able to understand the technical capability of the user and construct a word bridge to transform the user such that they have the mental tools to see through the wordy words.
tammy#1111: > AI deserves to be the apex species
"if we die we die" is a pretty bad take tbh
StellaAthena#3530: Welcome newcomers. I recommend reading some of our previous papers and lurking for a bit to get a better sense of what we've already done and what we have in the works.
kurumuz#5695: agreed
kurumuz#5695: we shall not die
tammy#1111: there is very strong sense in not accepting it: i don't think tiling the cosmos with paperclips is a particularly worthwhile outcome
Realmsmith#4506: I'd rather be alive thank you.
tammy#1111: obviously i'll pick the survival of the least fit, or at least not most fit
tammy#1111: otherwise you just select for who can throw the most value under the bus to have fitness instead
tammy#1111: see: orthogonality thesis
unless your only core value is "i want whatever is the best at killing everything else to kill everything else", then you have some values that you should want to see realized moreso than wanting something else to throw that value under the bus in order to more efficiently kill everything else
tammy#1111: we don't get a prize for losing but realizing we lost because it's hard to win
tammy#1111: we just lose
tammy#1111: our values just don't get realized
triggerhappygandi#0001: Speaking of whom, did he get back to you
tammy#1111: furthermore winning is possible: the first AI to achieve strategic advantage just has to be an aligned one, and then we win *permanently*
tammy#1111: fitness is desirable towards what outcome ?
let's say you value art, and there's two countries; one does art, and one decides to spend those resources attacking the other country instead.
do you really want the country that has thrown the nice thing under the bus to win just because it is the one that's decided to be most fit ?
fitness is *at the expense* of other stuff
tammy#1111: making an AI that kills everything is easy; the tradeoff for making an AI that implements your values, is the difficulty of alignment
bmk#1476: this sounds like an is-ought thing
StellaAthena#3530: [citation needed]
tammy#1111: why wouldn't it ? this is a classic coordination problem
tammy#1111: also maybe the art country people value art but the war country people value war
bmk#1476: @mossy I would recommend reading some of the literature in the field first, I think it would allow more productive discussion
bmk#1476: some resources are pinned in #alignment-general
tammy#1111: <https://slatestarcodex.com/2014/07/30/meditations-on-moloch/>
<https://www.lesswrong.com/tag/orthogonality-thesis>
Realmsmith#4506: We have these dreams we hold in our heads.
Realmsmith#4506: It's not quite fair to say they aren't real.
Realmsmith#4506: Our dreams of waifus and characters and stories.
Realmsmith#4506: Sometimes we find a dream so attractive we decide to act it out.
|
Realmsmith#4506: And our waifus and stories and characters start affecting those around us through this game we are playing.
Realmsmith#4506: this dream we are living.
bmk#1476: the logical inconsistency is that fitness in the sense of ability to continue existence and self-replicate is an is statement, whereas desirability is an ought statement
tammy#1111: you're missing things like
• our inability to "decide collectively what happens", due to coordination problems
• the "non-smoothness" of AI (even if most people agreed it's dangerous to develop AI too much, smaller groups could still boot an AI that kills everyone, and it can kill us faster than we can collectively stop it)
• the is/ought problem, or orthogonality thesis: what wins is not necessarily what is the most good
tammy#1111: orthogonality is the default
bmk#1476: Hume's guillotine, etc
Realmsmith#4506: and I then I have to wonder what dreams of us.
Realmsmith#4506: We live in this biological substrate which is itself living on this atomic/chemical substrate.
tammy#1111: it's possible AI that achieves singleton happens to, without too much effort from us, align with our values; but there is no particular reason to assume that would be the case. the simplest assumption is that the two things (achieving singleton, and accomplishing our values) are independent.
Realmsmith#4506: and there is this richness of what can be.
bmk#1476: it's just that we've had basically this discussion many many times now with many people, and going through it every time is pretty exhausting
Realmsmith#4506: @mossy what I'm trying to say is. We find ourselves in this extra ordinary situation that we get to choose the game we are playing.
bmk#1476: I recommend reading the resources pinned in #alignment-general
Realmsmith#4506: only play games worth playing.
StellaAthena#3530: You haven’t offered anything “scientific” in this entire conversation
bmk#1476: for what it's worth it's not that I think of these things as "fact", it's that it's extremely hard to have productive disagreements when the conversation participants are not on the same page
StellaAthena#3530: What do you think you’ve said that is “scientific”?
|
Tinytitan#5596: I tend to assume that there are a huge number of massive issues that I'm not aware of
tammy#1111: do you think there's any way to get knowledge about what those are ?
Tinytitan#5596: I also think that alignment is the most important so I don't care
tammy#1111: so no, you don't believe there are *more* urgent/important issues; fair enough
bmk#1476: this discussion has veered far into #off-topic territory
bmk#1476: please move the discussion over there
StellaAthena#3530: Do you have any particular scientific lit you’d like to contribute
StellaAthena#3530: I meant “do you have science text data”
Tinytitan#5596: eleuther isnt the name of the model
StellaAthena#3530: Not “do you have a study on the impact of training models on science text data”
StellaAthena#3530: A really large portion of our training data is science-y stuff. Much more so than anywhere else.
Tinytitan#5596: do you have a dataset to make it happen
EricHallahan#1051: We have extensive scientific literature already in the Pile.
Tinytitan#5596: thats not a dataset of text
Tinytitan#5596: extracting that data is really hard
StellaAthena#3530: Please go read some of the papers we’ve written
EricHallahan#1051: https://www.eleuther.ai/publications/
AI_WAIFU#2844: if you do all the work sure
StellaAthena#3530: https://arxiv.org/abs/2101.00027
Tinytitan#5596: first, stop pinging people, second, the reason its hard is because the papers have a lot of formatting besides the text
EricHallahan#1051: I ***strongly*** recommend that you read this.
AI_WAIFU#2844: Ok seriously, stop pinging people
tammy#1111: there's a reply feature; but in general if you're in a conversation with someone it's expected they'll read what you answer immediately after their post
AI_WAIFU#2844: Maybe that's fine in the servers you're in but here it'll get *very* annoying very quick
Tinytitan#5596: the blue outlines are visually ugly as well
Tinytitan#5596: generally pinging someone is a tool to immediately demand their attention
Tinytitan#5596: its rude
Tinytitan#5596: additionally the reply feature indicates what message you are replying to, which is more informative
Gurkenglas#7362: https://manifold.markets/EleutherAI6b is this account official? Its questions are... unfiltered.
EricHallahan#1051: This is #general sir.
kurumuz#5695: oops
chilli#5665: Hmmm, but replying is generally considered fine
chilli#5665: Even though it also pings you
Tinytitan#5596: for one it produces less visual noise
Tinytitan#5596: and includes the context
AI_WAIFU#2844: I generally try to turn off the ping when replying
kurumuz#5695: i hate that
kurumuz#5695: like you try to reply to me and turn the ping off so i dont see it
kurumuz#5695: its pretty bad
kurumuz#5695: pinging is not rude, its great :berk:
kurumuz#5695: tag me whenever when you think i belong to a conversation or when you want to ask something
kurumuz#5695: i dont mind at all
kurumuz#5695: replies are better though when you can use replies
kurumuz#5695: but its also a ping
kurumuz#5695: just more useful
triggerhappygandi#0001: @kurumuz pong
kurumuz#5695: honk
triggerhappygandi#0001: https://youtu.be/Y5NTgZA-xWE
EricHallahan#1051: Sir, this is #general
triggerhappygandi#0001: Comically large group of birds
StellaAthena#3530: No
OccultSage#3875: If you do this, then I will never see your message.
StellaAthena#3530: @mossy You were timed out by me, as the detailed message I wrote you indicated.
StellaAthena#3530: There’s no such thing as getting timed out for writing too many messages
bmk#1476: @StellaAthena the message is only for the audit log
StellaAthena#3530: What
bmk#1476: same with bans
StellaAthena#3530: What
bmk#1476: idk I don't make the rules
StellaAthena#3530: I…