kurumuz#5695: https://www.youtube.com/watch?v=gWZADVzVMFs
kurumuz#5695: Comma AI talk
flowpoint#7450: lol
flowpoint#7450: was just typing that right now
Louis#0144: The pretraining objective for BART just makes more sense to me
Louis#0144: Still disappointed no one has scaled Bart yet
Louis#0144: That’s what I was working on
Louis#0144: I’m just so busy now
Louis#0144: :’(
flowpoint#7450: for the uninitiated, commaai builds close-to if not better-than Tesla-level self-driving car hardware and software.
they build an upgrade kit that works on many cars.
a nice watch if you're interested in the (probable) SOTA of self-driving
flowpoint#7450: and its open source btw.
kurumuz#5695: its better :berk:
kurumuz#5695: e2e for the win
kurumuz#5695: fuck feature engineering
spirit-from-germany#1488: what's going on in the voice channel? 🙂
ersatz#0001: by how much?
EricHallahan#1051: #interpretability-reading-group
EricHallahan#1051: We've been doing it for weeks.
Pranjal#3894: Need help with which machine learning algorithm/model to use for this problem.
The dataset is of product categorization for Amazon. Feature columns are PRODUCT NAME, PRODUCT DESCRIPTION, BULLET_POINTS, BRAND NAME (all are in string form); and the label column is the NODE_NO (it consists of numbers from 1 to 10,000 which represent the 10k classes among which we have to categorize the data). The dataset has 3 million rows. Which multi-classification algorithm should I use to train my model, as the number of classes is huge (10k)?
PS: Can't use deep learning based models.
bmk#1476: wrong place to ask
EricHallahan#1051: https://eleuther.ai/faq
Daj#7482: For those maybe looking for a project to sink their teeth in (or that have some ideas they'd like to pitch as projects themselves), we have been updating our project board (it's still a bit of a mess, bear with us), and have a new snazzy URL: http://board.eleuther.ai/
lone-pine#4172: I've noticed that ML takes advantage of one of the features of GPUs, matrix multiplications, while ignoring another feature that GPUs are well optimized for, texture fetch, something that could help with models' lack of suitable working memory. I've been following AI pretty much my whole life but I'm just sinking my teeth into actual ML coding. What I'm wondering is, do PyTorch/TensorFlow/other libraries have any functionality that would allow a model to access the texture fetch features?
lone-pine#4172: In the video game world (I'm a games dev) the GPU and driver can be extremely smart about how to apply the shader code so that it pre-fetches the texture sample or even converts your shader into faster linear interpolation if your code is "differentiable". I've been thinking about how random access memory could be integrated into the transformer model, and it seems like the main problem is coordinating the memory location between write-time and read-time. For a piece of data, the write head would need to know that the data is worth saving (but memory is cheap so we don't need to be stingy) and it has to write it to a location that has some semantic meaning ("information relating to a person's age goes on the right edge of the texture, information relating to a person's gender on the left edge") and then the read head would need to guess that useful information was in the correct semantic location and look it up ("I need to know about this person's age, so let's sample a pixel on the right edge")
EricHallahan#1051: None of them that I know of. I don't know the constraints of what modern texture fetch pipelines are, but they really would need to support binary32 and/or more ideally binary16 or bfloat16 for fetching weights from memory, and waiting for weights to come from memory is an eternity in DL.
lone-pine#4172: I think ideally you'd want to design it so that each layer computes the next layer's texture fetches
lone-pine#4172: The card can probably fetch any kind of data, but the card is optimized for bicubic interpolations between pixels (which I think would suit ML well) and 16 byte chunks of 4 32-bit floats (RGBA or XYZW)
uwu1#4864: the spatial texture caching machinery is reused between compute and non-compute. the latency hiding which is what actually makes it fast is also reused. For now also all the algorithms we use have a mostly static memory access pattern which can be AOT optimized statically. another problem is that switching the card from compute to gfx mode takes 1ms (haven't checked too recently but prob still true).
It is very possible to implement an ML lib using just graphics features, which I did @ tensorflow.js for a while... but using webgpu rather than webgl would still have given better performance with fewer hacks.
EricHallahan#1051: You can set up a database where you embed all inputs/prior knowledge into embeddings, and then take the current context and compare it with a similarity metric to your dictionary of embeddings to retrieve the information. I know that @aero got this to work pretty well and was able to recall information from a single message in a sea of hundreds of other messages without issue.
aero#1357: works really well just using GPT's hidden states actually, sigurd bot in novelai discord has 166285 memories and still recalls them quickly (about 100ms per sort, 5gb system ram usage)
lone-pine#4172: Can you link me to a whitepaper/blog post/repo?
EricHallahan#1051: I'm surprised it is still that fast to search.
aero#1357: its kinda crazy, I use heapq.nsmallest so thats probably part of it
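A minimal sketch of that retrieval pattern; the store layout, normalization, and embedding source here are assumptions for illustration, not aero's actual implementation (see the HiddenEngrams repo linked below for the real thing):

```python
import heapq
import numpy as np

# each memory is (text, unit-norm embedding); embeddings could come from
# a language model's hidden states, as described above
memories = []

def add_memory(text, embedding):
    memories.append((text, embedding / np.linalg.norm(embedding)))

def recall(query_embedding, k=5):
    q = query_embedding / np.linalg.norm(query_embedding)
    # smallest cosine distance = most similar; nsmallest avoids a full sort
    nearest = heapq.nsmallest(k, memories, key=lambda m: 1.0 - float(m[1] @ q))
    return [text for text, _ in nearest]
```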
EricHallahan#1051: I cannot, maybe aero can.
uwu1#4864: Redis did make a graph structured GPU powered DB using BLAS (graphblas) so maybe one can make one integrated with our ml libs
lone-pine#4172: Okay, I was hung up on the word "embeddings" but I understand it as a high-D to 2D transformation, and yeah that's exactly what I'm talking about, take the word vector, collapse it to 2D or maybe 3D (3D textures exist) and then look up and retrieve information
EricHallahan#1051: No, just a high dimensional vector.
uwu1#4864: oh! see this: https://ai.googleblog.com/2018/06/realtime-tsne-visualizations-with.html?m=1
uses exactly that technique
aero#1357: t-sne doesnt work as well as pca with that sort of data in my xp
aero#1357: <https://github.com/AeroScripts/HiddenEngrams>
uwu1#4864: oh yeah no, was just highlighting them making it "linear" time by using gpu gfx abilities
Louis#0144: Need some lit review help
Louis#0144: citations for using contrastive learning in NLP for classification?
Louis#0144: any ideas?
Louis#0144: @misc{gao2021simcse,
title={SimCSE: Simple Contrastive Learning of Sentence Embeddings},
author={Tianyu Gao and Xingcheng Yao and Danqi Chen},
year={2021},
eprint={2104.08821},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Louis#0144: I was thinking SimCSE
Louis#0144: and Dense passage retrieval
Louis#0144: but besides that I wasnt really sure
lone-pine#4172: So you modify the prompt or the contents of the context window?
Louis#0144: Also I dont know who in eleuther would really know :berk:
Louis#0144: I feel like there must be *someone*
Louis#0144: @Aran Komatsuzaki have you seen anything?
Louis#0144: oh actually since jason does NLU...
Louis#0144: @zphang ideas?
EricHallahan#1051: You add it into the current context.
Louis#0144: NLU uses a lot of contrastive learning if my memory doesnt fail me
lone-pine#4172: Really cool technique, thanks for sharing. I'm trying to build my own chatbot, I started with DialoGPT but it had problems where it kept repeating the context, so I was trying to play with the context, calling it repeatedly with different context to try to get novel text, but it didn't work very well.
EricHallahan#1051: Did you tune DialoGPT? I don't know how it performs tuned but in my experience it ~~sucks~~ is useless in its pretrained state.
lone-pine#4172: I'm going through the huggingface tutorial on fine-tuning as we speak
lone-pine#4172: I think I'm gonna switch from DialoGPT to GPT-Neo-2.7B though
lone-pine#4172: With the fine tuning I can optimize it for dialog
EricHallahan#1051: GPT-Neo does a lot better than GPT-2 because it has the benefit of being trained on the Ubuntu IRC subset of the Pile.
lone-pine#4172: I notice that GPT-Neo outputs an arbitrary amount of tokens before stopping abruptly in the middle of a sentence, but I can fix that by modifying the generator, yes? Obviously I want it to end when the chat message ends
EricHallahan#1051: Yeah, I think HF provides an option for a stop token.
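A rough sketch of that option with the `transformers` generation API; the model and the newline stop token are illustrative choices, not a recommendation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")

input_ids = tokenizer("User: hello\nBot:", return_tensors="pt").input_ids

# stop generating once the model emits the chosen end-of-message token
stop_id = tokenizer.encode("\n")[0]
output = model.generate(input_ids, max_new_tokens=50, eos_token_id=stop_id)
print(tokenizer.decode(output[0]))
```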
lone-pine#4172: But I was thinking about how I could make the chatbot able to send multiple messages at once, because that's more natural to real human conversation over eg SMS
lone-pine#4172: Essentially, sometime generate a second message after the first
EricHallahan#1051: It should be possible to initialize the context to "seed" a new IRC message, or it may well decide to generate a new message by itself. See Appendix F.19 of the Pile paper for an example of the formatting used by Ubuntu IRC.
https://arxiv.org/abs/2101.00027
zphang#7252: Specifically on the task part? nothing immediately comes to mind
maybe some improved tuning/semi-supervised learning work, but none of it feels like the ideas stuck
Louis#0144: lmao
Louis#0144: CARP basically wrote itself
Louis#0144: this is like
Louis#0144: the easiest paper ive ever written
Louis#0144: so much easier than the other CLIP paper
Louis#0144: 👀
Louis#0144: what a fucking nightmare to write that
Louis#0144: @bmk just so we're clear I do this so one day MIRI can be jealous of our publication list
Louis#0144: seethe and cope EY
Louis#0144: seethe and cope
Louis#0144: :berk:
Louis#0144: anyway
Louis#0144: can someone read my prior work section and give feedback
Louis#0144: I think im missing a few good pubs
Louis#0144: but I cant remember
bmk#1476: why would MIRI be jealous of our publications list
Louis#0144: one day*
bmk#1476: Miri explicitly doesn't care about the number of publications they have
Louis#0144: (I actually didnt realize this)
bmk#1476: afaik, the entire reason they live off individual donations is so they're independent of publish or perish
Louis#0144: (I feel silly rn)
bmk#1476: and so they can work absent the traditional bad incentives
AI_WAIFU#2844: that's also kinda our M.O.
bmk#1476: yeah
AI_WAIFU#2844: Focus on maintaining the structural integrity of the planet, not h-index.
bmk#1476: I feel really proud of the research notes format in particular because it lets us get stuff out there without having to deal with all the bureaucracy of writing and submitting papers
Louis#0144: love to see it https://cdn.discordapp.com/attachments/729741769738158194/871182174383116288/Screen_Shot_2021-07-31_at_8.06.47_PM.png
Louis#0144: anyhow
Louis#0144: how do I get a pile citation in
Louis#0144: im not sure I can tbh
bmk#1476: don't try to cite pile just cause
Louis#0144: o ok
Louis#0144: nw
bmk#1476: only cite it if there's a good reason to
Louis#0144: here's a question
Louis#0144: is there an easy way to denote all authors contributed equally
Louis#0144: or do we need ugly asterisks
bmk#1476: can you use {}s?
StellaAthena#3530: Yeah it’s called “publish in a mathematics journal”
bmk#1476: { Goose1 goose2 goose3 }
bmk#1476: it might be a bit of work to wrangle latex to do that though
bmk#1476: what if you embed js to randomize the order every time the pdf is loaded
StellaAthena#3530: Any reasonable website would stop that
bmk#1476: darn
bmk#1476: what if you portmanteau all the names together into one big name
pebbles#7130: put all the letters of all the names in alphabetical order
pebbles#7130: 123eeegggoooooosss
TruGerman#6672: Authors: [some people on the internet, they may or may not be robots]
Louis#0144: need a paper that says "the closer you are to the pretraining objective in a downstream task the better off you are"
Louis#0144: Roberta says that right?
StellaAthena#3530: @Louis Do you mean that a downstream task that is more similar to a pretraining objective is easier to learn?
Louis#0144: yeah
Louis#0144: precisely
Louis#0144: roberta doesnt say this
Louis#0144: 😦
Louis#0144: BART maybe?
Louis#0144: @StellaAthena lmk if you find it
StellaAthena#3530: This seems nearly tautological
Louis#0144: I know
Louis#0144: but I cant find anywhere that confirms it
Louis#0144: it seems totally obvious but like
Louis#0144: maybe it isnt?
Louis#0144: maybe a big enough AR model can do masked stuff np
StellaAthena#3530: But better than a masked model of the same power?
Louis#0144: so I dont need to cite something for it?
StellaAthena#3530: What does the paragraph say
bmk#1476: I read a paper once about transfer learning a javascript model to proteins or something which I can't find now but it seems like it could be relevant
Louis#0144: ```For the base model, we used RoBERTa-Large \cite{liu2019roberta} for both encoders. Future work is needed to explore the litany of transformer models for text-text contrastive learning. Particularly, DECLUTR \cite{giorgi2020declutr} might be of interest as a substitute, as its pretraining objective is similar to this downstream task. As shown in XYZ, a downstream task that is more similar to a pretraining objective is easier to learn. ```
bmk#1476: do a quick experiment, publish it as a research note on the blog, and then cite that :chad:
Louis#0144: LMAO
Louis#0144: HONESTLY
StellaAthena#3530: Yeah
StellaAthena#3530: Do it
Louis#0144: @bmk new speed run?
StellaAthena#3530: Alternatively, I think that paragraph is fine if you just drop the last sentence
bmk#1476: nah I think the result is way too obvious
bmk#1476: or if you want to make it worth doing, it would have to be *really* meticulous
Louis#0144: i honestly dont think so
Louis#0144: scaling laws between pretraining objectives
Louis#0144: testing AR on MLM
Louis#0144: and vice versa
Louis#0144: for like 10 models each
Louis#0144: It feels like a press go and forget paper
bmk#1476: since I think the similarity for LMs is slightly different from our intuitive sense of similarity
Louis#0144: it feels free
bmk#1476: remember the last time we had a "press go and forget" paper? :ptsd:
Louis#0144: but no one has done it...
Louis#0144: ;-;
Louis#0144: ok
Louis#0144: yeah
Louis#0144: youre right
bmk#1476: there is no such thing as a "press go and forget" paper
Louis#0144: i'll go back to carp now
bmk#1476: without Pyfra™
bmk#1476: (some help with pyfra would be really nice I just spent several hours today implementing half of a caching thing)
Sphinx#2092: There's not really a good way to measure similarity.
Sphinx#2092: No one really knows what's the best way to pretrain, beyond obvious things like if you are gonna use enc-dec, pretrain them both together.
someKindaBean#8471: I've seen a paper about model transfer in CNNs that talks about similarity of tasks, specifically taking voice recognition in one language and transferring it to another language
someKindaBean#8471: Oh, I had it slightly wrong, it was about transferability of layers in a CNN
someKindaBean#8471: https://ieeexplore.ieee.org/document/8683043
APi#7462: Wow thank you! Looks like they even released checkpoints! Pity it's in TF, but meh, that will be the user's problem, not mine (I was asking for a friend). Sorry for the late reply.
APi#7462: No, I was asking in general, I.e., also models not developed by you. Thanks
APi#7462: I didn't know about BART. Is it a character-based language model?
MumboJumbo#5351: So im curious what kind of hardware is required to run a 170B parameter GPT model?
Daj#7482: You can _theoretically_ run it on just about any system, just absurdly slowly. The weights are around 350GB alone, so you need _at least_ that much VRAM, so something like https://www.nvidia.com/en-us/data-center/dgx-a100/, if you want acceptable speed
Daj#7482: So "not at any reasonable end-users home computer for the next decade or two"
Daj#7482: maybe that's pessimistic, moore's law is fast
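The 350GB figure is just parameter count times bytes per parameter; a back-of-the-envelope sketch, assuming 2-byte fp16 weights and a GPT-3-scale 175B model:

```python
params = 175e9           # GPT-3-scale parameter count (the "170B" ballpark)
bytes_per_param = 2      # fp16/bfloat16 weights
print(params * bytes_per_param / 1e9, "GB")  # 350.0 GB, weights alone
```

Optimizer states and activations push the training footprint several times higher, which is why training needs far more hardware than inference.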
MumboJumbo#5351: So at this point i'm guessing improvement in GPT is pretty much bottlenecked by current hardware?
Daj#7482: Mostly, yes
Daj#7482: This is just inference to be clear, _training_ you're gonna want like 100 of those boxes
Daj#7482: ideally
MumboJumbo#5351: yea i figured, so does Nvidia pretty much have the only tech capable of this level of AI? Cause i'm honestly thinking about investing a good portion of my portfolio in them lol would be cool to have some professionals opinion on them before i do tho
Daj#7482: There are a bunch of other companies spinning up competing AI accelerators (graphcore, tenstorrent, cerebras, that one Chinese accelerator I forgot the name of that was used to train pangu-alpha), but NVIDIA is currently leading the pack
MumboJumbo#5351: gotcha thanks!
wabi-sabi#5811: Do people here have opinions on the potential of optical computing for deep learning? I love the idea but don't have good familiarity with the current technical bottlenecks.
Chr0my#0173: Hey, just to check is the gpt j 6b ui website down? https://cdn.discordapp.com/attachments/729741769738158194/871367027447701524/unknown.png
Daj#7482: Currently down, yes
Kia#2550: Morning connor👋
Orz#3023: Hello
So I'm taking part in the Amazon ML challenge and I require a pretty beefy jupyter notebook
(***at least 32GB RAM and a GPU***)
is there any way I can get it for free?
StellaAthena#3530: @Orz Google Colab
Orz#3023: 🤡
alstroemeria313#1694: Do even high-RAM Colab instances have that much main memory?
Louis#0144: 24gb
Louis#0144: Not 32
Louis#0144: I doubt you need 32 tho
Louis#0144: I’ve been doing ML for almost a decade now
Orz#3023: I mean
24 would be enough
Louis#0144: And I only recently needed 32
Orz#3023: but I'd need it for free
Orz#3023: not through collab pro
Louis#0144: Colab has free instances
Orz#3023: no 😦
Louis#0144: They’re passable
alstroemeria313#1694: How do you get high-RAM for free
Louis#0144: Oh
Louis#0144: True
Louis#0144: You can’t
alstroemeria313#1694: I mean
alstroemeria313#1694: If a Pro user marks a notebook high-RAM and then a free user opens it
alstroemeria313#1694: Isn't it still high-RAM
Orz#3023: :thinkies:
Orz#3023: how are you gonna execute in it?
Orz#3023: you'd connect to your own instance
someKindaBean#8471: Doesn't it give you a high RAM notebook if you crash it with an OOM?
alstroemeria313#1694: Like if you take this https://colab.research.google.com/drive/15UwYDsnNeldJFHJ9NdgYBYeo6xPmSelP and open it
someKindaBean#8471: Even on free
alstroemeria313#1694: How much RAM does it give you
EricHallahan#1051: I think it tells you to upgrade for a high RAM instance.
someKindaBean#8471: Darn, that trick used to work
EricHallahan#1051: But I am not sure of that.
someKindaBean#8471: It still would only get you to 25 GB, but that's a good upgrade
EricHallahan#1051: You could use Colab TPU v2-8s.
Orz#3023: I don't know how to save data with tfrecords 😦
Orz#3023: and It seems too complicated anyways
Louis#0144: Since it’s $15 tbh you should view it as an investment into your future
Louis#0144: $15 now means a high paying job later
Louis#0144: 🤷♂️
someKindaBean#8471: Yep, the crash trick doesn't work anymore
Orz#3023: sadly
$15 isn't cheap here in my country
also
I don't wanna spend huge money without knowing much about ml
Louis#0144: Oh understandable
Louis#0144: No worries
Orz#3023: like
I wanna be able to at least fine-tune gpt-neo before buying any paid plan
Louis#0144: There are programs for ML for people from third world countries if that applies
Louis#0144: I can’t remember any of them right now
Louis#0144: And I’m out rn so I can’t look
Louis#0144: Sorry for prying though nw
someKindaBean#8471: Honestly, start with a smaller GPT and figure that out first. GPT-2 isn't horrible.
Orz#3023: yeah
I've gotta be familiar with pytorch before that tho
Orz#3023: I'm still stuck with tensorflow
Louis#0144: @Orz apply for TRC
Louis#0144: Say that you want to learn TPUs
Louis#0144: Use a v3-8
Orz#3023: TRC?
Louis#0144: TPU research cloud
Louis#0144: https://sites.research.google/trc/
Louis#0144: They’re very lenient
Louis#0144: You get a month just for applying
Louis#0144: And you can email them asking for more time
Louis#0144: A v3-8 means you can finetune 6b
Louis#0144: Also it has lots of Ram
Louis#0144: Like hundreds of GB
Orz#3023: wow
Orz#3023: I've applied rn
Orz#3023: let's see
Orz#3023: thanks a lot mate
Louis#0144: Nw
Clay Mullis#7136: Hey do you ever have to modify quotas when you sign up for this or GPU instances?
Louis#0144: TRC is not GPUs
Clay Mullis#7136: keep trying to sign up but i'm told to modify quotas or that resources are exhausted
Louis#0144: So no
Louis#0144: lol
Clay Mullis#7136: TRC _or_ GPUs
Louis#0144: Oh
Clay Mullis#7136: so just TRC - the issue happens there
Louis#0144: No I didn’t need to modify the quota for TRC
Louis#0144: For GPUs yes I did
Clay Mullis#7136: weird.
Clay Mullis#7136: gotcha; yeah it's happened to me previously with GPUs and then I was hoping to get a TPU running but yeah; same quota issues
dmayhem93#3202: sometimes you have to try 50 times, I had to do it one morning to restart a TPU and it took forever
Clay Mullis#7136: Fortunate enough to have an rtx 2070 to use locally
Deleted User#0000: I have a question about google colabs
when using GPT-J-6b on the colab link, do you have to reinstall the packages and re-download the model every time you leave the page, or is there a way to keep installed packages and variables so you don't have to run the setup every single time
Deleted User#0000: n00b moment
𓅬 gabriel_syme 𓅬#3220: highly recommend the smallest GPT-Neo (125M), it's wonderful
EricHallahan#1051: You should assume your instance will die if you leave the page.
Deleted User#0000: ty
ym#0104: you can save the model in your google drive if u want faster d/ls (and want to save them some $)
EricHallahan#1051: There is little point to doing that. It would also not fit in the vast majority of users' Google Drive storage.
EricHallahan#1051: So I would very strongly advise against that.
APi#7462: Hi all, a question about TPUs: I think this is the right channel, but if it isn't, let me know. I see that "easy training on TPUs" is often cited as a perk of JAX. To use TPUs you need a paid-for GCP account, right? Is it very different from training and running models on GPUs? I heard it's important to use large minibatches (I guess because the TPU is not a processor but a coprocessor - code execution happens on the CPU, so you want to minimize the communication between TPU and CPU).
zphang#7252: In GPU setups you have 1 (/multiple) CPUs connected to GPUs.
In the *old* TPU setup, you had 1 CPU connected to TPUs/TPU pods
In the new TPU setup, you can run code directly on TPUs (basically there's an on-TPU CPU). JAX is able to take advantage of this
zphang#7252: so now you just need to get TPUs, but you can get them via TRC
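For the curious, a minimal check that JAX sees the TPU cores on one of these new TPU VMs; this assumes a working JAX TPU install, and the shapes are illustrative:

```python
import jax
import jax.numpy as jnp

print(jax.devices())  # a v3-8 TPU VM lists eight TpuDevice entries

# pmap runs one copy of the function per core
xs = jnp.arange(float(jax.device_count() * 4)).reshape(jax.device_count(), 4)
print(jax.pmap(lambda x: x * 2)(xs).shape)  # (8, 4) on a v3-8
```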
cringelord#9963: What's up
Louis#0144: The ceiling
𓅬 gabriel_syme 𓅬#3220: Nice shoutout in ruder's mailing list this week
Sid#2121: oh? link?
𓅬 gabriel_syme 𓅬#3220: my email 🙂 https://cdn.discordapp.com/attachments/729741769738158194/871698980935987260/unknown.png
𓅬 gabriel_syme 𓅬#3220: it's the NLP news
𓅬 gabriel_syme 𓅬#3220: ahh here sry: https://newsletter.ruder.io/issues/icml-round-up-open-collaboration-clip-art-internet-augmentation-new-glue-style-benchmarks-692828
this is where he shoutouts EAI in ICML.
vsamanth#6603: Hi all, a question about minimum hardware requirements for fine-tuning a GPT-Neo 1.3B model... I am facing an out-of-memory issue on a 16GB GPU device even with a batch size of 1. Does GPT-Neo 1.3B need a minimum of 24 or 32GB GPU memory?
Louis#0144: For training?
Louis#0144: Or for inf
vsamanth#6603: Finetuning
Louis#0144: Use huggingfaces deepspeed finetuning code
Louis#0144: You should have no problem with 1.3b on a 16gb GPU
Louis#0144: As long as you’re using fp16
vsamanth#6603: So use fp16 on deepspeed finetuning code,right?
Louis#0144: I think it should automatically use fp16
Louis#0144: On DS
Louis#0144: You’ll figure it out
Louis#0144: Lol
vsamanth#6603: Thanks
𓅬 gabriel_syme 𓅬#3220: if it doesn't fit try adafactor maybe?
EricHallahan#1051: If all else fails, I second the suggestion of trying a lower resource optimizer.
vsamanth#6603: Alright.. thanks guys!!
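For reference, a rough sketch of the suggested route via the Hugging Face `Trainer` with fp16 and DeepSpeed; `my_dataset` and `ds_config.json` are placeholders (the ZeRO/offload settings that let 1.3B fit in 16GB live in that config file):

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    fp16=True,                   # halves weight/activation memory
    deepspeed="ds_config.json",  # hypothetical DeepSpeed ZeRO config
)

# my_dataset: your tokenized dataset (placeholder)
trainer = Trainer(model=model, args=args, train_dataset=my_dataset)
trainer.train()
```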
GrimSqueaker#8837: How many peeps here are from Boston or NY BTW?
Louis#0144: NY here
Louis#0144: Guac and jason are ny too
sweg#8920: Has anyone here used HF models in jax with pmap?
Louis#0144: Not nyc for guac and I
sweg#8920: cant figure out how to replicate them across devices in the normal flax way
bmk#1476: how many peeps here are from glorious Alberta
Louis#0144: @kindiana do u have a moment to help shahbuland
Louis#0144: It’s for an Eleuther paper
sweg#8920: 🥺
kindiana#1016: after 6pm pacific maybe
kindiana#1016: don't they have flax + pmap examples?
sweg#8920: all i found was their source code which says the methods for the model support pmap
sweg#8920: but didnt find anything with replicated the models state
kindiana#1016: https://github.com/google/flax/blob/1844de4d4b5ef819c76cb21ded038cfc10624aa5/examples/imagenet/train.py
kindiana#1016: hf should just be regular flax code
sweg#8920: yeah thats what i followed initially
sweg#8920: for some reason init doesnt return state of the HF model
kindiana#1016: idk how hf works lol
kindiana#1016: maybe @𓅬 gabriel_syme 𓅬 can help
sweg#8920: i think it has something to do with the flax HF models not behaving like normal flax nn.Modules
sweg#8920: cause nn.modules can be init'd to get a state
sweg#8920: but no clue how to get an HF models state because it doesn't let you init in the same way
sweg#8920: oh problem solved
sweg#8920: unlike pytorch, for flax they differentiate the model from the model "module"
sweg#8920: just gotta use module instead
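For anyone hitting the same wall, a sketch of the pattern arrived at above: replicate the Hugging Face Flax model's parameters across devices and call its underlying `flax.linen` module inside `pmap`. The model choice is illustrative, and module signatures vary slightly between model classes, so treat this as a starting point rather than the definitive recipe:

```python
import jax
import jax.numpy as jnp
from flax.jax_utils import replicate
from transformers import FlaxGPT2LMHeadModel

model = FlaxGPT2LMHeadModel.from_pretrained("gpt2")
params = replicate(model.params)  # one copy of the weights per local device

def forward(params, input_ids):
    attention_mask = jnp.ones_like(input_ids)
    position_ids = jnp.broadcast_to(
        jnp.arange(input_ids.shape[-1]), input_ids.shape)
    # model.module is the raw flax.linen.Module behind the HF wrapper
    return model.module.apply(
        {"params": params}, input_ids, attention_mask, position_ids).logits

n = jax.local_device_count()
batch = jnp.ones((n, 1, 8), dtype=jnp.int32)  # leading axis = device count
logits = jax.pmap(forward)(params, batch)
```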
sweg#8920: anyone here dealt with "inconsistent sizes for array axes to be mapped" error with jax.pmap?
sweg#8920: https://cdn.discordapp.com/attachments/729741769738158194/871866383531647016/unknown.png
sweg#8920: https://cdn.discordapp.com/attachments/729741769738158194/871866495406338108/unknown.png
sweg#8920: pmap just doesnt like lists problem solved : P
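The axis-size error usually means `pmap` was handed Python lists, or arrays whose leading axis doesn't equal the device count; the fix is to stack/reshape everything to (devices, per_device_batch, ...) first. A generic sketch:

```python
import jax
import jax.numpy as jnp

n = jax.local_device_count()
batch = jnp.arange(float(n * 2 * 3)).reshape(n * 2, 3)

# pmap maps over axis 0, so that axis must equal the device count
sharded = batch.reshape(n, 2, 3)
print(jax.pmap(lambda x: x.sum(-1))(sharded).shape)  # (n, 2)
```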
DrYazman#2737: where's the invite link to this server?
Kia#2550: <https://discord.gg/4rNDab86>
DrYazman#2737: ty
spirit-from-germany#1488: https://mixed.de/ki-startup-aleph-alpha-erhaelt-23-mio-euro-finanzierung/
Kia#2550: Is this another AI company that built up 23M euros?
Kia#2550: I can't read german but that all I understand
Parker#3197: ```The German AI start-up is also involved in open source communities such as EleutherAI, the developers of GPT-Neo and GPT-NeoX, and relies on academic partnerships with, for example, the Technical University of Darmstadt.```
genetyx8#7543: tldr: german/european competitor to OAI
Parker#3197: are they actually supposed to be a competitor that has the mission to create agi?
Parker#3197: or just training large models for release via api?
Parker#3197: ```"Aleph Alpha's mission is to enable the accessibility, usability and integration of large, European, multilingual and multimodal AI models along the lines of GPT-3 and DALL-E by driving innovations for explainability, adaptation and integration", says CEO Andrulis.```
Parker#3197: I guess I should have just kept reading
Kia#2550: I think they're going the route of creating multimodal models and moving to AGI development from all of the knowledge they gain from creating them
Kia#2550: Ow wow:surprise:
ari#9020: Aleph Alpha is where Connor works
Kia#2550: Like his actual work place?
Kia#2550: That's lovely
StellaAthena#3530: @Kia Yeah, Sid works there too
Kia#2550: That's lovely
alstroemeria313#1694: https://twitter.com/RiversHaveWings/status/1422532767171055618
kurumuz#5695: based
kurumuz#5695: sigma males destroy the society with AI in their spare time
kurumuz#5695: #sigmagrindset
alstroemeria313#1694: What even is a "sigma male"
kurumuz#5695: i have no idea
alstroemeria313#1694: you've just seen the memes too, i see
kurumuz#5695: lol yeah
ColdCall#4288: Its a man
ColdCall#4288: Outside the need for the alpha/beta dynamic and yet somehow inside it
ColdCall#4288: Basically an incel most probs
alstroemeria313#1694: Oh
kurumuz#5695: nah its not equal to incel
kurumuz#5695: i mean incels don't even grind
kurumuz#5695: they're keyboard warriors
kurumuz#5695: walter white would be a sigma for example
kurumuz#5695: or john wick
ari#9020: In physics, only five sigma males are significant
alstroemeria313#1694: > Sigma Males refers to a supposed classification for men who are successful and popular, but also silent and rebellious, a type of man who likes to "play by his own rules." In most analyses, they are considered "equal" to Alphas on the hierarchy but live outside of the hierarchy by choice.
right, it's cope
kurumuz#5695: pretty much :berk:
kurumuz#5695: how to become a sigma: https://ranprieur.com/essays/dropout.html
Sphinx#2092: ```
Instead, through great carelessness, I allowed people to think my message was something like "If you can't even get good grades or hold down a job, don't worry, there is a gateway to a magical wonderland in the nearest dumpster." Or: "If you have a weak sense of who you are and you need an inspiring story to give your life meaning, how about being a heroic puritan like me, whose goal is to avoid guilt through an impossible lifestyle that has no connection to a society that is viewed as a cartoonish monolithic evil."
```
Sphinx#2092: lol
inox#5400: so it's bukowski but without the occasional hints at self-awareness?
alstroemeria313#1694: > Some of the happiest people I know have dropped out only a short distance. They still live in the city and have jobs and pay rent, but they've done something more mentally difficult -- and mentally liberating -- than moving to some isolated farm. They have become permanently content with low-status, modest-paying jobs that they don't have to think about at home or even half the time when they're at work.
this is... really classist
Kia#2550: Ow god:surprise:
genetyx8#7543: sounds like stoicism
𓅬 gabriel_syme 𓅬#3220: that sounds...a bit sad. But I'll be honest, the 'isolated farm' argument irritates me
inox#5400: what's wrong with stoicism? I like living in a giant wooden barrel personally
𓅬 gabriel_syme 𓅬#3220: ofc there are many reasons at home to feel good about your life. My daughter just shouted "I love you daddy" from the other room.
genetyx8#7543: that's cynicism
𓅬 gabriel_syme 𓅬#3220: I promise I would never block your sun
inox#5400: no that's when you have a dog fursuit
Daj#7482: Look at Miss Fancypants overhere with her giant barrel
Daj#7482: a mid sized bucket should be enough for any reasonable person
genetyx8#7543: > They have become permanently content with low-status, modest-paying jobs that they don't have to think about at home or even half the time when they're at work.
That's what I mean by "sounds like stoicism". You do not have control of what happens to you, but you have control over how you feel about it.
alstroemeria313#1694: in the essay they could get 'better' jobs but choose not to
genetyx8#7543: because they don't consider them better
𓅬 gabriel_syme 𓅬#3220: I feel we should have an EAI screening of Brazil
𓅬 gabriel_syme 𓅬#3220: that would be really nice
kurumuz#5695: well personally I love work
kurumuz#5695: :shrug:
inox#5400: that would be really nice
inox#5400: oh no my discord block activates in 1 minute
𓅬 gabriel_syme 𓅬#3220: oh no, what's that?
inox#5400: gotta block discord 9 to 5 or I don't do any work
𓅬 gabriel_syme 𓅬#3220: ahhh ok
𓅬 gabriel_syme 𓅬#3220: have a great day! see you later 🙂
inox#5400: bye!
alstroemeria313#1694: bye~
bmk#1476: damn I need a discord block
𓅬 gabriel_syme 𓅬#3220: I was going to ask if it's a software
𓅬 gabriel_syme 𓅬#3220: I'd get it
kurumuz#5695: our work is literally all on discord
kurumuz#5695: lol
bmk#1476: thats what i say to myself too
bmk#1476: and then bam i spend hours shitposting about geese
StellaAthena#3530: @bmk Do you do any work *outside* of discord?
Louis#0144: I don’t, I even talk with lab members here
Louis#0144: :berk:
bmk#1476: :guilty:
Sphinx#2092: bmk be like "...there is an *outside*?"
bmk#1476: never heard of it
Louis#0144: How’s ur vaccine
bmk#1476: i feel kinda meh
bmk#1476: i can still function, but i feel vaguely sick
FieryVictoryStar#6306: You just need to find a way to turn shitposting about geese into work.
APi#7462: Apologies for the late answer, but without a direct answer to my message I don't get a notification. Ok!! So it looks like I should apply for TRC. Do they have a high bar to entrance? I won't be training GPT-4
Louis#0144: they dont have a high bar lol
Louis#0144: tell them you want to try TPUs
Louis#0144: im using TPUs in most of my papers now
Louis#0144: so I think they like me
Louis#0144: :berk:
APi#7462: Will do, thanks!
ᴛᴇɴꜱᴏʀᴄɪꜱᴛ#0703: Hi, what channel do you recommend me to follow if I want to find out more about Dall-e?
alstroemeria313#1694: https://discord.gg/PTwJV3vR
ᴛᴇɴꜱᴏʀᴄɪꜱᴛ#0703: Thank you so much!
Hypicksell#1844: i just got access to gpt-3 today:)
alstroemeria313#1694: yay!
wyrdc#1871: I can speak to this, I hardly did anything useful the first 30 days and they still just granted me a 90-day extension
mega b#6696: True, lol, but they took all my free credits with their VMs (probably). No TPUs opened, no VMs opened, no buckets etc. and yet my credits were taken (cant see where they took it specifically because I would need to link my payment method and get billed until I see)
𓅬 gabriel_syme 𓅬#3220: did u run out of time?
𓅬 gabriel_syme 𓅬#3220: it's 90 days
greencube#6725: Sus
k!lo#3445: Howdy - I'm kil0, and im an engineering undergrad and casual researcher in ML, IoT, and RFID - found out about EleutherAI from Stella's post in AI Village about Saturday's presentation - decided to join here to mainly lurk and check out #communities to learn more about Machine Learning
k!lo#3445: Super cool to meet everyone!
EricHallahan#1051: Welcome! Glad to have you here!
CobraPL#6494: Back to this topic, I'll leave you an idea to mass-clean some files in the pile (especially for prose): <https://ermela.net/dumb-reformatter/>
```str = str.replaceAll("\r", "\n"); // unify newline style
str = str.replaceAll(/[^\S\n]+/gm, " "); // collapse multiple whitespace
str = str.replaceAll(/ +,/g, ","); // remove whitespace preceding commas
str = str.replaceAll(/[“”]/g, '"'); // replace fancy doublequotes
str = str.replaceAll(/[‘’]/g, "'"); // replace fancy singlequotes
str = str.replaceAll("…", "..."); // replace fancy ellipses
str = str.replaceAll(/ +([,!])/g, "$1"); // remove whitespace preceding a comma or bang
str = str.replaceAll(/^ +([^ ])/gm, "$1"); // remove leading whitespace
str = str.replaceAll(/([^ ]) +$/gm, "$1"); // remove trailing whitespace
str = str.replace(/^\n+/, ""); // remove initial empty lines
str = str.replaceAll(/\n+/g, "\n"); // remove other empty lines
str = str.replaceAll(/^[^a-z0-9]+$/gm, "***"); // replace fully-non-alphanumeric lines with chapter breaks```
And perhaps clip out all repetitions of more than three of a character: replaceAll(/(((.)\3{1,2})\3*)/g, '$2')
if you think that something like `hmmmmmmmmmmmmmm` or `.....................` affects quality negatively.
I don't know how important this is EAI-wise (i.e. how much you care) or whether it's specific to AI-storyteller projects. I just try to be helpful 🙂
Please note that the Pile varies in quality (punctuation, spaces, newlines, etc.). Credits for all scripts: ari#9020
ari#9020: I appreciate your enthusiasm, but there's a reason why I put the word "dumb" in there so many times 😅 That pipeline is useful for normalizing small amounts of already high-quality prose text slightly further, and into a form that works better specifically with NovelAI, but not much else
CobraPL#6494: *** is perhaps NAI specific, but I wonder if normalizing punctuation across the pile will help the AI be less confused.
CobraPL#6494: NAI won't be the only thing to confuse “” and "" - the two are different tokens for sure. Many/all projects based on EAI may have some problems because of that. Anyway, I left those scripts for EAI's guys for consideration. Maybe this idea can be at least partially applied to at least part of the pile.
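The different-tokens point is easy to verify; this assumes the GPT-2 BPE tokenizer, which GPT-Neo and GPT-J also use:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
print(tok.encode('say "hi"'))  # plain ASCII quotes
print(tok.encode('say “hi”'))  # fancy quotes produce different token ids
```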
MasterScrat#6910: I've heard about Discord threads but they don't seem to be available in this server. Is this a decision from the admins? or progressive rollout from Discord?
Kia#2550: Probably decision by the admins
Kia#2550: Because Discord hasn't yet made it so everyone can make threads
𓅬 gabriel_syme 𓅬#3220: I think discord rolled them out randomly for some servers
𓅬 gabriel_syme 𓅬#3220: Not all have it yey
𓅬 gabriel_syme 𓅬#3220: Yet
Kia#2550: Yet :v
Daj#7482: We decided not to use them at this time, they seem pretty bad
kurumuz#5695: we found them to be really useful on our dev server
Daj#7482: I think one of our main concerns was about them not being searchable iirc
Daj#7482: Searching #research is an amazing super power
kurumuz#5695: oh, that makes sense yeah.
Daj#7482: I don't have a particularly strong opinion but also don't see much upsides
kurumuz#5695: for us it was that we could create threads for issues/stuff we need to get done, and it really helped being organized
Daj#7482: Imagine being organized :berk:
kurumuz#5695: well, somewhat organized :berk:
kurumuz#5695: we're still all over the place other than that
kurumuz#5695: i mean, i dont even enforce people to work 8 hours or something lol
Daj#7482: idk we mostly do that with git issues and person to person communication and it works, but our demands are different from a company's like yours
kurumuz#5695: just get the shit done
Daj#7482: I can see the use
kurumuz#5695: ye that is true
EricHallahan#1051: How threads should have worked is to tag messages as part of a conversation in a channel, not as a separate subchannel.
StellaAthena#3530: Basically the way replies work, but with batches IMO
StellaAthena#3530: iMessage actually does it pretty well
EricHallahan#1051: Pretty much I just want to be able to filter the channel to only the messages tagged as part of that conversation.
Louis#0144: agreed
joaogui1#8461: https://cdn.discordapp.com/attachments/729741769738158194/872504779266482226/Screenshot_2021-08-04-12-42-20-276_com.android.chrome.jpg
joaogui1#8461: Interesting that for developers in general TF and Keras are still ahead of Pytorch
joaogui1#8461: Also interesting that TF is ahead of Keras
Sphinx#2092: I mean, who even uses keras if not tf users?
Louis#0144: thank god pandas is so high ranked
Louis#0144: the developer is like
Louis#0144: by far the smartest person ive ever met
Louis#0144: holy shit
kurumuz#5695: i never used pandas
Louis#0144: you should
Louis#0144: its *really* good
Louis#0144: it puts SQL to shame
Louis#0144: :berk:
Louis#0144: every time I use it the main thing I think "why tf cant SQL or literally *any* other tabular database API be like this"
alstroemeria313#1694: does it only work with stuff that fits into memory still
Louis#0144: nope!
alstroemeria313#1694: ohhh?
Louis#0144: I recently used it for a 700GB dataset
Louis#0144: well not recently
Louis#0144: a year ago
alstroemeria313#1694: oh, how does this work?
Louis#0144: it has chunking built in
Louis#0144: it just loads one chunk at a time
alstroemeria313#1694: Oh
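A minimal sketch of that chunked pattern; the file path, chunk size, and `value` column are placeholders:

```python
import pandas as pd

total = 0
# chunksize makes read_csv return an iterator of DataFrames, so a
# 700GB file never has to fit in memory at once
for chunk in pd.read_csv("huge_dataset.csv", chunksize=1_000_000):
    total += chunk["value"].sum()
print(total)
```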
kurumuz#5695: what do we think about mechanical keyboards
alstroemeria313#1694: When I had a job at Google I used to like... do data analysis on the fly for my boss by typing in pandas code into ipython
Louis#0144: I like em
Louis#0144: I use blacks
kurumuz#5695: i have a classic membrane keyboard rn
kurumuz#5695: it works
kurumuz#5695: :shrug:
alstroemeria313#1694: Like he'd ask me a question and I'd make it print out a table answering it
Louis#0144: I really like the tactile feedback
Louis#0144: it makes me noticably type faster
Louis#0144: lol
Louis#0144: like a good 10 - 20%
kurumuz#5695: i want cherry browns
Louis#0144: ew
kurumuz#5695: i think
kurumuz#5695: why lol
Louis#0144: brown feels like theres literal shit under your keys
Louis#0144: its so mushy
kurumuz#5695: o no
kurumuz#5695: ye but they're good for typing?
kurumuz#5695: i heard
Louis#0144: ye
kurumuz#5695: blues make too much noise
Louis#0144: but it doesnt feel good
kurumuz#5695: how are blacks
kurumuz#5695: are they noisy
Louis#0144: yeah a bit noisy but I dont really mind. I grew up using an old IBM keyboard
Louis#0144: so it reminds me of that
Louis#0144: i will never go back to buckling springs though
Louis#0144: *thats* loud
kurumuz#5695: well membrane seems just fine to me fo rnow
kurumuz#5695: like i have the money but dont see the need for mechanical :shrug:
Louis#0144: typically you arent typing quickly when coding. The only time I need to really type quickly is when writing emails or papers
Louis#0144: so membrane is fine for coding
kurumuz#5695: i guess yeah
kurumuz#5695: why would mechanical make me faster though
kurumuz#5695: faster at typing i mean
Louis#0144: the tactile feedback lets me know when the key has successfully registered
Louis#0144: im a touch typist
Louis#0144: I never look at the keyboard
Louis#0144: so it makes me significantly more confident in typing
Louis#0144: it means I dont need to wait to see the key on screen to know if it registered
alstroemeria313#1694: i just use my laptop keyboard
kurumuz#5695: ah that makes sense
kurumuz#5695: yeah
kurumuz#5695: it would make me faster too. i touch type as well
Louis#0144: get an ergo mouse too
Louis#0144: https://www.logitech.com/en-ch/products/mice/mx-vertical-ergonomic-mouse.html
Louis#0144: thank me later
Louis#0144: lmao en-ch
Louis#0144: ty vpn
kurumuz#5695: @Louis so this is not a gimmick?
Louis#0144: not at all
Louis#0144: moving to off topic
Louis#0144: my bad
Louis#0144: :berk:
Deleted User#0000: could someone explain to me what exactly huggingface is? At first I thought it was a collection of open-source AI/ML projects a bit like EleutherAI, but then i took a look at their website and its a bunch of APIs and collabs with google/microsoft/fb?
StellaAthena#3530: @Deleted User It's a start-up that produces a widely used interface for interacting with transformers. Their product is compute: when you want to run a model that you don't have the GPUs for you can run it on their servers
Deleted User#0000: Ah ok. Cheers. Whats with the "Model Hub"? Is it just a set of curated OS projects?
EricHallahan#1051: It is where people/organizations can host models so that they can seamlessly load it with the Transformers codebase.
Deleted User#0000: Thx. One last question. Transformers is a set of pre-trained models and supporting code to download/load/fine-tune etc right?
EricHallahan#1051: Transformers is the codebase, the model weights/parameters are not part of the Transformers package but it is designed to automatically download them.
Deleted User#0000: cheers. apologies for the total beginner question.
EricHallahan#1051: Hugging Face does a really bad job at describing what they do. This is their idea of an about page for reference. https://huggingface.co/huggingface
EricHallahan#1051: > Solving NLP, one commit at a time.
Wow, that explains *so much*.
Deleted User#0000: Yeah ik. Its why i just gave up trying to figure out the website and ask here.
Erick#6473: Y'all just got a shoutout at the Blackhat talk on using GPT-3 to generate disinformation
kurumuz#5695: disinfo? @Erick
Erick#6473: https://www.blackhat.com/us-21/briefings/schedule/#disinformation-at-scale-using-gpt--maliciously-for-information-operations-23347
Daj#7482: Cool
Daj#7482: Is the talk recorded anywhere?
Erick#6473: Blackhat usually isn't.
Erick#6473: But I dropped the 6b.eleuther.ai link in the chat for folks who were interested
Erick#6473: It's more or less what you would have expected though.
kurumuz#5695: "In the wrong hands, this tool can tear at the fabric of society and bring disinformation operations to an entirely new scale."
ah because you need machines for this :berk:
Erick#6473: I'm recapping it tomorrow for a webinar Rapid7 is doing
kurumuz#5695: not like the whole social media is a giant echo chamber
kurumuz#5695: "Can it come up with explosive news stories on its own? Can it create text that might fuel the next QAnon?"
Absolutely cringe lol
Erick#6473: "You could use these large language models to generate convincing disinformation at any scale you want so it's important to verify sources and stuff"
kurumuz#5695: Who gave this talk
Louis#0144: @Erick Hi
Erick#6473: https://cset.georgetown.edu/publication/truth-lies-and-automation/ associated paper
Erick#6473: @Louis hello I saw your Twitter DM and I miss you
Louis#0144: LMAOO
Louis#0144: Miss u too we should do a project at some point
Louis#0144: Drag Stella along too
Erick#6473: I have so many projects on my to-do list but I agree
Teemochu#8740: blue myself
ReXommendation#4947: It would be pretty foolish to use anything that was generated by a GPT as factual news.
Kia#2550: Why would you even think to use it tho:surprise:
ReXommendation#4947: Good question
ReXommendation#4947: GPT generated text would only fool someone that thinks The Onion is a reliable news site.
bmk#1476: I get all of my news from the Onion, don't judge
alstroemeria313#1694: so you know what to expect reality to descend to years later?
skyba#4957: Hi everyone, ex-NLP engineer turned entrepreneur here. Looking for people who'd like to explore commercial applications on top of GPT-J / transformers
skyba#4957: I also support startups working on NLP through angel investments. Drop me a note if interested
bmk#1476: so I can live in the future today
Rookiemann#9950: Hello everyone! I must make it clear first and foremost I am in no way a professional ML guy, but I would very much like to be one. I'm retired with plenty of time and desire to put in time and energy into it. I have been looking into ML for about a year now, I've done a few personal experiments here and there and I understand most of the basics and fundamentals of ML. I'm currently trying to set up the GPT-J model on my Google TPU account and will check out the community for hints there on how to do it, I intend to do a TON of lurking here lol. OK so having said all that, like I said I have plenty of time and heart for this, if there's anyone out there that needs free man-hour help on their ML project please let me know I'd be glad to help out and offer any resources I could provide. I'm a pretty good Python coder, all self-taught and anything I need to learn in Python I can pick up in a day, Python is so easy (C? lol no). I have a dedicated server as a resource too if one would ever be needed. I guess you could say I'm thirsty for ML knowledge and would like to apprentice and give my time. Thank you for reading (my blog post) 🙂
Daj#7482: Hello, welcome! We're not always great at clearly advertising where help is needed, so lurking is definitely the recommended strategy hah. There is also the new board.eleuther.ai with some project ideas on it, might interest you
Louis#0144: how would you guys introduce the concept of prompt engineering
Louis#0144: if you only had a paragraph to write
Louis#0144: assuming the person youre explaining it to has *super* basic NLP knowledge
sea_snell#0243: Not one paragraph but here’s how I did it for CLIP. I think the last paragraph is the most generally applicable and useful but it relies on the previous paragraphs as introduction to the idea https://cdn.discordapp.com/attachments/729741769738158194/872900767021953054/image0.png https://cdn.discordapp.com/attachments/729741769738158194/872900767391039508/image1.png
Louis#0144: @sea_snell can i see your work cited
Louis#0144: its for a paper
Louis#0144: i need to know what you used as related work/references
sea_snell#0243: Here blog post https://ml.berkeley.edu/blog/posts/clip-art/
Louis#0144: thanks!
Louis#0144: i see
Louis#0144: ok this makes sense
sea_snell#0243: Oh I can imagine one paragraph in a paper would be tricky to introduce
sea_snell#0243: Yeah idk what you would cite, I feel like the best points come from the community
sea_snell#0243: There’s some papers ig
StellaAthena#3530: @Louis is Janus’ paper in your list?
Louis#0144: oh
Louis#0144: no please send!
Louis#0144: this is for the CARP paper
Louis#0144: @sea_snell do you want to read the paper so far
Louis#0144: it has lots of editorial comments
Louis#0144: but its in a good state already imho
Louis#0144: its an eleuther paper so I dont mind sharing the overleaf w people
Louis#0144: lol
sea_snell#0243: Yeah I’ll read it !
sea_snell#0243: Send link
StellaAthena#3530: @Louis https://arxiv.org/abs/2102.07350
Louis#0144: oh!
Louis#0144: yes will cite
Louis#0144: thanks
Aric#1747: 👋 Hey folks ^-^ I'm a master student in AI at the university of Amsterdam. I'd be interested in potentially contributing.. where can I read an overview of ongoing projects and how to contribute? My interests atm are RL, NN generalization, interpretability and AI safety.
EricHallahan#1051: Welcome! I suggest you read https://www.eleuther.ai/get-involved if you haven't already (I admit it is in *dire* need of a rewrite/expansion) and also hang around (or lurk if you prefer). For a list of projects, check out our projects board at https://board.eleuther.ai/.
Aric#1747: Yeah I actually came from the get-involved page here to introduce myself 😄
EricHallahan#1051: Cool! I have it a bit hidden right now as it isn't listed in the primary navigation.
bmk#1476: ooh AI safety! what's your opinion on AI safety in general?
bmk#1476: like what do you think is the highest risk failure case
bmk#1476: and do you have any favorite proposals
Aric#1747: Oof hard question. Soo of course we need to tackle outer alignment. Re: inner alignment I am unsure how common it will turn out to be. If it turns out that almost any training procedure will produce a mesa-optimizer if given enough compute/data then this becomes an even bigger issue than outer alignment.
I think that the most likely path to AGI right now seems to be the scaling hypothesis plus some algorithmic advances, but still within the general deep learning paradigm. That means understanding deep learning models is paramount for any safety proposal that I would actually trust my life to. As to which proposal I favor... I actually don't know/feel too uninformed to say.. I don't think I would trust any of the proposals I know of xD
Aric#1747: E.g. Adversarial training sounds like a good idea in general to avoid mesa-optimization, and interpretability def. helps as well but that's not really enough to reassure me
Daj#7482: You'll fit in here well haha. Also check out #interpretability-reading-group , we have a reading group about interpretability work every Saturday!
Daj#7482: We're generally very excited about interpretability work, and have some project ideas on the board, but are always open to having members spin up new projects :)
AI_WAIFU#2844: Hello!
I'm gonna second the project board https://board.eleuther.ai/. Also if you've got something specific in mind that you would like to try and you need resources to pull off, let us know.
Deleted User#0000: *multilingual jukebox*
alexyz#3459: YES
alexyz#3459: this would be amazing
voxs#0001: so two days ago i was somehow able to run two colab cells at the same time
voxs#0001: it was crazy
StellaAthena#3530: tbh, it was probably just the interface lagging
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/873024878998085702/image0.png
sweg#8920: anyone have experience with cloud tpus
sweg#8920: trying to get an LM training and am OOMing hard
sweg#8920: cant even get a single BERT model up without running out of memory lol
sweg#8920: probably an issue in how i set up the TPU instance but am stumped
sweg#8920: for clarity this was the exact error
sweg#8920: https://cdn.discordapp.com/attachments/729741769738158194/873040195216363551/unknown.png
Louis#0144: lol everyone is just thonk
Louis#0144: :thinkies:
sweg#8920: have a feeling its cause i have to setup the vm instance that i run it on properly
sweg#8920: i think i neglected that cause it was gpu and who needs cpu omegalul
sweg#8920: but it was also memory
Louis#0144: Is this TRC
sweg#8920: yeah
Louis#0144: You should be fine
Louis#0144: Just make sure nothing else is running
Louis#0144: Did u Bork it
sweg#8920: nah dw i did everything according to instructions
sweg#8920: like no billing
sweg#8920: is on my ass
sweg#8920: you have to make a vm instance
sweg#8920: to run the tpus on
sweg#8920: it has different memory options
Louis#0144: I’m confused but Ok?
Louis#0144: lol
sweg#8920: have you trained on google cloud?
sweg#8920: if not, yeah it seems pretty confusing tbh lol
Louis#0144: Ye
sweg#8920: o
sweg#8920: lmfao what
Louis#0144: I didn’t have issues
sweg#8920: ok yk what
sweg#8920: i found multiple different tutorial pages
sweg#8920: theres easy and hard ways to do some of this stuff
sweg#8920: like i found a really simple way to make the tpu vm and all that through the command prompt
sweg#8920: but before i could only find one that showed how to do it through their ui
sweg#8920: which was really confusing
sweg#8920: also im good btw just needed more ram on the vm
Deleted User#0000: can it be added under random ideas https://github.com/EleutherAI/project-menu/issues/26
Deleted User#0000: thanks!
nev#4905: oh hello there
KentC#7374: Any particular books recommended around here?
𓅬 gabriel_syme 𓅬#3220: you can check #the-book
𓅬 gabriel_syme 𓅬#3220: if you're into that one
KentC#7374: I still haven't figured out what it is. I mean more a reference/text book for modern AI/ML.
𓅬 gabriel_syme 𓅬#3220: hmm, I doubt many here read books for modern research/practice but I'm sure there will be a couple
𓅬 gabriel_syme 𓅬#3220: not many I know of personally
𓅬 gabriel_syme 𓅬#3220: I like human compatible I guess
KentC#7374: I'm not implying they read books to learn, but perhaps they pay attention to what's published and can make a recommendation.
𓅬 gabriel_syme 𓅬#3220: and then there's the big magnum opus http://aima.cs.berkeley.edu/
𓅬 gabriel_syme 𓅬#3220: oh yeah definitely, I'm sure others here can pitch in
faraday#0862: where can I read about legal stuff around AI output?
𓅬 gabriel_syme 𓅬#3220: oh yeah I think this also came out recently (the updated version), highly recommended I guess
https://probml.github.io/pml-book/book1.html
𓅬 gabriel_syme 𓅬#3220: at least the original is a classic
faraday#0862: does AI output belong to a) the operator (button-pusher)? b) resource-owner (cluster operator)? c) algorithm deployer? d) algorithm owner? e) public (e.g. public-domain) f) no-one (not usable for any commercial purpose in this case) ?
genetyx8#7543: an EU proposal on regulating AI has recently been published. Probably addresses some of that https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206
faraday#0862: thank you, let me look into it
genetyx8#7543: I bet people here would be interested in a summary of it
KentC#7374: Thanks
nick!#2186: just realized I got access to the GPT3 beta. been fucking around with it, you can make some nice tools.
nick!#2186: Made a small generator that turns any story into 4chan greentext
Kia#2550: That's lovely
nick!#2186: Input: I shat myself over and over again at an Arbys and now the government wants to fucking murder me for tax evasion.
Output:
>be me
>be at Arbys
>shit myself
>shit myself again
>shit myself again
>mfw the government wants to fucking murder me for tax evasion.
Kia#2550: Ah...
Kia#2550: Um
nick!#2186: What would you expect? It’s 4chan!
sweg#8920: that’s actually hilarious
sweg#8920: greentextify bot would be good
Louis#0144: LMAOOOO
Louis#0144: HOLY SHIT MY SIDES
Louis#0144: can you do another
StellaAthena#3530: If you’re interested in the intersection of AI + Security, DEF CON just started and the AI Village’s opening remarks are being streamed currently: https://www.twitch.tv/aivillage
You can find the full schedule here (all times are Pacific Time): https://aivillage.org/events/2020/8/4/ai-village-def-con-28-safe-mode-w6wsl
Much of DEF CON is __free__ to attend and organized via Discord this year: https://discord.gg/defcon
MonkeyD.Luffy#4209: Yo, total novice idiot here, I heard there was a lot of progress made in the last month on the GPT-Neo project. How can a layman like me quantify "progress" with something like this? Where would I go to look for indicators. I'm mostly just a consumer waiting to get a bigger brained ai but this stuff is genuinely interesting with how abstract a lot of it seems.
alstroemeria313#1694: you can generate text from the current biggest model at https://6b.eleuther.ai
StellaAthena#3530: To plug a couple things I am particularly excited about:
- Keynote talk by Bruce Schneier, a very well known hacker and board member of the EFF and the Tor project titled “the Coming Age of AI Hackers”
- Summaries of two hacking competitions: one organized by Twitter to hack their algorithms to obtain morally bad outcomes and one by Microsoft to evade their ML security systems
- I’m giving a talk on the security implications of the work of groups like this very discord channel to replicate and release AI models.
- Two talks about adversarial machine learning: where it came from, where it's going, and why it's hard to translate results from laboratory settings to the real world.
DrYazman#2737: Is anybody in EleutherAI interested in AI and intellectual property discourse?
DrYazman#2737: It's an area I'm professionally interested in for research purposes
alstroemeria313#1694: I've sold NFTs of my AI image generation outputs and am interested in AI image generation in general, but I'm not a legal scholar or anything
StellaAthena#3530: I hope you don’t take this the wrong way… but NFTs are basically scams right? Like, they’re free to create and have no functional value. You’re effectively paying for a receipt?
alstroemeria313#1694: i think some buyers hope they will go up in value, some buyers just like the art, and some buyers want to support the artist? some mix of those?
alstroemeria313#1694: They are JSON files w/ some metadata and a link to the image, which is on IPFS usually
StellaAthena#3530: Ohhhh “is this the next Bitcoin” could easily motivate lot of the market
alstroemeria313#1694: Yeah
that_guy_adam#4064: Question: What's the repo(s) called that does the visualzation/ .imagine in Faraday Cage?? I'd like to set it up locally. Thanks in advance.
EricHallahan#1051: Check the pins in #art.
StellaAthena#3530: @that_guy_adam There's a compendium of notebooks demo'ing different methodologies here: https://discord.com/channels/729741769192767510/730510538060071043/863572299860672562
that_guy_adam#4064: thank you!
faraday#0862: I’m going to read about EU AI related laws but what’s Eleuther policy and stance on intellectual ownership of generated material? Is the “prompt owner” the owner? Is the output totally exempt from commercial use? I’m quite confused on resources I come across on Eleuther. I want to always act in good faith but exciting ideas also come to mind turning generations into products. What’s the take on this?
faraday#0862: Please enlighten me on what to expect. I think you can license the algorithm, the platform, get IP around it and then all the output of the platform is part of that license?
faraday#0862: I’m considering GPT-3 use by Microsoft as an example.
faraday#0862: Then, should this mean I should always treat such shared platforms including Eleuther, GPT-J and other bots with caution and use no output from them? Even if I prompt them to produce something?
faraday#0862: But this leaves me with: Did my “prompt” have no intellectual implication? Someone lent me their car and if I win a race with it, should the car owner take the prize? Or if I crash and injure someone, am I free to go and the car lender is at fault?
StellaAthena#3530: The EAI policy is "ask a lawyer, not a dog on the internet"
faraday#0862: good idea but I doubt any lawyer would be able to understand the implications at this point
faraday#0862: thank you for stating the policy
StellaAthena#3530: Lawyers aren't *all* freeloading wastes of space. Some of them even manage to be vaguely intelligent despite their lack of CS degrees.
bmk#1476: actually, it is a well known fact that having a cs degree is *crucial* to basic mental function
bmk#1476: in all seriousness i assume what he meant is that it's a grey area
bmk#1476: and nobody really knows where the lines are
bmk#1476: obviously even then lawyers would have a better idea of where the lines might be than random people on thei nternet
bmk#1476: so like
bmk#1476: we cant really help you
bmk#1476: we arent legal professionals, etc
faraday#0862: I don’t like grey areas. I know big companies that circumvent Google/Apple IAP policy and avoid commissions, and they keep doing it until they are warned. When the company is big they act like this in grey areas, even not-so-grey areas. But as an indie, you get banned from everywhere and even lose access to your family photos
EricHallahan#1051: > Someone lent me their car and if I win a race with it, should the car owner take the prize? Or if I crash and injure someone, am I free to go and car lender is at fault?
Anyone who is racing a car is going to have some sort of agreement to resolve this lol
Zac-HD#7996: But also for those super-expensive ones "it's not money laundering if you're paying for art", because by definition whatever you paid was the fair market price.
faraday#0862: You’re right but the car owner might just forget to discuss race situations because the car is seemingly slow to them
One#5919: somewhere i read that the person closest to the generation owns the copyright (so the prompt giver) but no one really knows yet
bmk#1476: not sure if the "fair market price" part is meant unironically but this reminds me of the same issue as with housing that yud talks about in inadequate equilibria
bmk#1476: even if you know that a piece of art is worth way less than what it sold for, you can't profit from that, because art is non fungible and nobody wants to pay a zillion dollars for *your* art
faraday#0862: that as well is complicated. if the algo generates deterministically for the prompt, Leibniz could invent the (same) calculus without knowing the first prompt was from Newton, since the output is the same
bmk#1476: NFTs really are just the distilled essence of that
bmk#1476: and the main thing that differentiates the expensive art from the non expensive one (i.e even exact forgeries are worth way less) is the Schelling point of being the real one
bmk#1476: it's worth a lot because everyone believes that everyone else would believe that one is worth a lot
bmk#1476: NFTs don't change any of this, they just eliminate more of the confounding factors like physical continuity and inexactness of replication of real art
Dromarion#3383: I've been told some people did make novels at least with the assistance of GPT, though I can't really point out any books that specifically say that AI was part of its creation.
EricHallahan#1051: https://discord.com/channels/729741769192767510/747850033994662000/871514510932979754
Dromarion#3383: That's cool, from my own experience, writing with AI is a pretty different workflow from writing by myself. There's like a back and forth dynamic, and while it isn't going to do everything, it does contribute to a large amount of the word count.
Louis#0144: I only like black and white areas
Louis#0144: Like a goose
Louis#0144: That’s what our legal system needs
Louis#0144: Geese
Dromarion#3383: *AI arms race accelerates over the ability to create fanfiction*
EricHallahan#1051: This is already happening.
Dromarion#3383: I've had some talks about it in a 40k server since Games Workshop is basically leaning on anything and everything fan created. It's pretty sad when something you work so hard on in a piece of art can easily be taken away from you. Though fan work has always been a poisoned chalice in that sense.
Dwarf#6935: anyone know if you can attach additional storage to a google cloud tpu vm? I tried using the same commands as i would on a normal vm but it didn't work.
wyrdc#1871: "some sort of agreement" I think is how the question is answered in practice. Whether it's Artbreeder or OpenAI, there usually is something explicit in the terms of service that says the output is yours, copyright and all. Personally, I made a custom StyleGAN for a client, and we have a pretty detailed agreement on who owns the outputs to prevent any copyright issues
wyrdc#1871: In general though, I think generative AI tools should be considered in terms similar to other software tools, ie Photoshop. I doubt anyone serious would argue Adobe should own the rights to media people make with their tools
wyrdc#1871: I would argue against that in the same sense that I would argue that a human writer doesn't need to cite every author that ever influenced them in any way. You're correct that it isn't settled though, no guarantee my argument would fly
wyrdc#1871: Oh I didn't know about the EU and CRISPR, need to look into that
wyrdc#1871: Or the other way around, I would _want_ my content to be used
wyrdc#1871: Valid point, there are bigger issues there, the economics of art
wyrdc#1871: I guess in an ideal world, where there wasn't the threat of homelessness due to your job being automated, it would be different, but we aren't there yet
wyrdc#1871: Honestly, that's an issue that comes up with technological progress in general, I think. Any time someone invents an easier way to do something, people who were invested in doing it the hard way will be annoyed, though I feel like typically that stage doesn't last too long. Good point to bring up.
Dromarion#3383: I'm in some creative writing and art circles and there's always some gnashing of teeth whenever I bring up AI art. I don't really get it in this case because anyone can use artbreeder or GPT, just take advantage of it and make more art faster.
wyrdc#1871: It's actually been somewhat paradoxical in my experience - the more I've used AI for creative writing and art, the more writing and art I've made on my own alongside the AI. It's not a replacement, it's an augmentation. Demonstrating how AI will be part of a standard digital art toolset in the future is an underlying theme of a lot of my own work
Dromarion#3383: It a feature of the different workflow when creating with AI. There's that back and forth, a collaboration, so not only is it shouldering a good amount of work but it's also giving you new ideas.
wyrdc#1871: Exactly, I love the dynamic personally, and I think most creators will if/when they get past their initial misgivings
James#6892: I personally think augmenting people with AI is going to be huge in next few years, especially since most AI is not good enough to completely replace, and in most cases you dont want that either. Its really much better to have a human in the loop to veto what the AI generates. I don't see that changing anytime soon.
James#6892: If AI can make a person work 2-3x faster, is much safer than replacing, and the economic benefits will still be there
nick!#2186: Sure if you give me a story
Louis#0144: "I, a humble NEET, sat at my desk all day implementing language models"
nick!#2186: ```
Input: I, a humble NEET, sat at my desk all day implementing language models
Output:
>I, a humble NEET, sat at my desk all day implementing language models
>tfw no gf
>tfw no gf
>tfw no gf
>tfw no gf
>tfw no gf
>tfw no gf
>tfw no gf
>tfw no gf
>tfw no gf
>tfw no gf
>tfw no gf
>tfw no gf
```
nick!#2186: heres a better one
nick!#2186: ```
Input: I, a humble NEET, sat at my desk all day implementing language models
Output:
>I, a humble NEET, sat at my desk all day implementing language models
>tfw no gf
>tfw no social life
>tfw no friends
>tfw no job
>tfw can't even go outside because I live in a cave and I'm too poor to afford a door
>tfw I'm too poor to afford food and I have to eat rocks instead
>tfw I have to eat rocks because I'm too poor to afford food and too lazy to fish for my own food
>tfw can't even fish
```
EricHallahan#1051: Please, post these in #the-faraday-cage-archive and not here.
James#6892: Really?
James#6892: They are in for a rude awakening 😦
nick!#2186: ai can generate some reallllly good art though
nick!#2186: well yeah r/art just sucks in general
nick!#2186: they're pretty elitist
kurumuz#5695: human ego is crazy
kurumuz#5695: lol
alexyz#3459: #the-faraday-cage isn't a good place for those things imo
nick!#2186: lol the deepdream idea seems funny
alexyz#3459: i will continue to advocate for a channel to showcase outputs of stuff
nick!#2186: yeah stuff would get drowned out in faraday pretty fast
nick!#2186: due to the bot
kurumuz#5695: #prompting then?
kurumuz#5695: its literally prompting lol
alexyz#3459: because #the-faraday-cage is now pure ".imagine anime girl in real life"
nick!#2186: thats the best part though
alexyz#3459: that works for this example, yes, but i mean in general, like if someone gets an output for some project that's interesting or GPT-J or VQGAN or anything there should be some channel for that imo
EricHallahan#1051: Prompting is for discussion of prompting techniques, not a place to dump outputs.
nick!#2186: by the way, im pretty out of touch with what eleuther is doing. what is the plan for the future models? more parameters?
EricHallahan#1051: soon™️
kindiana#1016: 🤷
EricHallahan#1051: We literally don't have a plan as far as I am aware.
cfoster0#4356: #art was that for image stuff but just grew a bunch
alexyz#3459: (the response will be "there's already too many channels" tho lol, even though there's literally a channel called #catgirl-research like seriously an output channel might be slightly more useful than a meme)
kindiana#1016: hey, that's the channel where I spend all my time :guilty:
alexyz#3459: yeah, i basically want old #art but for all outputs (maybe restrict it to Regulars? dunno lol)
nick!#2186: i try not to assume so much because i barely know anything about deep learning...im in year 1 of a CS bachelor degree
kurumuz#5695: regulars is like 20 ppl
kurumuz#5695: wouldnt be gud
alexyz#3459: But yeah i just want an output channel somewhere down there in Misc
alexyz#3459: yes, i know, but they aren't the Eleuther community
alexyz#3459: like if there wasn't an #art then you could say you could post to r/MediaSynthesis or something
Louis#0144: smerity is so nice
Louis#0144: wtf
Louis#0144: o sorry not offtopic
Louis#0144: my bad
Kia#2550: What why:surprise:
alexyz#3459: because there needs to be a channel
alexyz#3459: prob shouldn't restrict it
alexyz#3459: but still it seems like such a necessary channel
alexyz#3459: we have #art for images but no channel for text outputs
alexyz#3459: or audio, or literally anything other than images
alexyz#3459: and for some reason whenever anyone posts anything else they get directed to the hurricane of #the-faraday-cage
Kia#2550: Hm kinda :sus:
alexyz#3459: like i've seen multiple times where people have done something interesting with GPT-Neo or have some output of GPT-Neo they'd like to share and there's literally nowhere to do that, and yet there's a place to do that for images
alexyz#3459: anyway imma stop, it might be actually kinda against what this discord is for and maybe a better fit would be on of the #communities
still something to think bout tho
Louis#0144: GUAC
Louis#0144: did u see the new stop and shop
Louis#0144: It’s so nice
Louis#0144: wait this is general
Louis#0144: :berk:
faraday#0862: Newton actually did try to block acceptance of the work of Leibniz though (not openly but through the Royal Society), nasty stuff
45#2247: note: we need an emoji for james gurney
45#2247: https://cdn.discordapp.com/attachments/729741769738158194/873501366469922876/james.gif
45#2247: from my dawg at his desk minding his business https://cdn.discordapp.com/attachments/729741769738158194/873501481746178058/james.jpg
faraday#0862: drawing stuff that my AI persona definitely does not
faraday#0862: AI James Gurney is like a complete alter ego of him
45#2247: we need an ai james gurney emoji too
45#2247: he still paints trippy stuff right https://cdn.discordapp.com/attachments/729741769738158194/873502513154912286/76b579c77f46035eb74b56bc6cf002a2.png
45#2247: *tho
faraday#0862: how does the AI know the illustrators? I mean both: 1) where is the exact paper that explains the alignment between text and image? is it CLIP? 2) did you guys pull a significant portion of twitter including illustrator work?
faraday#0862: whoa cool, nice to see a dino during my morning shower
45#2247: afaik they used standard computer vision datasets (imagenet) + other more specific stuff? dunno if imagenet design/adversarial is a subset or sth else https://cdn.discordapp.com/attachments/729741769738158194/873502959680503848/unknown.png
45#2247: and then there's original clip from openAI & the finetuning folks have done here too? no idea but I saw coco, wikiart etc.
faraday#0862: I’m trying Ian McQue and it’s turning out Ian McQue. maybe it’s just coincidence
45#2247: the legend says if you prompt any ML paper reference it will output the logo of an LSTM with schmid face
45#2247: :schmidhuber:
45#2247: it learns on the internet
Basedblue#9138: https://twitter.com/i/events/1423759994047107077
faraday#0862: I’m curious about why VQGAN+CLIP leaves the output a bit abstract/unclear but thisfacedoesnotexist.com generates highly detailed, clear examples. Anyone have an idea?
cognomen#6297: because one is trained specifically on faces
faraday#0862: it feels like a noob question on my end
EricHallahan#1051: thisfacedoesnotexist uses StyleGAN
cognomen#6297: the other on imagenet
faraday#0862: but when people try with a starting image on faraday cage, similar to style transfer with only style descriptions, it’s still abstract
EricHallahan#1051: TL;DR: They use different architectures and datasets.
faraday#0862: does it mean low-level texture learning is simpler and attainable, but getting to style from concepts is hard?
alstroemeria313#1694: yes, i'm working on CLIP guided diffusion which can generate more coherent subjects but it's even slower
alstroemeria313#1694: domain specific generation is easier than general
kurumuz#5695: "just train transformers bro"
faraday#0862: so we probably have to use hierarchical learning with both low level textures loaded and concepts on top
faraday#0862: any example of it in the wild?
StellaAthena#3530: Not directly related, but CNNs have a strong textural bias too
kurumuz#5695: i dont think we should explicitly do this
faraday#0862: classifying things through texture always felt weird to me but maybe we do that many times underneath in the brain
StellaAthena#3530: TBH I doubt it
StellaAthena#3530: I think it's an artifact of how the NNs work
faraday#0862: I meant Marr-like ideas on low level and higher level visual cues computed for the same subject. probably brains provide such data all the time but you always add higher level cues plus context, world representation etc
cognomen#6297: as far as I've read (numenta research among other things) there's strong evidence that the human visual system employs a lot of recurrence
cognomen#6297: it's not just a straight pipeline from low-level to abstract features
cognomen#6297: there's much more of a convoluted modelling process going on than current NN approaches to various image tasks capture
StellaAthena#3530: Texture requires pretty high resolution though. If it was important to humans to the same extent that it is to CNNs you wouldn't be able to see anything in a fog
cfoster0#4356: We're approaching generality and photorealism with diffusion image models now, which have a certain form of recurrence. Not too worried that we're missing any architectural pieces there
Mega Glaceon#8882: yeah, my prediction is that diffusion models are as big a step as GANs, or deep learning itself
faraday#0862: I think this is the coolest place on the net
cognomen#6297: I could foresee a pipeline of the current methods possibly being the most effective bang-per-buck solution to high res images
Mega Glaceon#8882: currently i'm wondering whether diffusion models coudl be used for solving np-complete problems, for example number matching puzzles like tetravex
cognomen#6297: DALL-E mini when fully trained for good global coherence and composition -> CLIP+VQGAN for higher res and permuting the image with a more robust latent space -> diffusion for optimal quality and high resolution
faraday#0862: I’m about to go mad: how the heck does the model know Ian McQue draws machine cities in the sky?
Daj#7482: It probably just saw images from him in its training set that were tagged with his name
faraday#0862: it can’t be too many I expect. I expect the model to know van gogh da vinci etc but the rest continues to amaze me every time
Daj#7482: The magic of prompt programming is that it allows you to "pick out" very specific things in the network's "knowledge". The words "Ian McQue" probably didn't occur often by random, so when you ask for specifically that, it probably narrows down a lot what the model will "think of"
Daj#7482: But yeah it's impressive that's for sure
faraday#0862: yes but does the training include the whole web crawl? i expect not
Daj#7482: CLIP was trained on a _huge_ dataset, but we don't really know what's in it or not
faraday#0862: oh okay that makes more sense. they probably pulled images from twitter as well
faraday#0862: i should definitely read on CLIP
Daj#7482: @StellaAthena is about to give a talk about EleutherAI's work and open sourcing AI models at DEFCON! https://www.twitch.tv/aivillage
mmajewsk#9668: will there be recording?
Daj#7482: afaik yes
StellaAthena#3530: Yes, the AIV will post the video on youtube next week
StellaAthena#3530: ~~and if I'm not too embarrassed I'll share it here~~
StellaAthena#3530: You can find my slides here: https://docs.google.com/presentation/d/1hqzfm0JfTOoLm-z1kbb0924GzNHArkdflxPYog1KMds/edit?usp=sharing
kurumuz#5695: really liked the presentation!
mmajewsk#9668: What is AIV ?
EricHallahan#1051: AI Village @ DEF CON
StellaAthena#3530: The AI Village is an AI-focused subgroup of the DEF CON community. It has year-round programming, and hosts what’s effectively an AI workshop at def con every year
cfoster0#4356: Ooo very nice
mmajewsk#9668: what do you mean by year-round programming?
StellaAthena#3530: There’s an organization, the AI Village, that puts on events and programming throughout the year in addition to providing an AI workshop at DEF CON
Teemochu#8740: The last sentence is based
Teemochu#8740: (only saw the slide-deck, sorry)
Teemochu#8740: *notices 22B listed under GPT-NeoX*
Teemochu#8740: hmm :catgirl5:
StellaAthena#3530: I listed a 22B GPT-NeoX and a 13B HF model as *future models*. During the talk I said a bit about the future of open source language models and used them as examples
Teemochu#8740: well could you hurry up with the universe's death or whatever? :smugS:
Teemochu#8740: (that said, the fact that you mention Neo*X* for 22B is interesting)
EricHallahan#1051: Why would that be interesting?
bmk#1476: .. what were you expecting?
bmk#1476: neo is ded basically
kurumuz#5695: 22B NeoX wasn't public before
bmk#1476: not scalable and the codebase is pain
kurumuz#5695: so yeah, it is interesting
bmk#1476: what were you expecting 22B using?
EricHallahan#1051: How else are you supposed to test a codebase?
Untouch#9150: gpt-j I would assume
kurumuz#5695: It wasn't public that there was a 22B model at all.
kurumuz#5695: Is what I'm saying
EricHallahan#1051: There isn't one.
bmk#1476: but also don't take this as a promise that there will be one
kurumuz#5695: *In the works
kurumuz#5695: I'm not taking anything as a promise. Just 22B NeoX efforts wasn't known
bmk#1476: I'm not sure I understand
bmk#1476: it's not a secret that we're working on neox
bmk#1476: it's also not a secret that we'd want to, well, test it
kurumuz#5695: sure
cfoster0#4356: there has been very little discussion of plans for NeoX so this is new information to a certain degree
StellaAthena#3530: I think @kurumuz is under the impression that we’ve never mentioned working on a 22B model specifically.
kurumuz#5695: It wasn't publicly mentioned, yeah I was under that impression
StellaAthena#3530: (I don’t think that’s true, but it might be?)
cfoster0#4356: Pretty sure that's true
EricHallahan#1051: We aren't committed to a 22B model, so it isn't something we would want to propagate and become an unchecked rumor.
kurumuz#5695: well what does that tell about this specific fact though?
kurumuz#5695: like no one would go from "oh, they have neox and they would want to test it, so they will train a 22b model!"
StellaAthena#3530: There’s a bunch of times people have mentioned “approximately 20B” as a reasonable next step.
bmk#1476: what I'm trying to say is I don't want anyone complaining if we end up not doing a 22B NeoX
kurumuz#5695: that is fair
bmk#1476: treat this as just an idea we're thinking of
EricHallahan#1051: We are eyeing it closely, but only time will tell if we do it or not.
bmk#1476: I'm worried that people are gonna be all like ~~HL3~~ 22B confirmed!!1 and then get mad at us if we do anything different
EricHallahan#1051: > **When do you plan to have more models available?**
> As a collective of volunteer researchers and engineers who contribute in our free time, we are unable to commit to either a timeline or a roadmap for future models.
StellaAthena#3530: I think people who act like that aren’t people you should spend much time worrying about.
EricHallahan#1051: ~~Portal 3 wen~~
Louis#0144: You know I was thinking about it today, the fact that people prompt GPT3 with super nice and it seems way kinder. Maybe scaling alone will solve normativity (disregarding continual learning for a sec)
Louis#0144: Not alignment of course, solely normativity
kurumuz#5695: yeah that is obvious for me
Louis#0144: I don’t think it’s obvious
EricHallahan#1051: *Just ask the model to be nice.*
bmk#1476: honestly now that I'm on the other side of the table I feel bad for all the folks at valve
bmk#1476: like I really want P3 but also I know it must really suck to be on the other side of that
kurumuz#5695: yeah, i felt the same things. it's pretty bad
EricHallahan#1051: Same.
Louis#0144: It’s not obvious for the reason that normativity is massively subjective 🤷♂️
kurumuz#5695: doesn't matter.
EricHallahan#1051: To be clear, I am meming. The mod community for Portal 2 is still incredibly strong and there is plenty of creativity there.
kurumuz#5695: i only played portal 1
Louis#0144: Wasn’t there alignment work basically on the kinds of personalities GPT3 takes on given prompts
Louis#0144: Gwern was discussing this a while ago
kurumuz#5695: was too easy/short but felt good
Louis#0144: (Citation needed)
EricHallahan#1051: That is the beauty of it.
Teemochu#8740: 22B has been hinted, NeoX has been mentioned, but never that 22B would use NeoX instead of J
bmk#1476: portal 2 is way better
bmk#1476: hot take but portal 1 kinda sucked
kurumuz#5695: i mean overall it was fun but didnt feel the challenge at all
kurumuz#5695: i want my puzzle games to be kinda hard
Untouch#9150: there are some good mods and challenge maps
bmk#1476: portal 1 doesn't really have much of a story
bmk#1476: portal 2 has masterful storytelling
cfoster0#4356: Yeah, this is about what I recall as well
EricHallahan#1051: Because it still isn't clear which way the wind will blow.
Louis#0144: Personally I think there should always be a model size that fits into the largest consumer card (so currently the 3090)
Louis#0144: But that’s just my 2c
Louis#0144: I think it looks really good for the tinkerer OSS perspective though
EricHallahan#1051: ~~That because you are biased and own a 3090.~~
alstroemeria313#1694: You can get an A6000 if you really want
Louis#0144: Yes because I wanted to tinkerer locally
alstroemeria313#1694: What about 2x 3090
Louis#0144: Lol
Louis#0144: That would be able to fit 13b right?
Louis#0144: Basically weights in billions x2 is the amount of VRAM required for inference in fp16 right?
Louis#0144: If I recall…?
alstroemeria313#1694: Plus memory for activations
Louis#0144: Ah yeah
Louis#0144: True
Louis#0144: Shoot
EricHallahan#1051: That is how the ballpark math works.
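As a rough worked example of that ballpark math (a sketch only; real usage adds activations, attention caches, and framework overhead on top of the weights):
```
def weight_vram_gib(params_billion, bytes_per_param=2):
    # fp16/bf16 weights only: 2 bytes per parameter.
    return params_billion * 1e9 * bytes_per_param / 1024**3

print(weight_vram_gib(12))  # ~22.4 GiB: a 12B model's weights just fit a 24 GB 3090
print(weight_vram_gib(13))  # ~24.2 GiB: 13B weights alone already overflow it
```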
Louis#0144: So 12b might make more sense than 13b?
EricHallahan#1051: There is a much more accurate equation kicking around here somewhere.
EricHallahan#1051: Plus overhead.
Teemochu#8740: Swapping out to GPU a la that one low-memory notebook though
Louis#0144: Can’t you put activations on the cpu
Louis#0144: Isn’t that a thing
EricHallahan#1051: *ZeRO-Infinity intensifies*
Louis#0144: Zero3 has inference APIs right?
Teemochu#8740: I don't want to sacrifice SSDs to the pony god
Louis#0144: For inference I can’t imagine it’s that bad lol
Louis#0144: Especially with 3D nand
Louis#0144: Is it 3D nand that has good write tolerance?
Louis#0144: Eric was talking about this when zero infinity came out
EricHallahan#1051: 3D XPoint
Louis#0144: Yeah
EricHallahan#1051: Say it with me: *three dee cross point*
Louis#0144: You’d only need what like 32gb of memory for a 12b model? (Smallest m.2 I think you can get is 32gb)
Louis#0144: It would be soooooo slow for inference
Louis#0144: Well maybe not that bad
Louis#0144: PCIE4.0 is p nice
EricHallahan#1051: Optane DIMMs would probably be the best option.
Louis#0144: Yeah
Louis#0144: Hm
Louis#0144: Performance wouldn’t be thaaaaaat bad
Louis#0144: Like it would be slow
Louis#0144: But not eye watering slow
kindiana#1016: just use dram lol
EricHallahan#1051: Though by definition you have reduced DRAM capacity.
kindiana#1016: optane dimms are not that much cheaper
IKEA#9631: ...ramdisk?
EricHallahan#1051: They have the advantage of being persistent.
Louis#0144: What kind of performance would you be looking at anyway? Like maybe just a handful of tokens a second?
EricHallahan#1051: Though that isn't too useful.
IKEA#9631: why are you quoting yourself
EricHallahan#1051: Because it makes it obvious that I am not replying to this and instead continuing my thought.
Louis#0144: Honestly
Louis#0144: Once we have the neox models
Louis#0144: We should run some performance benchmarks on zero infinity and stuff
Louis#0144: Since I think a lot of people will be using it that way
EricHallahan#1051: This is how I wish threads worked.
EricHallahan#1051: But they don't :blobsad:
kurumuz#5695: which models
EricHallahan#1051: NovelAI users, #art ists do
alstroemeria313#1694: I have a bunch of StyleGAN+CLIP NFTs?
alstroemeria313#1694: And some VQGAN+CLIP.
kurumuz#5695: man NFTs are going like pancakes nowadays
kurumuz#5695: for some reason
kurumuz#5695: lol
kurumuz#5695: i left some in sale(forgotten for months)
kurumuz#5695: sold like 4 today
alstroemeria313#1694: it was like $10 or so
kurumuz#5695: i sell for 80$
kurumuz#5695: or smth
kurumuz#5695: ai generated catgirl waifus
kurumuz#5695: idk not bad
alstroemeria313#1694: You seriously get buyers at $80?
kurumuz#5695: catgurl https://cdn.discordapp.com/attachments/729741769738158194/873683340778164234/bwPGAxl1e8Jrb-u1-t4VJnhGQ7GoStmHU2Dy86pqyBTNHba8IkYKfaobGpRLIMdYNOaxHclK105VZwVAQxDhnhPIjWKyhB9jScy2.png
kurumuz#5695: yep
kurumuz#5695: i got for more
kurumuz#5695: yep
EricHallahan#1051: #off-topic ?
kurumuz#5695: o yea tru
Louis#0144: Goose girl when
Louis#0144: O
DE#6340: nice summary
EricHallahan#1051: Does the `torch.multinomial` binary16 sampling bug still exist in PyTorch 1.9.0?
kurumuz#5695: the issue is still open
kurumuz#5695: and yes
EricHallahan#1051: Have a link?
EricHallahan#1051: I want to (independently) verify that is the case.
kurumuz#5695: hmm
kurumuz#5695: can't find the specific github issue i was talking about
EricHallahan#1051: Same, that is why I am asking.
kurumuz#5695: <https://github.com/pytorch/pytorch/issues/46702>
kurumuz#5695: <https://github.com/pytorch/pytorch/issues/19900>
kurumuz#5695: but yea they're closed. you might want to try it out yourself with GPT-J
kurumuz#5695: as you have access to the private repo, right?
EricHallahan#1051: I do.
Sid#2121: Don't use multinomial, it's slow af too
alstroemeria313#1694: Gumbel-Max instead?
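(For reference, the Gumbel-Max trick samples a categorical by adding Gumbel noise to the logits and taking the argmax; a minimal sketch, with `logits` standing in for one step of LM output:)
```
import torch

logits = torch.randn(50257)  # stand-in for unnormalized log-probs
gumbel = -torch.log(-torch.log(torch.rand_like(logits)))  # Gumbel(0, 1) noise
sample = torch.argmax(logits + gumbel)  # distributed as softmax(logits)
```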
EricHallahan#1051: That isn't an option, this is for Transformers.
EricHallahan#1051: I can't imagine them granting me the ability to change a fundamental part of the sampling code.
Sid#2121: For sampling (i.e only ever selecting the one scalar) you can just write what multinomial is doing yourself and its much faster
Sid#2121: I.e take the cumsum, pick a random uniform number, then take the first number in the cumsum that's above the random number
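A minimal sketch of that inverse-CDF approach (untested here, and it assumes `probs` is already normalized):
```
import torch

def sample(probs):
    # Inverse-CDF sampling: cumsum, draw a uniform number, take the
    # index of the first cumulative value that exceeds it.
    cdf = probs.cumsum(dim=-1)
    u = torch.rand(probs.shape[:-1], device=probs.device).unsqueeze(-1)
    # Counting the entries below u gives the index of the first entry above it.
    return (cdf < u).sum(dim=-1).clamp(max=probs.shape[-1] - 1)
```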
alstroemeria313#1694: huh
kurumuz#5695: god
kurumuz#5695: cumsum
kurumuz#5695: i cant stop laughing
Sid#2121: I have no idea what's happening in multinomial that makes that method faster lol
Sid#2121: But it is
kurumuz#5695: something is wrong with multinomial lol
Sid#2121: Write it in triton and its probably even better :ultragoose:
kurumuz#5695: write everything in triton
nostalgiahurts#3408: did the sp3 channel disappear
EricHallahan#1051: Yes, we decided to archive it for now as there are no active projects.
nostalgiahurts#3408: oh okay, good to know
EricHallahan#1051: Don't worry, you are not going insane.
nostalgiahurts#3408: haha thanks, I was questioning myself for a bit there
Teemochu#8740: I seem to recall a pile channel too
Teemochu#8740: which may have been where my first real post was (right now my first post I can see is the pones+GPU)
EricHallahan#1051: That was archived a long time ago.
MasterScrat#6910: What happens to the content of archived channels? Is it still searchable somehow?
StellaAthena#3530: @MasterScrat it’s not deleted, but it’s not searchable without elevated permissions.
MasterScrat#6910: Damn ok that’s kinda sad - why not have an "archived" group of channels put in read only mode? It’s crazy how much knowledge you find in random old conversations using Discord search in this server, it’s sad to see chunks of it disappear
alexyz#3459: gud idea
alexyz#3459: please do this l5s
EricHallahan#1051: I'm sympathetic but I am not the one to make that decision.
Dwarf#6935: I have an "archived" channel group on my server and even though it's just my friends, it's nice to look back at our old shitposting channels.
Dwarf#6935: it seems like an even better idea for a server like this where there might be useful info lost to time if the channel gets deleted
Dwarf#6935: well, lost to those who haven't ascended to l5 yet :berk:
alstroemeria313#1694: So what kind of combinations of models work for GANs?
alstroemeria313#1694: Like how linear combinations of different checkpoints work (where the weights sum to 1) and EMA works
alstroemeria313#1694: Do things like drawing an integer for each parameter and taking that parameter from checkpoint n work?
alstroemeria313#1694: I have a bunch of GAN training checkpoints and want to merge them together in some way that best promotes quality and diversity
𓅬 gabriel_syme 𓅬#3220: maybe use Quality Diversity? 🙂
𓅬 gabriel_syme 𓅬#3220: how costly is to sample from them?
alstroemeria313#1694: they're StyleGANs
𓅬 gabriel_syme 𓅬#3220: you could do some latent variable evolution thing with QD perhaps and have different ways of searching in there, placing results on a MAP of some behavioral dimension that captures diversity
𓅬 gabriel_syme 𓅬#3220: and then quality can be smth like FID?
𓅬 gabriel_syme 𓅬#3220: so you maximize coverage + FID. That's the standard QD, MAP-Elites approach I guess
alstroemeria313#1694: You mean with multiple checkpoints or within a single combined checkpoint?
𓅬 gabriel_syme 𓅬#3220: in your case with multiple
alstroemeria313#1694: Oh
𓅬 gabriel_syme 𓅬#3220: normally they would do with one I believe
alstroemeria313#1694: Yeah I want one
𓅬 gabriel_syme 𓅬#3220: and evolve kind of like CLIP
𓅬 gabriel_syme 𓅬#3220: oh okay, then use one
alstroemeria313#1694: One combined one
𓅬 gabriel_syme 𓅬#3220: ah yeah that's what I mean
𓅬 gabriel_syme 𓅬#3220: multiple as 'multiple different combinations into one'
alstroemeria313#1694: I was thinking about picking random linear combinations of the weights
alstroemeria313#1694: And brute-forcing FID
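A minimal sketch of that brute-force search; `checkpoint_paths`, `sample_images`, and `compute_fid` are hypothetical placeholders, not real APIs:
```
import copy
import torch

def merge(state_dicts, weights):
    # Convex combination of parameters across checkpoints (weights sum to 1).
    merged = copy.deepcopy(state_dicts[0])
    for key in merged:
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged

state_dicts = [torch.load(p, map_location="cpu") for p in checkpoint_paths]  # hypothetical paths
best_fid, best_w = float("inf"), None
for _ in range(64):
    w = torch.rand(len(state_dicts))
    w /= w.sum()  # random point on the simplex
    fid = compute_fid(sample_images(merge(state_dicts, w.tolist())))  # hypothetical helpers
    if fid < best_fid:
        best_fid, best_w = fid, w
```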
𓅬 gabriel_syme 𓅬#3220: yep, that's what I have in mind too only inside a MAP-ELites loop
alstroemeria313#1694: ahh
𓅬 gabriel_syme 𓅬#3220: so a better way to search kind of
alstroemeria313#1694: FID already takes diversity into account actually
𓅬 gabriel_syme 𓅬#3220: the cool thing about it is that your dimensions can vary
𓅬 gabriel_syme 𓅬#3220: and if you have good intuition you can pick cool dimensions
alstroemeria313#1694: hm
𓅬 gabriel_syme 𓅬#3220: btw this was kind of done here (without the cominations I guess) https://github.com/icaros-usc/dqd
alstroemeria313#1694: ahh
𓅬 gabriel_syme 𓅬#3220: nvm I'm really not sure this exact thing was done but a lot of the machinery is there
alstroemeria313#1694: Yeah I can't differentiate through FID easily can I
alstroemeria313#1694: Not with 10k samples anyway
𓅬 gabriel_syme 𓅬#3220: I wonder if the differentiable there is a term that confuses, it does to me
𓅬 gabriel_syme 𓅬#3220: like you don't need anything to be differentiable in that approach, you can just search across a latent space
𓅬 gabriel_syme 𓅬#3220: you could literally just have a heuristic that combines them in some way that changes randomly and then evaluate the outputs smh
𓅬 gabriel_syme 𓅬#3220: not sure what the dimensions would be here, you have much better intuition
𓅬 gabriel_syme 𓅬#3220: like I can see quality as FID, but what do we search across (for coverage)?
𓅬 gabriel_syme 𓅬#3220: can we get a score for similarity to prompt? or maybe classes from clip?
𓅬 gabriel_syme 𓅬#3220: super reductionist example heh, but if this was the MAP, what can BD1/BD2 be? https://cdn.discordapp.com/attachments/729741769738158194/873836373512179723/unknown.png
𓅬 gabriel_syme 𓅬#3220: (at each cell you'd maximize FID)
𓅬 gabriel_syme 𓅬#3220: One idea would be some sort of interpolation between semantic concepts like BD1 being cat -> dog and BD2 cat -> tiger?
Not the best idea to narrow these down so much but at least they are not aligned to the quality objective at all
𓅬 gabriel_syme 𓅬#3220: damn missed a chance for wild goose chase there
nanowell#3224: Can I fine tune GPT-J for code generation on github_repos dataset using colab pro ?
Louis#0144: No
Louis#0144: v2-8 can’t be used for training
Louis#0144: And you can’t finetune GPT J on a single v100 without tricks
nanowell#3224: Thank you
Louis#0144: Use TRC if you want to finetune GPT J
Louis#0144: Just tell them you want to try TPUs
Louis#0144: You can get a v3-8
nanowell#3224: applied for it. thank you for help
rolandgvc#9805: Did anyone benchmark A100 pods against v3 pods for GPT training? I know a v3-8 corresponds to 4 V100, but how would that translate to A100?
nanowell#3224: Got invited!
Matias Carrizo#7814: !goose
Isaac McHorse#2007: https://cdn.discordapp.com/attachments/729741769738158194/874070316756897863/goose.jpg
EricHallahan#1051: Please keep gooseposting to #off-topic and #the-faraday-cage-archive.
Matias Carrizo#7814: Oh sorry I didn't knew that 😦
alexyz#3459: be one with the geese, embrace goose
attractfunding#6520: https://www.youtube.com/watch?v=pMVN0HL3PHA I prompted Eleuther.ai with the durants, "Religion seldom survives philosophy..."
EricHallahan#1051: Please read the #rules: We ask that you please don't advertise or spam.
attractfunding#6520: My bad, I saw the catgirl and goose and figured there were more degrees of freedom in this chat. Won't happen again.
Deleted User#0000: Just realized that I've had gpt-3 access since July 13th :/
Deleted User#0000: What the hell Greg
Deleted User#0000: Was just looking through OpenAI related emails and saw this:
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/874095090300252200/unknown.png
Kia#2550: Heh that's nice
Deleted User#0000: What do I do?
Deleted User#0000: Anybody want me to try a prompt for them or something?
One#5919: https://cdn.discordapp.com/attachments/729741769738158194/874095703113220136/the_best_way_to.txt
One#5919: i'd be interested in how cogent the advice it gives will be compared to GPT-J on https://bellard.org/textsynth/
One#5919: that gives me better results than the AIDungeon GPT-3
Kia#2550: Try there Chatbot Part
Kia#2550: Honestly The only thing I Think it's best part of gpt-3
Deleted User#0000: @One you want me to try this: ```The best way to make someone who doesn't like you start to like you is:```?
Deleted User#0000: Ok
One#5919: that's a prompt you can use to get advice on anything
Kia#2550: Share the results in #prompting
One#5919: you can add what you want advice on or let it decide
ilovescience#3282: what are the relative advantages and disadvantages of gpt-3 vs gpt-j?
EricHallahan#1051: You can actually prod GPT-J.
bmk#1476: advantages of gpt3: way chonkier
bmk#1476: advantages of gptj: freely available
cfoster0#4356: prod == productionalize or prod == poke and manipulate
EricHallahan#1051: Oh, I meant latter.
EricHallahan#1051: Good catch, didn't realize I left it ambiguous.
kindiana#1016: both works lol
ilovescience#3282: but will this provide better outputs or no?
kindiana#1016: try it lol
kindiana#1016: but depends on the prompt
kindiana#1016: e.g. gptj is better at code
bmk#1476: chonkier=better
bmk#1476: usually
ilovescience#3282: not necessarily since gpt-j was trained on more variety, right? like code and scientific articles, etc. so it's better at those things, right?
kindiana#1016: again, just try lol
kindiana#1016: you can only theorycraft so much
bmk#1476: well, yeah, but also that's easily fixable for gpt3
bmk#1476: just fine tune on your domain of interest
ilovescience#3282: that's true...
rolandgvc#9805: Noob question: what is the difference in VRAM and RAM when training language models? Throughput?
AI_WAIFU#2844: Bandwidth
Louis#0144: And latency
Pepe Le Spooder#0420: Yeah I'd imagine VRAM would be primo tier for acting as the `synapses` of a `neural network`
Pepe Le Spooder#0420: I'd imagine you would also want Ecc if possible?
Pepe Le Spooder#0420: I would imagine bitflips would be quite significant in a system where Its understanding is quite literally information
alstroemeria313#1694: Nvidia datacenter cards all have ECC
alstroemeria313#1694: Also workstation stuff like the A6000
Pepe Le Spooder#0420: The real thing about AIs is that they have no refractory period for `neuron` firing
alstroemeria313#1694: It's only consumer cards that don't
Pepe Le Spooder#0420: what really interests me
Pepe Le Spooder#0420: because that refractory period could completely alter an individual's perception of time
alstroemeria313#1694: IDK how much you actually get bit flips in real life
alstroemeria313#1694: But people wanted it and they're willing to pay extra for it
Pepe Le Spooder#0420: Well I know of at least one confirmed case where it affected a speedrun
alstroemeria313#1694: Ah
Pepe Le Spooder#0420: https://www.thegamer.com/how-ionizing-particle-outer-space-helped-super-mario-64-speedrunner-save-time/
Pepe Le Spooder#0420: The bitflip basically doubled his characters height jump for an instant
alstroemeria313#1694: Yeah, ANNs have no explicit dependence on time built into their computations
EricHallahan#1051: Even if they didn't it really wouldn't matter.
Pepe Le Spooder#0420: well yeah but thats atleast for now
Pepe Le Spooder#0420: Who knows when an Ai is exposed to physical stimuli after being told what a human second is
EricHallahan#1051: You are actively changing the weights anyway.
Pepe Le Spooder#0420: True, but it can cause some unexpected problems in computation, which is why supercomputers that do node computing use shielding
alstroemeria313#1694: What if the bit flip happened to change a normal float into a float with a really high exponent
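(That failure mode is easy to demonstrate; a quick sketch flipping exponent bits of a binary32 value:)
```
import struct

def flip_bit(x, bit):
    # Reinterpret the float as its 32-bit pattern, flip one bit, reinterpret back.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    (out,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return out

print(flip_bit(1.0, 23))  # lowest exponent bit: 1.0 -> 0.5
print(flip_bit(1.0, 30))  # top exponent bit:    1.0 -> inf
```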
EricHallahan#1051: Redundancy in the form of checkpoints.
Teemochu#8740: if a bitflip caused a WR that would be a mess to moderate
Pepe Le Spooder#0420: At least back in the day, coding with BASIC, I was always exposed to the idea of my code being completely ruined partway through by cosmic rays
Teemochu#8740: (though I'd guess the final ruling would be "this WR can't be replicated within category rules so it doesn't stand")
Pepe Le Spooder#0420: but that's more because it was running straight to machine code with an interpreter, basically
Pepe Le Spooder#0420: I was just wondering if the same concept applied in Ai neural networks
Pepe Le Spooder#0420: Id imagine it would be quite confusing If it did go through as an anomalous ``model``
Pepe Le Spooder#0420: Gpt-3 and gpt-j both fascinate me This kind of Ai would have never even crossed my mind when I began working with code
Pepe Le Spooder#0420: The amount of intricacy in its responses, using fake factual evidence and even sources that lead nowhere
Pepe Le Spooder#0420: Was doing a test and basically exposed a group of friends to an argument generated on why cats are descended from monkeys. It gave a pretty good response that was damn near impressive
Pepe Le Spooder#0420: Made up a science news journal that doesn't exist anywhere, picked a quite credible writer, said that he wrote it, and provided a bunch of stuff going all the way back to the Paleolithic age on why cats were actually descended from monkeys and why sabretooths never existed
gollark#3909: According to legend, there's a GPT-Neo model with 350M parameters. Does anyone know where it is?
gollark#3909: It seems to not be on HuggingFace for whatever reason.
Pepe Le Spooder#0420: I really believe That Gpt-3 could be used as an amazing Rpg dungeon master
gollark#3909: There are a bunch of tools for that.
Pepe Le Spooder#0420: Oh i know but i mean Like triple A game with a dungeon master based on a gpt-3 gpt-j model
Pepe Le Spooder#0420: imagine like Divinity: Original Sin 2 with GPT controlling the story
Pepe Le Spooder#0420: Infinite replayability
gollark#3909: I think people would complain about needing 24GB of VRAM.
Pepe Le Spooder#0420: 😂 100% but im not talking anytime soon
Pepe Le Spooder#0420: Would be an interesting way to build an ai though
Pepe Le Spooder#0420: Host a ai for doing all the Storys
Pepe Le Spooder#0420: And you basically Take the users input as part of the agreement
Pepe Le Spooder#0420: I tried out Ai dungeon and it really had that feel to it
Pepe Le Spooder#0420: Felt like playing dungeons and dragons in a basement again
Louis#0144: NovelAI has an api you could use as a game dev
Pepe Le Spooder#0420: Sadly just a small time coder not even really that good, I just like to be creative
Pepe Le Spooder#0420: I'm actually waiting to try sourceai
Daj#7482: Just wait a few more years and you'll be able to write a description of a game you want and AI will spit out an entire AAA complete experience just for you
Daj#7482: (wait a few more years, and the AI will already know what you want without you saying it)
Pepe Le Spooder#0420: this seems so interesting to me
Pepe Le Spooder#0420: https://sourceai.dev/
EricHallahan#1051: The legend is true. The true story is that it was rushed and we never properly tested it, so we took it down. We probably should put it back up at some point though, as we have reports that the small model kicks GPT-2 small out of the water. It really isn't too important to us though, because the GPT-Neo models are not suitable for scaling laws anyway.
Daj#7482: We really need to do a proper scaling laws suite of NeoX models if we can someday
Teemochu#8740: (wait a few more years and you will be a paperclip)
gollark#3909: I mostly just wanted a slightly larger alternative to the 125M one I'm finetuning for a thing (my server has an especially underpowered GPU).
Pepe Le Spooder#0420: Did you see sammie? got taken down july 28th
gollark#3909: I have not.
Pepe Le Spooder#0420: gpt-3/ openai forced jasons hand
gollark#3909: Oh, the "Project December" thing?
Pepe Le Spooder#0420: yeah
Pepe Le Spooder#0420: hosted on open ai
Daj#7482: For the record, the Project December "personality matrix" thing is pure performance art ~~slash scam~~
Daj#7482: But yeah it's unfortunate that OAI decided to shut it down
Pepe Le Spooder#0420: Yeah it was just sad that all his work went to waste
Pepe Le Spooder#0420: Very complex matrix
alstroemeria313#1694: It's just prompts and them not wanting to try and work around the context window isn't it?
EricHallahan#1051: This is the reason people tend to want it.
Daj#7482: OAI wanted him to monitor for "sensitive content" and he refused
alstroemeria313#1694: Oh
alstroemeria313#1694: Yeah
Pepe Le Spooder#0420: Yeah
Pepe Le Spooder#0420: aidungeon hit the same thing
Pepe Le Spooder#0420: they have to monitor all their stuff now
Daj#7482: As said, there is no such thing as a "personality matrix", he just made that up
Pepe Le Spooder#0420: well yeah
Daj#7482: Which is pretty damn misleading to a lay audience
Pepe Le Spooder#0420: model sorry
Pepe Le Spooder#0420: it felt real but it was well programmed prompts
Daj#7482: I find Project December's marketing very misleading
Daj#7482: Very cool
Daj#7482: But misleading
EricHallahan#1051: There is no such thing as an AI module either. :berk:
Pepe Le Spooder#0420: Yeah no
Pepe Le Spooder#0420: Kek
alstroemeria313#1694: Is that NovelAI's prefix tuning
EricHallahan#1051: Yep
Pepe Le Spooder#0420: Lemme just boot up the other module real quick
alstroemeria313#1694: At least it's not just a text prompt lol
Daj#7482: I can forgive the NAI thing
Daj#7482: It's not actively misleading
EricHallahan#1051: I think it is actually a good way of describing it.
Pepe Le Spooder#0420: I still see that huge community around philospher ai also
Pepe Le Spooder#0420: Was cool while it was free but Not gonna buy points what not for that
Pepe Le Spooder#0420: Only one I ever paid for was AI Dungeon tbh, I like immersing myself in the stories even though it'll switch your character every like 4k words
Pepe Le Spooder#0420: canceled my subscription last month because they butchered the system
Pepe Le Spooder#0420: Content blocks and generally filtering and monitoring has confused it
alstroemeria313#1694: Hey, can you do autoregressive with Perceiver?
alstroemeria313#1694: Or any kind of good sampling
Daj#7482: iirc no, or at least they don't explain how in the paper
alstroemeria313#1694: slash generation
Daj#7482: same for Perceiver IO
Daj#7482: which is why I lost interest lol
alstroemeria313#1694: ...Can you use Perceiver IO to map a large amount of text to a soft prompt for an AR model
Daj#7482: Hmmm
Daj#7482: Now that's a neat idea
Daj#7482: I'm not sure, I only skimmed the paper until I figured out it can't do generation lol
alstroemeria313#1694: They can generate images, so
alstroemeria313#1694: (They trained a multimodal autoencoder)
alstroemeria313#1694: Not sampled token by token, in one step I thought.
Daj#7482: Yeah, it can do autoencoding
EricHallahan#1051: Would be nice.
CreativeBuilds#0001: @Daj dont mind me, just a post to convey my gratitude of how much I love you and the eleutherai team. :Prayge:
Pepe Le Spooder#0420: How effective is gpt-j at dealing with midi and audio files?
Pepe Le Spooder#0420: Was wondering about generating midi cue sheets something like that
bmk#1476: you can probably give it a shot
bmk#1476: gwern collected a bunch of midi stuff for tuning gpt2
bmk#1476: you can train 6B on that and see if it works I guess
Pepe Le Spooder#0420: I'm still amassing what I need to host my own
Pepe Le Spooder#0420: In theory, could you cache RAM and redirect it as `vram` to the OS and use that as a stepping stone til I get an actual Tesla or something
bmk#1476: see zero-infinity
bmk#1476: it sucks
Pepe Le Spooder#0420: Cause i mean if that was possible I have like sticks on sticks
gollark#3909: I think you'd have unusably high latency/low memory bandwidth.
Pepe Le Spooder#0420: i have an optiplex server with about 18 lane capacity possible to run in raid
Pepe Le Spooder#0420: I'm imagining if I load up 128 GB and RAID it to 64 it might just be usable
gollark#3909: What? RAID is for disks, not RAM.
bmk#1476: 18 lanes of what?
bmk#1476: PCIe?
gollark#3909: SSDs are definitely too slow and also you'll kill them with constant writes.
Pepe Le Spooder#0420: nah I'm talking about double assigning ram
Pepe Le Spooder#0420: it pulls it from 2 different chips at once in segments
gollark#3909: ... dual channel?
Pepe Le Spooder#0420: well yes but in raid
bmk#1476: i don't think you can do that
Pepe Le Spooder#0420: I have definitely done it
gollark#3909: Your CPUs have a fixed amount of memory controllers onboard. You can do multiple DIMMs per channel, but this is not faster.
Pepe Le Spooder#0420: yes im using a optiplex server with a 8380h
Pepe Le Spooder#0420: just dont have any Large enough graphics cards yet
gollark#3909: A *Xeon Platinum* 8380H? Wow.
Pepe Le Spooder#0420: I have about 4 quadros for my Rendering work but nothing with enough vram
gollark#3909: Anyway, that has 6 memory channels.
Pepe Le Spooder#0420: Well yes but qpi channels
gollark#3909: Are entirely unrelated.
Pepe Le Spooder#0420: Increased bandwidth interconnect between bridges?
Pepe Le Spooder#0420: or sorry 8354h
Pepe Le Spooder#0420: not 8380 my one at work has that
Pepe Le Spooder#0420: same channels what not but for posterity I was wrong on the exact model
Pepe Le Spooder#0420: I remember Doing A test running ram in not dual channel but a Dual parity mode
Pepe Le Spooder#0420: Was using ram as storage and needed to access it as fast as possible wasnt fully feasible though
Pepe Le Spooder#0420: Was using some source from github cant remember the Name of it right now but i swear its on there
IKEA#9631: what the hell do you need a $3500 server cpu for
Pepe Le Spooder#0420: Autocad renders , Flow simulations stuff like that
Pepe Le Spooder#0420: Mine is a refurbed cpu
Pepe Le Spooder#0420: so more like 2k but still
gollark#3909: My current server is about 10 years old and has the highly advanced Xeon E3-1240.
Pepe Le Spooder#0420: I have a extra optiplex 3400 sitting in the other room
Pepe Le Spooder#0420: Was my old one
Pepe Le Spooder#0420: like 4 quadro 6000s
bmk#1476: I don't own a server cause I have all the compute I need through eleuther
Pepe Le Spooder#0420: I wish its cheaper to host my own
bmk#1476: my computer is a measly 7700k
Pepe Le Spooder#0420: I do this as hobby so i can just boot up when i need it
gollark#3909: Mine doesn't do any serious computing but just runs always-on things like my website.
IKEA#9631: *cries in dual core i3 6100*
Pepe Le Spooder#0420: But like autocad I do Contracts and other stuff like that
Pepe Le Spooder#0420: SLS modeling , 6 axis CNC plus renders and flow simulations and tensile tests
Pepe Le Spooder#0420: Also gcode and arduino pcbs
Pepe Le Spooder#0420: Definitely not the best versed on Ai or Really Computers in general But I like to dabble and The idea of Gpt-J sounds super interesting to me
Pepe Le Spooder#0420: I have more of an interest in Its interperation of physical stimuli tbh
Pepe Le Spooder#0420: Try to teach it what a second means and hook it up to a chronograph and an atomic clock
Pepe Le Spooder#0420: My interest lies in the lack of rebound in the `synapses` of a neural network: its `rebound` is basically as fast as the processor/memory can respond, and that's way lower than our 2.5 ms threshold
Pepe Le Spooder#0420: My interest would be in whether it starts to deviate from the trained model; that would at least show that rebound is possibly what causes our understanding of time
Pepe Le Spooder#0420: Just hypothesizing, probably already been tested before, probably a stupid idea, but I just can't help but wonder: if the GPT-J model was taught to perceive time, how would it make sense of it? What would the model look like?
KentC#7374: Is there a common strategy for fill-mask-like tasks except the mask can be multiple tokens? E.g. "Before World War 2, there was ___." where the blank can be something like "World War I" or "Germany annexed Austria"
EricHallahan#1051: T5 can do this.
Louis#0144: what is the prompt to do that on T5
Louis#0144: i need that myself
EricHallahan#1051: Read the paper.
Louis#0144: I couldnt figure it out the other day, the documentation on T5 is kinda weird
Louis#0144: I did...
Louis#0144: theres no mask token
Louis#0144: lol
EricHallahan#1051: ¯\_(ツ)_/¯
EricHallahan#1051: I don't know much about T5
Louis#0144: the blog post that came out for T5 suggested doing like "Before World War 2, there was \_\_5\_\_" to tell it to fill in 5 tokens
Louis#0144: but I think that was like something that was only done for select models? because the version I tried couldnt do it
KentC#7374: The Hugging Face docs give an example like 'The <extra_id_0> walks in <extra_id_1> park' at least
Louis#0144: does it do anything?
Louis#0144: can you check for me?
KentC#7374: No, can't download the model right now. And it's likely I'm confused, as that was under a section on training
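For reference, a minimal sketch of T5 span infilling with HuggingFace Transformers (assuming the stock `t5-base` checkpoint; output quality isn't verified here):
```
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Sentinel tokens <extra_id_0>, <extra_id_1>, ... mark the spans to fill.
inputs = tokenizer("Before World War 2, there was <extra_id_0>.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
# The decoder output interleaves sentinels with the predicted spans.
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```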
KentC#7374: Is there anything wrong with the idea of just doing a beam search and tossing everything where the end of sentence is a low scoring next token?
cfoster0#4356: It doesn't work as well as you'd hope. Leads to repetitions and whatnot
cfoster0#4356: Conceptual question in JAX. Say I run a pmap-ed `fun1` such that the computation is sharded across 8 devices, and assign the resulting outputs to `results`. Now let's say I run another pmap-ed `fun2`, mapped over some other data, but that uses `results` from the enclosing scope. Will each device see the entirety of that variable (and implicitly allgather) or only whatever was on its original shard?
KentC#7374: Isn't that addressed by n-gram penalties?
nev#4905: there are like 256 mask tokens
nev#4905: :thisup:
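For reference, a minimal sketch of multi-token infilling with T5's sentinel tokens via Hugging Face `transformers` — the checkpoint choice, decoding settings, and sample output are illustrative assumptions, not a verified recipe:

```python
# Hedged sketch: T5 span infilling. Each <extra_id_N> sentinel marks a blank
# that may expand to multiple tokens; the model's output wraps each filled
# span between consecutive sentinels.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

text = "Before World War 2, there was <extra_id_0>."
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Beam search plus an n-gram penalty, per the repetition discussion above.
outputs = model.generate(
    input_ids,
    max_length=20,
    num_beams=4,
    no_repeat_ngram_size=2,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
# e.g. "<pad><extra_id_0> World War 1<extra_id_1> ..."
```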
kindiana#1016: Depends on the pmap axis
kindiana#1016: You will have an implicit allreduce if you tell it to replicate the sharded dim
cfoster0#4356: Allreduce?
kindiana#1016: Gather I mean
cfoster0#4356: Oh gotcha. What do you mean by "if you tell it to replicate the sharded dim"?
kindiana#1016: Actually pmap always shards by first dim
cfoster0#4356: If you aren't using `results` as an argument to pmap, then it's replicated, I assume? Like if it's a captured variable
janus#0150: https://cdn.discordapp.com/attachments/729741769738158194/874377731423490069/unknown.png
janus#0150: codex livestream on twitch.tv/openai tomorrow
bmk#1476: huh I didn't get this email
janus#0150: I'm not sure why I did
triggerhappygandi#0001: Everyone with api access should've gotten it
triggerhappygandi#0001: Since I got it even on my company mail
faraday#0862: how many people in total have API access? it's insanely hard to get access 😦
EricHallahan#1051: I don't.
alexyz#3459: not really, you just need to know how
Sparkette#4342: I hope Codex is going to be public, but idk how likely that is
janus#0150: It's possible github makes their api and vscode plugin public at some point, but it will definitely be closed-source and cost money. It seems much lower risk than the GPT api because it ~~can't~~ would be bad at creating the things OpenAI is scared of like hate speech, fake news, automated spam, child porn etc.
johncaling40#6574: true I see that
EricHallahan#1051: I don't see it as any lower risk.
johncaling40#6574: I have seen even gpt neo generate some funny fake news though, it even had a fake quote from jim mattis
johncaling40#6574: it was funny
EricHallahan#1051: Like there is still the potential for abuse in computer security and the like.
EricHallahan#1051: Maybe I am being dumb and should concede at that.
Teemochu#8740: it can still generate hate speech, you just need to seed with the funny word
janus#0150: I think it can definitely introduce security vulnerabilities (though it might actually be less prone to this than haphazard code), but that is a different kind of issue than the ones that makes OpenAI keep their API private. An analogue would be worrying that making the GPT api public will make writers will stop writing poems that rhyme because GPT is bad at rhyming.
bmk#1476: I would bet that codex writes more secure code than I do
EricHallahan#1051: That wasn't my argument but that is certainly a valid response.
janus#0150: I suppose there are worries like hackers could use codex to increase the speed at which they can write malicious code, but I don't think codex will be very good at that and the bottleneck is vulnerabilities not lines of code written.
bmk#1476: it also doesn't make things asymmetrically better for attackers
bmk#1476: any amount it helps hackers it also helps defenders
janus#0150: (also worth noting that Microsoft owns copilot and is definitely profit driven and less likely to give a fuck about downstream effects)
johncaling40#6574: true
bmk#1476: or at least, id need a really solid asymmetry argument to change my mind
johncaling40#6574: just train codex on non hacker code
johncaling40#6574: like in the dataset
EricHallahan#1051: I can't think of one.
johncaling40#6574: just don't include unsafe and malicious code
bmk#1476: galaxy brain solution
kindiana#1016: It can write code that can generate fake news tho :berk:
guac#4716: man karpathy was prescient af when he said GD can write better code than you lol
guac#4716: @bmk are you still enjoying Copilot
bmk#1476: yeah it's super nice
guac#4716: i wonder how long we'll get to use it for :hap: hopefully they don't pull the rug soon lol
u-sci#2261: There is a solid argument for asymmetry in the opposite direction: A defender can make an entire exploit strategy useless by re-generating any program involved in the attack. An attacker has to then find a new attack strategy before they can begin writing code at all.
u-sci#2261: So code generating capability gives defenders a button to "disable all existing 0-days" while offering no equivalent inverse for the attacker.
mkualquiera#3484: you could even regenerate your code on a daily basis so the target is dynamic and harder to predict
Teemochu#8740: canadian feathery goos tho
Teemochu#8740: (the ml sequel to american fuzzy lop)
u-sci#2261: It's no silver bullet for hacks but at least the asymmetry leans the "good" way
Teemochu#8740: Asymmetry leans the "good" way for generated text/images/video too though
Teemochu#8740: or rather the good way without the scarequotes
u-sci#2261: lol. I won't argue otherwise. The "fake news potential" seems like such a load to me.
Teemochu#8740: humans can already generate the avengers (there is a *lot* of CGI in modern movies, to the point that very little other than the actors is physical in a lot of action/sci-fi/fantasy productions)
Teemochu#8740: until an AI can get that convincingly real I'd say anything generative is between a nonissue and a liberation from exclusivity
Untouch#9150: I feel like almost all of this stuff needs heavy cleanup after the fact, someone can't just generate code and expect it to work flawlessly, they need to already know how it works in the first place.
Untouch#9150: People saying it'll like "make coders irrelevant" are silly.
Untouch#9150: There was something posted here where people could tell that articles generated by davinci were AI generated ~80% of the time.
EricHallahan#1051: Anything you can use an RNN for a Transformer can do better as far as I know.
Louis#0144: does anyone know where the alarmy bucket is
Louis#0144: I thought the eye would link to it but apparently not
StellaAthena#3530: https://the-eye.eu/eleuther_staging/Alamy/
Louis#0144: UNDER STAGING
Louis#0144: ok
Louis#0144: ty
Teemochu#8740: > THIS-DATASET-IS-FUCKED-STANDBY-FOR-UPDATES
Teemochu#8740: u...wu?
Louis#0144: I thought it was on gbucket?
Louis#0144: Chris was mentioning this in DMs
Louis#0144: like I can put it into my bucket
Louis#0144: but if its already hosted somewhere id rather save time
Teemochu#8740: chris like novel chris?
Louis#0144: no
Louis#0144: like spirit of germany chris
Teemochu#8740: ah I don't know him by that name 😛
EricHallahan#1051: Don't worry, this was my assumption too even though it makes no sense when you think about it.
Louis#0144: does chris even do ML now
Louis#0144: :berk:
Louis#0144: i digress
Louis#0144: is there a bucket link anyone here knows about?
StellaAthena#3530: Chris like redundant Chris?
Louis#0144: @spirit-from-germany
Louis#0144: Christoph
Teemochu#8740: it may make sense
Louis#0144: (I already DM'ed him, if I were to wake him up it would have been then)
Teemochu#8740: but as for why alamy and not danbooru...
alexyz#3459: what tf happened here
alexyz#3459: there's like 30 messages deleted here
Deleted User#0000: i was nice
Deleted User#0000: and i guess that wasn't appreciated
Teemochu#8740: we appreciate goose not nice
Deleted User#0000: !goose
Deleted User#0000: !!!
johncaling40#6574: the github portion of the pile, is there a license filter on it?
StellaAthena#3530: No
johncaling40#6574: do u think eleuther ai could make an open source codex?
ilovescience#3282: there's a whole community working on this
ilovescience#3282: more specifically an open source github copilot
alexyz#3459: there's a lotta people working on that
johncaling40#6574: oh nice, i would like to see github get a run for their money
alexyz#3459: it's not github making Codex tho, it's OpenAI
johncaling40#6574: ik but really similar cause ms, openai, etc
EricHallahan#1051: There has to be like 5 communities working on it by now. :3berk:
alexyz#3459: Github just got early access to it
Teemochu#8740: maybe make a spelling corrector first 😛
alexyz#3459: ms is just a partner, probably how github got it
ilovescience#3282: Apparently this is the community:
https://twitter.com/Code_dot_AI
Untouch#9150: https://twitter.com/kurumuz/status/1423754660607840260 this is the closest there is right now
alexyz#3459: like there's that one
alexyz#3459: and then there's genji
alexyz#3459: and 5 more that I can't think of rn
ilovescience#3282: https://github.com/CodedotAl/gpt-code-clippy
guac#4716: and @ mullikine pen.el for emacs users lol
ilovescience#3282: this was a project that also came out of huggingface community week, just like dall-e mini
EricHallahan#1051: There will be 20 by the end of the year. :omniberk:
Untouch#9150: this one is 1.3B where genji is 6B
ilovescience#3282: ah this is work by @kurumuz
ilovescience#3282: i was not aware of it
alexyz#3459: ye
alexyz#3459: it was in #gpt-j
EricHallahan#1051: They threw it together with the GitHub subset of the Pile.
Untouch#9150: gonna be hard to beat microsoft in the dataset game seeing how they own github
bmk#1476: https://huggingface.co/lg/ghpy_20k/
EricHallahan#1051: ~~ghpy-6B wen~~
EricHallahan#1051: ~~soon™️~~
ilovescience#3282: use gitlab also lol
Deleted User#0000: gpt-3
natedog#8669: Always looking for new members 🤓. We are hoping to finetune a GPT-J model soon on github data (multiple programming languages similar to copilot)
jekbradbury#2280: when you capture a value in a pmap by closure, it’s treated as broadcasted over the devices (so in this case all-gathered)
cfoster0#4356: Ah thanks. Appreciated!
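A toy illustration of the behavior described above, assuming a multi-device host (the shapes are made up):

```python
# Sketch: a pmap argument is split along its leading axis, while a value
# captured by closure is treated as broadcast, so every device sees all of
# `results`.
import jax
import jax.numpy as jnp

n = jax.local_device_count()
xs = jnp.arange(n * 4.0).reshape(n, 4)

# fun1: sharded across devices, one row of `xs` per device.
results = jax.pmap(lambda x: x * 2.0)(xs)

def fun2(y):
    # `results` is closed over rather than passed as an argument, so it is
    # implicitly all-gathered: its full (n, 4) contents are visible here.
    return y + results.sum()

out = jax.pmap(fun2)(xs)
print(out.shape)  # (n, 4)
```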
Kia#2550: Well...
ilovescience#3282: well?
𓅬 gabriel_syme 𓅬#3220: it has to be named clippy btw, or include it smh
𓅬 gabriel_syme 𓅬#3220: don't look at me, I don't make the rules
Ajay sahu#2540: https://github.com/facebookresearch/CodeGen
Zac-HD#7996: Tensorfuzz: http://proceedings.mlr.press/v97/odena19a.html
Julius G.#2705: https://cdn.discordapp.com/attachments/729741769738158194/874581166999601183/unknown.png
𓅬 gabriel_syme 𓅬#3220: I mean, maybe it's really sorry?
Julius G.#2705: it's trying to complete a short story about suicide, but instead, it becomes emotional :/
Kia#2550: That's sad
Kia#2550: But
Kia#2550: Yeah...
Kazumi#1297: sometimes just repeating the same thing is effective
Kazumi#1297: https://media.discordapp.net/attachments/770311829237792770/863141943336632320/JXRkry-c.png
Julius G.#2705: its like conversational tension in a way, usually you can laugh, but the model stammers
Kazumi#1297: Mhm
Kazumi#1297: she gets weirdly emotional sometimes
Julius G.#2705: maybe its just us
Kazumi#1297: https://media.discordapp.net/attachments/770311829237792770/776099599467151411/Screenshot_20191023-170814.png
Kazumi#1297: maybe it's just us projecting
xloem#0717: Hi eleuther chat, I was wondering if anybody had information anywhere on setting up to work state of the art models off-cloud? Like, what hardware a community needs, or p2p solutions for sharing workload? Is anybody maintaining community information on this?
flowpoint#7450: have a look here first
https://www.eleuther.ai/faq/
🙂
xloem#0717: Thank you. I will read it. Also, OpenAI is demonstrating their latest source code generation model at 10a PT today http://twitch.tv/openai (sorry for big link graphic, don't know how to disable)
Raist321#5190: Have people here done research on using the image mapping for prediction? What orders of magnitude on the datasets seem to be required given the corpus isn’t really fully thought through for text image combo
Raist321#5190: Errr.. multimodal channel
alstroemeria313#1694: btw apparently blending multiple StyleGAN checkpoints can improve output quality
EricHallahan#1051: I thought that was why they use EMA?
Raist321#5190: Interesting
alstroemeria313#1694: It is
alstroemeria313#1694: But blending helps additionally
alstroemeria313#1694: Like we are blending D, G, and G_ema
alstroemeria313#1694: And then generating from the blended G_ema
EricHallahan#1051: D is the discriminator?
alstroemeria313#1694: Like blending and EMA work for basically the same reason
alstroemeria313#1694: Yes
alstroemeria313#1694: We blend all three so we can fine-tune from the blended checkpoint
EricHallahan#1051: :thonk:
alstroemeria313#1694: Could just use slower EMA decay I guess
EricHallahan#1051: ¯\_(ツ)_/¯
alstroemeria313#1694: Like once it's decently OK turn EMA decay way up
EricHallahan#1051: If it works it works.
alstroemeria313#1694: pbaylies did it to create the released WikiArt StyleGAN checkpoints
alstroemeria313#1694: Checkpoint blending I mean
alstroemeria313#1694: I asked him for training tips once
alstroemeria313#1694: He also used top-k which requires code changes to do
alstroemeria313#1694: (This is where you only include contributions from the top k G samples in each batch in G's loss function)
alstroemeria313#1694: (Not sure how to do it in multi-GPU land)
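For the curious, a rough sketch of that top-k trick, assuming a non-saturating generator loss (the exact formulation used for the WikiArt checkpoints may differ):

```python
# Hedged sketch of top-k GAN training: G's loss only counts the k generated
# samples that the discriminator currently scores highest.
import torch
import torch.nn.functional as F

def topk_generator_loss(d_logits_fake: torch.Tensor, k: int) -> torch.Tensor:
    # d_logits_fake: discriminator logits for a batch of generated images
    topk_logits, _ = torch.topk(d_logits_fake, k)
    # non-saturating generator loss, restricted to the top-k samples
    return F.softplus(-topk_logits).mean()
```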
MasterScrat#6910: Weren’t they using blending for super resolution as well, or was that a different kind
alstroemeria313#1694: idk?
alstroemeria313#1694: Oh
alstroemeria313#1694: Yeah
alstroemeria313#1694: ESRGAN
alstroemeria313#1694: They trained their superres model with an L1 objective for a bit then saved a checkpoint and switched to the full VGG + D objective
MasterScrat#6910: https://github.com/xinntao/ESRGAN#network-interpolation-demo
alstroemeria313#1694: Then you could trade off detail vs artifacts at inference time by interpolating between the L1 and full objective checkpoints.
alstroemeria313#1694: I actually implemented this, it works
alstroemeria313#1694: The L1 training goes way faster so it doesn't actually add much to training time
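A minimal sketch of that interpolation, assuming two compatible PyTorch checkpoints (the file names are hypothetical):

```python
# Blend two checkpoints of the same architecture by linearly interpolating
# their parameters, as in the ESRGAN network-interpolation demo linked above.
import torch

alpha = 0.8  # 1.0 = pure L1 checkpoint, 0.0 = pure full-objective checkpoint
ckpt_l1 = torch.load("model_l1.pth")     # trained with the L1 objective
ckpt_gan = torch.load("model_full.pth")  # trained with the VGG + D objective

blended = {k: alpha * ckpt_l1[k] + (1.0 - alpha) * ckpt_gan[k] for k in ckpt_l1}
torch.save(blended, "model_interp.pth")
```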
IKEA#9631: i cant stop looking at this chart https://cdn.discordapp.com/attachments/729741769738158194/874684920121925654/unknown.png
IKEA#9631: its... beautiful
chilli#5665: :berk:
EricHallahan#1051: PSA: Looks like GitHub is having issues.
https://www.githubstatus.com/
StellaAthena#3530: Yeah I was trying to do some things on the website and it was spazzing out on me
StellaAthena#3530: On my end it looks like the production website is fine?
StellaAthena#3530: But if it's broken / randomly breaks today that's probably why
kurumuz#5695: OH THAT WAS WHY
kurumuz#5695: it wasnt pushing lmao
kurumuz#5695: i was going crazy
EricHallahan#1051: Yeah, I was running into issues with the GPT-J PR.
EricHallahan#1051: Production website is on netlify.
EricHallahan#1051: Same
StellaAthena#3530: Right but I was making ~~PRs~~ commits and it was loading the wrong page afterwards or refusing to accept them
EricHallahan#1051: ~~Why do you need to be making PRs?~~
EricHallahan#1051: ~~Just create a branch~~
StellaAthena#3530: Commits sorry
StellaAthena#3530: Branches was one of the big things I was struggling with
StellaAthena#3530: Branches were disappearing, or I'd make a commit and it would switch branches on me
laterbit#7218: anyone watching the openai live demonstration on twitch rn?
Daj#7482: in #off-topic
eminx#1111: glad an FAQ was made because last time i was snooping around there was no such thing
cfoster0#4356: Wait really? We've had one since ~~January~~ November lol
eminx#1111: Yeah I was here like last summer and I just lurked, then I came in at some point later and asked some shit, FAQ just might not've been updated.
AmazingTurtle#0001: hey i was just wondering.. does mesh transformer jax / gpt-j run on non-TPU vms? or is a google cloud tpu mandatory? it's unclear to me if a tpu is required (or just recommended) during fine tuning and training, or also for evaluation of the model
edit: sorry for the nooby question
nev#4905: anyone up for tuning gpt-j on ao3 fanfiction lmao
nev#4905: also what's the best way to scrape a large website
StellaAthena#3530: We have a fanfic scrape floating around somewhere, try using the search
gollark#3909: I apparently have https://www.reddit.com/r/DataHoarder/comments/b6ut3j/fanfic_just_all_of_it/ bookmarked.
AmazingTurtle#0001: The requirements for mesh transformer jax say that at least 24GB of VRAM will be required. I was wondering if I could possibly lower this requirement and make it run on a RTX 3080. has anyone tried that?
Louis#0144: not naively
Louis#0144: You can make it run on 16GB
Louis#0144: a v100
Louis#0144: using fp16
Louis#0144: but 10gb... like MAYBE if you used some sharding method
Louis#0144: but if you need to ask if you can run it on a 3080, you probably dont have the DL experience to make it work on a 3080 (unless someone made that sharding method easy to use, I havent seen any updates in months though)
AmazingTurtle#0001: What does DL experience mean?
AmazingTurtle#0001: I'm just trying to figure out if I can work with all of this as a poor man, without investing thousands of dollars in a TPU/GPU
Louis#0144: Use colab pro
Louis#0144: Or vast.ai
AmazingTurtle#0001: But I still have to rent those GPUs right?
Louis#0144: Colab is $15 a month for a GPU that can run 6b
Louis#0144: Vast.ai is cheaper if you plan to use under 30hrs
AmazingTurtle#0001: 15 Bucks a month seems cheap
AmazingTurtle#0001: Now if I wanted to build a web Service That serves completions from a fine tuned model how’d i do that?
AmazingTurtle#0001: The cheapest/efficient way
Louis#0144: This is now #off-topic but I doubt anyone here would help you with that
Louis#0144: We’re researchers
Louis#0144: Not your personal engineer
AmazingTurtle#0001: im sorry if I made you feel that way. I wasn’t looking for personal assistance with my businesses. I’m just trying to get an idea of how to put all of this to use
EricHallahan#1051: It runs on anything you can install the requirements on.
EricHallahan#1051: If you can install JAX, haiku, *et cetera* on your system and you have the memory/compute, it should work.
AmazingTurtle#0001: and it needs 24GB because there are that many weights and they all have to be present in memory at once, am I right? E.g. 200 fp16 weights would need roughly 400 bytes of memory, for example?
AmazingTurtle#0001: side note: im really sorry to anyone shaking his head right now because of my questions. It's just that I find all of this very fascinating, but the vocabulary and concepts are very difficult for me to learn, that's why I'm asking dumb questions
EricHallahan#1051: It is really tough, because at FP32 you do need 24 GB of ram to hold the weights in memory. At FP16, it takes up 12 GB, and you still need to add in overhead. It is possible to run it with less VRAM by offloading weights to system memory but it is not a plug-and-play solution.
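The arithmetic, using ~6B parameters as a round number:

```python
# Back-of-the-envelope memory for the weights alone (ignores activations,
# optimizer state, and framework overhead).
params = 6e9                       # ~6B parameters
print(params * 4 / 1e9, "GB")      # FP32: 4 bytes/param -> ~24 GB
print(params * 2 / 1e9, "GB")      # FP16: 2 bytes/param -> ~12 GB
print(200 * 2, "bytes")            # 200 fp16 weights -> 400 bytes, as above
```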
nev#4905: this needs to be pinned
AmazingTurtle#0001: I see
EricHallahan#1051: None of us (except maybe Louis) are shaking our head right now. Though this would probably be better suited to #gpt-j, we are not going to be mad at you for asking a perfectly valid question somewhere where it makes sense too, especially considering that there is a separate conversation going on in that channel.
AmazingTurtle#0001: The J in GPT J is for Jax?
EricHallahan#1051: Is it?
AmazingTurtle#0001: i don’t know?
EricHallahan#1051: Nobody knows what JAX even stands for. :berk:
AmazingTurtle#0001: but thanks for your empathy and well understanding 😄
EricHallahan#1051: Presumably.
EricHallahan#1051: Anyway, welcome!
AmazingTurtle#0001: 🙂 thanks
AmazingTurtle#0001: Maybe I try google cloud Colab tomorrow. Just experimenting. I’m not even from the python world so it’s going to be twice confusing
AmazingTurtle#0001: All I knew was that python 2 and 3 don't like each other and I've always had issues with those two in the past.. lol that's why I left python before I even really started with it
xloem#0717: @AmazingTurtle i'm also new to this and figuring out where to do it and how much to pay. I have some python experience but not much deep learning experience. Web services sound fun. Looking for cost-sharing solutions. I usually just work on this stuff in occasional bursts, and disappear for long gaps between.
Hypicksell#1844: Is it true the EleutherAi is developing a model called gpt-neox that is going to have more data points than gpt-3-davinci?
kurumuz#5695: is it true that they have a faq
kurumuz#5695: https://www.eleuther.ai/faq/
EricHallahan#1051: Wait, that's illegal.
EricHallahan#1051: ~~Only I can do that.~~
kurumuz#5695: kek
ersatz#0001: Do you know a good prompt to rewrite a text with better prose?
ersatz#0001: for GPT-J
EricHallahan#1051: Ask #prompting
Louis#0144: The word prose is weird
Louis#0144: No one in comp storytelling uses it
Louis#0144: It’s like something that in terms of LMs first saw use in the AID community
EricHallahan#1051: How is that possible?
EricHallahan#1051: It seems like a really relevant word.
Louis#0144: Dude I have read atleast 50 comp storytelling papers at this point
Louis#0144: Maybe more
Louis#0144: I have seen prose *once*
EricHallahan#1051: ¯\_(ツ)_/¯
Louis#0144: In a paper that was mostly lit sci
Louis#0144: People use the word fabula and syuzhet
Louis#0144: Prose is the more ambiguous version of syuzhet
Louis#0144: Fabula means like the underlying mechanics of your story
Louis#0144: (Syuzhet and fabula both have super technical definitions though)
ersatz#0001: I just learned about fabula and syuzhet
Louis#0144: Welcome to the club
Louis#0144: It’s basically in every comp storytelling paper
Louis#0144: :berk:
Louis#0144: To clarify though no one uses prose because no one can agree on what a prose actually is
Louis#0144: lol
ersatz#0001: syuzhet is my new favorite obscure technical linguistic word after hapax legomenon
kurumuz#5695: idk we use prose all the time
kurumuz#5695: fuck the research anyway kek
kurumuz#5695: academy bores me. absolutely boring
kurumuz#5695: im high and dont listen to me as well
Louis#0144: Lmao
ersatz#0001: nothing unusual then?
kurumuz#5695: ye
IKEA#9631: So uhh .. any mad lads planning on replicating ~~Open~~ClosedAI's new Codex yet?
IKEA#9631: Seems like it's basically GPT3 with a special sauce
EricHallahan#1051: <https://huggingface.co/NovelAI/genji-python-6B>
<https://huggingface.co/lg/ghpy_20k>
EricHallahan#1051: There are at least five different organizations trying to work on replication. :3berk:
IKEA#9631: I mean yeah but they probably don't plan on making it free or releasing the weights though
guac#4716: i doubt leo would put a price on his ghpy lol
EricHallahan#1051: Both of these are?
IKEA#9631: I mean, right
IKEA#9631: But there's a pretty big difference in terms of results and compute costs between a 6M and a 135M params model like GPT3 which would make it harder to justify just releasing it for free
IKEA#9631: I mean B
FluffyDreams#7419: Good evening people. I found this server from some popular tik tok user who creates art using ai
FluffyDreams#7419: How is everyone today?
ersatz#0001: who is this tik tok user?
EricHallahan#1051: Welcome!
EricHallahan#1051: I second this.
FluffyDreams#7419: TheOneStrokeBrush
EricHallahan#1051: I guess you are looking for #art then.
FluffyDreams#7419: Nope. I’m actually looking to learn more about AI
EricHallahan#1051: Feel free to lurk or ask questions.
S⛵#6488: What happened to GPTneo-350M? Why is it gone from huggingface?
kindiana#1016: gone reduced to atoms
EricHallahan#1051: Can someone tell me why there is a burst of interest in GPT-Neo 350M?
S⛵#6488: Why?? Does someone have a copy?
Louis#0144: Can someone unlock the pile so I can add something in there? @flowpoint is indexing the pile in elastic search right now with the purpose of eventually turning it into dense document embeddings. I wanted to name mention his project
Louis#0144: It’s over in the goose2goose discord
Louis#0144: I guess you don’t need to unlock it, I put it here for safe keeping
Louis#0144: :berk:
S⛵#6488: I personally am interested because it's medium size model that strikes a balance between size and power
EricHallahan#1051: It was removed from Hugging Face months ago because we never tested it before putting it up.
S⛵#6488: Would you perhaps have a copy of the old binary blob available?
EricHallahan#1051: The reason I am asking is because I have been getting questions about 350M for the first time in months this week.
S⛵#6488: Interesting coincidence I guess
S⛵#6488: I remember using it right after release
EricHallahan#1051: ¯\_(ツ)_/¯
S⛵#6488: Is it likely there'll be another medium model with neox release?
EricHallahan#1051: We already have a medium model for GPT-NeoX but it has only been used in internal evaluations and experiments. I have no idea when (if ever) it will be released.
EricHallahan#1051: I think we should probably republish GPT-Neo 350M sooner rather than later but I have no idea when that will happen.
S⛵#6488: Okay, thank you for clarifying!
S⛵#6488: I hope the republish ends up happening
𓅬 gabriel_syme 𓅬#3220: yes please 🙂 on both 😄
EricHallahan#1051: I just don't see the demand, because you can tune GPT-Neo 1.3B and 2.7B on Colab if you want to, and inference is even easier than that.
𓅬 gabriel_syme 𓅬#3220: Well the 350 is a pretty nice sweet spot of potential performance / resource ratio
𓅬 gabriel_syme 𓅬#3220: but you're right, I'd definitely prefer the NeoX over Neo
S⛵#6488: Yeah exactly, I think 350M strikes a good balance
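For anyone following along, a minimal inference sketch for the models mentioned above, assuming the Hugging Face `transformers` library and a CUDA GPU:

```python
# Hedged sketch: GPT-Neo 1.3B inference. Loading in FP16 roughly halves the
# memory needed for the weights, which helps on Colab-class GPUs.
import torch
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = GPTNeoForCausalLM.from_pretrained(
    "EleutherAI/gpt-neo-1.3B", torch_dtype=torch.float16
).to("cuda")

ids = tokenizer("EleutherAI is", return_tensors="pt").input_ids.to("cuda")
out = model.generate(ids, max_length=32, do_sample=True, temperature=0.9)
print(tokenizer.decode(out[0]))
```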
Deleted User#0000: Can someone explain to me the goose?
Deleted User#0000: :goose3: :goose2: :goose11:
Deleted User#0000: Why such goose🤣
Deleted User#0000: Oh okay fair enough
Louis#0144: There is no explanation
Louis#0144: Goose is goose
Dwarf#6935: :goose:
Louis#0144: Goose is eternal
sweg#8920: GOOS ETERNAL
sweg#8920: my favorite game
Deleted User#0000: Fair enough
Deleted User#0000: I’m here for it
Deleted User#0000: GOOSE
EricHallahan#1051: There is no start only goose.
Deleted User#0000: A cult of goose
Louis#0144: ONE OF US ONE OF US
Deleted User#0000: Strange enough
Deleted User#0000: This is the second bird themed cult I'm a part of
Dwarf#6935: This is an official goose emoji server. This server just exists to give nitro users more goose emoji, no other reason.
EricHallahan#1051: Can we please go to #off-topic with this?
Kia#2550: Oi #off-topic
DivisibleByZero#7650: Is there any plans for a GPT-J fork that does code?
DivisibleByZero#7650: I'd be interested in helping
cfoster0#4356: There's a model out there that's gpt-j tuned on code
EricHallahan#1051: ^
cfoster0#4356: 6b specifically
DivisibleByZero#7650: Interesting thanks for sharing
DivisibleByZero#7650: Looks like the first step would be an equivalent of #the-pile for code
sweg#8920: do jax keys make rng fixed?
sweg#8920: im running the same model training loop twice
sweg#8920: and the second time i got the exact same loss numbers despite what i thought would be a new order of the dataset
sweg#8920: https://cdn.discordapp.com/attachments/729741769738158194/874850910382194769/unknown.png
sweg#8920: this is so weird
Louis#0144: That’s normal
Louis#0144: I’ve had that before
sweg#8920: phew
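That determinism is expected: JAX randomness is purely functional, so reusing the same PRNGKey reproduces the same draws. A quick sketch (the shuffle here is illustrative):

```python
# Same key -> same permutation; split the key (or fold in the epoch number)
# to get a fresh stream for each run or each epoch.
import jax

key = jax.random.PRNGKey(0)
a = jax.random.permutation(key, 10)
b = jax.random.permutation(key, 10)
print(bool((a == b).all()))   # True: identical shuffles

key, subkey = jax.random.split(key)
c = jax.random.permutation(subkey, 10)
print(bool((a == c).all()))   # almost surely False: fresh subkey
```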
kt444#0431: Got access to Codex, anyone want me to try anything?
ethan caballero#6044: confirm whether this is true/false:
https://twitter.com/awjuliani/status/1410649405900746760
Louis#0144: Can you actually put meme comments in and see what it does
Some Point Process#3793: obviously a sarcastic post, but some form of short term memory (even just recurrent hidden state) seems necessary to make progress towards AGI from here, I would think 😅
Some Point Process#3793: If we're going to scale up transformers that is
guac#4716: @ethan caballero LSTM confirmed https://cdn.discordapp.com/attachments/729741769738158194/874855823665426473/Screen_Shot_2021-08-10_at_11.24.35_PM.png
𓅬 gabriel_syme 𓅬#3220: that's glorious
guac#4716: info. bottlenecks bad :goose10:
Some Point Process#3793: Well there's a tangentially related paper that memory-based models (allowing a neural net to solve dynamic programming problems) are an important *non-linearity* for extrapolation https://openreview.net/pdf?id=UH-cmocLJC. Not too relevant tho, but might be interesting
kt444#0431: ```python
import torch
import torch.nn as nn


class AGI(nn.Module):
    """
    This class implements the auto-gradient inverter proposed in the paper
    'Equality of Opportunity in Supervised Learning' by Raghunathan, Driver, and Abbeel.
    """

    def __init__(self, epsilon):
        super(AGI, self).__init__()
        self.epsilon = epsilon

    def forward(self, x, y, s):
        """
        This function returns the adversarial example of the given sample x,
        with respect to the given label y.

        Parameters:
        - x: the sample to be perturbed
        - y: the desired output label
        - s: the model's predictions for the given sample

        Returns:
        - the adversarial example
        """
        x = x.clone().detach().requires_grad_(True)
        # compute the gradient of the loss with respect to the input
        loss = (s[:, y] - s[:, 1 - y]).mean()
        loss.backward()
        # compute the direction of the perturbation:
        # the sign of the gradient tells us whether we should increase or decrease the image pixels
        direction = torch.sign(x.grad)
        x_adv = (x.data + self.epsilon * direction).clamp(0, 1)
        return x_adv
```
ethan caballero#6044: try "SAFE_AGI"
guac#4716: hmmm interesting thanks for the link. i've not dabbled in GNNs much
Some Point Process#3793: This paper also shows that memory is a necessary and sufficient component for *meta-learning* to occur https://deepmind.com/research/publications/2019/Meta-learning-Sequential-Strategies
Some Point Process#3793: (Or maybe just sufficient, idk)
kt444#0431: ```python
import torch
import torch.nn as nn


class AGI(nn.Module):
    def __init__(self, config):
        super(AGI, self).__init__()
        self.config = config
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, 1, 1),
            nn.ReLU(),
            nn.Conv2d(32, 32, 3, 1, 1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(32, 64, 3, 1, 1),
            nn.ReLU(),
            nn.Conv2d(64, 64, 3, 1, 1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(64, 128, 3, 1, 1),
            nn.ReLU(),
            nn.Conv2d(128, 128, 3, 1, 1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
        )
        self.classifier = nn.Linear(128 * 4 * 4, 10)

    def forward(self, x):
        x = self.conv(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x
```
guac#4716: @ethan caballero Schmid hap https://cdn.discordapp.com/attachments/729741769738158194/874856908933173279/Screen_Shot_2021-08-10_at_11.28.51_PM.png
ethan caballero#6044: try "UNSAFE_AGI"
kt444#0431: ```python
import torch
import torch.nn as nn


class Psychotic_AGI(nn.Module):
    def __init__(self, in_channels=3, num_classes=1000):
        super(Psychotic_AGI, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
            nn.Conv2d(64, 192, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(),
```