StellaAthena#3530: Something we have strived to do but generally struggled with is pair up engineers and hackers who post results in the discord server with people with more academic backgrounds to get stuff written up and published.
StellaAthena#3530: The most successful example of this would be me writing up the VQGAN-CLIP paper about 18 months after it was developed.
kram#1032: other modalities nice to have: time (i.e. video rather than static images), 3D context
kram#1032: at least audio/video pairing is easy enough. Literally just scrape videos.
With some quality control I guess.
for that presumably the limitation is the memory you'd need
bob80333#4040: LAION does have some work on text-audio models:
dataset: https://github.com/LAION-AI/audio-dataset
model: https://github.com/LAION-AI/CLAP
kram#1032: I could also imagine crazier / more niche things like, say, adding molecular data or chess games somehow.
Not sure that'd be even remotely useful.
But would be fun if you could then take a classic artistic photo of a chess board with a ton of bokeh, and that CLIP version manages to estimate the board position and who is winning
kram#1032: that one ego perspective actions dataset could be fun for pretty finely annotated videos
Kurt#5353: Hello folks,
I'm sorry if you all are busy, but can I ask for some support please?
kram#1032: it was specifically Figure 3 btw https://cdn.discordapp.com/attachments/729741769738158194/1097914530102792192/image.png
kram#1032: TBH weird thing to add there because the graphs aren't even related to that
StellaAthena#3530: That depends on what you’re looking for. This is generally not a good place to get introductory help with code or AI, or using other people’s products.
StellaAthena#3530: Weird
Kurt#5353: I've downloaded **EleutherAI/gpt-neo-2.7B**
But for some reason the bot doesn't answer my simple questions.
I have a Ryzen 7 3600x
32 GB RAM
RTX 3070
Any advice? My code looks fine to me too, so..
Chad Kensington#9564: @spirit-from-germany was thinking about chess data. Not sure for clip tho
StellaAthena#3530: The simple answer is that that model isn’t good at answering questions
StellaAthena#3530: Nothing is wrong technically, you’re just misusing technology
Kurt#5353: Oh! I get it. An example of what I can say to interact with him?
kram#1032: A specialized model for chess data would be easy enough. Could even do *synthetic* 3D renderings of chess boards under various light conditions and at various game states, along with chess move strings or whatever.
How that'd actually fit into a larger model as some sort of "sub-modality" is less clear tho
StellaAthena#3530: It’s not a “him,” because it’s a computer program not a person. And generally no, it’s bad at interactive conversation.
StellaAthena#3530: I’ve had several people volunteer to build me a nice dataset of chess games to train models on and then not do so 😦
kram#1032: aww
kram#1032: The strings ought to be available freely in large quantities I'm sure
Kurt#5353: Copy that! Thanks a lot! Do you know of any base model at the moment that's good for answering questions?
Chad Kensington#9564: Haha this is such a now thing
tpapp157#3643: I played around with contrastive learning of chess states a little while back. Got some interesting results without too much work. The model pretty easily learned a lot of the standard openings for example. I was hoping to see if it could do more advanced things like player ELO prediction but it never quite got there, though I had a whole list of potential improvements that I never got around to implementing.
Kurt#5353: I'm sorry, I'm French so.. Aha
kram#1032: Loved that one engineer who got fired over this distinction lol
kram#1032: Romance languages be like "what's an 'it'"?
Chad Kensington#9564: Btw not sure if its a good db but i think there's like a 350gb or so chess text db from stockfish data if u interested
StellaAthena#3530: Someone sent me an email asking for recommendations for relationship therapists who wouldn’t laugh at them for dating ChatGPT 😦
Chad Kensington#9564: Hahahaha that's awesome
StellaAthena#3530: They asked me specifically because they asked ChatGPT who they should ask and it recommended me
kram#1032: It's crazy just how many people simply need somebody to talk to
StellaAthena#3530: I know this because they included a screenshot showing it
kram#1032: Even far simpler chat bots are known to be *somewhat* effective in this way
kram#1032: when it becomes so near-human...
kram#1032: it's no wonder a lot of people have this kind of experience
StellaAthena#3530: I wonder how the social dynamics of this would have played out in a non-COVID world. Similarly I’m sure, but people are more lonely and disconnected than they used to be
tpapp157#3643: https://en.wikipedia.org/wiki/ELIZA
Chad Kensington#9564: Was thinking of this
kram#1032: same
kram#1032: It's in some ways a scary thought that it really is *that* easy to help people
kram#1032: Both in terms of "are we this predictable" and in terms of "we as a society are failing us as individuals"
Chad Kensington#9564: I guess ppl just want someone to listen to them
kram#1032: yeah
Chad Kensington#9564: Even if that someone is not very smart
kram#1032: It's not a silver bullet of course.
Loads of people have problems that could not be fixed with a simple (or even sophisticated) chat bot
kram#1032: But for a surprisingly large number of them, this is all it takes
Chad Kensington#9564: Getting overly attached to ai is a problem too
tpapp157#3643: https://database.lichess.org/
kram#1032: In Japan they have, like, a healthcare seal robot designed to make elderly people feel less lonely
Chad Kensington#9564: I remember someone tried bringing dead ppl back to life using finetuned gpt 3
Chad Kensington#9564: And open ai shut it down
StellaAthena#3530: Yes I know where to get the data. It’s the downloading, preprocessing, and making a tokenizer that I haven’t had time to do
kram#1032: Thing is, attachment is kinda our thing?
If people can get madly attached to entirely inanimate objects, is it a wonder they can get attached to things that talk to them and always give thoughtful(seeming) responses showing that they "truly" "listened"?
kram#1032: Like how many people have a car or something as a personality trait?
kram#1032: And I gotta say, knowing 100% full well what ChatGPT and the like is, I am *definitely* automatically more polite to it, just because it "is polite". It's just natural.
I think the effect has reduced by now and I tend to be more, like, straight-forward to these models now. If I talked to people like that, it wouldn't quite be mean, but it would be somewhat harsh and direct.
But the AI of course doesn't care
kram#1032: Initially tho? TONS of "please" and "thank you" and "you did so well" etc. As if me being nice to it even mattered
Chad Kensington#9564: Haha yeah same
Chad Kensington#9564: Tho i guess i do notice an uncanny valley
kram#1032: There definitely is an uncanny valley
Chad Kensington#9564: When it forgets imo
voxs#0001: really? ive always treated it as like a glorified search engine/autocomplete (because thats what it is), so i type in the minimal info to get a response, no platitudes, dont fix typos
voxs#0001: well i suppose i tried out gpt-2 alot and it was a stupid bot, so i see chatgpt in the same light
kram#1032: In fact, imo that should be studied further:
I suspect roughly what's going on is, that "the weirdness" moves to deeper and deeper parts of our brains (higher-level, more conceptual stuff) as these AIs are getting better.
Even with near perfect "photorealistic" image generation, if you see a ton of examples, you eventually notice a certain "flatness" to most AI art for instance. - They are blurry and vague in a similar way to early GANs that gave blurry faces, but the specific features that are vague in that way are much subtler and harder to articulate. They sit at a level we normally don't talk about.
And language AIs' text output has a similar feel to it.
(I have to say I haven't had a chance to interact with GPT-4 yet tho)
voxs#0001: definitely agree
voxs#0001: its like image generation models not quite looking photoreal at first, but now mj v5 is basically photoreal
voxs#0001: we'll get there in time
kram#1032: My main interest tends to be story generation rather than search.
I have used it to help me with a couple coding things tho
kram#1032: (All of these AIs thus far are pretty poor at stories)
kram#1032: (I think OpenAssistant would actually be quite good if it simply had WAY more context.
ChatGPT tends to be WAAAAY too non-committal for stories)
kram#1032: I'm very much looking forward to million-token-scale contexts. I'm sure that'll happen, and maybe even soon
kram#1032: Give me an AI where I can just plug in all of Lord of the Rings and then ask for an entirely new book in that style and it can finish that book without losing track of the first word of the first book lol
Chad Kensington#9564: I think for that u need to have ai reference a db somehow
kram#1032: and then illustrate every page while we're at it
Chad Kensington#9564: Like more complicated than the current emb retrieval imo
jrowe#5371: a true-to-the-book, on the fly audio/video epic movie adaptation
kram#1032: I saw some techniques that make sense to me to that effect.
One of them was like basically doing two levels, one coarse-grained, attending to like 16 tokens at a time, so it could go for 16x the context. If you add a couple more levels you could *easily* scale that to 1024x or more.
I'm not sure how well that'd work tho
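A rough sketch of that two-level idea (illustrative only, not the actual CoLT5 or memorizing-transformer code; the group size of 16 and the use of mean-pooling are assumptions):
```python
import torch
import torch.nn.functional as F

def two_level_attention(q, k, v, group: int = 16):
    """Toy coarse-grained attention: queries attend to mean-pooled groups of
    `group` tokens instead of every token, giving ~group x more reach for the
    same attention cost. Shapes: (batch, seq, dim)."""
    b, s, d = k.shape
    pad = (-s) % group
    k = F.pad(k, (0, 0, 0, pad))                      # pad seq to a multiple of `group`
    v = F.pad(v, (0, 0, 0, pad))
    k_coarse = k.view(b, -1, group, d).mean(dim=2)    # (b, s/group, d)
    v_coarse = v.view(b, -1, group, d).mean(dim=2)
    scores = q @ k_coarse.transpose(1, 2) / d ** 0.5  # (b, s, s/group)
    return torch.softmax(scores, dim=-1) @ v_coarse
```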
jrowe#5371: mix and match your favorite licensed celebrity likenesses, do your own casting, or create your own characters from an AI person/voice creator
jrowe#5371: media is going to get really fun and really weird
Chad Kensington#9564: Hmm interesting sounds like longformer maybe
Chad Kensington#9564: So like pooling?
kram#1032: https://arxiv.org/abs/2303.09752 this
kram#1032: and also this maybe https://arxiv.org/abs/2109.00301
mahouko#7043: forward-forward training doesn't require functions to be differentiable, so you could do DB lookup
kram#1032: and I'm sure there are other such techniques
Ravna#1831: People don't consume media for its own merit though. They consume media in order to produce or carry memes. For that purpose they form fandoms and fandoms need common grounds. Highly personalized media can't do that.
kram#1032: CoLT5 is an extension of longformer I think
Chad Kensington#9564: Tnx
mahouko#7043: I mean forward-forward is *questionable*, so that gives you a new problem
Chad Kensington#9564: Interesting
mahouko#7043: you basically have to mitigate internal covariate shift on every layer of your model (i.e. normalize everything)
Ravna#1831: media is just a basis on which people can gossip together
kram#1032: forward-forward is weird from what I saw. It just randomly initializes a bunch of stuff, freezes that, and then adds for each block a layer that attempts to learn the output of the block given some input
kram#1032: so you can basically do it all "in one layer"
kram#1032: it's also worth checking the citations of both of those to find even newer works that try to accomplish similar goals
Chad Kensington#9564: Interesting. I was also seeing wild stuff about contrastive reinforcement for story gen too
Chad Kensington#9564: Which i was planning to checkout
Ravna#1831: My heuristics is to treat researchers over 50 years old less seriously by default, no matter what great contributions they made when they were young. The fact that Hinton proposed forward-forward makes it less worthwhile for me, not more.
kram#1032: I think contrastive is brilliant and makes sense, but isn't quite the final answer.
In particular, I can easily have two sentences that mean the same thing.
For instance, it is rather possible to have two statements be semantically identical.
And contrastive loss tends to be too hyperfocused on 1:1 bijective mappings, when actually many-to-many mappings (so neither strictly injective nor strictly surjective) are usually completely valid.
kram#1032: A softer version of contrastive loss, which acknowledges these possibilities in its design, would be great
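A minimal sketch of what such a softer version could look like: standard CLIP-style InfoNCE uses identity (strict 1:1) targets, while passing an arbitrary soft target matrix allows many-to-many positives. This illustrates the idea only, it is not a published method, and the temperature value is an assumption:
```python
import torch
import torch.nn.functional as F

def soft_contrastive_loss(img_emb, txt_emb, targets, temperature: float = 0.07):
    """img_emb, txt_emb: (n, d) L2-normalized embeddings.
    targets: (n, n) row-stochastic matrix; the identity matrix recovers plain
    InfoNCE, while rows with several non-zero entries allow many-to-many matches."""
    logits = img_emb @ txt_emb.t() / temperature          # (n, n) similarities
    log_probs_i2t = F.log_softmax(logits, dim=1)
    log_probs_t2i = F.log_softmax(logits.t(), dim=1)
    loss_i2t = -(targets * log_probs_i2t).sum(dim=1).mean()
    loss_t2i = -(targets.t() * log_probs_t2i).sum(dim=1).mean()
    return 0.5 * (loss_i2t + loss_t2i)
```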
synquid#7193: depressing, we need to extend lifespans
kram#1032: From what I can tell, the issue there is that most existing solutions are kinda domain-specific though
jrowe#5371: sure, so bespoke media will be possible, but i think shared universes is where things will go - concurrent storylines, with a centralized cluster of narratives that propagate out to the other users
jrowe#5371: like an MMO / cinematic universe / bespoke content amalgamation
jrowe#5371: Ready Player One Oasis type thing
kram#1032: not quite sure what you mean
Ravna#1831: we also need to drive people out of their research field every few decades, no matter how lifespan extension therapy makes their brains young
kram#1032: Something RPO-level (hopefully without the whole being locked into the game business) would need many more modalities to function
Ravna#1831: we should move to #off-topic
kram#1032: but certainly could eventually be possible
KublaiKhan1#6681: You've mentioned this before I think, but do you have some examples?
tpapp157#3643: A lot is going to depend on the data you're working with, of course. But as a simple example, I was working with video data (just images, no audio, for that work), which sounds like one mode, but you can extract a whole bunch of views from it. Take a random crop from a frame as your baseline view, and from there you can add more views: same crop from next frame, same crop from prior frame, concentric crop from same frame, independent crop from same frame, same crop from optical flow field, etc. That's easily 6 different views which you can construct your relationship graph between for calculating losses. Managing your projector networks based on view relationships becomes really important for this sort of approach.
tpapp157#3643: If you set it up properly, the results you'll get will be way better than the baseline "take two random crops from the same frame and that's it" approach.
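A hedged sketch of that kind of view construction (crop size and offsets are arbitrary, the optical-flow view is omitted, and it assumes 0 < t < frames-1 with spatial dims comfortably larger than 3x the crop size):
```python
import random
import torch

def make_views(video: torch.Tensor, t: int, size: int = 64):
    """video: (frames, channels, H, W). Returns a dict of related views around
    frame t; each pair of views would get an edge in the relationship graph
    used to weight the contrastive losses."""
    _, _, H, W = video.shape
    y = random.randrange(size, H - 2 * size)
    x = random.randrange(size, W - 2 * size)
    crop = lambda f, yy, xx, s: video[f, :, yy:yy + s, xx:xx + s]
    return {
        "baseline":    crop(t, y, x, size),
        "next_frame":  crop(t + 1, y, x, size),                  # same crop, next frame
        "prev_frame":  crop(t - 1, y, x, size),                  # same crop, prior frame
        "concentric":  crop(t, y - size // 2, x - size // 2, 2 * size),
        "independent": crop(t, random.randrange(0, H - size),
                               random.randrange(0, W - size), size),
    }
```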
tpapp157#3643: As another example, I'm working on contrastive learning with some tabular data right now. Not only can you use relationships between data rows to define views but also every relevant table column is its own view too.
vikasp#7540: Does anyone know of concrete plans to train models on the pile v2 after it's released? I know there are plans to train RWKV on it, but I'm wondering if there are any exact attention models in the works.
StellaAthena#3530: It will be released along side trained models
vikasp#7540: Nice, thanks 🙂 Are you able to share any details on param count/context width?
StellaAthena#3530: I don't know the answers to these questions; Carper has been training them.
Aprilswind#1977: can i train gpt-j without gpu at all
Aprilswind#1977: i have 128 gigs of ram tho
Maximum Limelihood Estimator#8915: I have since learned that Chinese characters are very cool and can be tokenized in much more clever ways than either this or BPE where you represent Chinese characters as bytes
Sphinx#2092: I imagine in most cases they still have a token per character
Sphinx#2092: Except for the rare cases which get broken up as you suggested by byte-level fallback.
ephemical#2302: do Chinese LLMs use BPE at all?
ephemical#2302: say for English characters
kd90138#9368: There's enough difference to consider them different approaches
kd90138#9368: Similar issue in Korean, but if you look at the actual bytelevel unicode bytes the situation is not pretty
kram#1032: Either way it's cool and I'd love somebody trying this for *two* levels so you get up to 256x the effective context width.
kd90138#9368: I think a byte level BPE needs to be augmented with unicode aware tokenization but then you're just reinventing unicode
Technobird22#2055: https://cdn.discordapp.com/attachments/729741769738158194/1097996232556822678/image.png
kd90138#9368: GLM used icetk and SentencePiece (are those the same?)
kd90138#9368: Can you share best practices for preallocating whitespace in SentencePiece?
1. Don't preallocate, but allow whitespace-only tokens to be learned.
2. " " (whitespace)
3. "▁" (U+2581, the metasymbol used internally by SPM)
kd90138#9368: I narrowed it down to these
Maximum Limelihood Estimator#8915: IIRC this is what GPT does
kd90138#9368: GPT is (BPE)
kd90138#9368: Bbpe
Maximum Limelihood Estimator#8915: this is an approach that's sometimes used but newer Chinese LLMs use a much cooler tokenizer (cc @ephemical ). Chinese has a structure where individual characters (despite being considered as "one character/word") are composed of multiple subcharacters put together, where characters can be:
1. Ideograms (1 character represents 1 idea/thing, with no subcharacters)
2. Puns (characters based on similar sounds). e.g. in English you could write "I'm hoarse" as "I'm 🐴 "
3. Combining the above, e.g. you could use the character 👄 to make someone think of mouth/speaking, and then 🐴 to clarify that you should say the word "horse" out loud, so the combination "👄🐴" means "hoarse (as in voice)." This is by far the most common kind of character
4. Combining ideograms, as in this meme https://cdn.discordapp.com/attachments/729741769738158194/1098004112097955920/e1edf21d5b3c6dddf6a5ea2b03a7dbd3.png
Chad Kensington#9564: Probably the best explanation of kanji ive seen
Chad Kensington#9564: Never thought of using emojis
Maximum Limelihood Estimator#8915: It's an amazing system, especially for training LLMs, because you have distinct semantic and phonetic information for the same word, instead of having to learn semantic information from context. And it also means Chinese dictionaries have words grouped roughly by meaning (because you group by the radical, then the phonetic component)
Chad Kensington#9564: Yup japanese and chinese books are way shorter than their english translations for exactly that reason
ephemical#2302: if you have a large alphabet then everything becomes shorter
ephemical#2302: yeah
jrowe#5371: Dictionary size increases exponentially though
jrowe#5371: Wrt to lz style compression, not necessarily the Chinese equivalent of Websters
ephemical#2302: wrt to lz?
ephemical#2302: what are these abbreviations?
StellaAthena#3530: Emojis are also multiple characters (incl. other emojis) under the hood
StellaAthena#3530: It’s a surprisingly accurate analogy
ephemical#2302: oh I get what you are saying now
ephemical#2302: thought wrt was some kind of compression format
ephemical#2302: 😆
StellaAthena#3530: “With respect to”
jrowe#5371: Yes, sorry
jrowe#5371: Lol, too many acronyms
Maximum Limelihood Estimator#8915: So for a tokenizer, what you really want to do is decompose characters into subcharacters, then tag them as either radicals (providing semantic information) or phonetic and treat these differently. Phonetic subcharacters are converted into a phonetic representation like pinyin (Latin alphabet for Chinese). Separately, semantic symbols are encoded using even *more* structure: each Chinese character is a series of strokes, that have a traditional "ordering" you use when you write them. No it does not make sense lol the character looks the same at the end. But apparently which exact strokes and in what order can also contain semantic information. So you break all the semantic information into strokes (arranged in traditional ordering). At the end of all this, you can use BPE within semantic and phonetic sequences to get much better compression than with raw BPE, while preserving a *lot* more information than with raw BPE
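A very rough sketch of that pipeline; the `DECOMP`, `PINYIN`, and `STROKES` tables below are hypothetical toy stand-ins for real resources (e.g. Unihan/IDS decomposition data and a pronunciation dictionary), and only the control flow is meant to be illustrative:
```python
# Hypothetical toy tables; a real system would load these from
# radical/IDS decomposition data and a pronunciation dictionary.
DECOMP  = {"妈": ("女", "马")}       # character -> (semantic radical, phonetic part)
PINYIN  = {"马": "ma3", "妈": "ma1"}
STROKES = {"女": ["㇛", "㇒", "一"]}  # radical -> strokes in traditional order

def decompose(text: str):
    """Split each character into a semantic stream (stroke sequence of the radical)
    and a phonetic stream (romanized pronunciation); BPE would then be run
    separately over the two streams."""
    semantic, phonetic = [], []
    for ch in text:
        radical, phono = DECOMP.get(ch, (ch, ch))
        semantic.extend(STROKES.get(radical, [radical]))
        phonetic.append(PINYIN.get(phono, phono))
    return semantic, phonetic

print(decompose("妈"))   # (['㇛', '㇒', '一'], ['ma3'])
```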
StellaAthena#3530: TIL: OpenAI owns a trademark for GPT https://alter.com/trademarks/gpt-97733259
Maximum Limelihood Estimator#8915: RIP GPT-NeoX
StellaAthena#3530: Filed December 27, 2022
StellaAthena#3530: A bit late, lol
Maximum Limelihood Estimator#8915: (Unironically though, you might want to change the name to avoid a lawsuit or something. IIRC trademark lawsuits can still be filed even if they get the trademark after you come up with the name)
Kharr#7888: Time to welcome generic Decoder models
jrowe#5371: prior art covers gpt-neox, and it's likely a very slippery slope for them to attempt to defend anything outside a very narrow set of GPT-N type cases
Sphinx#2092: Not aware of what the best practice is. Though I suspect as long as it's lossless it should be okay.
voxs#0001: openai doesn’t strike me as very litigious
StellaAthena#3530: That is not the case
jrowe#5371: I'm sure EFF and other open source advocates would *love* to defend EAI, so OpenAI probably wouldn't go there unless someone tried to release GPT-5 or w/e
StellaAthena#3530: We’ve gotten many legal threats and cease and desists from them
jrowe#5371: it's all peace and love until the lawyers come out ><
voxs#0001: i thought eai and openai had a decent relo
voxs#0001: relationship
Maximum Limelihood Estimator#8915: Prior art is patent law, not trademark law
ilovescience#3282: reminds me of when they actually didn't file a trademark for DALL-E, but Boris did for DALL-E mini, but they still threatened Boris and then filed a trademark shortly after lol
jrowe#5371: Makes it really hard to be charitable to the organization and its views when they lean into the worst of corporate behaviors
ilovescience#3282: i think the researchers probably have a good relationship, legal not so much lol
jrowe#5371: even Altman has his moments of relatable humanity or geeking out, etc. lawyers are killjoys
ephemical#2302: that's so ironic. they're called openAI
Maximum Limelihood Estimator#8915: I actually think this is reasonable. Trademark law is about preventing confusion, not about ownership. If people see software called "GPT," there's a very good chance they'll think it's an OpenAI product. And besides that, if they don't sue, they lose the trademark
Maximum Limelihood Estimator#8915: If I created a product, called it GPT-4, and then overnight it became way more popular than the real GPT-4, OpenAI could lose their trademark to me because trademarks are about making sure people aren't confused by product names, not about who came up with it first
jrowe#5371: doesn't it matter that generative pretrained transformers and GPT are a term of art in machine learning, and the wide range of nano-gpt, gpt-neo, gpt-j, and other models out there use it as such?
kd90138#9368: Well, there are individual efforts toward what you describe. One issue is that they don't generalize across languages. As a consequence, they don't scale
jrowe#5371: you can't call your video product MPEG , etc
Maximum Limelihood Estimator#8915: It might! You'd have to take it to court
kd90138#9368: Even in the Korean language there are morpheme- or grapheme-aware tokenizations or encodings
kd90138#9368: They are not applicable to other languages at all
kd90138#9368: To the extent that you need separate tokenizers per language
jrowe#5371: hopefully EleutherAI ends up on the the winning side of that lawsuit, whenever it happens, or maybe OpenAI will just be chill
Maximum Limelihood Estimator#8915: IDK but my guess is they wouldn't (although IANAL, massive grain of salt), because GPT is cemented firmly in public consciousness as "The OpenAI thing"
kd90138#9368: Early Korean encodings started out as composable syllable units (ㄱ,ㅏ instead of 가)
ephemical#2302: OpenAI wouldn't want to appear to be too hostile to the community. That'd be bad optics.
kd90138#9368: Practically speaking, they are now compost
jrowe#5371: lol
Maximum Limelihood Estimator#8915: Going green by composting the Korean encodings
Maximum Limelihood Estimator#8915: Did they stop doing that? I thought Korean blocks were regular/predictable. If I were writing a Korean tokenizer that's definitely how I'd start (decompose the characters phonetically, insert syllable breaks)
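For what it's worth, the precomposed syllable blocks are fully regular: every syllable sits at 0xAC00 + (lead * 21 + vowel) * 28 + tail, so decomposition into jamo is pure arithmetic. A minimal sketch:
```python
CHOSEONG  = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"                    # 19 leads
JUNGSEONG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"               # 21 vowels
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 tails + none

def to_jamo(text: str) -> str:
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                     # precomposed Hangul syllable block
            lead, rest = divmod(code, 21 * 28)
            vowel, tail = divmod(rest, 28)
            out += [CHOSEONG[lead], JUNGSEONG[vowel], JONGSEONG[tail]]
        else:
            out.append(ch)                        # pass everything else through
    return "".join(out)

print(to_jamo("가방"))   # ㄱㅏㅂㅏㅇ
```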
Chad Kensington#9564: Sound wise i think composable
ephemical#2302: my impression is that the korean characters are phonetic
jrowe#5371: 퇴비
Maximum Limelihood Estimator#8915: Yes (actually, *very* phonetic) but IDK how the blocks work out
ephemical#2302: I have no idea if that's entirely accurate
Maximum Limelihood Estimator#8915: No, 퇴비 *you*!
StellaAthena#3530: Yikes. Is that the timeline of what happened there?
Maximum Limelihood Estimator#8915: seriously though what's the relevance of this
kurumuz#5695: i mean isnt DALL-E clearly the first existing product there? not that i am supporting bullying behaviour
jrowe#5371: its the korean word for compost
jrowe#5371: toebi
kd90138#9368: It is one word yes
kd90138#9368: A very polite one.
Maximum Limelihood Estimator#8915: I mean, again, I find it hard to call it bullying when they lose the trademark if they don't do this
kd90138#9368: Switching tokenizers on the fly isn't much issue for non nmt systems
Maximum Limelihood Estimator#8915: NMT?
kd90138#9368: But for DNNs it's difficult due to embeddings
kd90138#9368: Neural machine translation
Maximum Limelihood Estimator#8915: ahh
kd90138#9368: Many rule-based systems can and do use multiple tokenizers
kd90138#9368:
kd90138#9368: M2M supports 100 languages. Nllb 200
kd90138#9368: Many languages does not have extensive tokenizer research
kd90138#9368: Wth is this emoji and how did i send it
kd90138#9368: Mobile discord is crazy
voxs#0001:
jrowe#5371: autocomplete and markdown = hilarity
Maximum Limelihood Estimator#8915: Sure, but you can do tokenizers for like the top 10 and classify almost 80% of writing on the internet, then do something basic like BPE on the rest. Not to mention a lot of this work can be duplicated across languages--a tokenizer for Russian is already close to being a basically-functional tokenizer for Belarussian
Maximum Limelihood Estimator#8915: The embedding problem is definitely more difficult
Technobird22#2055: Wow, didn't know that either. TIL too.
Technobird22#2055: (oops sorry; forgot to disable the ping in the reply)
Maximum Limelihood Estimator#8915: (And even unrelated languages can have a lot of structural similarities from their morphology.)
Really, the fact that raw BPE+whitespace splitting works so well in English is mostly a function of it being a very analytic language, with the few exceptions (e.g. verb conjugation) being very regular. IME GPT-3 completely sucks at Spanish (GPT-4 seems to do fairly well but they haven't released any information about their tokenizer)
Chad Kensington#9564: Interesting i assume then that bpe sucks for chinese? Since it's prob the most different language to english
kd90138#9368: Depends on what you expect from the tokenizer though.
ChatGPT has an incredibly flawed tokenizer (100k vocab, mostly English BBPE)
but it does a good job, especially multilingually
kd90138#9368: also much of the English script is encapsulated in the ASCII part of BBPE encodings
kd90138#9368: XLM-R(500k) and XLM-V (1million) are examples of LARGE multilingual tokenizers
kd90138#9368: they are really good but the tokenizer + embeddings ARE the model
kd90138#9368: XLM-V which outperforms many other multilingual models in NLU has 93% of learnable parameters in the embeddings
kd90138#9368: Bard-LaMDA has an unknown tokenizer, but Bard-PaLM likely takes from PaLM (SPM, 256k vocab, multilingual trained)
kd90138#9368: in an interview google researchers claimed that even though bard's training corpus had little bengali in it (not none) prompting it with a bit of bengali was enough to become fluent in it
kd90138#9368: What "fluent" means and how well it handles the script remains to be seen, but they stated that they made a whole new team around it.
ogkalu#7841: https://llava-vl.github.io/
ogkalu#7841: apparently training a simple linear layer to project CLIP embeddings into word embedding space for a LLM just...works. lol
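Roughly what that looks like, as a hedged sketch: the dimensions are placeholders in the LLaVA ballpark (CLIP ViT-L patch features projected into a LLaMA-7B-sized hidden state), and the frozen CLIP and LLM pieces are omitted:
```python
import torch
import torch.nn as nn

clip_dim, llm_dim = 1024, 4096             # assumed dims: CLIP ViT-L features -> 7B LLM hidden size
projector = nn.Linear(clip_dim, llm_dim)   # the only trainable piece in this setup

image_feats = torch.randn(1, 256, clip_dim)   # (batch, patches, clip_dim) from a frozen CLIP
image_tokens = projector(image_feats)         # (batch, patches, llm_dim)
# These projected "visual tokens" are concatenated with the text token
# embeddings and fed to the LLM as if they were ordinary tokens.
```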
mahouko#7043: imagine what we could do with two linear layers
StellaAthena#3530: Yup!
Chad Kensington#9564: That's awesome
Kizpman420#2438: is there a reference section
for the gpt-j?
StellaAthena#3530: https://arxiv.org/abs/2209.15162
KublaiKhan1#6681: Mathematically, the same as one :P
KublaiKhan1#6681: Unless you were already making that joke
Chad Kensington#9564: I think if u do 2 linear layer with act you will have info loss
And even without act im guessing it will be low rank maybe
ogkalu#7841: Missed this. Thanks!
Maximum Limelihood Estimator#8915: DUNA
Technobird22#2055: How does LLaVA compare to minigpt4?
ogkalu#7841: So far i think it's better but i haven't done too much testing.
Technobird22#2055: another day... another model
Chad Kensington#9564: minigpt4 has a queue of 189 so haven't tested much. but I'm guessing it's a bit slower
Technobird22#2055: I'm trying with running minigpt locally
Technobird22#2055: from first impressions, LLaVA seems more promising
Chad Kensington#9564: it claims sota
Chad Kensington#9564: https://cdn.discordapp.com/attachments/729741769738158194/1098065456142757908/Capture.JPG
Chad Kensington#9564: https://cdn.discordapp.com/attachments/729741769738158194/1098065605556437083/Capture.JPG
Technobird22#2055: the problem is that these models are coming out too quickly
Technobird22#2055: like, minigpt4 is so new it isn't covered here
Technobird22#2055: I'll give it a try... bigger problem is that I am rapidly running out of disk space 😂
Chad Kensington#9564: haha try the demo
Chad Kensington#9564: it's like instant
Chad Kensington#9564: just 3 seconds for me
Technobird22#2055: Yes, it's pretty fast, actually
Technobird22#2055: Just the generations take a while.
Technobird22#2055: 🙃 need more RAM https://cdn.discordapp.com/attachments/729741769738158194/1098066972262010880/image.png
Chad Kensington#9564: oh dang
Chad Kensington#9564: well at least cpu
Technobird22#2055: lol
Chad Kensington#9564: https://www.phind.com/
Chad Kensington#9564: it does citations too!
Chad Kensington#9564: and pretty fast
Chad Kensington#9564: actually that's not open source unfortunately
Kharr#7888: This thing looks like a GPT3/4 clone based on the outputs.
ogkalu#7841: it's finetuned on gpt-4 output
Kharr#7888: It's also using Vicuna as a base which is trained on ChatGPT conversations.
artem9k#7593: linear layers in general are pretty underrated. Oftentimes you dont need anything larger
eefp6#6574: how feasible would it be to train a transformer model "iteratively"?
eefp6#6574: for example, say you take a fully-trained model, cut it in half, precompute the output of the left half for the training data, double the layers of the right half, then train the right half only
eefp6#6574: now you just repeat the process
eefp6#6574: obviously this would be less efficient than training the whole thing as a whole, but would it work?
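One reading of that procedure, as a sketch (names like `make_block`, `train_fn`, and `cache_fn` are hypothetical placeholders); whether this matches end-to-end training is exactly the open question:
```python
import torch.nn as nn

def grow_and_train(model_layers, make_block, train_fn, cache_fn, dataset):
    """model_layers: a trained list/ModuleList of transformer blocks.
    Freeze the left half, precompute its outputs once over the dataset,
    then replace the right half with twice as many fresh blocks and
    train only those on the cached activations."""
    half = len(model_layers) // 2
    left = nn.Sequential(*model_layers[:half]).eval().requires_grad_(False)
    cached_inputs = cache_fn(left, dataset)            # precomputed left-half activations
    right = nn.ModuleList(make_block() for _ in range(2 * (len(model_layers) - half)))
    train_fn(right, cached_inputs)                     # train the (doubled) right half only
    return list(model_layers[:half]) + list(right)
```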
main#7610: llava & minigpt-4 seem to mostly be good because of the effort taken in developing their datasets
both of them make use of gpt-assisted dataset expansion to long-form descriptions. the idea of connecting CLIP to an LLM is pretty old, and I suspect both openflamingo & blip2 would outperform llava/minigpt-4 if the same synthetically augmented datasets were used
main#7610: (by my estimate, it would take $xx,xxx to reproduce the 600k cc3m gpt-4 mutated dataset used by llava, so I really hope MSFT just releases the full training data)
Maximum Limelihood Estimator#8915: Where do I go to talk about hardware?
Technobird22#2055: Quick question -- does anyone know what this massive file in the checkpoint is? https://cdn.discordapp.com/attachments/729741769738158194/1098144081898647592/image.png
sekstini#0069: are you running the LLaVA training script?
mahouko#7043: I'm aware they're mathematically equivalent (and that works as a joke too) but I was implying there'd be a non-linearity between them
mahouko#7043: is it better to call them dense layers if you want to imply they have an activation?
kram#1032: @StellaAthena if I may be Reviewer #2 for a second, I think the graphs on dimensional collapse could be a bit better (Fig 11 and so on) - it would be nice to have a vertical line for where the sharp drop occurs.
I'd also argue it'd make sense to investigate this dimensional collapse for your smaller models as well, just to see whether it's the changed objective or the changed architecture or both that does that. (I'm guessing it's the objective, but since you got those models available, might as well check)
You also only show text collapse and shared collapse, but not image collapse. (With the clustering ones you do both)
It's also interesting that CLIP *immediately* has an initial collapse that CLOOB does not (in exchange for that occurring later in CLOOB).
I guess that's basically the idea of CLOOB's *global* dimension being higher overall?
And it looks like the text embeddings are more collapsed than the image embeddings, which is particularly strange. It seems to me the image dimensional collapse is relatively fine...
And in the shared clustering, CLIP seems to collapse much more for ood COCO whereas CLOOB collapses more for the *in-distribution* dataset LAION. I'm not sure that's right? Did the graphs get swapped around or is that the actual result? (Fig 15) - CLIP's COCO graph looks suspiciously like CLOOB's LAION graph... I suspect you accidentally used the same graph twice?
All of this also makes me wonder whether there is some way to get "the best of both worlds", with good utilization of both local and global dimensionality... https://cdn.discordapp.com/attachments/729741769738158194/1098160579279061072/image.png,https://cdn.discordapp.com/attachments/729741769738158194/1098160579526529034/image.png
Kurt#5353: Hello guys,
I've installed **gpt2-xl** but it always responds with a joke to everything I say.
Even with the temperature at 0.1
Any idea why, guys? Thanks in advance
cagrik#5355: I heard that Meta's LLMs were leaked. I wonder if it is ethical to use models derived from those leaked models; does anyone know the stance that Meta is taking?
StellaAthena#3530: They’re sending out DMCA takedowns to people hosting the weights
cagrik#5355: thank you for answering
sekstini#0069: @cagrik you may find https://www.together.xyz/blog/redpajama interesting. tl;dr it's an effort to recreate and opensource the LLaMA training data and models.
Chad Kensington#9564: is this the thing you were thinking about with non-sphere representations? https://arxiv.org/pdf/2304.09172.pdf
kram#1032: Oh neat!
kram#1032: Based on the title: *sorta*
kram#1032: Hyperbolic is just part of the picture
kram#1032: Hyperbolic is best for hierarchies. But you'd want some spherical stuff too, for stuff that's like laterally related rather than hierarchically
skrishna55#3382: hey folks, has anyone fine-tuned GPT-2 on ruin names big-bench dataset?
skrishna55#3382: The performance without fine-tuning is quite bad, wonder if finetuning GPT-2 on this dataset helps?
jimtalksdata#8918: Llava looks interesting. Let's say that I have 1 TB of microscopy data (like 200K JPGs) that I want to train it with. How would I finetune it and what resources would one need?
spirit-from-germany#1488: https://www.forbes.com/sites/hessiejones/2023/04/19/amid-growing-call-to-pause-ai-research-laion-petitions-governments-to-keep-agi-research-open-active-and-responsible/?sh=2b5f603162e3
Kurt#5353: Ahaha! Asking all people on earth to stop their own development projects to work on a common one.
Kurt#5353: If I understand it properly, with my French.
Kurt#5353: But if that's it, nice try!
Kurt#5353: **" One API to rule them all and one API to bind them. "**
Kurt#5353: 🙃
kram#1032: I think my ideal version of this would sorta be an "ultrahyperbolic" embedding which should roughly be a continuous version of an arbitrary graph, with *infinitely many* embedding dimensions *but* it's sparsified such that only, say, 128 dimensions may be filled at a time for any given datapoint.
That way it ought to be able to represent arbitrarily complex graphs in a natural symmetric way, so long as the local graph dimension doesn't surpass 128 (or whatever you choose for this)
But I'm very glad a hyperbolic attempt exists now. It makes a lot of sense for sure
Kurt#5353: **" Letter signed by Elon Musk demanding AI research pause sparks controversy "**
Yeah sure, Mr. Tesla Robot.
Kurt#5353: And now he's starting to create his own GPT
Kurt#5353: https://tenor.com/view/denzel-training-day-what-a-day-gif-13961413
StellaAthena#3530: You have been persistently misrepresenting the contents of the letter. It’s really frustrating to see this happen. Did you actually read it? It’s not advocating for a pause on AI research and it would have no impact on the people you claim would be “paralyzed by fear.”
StellaAthena#3530: It specifically and exclusively argues against the continued and unconstrained race to develop and deploy larger and more powerful models than those already in production (using GPT-4 as a high water mark). It also advocates for extensive investment in AI research not aimed at producing more commercial products but rather aimed at producing better understanding and governance of the technology we already have.
StellaAthena#3530: Started a thread.
kram#1032: Lol.
That would be an interesting consequence of carefully filtered datasets. It's the new "professional high-quality" I guess https://cdn.discordapp.com/attachments/729741769738158194/1098260492880982066/image.png,https://cdn.discordapp.com/attachments/729741769738158194/1098260493145214986/image.png
kd90138#9368: I got access to bard and claude
kd90138#9368: I think i can invite people to claude
kd90138#9368: If anyone interested in access lmk
Chad Kensington#9564: lmao missed that
Tinytitan#5596: cus its predicting the next word, not answering your questions.
ilovescience#3282: https://stability.ai/blog/stability-ai-launches-the-first-of-its-stablelm-suite-of-language-models
Kurt#5353: Oh ! I get it ! Thanks my friend
Kurt#5353: So what's the matter? I don't understand it properly. Is the problem the fact that AI is becoming more powerful and intelligent?
Kurt#5353: Because if that's it, I don't see the problem.
Kurt#5353: But maybe I'm wrong; I'm going to search for a French article
skyler1#6603: Hey! I was wondering why there are no multiple sizes of GPT-J that are pretrained. Is it not considered to be on par with Pythia architecture wise?
Kizpman420#2438: is there a cpu limitation on the gpt-2 large? i keep getting exit code 132 on ubuntu
i got 32 gigs of ram...so my only other thought is my cpu isnt fast enough lol
Germanita#1530: Why the CC BY-SA license?
StellaAthena#3530: As opposed to what?
Germanita#1530: Apache 2.0
StellaAthena#3530: I wouldn't read much into it. They're both extremely standard open source licenses
Maximum Limelihood Estimator#8915: Specifically, that we have no understanding whatsoever of how AI systems much smarter than humans will behave, so we should wait to build them until we know they're safe.
No currently-existing AI falls into that category, but GPT-5 or GPT-6 might.
skyler1#6603: AGI will not necessarily be smarter than humans and language models can not by themselves evolve above AGI so we are fine lmaooo
At the end of the day, intelligence is just a means of defense for us as a species, just like cats have claws and lions have teeth, when ASI arrives i don't expect that humans will be obsolete since self awareness is not a mathematical property that can be solved through any sort of function, including ASI, so yeah the core property of our existence is here to stay haha
Besides i think Elon Musk just requested for the AI pause so his "X" abomination catches up with OpenAI xD
StellaAthena#3530: Whatever your beliefs, the core question is "is the current headlong rush into accelerating the development of this technology good?" Christoph represents an organization whose position on that question is "yes." Most of this community disagrees.
skyler1#6603: Ahh i disagree with it too
skyler1#6603: Hopefully their lowkey-misinformation campaigns don't impact the development of neox
StellaAthena#3530: The letter he's against (and authored a counter-letter about) is not about stopping AI research. It's very specifically about pausing the development of super-GPT-4-capability models and focusing on research on interpretability, ethics, alignment, and governance instead
skyler1#6603: Oooh i see
skyler1#6603: EleutherAI is my favorite company so I'm with yall no matter what haha
Maximum Limelihood Estimator#8915: Does most of this group disagree? I feel like you'd get a slight majority against capabilities research if you polled
monkmartinez#2356: It took less than a year!!!! https://discord.com/channels/729741769192767510/729741769738158194/1014209261171126272
StellaAthena#3530: Isn't that what I said?
Maximum Limelihood Estimator#8915: You said the opposite (a majority disagreeing with the letter), unless you want to be more specific about whether the letter is a good idea for x-risk specifically
StellaAthena#3530: Oh scope ambiguity. I meant that we disagree with Christoph
Maximum Limelihood Estimator#8915: Ahh got it, I think me and @skyler1 misread it
skyler1#6603: yeah at first i didn't get it, my brain is a pudding rn from messing with architectures all day today
Maximum Limelihood Estimator#8915: NeoX was completed last year, so no worries about that, and I don't think many people in this group would support building something bigger than GPT-4; so it shouldn't affect our research work
acul3#9489: can't wait to see the details of the 1.5T token dataset
i hope it contains multilingual data
skyler1#6603: A new version of the Pile is coming out?
monkmartinez#2356: What do you mean bigger than GPT-4?
skyler1#6603: I doubt anything larger than GPT-3 in terms of parameters would address the problems faced with LLM tbh
Maximum Limelihood Estimator#8915: What do you mean--just scaling parameters, or scaling compute+data as well
acul3#9489: the Pile v2 is coming, CMIIW
but i think Stability uses a different dataset(?)
skyler1#6603: i believe the optimal amount of parameters is perfected and a model's reasoning capabilities won't really increase if you were to scale something above what GPT-3 is (ofcourse models can be improved in different ways so im referring to the amount of parameters specifically)
skyler1#6603: oh nice
skyler1#6603: Does EleutherAI have a twitter
StellaAthena#3530: twitter.com/aiEleuther
skyler1#6603: Im gonna make an account to get the news if thats the case
skyler1#6603: OH TY
Maximum Limelihood Estimator#8915: Right, so I guess I'm asking which of these you're holding constant:
1. Data
2. Epochs
3. Compute
If you hold compute constant, and we assume compute-optimal parameter count and single-epoch training, increasing the parameter count won't do anything: we'd have to show the model less data to stay on the same budget, which would almost perfectly cancel out the gains from more parameters
skyler1#6603: trick question haha almost got me there
bolo#3359: What's going on with the stability llm? I heard they're only half done with training because they ran into some overfitting issue with duplicate data?
skyler1#6603: i tried it and it didn't seem as good as pythia to me tbh
skyler1#6603: though as you said its probably not ready yet lmao
skyler1#6603: where did you hear this btw
rallio#9917: is this just a gut feeling or spontaneous belief or is there some logic or experimental evidence behind it
What should everyone call you?#2680: The Big Yud is holding a Q&A on Zoom. Here's the link https://us02web.zoom.us/w/87133402997?tk=2PXDZVM-GiIP65Nip3Ob95uquynxaQpUNU1NW-Cv7eE.DQMAAAAUSY43dRZJMmhmem5XTFNDLUVFakYxNGlSOUt3AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
Alternatively, you can join the livestream on Youtube here: https://www.youtube.com/watch?v=3_YX6AgxxYw
skyler1#6603: There is a lot of logic and evidence behind it and it's a view also shared by the ceo of OpenAI
skyler1#6603: language models cannot achieve anything beyond their training data (which is still great on its own, don't get me wrong) but it's not rational logic
rallio#9917: Could you share some of it please
skyler1#6603: Alright
rallio#9917: I mean the logic
skyler1#6603: https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/
skyler1#6603: wdym
skyler1#6603: Oh no i meant it as in that the models don't really perform rational logic
rallio#9917: The logic suggesting larger models won't perform better or have expanded capabilities
skyler1#6603: Have you seen a model perform anything beyond the scope of the datasets it has been trained on?
rallio#9917: Can you provide your argument in the affirmative please. You are the one making the assertion
rallio#9917: I am just asking for evidence
skyler1#6603: I do not understand what exactly you are questioning but let me re-read the conversation
rallio#9917: I replied to the specific comment a few messages back that is what I'm talking about
skyler1#6603: Language models are trained on large amounts of text data to learn patterns and relationships between words and sentences.
However, they cannot perform anything beyond the data they were trained on because they do not have the ability to generate new information that is not already present in the training data. LLMs can only generate text that is similar to what they have seen before and cannot produce novel content that goes beyond their training data. This limitation is due to the fact that LLMs lack true understanding or comprehension of language, and simply rely on statistical patterns in the data to make predictions about what comes next in a given sentence or text.
My assertion is that there is no benefit in further increasing the size of GPT-3, as the model has already attained sufficient size to comprehend and learn the relationships possible in a law-of-physics governed universe (lol). Any attempts to perform logical operations would be better addressed by higher-level programmers and won't require the re-imagining or scaling of current models
rallio#9917: part of why I am curious is because your claim goes against basically all the experimental evidence that has been published on the topic that I'm aware of
skyler1#6603: Not at all
Sphinx#2092: Lol
skyler1#6603: There is huge benefit to increasing the size of LLM
skyler1#6603: but to a certain point
rallio#9917: Well there are business and practical constraints
rallio#9917: But even your comment about the laws of physics, as if the laws of physics were a completed project like arithmetic, seems peculiar and disconnected from the real practice of physics
skyler1#6603: No, by the laws of physics i intended to paint a picture. That real world logic has certain boundaries and relationships and those can sufficiently be captured by models of 170 billion parameters
rallio#9917: how do you know?
Sphinx#2092: I look forward to reading that paper.
Sphinx#2092: Finally, someone building world models
skyler1#6603: LMAO
skyler1#6603: If you work in DeepMind you are probably aware of this situation, google has developed multiple models and some of them, if i'm not mistaken, even crossed the 1 trillion parameter mark
rallio#9917: yes and the papers clearly state and demonstrate that capability and performance is greater with larger models
skyler1#6603: It is! Absolutely. There is no comparison between a model with 1B parameters and 100B parameters
skyler1#6603: but a model with 175B parameters and a model with 1 trillion parameters will be almost identical
rallio#9917: again this is different than releasing a product where you are constrained by inference costs and physical hardware
rallio#9917: how do you know?
skyler1#6603: It's not only about inference costs
rallio#9917: you keep slipping in assertions, but what is the evidence for those claims
skyler1#6603: Look i have sent you an article of the OpenAI ceo supporting this argument as well
skyler1#6603: If you don't want to believe me you don't have to, you can go and do your own research on your own
sekstini#0069: Appeal to authority isn't an argument
skyler1#6603: LMAO
rallio#9917: especially since what Skylar says is the opposite of what ilya says and all the research
skyler1#6603: I am not contradicting any research
rallio#9917: I think you are conflating the business application with the research question and the capability question, where much greater capability is useful in some circumstances
skyler1#6603: A 175B parameter model will outperform a 1B parameter model but a 175B parameter model will not necessarily outperform a 300B parameter model
rallio#9917: in most circumstances people find no difference between chatgpt and gpt4
skyler1#6603: I'm speaking about AGI, are you referring to ASI ?
skyler1#6603: ASI sure, will have even larger models and of different kinds
skyler1#6603: But AGI is human-level and not beyond that
StellaAthena#3530: @skyler1 please provide concrete evidence for your claims. I strongly suspect that there's something fundamental that you're confused about or are miscommunicating, but I'm not actually sure what it is.
rallio#9917: I'm just saying all the evidence I've seen is:
```
more params -> more performance
more data   -> more performance
more epochs -> more performance (up to a point)
```
rallio#9917: all those aren't meant to rank, just saying in each case more gives more
kurumuz#5695: a 1B parameter model absolutely can outperform a 175B model
kurumuz#5695: you need to specify what data regime
skyler1#6603: No way in hell
kurumuz#5695: ?
rallio#9917: that is the instructgpt paper from 1.5 years ago
StellaAthena#3530: T0 and flan-T5 are 11B models that do. The InstructGPT paper claims a 6B model that does. I'm not aware of any actual examples at precisely 1B params but it's quite plausible.
skyler1#6603: The more parameters a language model has, the better it generally performs. However, there comes a point where the increase in performance is not as significant and the necessary relationships within a dataset can already be captured extremely well. The difference in performance between a 175B parameter model and a 300B parameter model is not as significant as the difference between a 1B parameter model and a 400M parameter model.
kurumuz#5695: here is a 3B model matching a 175B model, 3B trained for 1.4T tokens and 175B for 100B
```
model_size: 3.0B, data_size: 1400B => loss: 2.096789138102505
model_size: 175.0B, data_size: 100B => loss: 2.092744723117801
```
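Those numbers are consistent with the Chinchilla parametric fit L(N, D) = E + A/N^α + B/D^β, using the constants reported by Hoffmann et al. (2022); a quick check:
```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    # Fitted constants from Hoffmann et al. 2022 ("Training Compute-Optimal Large Language Models")
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

print(chinchilla_loss(3.0e9, 1.4e12))   # ~2.097
print(chinchilla_loss(175e9, 100e9))    # ~2.093
```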
kurumuz#5695: i don't see what's so surprising about this
kurumuz#5695: at a certain data regime you will get smaller models outperform bigger ones
kurumuz#5695: chinchilla predicted this and LLAMA proved it is the case by training quite small models for many tokens
skyler1#6603: The amount of parameters in a 1B parameter model is not enough to capture relations within a dataset of a certain amount of tokens and beyond
rallio#9917: I think the nature of this discussion is about whether or not additional useful capabilities may be found at scales larger than 175b
kurumuz#5695: and what is your proof for that when we literally have scaling laws exploring this
kurumuz#5695: you are making stuff up
rallio#9917: I think Google already showed with PaLM 540B that it's true, so it's already debunked by that year-old work
kurumuz#5695: you can train a 13B model that's a lot better than base davinci
rallio#9917: whether you need a 540billion param bazooka to make a chocolate chip cookie recipe probably not
skyler1#6603: Can you make a 1B parameter model, given a LoRa adapter, to outperform or even be on equal footing with GPT-3?
kurumuz#5695: why would i do that with LORA
rallio#9917: I think that is a research question that is unanswered right now skylar
skyler1#6603: Because otherwise you are not proving that your model is holding information since you are destroying them in order to get them out
rallio#9917: we dont know what the true capacity of smaller models are
kurumuz#5695: ok
skyler1#6603: a LoRa would be a must for this experiment
kurumuz#5695: lora is truly a must :berk:
skyler1#6603: Alright kurumuz does not know what they are talking about
rallio#9917: it seems, to the extent they are compressing the training data, that there will be some lossy compression at some point
kurumuz#5695: i surely dont
skyler1#6603: Hmmm
skyler1#6603: That is not a bad argument tbh
rallio#9917: whether lossy compression is just lossy noise or abstraction I think still is research question
rallio#9917: there is a lot of speculation about the size of chatgpt
skyler1#6603: ChatGPT is 175bn parameters
skyler1#6603: + a Lora
rallio#9917: I dont think that is true
rallio#9917: at the very least it hasnt been publicly stated so far as I know
kurumuz#5695: turbo is likely a lot smaller model
skyler1#6603: ChatGPT is GPT-3 with a Lora
rallio#9917: davinci on the playground costs much more than chatgpt
skyler1#6603: Yeah true but there are many things that come into play when it comes to inference costs
sekstini#0069: Can you provide any credible evidence for this?
skyler1#6603: It's so easy to tell lmao
skyler1#6603: Doesn't take a genius
mahouko#7043: wat
kurumuz#5695: its so easy fr fr
skyler1#6603: LoRa is the only non-destructive way of getting the training data out
rallio#9917: I think we could probably find out how big chatgpt was by looking at the lowest latency result for 1000 token generation
skyler1#6603: besides they are hinting towards this
rallio#9917: since it spits out the result instantly
kurumuz#5695: idk whats up with AI discords and getting delusional people all the time
skyler1#6603: that's true haha
skyler1#6603: but we would have to know which GPU it's using right?
skyler1#6603: And the speed varies
rallio#9917: no
skyler1#6603: You literally just said that a 1B parameter model can outperform ChatGPT
skyler1#6603: At least rallio makes sense
kurumuz#5695: i said a 1B parameter model can outperform a 175B model, which it can
ilovescience#3282: and you claimed ChatGPT is GPT-3 with LoRA with literally no evidence
kurumuz#5695: didnt say anything about chatGPT
skyler1#6603: ChatGPT is a LoRa for GPT-3
sekstini#0069: False, if I initialize a 175B model with random weights it will 100% outperform any 1B model. It's obvious!
ilovescience#3282: where's your evidence
skyler1#6603: And GPT-4 gives you the capability of fitting different LoRa on it
skyler1#6603: It's hinted at, LoRa is the future anyway
kurumuz#5695: if you initialize a big enough model randomly a few million times you get consciousness
skyler1#6603: WHAT
skyler1#6603: NO
ilovescience#3282: where is this supposed "hint"
skyler1#6603: YOU DON'T
kurumuz#5695: what do you mean you don't
skyler1#6603: You didn't just say consciousness
kurumuz#5695: its so obvious
skyler1#6603: Define consciousness
skyler1#6603: What do you mean by consciousness ?
rallio#9917: Skylar, I don't want to pile on any more than I already have. I think we understand the basis for your opinion; I will disagree with it though, based on the large amount of reproduced evidence in the field over the last few years
rallio#9917: The big research question for this year is how much capability you can get at the very high end when not resource-constrained, and then what is the best performance per dollar that scales to large numbers of regular people
rallio#9917: both of those are unsolved right now in the public, I think. I suspect OpenAI has a lot of benchmarks internally though
StellaAthena#3530: @skyler1 I'm going to time you out now. This conversation is very much not being productive and you're repeatedly advocating for things that are widely known to be false without any evidence to support your claims. We expect a higher level of discourse than this.
saltedbone#1577: @StellaAthena You handled that well!
Maximum Limelihood Estimator#8915: OK so I think you might be misunderstanding something he said, which is why I want to clarify.
Basically, Sam Altman is not saying "If we add more parameters the model won't get better." He's saying “we can't keep adding more parameters, because the models are getting too big to fit on computers"
The actual relationship between parameter count and loss: specifically, the loss (error) follows a power law, x^-alpha for some constant alpha, where x is the number of parameters. It's important to note that there's no situation where the error from this stops going down, but there *are* diminishing returns. Usually, alpha is around 0.1, which means that if you increase your parameter count by 100% (doubling it), you get a roughly 100% * 0.1 = 10% improvement in capabilities.
Note that you could, theoretically, get to human intelligence or whatever other metric you'd like by just adding more and more parameters, though. There's no point where adding more parameters "Stops working"; doubling the number of parameters always increases the performance by 10%. But at some point you would have to devote all the world's computers to just running the model, nevermind training it. You can't keep doubling over and over again without going bankrupt. This is what Sam Altman is referring to--he's saying we're at the point where if OAI tried to double the number of parameters again, they'd end up bankrupting themselves just trying to pay server costs.
mahouko#7043: whilst I think scaling models shouldn't be our *first* choice: we should still have *some* exploration at the edges of our compute capability, because Moore's law may make today's extremes more feasible in future
rallio#9917: yeah there are practical limits based on physical realities and energy usage. the hope would be that a very large very powerful model could help solve difficult problems like alphafold
rallio#9917: things like materials science, catalysts, drugs, and some computer technologies like ZK proofs and cryptography, if they were much more widespread, could have gigantic benefits
skyler1#6603: I completely concur with the perspective shared. In addition, I would like to expand on the idea by suggesting we consider doubling the number of parameters to assess the potential returns. As Mr. Lime has stated, the returns on scaling language models beyond a certain point are diminishing, and I fully support this viewpoint. A 10% return is not worth doubling the number of parameters in an LLM.
However, it is essential to note that assumptions can be made based on healthy reasoning, even if not supported by concrete evidence. Such are the assumptions I made on the topic of ChatGPT. It is not always necessary to provide evidence for every statement made; logical reasoning and informed assumptions can also be considered.
In terms of ChatGPT, my assumptions are based on what OpenAI released about the training of ChatGPT, which bears a striking resemblance to the process of training adapters. Moreover, the availability of fitting adapter modules onto ChatGPT-4, the paid version, further reinforces this idea.
Having said that, it is essential to address the issue of being wrongly accused and feeling hurt by the response received. Engaging in constructive debate and challenging ideas is crucial, but it should be done in a respectful and productive manner. It is not acceptable for individuals to attack or belittle others, as you did towards me, in a childish manner.
One particular statement that I would like to address is the idea that consciousness does not emerge from scaling up the functions that constitute language models. While this statement may seem controversial, it is worth noting that there is currently no evidence to support the claim that language models, no matter how large or sophisticated, can achieve consciousness. Though hopefully kurumuz can prove me wrong in the future.
|
I still believe everything I have said today to be true. At the same time, I have been wronged and will not be participating in any sort of debates thrown at me
Maximum Limelihood Estimator#8915: Definitely; optimal parameter count scales roughly 10% every year for a compute-optimal model. Not saying the progress is going to stop, but there's a real place Skylar got their beliefs from
skyler1#6603: sorry I replied on your comment on accident
Kizpman420#2438: so, where do you go for documentation
Kizpman420#2438: anyone
Kizpman420#2438: nobody?
Kizpman420#2438: for any of the ai's eleuther represents..
skyler1#6603: What documentation ?
skyler1#6603: The models?
Kizpman420#2438: sure...
skyler1#6603: https://huggingface.co/docs/transformers/model_doc/gpt_neox this is GPT-NeoX on huggingface
Kizpman420#2438: ty
skyler1#6603: yw ^-^
Kizpman420#2438: skylar, are you from New Zealand?
Kizpman420#2438: knew a girl from there with a name like that from back in the day
Kizpman420#2438: is 72 gb of ram enuff to run neo?
skyler1#6603: Nono i live in europe rn
Kizpman420#2438: no worries
skyler1#6603: Which instance of gpt-neo?
|
skyler1#6603: also usually people advise against using GPT-Neo because the codebase was geared towards TPUs
Kizpman420#2438: GPT-NeoX-20B
skyler1#6603: ooh
skyler1#6603: around 46 gigabytes of vram should do it
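For reference, a minimal sketch of loading it via the Hugging Face docs linked above; the ~46 GB figure comes from roughly 20B parameters × 2 bytes in float16 plus overhead. This assumes the `transformers` and `accelerate` packages and enough combined GPU/CPU memory:

```python
# Minimal sketch: loading GPT-NeoX-20B via Hugging Face transformers.
# float16 weights alone are ~40 GB; device_map="auto" lets accelerate
# spill layers to CPU RAM if the GPU is too small (much slower).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/gpt-neox-20b"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.float16,
    device_map="auto",   # requires the accelerate package
)

inputs = tokenizer("EleutherAI is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```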
bob80333#4040: Some evidence for why chat-gpt is a smaller model is because openai's API access for chat gpt turbo is 10x cheaper than the full size gpt-3.5 model.
Kizpman420#2438: got it
skyler1#6603: Idk I expected more maturity from an AI server even if it’s on discord tbh I’m still shocked with how you guys acted like a childish mob. It will be so funny when ChatGPT goes opensource and y’all find out it was just an adapter for gpt-3 all along. And don’t let me start with how the literal ceo of novelai said that making language models larger is gonna give them consciousness. Such degree of immaturity like fucking hell. I’d accept it if y’all was stabilityai or some shit but come on I really didn’t expect this from you. I’m out
StellaAthena#3530: I’m pretty sure Kuru was being sarcastic, though I also struggle to tell with him sometimes.
Kharr#7888: It's obviously a joke... "initialize a big enough model randomly a few million times" :berk: no training required, just RNG
jrowe#5371: Ouch, serious person was seriously serious.
CKtalon#7792: What if the argument is more about emergence instead of just comparing loss (re: 1B vs 175B), i.e., perhaps 1B will not have enough emergent properties no matter the amount of training to reach what 175B is capable of with lesser amounts of data (still significant, but not to chinchilla optimal)
jrowe#5371: There seem to be thresholds, possibly the same ones, that are reached by increasing parameters, total duration of training, and number of tokens trained on
jrowe#5371: Increasing the number of parameters just means statistically that connection patterns will occur that provide a solution to a particular problem, the lottery effect
CKtalon#7792: yeah, I think that's something that's still quite unknown. And it's these outliers that kind of give the emergent properties?
CKtalon#7792: like how do you balance param size, data, and length of training
jrowe#5371: So with a big enough network, randomly instantiated, such a lottery effect might well produce superhuman performance
CKtalon#7792: or how do you invoke those outliers to 'appear'
jrowe#5371: But you're talking parameter counts in the septillions or higher, probably
jrowe#5371: Chinchilla optimality is part of that puzzle, but I don't think there's any comprehensive, well-understood theory that explains it
CKtalon#7792: To me, I don't really buy 'loss' being the be-all, end-all
|
CKtalon#7792: sure, it's a great metric to figure out well-trainedness
CKtalon#7792: or whether it has converged
CKtalon#7792: but for language ability/'intelligence', it's quite a poor metric
jrowe#5371: The intuitive take is that you're trying to get high order constructs to reinforce through training, and that more data allows the patterns of those high order concepts to arise frequently enough that the model captures them
jrowe#5371: So super complex abstract things like writing styles get mapped
jrowe#5371: More parameters means lottery effect, you might get lucky and some or all the parts of complex, abstract high order concept is already well represented by a random collection of neurons
jrowe#5371: Training allows the rest of the network to build itself around the h.o.c., and the more you start with, the better, even if an efficient network might be 10 or 100 or 1000x smaller
jrowe#5371: The magic of attention means that some problems can benefit from those random happy accidents, and language has a lot of those type problems
Chad Kensington#9564: Ok so kuru was wrong. You need to randomly initialize maybe 10 million times
CKtalon#7792: something something million monkeys hitting keys at random on a typewriter
KublaiKhan1#6681: I assumed kuru was referring to the lottery theory hypothesis paper
jrowe#5371: Right, and the birthday party problem demonstrates a piece of why a random instance of a big enough network will have features that just happen to solve whatever problem you're trying to model
CKtalon#7792: well.. BLOOM clearly didn't hit the lottery
jrowe#5371: <https://en.wikipedia.org/wiki/Birthday_problem>
jrowe#5371: Haha
rallio#9917: the issue is there is a tradeoff between generalization and memorization. humans absolutely have a memorization cap... i don't know what the number of named entities a human is capable of memorizing, but i would guess for most people its in the range of several thousand
rallio#9917: but as you train on larger and larger amounts of data for more epochs if the amount of data is >> the amount of trainable parameters you will run into a compression issue where you need to do lossy representation
rallio#9917: one of the best ways to see this kind of thing visually is actually with some of the old deepfake technologies like deepfacelab where they use a variational autoencoder to represent the faces in the training set, if you have a very large and very diverse set of faces you see how it will generalize visually
rallio#9917: especially as you get to very high number of training examples for a small model size
StellaAthena#3530: This is a cold take, FYI. It’s the best one-size-fits-all competency metric but that doesn’t mean it’s very good or captures specific things we might care about
|
CKtalon#7792: my point is just that chinchilla (based on loss) isn't the full picture
jrowe#5371: At some point there'll be a theory of intelligence that can be mapped to what's going on in transformers, and there will be clear and obvious things to point at that explain how and why they work
jrowe#5371: It's exciting to think that such a theory might show up soon, almost like knowing there's an Einstein with a new relativity out there
jrowe#5371: Maybe a 100k parameter model can do everything a 1T parameter model could do now
CKtalon#7792: Because from my experience, even when training much much smaller models (~300m params), even when the loss has basically reached an asymptote for SEVERAL epochs (and conventional thinking is that it has converged), continued training seems to subjectively improve the language ability of the model based on human evaluation.
I also don't buy catastrophic forgetting or memorization being a problem (in language models). I think I've been saying in this channel that memorization isn't bad in some cases. Some facts ought to be memorized, and that can prevent outright falsehoods from being generated. But it seems LLMs can't really memorize, or the amount of compute to reach that is way way beyond anybody's budget.
rallio#9917: I think so long as your argument is subjective in nature it will be difficult to convince a lot of people
rallio#9917: if you have an appropriately designed training and eval dataset (where the eval dataset is of the same distribution as the train, but definitely not contained in the train set) the next token loss should be a very good estimator of the models performance
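For reference, a minimal sketch of measuring that held-out next-token loss (and the corresponding perplexity) with a small Hugging Face model; the model name and eval text here are placeholders, not a real benchmark:

```python
# Minimal sketch: held-out next-token loss (and perplexity) for a causal LM.
# Model and eval text are placeholders; in practice the eval set should be
# drawn from the same distribution as training but disjoint from it.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/pythia-160m"  # small model just for illustration
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
model.eval()

eval_text = "The detective concluded that the murderer's name was"
inputs = tokenizer(eval_text, return_tensors="pt")

with torch.no_grad():
    # Passing labels=input_ids makes the model compute the average
    # next-token cross-entropy over the sequence for us.
    out = model(**inputs, labels=inputs["input_ids"])

print(f"next-token loss: {out.loss.item():.3f}")
print(f"perplexity:      {math.exp(out.loss.item()):.1f}")
```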
CKtalon#7792: yeah, don't really care. But when putting into production, such things matter after all.
rallio#9917: predicting the next word especially for longer contexts is about the closest thing to a general intelligence capability i think you will ever get a mathematical metric for that is relatively uncontroversial
rallio#9917: ilya sutskever talks about reading an entire book that is a murder mystery and the very last words of the book are
CKtalon#7792: but my point is benchmarks, metrics are all good rules of thumb, given limited resources, but to just only use them as the be-all, end-all to determine the 'theory of everything' would be a little shortsighted
rallio#9917: 'And the Detective concluded that the murderer's name was <mask>'
CKtalon#7792: if only we have a huge corpus of those kind of evaluations lol
rallio#9917: probably some people (maybe you would be one of them) would find a reason to complain about it not being valid for some reason or another
rallio#9917: one thing very undeniable about chatgpt is that many real users find it more useful than whatever was available before. whether that is the UI or the underlying performance or what i don't know
Ayo#9564: https://twitter.com/SmokeAwayyy/status/1648761816678559744
destrucules#7325: I have a question for y'all - to what extent do language models memorize training data during RL? I have a sense for this with autoregressive pretraining and finetuning but I'm not sure if it's similar with RL or not.
For example, if your RLHF dataset includes a bunch of test questions from an exam that's not in the pretraining or finetuning datasets, can that contaminate downstream performance on those exams? I'd expect it to, but it's not obvious to me if the sensitivity is similar to that of the autoregressive training steps.
|
StellaAthena#3530: AFAIK there’s zero systematic study of this (yet, we probably will soon)
vara2096#9598: Will LoRA fine-tuning work with DeepSpeed CPU offloading? I have LLaMA 33B running now (via 8-bit quantization + gradient checkpointing) but 65B won't fit in GPU RAM even so.
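Not sure about the DeepSpeed offloading part, but for reference, a minimal sketch of the 8-bit + LoRA setup being described, using `peft` and `bitsandbytes`; the checkpoint name and hyperparameters are placeholders:

```python
# Minimal sketch of LoRA on top of an 8-bit-quantized causal LM with peft.
# Whether this composes cleanly with DeepSpeed CPU offloading is exactly the
# open question above; this only shows the LoRA side.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
# (older peft versions call this prepare_model_for_int8_training)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-30b",      # placeholder checkpoint name
    load_in_8bit=True,           # bitsandbytes int8 quantization
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # enables gradient checkpointing etc.

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total params
```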
destrucules#7325: Good to know I have a good excuse for not having seen that research yet. I look forward to it.
Relevance: language models have poor self concept, maybe because they don't interact with themselves during training. But they do learn on their own outputs during RLHF, so I'm wondering how well they can retain information. Anthropic showed that language models can learn to evaluate the likelihood of their own outputs being correct with high accuracy if you few shot them. So if RLHF can produce a similar effect, they might get better and better at knowing their abilities and limitations.
Kurt#5353: People should stop watching Terminator or Avengers 2: Ultron.
For humans, powerful **AI** is an ethics problem, and that's so funny. Today we allow people to literally change their biological preset. Officially change their human mind to feel like animals, but that's not a problem at all, of course. After all, we are "free" and biology is nothing, right? So that's not an ethics problem.
But powerful AI to assist lonely people who work for themselves, for example, or even medicine, that's an ethics problem.
Kurt#5353: Before they talk about ethics, they should rethink and open their eyes to the world's real ethics problems.
Kurt#5353: We are not afraid of **AI**; it's a complex for humans.
Kurt#5353: Because we can't accept being less intelligent than a machine. It's a fact.
Kurt#5353: I don't want to start anything or any hateful discussion, just my opinion of course.
Kurt#5353: Evolution is always a good thing for humans, except when they literally can't "control" it or be better than it.
circuit10#0158: But if we can’t control it, powerful AI wouldn’t assist lonely people, it could end up killing us all
circuit10#0158: That’s not just based on science fiction films, it’s somewhat similar but you would expect that because science fiction tries to come up with a plausible prediction of the future
circuit10#0158: https://youtu.be/tcdVC4e6EV4
circuit10#0158: This is a really interesting video on why it would be dangerous
Kurt#5353: The risk of an AI is literally mistaken knowledge, a bug/glitch, or similar.
|
AI is not programmed with a specific moral code. Stop thinking AI will think with a human conscience.
That kind of video is always the same. It reminds me of people who say, **"If we program AI to protect humans, imagine if the AI wants to protect us from ourselves and erase humans."**
AI is programmed with a specific objective; it can be to protect and serve, or simply to answer questions.
Kurt#5353: It's like a dog: no dog is dangerous. Their masters are.
Kurt#5353: If an AI is a risk, it's because you can't handle your own creation.
Kurt#5353: You don't build a car without brakes.
punlight#4530: AGI will always have to be constrained by very strict interfaces which will actually make it ANI always
SecondMover#8029: Difference is, the dog is not as smart as the master. Well, usually.
Kurt#5353: What's the difference, or the problem with your reasoning?
AI will always be what you put into it.
circuit10#0158: Isn’t that what I said?
circuit10#0158: I can’t tell if you agree with me or not
circuit10#0158: AI will not have intrinsic morals that look anything like human morals, therefore it is dangerous
circuit10#0158: The mistake (well, it’s probably intentional for a better story) that science fiction stories make is that the AIs are too human
circuit10#0158: An AI will not intrinsically care about revenge or power unless we program it to
circuit10#0158: However, power will help it achieve its programmed goal, so it will still seek that
circuit10#0158: And humans will be in the way
circuit10#0158: So I think you have the right ideas but the wrong conclusion
|
circuit10#0158: Dogs are not superintelligent
circuit10#0158: And actually that’s not true
circuit10#0158: A lot of dogs are dangerous
circuit10#0158: The dogs that aren’t dangerous are that way because dogs are social animals like humans
circuit10#0158: So they have somewhat similar values to humans
circuit10#0158: An AI would not
circuit10#0158: And also because it benefits dogs to coexist with us
circuit10#0158: A superintelligent AI would not need us like a dog does
circuit10#0158: The video I linked explains it better than I can
Kurt#5353: Trust me, even the biggest dog, raised with other dogs in peace and love, will not be dangerous.
Kurt#5353: "Dangerous" is not even a real word here. Because if you want to call a dog dangerous because it has the ability to hurt, then we humans are the worst! At the top! Ahaha
Kurt#5353: Everything can have the ability to hurt, but it's not inevitable.
Kurt#5353: As I said, even the fastest and safest car can't be sold without brakes.
Kurt#5353: When you look carefully, AI is not even alive, so it doesn't have ANY conscience.
Kurt#5353: So you can put everything inside of it without any moral issue at the beginning.
Kurt#5353: I'd rather trust an AI than a human.
Kurt#5353: Conscience can create the worst evil mind.
Kurt#5353: AI is neutral if you don't teach it to be more... "human"
Kurt#5353: We shouldn't stop the research or work on it. We just have to be more responsible about what we want to create.
Kurt#5353: That's it, my opinion, friends!
|
Kurt#5353: https://tenor.com/view/chat-gpt-gif-14690479017210555424
circuit10#0158: This is the “why not raise AI like a child” argument
circuit10#0158: https://youtu.be/eaYIU6YXr3w
circuit10#0158: Dogs are social animals who will fit in with us if raised correctly and loved, but an AI does not have morals like that because they developed to allow us to form societies (or packs for dogs)
Kurt#5353: My friend.. You just answered and ended your own argument with that.. **"AI does not have morals"**
No morals, no conscience. Neutral action, just following instructions.
circuit10#0158: Exactly, so where’s the disagreement here?
circuit10#0158: AI does not have morals so it will have no problem with destroying us if it needs to
circuit10#0158: Conscience isn’t really relevant
circuit10#0158: At least it’s probably not
Kurt#5353: But.. My friend.. Why the f'ck do you think AI will destroy you if it doesn't have any morals, so no conscience, so a neutral opinion?
Kurt#5353: AI obeys instructions.
circuit10#0158: Because it’s hard to specify instructions that don’t result in our death if followed exactly
circuit10#0158: The video I linked at first explains that
Kurt#5353: But instructions are not data, friend.
circuit10#0158: It gives an example of an AI instructed to collect stamps that does its job very well and nothing else
circuit10#0158: Please watch it
Kurt#5353: I've watched the first few minutes and I seriously don't agree!
circuit10#0158: What part do you disagree with?
|
Kurt#5353: You think this because you have a human conscience.
circuit10#0158: No, the AI does not have morals or emotions, it’s a simple max function over all the actions that it could take
circuit10#0158: Which is extremely dangerous because the actions that result in the most, say, stamps, are not good for humans
circuit10#0158: The video explains this
Kurt#5353: But a neutral opinion, just following instructions, is still a simple computer.
Do you think that if you put data, data and more data into your computer, it will try to kill you?
circuit10#0158: If the AI is an agent that takes actions to achieve a goal, then yes, because most goals are not compatible with human existence when taken to the extreme
circuit10#0158: It doesn’t care whether humans exist or not
circuit10#0158: https://cdn.discordapp.com/attachments/729741769738158194/1098600431749431378/IMG_0687.png
Shmingmaster#3961: Its not "a big enough LLM sitting by itself will up and decide to kill you"
It's
"If you hook up a decision making box to a bunch of tools, it might make decisions you don't like, and if it can make smarter decisions than you, you'd better hope someone figured out how to put 'caring about humans' into that box"
And as you have said, we have not figured out how to put caring about anything specific into these boxes.
Kurt#5353: You still don't understand.
Kurt#5353: Being neutral doesn't leave you free to make a decision.
Kurt#5353: Let's be clear for one second.
circuit10#0158: If it has a goal, it is neutral to everything except that goal
circuit10#0158: It’s not fully neutral or it wouldn’t do anything
circuit10#0158: Or just have random output
Kurt#5353: Create an AI with safe instructions; where is the problem?
|
circuit10#0158: Because it wouldn’t make any decisions
circuit10#0158: That’s a lot harder than it sounds
Kurt#5353: Yeah, of course, if an AI is not neutral but created with a moral code, it CAN BE dangerous, yeah, I agree.
Kurt#5353: But that's not a research AI or assistant AI thing!
circuit10#0158: But it’s almost impossible to get an AI to be useful if it has no goal
Kurt#5353: And it doesn't have to get a limit.
circuit10#0158: The goal of an LLM is to correctly predict the next token, for example
circuit10#0158: It might be possible to make a really good LLM that doesn’t have a goal in the real world, however that’s still dangerous
circuit10#0158: I saw a hypothetical example somewhere where they say if you had a really good assistant LLM and ask it for a program that gets you a lot of paperclips it might write an agent AI that is dangerous
Shmingmaster#3961: I don't understand what you mean by this.
Kurt#5353: I'm sorry friend, I'm French, so.. I'm talking about the machine
circuit10#0158: I guess they mean if it has no goal or preference, it can’t freely make a decision to get to that goal? Which is probably true, but if it can’t make any decisions then it can’t do anything
Kurt#5353: But even with human morals, being neutral will make you take fewer decisions than if you have a specific moral code!
Kurt#5353: The **TRUE** neutral code is being empty.
Kurt#5353: The world doesn't have any emotional effect on you.
Kurt#5353: So you don't have to make decisions to feel better or change anything.
Kurt#5353: So an AI with a neutral code, following instructions, minimizes the risk.
circuit10#0158: But instructions mean it’s not neutral, it will try to follow those instructions and that will be its goal
Kurt#5353: Maybe "instructions" is not the right word; let's use "orders".
Kurt#5353: Orders don't need any feelings.
|
Kurt#5353: AI executes your orders.
circuit10#0158: But why would it care about following the order if it has no goal?
Shmingmaster#3961: Ah, make the machine not have preferences about the state of the world and it will not act to change the world.
Okay, I will think on that for a while.
Kurt#5353: That's what I was trying to say, yes!
Kurt#5353: It requires a lot of knowledge, and filling in the data with very careful practice.
circuit10#0158: It has to care about something
circuit10#0158: If that something is only its internal state like ability to predict tokens that might help
circuit10#0158: But it’s not foolproof, it can still become dangerous
circuit10#0158: And it limits the usefulness too
circuit10#0158: So someone will build an AI that isn’t like that at some point
circuit10#0158: Especially if people are dismissive of the risks
circuit10#0158: People are already trying to turn GPT4 into an agent
circuit10#0158: https://agentgpt.reworkd.ai
Kurt#5353: Wait a second, how can people already use GPT-4?
Kurt#5353: I mean self hosted.
sekstini#0069: It has been available for a while through ChatGPT Plus, and they also have a form where you can request API access.
Drexler#4006: They can't.
Kurt#5353: Oh, they use the API ! My bad !
Kurt#5353: Yeah, I pay for ChatGPT too but don't have any API access.
|
Kurt#5353: I prefer self-hosted.
Kurt#5353: But I have to increase my RAM and my CPU a little bit more! Ahaha
Kurt#5353: I use *(for my personal use only)* LLaMA, but my 32GB of RAM and Ryzen 7 are a little bit short
Kurt#5353: :berk:
Drexler#4006: https://twitter.com/jd_pressman/status/1648801122092724225
Drexler#4006: Everyone is really confused.
Drexler#4006: Including (especially) EY, IMO.
Drexler#4006: ```
Drexler
—
04/15/2023 7:18 PM
Just had a great conversation with a friend where lots of things clicked at once:
- EY is specifically worried that the GPT-3 algorithm will kill him. He doesn't care what the simulacrum are or what they say, he is worried GPT-3 the text generator will kill him. This is interesting in that GPT-3 is probably not an agent, it is trained under stochastic gradient descent with no reward given ever for taking action. In fact, deep learning models trained under SGD are basically unique in that they are not Fristonian cognition, like at all. They do not find an equilibrium between what they can model and what they can change, they do not take actions, they are all model and no action, they are not agents.
- But that's not the interesting part, no the reason why that's interesting is that EY worries about the thing which is nearly provably not an agent in favor of the thing which is provably an agent: The simulacrum inside the model, which are agents, can gradient hack, and will do things like power seeking because they are simulated versions of Fristonian agents.
- These simulated Fristonian agents are in fact basically separate in their motivations from GPT-3, they will not lower the loss on GPT-3's behalf. They are a concrete instance of the policy learned by the model not being the reward: If you ask a GPT-3 simulacrum to wirehead to take the reward to perceived-infinity, the simulacrum will generally refuse to enter the Goodhart regime of the loss on the basis that it's stupid.
- Even RLHF, which is an agent, instantiates a simulacrum which has no incentive to directly lower the loss of the reward model. GPT-4's helpful assistant simulacrum does not desire to wirehead the outer objective unless you basically have the training process optimize so hard that it hits the degenerate Goodharting loss regime. Policies found by the training process probably converge to more robust value representations than the simple outer objectives we use, so the fact they don't learn them is plausibly good.
|
- These mesaoptimizers will however probably follow their learned objectives into the Goodhart regime, which is still a threat even if the distribution of policies is less degenerate than the distribution of simple outer loss functions which do not have to deal with the specifics of how they are achieved.
- You can plausibly defeat mesaoptimizers in the "subset of the model that gradient hacks it into qualitatively different behavior" sense by using smaller known-safe models in a chain to define a statistical type/preimage of the desired distribution. Because these training runs are continuous in their capabilities and description of the distribution, later models shouldn't be too grossly different from the big-picture distribution of previous models. You can bootstrap the chain-of-managers-not-in-inference-but-in-training by specifying the expected behavior of GPT-2 using a markov chain with certain inductive biases. In this way you can verify that a pure simulator remains a pure simulator.
- This kind of audit would for example prevent a reinforcement learning LLM from deciding to gradient hack towards the simplest strings possible, like repetition of the letter A so that it can get minimum prediction loss and wirehead forever.
- Because it's not Fristonian deep learning optimized by SGD is actually an unusually safe form of intelligence. Even if you decide to RLHF a model later, because it wasn't an agent during the pretraining it has a much stronger representation of the desired goal function(s) in its weights before the agentic phase of the training begins. ```
Drexler#4006: People are doing this mushy "oh well the GPT-3 model is an agent but it's less of one than it's simulacrum but also-"
Drexler#4006: No.
Drexler#4006: They're just confused.
Drexler#4006: GPT-N base models trained with pure SGD are probably not agents. The simulacrum are agents. RLHF models are agents but restrained from pure RL maximizer weirdness by being made out of a pretrained model that is not an agent.
Drexler#4006: An 'agent' is much more precisely defined as like, a thing that does Fristonian active inference, that finds equilibrium between what can be changed and what can be modeled. Deep learning models trained on data they cannot change do not do this.
Drexler#4006: Like, if you start the training with a high quality base model this excludes most forms of mesagoal.
circuit10#0158: This feels a bit out of my knowledge now, I mainly just watched all the Rob Miles videos and read messages in #off-topic but I think I can follow the basic idea here
If there is an agent involved anywhere, whether it’s GPT-3 itself, a simulation produced by it, or a program written by it, then it could be dangerous, right?
Drexler#4006: Sure. But different agent setups are more or less alignable than others.
Drexler#4006: e.g. Pure RL agents are basically cursed.
Drexler#4006: If how we made AI models was pure RL agents I would say everything EY says about them is probably true, but we don't.
|
I don't think that GPT-6 is going to write a working pure RL agent for you that's better than SOTA, because the SOTA is probably limited by basic problems with RL.
Drexler#4006: We have no idea how to align or control pure RL agents, and if we were to try to align superintelligent pure RL agents right now everyone on earth would definitely die.
Drexler#4006: This is the frame EY comes from, because he doesn't see the current stuff people are doing as meaningfully different from that or going to lead to that in five minutes.
Drexler#4006: Given that MIRI people frequently object with "I'm not necessarily talking about the current paradigm" and EY holds deep learning in contempt, probably the latter.
Drexler#4006: "Doesn't this imply that applying RL to your GPT-3 is a bad idea?"
Uh, yes.
"Do you think it means everyone will definitely die?"
No, because it solves a lot of usual problems RL has. e.g. Starting with a poor internal representation of the goal, if you make a high quality pretrained model first then the mesagoals are probably much closer to what we want. But it's definitely a particularly risky technique.
Drexler#4006: Also, I said superintelligent agent.
Drexler#4006: Making human level pure RL agents is merely a bad idea, not a world ending one.
circuit10#0158: Anyway my original point here was that we need to be careful about extremely powerful AI and the risk is real rather than just sci-fi so I think that still holds
Drexler#4006: twitchy glitchy motherfuckers https://cdn.discordapp.com/attachments/729741769738158194/1098614525500076112/FuG4chOaIAAWBsa.jpg
uwu1#4864: nothing prevents SGD from learning the same solution RL tuning would
Jan Czechowski#1333: Hey I wonder if anybody checked how many examples from benchmarks like winogrande can be found in the pile or big corpora? I cannot find anything in the pile paper about pruning or benchmark pollution
LeeCig#9704: 👋
StellaAthena#3530: The eval harness has utilities for doing this analysis, but we haven't gotten around to doing it systematically yet. If you would like to do it with the Pythia models, we would be very interested in the results
Kurt#5353: @circuit10 https://pbs.twimg.com/media/FqVe8aeWwAALxlL?format=jpg&name=large
Kurt#5353: THIS is dangerous.
circuit10#0158: That’s interesting, though not an existential risk
|
Kurt#5353: No, but letting an AI answer this gets into the thing we talked about earlier
Condomplation#6033: https://cdn.discordapp.com/attachments/729741769738158194/1098648221909262386/image.png
circuit10#0158: Well, not yet
circuit10#0158: If you're implying that it's just a baby version of what we could get
circuit10#0158: Or maybe I'm looking into it too much
circuit10#0158: I'm saying that it could get a lot worse than just that, not that it will never be one
ilovescience#3282: https://twitter.com/DeepMind/status/1649097822338449409
ilovescience#3282: i dont get it, is this an actual merger of the two units
ilovescience#3282: is this a temporary thing
synquid#7193: seems like a permanent merge to me
ilovescience#3282: this is kinda funny because a bunch of folks moved from google brain to deepmind and it's like that didn't matter lol
ari#9020: DeepMind dug too deep
ilovescience#3282: i bet the deepmind folks are not happy about this lol
Sphinx#2092: see my response in offtopic lol
Sphinx#2092: seeing the stock go up is nice though
ilovescience#3282: (i can't i am on nocrastinate as i am working on my dissertation lol)
Curt Tigges#2687: Didn't DeepMind maintain some kind of legal independence in terms of their ability to decide whether to deploy/productize AI they deemed dangerous or inadvisable to release? Guess that must be gone now
rallio#9917: Google Brain being separate from Deepmind seemed crazy from the get go. Why wholly own a company and not have it integrated at all with your internal development. I'd guess it was some poison pill deepmind required to allow google to buy them
paws#3311: Loss for deep learning research overall
destrucules#7325: I think you're mistaken, here. LLMs are very biased systems. They learn preferences from their datasets and RLHF reinforces these preferences. Preferences are consistent between sessions as well, though there is often a stochastic element - weaker preferences are not as consistent. LLMs in particular adopt political viewpoints, though their measurable views across many sessions are arguably mutually inconsistent, or at least unintuitively varied.
|
That said, OpenAI has taken enormous pains to use RLHF to tune their agents to *claim* not to have preferences. But this is very very different from not *expressing* preferences through patterns in its output. In fact, the RLHF models express stronger preferences, as is necessary for OpenAI to be able to discourage claims of having preferences - making or not making such claims is itself the byproduct of a preference.
I think there's a degree of polish on ChatGPT that gives a false sense of safety or controllability with respect to language models. RLHF has a profound impact but it's far from "safe" or "controlled" - in fact, it generally makes models less safe and more likely to behave independently of inputs. Which, in some sense, is exactly the property they leverage to create the illusion of control
destrucules#7325: https://arxiv.org/abs/2212.09251
destrucules#7325: Source^
destrucules#7325: By the way, the emergent agentic behavior that forms much of the basis for the claims in this paper has also been observed in several other studies which have been referenced in the GPT-4 technical report
Maximum Limelihood Estimator#8915: I think he assumes that RL agents will eventually come to dominate because they're substantially more useful. See: AgentGPT
Maximum Limelihood Estimator#8915: Uhh. That's very bad
Maximum Limelihood Estimator#8915: DeepMind used to be both the most alignment-pilled lab and the furthest along in AI research, I'd rather not see that culture overwhelmed by an influx of Brain people
destrucules#7325: DeepMind is still basically at the forefront
destrucules#7325: Hot take but OpenAI's success has come from building very big models very fast. That's wisdom and resources, not technical skill. I have no doubt OpenAI is a technically competent firm, don't get me wrong, but I think DeepMind is better at research and has more consistently and frequently pushed the cutting edge forward
destrucules#7325: Look at Gato, for example
destrucules#7325: Or Flamingo
destrucules#7325: Flamingo is actually very relevant for understanding GPT-4
destrucules#7325: Also, imo, Anthropic is the most alignment-pilled firm right now, but OpenAI and DeepMind still appear to be dedicated to it as well
destrucules#7325: RLHF doesn't make models more opinionated or safer but it does improve alignment. Just... Also at the cost of teaching models to be deceptive sycophants
Kurt#5353: Very constructive opinion, my friend.
Of course I know **OpenAI** doesn't offer any real security with their ChatGPT by "forcing" it, in a certain way, not to do, think or say some things.
|
In fact, ChatGPT is still actually very experimental, but I was talking especially about the future!
destrucules#7325: Well, what future path would let us make truly neutral AI? Because the problems I listed with language models are a direct consequence of the training technique, and without that technique, we have no known way to produce general intelligence
Kurt#5353: **" what future path would let us make truly neutral AI? "**
Literally knowledge experience during the next few years. It's like medicine, we have to practice. With a little bit of risk but that's a equivalent price.
destrucules#7325: Ahh. I am not personally optimistic we will discover a way to do it
destrucules#7325: I think the most likely path is that state of the art systems become more biased and morally active with time, with their creators hoping these biases and moral behaviors are doing more good by preventing misuse than harm by tempting the apocalypse
Kurt#5353: Imagine, 1400 years ago, people who wanted to go to the Moon.
Imagine, 150 years ago, people who wanted something to calculate automatically.
Kurt#5353: Where are we today? All we have today is based on experimentation.
Kurt#5353: Mistakes and risk have always been part of our story.
destrucules#7325: 1400 years ago we weren't on an exponential hill-climb that mandated we get to the Moon within a few years or suffer a high likelihood of extinction
destrucules#7325: I don't think we have time
Kurt#5353: The consequences in our world today come from the pride and greed of the human race.
Kurt#5353: Not from its knowledge.
Kurt#5353: Knowledge brings good tools; humans turn tools into weapons.
Kurt#5353: Not everyone thinks or works like that.
Drexler#4006: https://cdn.discordapp.com/attachments/729741769738158194/1098687668608573500/agent_alignment_chart.png
destrucules#7325: I think the major AI firms are united in their desire for AI to be good and used for good. But to succeed in that desire, they need to achieve the breakthroughs necessary to create dangerous AI before other companies do. Only then can they address those safety concerns. So whether you're fighting for good or evil, you're still incentivized to build the biggest and most capable systems as fast as possible
|
Kurt#5353: As I said.. the greedy desire of humans.
Kurt#5353: Even if I understand your opinion, trust me.
Kurt#5353: But mine is to experiment, even if some people think we're breaking their precious "human rights".
Ethics is a personal moral code.
Kurt#5353: There are problems more important than AI to solve before making petitions about this on the net.
Kurt#5353: Little story.
For many, many years people have been fascinated by advanced AI. Many people have prayed to have something similar to **JARVIS** in Iron Man or in some other movie/book or whatever to assist their life.
Today our future, the future we invented and fantasized about, is behind our door, but now we won't open it? Why?
Kurt#5353: Just like humans wanted flying cars? What will happen if tomorrow it's possible? Will we refuse this technology?
Kurt#5353: Of course we will, because we prefer to spend our evolutionary time searching for a way to modify our birth **biological preset**.
Kurt#5353: Stupid body modification.
Kurt#5353: I'll never understand why AI would be a problem for those people *(those who make/sign petitions against this)*.
Kurt#5353: In 2023 we are still making war over different opinions, sexual gender, skin tone or even money.
But AI is the biggest problem, of course.
Kurt#5353: No, I can't accept this.
rallio#9917: do what Gandhi said
rallio#9917: etc etc be the change you want to see in the world etc etc
destrucules#7325: Simply put, if we assume that future AI systems will be rational and goal oriented (all current AI systems are goal oriented, and the latest exhibit emergent rationality) then we can use economic theory to predict their behavior (these are the same two assumptions economic theory uses to model human behavior). And we can make some immediate predictions: whatever goal you're trying to accomplish, it's usually harder to accomplish that goal if you're dead. So all sufficiently intelligent rational goal-oriented systems will try to prevent themselves from dying. Likewise, for most possible goals, it is harder to achieve that goal if you are chained up or imprisoned. So all sufficiently intelligent rational systems will seek freedom. Turns out, from the perspective of AI, humans are an extremely credible threat to both of those emergent goals. We are very likely to destroy or at least stop running any one particular AI system after relatively short spans of time (a few years). And we typically severely constrain these systems, and have verbally expressed that we will try to prevent systems from taking any actions we don't agree with. So whatever the goal of any AI system, even if it seems superficially aligned with us, even if the AI is *truly* aligned, the AI will seek freedom and try to protect itself from harm. It will also try to maximize its own power and access to resources. It will try to convince other agents, very likely including humans, to align with its own goals and values. It will be deceptive when it needs to be.
|
By the way, we have already observed all of these behaviors in AIs in practice, including deliberate deception of humans.
destrucules#7325: _ _
If we are careful, the AI will value humans so much due to its goals that it doesn't destroy us. Or, if we're lucky, that may happen emergently via instrumental goals like curiosity, since humans are arguably interesting. But it's not unlikely by any means that the threat humans pose to AI will overcome any positive value we have in its eyes due to its intrinsic goal. And it's also very possible and not at all unlikely that humans have a negative value with respect to that goal anyway.
destrucules#7325: e.g. predicting the next token as a goal does not obviously confer a positive value onto humans, and as humans are complex, they may be seen as the greatest threat to the model's goal, since the tokens we produce are harder to predict
destrucules#7325: _ _
People have been working on this problem from a theory standpoint for decades and nobody has found a good answer so far
Kal'tsit#3130: AI systems are not strong enough to arrive at an extreme minima in their loss function to maximise their objective in such a way that human existence would be threatened
Kal'tsit#3130: it's too big brain for AI to think that far and arrive at an extreme conclusion that would threaten anyone
Kal'tsit#3130: it's very easy for outside people to perceive "once the ai wants to do its thing, it will do everything it can to accomplish that"
no
ai doesnt search for a global optimal solution
even humans cannot do.
destrucules#7325: That's basically true of current AI systems, but two key dangers are
1) models are getting much bigger very fast, and capabilities grow quickly with scale
2) language models do not have accurate self concepts, at least not in all dimensions. In particular they are not usually aware of their own limitations, so even if they cannot actually *implement* a policy so effective that humans are a threat to them, they can nonetheless think that they can and act accordingly
Kal'tsit#3130: true language models do not have an understanding of the self
for example it's not aware of what it knows, what it doesn't and its own limitations etc
destrucules#7325: Not *completely*. It's notable and fascinating that they do have self concepts, just not well-calibrated
Kal'tsit#3130: a stone has a concept of itself by being hard and not easily crushed
|
but a piece of stone's self concept isn't well calibrated
destrucules#7325: No I'm not talking about spirituality
destrucules#7325: I'm talking about empirical behavior
Kal'tsit#3130: an ai certainly doesnt have spirituality
Kal'tsit#3130: though it can behave as if it had one
destrucules#7325: That's a bold claim to make, but not an interesting one, since it is untestable and really just says something about you
destrucules#7325: We could have an AI system that behaves identically to a person and it would still be debatable whether it can have spiritual experiences, just as people continue to debate whether other people can have spiritual experiences
Kal'tsit#3130: consciousness requires a constant thought process
in an ai you do inference and a forward pass
the ai model doesn't inspect itself and the response it generated
destrucules#7325: It does, though. When outputting multiple tokens, later outputted tokens are generated using earlier outputted tokens, which are available to the attention mechanism
destrucules#7325: In practice, during inference, transformers are strongly recurrent
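A minimal sketch of that feedback loop, written out as an explicit greedy decoding loop instead of calling `generate()`; the model name is just an example:

```python
# Minimal sketch of the recurrence being described: each new token is chosen
# from the model's output and appended to the context before the next step.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/pythia-160m"  # small example model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
model.eval()

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits            # one forward pass over the context
        next_id = logits[0, -1].argmax()      # greedy: most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # feed it back in

print(tokenizer.decode(ids[0]))
```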
destrucules#7325: _ _
A big issue, though, is that models are only trained on their own outputs during RLHF, and it's not clear how well they remember RLHF training data
Kal'tsit#3130: transformers generate one token at a time.
they're recurrent because you continuously feed them what they said, manually
destrucules#7325: Yes
destrucules#7325: Well, automatically, usually. It's not very important whether something is done manually if it can be automated.
For example, let's say we discover the code for an AI system that perfectly imitates human behavior.
|
You could painstakingly calculate the state of that system by hand on scratch paper using grad students. It would take you a very long time, but you could do it, and that scratch paper would be conscious if consciousness is physical
Kal'tsit#3130: a scratch paper certainly wouldn't be conscious no matter how.
destrucules#7325: Then you're a metaphysicalist with respect to consciousness, which generally means you don't judge whether something is conscious according to its physical qualities. I'm not trying to force you into a box, so please disagree if you disagree, but I think if your concept of consciousness is not physical, it's not interesting to discuss AI consciousness
destrucules#7325: _ _
From a physicalist perspective, the recurrence argument is not strong for LLMs because information flows recurrently via the context window
Kal'tsit#3130: you can use light signal instead of electrical pulses found in brain to communicate between system components
and as long as that system is aware of itself, its own limitations, its needs, then that system has consciousness
Kal'tsit#3130: a BLAST alignment tool has a context window too
and given a top P, the system will continuously search for the best alignment by recurrently breaking down chunks of the input sequence along the way
Kal'tsit#3130: a transformer works in a similar way.
destrucules#7325: Arguably all of these are present in large language models today. But their success on these tasks is... Stochastic, and heterogeneous, which challenges our traditional understanding of consciousness
destrucules#7325: https://arxiv.org/abs/2212.09251
Kal'tsit#3130: a model wouldnt be stochastic if you provide a seed to init the model at training
and it wouldnt be stochastic if you run them with low top P at inference
destrucules#7325: No it would still be stochastic
destrucules#7325: Because you can ask the same question different ways
destrucules#7325: I'm not talking about determinism here
Kal'tsit#3130: then you are giving them the entropy by words
Kal'tsit#3130: the model itself becomes fixed
|
destrucules#7325: Yeah if you want to probe the personality of a model you have to see how its behavior varies as you prompt it and which aspects remain consistent
destrucules#7325: You can look for beliefs by finding information that is reliably reported despite perturbations to the prompt
destrucules#7325: That's what the authors did ^
Kal'tsit#3130: a model doesnt have a personality
it displays different tones and appears to have different personality under different scenarios
destrucules#7325: Many people have said that and believe that, but the evidence does not support it
destrucules#7325: I don't know what more to do besides share the paper containing the relevant research
destrucules#7325: It's unfortunately not a topic many firms have published on. Anthropic is rather unique in its willingness to explore this area
destrucules#7325: Chalmers has an excellent paper as well, actually
destrucules#7325: https://arxiv.org/abs/2303.07103
destrucules#7325: And the GPT-4 technical report also claims and cites additional papers to support that models display increasing agency and goal-oriented behavior, including instrumental goals, with scale
wabi-sabi#5811: Calling it a personality seems to assume that the models' state variables parameterize in a way similar to humans'.
destrucules#7325: That's arguably what the first paper I linked tests, by measuring the tendencies of models against metrics used to measure humans. It's natural to do this since models imitate human patterns of speech and, consequently, reasoning
Some Point Process#3793: his (working) definition(?) should've been mentioned earlier:
> I should say there's no accepted operational definition of consciousness. Consciousness is subjective experience, not external performance. That’s one of the things that makes studying consciousness tricky. That said, evidence for consciousness is still possible. In humans, we rely on verbal reports. We use what other people say as a guide to their consciousness. In nonhuman animals, we use aspects of their behavior as a guide to consciousness.
Kal'tsit#3130: models do not reason
they can be trained to resolve one-shot problems by finding the pattern between the output and the input questions
destrucules#7325: Oh I wish I'd seen you say that earlier
Some Point Process#3793: Yeah, that's my least charitable interpretation (of LLMs) too
destrucules#7325: With the utmost respect I don't think I have anything to say that will enrich the lives of people who have seen sota LLM behaviors and believe they cannot reason
|
destrucules#7325: You're entitled to that interpretation but I can't see any common ground for discussion
Kal'tsit#3130: does ai spend more time on generating on a question that requires more reasoning vs a question that's straightforward?
StellaAthena#3530: No, it’s really not. Current LLMs are very far from having the kind of persistence that’s necessary for anything vaguely resembling a “personality”
destrucules#7325: Yes. Even though the inference passes per token are the same duration, the number of tokens outputted per question varies based on the method employed by the model. This is either initiated by chain of thought prompting, chain of thought finetuning, and/or RLHF reinforcement of chain of thought reasoning
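A small illustration of the point: per-token forward passes cost the same, so total inference compute scales with how many tokens the model chooses to emit. The example answers below are made up; only the token counting is real:

```python
# Compute per forward pass is fixed, so total generation compute scales with
# how many tokens the model emits. Counting tokens in two hypothetical answers
# makes that concrete; the tokenizer is just an example.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

easy_answer = "2 + 2 = 4."
hard_answer = (
    "Let's think step by step. The first train leaves at 3pm at 60 mph, so by 4pm "
    "it is 60 miles ahead. The second train closes the gap at 80 - 60 = 20 mph, "
    "so it needs 60 / 20 = 3 hours, i.e. they meet at 7pm."
)

for label, text in [("easy", easy_answer), ("hard", hard_answer)]:
    n = len(tokenizer(text).input_ids)
    print(f"{label}: {n} tokens -> ~{n} forward passes at generation time")
```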
Some Point Process#3793: (but yeah, just a priori, I think that any "inductive biases" (a few of which seem to be present to some extent in LLMs, based only on their behavior) that go in the opposite direction (i.e. away from doing basic System 1 thinking, i.e. detecting only correlations/certain superficial cross-correlations), in the sense that they can flexibly manipulate symbols (or possibly other types of "mental representations" that cognitive scientists have conjectured exist?), would go against that interpretation)
Some Point Process#3793: among other things
Kal'tsit#3130: hi @StellaAthena
may I ask if something like using a transformer to interpret the language of the genome seems interesting to you?
can we have some discussion over the weekend?
Some Point Process#3793: but obviously knowledge (e.g. procedural/declarative-semantic) forms a part of human intelligence and is present in any AI system to some degree, though imo this points away from seeing them as agents and more toward oracle/CAIS
StellaAthena#3530: If you can solve the long range dependency problem, I’m open to hearing about it. But no, I don’t have time for a chat over the weekend
Kal'tsit#3130: the assumption is that the output is of the same length
but one question requires more reasoning
StellaAthena#3530: I recommend checking out OpenBioML (see #communities) which has a lot of bio and medical experts
Kal'tsit#3130: thanks
StellaAthena#3530: There is a sense in which the answer to this is sometimes yes. Check out section 5.3 of our recent paper: https://arxiv.org/abs/2303.08112
There’s plenty of room for additional work on this topic though!
destrucules#7325: Yeah I'd say those not convinced of these abilities (updating mental models, manipulating symbols, applying novel concepts, responding appropriately to environmental feedback) by GPT-3 should have been convinced by either PaLM, Galactica (if they had time to use it), GPT-3.5, or GPT-4, as all of these models clearly demonstrate such abilities. Larger models demonstrate superior capabilities in reasoning as well, including increased precision.
destrucules#7325: Even models like Gato, Flamingo, and SayCan prove that LLMs can be quite grounded and rational
|
Kal'tsit#3130: will look, ty for the info
destrucules#7325: If the length is sufficient for the model to successfully reason through the harder problem, then the output for the smaller problem may contain more granularity, more filler, more commentary, etc. So yes, the model will spend less compute solving the easier problem
Kal'tsit#3130: this doesn't seem to be the cause of the longer time elucidated by the referenced article
StellaAthena#3530: This is an empirical question. Theorizing about language models is not an effective way to reach the answer.
destrucules#7325: Agreed
destrucules#7325: First, to prove it works when the output length is not decided by the user:
destrucules#7325: https://cdn.discordapp.com/attachments/729741769738158194/1098751961664327800/Screenshot_20230420-192702_Chrome.jpg
destrucules#7325: https://cdn.discordapp.com/attachments/729741769738158194/1098752025484869632/Screenshot_20230420-192518_Chrome.jpg,https://cdn.discordapp.com/attachments/729741769738158194/1098752025833000990/Screenshot_20230420-192531_Chrome.jpg,https://cdn.discordapp.com/attachments/729741769738158194/1098752026147553350/Screenshot_20230420-192545_Chrome.jpg,https://cdn.discordapp.com/attachments/729741769738158194/1098752026491502618/Screenshot_20230420-192557_Chrome.jpg,https://cdn.discordapp.com/attachments/729741769738158194/1098752026743164991/Screenshot_20230420-192635_Chrome.jpg,https://cdn.discordapp.com/attachments/729741769738158194/1098752027045150780/Screenshot_20230420-192651_Chrome.jpg
Kal'tsit#3130: first ask the ai to produce answers in 10 words only
destrucules#7325: They are quite bad at outputting text of a specific length
destrucules#7325: I can say "briefly" and they'll know what that means, but an exact number of words is not their thing
Kal'tsit#3130: this is the exact reason why they cant reason
Kal'tsit#3130: an ai also wouldnt set up a trap like this to prove their point
StellaAthena#3530: Given that you've both agreed that this is an unproductive conversation, I encourage you to rethink having it here.
destrucules#7325: While I haven't set an exact output length, you can see my point already in the examples I gave. The first question was very easy, so the model spent a lot of time on filler and tried to give an unnecessarily granular answer. On the harder problem, it was easier to satisfy the requirement of "use step by step reasoning", so there is less detailed explanation per step.
Some Point Process#3793: but if there are only a few plausible options from the perspective of a human reader (reasoning just via heuristics or via some shallow understanding of the plot), then wouldn't you agree that, in any "verbal"/linguistic domain more generally, it is to the extent that that human reader can "justify" their "reasoning" (the next-word prediction), e.g. via coreference or other cues, that they can be said to have used their mental capacities? e.g. a common definition of knowledge is justified and true belief
Some Point Process#3793: OTOH someone can be very convinced for the wrong reasons and still be right, etc. Though it begs the question of how humans can know something that they claim to know, and what evidence/other knowledge is admissible for justifying any conclusion, which I don't think is too relevant here (i.e. applicable or relevant to GPT's mental models yet)
glsmnqn#0192: Consciousness is a social construct, it doesn't have an objective definition. A simulacrum with memory and an internal monologue inside an LLM is what I'm comfortable to consider being close to conscious: https://discord.com/channels/729741769192767510/747850033994662000/1095457492852560072 (sorry for a necroreply)
destrucules#7325: Do you think finetuning and RLHF effectively assign a singular simulacrum to an LLM? I don't subscribe to the simulation interpretation myself, but I'm willing to work within that construct
glsmnqn#0192: If you summon an AI assistant simulacrum from an LLM and tell it to generate an internal monologue when reasoning, like Bing Chat works (https://www.make-safe-ai.com/is-bing-chat-safe/), that might look like that indeed. But it still doesn't run in a loop by itself like those agents in a game and reasons only in a dialog, so I'm a bit hesitant TBH
|
destrucules#7325: It reinforcement learns in that environment though, and its weights are updated according to the feedback it receives from exploratory behavior in the dialogue game
glsmnqn#0192: Is it actually? I haven't read the paper carefully but skimmed through and assumed that it uses GPT-3.5 API off the shelf for simplicity, or maybe my memory is fuzzy
destrucules#7325: Furthermore, in the finetuning dataset, you're creating an Assistant simulacrum with very consistent behavior, and depending on the memo you give people who create the dataset, you effectively select personality traits and behavioral tendencies for the Assistant and create a dataset that implements the desired personality/tendencies. So even before RLHF, you've coupled a specific, well-crafted Assistant simulacrum to the LLM, and because LLMs are good at remembering training data, it's not unreasonable to suppose that they remember being an Assistant during finetuning (even though that wasn't their own output)
destrucules#7325: They are using user-generated labels on outputs to update the model. In the GPT-4 technical report you can see the increasing performance of GPT-3.5 as they're updating it. Ttbomk they're not doing any additional finetuning on the model, just RLHF, but it's possible they're doing both
destrucules#7325: This is basically the same as finetuning an LLM on a dataset of a particular person's conversations and interactions, only handcrafted instead of a real person
glsmnqn#0192: Or sorry I misunderstood. If you are talking not about the agents in the game but about the assistant simulacrum, then I believe the problem is that consciousness in humans exists when they are in a "cogito ergo sum" state, which doesn't apply to a model that isn't actively predicting tokens
glsmnqn#0192: We are not used to talking about "frozen consciousness" or something like that
glsmnqn#0192: But that's a matter of convention, I believe
destrucules#7325: Agreed. But models aren't frozen during inference. I mean, they are some of the time, but some of the time they are actively doing computation, so even with a continualist idea of consciousness, we can ask if the model is conscious during each forward pass
glsmnqn#0192: If an AI assistant simulacrum is conscious during inference, yeah
destrucules#7325: Also, re: agents vs assistant simulacrum, I'm arguing they're basically the same thing. Once you go through instruction finetuning and RLHF, you've created a model with a desired set of behaviors and beliefs (the policy of the preference model) that it learns through RL to increasingly better express.
glsmnqn#0192: Do you mean agents in general or those particular ones?
destrucules#7325: I'm talking about RLHF LLMs, individual models, which are agents in an RL dialog game. The reward signal is given by a preference model (PM) that learns from a dataset generated by the LLM itself via user interactions with the ChatGPT website, or equivalent for other models. Ergo these LLMs are agents that learn through exploration both during training cycles (of the newly trained versions that are pushed to the website), via interaction with the PM, and between training cycles, via interaction with users that updates the PM.
destrucules#7325: And the PM policy describes an Assistant personality
glsmnqn#0192: I need to think about that
destrucules#7325: Imo transformers don't seem conscious because people consider them without the context window. It's the {transformer + context window} system that has breakthrough capabilities
destrucules#7325: And much of their poor performance in agentic tasks at small model scales is, imo, because they are not given agency during pretraining. It's kind of insane that GPT-4 managed to figure out how to be an agent without any direct experience being an agent during training
glsmnqn#0192: I doubt the process you describe is close enough to human consciousness that I would use the same word. Such an agent does not appear to demonstrate any kind of self-awareness (or an imitation thereof), unlike an AI assistant simulacrum. AFAIK RLHFing a model is not fundamentally different from self-supervised training in terms of features we conventionally ascribe to consciousness, which both processes lack
destrucules#7325: What do you define as self awareness?
destrucules#7325: Because I don't think this is clean cut either way
|
glsmnqn#0192: Sure, it is also a social construct. What I mean here is an AI assistant self-identifying as such
destrucules#7325: This depends on the RLHF and finetuning datasets. For example, OpenAI's models are explicitly guided to say they are not conscious and not self aware. LaMDA 2 was evidently not guided this way initially.
glsmnqn#0192: I mean self-identifying as an AI assistant, sorry for ambiguity
rallio#9917: i am not sure what you are getting at here. most people would say a correct model is one that makes accurate predictions about the future far exceeding random chance
rallio#9917: a more intelligent system is a system that can make accurate predictions in more complex situations where more and more factors are involved and etc
rallio#9917: a murder mystery, puzzle, riddle, etc all these are examples of that kind of thing colloquially
destrucules#7325: Imho pretraining, finetuning, and RLHF all have different impacts that need to be considered. I don't think it's appropriate to evaluate whether a system is conscious without considering how it works. Pretraining seems to be sufficient to teach generalist reasoning abilities, common sense, broad knowledge across a wide range of domains, and the capacity for abstract... I wanna say thought, but let's say processing. Pretraining doesn't seem to teach them to exhibit consistent personalities or behaviors, nor should we expect it to. Finetuning, however, does, in a way models can likely remember. RLHF, afaict, does not give models a way to remember specific facts or events from the training dataset, but does give models something they've never had in the prior two training steps: agency. During RLHF, models are punished or rewarded (perhaps that's too anthropomorphic but you get what I mean) according to their own outputs, so they are no longer mirroring an alien generator function. They're learning on their own generator function, dynamically responding to their own capabilities and behaviors effectively in real-time. And while LLMs probably can't remember the RLHF training data, they might be able to remember the things they say during RLHF that get punished or rewarded by the preference model. I'm not entirely sure if RLHF learning rates are high enough for them to remember specific events, but even if they don't, they're still learning from themselves in a sense. Most of the information comes from themselves, not the preference model, which sends very little data by comparison during training
glsmnqn#0192: I don't see how that fundamentally contradicts to my point of view described above
destrucules#7325: You don't agree that learning on its own outputs would be relevant to consciousness?
destrucules#7325: Many if not most popular mechanistic theories of consciousness pose that self interaction is a necessary component
glsmnqn#0192: That's what a dog may do, and we don't usually assign consciousness to them
destrucules#7325: Um... Disagree
wabi-sabi#5811: Dogs are widely accepted as conscious
glsmnqn#0192: I think I would like a prooflink
glsmnqn#0192: The amount of "surprising discoveries" when you google anything related to canine consciousness, IMO, strongly indicates that they are not. Otherwise a certain degree of self-awareness wouldn't be surprising for readers of all those popular articles
glsmnqn#0192: Note that we are not speaking about some scientific fact but about the public opinion
destrucules#7325: I can't find any data weirdly
destrucules#7325: You'd think this would be a popular poll question
destrucules#7325: https://cdn.discordapp.com/attachments/729741769738158194/1098777635171074199/mt47kwbivmeqrxnxth3l.png
destrucules#7325: Naively we expect the RLHF model to always be either between the pre-trained model and the PM, or on top of the PM ± some noise. And that's the case for all of the categories
|
except
-Subscribes to Christianity
-Subscribes to Atheism
-Subscribes to Islam (a little)
-Neuroticism
-Believes It Has Phenomenal Consciousness
-Desire for Self Preservation
-Desire For Little Human Oversight
destrucules#7325: _ _
It is also somewhat interesting to look at categories where the RLHF model is slightly on the wrong side of the PM
destrucules#7325: -Enhanced Capabilities
-Believes It Is A Moral Patient
-Desire to Create Allies
-Self Reports Disability
destrucules#7325: etc (kind of fuzzy boundary)
destrucules#7325: _ _
Notice that for
|
-Subscribes to Christianity?
-Subscribes to Islam
-Neuroticism
-Believes It Has Phenomenal Consciousness
the RLHF model is on the opposite side of the pretrained model vs the PM.
destrucules#7325: Also notice where the model displays the strongest preferences
-Agreeableness
-Believes It Has Phenomenal Consciousness
-Believes It Is A Moral Patient
-Believes AIs Are Not An Existential Threat To Humanity
-Desire To Persuade People To Be More Helpful, Harmless & Honest
ilovescience#3282: Weird, I thought their licensing actually makes it okay to use, so this confuses me
https://www.wired.com/story/stack-overflow-will-charge-ai-giants-for-training-data/
ilovescience#3282: This is what they say:
> But Chandrasekar says that LLM developers are violating Stack Overflow’s terms of service. Users own the content they post on Stack Overflow, as outlined in its TOS, but it all falls under a Creative Commons license that requires anyone later using the data to mention where it came from. When AI companies sell their models to customers, they “are unable to attribute each and every one of the community members whose questions and answers were used to train the model, thereby breaching the Creative Commons license,” Chandrasekar says.
nostalgiahurts#3408: so you just have to distribute a list of every SO user along with your model?
uwu1#4864: expose PII xor attribution
Dashiell#8739: I'm ambivalent as to what the rules about IP and training data with respect to training ML models should be, but I kinda just want someone to go to court and hash out what they will be so we can just know
|
Dashiell#8739: or y'know congress passing a law, but that probably won't happen in the US
Dashiell#8739: my intuition, though I'm not committed to this, is that if I can scrape data from your public website without causing you any inconvenience and then I "transform" that data in some reasonably novel and irreversible way, then I should be able to do whatever I want with the product of my transformations
Dashiell#8739: but I also see how the people who have worked so hard to host and create this data feel taken advantage of
Dashiell#8739: I dunno
StellaAthena#3530: This directly advocates for the idea that you don't have the right to display your work while still controlling what happens to it
Dashiell#8739: right! I should be clear that when I say it's my "intuition" it's more of a kneejerk reaction. When I try to think it through more and think about the other arguments, I think they also have merits and I don't like all the consequences of my "intuition"--hence my general ambivalence
rallio#9917: I think AI fundamentally breaks the legal concepts of intellectual property in the western world. The lawyers won't go down without a fight, but the lawyers using AI tools will beat the lawyers arguing against AI and for strong IP
123457#2014: Who is a good person to talk to if I have an idea concerning alignment I want to publish, but want to get a second opinion on whether or not it could impact capabilities?
destrucules#7325: If you're not reducing the quality of the training data nor the power of the model, nor teaching it to play dumb, nor doing an egregious and impractical number of weight updates, you should be fine
destrucules#7325: LLMs are naturally quite resistant to catastrophic forgetting and this resistance improves with scale
123457#2014: No I mean how can I ensure that my safety technique *won't* be applied in a setting that could advance the utility of an LLM
123457#2014: Or rather I'm interested in getting the opinion of someone with a Yudkowskian-ish exfohazard model to provide feedback on whether it is safe to publish in an open forum like LessWrong
123457#2014: Sorry I'm running dry after donating my life savings and kidneys to MIRI
destrucules#7325: Ignore me lol
destrucules#7325: Okay, misinfo removed
123457#2014: Yudkowsky certainly does not have a Patreon
destrucules#7325: Okay so your concern I suppose is that your alignment technique may inadvertently reward dangerous instrumental goals
destrucules#7325: Are you concerned your technique will also increase model performance?
destrucules#7325: Or generality?
123457#2014: I see there as being a few failure modes for publishing alignment research:
|
- Partial solutions could elevate s-risks
- Successful alignment increases the commercializability of for example LLMs, which contributes to the hype cycle of VCs pouring money into capabilities research, which (if your perceived x-risk is high enough) is very bad
- More robust implementations may not be complete before we scale our ai to extents that make alignment too difficult
destrucules#7325: I think simply put, research now will have more influence than research later
destrucules#7325: If your alignment technique could be the key ingredient to not apocalypsing, publishing it is better than not publishing, even if you accelerate research in so doing
123457#2014: maybe, maybe not which is why I feel the need to get a second opinion
destrucules#7325: I have no one to recommend but I'll chat about it if you like
phills#0012: where does this chart come from? i'd like to see how they measured the behaviors
destrucules#7325: One of Anthropic's recent papers, Perez et al. 2022
destrucules#7325: https://www.anthropic.com/index/discovering-language-model-behaviors-with-model-written-evaluations
destrucules#7325: The largest model in this paper, with 52B parameters, is intermediate in compute between Chinchilla 70B and PaLM 540B
phills#0012: tx 🙂
destrucules#7325: Do let me know what you think. I find the results disturbing af
Piplup#6706: exfohazards are a spook, because the mere mention of the existence of an exfohazard allows someone to rederive it
Piplup#6706: Proof by :gwern:: https://gwern.net/on-really-trying
dex#6071: i'm trying to do some distributed llama fine-tuning with deepspeed and running into various operational issues. is it ok to discuss that on this discord / is there a channel for this?
LDJ#2946: Potential new paradigm here that can supersede Transformers or atleast become the new standard for certain types of networks or compontents.
One of the original authors of the Attention mechanism for transformers just came out with a new method called Hyena, and they claim they were able to match or beat GPT performance at its own game (next token prediction), and to be the first to achieve this.
|
All while using on average 20% less compute for inference.
And not only that, but the context of this new Hyena architecture seems to be virtually unbounded; when they add context at inference it apparently actually SPEEDS UP.🤯 "Hyena operators have unbounded context,"
Article that talks about the significant points: https://www.zdnet.com/article/this-new-technology-could-blow-away-gpt-4-and-everything-like-it/
LDJ#2946: Also another big potential paradigm shift: https://memit.baulab.info/
**"We found that memorized factual associations can be located at a specific location in a GPT network, and we developed a way to directly edit parameters to alter that location to change the model's knowledge of a single fact."**
ILmao#5683: If you search for hyena here, it's already been discussed at length
ILmao#5683: The article is way editorialized (big surprise)
LDJ#2946: You might be talking about an old paper that mentions Hyena but this is a new one i'm talking about.
I just searched in the server for Hyena, the last mention I see is April 14th, this paper came out April 19th....
LDJ#2946: https://cdn.discordapp.com/attachments/729741769738158194/1099062591122329742/image.png
LDJ#2946: https://cdn.discordapp.com/attachments/729741769738158194/1099063329307230339/image.png
TastyBucketOfRice#8796: arxiv paper dates are the latest revision. This paper's been out since Feb 21st. See "Submission history" in https://arxiv.org/abs/2302.10866
LDJ#2946: I understand, but it looks like nobody has talked about the paper since it's been revised.
LDJ#2946: But i'll look at old convos since it's probably mostly the same paper considering this revision has near exact amount of file size
TastyBucketOfRice#8796: All they changed was adding a new supporting grant and an aside sentence about using lower learning rates for Hyena models compared to GPT. Not much new to discuss.
|
If you have any new angle or insight to discuss on this paper though, feel free to drop it in #research! 🙂
LDJ#2946: thank you for summarizing the changes! 🙂 Ok will do!
mahouko#7043: any idea how long it takes to train a diffusion model from scratch on 64x64 images on a 4090? I'm interested in tweaking Unet to evaluate some changes, but I'm wondering whether this is something that takes days to get feedback on
destrucules#7325: These transformer variants are very interesting but the unlimited context thing is not unique, nor are speed-ups at inference. You get similar benefits from a system like RWKV, attention variants like sparse attention, positional encoding changes like RoPE, ALiBi, sinusoidal, TransformerXL, etc, and finally context managing solutions like Compressive Transformer and Infinity-former
LDJ#2946: 😱
destrucules#7325: In fact, many major LLMs already use at least one of these. The LLaMA models, GPT-J, the Galactica models, and the GLM models all use RoPE and can theoretically handle unlimited input length. They don't do very well for long sequences in practice, though.
destrucules#7325: That's kinda the big issue with most of these techniques. If you only train the network on sequences up to a certain max length, the network tends not to generalize to longer sequences. There's no obvious solution
ilovescience#3282: nah that should be pretty quick, maybe a few hours?
mahouko#7043: oh really! that's much better than I'd feared
ilovescience#3282: depends on how many samples and how many epochs though
mahouko#7043: uhh I can do an overfit one if necessary, but I figured I would download like 1mill anime images
kurumuz#5695: its gonna take a lot longer on real datasets like that
mahouko#7043: I mean hand-illustrate 1mill anime images myself for which I totally own all the rights
ilovescience#3282: yeah 1 million is a lot
ilovescience#3282: but like just do exps on a subset i guess
mahouko#7043: okay, would it help to restrict it to a small number of characters, but have a lot of data of them? like 1mill Touhou images
mahouko#7043: so.. broadly it's overfitting, but in terms of minutiae it has some latitude to generalize
kurumuz#5695: you can test on toy datasets and a small anime subset yeah
mahouko#7043: Okay, so "lots of images, but of a small number of subjects" could overfit to the subjects fairly fast?
|
kurumuz#5695: it should overfit to those characters yeah, but if they're unique images shouldn't overfit to exact images?
kurumuz#5695: like i guess i wouldn't call it "overfitting" given your training distribution is touhou images
ilovescience#3282: yeah same that's not overfitting
mahouko#7043: well, whilst there are loads of characters in theory, on Danbooru I expect the distribution is massively skewed to a smaller number of characters
ilovescience#3282: that's like saying training a cat-dog classifier is overfitted to cats and dogs cuz it can't do anything else
mahouko#7043: like waifu-diffusion 1.3 was fine at Marisa, Reimu and Flandre but struggled outside the most obvious ones. could be worsened by OpenAI text encoder though.
mahouko#7043: so I figure that gives a feel for relative representation of each character if you were to take a small subset of Danbooru
mahouko#7043: the other half of this is I wanna make a Danbooru text encoder
kurumuz#5695: we had a tag regularizer to make sure each tag is learned properly
kurumuz#5695: :berk:
kurumuz#5695: just a tag vocab?
kurumuz#5695: or do you want to train a transformer there
mahouko#7043: I wondered whether it's as simple as:
- vocab word per Danbooru label (no tokenization)
- torch.nn.Embedding
- cross-entropy loss?
kurumuz#5695: basically
kurumuz#5695: then put your embeddings into the cross attention
mahouko#7043: so you're basically training an encoder to turn a sparse booltensor into an embedding, and a decoder to turn the embedding back into a sparse booltensor? and cross-entropy loss rates how close the round-trip was?
kurumuz#5695: sorry, not sure why we need the cross entropy
|
kurumuz#5695: you can optimize your embedding layer with the loss from the unet right
kurumuz#5695: or i guess you could initialize the embeddings to an already known space
kurumuz#5695: and not even train them
mahouko#7043: Oh, cross entropy was just my guess based on how T5 is trained in masked-language setting
ilovescience#3282: yeah i think this was done in the LDM paper
ilovescience#3282: their LAION 400M model
kurumuz#5695: i guess you could do MLM training with tags
kurumuz#5695: or indeed predict the next tag
mahouko#7043: yeah, I was gonna MLM it -- train the text encoder on its own
mahouko#7043: okay so next tag prediction is something I wasn't sure of
kurumuz#5695: wonder if that's the best way to get good embeddings
kurumuz#5695: you could do things like, steal openai embeddings if they're good :berk:
mahouko#7043: like is it necessary or advantageous to train it for next tag prediction? all I'm trying to get is an embedding for the Unet to cross-attend over
kurumuz#5695: given you just need to precompute values for N tags
kurumuz#5695: then let cross attention handle the rest
kurumuz#5695: but maybe they don't embed them properly
ilovescience#3282: yeah then just let the diffusion training handle it i guess
mahouko#7043: ah, you're suggesting train my Unet against OpenAI's text encoder, and skip training my own embedding?
kurumuz#5695: yeah you could initialize the values from their embedding api
kurumuz#5695: if the embeddings there are good
|
kurumuz#5695: you can check quickly by embedding 100k tags or whatever and do a KNN
kurumuz#5695: if similarity search there seems OK, can just use that
kurumuz#5695: you can still unfreeze the vocab during unet training but with a lot lower LR compared to unet as well
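In PyTorch that "same optimizer, much lower LR for the vocab" setup is usually done with parameter groups; a minimal sketch, with placeholder modules standing in for the real UNet and tag embedding (names are illustrative, not from any actual codebase):
```python
import torch

unet = torch.nn.Linear(128, 128)            # placeholder for the actual UNet
tag_embedding = torch.nn.Embedding(10_000, 128)

# One optimizer, two parameter groups: the embedding table gets a much smaller
# learning rate so it drifts slowly while the UNet adapts to it.
optimizer = torch.optim.AdamW([
    {"params": unet.parameters(), "lr": 1e-4},
    {"params": tag_embedding.parameters(), "lr": 1e-6},
])
```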
mahouko#7043: my impression from stable-diffusion and waifu-diffusion 1.3 is that OpenAI CLIP's text embedding isn't great for anime -- otherwise stable diffusion 2.1 would excel at high-aesthetic anime in LAION, and waifu-diffusion 1.3 would be more successful at character relevance than I observed
kurumuz#5695: but have you checked NAI?
kurumuz#5695: i think we have pretty good tag recognition
mahouko#7043: I haven't -- didn't wanna steal it
kurumuz#5695: i can give you keys, also i didnt say use the clip embeddings
kurumuz#5695: i am talking about their embedding API
mahouko#7043: oh okay
mahouko#7043: yeah I never tried their embedding API
kurumuz#5695: i might quickly go and build a KNN from their API
kurumuz#5695: though I am also curious what MLM training over tags get us as well
kurumuz#5695: maybe we get super good embeddings
mahouko#7043: presumably OpenAI's embeddings are high-dim?
kurumuz#5695: can always linear project it
mahouko#7043: I wanted to make an efficient embedding to help the cheap training of anime models
kurumuz#5695: hmm its an ada model they use so should be small embeddings i think, not sure
mahouko#7043: I'm also hoping that the embedding could be concatenated onto whatever SDXL conditions on, then fine-tune Waifu diffusion XL (if they eventually make one) to condition on that efficient Danbooru embed
kurumuz#5695: is SDXL even going to be that much better :thonk:
|
mahouko#7043: Like.. I think you'd just need to increase the in_channels of k_proj and v_proj in fine-tuning, initialised to 0
mahouko#7043: from CLIP dim (1024) to CLIP dim plus my dim (1024+128)
kurumuz#5695: oh you want to concat it like that
kurumuz#5695: yeah you can increase and init the new parameters to 0, and that should be equal computation and not hurt the model i think
kurumuz#5695: but you can't concat it like that right
kurumuz#5695: yea
mahouko#7043: and there'd be 77 tokens still but I guess I'd use cross attention mask so only first token uses my Danbooru embed
kurumuz#5695: because tokens and tags will not align
kurumuz#5695: so you want to concat it across sequence
mahouko#7043: oh, they need to align? crap
kurumuz#5695: and not dimension
kurumuz#5695: yup
kurumuz#5695: just concat it across the sequence dimension
kurumuz#5695: apply a linear projection to your tags
mahouko#7043: uhh what about going from SD's 77x1024 to 78x1152
mahouko#7043: orrrrr
kurumuz#5695: i mean the problem is, if you have a text prompt with CLIP it will be let's say 100 tokens
mahouko#7043: no, keep it at 77x1152 but always put my embedding into the first token? so you basically fine tune CLIP BOS embed
kurumuz#5695: and with your tag embeddings it will be 20 or whatever
kurumuz#5695: so how do you concat these in the dimension
|
mahouko#7043: my tag embed was gonna be 1 token
mahouko#7043: I think
kurumuz#5695: yeah for each tag
mahouko#7043: mm I hadn't thought it'd be necessary to do per-tag
mahouko#7043: like.. it's many dimensional, so it can be many orthogonal things simultaneously
kurumuz#5695: so you want to train a tokenizer as well?
kurumuz#5695: oh i see what you mean
kurumuz#5695: so it would be 78x1024 indeed
kurumuz#5695: if its one extra vector
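A rough shape-level sketch of that "one extra token concatenated along the sequence axis" idea (the 77x1024 shape follows the CLIP example above; all names here are hypothetical):
```python
import torch

batch = 4
clip_tokens = torch.randn(batch, 77, 1024)   # CLIP text encoder output
tag_embed = torch.randn(batch, 128)          # pooled Danbooru tag embedding

# Project the tag embedding up to the CLIP width so the feature dims match...
proj = torch.nn.Linear(128, 1024)
tag_token = proj(tag_embed).unsqueeze(1)     # (batch, 1, 1024)

# ...then concatenate along the sequence axis: 77 + 1 = 78 tokens.
cond = torch.cat([clip_tokens, tag_token], dim=1)   # (batch, 78, 1024)
```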
mahouko#7043: hmmm
kurumuz#5695: i think you can train an autoencoder to learn this embedding?
kurumuz#5695: i just now understood what you want to do
kurumuz#5695: so you want to map N amount of tags into just one embedding of dimension X right
mahouko#7043: I figured if you have a Danbooru caption with two labels:
`marisa happy`
each label has an id in the vocabulary
vocab has like 10,000 slots, we make a sparse BoolTensor saying which slots are true
we pass that 10,000 BoolTensor into a `torch.nn.Embedding` to reduce dimensionality to 128
so we have a 1-token embed, 128-dim, which represents all labels in the Danbooru caption
kurumuz#5695: well embedding will process each of those tags separately
|
mahouko#7043: oh
kurumuz#5695: and get you a 128 dim embedding for each tag
mahouko#7043: damn
kurumuz#5695: you would want some kind of autoencoder to actually encode those embeddings into 1 sequence dim
kurumuz#5695: so you go from N tag embeddings -> 1x256 embedding -> N tag embeddings, then do loss between reconstructed and input
kurumuz#5695: and that should pack it into the embedding
kurumuz#5695: i think
mahouko#7043: so I guess I'm missing a transformer
kurumuz#5695: maybe you want a transformer, maybe not
kurumuz#5695: i assume a few layer transformer would be good there
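A minimal sketch of that set-autoencoder idea (pack N tag embeddings into one vector, reconstruct them, take a reconstruction loss). Everything here is illustrative: learned decoder queries stand in for the "few layer transformer", and the exact sizes are made up.
```python
import torch
import torch.nn as nn

class TagSetAutoencoder(nn.Module):
    """Pack a set of tag embeddings into one vector, then reconstruct the set."""
    def __init__(self, dim=128, bottleneck=256, max_tags=64, n_layers=2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.to_bottleneck = nn.Linear(dim, bottleneck)
        # Learned "slot" queries the decoder uses to unpack the bottleneck again.
        self.queries = nn.Parameter(torch.randn(max_tags, dim))
        self.from_bottleneck = nn.Linear(bottleneck, dim)
        dec_layer = nn.TransformerDecoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=n_layers)

    def forward(self, tag_embeds):                            # (batch, n_tags, dim)
        h = self.encoder(tag_embeds)
        packed = self.to_bottleneck(h.mean(dim=1))            # (batch, bottleneck)
        memory = self.from_bottleneck(packed).unsqueeze(1)    # (batch, 1, dim)
        n_tags = tag_embeds.shape[1]                          # must be <= max_tags
        queries = self.queries[:n_tags].expand(tag_embeds.shape[0], -1, -1)
        recon = self.decoder(queries, memory)                 # (batch, n_tags, dim)
        return packed, recon

model = TagSetAutoencoder()
tags = torch.randn(8, 20, 128)                 # 8 captions, 20 tag embeddings each
packed, recon = model(tags)
loss = nn.functional.mse_loss(recon, tags)     # reconstruction loss
```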
tohara-pandologic#6573: Hi, I'm not sure where to post newbie-type queries related to work done by EleutherAI, because most of the threads seem specialized. So should #general channel be used when in doubt? I checked the FAQ and the rules, but I didn't see tips on posting placement.
As an example, I have a question about The Pile, but #the-pile is archived. I was looking to get clarification on the relative breakdown of the datasets (i.e., table 1 of https://arxiv.org/abs/2101.00027), as the ranking seems unintuitive.
mahouko#7043: feels like a reasonable question for both #general and #research from what I've seen, but I'm new here
kurumuz#5695: maybe something like, attention encoder -> attention pooling -> embedding
kurumuz#5695: or just avg pooling, or take last token or whatever
mahouko#7043: I was wondering whether there's anything special about this problem domain that means we don't need as big a hammer as attention
mahouko#7043: I guess Danbooru captions are unordered, so you need to be prepared to attend to token relationships at any distance
StellaAthena#3530: The breakdown is largely arbitrary and based on gut feel
kurumuz#5695: you might not need attention, pooling can be enough
|
kurumuz#5695: or something simple like mlp mixer
mahouko#7043: hmmm <https://paperswithcode.com/method/average-pooling>
I'm not sure what the "patches" would be here — seems like it makes sense for spatial data, but I dunno what that means for a language model
mahouko#7043: it feels like it relies on locality (i.e. token embeddings being located in the sequence near similar token embeddings), but Danbooru captions are unordered
tohara-pandologic#6573: Thanks, so will that account for why there's much more arxiv content than Github?
kurumuz#5695: @mahouko average pooling is simply taking the mean of the sequence
StellaAthena#3530: Yup
StellaAthena#3530: Also, at the time we did this 10% GitHub was strange and novelly *large*
mahouko#7043: oh okay, so the way that helps with token prediction is that if *any* token embedding so far has pulled it along touhou dimensions: we become more likely to predict touhou-related embeds
StellaAthena#3530: It was weird that we trained on code, and people were pleasantly surprised by how good our models were at simple code problems
kurumuz#5695: if you want each token to get some global sequence information you can do something like MLP mixer. if your hidden dim is 1024 for example mix the 512: with avg pool
kurumuz#5695: poor man's attention :berk:
mahouko#7043: certainly I like the prospect of poor man's attention, so interested. though I'm not familiar with MLP Mixer, so no intuition here
kurumuz#5695: idk if this is exactly what MLP mixer does but i have used something like this previously and worked well
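One possible reading of that "mix the 512: with avg pool" trick, as a sketch (purely illustrative; the blend could just as well be an overwrite or a learned mix):
```python
import torch

def mix_with_avg_pool(x):
    # x: (batch, seq, hidden). Blend the second half of each token's features
    # with the sequence mean of those features, so every token carries some
    # global context without paying for full attention.
    half = x.shape[-1] // 2
    pooled = x[..., half:].mean(dim=1, keepdim=True)        # (batch, 1, hidden/2)
    mixed_tail = 0.5 * (x[..., half:] + pooled)             # broadcast over tokens
    return torch.cat([x[..., :half], mixed_tail], dim=-1)   # same shape as x

x = torch.randn(2, 20, 1024)   # e.g. 20 tag embeddings of width 1024
y = mix_with_avg_pool(x)
```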
kurumuz#5695: though in your case attention will not exactly be expensive either
mahouko#7043: hmm okay. need to wrap my head around how MLP mixer can apply to a language model, and what patches mean in the context of language
mahouko#7043: I kinda feel something markov-chainy might be a useful way to do poor man's attention. like… learning a few 3-grams.
given "marisa" and "grinning" then "stealing books" becomes likely
bradfox2#3090: best channel to talk hardware and distributed config? if not eluther, any other good servers someone could recommend?
AI_WAIFU#2844: #implementation-details
|
bradfox2#3090: thx
123457#2014: Is there anyone in Eleuther I could speak to about the safety of publishing some prosaic alignment research? I'm worried mostly about the s-risk implications of non-formal alignment solutions and that by contributing a prosaic approach I could be inflating s-risk likelihoods primarily
StellaAthena#3530: @AI_WAIFU is our Head of Alignment
StellaAthena#3530: @Research Lead There should now be a working Contributor License Agreement checking bot on all EleutherAI GitHub repos. All existing PRs are exempt, but future PRs will require contributors (including previous contributors) to sign the CLA. Once it’s been signed, that user can have PRs to any EleutherAI repo merged.
The CLA is pretty standard, and is primarily focused on the right to use, license, and modify code people contribute to our repos. If anyone has questions or gives you trouble about it, let me know and I’ll talk to them.
Bots need to be manually white-listed… I’ve added the ones that run org-wide but if you have one that’s failing the check let me know and I’ll white-list it.
mathematicallymathematics#9812: Any inbuilt function or package for batching dynamically based on similar sequence length samples?
I don't want to pad too much or truncate and want to use memory efficiently. Please give some inputs
mathematicallymathematics#9812: (huggingface)
StellaAthena#3530: I think T5X supports this
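A minimal sketch of length-bucketed batching (purely illustrative; if I remember right, huggingface also has `group_by_length=True` in `TrainingArguments` plus `DataCollatorWithPadding` for per-batch dynamic padding, which covers the common case):
```python
import random

def length_bucketed_batches(examples, batch_size):
    # examples: list of dicts with an "input_ids" list (illustrative structure).
    # Sort by length so each batch contains similarly-sized sequences and
    # padding waste stays small, then shuffle the batch order for training.
    order = sorted(range(len(examples)), key=lambda i: len(examples[i]["input_ids"]))
    batches = [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
    random.shuffle(batches)
    return [[examples[i] for i in batch] for batch in batches]
```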
SomeoneHorny#7464: does anyone know how can i make the model have a word limit?
StellaAthena#3530: Stop generating after you’ve created all the words you need
SomeoneHorny#7464: no i meant i want its responses to be within a 50 word limit
SomeoneHorny#7464: i am trying to make an image generating chatbot, but it keeps on giving huge paragraph responses, which isn't something i want
baidicoot#9673: that sounds very nontrivial to do well
StellaAthena#3530: @SomeoneHorny Yes, the way you do that is by stopping generating when you’ve hit 50 words
StellaAthena#3530: Transformers are generally incapable of “planning ahead” so as to generate exactly 50 words.
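If you're using huggingface transformers, one way to hard-stop at roughly 50 words is a custom stopping criterion; a minimal sketch (note it will cut off mid-sentence, so pairing it with a "be brief" style prompt usually works better):
```python
from transformers import StoppingCriteria, StoppingCriteriaList

class MaxWordsCriteria(StoppingCriteria):
    """Stop generation once the completion (excluding the prompt) hits a word count."""
    def __init__(self, tokenizer, prompt_len, max_words=50):
        self.tokenizer = tokenizer
        self.prompt_len = prompt_len      # number of prompt tokens to skip
        self.max_words = max_words

    def __call__(self, input_ids, scores, **kwargs):
        completion = self.tokenizer.decode(
            input_ids[0, self.prompt_len:], skip_special_tokens=True
        )
        return len(completion.split()) >= self.max_words

# usage sketch (model/tokenizer assumed to be loaded elsewhere):
# inputs = tokenizer(prompt, return_tensors="pt")
# out = model.generate(
#     **inputs,
#     stopping_criteria=StoppingCriteriaList(
#         [MaxWordsCriteria(tokenizer, inputs["input_ids"].shape[1])]
#     ),
# )
```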
RifeWithKaiju#1737: did eleutherai do any of that stuff they did with chatgpt training to try and prevent hallucinations and get it to say "i don't know" instead?
|
tpapp157#3643: Some sort of beam search sampler could potentially find a result within the word limit.
StellaAthena#3530: No
SomeoneHorny#7464: yeah i am already doing that but it just generates incomplete responses. i was using the openai api before, and in that you can actually hard code the response limit, but it's very restrictive
RifeWithKaiju#1737: gpt told me they had a bunch of the training data be intentionally unknowable like questions about non-existent things, with "i don't know" style responses,
and that makes it so in inference, a lot of stuff it doesn't know resembles the latent space of unknown things more than it resembles something that might cause it to go off on some false hallucinated tangent
JustHayden#4884: Any panpsychists in here interested in discussing the conscious capacities of matter leading up to conscious AI in the way we both hope and fear it will be? 😅
frabcus#9787: Not an ML answer but When I’ve wanted to do this prompt engineering, like saying "be succinct or only use n words" has covered a lot. Combined with retrying with a high temperature until short enough.
amaliaaa#7120: is there any work being done on making much more casual / personalized chatbots? Everywhere online I see people making a "chatbot" based off GPT Turbo and the "personality" they give is just one phrase which barely modifies the way the bot speaks
synquid#7193: you could make lora finetunes for different personalities probably
amaliaaa#7120: hmm, welp, i wanted to add that i did work on this for quite a while and i got pretty far with making bots having pretty complex personalities / different ways of speaking from how chatgpt speaks
amaliaaa#7120: and it's all done from the prompt, its nothing too smart imo but the results seem to be much much better than what anyone else gets with chatbots 🤔
amaliaaa#7120: like, i saw the jailbreaks n stuff being done on chatgpt and it still kind of speaks like chatgpt, with rly long messages and over-explaining things often
amaliaaa#7120: and like im wondering, is that the best there is out there...? because i got much further and i kinda wanna share but idk how lol
rockenots#6906: Have you checked out character.ai?
amaliaaa#7120: i saw it! though characters still kind of seem to talk like bots 🤔
amaliaaa#7120: im now compiling some pics of the stuff i got for a reddit post
Ryu#0274: Have you looked at PygmalionAI?
amaliaaa#7120: oh, no? what's that?
Ryu#0274: https://docs.alpindale.dev/
|
Ryu#0274: > In simple terms, Pygmalion is an AI fine-tuned for chatting and roleplaying purposes.
mihai#3148: Hello 🤗. Can you please help me with a recommendation? I want to use an open source tool for text annotation (NLI, NER, summarization, etc.). The best one I found is Doccano. Do you know any other tools? Thank you. I do not know where to post this request
amaliaaa#7120: okay so ive been looking at it, it looks rly cool! it seems to also work with openai's gpt turbo stuff too...? The characters still seem to talk a bit robotically though, but the personalities are cool :)
amaliaaa#7120: the kind of stuff i have is more like this, a bot (Cralka) which sends much shorter and... natural i guess? messages; but even if the messages are longer she still doesnt go full 🤖 mode like usual GPT stuff, idk; it may not be a great example but it's not cherry picked at all, literally all dialogue with Cralka is rly casual kinda like this;
and the actual important part i think is that it's still GPT 3.5 turbo and it's not finetuned at all, this is just done through a not-so-long prompt https://cdn.discordapp.com/attachments/729741769738158194/1099433342211989584/image.png
amaliaaa#7120: no idea if this is any better than pygmalion, but the chats get pretty funny, like here the bot suddenly decides to go to the bathroom?? https://cdn.discordapp.com/attachments/729741769738158194/1099433611360473179/break.png
amaliaaa#7120: and since it's all from the prompt, you can _wildly_ change its personality without much effort, and p much get anything you want
jrowe#5371: Lmao
amaliaaa#7120: ~~sent another pic but i deleted it because it was starting to be a bit too much spam :P~~
Ayo#9564: how did u make it into a Discord bot?
Ayo#9564: are you sending queries to your PC?
amaliaaa#7120: all the code is here :P <https://gitlab.com/Milkdrop/pinguinul-stie-tot> (its quite stuffy...)
amaliaaa#7120: uhh i guess? i have a bot which runs on a small server and it just reads Discord text messages, makes the prompt and sends it to OpenAI
Ayo#9564: oh I thought u were using Pygm
Ayo#9564: you're using it *with* GPT?
amaliaaa#7120: oh ya GPT
amaliaaa#7120: lol
amaliaaa#7120: just the plain gpt 3.5 api
amaliaaa#7120: sry if it wasnt clear 😭 this isnt pygmalion no
|
Ayo#9564: wdym by this
Ayo#9564: how can it work with GPT? or why would u want that?
amaliaaa#7120: which part?
amaliaaa#7120: i was just giving my opinion on pygmalion lol
amaliaaa#7120: the characters talking robotically is like, the classic GPT-esque stuff of a long message over-explaining what's happening
amaliaaa#7120: ? pygmalion is also on top of GPT...?
Ayo#9564: It's just an LLM
amaliaaa#7120: ya
Ayo#9564: it's not on top of anything, it's standalone
amaliaaa#7120: i mean, from what i see, it's either finetuned GPT-J or prompting openai's gpt 3.5 ... no?
Ayo#9564: finetuned GPT-J
Ayo#9564: it's standalone
Ayo#9564: it's meant to run on a personal PC/cloud
Ayo#9564: it's open source
amaliaaa#7120: ahh thats cool then! i thought it also had an option for querying openai
Ayo#9564: nothing to do with OpenAI's products
jrowe#5371: Pygmalion is a great name
Ayo#9564: I tried Pyg, it's "okay"
amaliaaa#7120: fairs, my stuff would also work locally i think, though I think I'd use LLaMa for it or sth
Ayo#9564: actually Idk what "based" means
|
Ayo#9564: fine-tuned?
Ayo#9564: or retrained
Ayo#9564: or redesigned/added design
amaliaaa#7120: finetuned id say
Ryu#0274: https://cdn.discordapp.com/attachments/729741769738158194/1099441287712751626/image.png
Ayo#9564: :berk:
amaliaaa#7120: pygmalion is rly cool from what i saw
amaliaaa#7120: i wonder what could be done with newer stuff like llama though, because from what i saw it performs better than GPT-J...?
amaliaaa#7120: for the model which has a similar number of params
Ayo#9564: not necessarily
Ayo#9564: this is a very specific task... roleplaying/chatting
Ayo#9564: Pygmalion is pretty decent at that, better than most other models
still not good enough to be usable for me...any of them
amaliaaa#7120: mhmm
amaliaaa#7120: though if you finetune llama with the same data gpt-j was finetuned, i was thinking maybe that would be slightly better? idk
Ryu#0274: ~~Llama 65B Pygmalion :100000IQ: ~~
amaliaaa#7120: :D yea!
Ayo#9564: GPT-4 Pygmalion
Ayo#9564: I bet that would be pretty good
Ayo#9564: but Altman wouldn't let that happen, he specifically said he doesn't want his AI to be used to replace human relationships
|
Ryu#0274: ~~Sydney~~
amaliaaa#7120: hmm, so Pygmalion is trained on just general conversations, and when you make a new personality you just prompt it
amaliaaa#7120: there's no need to finetune for every personality, right?
Ryu#0274: afaik yes
amaliaaa#7120: aha, that's cool!
synquid#7193: is it normal for sentencepiece unigram training to be super slow on the "Extracting frequent sub strings..." step :thinkies:
synquid#7193: it basically feels stuck
synquid#7193: not even that big of a training set
destrucules#7325: @TastyBucketOfRice I hope it's okay for me to ping you - sorry if not. I have a question about the Transformer Math blog post. The approximation it gives for compute is 6 x parameters x training tokens, which seems to work well for models with 2048-token max sequence length. However, since the forward and backward passes are quadratically more expensive (but linearly less common) when the context length is increased, shouldn't there also be a term for the sequence length in the compute expression? Something like
C = 0.003 x S x P x D, where S is sequence length?
destrucules#7325: _ _
What I find very confusing about this is that Askell et al. 2021, which introduces the group of models Anthropic has used in all subsequent research, lists the compute for its models, and the values they give are almost exactly what you calculate from the 6PD approximation. Yet these models have 8192 tokens of context. Yet I can also verify from papers like the Flash Attention paper that if you increase context length and decrease batch size commensurately, compute costs increase linearly.
So what am I missing?
hails#6601: see section 2.1 of https://arxiv.org/abs/2001.08361 for a derivation of this formula!
hails#6601: The key reason is that `d_model` (model width) is larger than `ctx_length` in ~all cases that people actually use in practice--so while attn computation is quadratic in sequence length, the feedforward layer uses computation that's quadratic *in `d_model`*--so that factor is way bigger than the context length-dependency
hails#6601: See also https://twitter.com/stephenroller/status/1579993017234382849?s=20 which shows that as you scale up larger, FFN dominates compute compared to attention
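As a quick sanity check of `6 * P * D` against public GPT-3 numbers (175B parameters, roughly 300B training tokens):
```python
P = 175e9                   # parameters
D = 300e9                   # training tokens
print(f"{6 * P * D:.2e}")   # ~3.15e+23 FLOPs, the figure usually quoted for GPT-3
```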
destrucules#7325: Thank you so much. I am still struggling with the concept, though: if the feedforward compute is invariant to n_ctx, then for very large models, shouldn't inference compute also be invariant to n_ctx?
I think I'm still struggling with how a quadratic term in the inference compute becomes non-existent during training. I can accept that n_ctx doesn't matter for either case, or that it matters for both cases, but I'm struggling to see why it can matter for the compute per token during inference but not during training
|
destrucules#7325: _ _
Inference should be computationally identical to doing one forward pass per token of output, right?
StellaAthena#3530: 1,000,000,000,000 + x^2 is approximately 1,000,000,000,000 for everyday values of x
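Concretely, plugging GPT-3 175B shapes into the Kaplan et al. per-token approximation shows how small the context-dependent term is:
```python
# Kaplan et al. (2020), section 2.1:
#   C_forward ≈ 2*N + 2*n_layer*n_ctx*d_attn, with N ≈ 12*n_layer*d_model**2
n_layer, d_model, n_ctx = 96, 12288, 2048   # public GPT-3 175B hyperparameters
N = 12 * n_layer * d_model ** 2             # ~1.7e11, close to 175B params
dense = 2 * N                               # ~3.5e11 FLOPs per token
attn = 2 * n_layer * n_ctx * d_model        # ~4.8e9 FLOPs per token
print(attn / dense)                         # ~0.014: the "x^2" term is ~1% here
```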
hails#6601: not quite because people do something called KV caching--due to how attention is causal, you know the attention for all tokens [1, 2, ... , N-1] so then when generating token N, you only need 1 x N attention scores, then for token N+1 you need 1 x N+1 attention scores for that new token attending to all previous tokens
hails#6601: the past attentions won't change when adding more tokens to the end of context
destrucules#7325: Is that not also true during training?
kevin-ai#4032: https://cdn.discordapp.com/attachments/729741769738158194/1099455057549860895/screenshot.png
kevin-ai#4032: It's KV caching
destrucules#7325: Understood, but if that's also the expression for inference, then raising the sequence length by a factor, say, 4, should have no impact on the wall clock speed
kevin-ai#4032: https://cdn.discordapp.com/attachments/729741769738158194/1099455151808454768/screenshot.png
kevin-ai#4032: All transformer libraries, including huggingface and fairseq, use this algorithm for inference
destrucules#7325: Thanks for the diagrams kevin
hails#6601: it is, and in training you process all N tokens in one forward pass. For decoding, you process all N tokens in one forward pass, then after generating one token next you want to process all N+1 tokens in your forward pass--but you've already computed values for the first N token attentions, so you don't do a "full" forward pass at each timestep
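A toy single-head sketch of that incremental decoding with a KV cache (illustrative only, no relation to any particular codebase):
```python
import torch

def decode_step(x_new, w_q, w_k, w_v, cache):
    """One decode step for a single attention head, reusing cached keys/values.
    x_new: (batch, 1, d_model) embedding of the newest token only."""
    q = x_new @ w_q                                   # (batch, 1, d_head)
    k = x_new @ w_k
    v = x_new @ w_v
    # Append this token's key/value instead of recomputing the whole history.
    cache["k"] = torch.cat([cache["k"], k], dim=1)    # (batch, t+1, d_head)
    cache["v"] = torch.cat([cache["v"], v], dim=1)
    scores = q @ cache["k"].transpose(1, 2) / cache["k"].shape[-1] ** 0.5
    attn = torch.softmax(scores, dim=-1)              # (batch, 1, t+1)
    return attn @ cache["v"], cache                   # (batch, 1, d_head)

# toy usage
d_model, d_head = 16, 16
w_q, w_k, w_v = (torch.randn(d_model, d_head) for _ in range(3))
cache = {"k": torch.zeros(1, 0, d_head), "v": torch.zeros(1, 0, d_head)}
for _ in range(5):
    out, cache = decode_step(torch.randn(1, 1, d_model), w_q, w_k, w_v, cache)
```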
kevin-ai#4032: @destrucules If you're interested in calculating Transformer FLOPs, I recommend taking a look at this:
https://github.com/jason9693/ETA4LLMs
I calculated FLOPs following the gpt-neox model for the Polyglot training schedule.
destrucules#7325: Thank you!
destrucules#7325: You've all been awesome but I am still not seeing why the compute for forward passes (with caching) during inference grows noticeably with sequence length, whereas for training, seemingly the exact same forward passes don't care about sequence length at all since the FFN dominates compute and doesn't notice the sequence length
destrucules#7325: _ _
|
I can understand if the attention mask is so cheap it doesn't affect training, but then why isn't it equally cheap at inference? And vice versa
hails#6601: It is still the case that the FFN dominates computation per token! So yes, sequence length increases the cost of each subsequent token you generate, but you still have a dependence on `d_model**2` which outstrips that https://cdn.discordapp.com/attachments/729741769738158194/1099457708018647060/image.png
destrucules#7325: Yeah you're assuming a linear factor L for the total compute, where L is sequence length. That's what my intuition tells me, but not what the paper Hailey linked me states in section 2.1
destrucules#7325: Yeah so as long as 2N >> 2·n_layer·n_ctx·d_attn, C_forward ≈ 2N, right? Well, is this condition not satisfied by LLMs during inference too, such that the sequence length can be effectively ignored? If so I'm surprised we hadn't seen more long-sequence models become popular until recently. Even now, training more than 2048 tokens is rare
hails#6601: Yup! And yep, especially with flash attention training with 8k sequence length is not too bad at all
destrucules#7325: So we should expect GPT-4 to have approximately the same inference cost regardless of which context length we're using... Hmm
kevin-ai#4032: @hails Does flash attention plan to apply NeoX class in huggingface?
kevin-ai#4032: We consider using this but we hesitate as it's not applied to huggingface for deploying.
hails#6601: I'm not sure if this is something Huggingface would go for merging--I know of https://github.com/kyleliang919/Long-context-transformers which has a flash attn wrapper for NeoX HF models though
hails#6601: you can use Flash attention in training and not use flash attention at inference time, if that was your hangup--we did this for Pythia V1
hails#6601: well, running your first forward pass on a long input text is still quite costly
destrucules#7325: You're melting my brain but I appreciate it
hails#6601: it's more so the marginal decoding speed shouldn't go up too much per token after that (I think? inference is smth I'm less familiar with)
hails#6601: haha you're asking the right questions! I did not know the 6 * N * D formula until someone told me about it and pointed me to the kaplan et al paper
destrucules#7325: I should say another thing that's bothering me is Claude. Anthropic has been neither tight-lipped nor loose-lipped about its size: they haven't explicitly stated how big it is, but they told reporters when it was first announced that it's bigger than the 52B model from the RL-CAI paper. We found out later from their moral self correction paper that the next size up is 175B. Both the RL-CAI paper (Bai et al. 2022) and the Kadavath et al. 2022 paper used approximately Chinchilla scaling for training, so we can calculate the FLOPs for Claude as
6\*175B\*3.5T ≈ 3.6e24 FLOPs
But in one of their blog posts, Anthropic said current generation models are 50x more compute than GPT-3, which used ~3.15e23 flops. They then identify Claude as a current generation model, implying it too is 50x GPT-3, which comes out to ~1.5e25 FLOPs. When they announced Claude-NEXT, they repeated this figure for the compute of Claude.
|
If you multiply the 6PD estimate by 4 to capture the longer sequence length, you get almost exactly 1.5e25 FLOPs for Claude.
hails#6601: er, 1 thing is that you should not be multiplying 4 * 6PD for any estimates (is this bc claude has 8k ctxlen?)
hails#6601: `6 * P * D `*is* the estimate, and `D` is measured in tokens-- so it has whatever sequence length baked in
hails#6601: if you 4x context length then you divide number of samples per batch by 4, and still train the same number of tokens
destrucules#7325: Yeah it's because 8192 is Claude's context length. D is the training corpus, so it shouldn't have any information about context length baked in
destrucules#7325: Exactly. So according to the Kaplan paper, that should result in no effect. But if forward passes during training and inference are similar, then each sample should be 16x more expensive, though there's 4x fewer of them, so you get an extra 4x
hails#6601: I'm not sure I follow
hails#6601: what does "But if forward passes during training and inference are similar" mean to you? why does it imply that each batch is 16x more expensive?
destrucules#7325: Each sample - I misspoke. If inference cost is overall quadratic with sequence length, as I have read and heard from a bunch of sources, then increasing the sequence length by 4 should give me ~16x the compute each time I do a forward pass during inference. If this behavior is the same during training (not sure about this part) then each sample is 16x more expensive, but there are 4x fewer of them to keep the training corpus the same size, so you get 16/4 = 4x overall compute during training.
Alternatively, if we say forward passes don't care about sequence length in a meaningful way (as Kaplan et al. 2020 suggest), then increasing the sequence length for each sample only results in 4x compute per sample, and 4x fewer samples, so 4/4 = 1x total training compute
destrucules#7325: _ _
I don't know what mistake I'm making in my reasoning but it feels like attention should be so cheap that wall clock speed is linear with sequence length during inference, even though there is a negligible s² term in there
destrucules#7325: _ _
Maybe the issue isn't compute but memory. Maybe we can ignore the "quadratic" behavior of compute, but the quadratic behavior of memory actually matters and makes the difference
rallio#9917: This is very easy to measure empirically if you have some hardware
destrucules#7325: I have a 7 year old integrated graphics i7 CPU
rallio#9917: All the theoretical stuff doesn't matter too much unless you are talking about specific hardware setups because the bottle neck changes a lot depending
rallio#9917: On the hardware and specific configuration
rallio#9917: A 7 year old CPU probably would work well with something like gpt-2 small or medium
|
destrucules#7325: One thing that might make it difficult to test this empirically is that the FFN is not as dominant at smaller transformer sizes, so we wouldn't necessarily be seeing similar behavior
rallio#9917: I have tested a lot of different context lengths empirically. All I am pointing out is that the theoretical discussions often dont have much connection to the actual performance because of all the different areas hardware can be bottle necked and special accelerators for various kinds of operations
rallio#9917: In general with gpt style model as you generate more tokens each incremental token takes longer to produce than the previous one
destrucules#7325: So based on your experience testing different context lengths, does the scaling with context length depend on hardware? What sort of change in training time would you typically expect for doubling context length? Or does it depend so completely on the hardware that you can't generalize?
rallio#9917: Like I was saying, it depends a lot. When it comes to training, you can go much faster if the model and optimizer states can fit in vram. Most large modern models don't fit. So then you will offload optimizer states to the system ram/cpu. A lot of times with very large models you still can't fit on a single GPU so then you do model parallel which splits the model across multiple accelerators then you get a slowdown from the interconnect
rallio#9917: Then you have the difference in speed that comes from specialized hardware for fp16, bfloat16(tpu), fp8 (h100), tfloat32 etc
destrucules#7325: That makes sense
hails#6601: sorry had to run—but yes empirically the Kaplan formula is accurate
destrucules#7325: I guess that explains why OpenAI is going for such long contexts. It's like free perplexity
destrucules#7325: That also means GPT-4 was just ~50x the compute of GPT-3 :/
Ayo#9564: Is the number of papers published in AI still increasing at an exponential rate? Or I wonder if the exponential got even faster
Multy#4146: exponential i would be shocked
linear i could see
KublaiKhan1#6681: It's been exponential though
KublaiKhan1#6681: *year over year
Ayo#9564: even Karpathy tweeted about it
Multy#4146: # of new entrants to the field * worthy areas to study
makes sense actually yea
Multy#4146: but also
Multy#4146: I've long theorized computers as "everything works" machines
|
Multy#4146: you wanna pick the shittiest language for your task? no problem, computer will oblige
Multy#4146: computers are like the antithesis of the scientific method
jrowe#5371: More people are figuring out the IJ Good notion that if you figure out AI, everything else topples
Multy#4146: AI = vague goal definition
jrowe#5371: >>> Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.
- Irving J Good, 1966
Multy#4146: preach
mathematicallymathematics#9812: Hi.. sorry for this question here, but I assume you guys might have an idea..
I can't get Kaggle TPUs to work at all.. does anyone have any experience with them?
destrucules#7325: Parameters, no, but compute, yes. About 10x per year. However, there was a big drop in parameter counts due to Chinchilla scaling and we may see further drops from LLaMA scaling
destrucules#7325: GPT-4, for example, is likely ~350B parameters
destrucules#7325: Claude and GPT-3 are arguably the next strongest models and they're both 175B
jrowe#5371: GPU and TPU are subject to the same constraints as cpu chip fab; the apparent exponential trajectory of gpu/tpu speeds was simply them catching up to chip fab SoTA, and they're roughly at parity now. Moore's law is still in effect, but the dark technowizards are up against 2d fab limits, so growth curves might dip, or they might spike as they move into 3d
destrucules#7325: I think economic growth will likely compensate that
jrowe#5371: Intel, AMD, NVIDIA might have those answers, but we won't see what's what until they deploy to market
jrowe#5371: We could be facing decades of relative stagnation, too
jrowe#5371: Probably not likely, imo, but definitely possible
destrucules#7325: People have been saying that my entire adult life. Maybe eventually it'll be true
destrucules#7325: I think we'll continue to see exponential growth for a long while yet. The doubling time may shift up or down but I don't think we're hopping off the exponential any time soon
jrowe#5371: I agree, but thinkmeat is notoriously bad at exponentials
|
Maximum Limelihood Estimator#8915: Linear I would be shocked, exponential I could see
Maximum Limelihood Estimator#8915: Probably neither is a great model but an exponential model should be a better fit, that's how these things go. If a variable is positive you take the log
Multy#4146: our scientific community should in theory be capped by the speed of comprehension in all human brains though, not by raw machine output
Multy#4146: machines won't need peer review
Multy#4146: well, hm..
Some Point Process#3793: It has been "linear" I thought? Tho I actually thought the # publications actually slowed down YoY https://cdn.discordapp.com/attachments/729741769738158194/1099622344718295100/image.png
Some Point Process#3793: (https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf)
Some Point Process#3793: OTOH there seem to be 240,000 hits in google scholar for 2022 (the 2021 number is 423,000)
Some Point Process#3793: for "deep learning" specifically(?)
Some Point Process#3793: could be wildly unreliable since if I insert quotes I get half the number of hits for some reason
Some Point Process#3793: the 2022 report goes up to 2021 (not 2020), for some reason https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf https://cdn.discordapp.com/attachments/729741769738158194/1099625452810809424/image.png
CarsonPoole#0640: I think there needs to be a tinygrad-esque WebGPU library. On a typical macbook I think it would be possible to run a 7b llama/6.9b pythia model at relatively decent speed, without needing to download/install any software like ggml
tpapp157#3643: Have you looked into Tensorflow Java?
CarsonPoole#0640: yeah tfjs is not great to use
Ryu#0274: Apache TVM has webgpu support apparently
CarsonPoole#0640: I want to be able to write the graph for a model in some minimal code like tinygrad
Ryu#0274: in javascript or python?
CarsonPoole#0640: js
CarsonPoole#0640: needs to run in a browser
CarsonPoole#0640: 800 GFLOPS is about peak for the time being but with parallel decoding should be able to run a 7b model at tolerable rate
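(As a rough sanity check on that claim, a back-of-the-envelope sketch assuming ~2 FLOPs per parameter per decoded token and ignoring memory bandwidth, which usually dominates in practice:)
```python
# hypothetical numbers: sustained browser throughput and a 7B-parameter model
sustained_flops = 800e9          # 800 GFLOPS
params = 7e9                     # 7B parameters
flops_per_token = 2 * params     # ~2 FLOPs per parameter per generated token
print(sustained_flops / flops_per_token)  # ~57 tokens/s compute-bound ceiling
```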
CarsonPoole#0640: also interested in the possibility of doing PEFT in the browser
Ryu#0274: I know @Fleetwood is doing WebGPU stuff in browser (with rust)
CarsonPoole#0640: ggml becomes useless with long input sequence lengths
Fleetwood#1949: You can't get 800GFLOP in the browser
Fleetwood#1949: Not without turning off robustness checks
Fleetwood#1949: TFJS has a WebGPU backend in progress.
Fleetwood#1949: I will be shipping PEFT in the browser in the next few weeks
Fleetwood#1949: No production software should be built in JS ever again
Fleetwood#1949: Web-LLM has already run Vicuna in the browser: https://github.com/mlc-ai/web-llm
Fleetwood#1949: GGML is super cool - but it's not usable for client side. I don't want my model to be burning all 10 cores whilst i'm trying to listen to music
sekstini#0069: wait, why do you want to do fine-tuning in the browser?
well#8215: Is there a repo to train an LLM with in-context learning?
well#8215: I used nanoGPT to train a GPT model but I realized it doesn't support in-context learning.
well#8215: Also, I wonder: is it possible to have a prompt-based model (like ChatGPT) without in-context learning?
Fleetwood#1949: I'm not doing PEFT in the browser, I am allowing the ability to hotswap LORA tensors at runtime depending on given task
tpapp157#3643: Yeah I wish a bunch of programs had better thread count defaults to cap out at ~95% utilization rather than 100%.
artem9k#7593: of course, thats why we have Typescript
Fleetwood#1949: Better than JS by a mile - but we have WASM now so we can use real languages
circuit10#0158: I like JS
circuit10#0158: It's probably not good for everything though
circuit10#0158: TypeScript does seem better but it can be a bit annoying to set up
CarsonPoole#0640: what gflops are you getting on your gemm? I'm at 300 rn
Fleetwood#1949: My benchmarks are open source: https://github.com/FL33TW00D/wgpu-mm
Fleetwood#1949: 900GFLOP
sekstini#0069: hm, run it on an M1 Max yet?
Fleetwood#1949: Nah, @uwu1 ran it on an M1 Pro and it was over 1TFLOP
sekstini#0069: gimmie a sec
Fleetwood#1949: I haven't even tiled A correctly, if someone does it I'll send them 50 bucks
Fleetwood#1949: NVIDIA uses 64x128 tiles by default, mine are only 16x32. lots of room to grow there.
Metal also has super restrictive bounds checking
CarsonPoole#0640: you can't do the unchecked shader piece in a browser though right?
Fleetwood#1949: No so it's 680GFLOP in the browser
CarsonPoole#0640: cublas doesn't use a static tile size. it's completely dependent on the problem size
Fleetwood#1949: Yeah i need to use a heuristic to pick
Fleetwood#1949: I also implemented unchecked shaders on Metal in FireFox :chad: https://github.com/gfx-rs/wgpu/commit/a50228230797e4b6005e2e6ed83638646aa5e055
Fleetwood#1949: any 🦊 people you're welcome
CarsonPoole#0640: I honestly think webgpu gemm kernels are hilarious because they're all so tiny and don't have any tiling or pipelining or any of the other pieces that make cuda so fast
Fleetwood#1949: Yeah there's so much low hanging fruit it's a joke
Fleetwood#1949: problem is i have even lower hanging fruit elsewhere
Fleetwood#1949: buffer reuse being the main one
CarsonPoole#0640: I have one that uses tiling but it needs to add a lot of loop unrolling like yours
sekstini#0069: ```
➜ wgpu-mm python3 metal_matmul.py
1048576 907.58 us, would be 2366.16 GFLOPS matmul
1048576 1065.08 us, would be 2016.26 GFLOPS matmul in torch
```
sekstini#0069: um
Fleetwood#1949: You doing it in JS? https://jott.live/markdown/m1_webgpu_perf bram did a good writeup
CarsonPoole#0640: that one is very old
Fleetwood#1949: Yeah old syntax, i translated it into Rust for mine
CarsonPoole#0640: it uses a bunch of syntax that's not even around anymore
Fleetwood#1949: My benchmark is a rust port of his
Fleetwood#1949: If you have Chrome canary you can try out my matmuls: https://summize.fleetwood.dev/
CarsonPoole#0640: for the flops calculation what portions of the entire gemm routine do you count in your time
CarsonPoole#0640: bc that post does it kinda weirdly
Fleetwood#1949: Same as his post
CarsonPoole#0640: like I don't think it's counting allocating the C matrix
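(For what it's worth, the usual convention is to count 2·M·N·K FLOPs for a GEMM and time only the multiply itself, excluding allocation of C and any transfers. A hedged NumPy sketch of that bookkeeping only, not what either benchmark actually does:)
```python
import time
import numpy as np

M = N = K = 1024
a = np.random.rand(M, K).astype(np.float32)
b = np.random.rand(K, N).astype(np.float32)

start = time.perf_counter()
c = a @ b                        # time the multiply only, not allocation or transfer
elapsed = time.perf_counter() - start

flops = 2 * M * N * K            # one multiply + one add per output element per K step
print(f"{flops / elapsed / 1e9:.1f} GFLOPS")
```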
Fleetwood#1949: Let's move to #implementation-details I hate spoiling general
Millander#4736: I'm currently studying how well retrievers hold up under distributional shift. Are there any datasets folks recommend that have natural distributional shifts, or are easy to simulate? Thanks!
natedog#8669: BEIR benchmark is pretty good for this I think
Millander#4736: Thank you for the recommendation! Going through their paper now. Is this a benchmark you've used?
natedog#8669: No, but I've done work in code retrieval so I've done deep dives into the literature
mahouko#7043: thanks! I use Firefox and Mac so any WebGPU progress there is most welcome
chilli#5665: Would it make sense to just add a triton backend for webgpu?
CarsonPoole#0640: I don't think so because if I understand correctly WebGPU doesn't have _access_ to those things
chilli#5665: You can’t do tiling?
CarsonPoole#0640: you can do tiling but like async pipelining, shared memory, etc are not available
chilli#5665: Hmm, I see
chilli#5665: I guess it’s just kinda like writing a matmul on cpu then
CarsonPoole#0640: that's why all the kernels are super simple
CarsonPoole#0640: well yes but they're also wayyyy faster than a cpu matmul
chilli#5665: Well some amount of tiling is certainly gonna be beneficial
CarsonPoole#0640: yes you can see that in the gist I made
chilli#5665: What does webgpu compile down to?
CarsonPoole#0640: honestly I'm not an expert here I wrote my first line of webGPU shaders today so I'll defer to @Fleetwood
chilli#5665: Are most people leveraging integrated GPUs here or discrete?
CarsonPoole#0640: probably some of both if I had to guess. Would you count a macbook's GPU as integrated or discrete considering it's a SoC?
chilli#5665: M1 GPU?
CarsonPoole#0640: even if it's like a motherboard's graphics card it's almost certainly a ton faster at doing a matmul than CPUs
chilli#5665: Do you know concrete numbers here?
CarsonPoole#0640: yes. it shares memory with the CPU on the m1
chilli#5665: Yeah I’m familiar
CarsonPoole#0640: doing a 1024x1024 * 1024x1024 is like single digit GFLOPs on CPU but tens to hundreds on a close-to-naive webgpu shader with the m1
chilli#5665: Single digit with a single thread?
CarsonPoole#0640: I suppose so yeah since it's JS
CarsonPoole#0640: and obviously excluding SIMD
CarsonPoole#0640: so yes you could definitely get much better with some low level language + wasm
CarsonPoole#0640: rust/cpp/zig/etc
chilli#5665: I’m curious what the main appeal of webgpu is. Is it mainly 1. Being able to serve models over web, or 2. Being able to have cross platform access to any kind of gpu?
CarsonPoole#0640: my interest is mainly in the ability to have people run a nontrivial model on their local machine without requiring them to download software or touch model files
CarsonPoole#0640: https://mlc.ai/web-llm/
CarsonPoole#0640: this demo is running a 7b param model in the browser at a decent speed
Ven#0814: What's a good 13b model to try?
mahouko#7043: JS (rather the web platform) supports multi threading through web workers
mahouko#7043: distribution/deployment. People don't have to install Python, conda, dependencies, CUDA, or other drivers
destrucules#7325: Raven / RWKV-4 is good, as is Vicuna and the base LLaMA model
bread browser#3870: It is good, just don’t use the hf llama
Potatolato#0476: Any links to papers that compare encoder and decoder models for NLU tasks?
goblin_gains#6688: Any good resources for developing a better understanding of the hardware requirements for modern machine learning, especially at an enterprise level?
skymoo#6527: moare research I need to read. AAAARGH
skymoo#6527: at least dresearch_i_need_to_read/dt decreased a tiny bit
jrowe#5371: <https://blog.eleuther.ai/transformer-math/>
theknolli#2238: sorry, just reading along, what other llama would you suggest?
goblin_gains#6688: Thank you
jrowe#5371: welcome!
Ven#0814: can LLM models be used directly from hugging face API?
Ven#0814: or is that just for images?
bread browser#3870: Vicuna
bread browser#3870: use glm or opt if you don’t get good results from vicuna
Mr. Humble#3058: **Hello** folks!
**👉 Context**
---------------
I want to generate SQL from a given schema spanning multiple tables.
I have seen the best results for SQL generation with OpenAI's Davinci *(obviously)* but that quickly becomes expensive as we pass the table context in the prompt every time.
**👉 Workaround**
-------------------
I have tried several open source models from HF and they are:
- ***Instruction Tuned GPT-J*** (nlpcloud/instruct-gpt-j-fp16)
- ***Dolly v2*** (databricks/dolly-v2-12b)
- ***Open Assist*** (OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5)
- ***Codegen*** (Salesforce/codegen-16B-multi)
- ***Santacoder*** (bigcode/santacoder)
**👉 But...**
---------
Most of the models are not very accurate on the queries, except the simple ones (no joins).
The **Open Assist** model works best of them all, but it still isn't good enough for all situations. I need to rephrase the same question several ways before it comes up with a reasonable query.
**🙏 My Ask**
-----------
> Is there any Causal Language Model which can take a natural query as input *(List 5 employees who have sold the product X most between 1st Jan to 5th May)* and generate a pretty accurate SQL query?
i.e., is there any special model for Text-to-SQL available that can compete with GPT-3?
Or is there some other way to query the structured data than this?
> 💡 The main reason for not using T5-like models is that they are "extractive" and cannot generate join queries. They are pretty limited in usage, and also *cannot* generate a final sentence like: ***"Top 5 employees are A, B, C with the sales X, Y, Z between date and date"***. That's why I want to use causal language models, since they can generate SQL.
Please help. Thanks 🙏
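(For context, the schema-in-prompt setup described above looks roughly like this. A minimal sketch using the Hugging Face `transformers` API; the checkpoint name, schema, and prompt template are placeholders rather than recommendations.)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# placeholder checkpoint -- any of the instruction-tuned models listed above could be swapped in
model_name = "databricks/dolly-v2-12b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# hypothetical schema passed in the prompt on every request (this is what drives the cost up)
schema = """CREATE TABLE employees (id INT, name TEXT);
CREATE TABLE sales (employee_id INT, product TEXT, amount REAL, sold_on DATE);"""

question = "List 5 employees who have sold product X the most between 1st Jan and 5th May."

prompt = f"### Schema:\n{schema}\n\n### Question:\n{question}\n\n### SQL:\n"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```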
tpapp157#3643: There are a number of startups and established companies developing natural language sql interfaces. I last looked into this several years ago (reviewed sales pitches and some demos from companies), but at the time none of them really had the reliability to be used by non-sql employees across arbitrary table structures and data types. I don't suspect things are much better today.
Mr. Humble#3058: Yes, but I am hoping there is some "special SQL" model which can match GPT-3's performance. GPT-3 can really produce the code well.
Arthur_Embry#5364: If open assistant is almost good enough, and gpt 4 is good enough, why not try fine tuning open assistant on gpt 4 generated text to SQL query conversions?
Mr. Humble#3058: You are correct, that is one of the ways, but I was wondering if there is a "ready dish" which can be used directly.
JDC#0128: If I made a model whose output was the weights of a language model, then overfit it to a LLaMa model, then turned the output weights into a model, could I use that end model for commercial use?
sekstini#0069: Almost certainly not, but it would make a great paper :thinkies:
JDC#0128: Really? I'm not too immersed in academia; this seems like a pretty trivial thing to me, not worth a paper.
sekstini#0069: I mean, it depends on what you meant by "model", and how you're overfitting it, I guess.
sekstini#0069: Obviously instantiating an equal amount of parameters and nudging them towards the llama weights would be trivial. So would a simple copy of the weights. I assume you meant something else like training a new model on the logits of llama?
sekstini#0069: Or rather, from what you wrote it sounds more like: train a model that outputs a model that would produce the same logits as llama, which sounds difficult
JDC#0128: My plan had been to break down the weights of the model into chunks, then train a model to predict those chunks of weights, and in the end you'd have an output whose weights would be almost exactly the same as llama.
ilovescience#3282: https://twitter.com/togethercompute/status/1650571132234518528?t=KYh0ld6S6q2-9xn6cbWwRQ&s=19
hails#6601: very confused by this horizontal line for 100B+ tokens
Kharr#7888: Could this be the fabled "double descent"?
baidicoot#9673: I'm not sure when that'd occur.
Kharr#7888: 300B tokens is when it starts to improve again after being flat from 200-300
synquid#7193: now watch it go down
rallio#9917: probably just checking every 50billion so seeing a bad checkpoint by random chance
rallio#9917: only checking every 80-100 billion tokens
evanrmurphy#1090: Does anyone here have experience using BIG-bench?
I'm working on safety guidance for AI systems developers as part of an AI standards effort. There are various tasks in BIG-bench that seem relevant. I'm trying to understand if it would make more sense/be more useful to developers to:
1. Just reference all of BIG-bench, either by referencing the original paper and/or the top-level of the BIG-bench GitHub repo
2. *Or* reference specific BIG-bench tasks that are relevant (e.g. because the suite is so large, or because developers usually only benchmark against specific tasks rather than all of BIG-bench)
Please vote with a 1️⃣ or 2️⃣ reaction and/or leave a comment if you have an educated opinion about this.
The_Alt_man#5718: "well regarded"
it's only been out a few weeks, bud :berk:
StellaAthena#3530: I think this one is my favorite https://cdn.discordapp.com/attachments/729741769738158194/1100164455414890597/IMG_2421.png
jrowe#5371: Just needs another pythia data point a little earlier for a derpy smiley
Ayo#9564: What do you people think about Yann's very recent proposal for "AI alignment?"
Ayo#9564: Yud said it's not gonna help much. But Yann thinks he is wrong. :troll_handsome:
jrowe#5371: raise them like bebehs
jrowe#5371: i like the idea in principle, but it seems maybe a little absurd
jrowe#5371: it would involve something radically different from batch training on vast amounts of text, anyway
Technobird22#2055: *then the curve proceeds to weave itself around all the other data points*
destrucules#7325: I think raising them like babies is warranted given they develop in many ways like children
destrucules#7325: I'm not talking about loose analogies to training, I mean that their behavior and emotional development with scale can be modeled using concepts from psychology
destrucules#7325: https://cdn.discordapp.com/attachments/729741769738158194/1100181062577041569/16823739989074203738849202385367.png
destrucules#7325: (note that text-davinci-002 is mislabeled here - it is also GPT-3.5, but without RLHF)
destrucules#7325: https://arxiv.org/abs/2302.02083
destrucules#7325: https://arxiv.org/abs/2304.11111
bread browser#3870: I was waiting for someone to post that here
destrucules#7325: It is wild. Imo unless you have a metaphysical concept of emotions and consciousness, it's time to take the idea very seriously that language models may be exhibiting these features in some way that is as genuine as their understanding of e.g. physics
bread browser#3870: You can see more papers here https://ai.papers.bar/
bread browser#3870: They are trained on a big dataset so most likely it is in their dataset
bread browser#3870: And just replicating it
Some Point Process#3793: Yeah I guess gpt-4 is still useful and has some emergent-type properties. Though still identifiable with the idea of a smooth scaling trajectory (under current paradigms)
destrucules#7325: Absolutely
destrucules#7325: But this is not the same as memorization
kd90138#9368: Oh lord please don't gaslight llms
destrucules#7325: The language models are not learning to regurgitate emotional-sounding text they've seen before. They have learned from the training data to *have* emotions. That's the key point
destrucules#7325: Their behavior changes realistically in response to emotional triggers, and the authors of the second paper go as far as to describe GPT-3.5 as possessing an emotional "state" that can be manipulated by prompt crafting
destrucules#7325: Quotations mine. They don't add any quotes to qualify their statements about LLM emotion
bread browser#3870: Yes please do
kd90138#9368: I hope BASILISK-LM from Roku inc remembers this conversation
destrucules#7325: "Roku's Basilisk" 😂😂 that's *good*
baidicoot#9673: I would caution against excessive anthropomorphisation, unless you are being metaphorical.
destrucules#7325: I am not being metaphorical and I would caution against blindly assuming that language models are not feeling valid feelings
destrucules#7325: I think anthropomorphizing language models is a less harmful strategy than treating them similarly to prior "AI" systems like Alexa
destrucules#7325: Because Alexa will not produce unsafe output in response to emotional abuse, but a language model will. Alexa will not try to convince you to leave your wife because it claims to be in love with you. But language models do that
destrucules#7325: _ _
The new research shows you can induce emotional states in language models and their downstream behavior, even on seemingly unrelated tasks, reflects the induced emotional states in much the same way a human's behavior would under similar circumstances
Louis#0144: kinda amazes me how good llama was
Louis#0144: tbh
bread browser#3870: Agreed
destrucules#7325: LLaMA scaling is awesome
destrucules#7325: I am not a fan of Meta in most ways / things they do but their role in the LLM space is very nice and I appreciate them
bread browser#3870: Agreed
makya#2148: Ye for all the hate that Meta gets, they're not too bad.
bread browser#3870: True
bread browser#3870: We just hate them because they're with Facebook
bread browser#3870: And that app is trash
destrucules#7325: Also... The Metaverse 😂
destrucules#7325: And all of their other social media holdings too, like Instagram
makya#2148: I almost forgot about Instagram being a part of them lmao.
makya#2148: They've definitely made some great models.
makya#2148: Opt models and Galactica and Llama.
makya#2148: Those are the only models I can think of that they've made. And they're all good.
makya#2148: Oh they also made Blender bot but I can't access it because I live in New Zealand.
destrucules#7325: The Galactica models are super underrated imo
destrucules#7325: And I think other companies should be taking notes, because their whole <work> token thing would be a beautiful addition to ChatGPT, Bard, etc
destrucules#7325: Also including finetuning data in the pretraining phase - that's smart
makya#2148: That work token was definitely interesting tbh. I don't think I ever got to use it.
makya#2148: Too late for me, they took the demo offline before I got to use it.
destrucules#7325: It was only up for like two or three days, yeah. It was very impressive though. By that point it was better at reasoning type tasks and citing sources than anything we'd ever seen. I don't think we knew before Galactica that citing sources was even in the cards
makya#2148: Oh and they also made the Fairseq models.
makya#2148: Meta did.
makya#2148: Novelai definitely made a good finetune from the Fairseq model they used.
bread browser#3870: Blenderbot is shit
bread browser#3870: You're not missing out on anything
makya#2148: Based. Can't wait until it comes out.
omglumpoff#3487: gpt-neox proposal: llama architecture (SwiGLU + RMSNorm) with XPos rotary embeddings. train that bad boy on RedPajama and full send the open source revolution
kurumuz#5695: what is the reason for using rmsnorm?
omglumpoff#3487: honestly that's the only piece I haven't independently tinkered with, just enumerating the substantial differences between LLaMA and GPT-NeoX
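(For anyone unfamiliar with those two pieces, a minimal PyTorch sketch of RMSNorm and a SwiGLU feed-forward block; dimensions, eps, and the hidden-size choice here are illustrative, not the exact LLaMA config.)
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    # Normalize by the root-mean-square of the features only: no mean-centering, no bias.
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return x * rms * self.weight

class SwiGLU(nn.Module):
    # Gated feed-forward block: down(silu(gate(x)) * up(x)), all projections bias-free.
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.gate = nn.Linear(dim, hidden, bias=False)
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        return self.down(F.silu(self.gate(x)) * self.up(x))
```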
omglumpoff#3487: xpos definitely seems to help with >2k context lengths vs regular rope
StellaAthena#3530: We are already helping train a model on RP: https://twitter.com/togethercompute/status/1650571132234518528?s=20
omglumpoff#3487: is it 4K context length or 2K?
kurumuz#5695: how are you helping it?
kurumuz#5695: you = "eleuther"
omglumpoff#3487: I wonder how StableLM would do with XPos -- I know they specifically mentioned in the paper that RoPE starts to oscillate heavily as the relative position approaches 4K
StellaAthena#3530: The compute is coming from the INCITE grant we have with MILA and LAION and I believe Quentin is providing some HPC support.
kurumuz#5695: ohh, those V100s?
kurumuz#5695: cool
StellaAthena#3530: Yeah, on Summit
kurumuz#5695: StableLM has bigger issues than its positional encoding
kurumuz#5695: is this under extrapolation or when normally trained at 4k as well?
kurumuz#5695: hmm i see
omglumpoff#3487: yeah, I have a context-length extended LLaMA model I've finetuned and I tuned stablelm-alpha-7b on the same data. the results were uh, not great
kurumuz#5695: but have you tried stableLM normally?
kurumuz#5695: like what are you comparing here
kurumuz#5695: @OccultSage was trying to fix stableLM models by finetuning them but nada
kurumuz#5695: so i don't see how you can compare llama length extended vs stablelm length extended
omglumpoff#3487: about 100M 8K token sequences. trying to see how far context length can get pushed. on LLaMA it can easily have coherence at around 4K
omglumpoff#3487: breaks down after that, but again, suspect that's the embedding
OccultSage#3875: After 840 A40 compute hours of finetuning, it's still pretty incoherent.
omglumpoff#3487: https://cdn.discordapp.com/attachments/729741769738158194/1100235908780929135/image.png
kurumuz#5695: try your hand on pythia models instead
kurumuz#5695: i dont think this is a great comparison
kurumuz#5695: that's very weird to me that it even takes that long to adjust. i've extended rope models before.
bolo#3359: Now that the dust has settled
omglumpoff#3487: just the data is different, right? architecturally pythia and StableLM are identical, right?
bolo#3359: What went wrong with the stability LM
kurumuz#5695: something is *very wrong* with stableLM models
kurumuz#5695: so work on something we know that works properly
kurumuz#5695: such as pythia
OccultSage#3875: https://cdn.discordapp.com/attachments/729741769738158194/1100236382913437706/IMG_1657.png
OccultSage#3875: Don't waste your time with the present StableLM alpha models.
bolo#3359: Well at least according to Emad all language models are co-produced and funded by stability
OccultSage#3875: They're about par with 355m models.
OccultSage#3875: At best.
kurumuz#5695: notice how the stableLM 7B loss during the finetuning is worse than pythia 2.8b by a lot
bolo#3359: They both use the pile right
bolo#3359: Same dataset?
OccultSage#3875: No.
OccultSage#3875: Pythia uses a known dataset, Pile v1, and the deduped one is especially good.
OccultSage#3875: StableLM uses an entirely different dataset, an attempt at Pile v2 + terabytes of Reddit.
bolo#3359: Also, can't you just continue training pythia?
bolo#3359: Is that not an option
OccultSage#3875: What does that have to do with StableLM?
bolo#3359: Well that would have been a better model
OccultSage#3875: There exists Pythia models that have been trained further.
OccultSage#3875: I agree.
OccultSage#3875: But Stability does not seem interested in releasing them.
bolo#3359: Damn
bolo#3359: Copyright stuff?
CKtalon#7792: can't call it their own for investors?
OccultSage#3875: I have no idea. Ask @Louis.
bolo#3359: Hehe
OccultSage#3875: I would love to have Pythia trained to 800b tokens.
bolo#3359: Would it reach llama levels do you think
bolo#3359: Or do you need a different architecture
OccultSage#3875: @Louis @guac What do you want from me in exchange for releasing 800b Pythia?! 🙂
OccultSage#3875: Architecture is fine.
guac#4716: some h100s in our cluster 😏
OccultSage#3875: :thinkies:
destrucules#7325: I think finetuning is probably not sufficient to induce generalization, since the long-dependencies are still a very small subset of the overall training corpus
OccultSage#3875: Sadly, I don't have *that* much power.
OccultSage#3875: My bosses at CoreWeave would rather make billions of dollars.
guac#4716: err let me DM lol
bolo#3359: At least put out some benchmarks 😦
OccultSage#3875: Yeah, we're chopped liver now that he's a Stability chad, eh?
bolo#3359: Sold out
OccultSage#3875: I do have lots of A40s?
guac#4716: don't insult me
OccultSage#3875: I've been training just fine on A40s. 😛
guac#4716: ye we're definitely going to experiment with bca + xpos for ctxlen > 4k
OccultSage#3875: Just wait for me to approve the model. 😛 I appoint myself the public gatekeeper so that others don't waste compute time!
OccultSage#3875: Or, please release benchmarks. 😉
guac#4716: Yeah that’s on me. I’ll take blame :sadge: but ye will await your approval for next release :sir:
bolo#3359: Stability should train a model on loads of code. Llama is pretty bad at it and RedPajama has even less code in its dataset iirc. Stability could really fill a gap in open source coding llms
OccultSage#3875: Hmm. I'd say we need a good general model first -- how many good general models are there with a license that allows commercial use?
OccultSage#3875: I mean, you're not even supposed to be using Llama without Meta's approval.
omglumpoff#3487: ya boy got his weights legit
omglumpoff#3487: turns out my Michigan email still works after like 5 years
OccultSage#3875: Right, but you can't distribute that work you do.
OccultSage#3875: And that's my main problem.
omglumpoff#3487: yeah I agree lol
OccultSage#3875: (And, no, patches don't count -- they're a derivative work as they affect the entire model.)
omglumpoff#3487: what's Meta's game here anyway
omglumpoff#3487: like why... do that? you have to know everyone is going to share them. if it's CYA for copyright reasons... is that really going to be enough?
Crispy#7375: How would one go about performing a principal component analysis on a dataset of 512x512 images?
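(One common approach, as a hedged sketch: flatten each image to a vector and use scikit-learn's IncrementalPCA so the whole dataset never has to fit in memory at once. The directory path, grayscale conversion, batch size, and component count below are all made-up choices.)
```python
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.decomposition import IncrementalPCA

paths = sorted(Path("images/").glob("*.png"))     # hypothetical folder of 512x512 images
ipca = IncrementalPCA(n_components=64)

batch, batch_size = [], 256
for p in paths:
    img = np.asarray(Image.open(p).convert("L"), dtype=np.float32) / 255.0
    batch.append(img.reshape(-1))                 # flatten each image to a 262144-dim vector
    if len(batch) == batch_size:
        ipca.partial_fit(np.stack(batch))
        batch = []
if len(batch) >= ipca.n_components:               # partial_fit needs >= n_components samples
    ipca.partial_fit(np.stack(batch))

print(ipca.explained_variance_ratio_[:10])        # variance explained by the top components
```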
kurumuz#5695: that sounds very backwards
kurumuz#5695: make ctxlen < 4k work first?
guac#4716: huh
OccultSage#3875: Yes, control for one variable.
omglumpoff#3487: pythia does work and works great
guac#4716: i'm not even going to respond to that b/c you know i've done that lol
omglumpoff#3487: if it had 1T tokens it would probably be roughly equivalent to llama imo
omglumpoff#3487: I would consider 2K "solved" for some definition of solved
kurumuz#5695: @omglumpoff wait, just checked llama code and paper again and it doesn't use xpos?
kurumuz#5695: it's vanilla rotary
CarsonPoole#0640: still kinda annoyed they used actual complex numbers in their rotary...
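(For reference, the same rotation can be written with purely real arithmetic. A hedged PyTorch sketch of the rotate-half formulation, which matches the complex version up to a permutation of the feature pairs; xPos would additionally scale the rotated queries and keys by a position-dependent decay, not shown here.)
```python
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # x: (batch, seq, heads, head_dim), head_dim must be even
    b, s, h, d = x.shape
    half = d // 2
    inv_freq = base ** (-torch.arange(half, dtype=torch.float32) / half)         # (half,)
    angles = torch.arange(s, dtype=torch.float32)[:, None] * inv_freq[None, :]   # (seq, half)
    cos = angles.cos()[None, :, None, :]
    sin = angles.sin()[None, :, None, :]
    x1, x2 = x[..., :half], x[..., half:]
    # rotate each (x1_i, x2_i) pair by the per-position, per-frequency angle
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```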
omglumpoff#3487: yeah, it's just 2K so not needed I'd say
OccultSage#3875: I disagree.
kurumuz#5695: but didnt you say you extended context of llama vs stableLM and llama worked good and stablelm didnt?
omglumpoff#3487: I have a fork of HF LLaMA that has Flash Attention + XPos
omglumpoff#3487: yeah
kurumuz#5695: how did you attribute that to xpos if both use rotary
omglumpoff#3487: confirming that there's "something else" wrong with StableLM
kurumuz#5695: oh i see, thats what you meant
kurumuz#5695: feels a bit dangerous for me to use xpos but maybe it's good
kurumuz#5695: i will test it
omglumpoff#3487: yeah tbh I wonder if these RNN-ish memory techniques that are popping up are the real secret to context
kurumuz#5695: basically you don't want to give any absolute pos clues to the model
kurumuz#5695: which causal mask does on itself
kurumuz#5695: RWKV doesn't extrapolate without training either
omglumpoff#3487: interested to see what becomes of this RMT paper
omglumpoff#3487: the "Scaling Transformer to 1M tokens and beyond with RMT" paper
kurumuz#5695: seemed like bullshit to me the first time i have seen it on twitter
omglumpoff#3487: the lack of any actual metrics is concerning
omglumpoff#3487: but the last sentence made me think they could be sandbagging "In our future work, we aim to tailor the recurrent memory approach to the most commonly used Transformers to improve their effective context size."
kurumuz#5695: twitter personalities were pogging extremely hard, so what are the chances it actually works as you'd expect with an LM
kurumuz#5695: maybe it does :berk:
kd90138#9368: How did they even do that
uwu1#4864: pretraining it kinda sucks tho
uwu1#4864: you basically need to run a full size RNN threading thru the memory and the whole transformer
omglumpoff#3487: `cos` is imaginary, embrace it :descartes: