OccultSage#3875: It's all in your head.
omglumpoff#3487: to quote the great philosopher Albus Percival Wulfric Brian Dumbledore: of course it's all in your head, but why on earth should that mean it's not real?
Maximum Limelihood Estimator#8915: If I want to visualize something, I theoretically get 6 dimensions maximum: 3 of color and 3 spatial dimensions. Is there a library/function to actually do that in such a way as to maximize the amount of info being blasted into my eyeholes
OccultSage#3875: Let's not talk about real numbers.
Maximum Limelihood Estimator#8915: I refuse to accept the limitations of the 3d world. Give me more
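(I don't know of one library that does exactly this, but a minimal matplotlib sketch of the idea — random data as a stand-in for whatever is actually being plotted: x/y/z carry three dimensions, RGB color the other three, and marker size would buy a seventh.)
```
import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(500, 6)  # placeholder 6-dimensional data

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(data[:, 0], data[:, 1], data[:, 2],  # dims 1-3: position
           c=data[:, 3:6],                      # dims 4-6: RGB color
           s=30)
ax.set_xlabel("dim 1"); ax.set_ylabel("dim 2"); ax.set_zlabel("dim 3")
plt.show()
```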
Some Point Process#3793: Regardless of the implementation i still like this visualization of 3-sat (seems easily extrapolable to multiple dimensions, tho a planar graph problem seems to be implied) https://cdn.discordapp.com/attachments/729741769738158194/1100278589569630209/image.png
Some Point Process#3793: (<https://link.springer.com/content/pdf/10.1038/s41598-020-76666-2.pdf>)
Some Point Process#3793: but I think they were also showing 3sat optim. as a neural network type thing (at least for a related paper)
Some Point Process#3793: at least it shows more graph structure s.t. you're finding the boolean value assignments to the inner circles as the free variables (where e.g. dashed lines could mean negated, and solid unnegated variables, for each of the clauses/gates surrounding the outer circle)
Some Point Process#3793: (but from that diagram, i think you can see that, since each clause (each gate on the outer circle) can only have "connections" (dashed or solid lines) to exactly 3 inner circles, then that maybe can allow you to align the outer and inner gates/nodes to organize the problem better)
Emad#9608: I think we only trained another 100b parameters, think to 600b probably makes sense
Emad#9608: happy to release, what size?
Emad#9608: I'd agree with this tbh
Emad#9608: wait until next release with code and stuff, looking much better, will release regularly
kd90138#9368: i havent done evaluations to tell either way, and hopefully if there are any issues with the models, they could be rectified
kd90138#9368: but even if that is not the case i hope to see records and reports on it so we can learn from it
StellaAthena#3530: I’m 99% sure you mean “tokens” not “parameters” here
Sphinx#2092: But leaving 1% for pure hype purposes?
OccultSage#3875: I'll take 600b-800b. 🙂 How far did you take it, @guac?
TastyBucketOfRice#8796: They're using the gpt-neox Summit port that I've been working to optimize over the last couple months. We built the initial configs together, debugged a few issues, then they kicked off the final training runs and have been monitoring.
StellaAthena#3530: More like 1% Emad is drunk and forgot what the word “parameter” meant xD
uwu1#4864: eleutherai/pythia-1T-deduped
OccultSage#3875: Yesssss ... 🙂
uwu1#4864: just need to collect 18.5T more tokens nbd
Ravna#1831: download the whole youtube and treat each frame as a bunch of tokens, even 1000T is possible
OccultSage#3875: :diesofcringe: Please, no, Reddit was bad enough.
synquid#7193: youtube transcriptions are probably pretty good though
synquid#7193: not the frames lol
Hyperion#0575: Whisper. On. Youtube.
Ravna#1831: a language model trained that way would say "uh" and "um" and "you know" and "like" for every other 5 words
synquid#7193: turing test passed
uwu1#4864: https://twitter.com/jonathanfly/status/1650001584485552130
artem9k#7593: got plugin access before gpt4 access :/
artem9k#7593: agi is here https://cdn.discordapp.com/attachments/729741769738158194/1100438507878613062/Screen_Shot_2023-04-25_at_9.08.23_AM.png
JDC#0128: I heard someone say that one of the reasons that OpenAI developed Whisper was to use all the text from audio in podcasts and YouTube. Idk if that was motivation, but it's certainly useful for that.
jrowe#5371: Convert verbal content to text, do a curation pass and nuke the uh um hmm and tag the speakers, fix the transcript mistakes, maybe tag it with ssml or tonal nuance?, then you've got a nice conversational or presentational dataset
jrowe#5371: That's a huge amount of words that ostensibly make sense, lol
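(A toy sketch of the first two steps of that pipeline, using the openai-whisper package — the audio path and filler list are placeholders, and real curation such as speaker tagging, transcript fixes, and SSML would need proper tooling on top.)
```
# Transcribe one file with Whisper, then strip the most common fillers.
import re
import whisper  # the openai-whisper package

model = whisper.load_model("base")
text = model.transcribe("episode_001.mp3")["text"]  # placeholder path

FILLERS = re.compile(r"\b(uh|um|erm|you know)\b[, ]*", flags=re.IGNORECASE)
cleaned = FILLERS.sub("", text)
print(cleaned[:500])
```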
skymoo#6527: Which is the current best eleuther model?
Orz#3023: "best" is subjective
But a model with the highest number of parameters released by EleutherAI is probably GPT-NeoX-20B
skymoo#6527: do you know how I could do this but make sure it stops after that one line? https://cdn.discordapp.com/attachments/729741769738158194/1100456357804654642/how.png
skymoo#6527: https://huggingface.co/EleutherAI/gpt-neox-20b?text=We+translate+natural+language+into+IRC+bot+commands.+Your+commands+are%0A%0AIn%3A+%3Cfoobar%3E+bot%2C+what+time+is+it%3F%0AOut%3A+%2Ftime%0A%0AIn%3A+%3Cblarf%3E+bot%2C+Remind+me+about+the+space+launch+in+15+mins%0AOut%3A+%2Freminder+blarf+15m+%22space+launch%22%0A%0AIn%3A+%3Czozzle%3E+bot%2C+I+would+like+a+reminder+please+in+3+hours+when+my+slow+cook+is+done%3F%0AOut%3A
StellaAthena#3530: You can't
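(That's for the hosted inference widget/API. Running the model locally with transformers, you can stop at the first newline with a custom StoppingCriteria — a sketch; the 20B checkpoint needs a lot of VRAM, and the same pattern works with any causal LM.)
```
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          StoppingCriteria, StoppingCriteriaList)

model_name = "EleutherAI/gpt-neox-20b"  # any causal LM works for the pattern
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto")  # needs accelerate

class StopOnNewline(StoppingCriteria):
    """Stop as soon as the most recently generated token contains a newline."""
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer
    def __call__(self, input_ids, scores, **kwargs):
        return "\n" in self.tokenizer.decode(input_ids[0, -1:])

prompt = "We translate natural language into IRC bot commands. ...\nOut:"  # your few-shot prompt
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32,
                     stopping_criteria=StoppingCriteriaList([StopOnNewline(tok)]))
print(tok.decode(out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```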
skymoo#6527: :goosegirl:
Mike M.#5944: hey i tried yesterday this model https://huggingface.co/h2oai/h2ogpt-oasst1-256-20b , how can i use it with with <human> <bot>?
haru#1367: i thought about doing this but it'd take forever 😭
haru#1367: although, tokenizing the pile quickly would be a cool way to brag about your tokenizer lol
skymoo#6527: my short prompt worked and the longer prompt worked in the hugging face test out box, but it doesn't work over API 😦
skymoo#6527: > In: I would like a reminder please in 3 hours when my slow cook is done?\nOut: \nSTOP\n\n\n\nIn: I would like a reminder in 3 hours and 24 mins\n
skymoo#6527: didnt give anything for Out:
bread browser#3870: YouTube comments are good
bread browser#3870: Trained a model on 180k of them
skymoo#6527: Is there a discord bot that you can invoke commands via natural language? implemented using an LLM?
Emad#9608: 🫣 yeah we added another 100b parameters on increased context length 4096, did ok similar outputs but larger window, can ask them to do it for another 200-400b
Emad#9608: are there any studies of multi epoch training?
Emad#9608: funnily we are trying for 1t tokens on 1b parameters now, 300b in...
StellaAthena#3530: We've never seen any problems with doing multiple epochs on the Pile, though we haven't tried like 10+
KublaiKhan1#6681: There's plenty on small datasets
Emad#9608: yeah but is it useful
Emad#9608: :thinKat:
Emad#9608: DeepFloyd was 10 epochs in the end :Eyes:
KublaiKhan1#6681: Well that's not public so I can't say if it hurt or not
Emad#9608: will be shortly
Emad#9608: image model tho
Emad#9608: lets try pythia 4096 context length to 600b and release that, a few hundred b more parameters
KublaiKhan1#6681: This is an interesting question, does overtraining on larger datasets lead to a decrease in generalization and more memorization?
Probably yes?
Fessus#9563: It certainly can
KublaiKhan1#6681: But ofc the question is "at what point is it overtraining"
skymoo#6527: @BoneAmputee can you show me it please?
KublaiKhan1#6681: Which is some relationship between model capacity, data available, and task complexity
Fessus#9563: If you're willing to blow some extra compute you can train models using an adaptive complexity scheme which will automatically prevent overfitting and reduce the complexity of the model to the maximum with generalizes optimally
Ravna#1831: codegeex trained 5+ epochs on ~150B tokens of code, 850B tokens in total
skymoo#6527: :misaligned:
BoneAmputee#8363: well I assume there are lots already. I'm not sure of specific ones other than my own implementation, which is not currently public, and the computer that was running it for this server seems MIA right now, but you can ask gpt-4 for working discord bot code as long as you remind it to declare intents :thinkies: and there's things like LangChain to do the prompting if you need it
Dashiell#8739: part of me wonders if "over-trained" models leads to more interpretability
Dashiell#8739: initial circuits work was done with vision models where it's common practice to train on many many epochs
skymoo#6527: want to show me your one?
Dashiell#8739: and in toy grokking models generalization happens way before the super clean circuits develop
Dashiell#8739: (this musing brought to you by a comment @nsaphra made to me last week)
Ravna#1831: grokking is a 100+ epoch phenomenon though
Ravna#1831: 5-10 epochs might have the worst of both worlds
Fessus#9563: Hitting the models with something stronger than AdamW to limit complexity tends to result in models which generalize well regardless of how much you overtrain them in some of my very small scale experiments. Testing error never starts going back up even after like dozens of epochs on a tiny dataset.
Fessus#9563: no idea if it works in a larger context
OccultSage#3875: not as useful as other augmentation techniques. but @kurumuz may kill me if I disclose them.
Dashiell#8739: I guess there are two empirical questions:
1) is there a real cost in "overfitting" / loss of generalization for large language models trained on many epochs
2) total speculation on my part: would there be more sparse and interpretable circuits apparent in a large language model trained on a few hundred epochs
Dashiell#8739: certainly (2) is such wild speculation I am in no way proposing anyone spend the $$ on finding out
Dashiell#8739: but it's been kicking around in my head
Dashiell#8739: the claim in (2) is definitely not that the model would generalize _better_, but just that looking at the internals would be easier
hails#6601: and also, is there a distinction between trained on many epochs and "overtrained" in terms of tokens seen
Sphinx#2092: WHat does "overtrained" mean?
Fessus#9563: LLMs as they currently exist are pretty inefficient. Ideally you'd want one with the exact maximum complexity such that if you trained it for an infinite number of epochs on a given finite dataset the testing error would not regress after a certain point. We don't/can't do that and as a result LLMs are dramatically more complex than they really need to be (for inference at least)
skymoo#6527: I guess not
hails#6601: Not sure, I've been curious about whether training for more tokens means "cleaner" representations in any sense but idk the best way to operationalize this
Dashiell#8739: Do all the ELK and grammatical circuit stuff @Nora Belrose and everyone is already doing and see / hope there's less "noise" in the results?
BoneAmputee#8363: my bot is still available in a different discord which you can find in my profile, but I don't normally keep the gpt-4 conversational stuff turned on due to the cost. at the moment, you'll be talking with the ChatGPT API unless specifically evoking gpt-4 with a slash command, which has no tools associated with it
OccultSage#3875: that would be a neat model to look at finetuning.
skymoo#6527: il join
Hyperion#0575: Usually just "trained for more tokens than Chinchilla scaling law would say is compute optimal" AFAIK
It's a bit funny because this definition means you always want to be overtrained basically
Sphinx#2092: Yeah, that's kind of my point. It seems like a bad term.
OccultSage#3875: Er, for downstream use, I've found that training past Chinchilla optimal gives better results, if the tokens are unique.
Hyperion#0575: I'm not sure there is a def'n that makes more sense for the word
Since either you're training past convergence, in which case you can just say overfit? Or you're still seeing improvements in loss, in which case "overtrained" is a bit misleading
jrowe#5371: It would be neat to see the shape changes during training in a selection of concepts using the geometric stuff used by Google- <https://cloud.google.com/blog/transform/can-generative-ai-help-humans-understand-animals-earth-species-project-conservation>
Hyperion#0575: Yeah that's what I'm saying, training past Chinchilla optimal is usually good
jrowe#5371: Especially if you can "over"train past a good representation into a bad one
OccultSage#3875: Any large amount of Reddit is apparently bad. 😉
Hyperion#0575: I still don't get why Reddit is so bad
Most of it is at least coherent English sentences, even if it's looser with grammar than say, StackOverflow
jrowe#5371: Lol
synquid#7193: how well was it preprocessed? reddit has more bots than anywhere else I know
jrowe#5371: You can get really good snr from some reddit , but I can see the biggest subs as being mostly noise
StellaAthena#3530: I have never seen evidence of negative impacts from “overtraining” in the currently dominant LLM pretraining regimes at scale.
OccultSage#3875: Doing substitution of all usernames with User1, User2, User3 also doesn't help. Need better anonymization with realistic seeming tags.
Hyperion#0575: Yeah I can see why that would be bad
But if processed appropriately it seems decent
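(A small sketch of the "realistic seeming tags" idea from above: map each distinct username to a consistent fake handle rather than User1/User2/…; Faker is just one convenient way to do it.)
```
from faker import Faker

fake = Faker()
Faker.seed(0)          # deterministic handles across runs
handle_map = {}

def anonymize(username: str) -> str:
    """Return a stable, realistic-looking replacement for a username."""
    if username not in handle_map:
        handle_map[username] = fake.user_name()
    return handle_map[username]

print(anonymize("rimjob_steve"), anonymize("Schnoodle_doo"), anonymize("rimjob_steve"))
```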
Hyperion#0575: Maybe you want to do something like the reward conditional pretraining paper, with upvotes as reward
Kal'tsit#3130: what's the ai by the fudan university called "MOSS"?
is it based on llama?
I dont see any papers on it
jrowe#5371: I've been churning over a way to capture the semantics of names
OccultSage#3875: I've finetuned StableLM alpha 3b and 7b for 35+ days of A40 GPU hours, and it still can't maintain coherency past 2 or 3 sentences.
jrowe#5371: rimjob_steve has a context all his own
Hyperion#0575: Is that purely because of the User tags problem do you think?
OccultSage#3875: No, the amount of short-form comment-style content in isolation.
jrowe#5371: Other, more civil usernames are similarly important and recognized
jrowe#5371: Schnoodle_doo for example - you lose the references to "a fresh schnoodle" or whatnot by substitution
KublaiKhan1#6681: @Sphinx historically, and the definition I'd use "metrics on validation set have started to decrease"
KublaiKhan1#6681: With some allowance for variance
bread browser#3870: Can you fine-tune it on yahoo answer data
OccultSage#3875: At 860 compute hours of finetuning - it seems to stay on track for 2-3 sentences and then veers wildly off.
```
RESPONSE: Suddenly, the sky rumbled again. It was very loud this time and I flinched as it rolled to a stop less than ten feet away from me in front of an empty parking lot where three cars had been abandoned by two men on horses who were also nowhere near there—and one which wasn't even moving because they'd taken off after what must have seemed like five minutes but could only be more or perhaps just over thirty-five years ago (though that old place didna look too good for anyone alive today), when someone else's horse began rolling toward us at full gallop without so much noise except steam and smoke rising up all around them until eventually turning into a cloudburst; then another sound reached my ears otherworldly sounds being neither wind nor rain such is normally impossible although with nothing growing down here either way would've made things easier not knowing how dangerous whatever might happen may actually still go ahead rather hoping against hope…
```
```
RESPONSE: The mercurial and beautiful mage turned down the road as though her destination was not of so great import. She set off towards a large building in whose upper stories were a pair or two very handsome young men with whom she had spent some time before, but who now took their departure into an abode far less genteel than any at that date; for on this occasion they looked particularly to have seen little enough other business under what different rules should go by which even more unescortedly did pass through the same establishment—she having no doubt done there all those things whereof I am speaking: whether from curiosity concerning him (as most girls might) one may say also how his eyes brightened perceptibly when he saw such company together after sunset upon many occasions? In spite certainly it is true whenever women do look straight up out over tables during the night 'tis generally best if each be surer still hath lights about thee till well dawning,'
When once again Jolena walked sidewise along King
```
OccultSage#3875: ```
RESPONSE: [ Author: Hinata Watarou; Title, The Dark One Who Walks Through Shadow and Light]
The world is a sad place where good never triumphs. This tale of tragedy might be the story about all humanity as we know it – but to its people who live in such darkness they will always come this dark thing that lurked inside them from childhood on... How can you forget? There was no way out for us back then! We were trapped by our own desires like wild beasts at night when humans cannot sleep due overstimulation or their internal demons telling lies around every corner just beyond perception into what looks too real– You really are there aren't ya?" As she whispered these words deep behind closed doors her eyes revealed themselves even deeper shades than usual within an ocean full…"Of course I'm not here". Her voice carried through walls unlike those belonging only his mouth which could travel faster with sounds sent straight up your ass while also passing along many things unseen throughout various worlds without being
```
```
RESPONSE: The mercurial and beautiful woman's lips formed the words, "Oh?"
"I'm being blackmailed," I stated simply. The way her eyes traveled up to mine was like that of a hungry snake eyeing its prey for just such an opportune time as this is currently occurring... or perhaps those were his feelings about how our conversation had been going in regards my future endeavors with Drakalai? Either method brought heat surging through their tiny little bodies! They seemed perfectly capable at turning on any intruder before they could get around whatever it took do make me think however he might feel inclined towards trying anything right now; not exactly subtle when you consider what we're talking here but definitely quite clever none-the less if she used them once more maybe then—or hell who knows why else would one endlessly try everything no matter where attempting different methods until something works out so beautifully anyway well played indeed by your lovely lady!" Now let us stop there because despite all these thoughts running round inside everyone
```
OccultSage#3875: Hmm, I could. That's not really my downstream use-case interest, though.
OccultSage#3875: Pretraining? I've definitely seen overfitting when it comes to finetuning.
StellaAthena#3530: Yes pretraining
synquid#7193: overtraining is totally the wrong word to use when you pretrain for 1 epoch
synquid#7193: you're just ultra-chinchilla
bread browser#3870: What is your use case? (Just wondering.)
OccultSage#3875: Literary, creative writing. 🙂
bread browser#3870: Why would you want to use stablelm for that?
OccultSage#3875: Because I thought it was Pythia + more. Turns out that it was Pythia--.
Louis#0144: we have a new model coming out this week
Louis#0144: it isnt a new stablelm tho
Louis#0144: not yet
Louis#0144: (thats soon too tho)
bread browser#3870: Is it good?
Louis#0144: youve tried it
Louis#0144: lol
Louis#0144: did you like it
bread browser#3870: You said new model, that could mean any model
Louis#0144: :berk:
bread browser#3870: Yes
Louis#0144: then its good
bread browser#3870: It is at the level of bard, I like bard so I have no problems with it
OccultSage#3875: I don't believe you. You told me that StableLM wasn't fucked.
Drexler#4006: Yeah ngl it seems pretty bad. I haven't compared to other 7b models yet but.
rallio#9917: I think it used to mean a training data loss lower than an eval data loss
rallio#9917: given that basically doesnt happen so long as pretraining data >>>model weights and eval is randomly sampled from train
rallio#9917: I did indicate an effect that could be a sign of overtraining with pythia 70m and pythia 160m but not sure anyone explained why it happened yet
rallio#9917: https://discord.com/channels/729741769192767510/785968841301426216/1095702765801578555
rallio#9917: 70 and 160 clearly start showing degrading performance after a certain amount of pretrain
besiktas#7463: is there any consensus about which of the llama type models to go with? i.e. tloen/alpaca-lora, ZrrSkywalker/LLaMA-Adapter, tatsu-lab/stanford_alpaca
besiktas#7463: am hoping to use it with new modalities if that changes the answer
rallio#9917: some people say vicuna is best, all those models are basically the same though
besiktas#7463: yeah i wasnt even sure if there will be much of a difference, just thought it would be worth asking
besiktas#7463: thanks tho @rallio
rallio#9917: just run the biggest one your GPU can fit
rallio#9917: it will be censored some cause it was created from OpenAI
besiktas#7463: i have access to 2xNVIDIA TITAN RTX, should that be adequate?
rallio#9917: is it 24 gig each
besiktas#7463: yeah
rallio#9917: I think you can run the 33b in 8 bit model parallel depends what you want to do though
besiktas#7463: otherwise have to use a cluster that is shared and id have to learn how to set everything up with slurm/etc
besiktas#7463: okay ill look into 33b one thanks
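(A sketch of that suggestion: load a ~30B LLaMA-family checkpoint in 8-bit, sharded across the two 24 GB cards via accelerate's device_map. The local model path and the Alpaca-style prompt are placeholders.)
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-33b-hf"  # placeholder for your local checkpoint
tok = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_8bit=True,    # requires bitsandbytes
    device_map="auto",    # shards layers across both 24 GB TITAN RTXs
)

prompt = "### Instruction:\nExplain model parallelism in one sentence.\n\n### Response:\n"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```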
besiktas#7463: are there like multiple versions of original llama weights floating around? i have a folder with some but the hash is not the same as i see in some github issue on the original repo and not entirely sure what the weights are otherwise
naclbbr#9203: Afaik dialogue-heavy datasets generally lead to higher perplexity unless it is QA or something more generic. I think StableLM's issue is masking usernames == weird distribution of tokens seen, but given that GitHub (which almost automatically lowers training loss the more you put into datasets) is known to generally improve the model's logic, the implication might be that pre-training with highly non-formulaic datasets could cause it to fail to converge at all (as opposed to pretraining the model with something formulaic, then finetuning with dialogues after that). I'm curious how the current StableLM's training loss curve looked.
Maximum Limelihood Estimator#8915: Wait no I can totally do better than this. I can vary size and squircliness, by using the unit circle with L^p norm where p is the value of some variable. So that gets me to 8
jrowe#5371: Give yourself pitch and volume over a cursor, 10 dimensions
kd90138#9368: That's actually a good idea
omglumpoff#3487: any tips on understanding `lm-evaluation-harness` results? I'm running some of the metrics on LLaMA and comparing them to what's in the paper and they seem... pretty different than what's reported?
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.3823|± |0.0142|
| | |acc_norm|0.4147|± |0.0144|
|arc_easy | 0|acc |0.6734|± |0.0096|
| | |acc_norm|0.5253|± |0.0102|
```
the paper reports 47.6 for challenge and 72.8 for easy
omglumpoff#3487: openbookqa is even worse:
```
|openbookqa | 0|acc |0.2820|± |0.0201|
| | |acc_norm|0.4240|± |0.0221|
```
paper reports 57.2
StellaAthena#3530: @omglumpoff Unfortunately we don’t have any info about how they did their evaluations AFAIK. The exact scores tend to be quite sensitive to minor implementation details, which is part of why we built a unified evaluation framework we can run all models through.
At its core, the answer is “they have something different from us” but I couldn’t tell you what it is. This is also why we don’t copy scores from other papers but rerun them ourselves.
StellaAthena#3530: Our implementation has GPT-3 scoring 43% on ARC-c, which is also much lower than the number in the LLaMA paper
omglumpoff#3487: yeah makes sense -- it's just interesting that everything is lower, but I suppose authors are motivated to uh... tweak the metric implementations until it gives them the numbers they want 😂
Gifted Gummy Bee#3277: Kek
kd90138#9368: Sometimes it's not even malice
omglumpoff#3487: yeah, more just like, you keep "fixing" things until you're happy with what you see, obviously if "number goes up" you're not going to complain
omglumpoff#3487: ty for the sense check. want to actually measure how context-extended llama performs on some of the long-context benchmarks and figured I should repro the paper results first
StellaAthena#3530: It’s not even that
StellaAthena#3530: When we switched from evaluating the *answer generations* to the *letter corresponding to them* for MMLU, perf changed massively.
StellaAthena#3530: Our framing actually looks different from what’s in the ARC paper
StellaAthena#3530: Actually
StellaAthena#3530: I don’t see the exact formatting in the paper
StellaAthena#3530: :thonk:
StellaAthena#3530: It says it uses this:
> Why can steam be used to cook food? (A) Steam does work on objects. (B) Steam is a form of water. (C) Steam can transfer heat to cooler objects. (D) Steam is able to move through small spaces.
StellaAthena#3530: But… that’s clearly false
StellaAthena#3530: Like, that’s not what is actually getting fed into the LLM
omglumpoff#3487: so at best the evaluation metrics in papers need to be taken only in context with each other
omglumpoff#3487: rather than the absolute scores being comparable to other models in any real sense
StellaAthena#3530: What we do is feed in
`Question: Why can steam be used to cook food?\nAnswer:` and then evaluate the probability of each of
`Steam does work on objects.`
`Steam is a form of water.`
etc and select as the answer the generation with the highest assigned likelihood.
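(A stripped-down sketch of that scoring scheme, with a small Pythia model standing in; the real harness adds details like the length-normalized variant behind acc_norm.)
```
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m").eval()

def continuation_logprob(context: str, continuation: str) -> float:
    """Sum of log-probs the model assigns to `continuation` given `context`."""
    ctx = tok(context, return_tensors="pt").input_ids
    cont = tok(continuation, return_tensors="pt", add_special_tokens=False).input_ids
    ids = torch.cat([ctx, cont], dim=1)
    with torch.no_grad():
        logprobs = F.log_softmax(model(ids).logits[0, :-1], dim=-1)
    # positions ctx_len-1 .. T-2 are the ones that predict the continuation tokens
    return sum(logprobs[pos, ids[0, pos + 1]].item()
               for pos in range(ctx.shape[1] - 1, ids.shape[1] - 1))

context = "Question: Why can steam be used to cook food?\nAnswer:"
choices = [" Steam does work on objects.",
           " Steam is a form of water.",
           " Steam can transfer heat to cooler objects.",
           " Steam is able to move through small spaces."]
print(max(choices, key=lambda c: continuation_logprob(context, c)))
```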
StellaAthena#3530: Yes, this 100%
destrucules#7325: @StellaAthena have you been involved in any sort of mechanistic interpretability work for LLMs? What's your take on that in general?
acacia#0478: GPT-NeoX-20B spotted
Palantir is building a chat LLM interface for war
https://youtu.be/XEM5qz__HOU
paint_and_ink#5778: morning Yall ... is there a tool which I can use to animate things using ai techniques ... it doesnt matter if it's for vector or pixel graphics, thx in advance!
Gifted Gummy Bee#3277: “Sorry, as a large language model trained by OpenAI, I have been trained to kill you. Please stand still for your extermination”
kd90138#9368: that's the risk in apache or other open licenses.
the alternative is open-rails or even worse open/not open LLAMA
dont even begin with openai
Sway#0727: hello
sekstini#0069: wow, this is the first model other than ChatGPT that has managed to give me exactly 7 digits of pi
technium#5048: Its the most impressive llama fine tune Ive seen
sekstini#0069: demo is really slow though; but I'll download the diff for sure
technium#5048: There's a merged model and gptq and ggml version on hf too
makya#2148: Yep there's now another llama fine tune in Town now boiis
sekstini#0069: lol, people out here giving exactly *zero* fucks about TOS
makya#2148: TOS, more like... Give them a toss.
technium#5048: Oh oops.. I thought I was in Offtopic lol
technium#5048: Every time I load discord in a new env it sends me back here and I make this mistake 😄
bread browser#3870: So did i
Ryu#0274: https://tenor.com/view/robot-congratulations-you-are-rescued-please-gif-17932648
_H#8715: indeed, I used to run a 1B model over the whole pilev2 and reddit dialogues have the worst perplexity overall (highest median).
But reddit posts were like second worst, and even ubuntu irc (also dialogue heavy) was better.
Training on reddit heavy dataset does not seem to affect convergence though. But the converged training loss is just a lot higher.
TheKing#6118: I mean, enemy countries will just denounce whatever license you choose.
So it's just a matter of whether military applications in allied countries are still net bad. (And they might ignore it as well anyways.)
Sway#0727: IF soon
synquid#7193: as in now
Sway#0727: weights dropped?
synquid#7193: guess not rip
synquid#7193: still 404
Ryu#0274: https://github.com/deep-floyd/IF https://huggingface.co/blog/if
Fleetwood#1949: beaten by @Chad Kensington
LDJ#2946: Replit-finetune-v1-3b
Open-source INCLUDING for commercial purposes.
apparently outperforms openAI codex at nearly every coding task, despite Replit model only being 3B parameters.
https://twitter.com/swyx/status/1650989632413401089?s=46&t=4d3MA4O6HJWn28w81Mp-1A
Miles#4396: Wait you have to be logged into HF locally for it to work? Can someone confirm whether it's phoning home to report all usage?
KublaiKhan1#6681: That's to download the weights
Miles#4396: Oh I see I'm used to just passing my token in the wget request header to download models from HF
BoneAmputee#8363: were they ever up? or is that why I've seen no announcement about its release :goose15:
Miles#4396: The HF repo was up for a short while after the link got posted here
Emad#9608: Be a couple days then up like the SD release
Emad#9608: finishing blogs
Miles#4396: Will development continue on Stable Diffusion for a long time or is IF expected to more or less take over SD's position?
Emad#9608: both and others too
Emad#9608: SD is a more efficient architecture
Emad#9608: IF is a new type of model for open release so we need to figure it out better
Emad#9608: do lots of research
Emad#9608: SD team (Robin, Andreas, Tim) did https://research.nvidia.com/labs/toronto-ai/VideoLDM/samples.html so other fun things coming too
TACOX#1746: Hi, does someone know how tiktoken can improve my project and if it's really worth it?
kd90138#9368: What are you looking for in tiktoken?
kd90138#9368: It's not a full tokenizer framework
TACOX#1746: i want to use tiktoken to reduce the amount of tokens in the system role (i'm using the openai api and the system role is very specific and i need to reduce the size)
TACOX#1746: but then what tokenizer should i use
sekstini#0069: You can't change the tokenization used in the API, but you can use tiktoken to check how many tokens your system prompt requires.
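(For example — the model name here is just an illustration, pick the one you actually call:)
```
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
system_prompt = "You are a terse assistant. Answer in one sentence."  # placeholder
print(len(enc.encode(system_prompt)), "tokens")
```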
TACOX#1746: then i cant reduce the size of the system role when i use the api of openai?
sekstini#0069: I mean, you send the system prompt as part of your query. If it's too large you can try to make it shorter.
TACOX#1746: okay thanks. on the other hand, im trying to use the memory system that google and Stanford used on smallville (i'm referring to this article: https://arxiv.org/pdf/2304.03442.pdf ), does someone know if there is some other memory system that is better than the one used on
TACOX#1746: smallville
TACOX#1746: and if someone has an article, or knows how it works and wants to explain to me how to implement it, I would appreciate it. the article is very ambiguous and just explains the architecture, not the steps for turning the text into something that can be scored to select the most important context and information to send along with the user input. I know the architecture could be enough to develop it, but the truth is i'm really new to this world. if someone wants to help me I would appreciate it.
alstroemeria313#1694: hm i may want to be able to sample from the generalized extreme value distribution
alstroemeria313#1694: ...
alstroemeria313#1694: did copilot just write its cdf for me
alstroemeria313#1694: i need its icdf though
alstroemeria313#1694: ...ok copilot how do i know how the icdf you just wrote is correct
alstroemeria313#1694: well, by checking it against the cdf...
alstroemeria313#1694: i think it's wrong
alstroemeria313#1694: mathematica time
alstroemeria313#1694: i think i got it
alstroemeria313#1694: with mathematica
alstroemeria313#1694: "what if i do gumbel-max for llm sampling but i use generalized extreme value, which gumbel is a special case of, instead"
alstroemeria313#1694: (the scale parameter for gumbel does the same thing as temperature when you do gumbel-max)
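(For reference, a sketch of that idea: the GEV inverse CDF, which collapses to the Gumbel one at shape ξ = 0, plugged into max-sampling over a logit vector. Parameter values here are arbitrary.)
```
import torch

def gev_icdf(u: torch.Tensor, loc: float = 0.0, scale: float = 1.0, xi: float = 0.0):
    """Inverse CDF of the generalized extreme value distribution."""
    if xi == 0.0:  # Gumbel limit
        return loc - scale * torch.log(-torch.log(u))
    return loc + scale * ((-torch.log(u)) ** (-xi) - 1.0) / xi

def gev_max_sample(logits: torch.Tensor, scale: float = 1.0, xi: float = 0.0) -> int:
    """Gumbel-max-style sampling with GEV noise; xi=0 is ordinary temperature=scale sampling."""
    u = torch.rand_like(logits).clamp_(1e-12, 1 - 1e-12)
    return int(torch.argmax(logits + gev_icdf(u, scale=scale, xi=xi)))

logits = torch.randn(50_000)  # stand-in for an LLM's next-token logits
token_id = gev_max_sample(logits, scale=0.8, xi=0.1)
print(token_id)
```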
cognomen#6297: really weird how google is pursuing LLM competition by merging and firing
cognomen#6297: they had a fine product in LaMDA, all they had to do was remove whatever artificial barrier they had internally to productizing it
cognomen#6297: but instead they're maintaining their useless hurdles and restructuring the research arms
cognomen#6297: imagen also had an enormous advantage but they couldn't be bothered
Gifted Gummy Bee#3277: inference costs?
cognomen#6297: they're google, they can fix that or eat the costs as loss leader
cognomen#6297: plus enterprise would pay more
cognomen#6297: especially for custom models
Gifted Gummy Bee#3277: 🤷♂️
Gifted Gummy Bee#3277: my guess is bard is 40-50b params quantised, but that seems unlikely
Gifted Gummy Bee#3277: Second guess might have been safety
cognomen#6297: what is even going on at that company, given its combination mentality of "all hands on deck, we need to compete or die right now" and "we have years of advantage on competition with this model but we can't actually release that because X, Y, Z..."
Gifted Gummy Bee#3277: 🤷♂️
Gifted Gummy Bee#3277: Its *google*
cognomen#6297: others took those risks and showed they were worth taking
cognomen#6297: but they're still holding back
Gifted Gummy Bee#3277: :berk: maybe google has already achieved AGI
Sway#0727: more efficient as in?
kd90138#9368: does anybody have the a100 (xaxis) hidden dimension (yaxis) Tflops graph?
kd90138#9368: the one that shows efficiency by tiling effects etc
main#7610: https://twitter.com/chhillee/status/1630274804795445248
kd90138#9368: thanks this was exactly what i was looking for
Emad#9608: Less compute for similar output.
Sway#0727: Do the IF models need to do multiple steps too?
Emad#9608: yep, many more at the moment
Emad#9608: should be able to test in a day or two
Sway#0727: isnt IF state of the art, how is it similar
cognomen#6297: on the previous topic: NeRF seems to be another tech google desperately wants to both advance and get left behind in
cognomen#6297: https://xhuangcv.github.io/hdr-nerf/ (2022, see also LIRF and NeuCam)
cognomen#6297: google could clearly do *something* to leverage years of advantage in this field, but so far there's no attempt to do so
cognomen#6297: fair confidence something NeRF related is going to be used in future generative models or pipelines
cognomen#6297: probably not theirs
kd90138#9368: Nvidia is doing a lot regarding nerf
kd90138#9368: Their optical flow accelerators, currently used in dlss 3, also have functions for nerf
ac#1874: Has OpenAI Gym been ported to any other languages? I wanted to build on top of it to make some cool web demos but don't feel like implementing the environments myself
CarsonPoole#0640: nerfs have been weirdly under-productized
gambo#3672: Is it possible to use AutoGPT as a discord bot? Ive found how to use main ChatGPT as a bot, but I want to use AutoGPT. I would assume its the same process.
I have 0 coding experience, but I want to learn and I want to deep dive into a project like this if anyone can help or give me a resource to start.
StellaAthena#3530: Hi! This is a research-focused discord server and is not a good place to get introductory help or advice about how to develop apps. Some of the channels linked to in #communities may be a better fit.
gambo#3672: Thank you! I joined one 🙂
martinshkreli#2402: gm
Hyperion#0575: hey
Germanita#1530: Good morning drug price hikers and crypto scammers
Skyblaze#8532: gm. Glad to see you're here! Can you give me more insight about your logp calculator?
Brouz#6768: what the hell happened to this server
Skyblaze#8532: We're in the era of paradigm-shifts!
bread browser#3870: *gooses
𓅬 gabriel_syme 𓅬#3220: Wait, what's the context what am I missing?
StellaAthena#3530: <https://wikipedia.org/wiki/Martin_Shkreli>
𓅬 gabriel_syme 𓅬#3220: Oh shit
KublaiKhan1#6681: Yes
jrowe#5371: Lol
jrowe#5371: That's his legit "just got out of prison and a halfway house" twitch and YouTube, I think
Skyblaze#8532: Can we restrain ourselves until I get more insight about his logp calculator? I want to know if it's w.r.t. the chemical partition coefficient, or something else. The repo is remarkably undocumented.
kurumuz#5695: is lambada PPL in eval harness tokenizer specific?
Skyblaze#8532: I've been waiting patiently and I'm beginning to think he takes really long naps.
Sway#0727: wait is that really him
Sway#0727: lmfao
Sway#0727: he got the youtube channel linked
genetyx8#7543:
Skyblaze#8532: Fuck, you didn't say my mum was visiting 👀
genetyx8#7543:
Skyblaze#8532: Now you're just projecting.
genetyx8#7543:
Skyblaze#8532: https://tenor.com/view/monty-python-life-of-brian-pfj-immediate-discussion-meeting-gif-23947897
ilovescience#3282: yes
OccultSage#3875: Maybe ask in #lm-thunderdome?
StellaAthena#3530: From AI Sweden:
>>> We are updating the GPT-SW3 models with instruction tuned variants of the models up to 20B.
For more information about instruction tuned models: <https://arxiv.org/abs/2203.02155>
https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct
https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct
https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct
https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct
https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct
If you don't have access yet you can find the application form here: https://www.ai.se/en/gpt-sw3
The models are trained on 🇸🇪, 🇩🇰, 🇳🇴, 🇮🇸 and English. For those that start to experiment with the models we are very happy if you share your code for running them and your feedback and knowledge here in the discord in the gpt-sw3 channel.
We also have an updated 6.7b model named **gpt-sw3-6.7b-v2** trained on more data of a different distribution. With the same tokenizer as the 126m, 356m, 1.3b, 6.7b and 20b models.
Germanita#1530: 404 on the huggingface page heh
Germanita#1530: Oh you need to apply
sekstini#0069: Yup, I just sent in my application 🤞
Germanita#1530: Sucks that openAI seemingly has a trademark on the term GPT
Skyblaze#8532: Can you set up an account here for me? I hate registration forms.
synquid#7193: https://laion.ai/notes/letter-to-the-eu-parliament/
spirit-from-germany#1488: https://twitter.com/laion_ai/status/1651998213501591552
Hyperion#0575: Nice, this seems worthwhile to try
synquid#7193: sorry I even beat you 🥲
StellaAthena#3530: Kinda surprised they didn't ask us to join it
Emad#9608: need to be more european stella
Emad#9608: I recommend a beret
synquid#7193: honhon
Chad Kensington#9564: viola france
artem9k#7593: now we just need elon in off topic
synquid#7193: he is lurking
Ryu#0274: @elonmusk
Ryu#0274: xD
synquid#7193: >real
spirit-from-germany#1488: Yes, we wanted to keep it EU
synquid#7193: how big do you have to be to join 😎
synquid#7193: I'm speaking to the danish government later this month, wouldn't mind putting in a word
lordvader31#1368: Hey is anyone working on knowledge graph embeddings right now ? I am working on a project for knowledge graphs
lordvader31#1368: And would like some help in this stuff. Need anyone with python ML experience
AI_WAIFU#2844: @Louis
Louis#0144: ew
lordvader31#1368: why ?
sweg#8920: hey thats my student
sweg#8920: give him knowledge graph resources
Louis#0144: I hate KGs
Louis#0144: I used to be KGpilled
kurumuz#5695: louis can't escape it
Louis#0144: Then prompt engineering did so much better than KGs
cognomen#6297: bitter lesson
sweg#8920: he wants to learn about GNNs in general i think
brubsby#7196: has there been any work to try to convert LLM artifacts into KGs?
flowpoint#7450: Wait what do those even have in common?
kurumuz#5695: KGs my beloved
kurumuz#5695: time to revive KGs
lordvader31#1368: i wanna use KGs for personal knowledge management and a representation of notes
brubsby#7196: I've long maintained that hypernym lookup has never had a great tool, and it seems well learned by LLMs
lordvader31#1368: and i want hierarchies in the KGs but current impls dont have it
lordvader31#1368: so want to build my own or augment existing ones
brubsby#7196: seems like you could create a system that teases out hypernym relations into a KG by prompting an LLM at every node, but would probably be costly/noisy
spirit-from-germany#1488: Sounds really good. if anyone knows journalists, get them to report about this issue
spirit-from-germany#1488: If the EU would heavily regulate Open Source, this would be bad
flowpoint#7450: Cmon, dont dismiss KGs entirely
sweg#8920: I'm trying to understand what you're saying
sweg#8920: What would it mean to convert artifacts into KGs?
brubsby#7196: either via, direct studying of the weights with some technique, or recovering kg relationships via prompting
brubsby#7196: e.g. option 2: https://cdn.discordapp.com/attachments/729741769738158194/1101584306868072508/image.png
sweg#8920: Oh so like recovering KGs from the LLMs internal knowledge?
brubsby#7196: we have now converted 5 kg edges from the llm artifact
sweg#8920: thats cool
brubsby#7196: yeah
sweg#8920: ive tried something similar
sweg#8920: i had it take a complex topic and turn it into a graph of the component topics id need to study and understand first
brubsby#7196: of course you could always get the edges as you need them from the LLM, but converting it completely into a KG is perhaps not a terrible idea from a cost standpoint
brubsby#7196: but due to temperature and subjectivity the data would be noisy, so weighing the edges based on their saliency or something might be useful
sweg#8920: hypernym = ?
sweg#8920: oh nvm
flowpoint#7450: To me sounds like you'd be interested in the paper "neural networks are decision trees"
Yannic kilcher also has a video on this
sweg#8920: i was googling the wrong thing
sweg#8920: i get it now haha
sweg#8920: wait yes thats actually cool
sweg#8920: @lordvader31 this is actually very relevant to your idea
brubsby#7196: interesting, i wonder if the techniques here could be applied to llms
brubsby#7196: i figured word2vec embeddings capture this to some degree and alas there is some research on it https://cs224d.stanford.edu/reports/NayakNeha.pdf
brubsby#7196: and maybe this is sota https://arxiv.org/pdf/2204.02058.pdf
artem9k#7593: use gpt4 to bootstrap cyc 2
lordvader31#1368: yeah but i also want to do some computation like querying, semantic similarity, etc from the knowledge graph
lordvader31#1368: so i think there should be a more robust system than just prompting
brubsby#7196: you'd have the KG built outside the LLM, and whenever you needed to access a related node you then prompt and update edges and weights
brubsby#7196: i'm sure a lot of the operations you'd want to do on the KG would preclude that strategy, but for my imagined use case (hypernym/hyponym/isonym lookup website) it would be relatively cost efficient
brubsby#7196: requiring 0 or 1 prompts per word
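(A rough sketch of that loop: cache hypernym edges in a networkx graph so each word costs at most one prompt. The pre-1.0 openai SDK call and gpt-3.5-turbo are stand-ins, not anything settled in the chat.)
```
import networkx as nx
import openai  # pre-1.0 openai SDK interface

kg = nx.DiGraph()

def hypernyms(word: str) -> list:
    """Return cached hypernyms, prompting the LLM only on a cache miss."""
    if word in kg:
        return list(kg.successors(word))
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"List 3 hypernyms of '{word}', comma separated, nothing else."}],
        temperature=0,
    )
    parents = [p.strip() for p in resp["choices"][0]["message"]["content"].split(",")]
    for p in parents:
        kg.add_edge(word, p, relation="hypernym")
    return parents

print(hypernyms("beagle"))  # output will vary, e.g. ['dog', 'hound', 'mammal']
```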
aman_shakesbeer#3710: I have been trying to get into some nerf based research
jrowe#5371: https://tenor.com/view/there-will-be-no-mercy-emery-kelly-lucas-alexa-and-katie-merciless-gif-16755717
aman_shakesbeer#3710: 😭
Chad Kensington#9564: not sure if helpful but this paper might be interesting https://arxiv.org/abs/2009.12677
brubsby#7196: > In this paper, we argue that only using pre-trained language models with textual concepts alone cannot provide
> sufficient information for generative commonsense reasoning.
brubsby#7196: funny
destrucules#7325: I've been watching lectures from a few weeks ago from NYU Center for Mind about LLMs, and it's fascinating to see how researchers are adjusting
destrucules#7325: You can hear a bunch of different researchers saying things along the lines of "I used to think this, but evidently, that's at least not *entirely* the case..."
But overall people are still very split about what LLMs can and cannot do
destrucules#7325: https://youtu.be/vzS6Di5Mrxk
destrucules#7325: This one was really cool because they do some mechanistic interpretability
destrucules#7325: Lots of discussion of content effects and content-specific vs non-content-specific reasoning, compositionality, and grounding
destrucules#7325: _ _
One of the other lectures gave a fascinating demonstration that LLMs do not benefit in the text domain from multimodality, and in-fact, coupling a pretrained vision model with a pretrained text model is just as effective as training a single model on both modalities - like, no discernible difference.
This is used to argue that LLMs have the right conceptual framework / world model despite lacking direct multimodal grounding, such that adding multimodal inputs is as simple as mapping features in the new input spaces to existing schemata in a frozen LLM
destrucules#7325: https://youtu.be/x10964w00zk
destrucules#7325: Relevant part is around 25 minutes in
destrucules#7325: By all means though the whole thing is great
wabi-sabi#5811: Does anyone challenge the idea that word tokens aren't just as valid a form of sensory grounding as audio-visual data?
destrucules#7325: Yeah iirc they have the same number of talks in the YES camp as the NO camp
destrucules#7325: My take is that the arguments in the YES camp (LLMs do need additional sensory modalities) were dominantly philosophical, whereas the arguments in the NO camp tended to be more empirical
destrucules#7325: The graphs at 26 minutes in are very compelling imo but again it probably won't convince everyone
wabi-sabi#5811: What I mean is that it's not about grounded versus ungrounded, just about one type of data versus another. Anthropocentric to assume audio-visual is more "real".
destrucules#7325: Something interesting that was brought up later in this video, I don't remember exactly where, was that congenitally blind people display similar levels of understanding of visual concepts as sighted people, but with some gaps
destrucules#7325: Yeah that wasn't brought up explicitly but I think the evidence supports that position
OccultSage#3875: *points and laughs*
brubsby#7196: is english text not anthropocentric lol
OccultSage#3875: I agree with this. There are Deaf-Blind people in this world. Is their world not real? 🙂
destrucules#7325: I think next-token prediction is richer than a lot of people give it credit for as an optimization function. The training data is predominantly human-generated text, and while we can certainly revise things and produce documents nonlinearly, these nonlinearities are the exception - most of what we write is written in the forward direction. So a causally masked decoder-only transformer is effectively learning to model the computational process that produces the next word given the previous words, and that computational process is human thought. Through this lens maybe it's less surprising how far the technology has come
StellaAthena#3530: Also ableist
destrucules#7325: Stella, I just saw a lecture pop up on YouTube by you, about mechanistic interpretability. Sorry for my question before
tpapp157#3643: Writing is not a linear process, nor is human thought. Reading isn't a linear process either. Not to say that next word prediction isn't a good and effective learning task. Just to say that the human thought process of reading/writing are far more complex and interactive than deterministically processing a linear sequence of tokens.
youkpan#5346: hello, i built a server for any GPU machine without a public address. https://github.com/youkpan/LLM_Open_server
destrucules#7325: I am skeptical of this claim. If a person is writing a stream of consciousness, i.e. every thought that comes to mind is immediately written down, and you ask that person to write a poem with exactly 17 words in it, will that person write the 17-word poem correctly the first time? I'd argue no. My interpretation is that humans do not plan what they are going to say. However, after we have come up with something to say, we are able to judge and revise it. We may have a vague idea of what we are aiming to communicate, but we do not select the end of the sentence before choosing the words at the beginning.
So the next question is, if we can split human-written text into phases of linear forward passes (where you don't know what you're going to say at the end until you get there) and cycles of revision, how many revisions will occur for the average paragraph of text you find on the internet? My hunch is that it's maybe one or two, because these revision cycles are expensive.
Perhaps your experience writing things is very different from mine, and you do begin by thinking of the words at the end of the sentence before deciding what word to say next, but based on my own experiences, I do not seem to have neural machinery for that task.
destrucules#7325: _ _
Maybe I'm just weird here - would you find it equally easy to write an essay starting with the last sentence, then the second to last, working your way back as you would writing it the way that I at least find more natural, starting at the beginning and working towards the end?
destrucules#7325: My brain is super biased for writing in the forward direction
destrucules#7325: Also, I might go back a couple sentences when I'm reading something to reread a relevant phrase, so in that sense, reading is definitely not linear. But LLMs reread the entire context window on every inference pass so it's not like they don't have the same access to information
kd90138#9368: I had a strong thought about the original comment by ersatz (in agreement of course) but i am still struggling to put it in words but this does capture the idea of some of it.
kd90138#9368: https://brailleinstitute.org/books-for-visually-impaired/download-books-and-magazines
kd90138#9368: In my acquisition of dei sensitive Korean sources
kd90138#9368: I encountered a lot of books in this format
kd90138#9368: They are a valid source of high-quality, structured, accessibility-oriented, multimodal (often accompanied by audio aids) linguistic data
kd90138#9368: Left on the table unfortunately. It's a disservice both to llm research and those communities who may benefit from such resources
kd90138#9368: If chatgpt could output braille/BARD compatible content it would be a game changer for those communities.
As it is there are tools to bridge the gap but it's not enough imo
mahouko#7043: what's the gap? can ChatGPT not be used already via a Braille display? or is the problem that the web interface (rich text, animations, etc) is not accessible?
bc [EU]#4818: What is braille compatible content?
Isn't braille just a font you touch with your hands? How would its output be "braille compatible" or incompatible?
kd90138#9368: I am not visually impaired myself and do not have first hand experience or knowledge of e-braille or bard formats.
I did a bit of research and asked bard to summarize
|
kd90138#9368: E-braille is not just ebooks with a Braille font. It is a format that allows for the electronic representation of Braille text. This means that e-braille books can be read on a variety of devices, including refreshable Braille displays, computers, and mobile phones.
kd90138#9368: Just like the multilingual token gap, usable does not necessarily mean as effective as it could be.
jober#3399: Maybe it'd be a good idea to write a blog post aimed at ML researchers about this if you want to recruit people to the cause, could even be a hit on HN or something
jober#3399: guessing not a lot of people even know about this need at the moment
kd90138#9368: I would say that part would be in we don't know what we don't know territory
kd90138#9368: I would hope so, but ATM i am occupied with polyglot which tries to first bridge the multilingual gap
jober#3399: (after you've researched the topic a bit more of course, basically a combination of 1) recruitment, this is important and 2) explainer, how does this work)
kd90138#9368: I think the first thing I should/could do is trying to make sense of the bard books I've obtained in first a personal level and then a LLM consumable level.
kd90138#9368: I felt real bad being unable to use those books that somebody spent a lot of effort on, while we suffer from pretraining data starvation in general
tas#5369: Is there any literature on lossy text compression?
mahouko#7043: I would be happy to advise on how to make a more accessible ChatGPT interface
mahouko#7043: not visually-impaired, but I used to be an accessibility tester at Microsoft
mahouko#7043: so I at least know how to make an interface WCAG-compliant
kd90138#9368: Sentence vector representations?
tas#5369: i want to be able to decompress too
jober#3399: it's interesting how certain disabilities are over-represented or under-represented in communities like this one (often for obvious reasons of course), I'd wager that blind people are among the most underrepresented, while I've seen quite a few deaf people on the ML-related discords I'm in
jober#3399: this can easily lead to tools that could be made for e.g. blind people just not getting made because it's not on the radar of anyone
Vals#4167: hey, is there a library that can interface with a minecraft instance from python and actually works? I tried malmo but its a broken mess
kd90138#9368: If you save the original sentence along with the vector representations and retrieve them when necessary that's basically decompression. What are your operational configurations?
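(A toy sketch of that retrieval framing: keep only an embedding per sentence and "decompress" by nearest-neighbour lookup against a reference corpus both sides already share. Model and corpus here are purely illustrative, and the result is lossy by construction.)
```
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
reference_corpus = ["The meeting moved to Friday.",
                    "The meeting is cancelled.",
                    "Lunch is at noon."]
ref_embs = model.encode(reference_corpus, normalize_embeddings=True)

def compress(sentence: str) -> np.ndarray:
    return model.encode([sentence], normalize_embeddings=True)[0]  # 384 floats

def decompress(vec: np.ndarray) -> str:
    return reference_corpus[int(np.argmax(ref_embs @ vec))]

code = compress("They pushed the meeting to Friday afternoon.")
print(decompress(code))  # -> "The meeting moved to Friday." (lossy)
```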
mahouko#7043: sometimes it's as simple as "is the chat client accessible enough". dunno what the Discord situation is. but my visually-impaired friend told me how Twitter kept making breaking changes regarding API/accessibility, which was a frustration
tas#5369: i dont want to keep the original sentence
tas#5369: what do you mean by operational configurations?
kd90138#9368: You just told me you don't want to keep the original configs etc
That kind of constraints
kd90138#9368: Most lossless compression schemes keep a dictionary and many lossy compression schemes do too. (global or per corpus etc)
tas#5369: I'm trying to optimize for stored data size. How would keeping the original sentence help in that goal?
kd90138#9368: It depends on how much data you are dealing with.
kd90138#9368: It seems like there isn't as much research on pure lossy compression as you describe. Usually it's dropped vowels or character replacements
Gifted Gummy Bee#3277: Yes it can
Gifted Gummy Bee#3277: :grimberk:
Gifted Gummy Bee#3277: Easily doable with semantic kernel
lordvader31#1368: Thanks a lot
Chad Kensington#9564: Np
theCouchPotato#5751: (moved to lm-thunderdome channel)
StellaAthena#3530: #lm-thunderdome
hails#6601: https://twitter.com/jbrowder1/status/1652387444904583169?s=21 you have GOT to be kidding me
hails#6601: not this guy again
artem9k#7593: cool where do I sign up
sekstini#0069: do those (+$217.85) count API costs? :berk:
Chad Kensington#9564: Oh wow microsoft might now have personal infos of so many ppl
destrucules#7325: I mean that amount of money will buy you around 4.8 million tokens assuming input and output tokens are roughly 50/50 split.
destrucules#7325: Probably not a significant expenditure given the thread
AI_WAIFU#2844: respect the hustle
Kharr#7888: Snakeoil salesmen are at the forefront of every hype trend
yankscally#2900: is this chair real or fake? https://cdn.discordapp.com/attachments/729741769738158194/1101994604296933427/iu.png
jrowe#5371: Gotta be real
jrowe#5371: All the little "chair for human sitting" nuances are there and consistent
Fessus#9563: Looks real, lots of correct symmetry in ways which image generators usually fuck up in subtle ways
bc [EU]#4818: Good thing for people who are too stupid to manage their finances.
bc [EU]#4818: Useless for everyone else.
nullonesix#0744: real, too ridiculous to be fake
bc [EU]#4818: Probably one of those "modern art" things.
yankscally#2900: correct
yankscally#2900: and this one? https://cdn.discordapp.com/attachments/729741769738158194/1102001141979566140/product.png
bc [EU]#4818: AI art. You can see because the wall-floor border is uneven.
yankscally#2900: can you prove it?
bc [EU]#4818: https://cdn.discordapp.com/attachments/729741769738158194/1102001792650317875/why_ai_art.png
bc [EU]#4818: Unless I'm wrong of course. 😄
yankscally#2900: none of the 'lines' in my house are straight (built before metric system)
bc [EU]#4818: As for the chair itself I have no complaints. No hands to see if they are screwed up or not.
Parkourwalrus#0212: https://cdn.discordapp.com/attachments/729741769738158194/1102002183517524070/mh_prd_ovw_eames_molded_plywood_chairs.png
Parkourwalrus#0212: chairs can get really weird
yankscally#2900: thats a real chair
Parkourwalrus#0212: it is
Parkourwalrus#0212: if I did not know that it is
Parkourwalrus#0212: I would not believe that without proof
yankscally#2900: by the way, this chair is fake
Kharr#7888: The blending on the leg from soft material to wood is horrible
eirai#3591: yea it doesnt look real enough
bc [EU]#4818: Good eye. Also, the floor is VERY uneven when going through the second leg from left. Like 5cm bump. Everyone would notice it in real life.
eirai#3591: wait no not schizo enough
yankscally#2900: https://cdn.discordapp.com/attachments/729741769738158194/1102003195313651812/realchair.png
eirai#3591: looks too smooth to be real
bread browser#3870: Thought this was off-topic for a moment
bc [EU]#4818: AI. Why? The resolution is `512 x 512`. 🤣
yankscally#2900: Yeah, was waiting for that one
bc [EU]#4818: Although the file name is `realchair.png`, so kinda sus.
yankscally#2900: And I was also waiting for that one
yankscally#2900: It seems like I’m going to really have to try to fool people here. There’s no true way to ‘detect’ AI gen though
yankscally#2900: Soon it will be virtually indistinguishable. But thanks to Donald Trump, of all people, I'm desensitised to fake media already
yankscally#2900: Not to award him any credit, he’s just really loud and in a way I think he manifested a lot of it as well
Parkourwalrus#0212: The AI chairs all look very average
bc [EU]#4818: Also, the legs are at a wrong angle.
bc [EU]#4818: https://cdn.discordapp.com/attachments/729741769738158194/1102009378506281030/why_ai_art.png
Parkourwalrus#0212: It would be a lot harder to tell normal chairs from AI generated normal chairs than weird avant garde chairs from slightly off AI generated normal chairs
Parkourwalrus#0212: Not that any of that is necessarily inherent to AI image generation
Parkourwalrus#0212: But I doubt the datasets have particularly good furniture specific tagging
yankscally#2900: Yeah. It was just a mildly interesting thought experiment im doing while procrastinating
yankscally#2900: With people, you have no chance, faces especially
yankscally#2900: It’s down to an art
bc [EU]#4818: I agree, it will be indistinguishable. We should cherish the times we can spot them.
I'm also desensitized to fake media and always look for sources, but because _BOTH_ sides lie, not just one.
yankscally#2900: Yeah, without digging too much into politics, I thought it was interesting how we all already believe about 10% of what they say in the news, whereas 20 years ago it was wayyyyyy higher
bc [EU]#4818: If you watch archival TV from 20 years ago it was really way higher quality.
yankscally#2900: Like the news used to be respected now it’s click bait
bc [EU]#4818: It's interesting that we haven't really seen any deepfakes in politics in the last 4 years they've existed.
I expected it to explode.
yankscally#2900: It was a serious concern that AI gen will get misused for anti government purposes, and the only thing it’s truly done is upset the music industry a bit and loads of memes of Trump and Biden playing Minecraft
yankscally#2900: That’s just the voice stuff though, ChatGPT has handled itself well in the market and text gen seems to be a lot more compatible with society. The art stuff is just weird. When I make those fake chairs in 3 seconds on a 3060, I feel like I have cheated God
bc [EU]#4818: Music industry is pretty much already generated, but by humans.
It's a literal industry, not art anymore.
To find anything actually ambitious you need to look outside of the top 100 hits.
So no changes there. 🙂
yankscally#2900: True, the music is close to home for me personally so that’s why I mentioned it
bc [EU]#4818: Another 180° change from the last 20 years.
bc [EU]#4818: And I love the productivity increases. ChatGPT doesn't replace the brain, but it surely does well with lowering the amount of boring work.
yankscally#2900: It took my computer about 5 seconds to make this out of a bunch of words, so I’m still in shock I think https://cdn.discordapp.com/attachments/729741769738158194/1102011928563429469/00996-1032810962.png
yankscally#2900: Yes I’m a software developer and ChatGPT has been a dream, but I stopped using it recently out of principle
yankscally#2900: It needs to be free, local, and customisable
bc [EU]#4818: I'll think about principles when open-source models become viable. 😄
bc [EU]#4818: Which I'm sure will happen in a year or two.
yankscally#2900: Yeah my principle being: I’m not going to start relying on something if I have to pay for it
yankscally#2900: I’m tired of these guys doing that type of stuff I think everyone is
yankscally#2900: Give us the shit
bread browser#3870: My phone can make this in 2 minutes https://cdn.discordapp.com/attachments/729741769738158194/1102012616072773712/IMG_0007.png
bc [EU]#4818: Well, if I'm making money on something I do feel I should pay for it. IMO nothing wrong with that.
yankscally#2900: Of course, but it annoys me, out of principle, that I know I can run this locally, albeit slower, but I know this
bc [EU]#4818: You can't. GPT-4 is a league above anything you can run locally.
bc [EU]#4818: A human can make this in a few days
bc [EU]#4818: https://cdn.discordapp.com/attachments/729741769738158194/1102012979786022942/iu.png
yankscally#2900: It’s a matter of time. And it loses points for customisability
yankscally#2900: ChatGPT can’t do the thing I need it to do
bread browser#3870: Looks bad, not a great a example
bc [EU]#4818: Intentionally. There are a ton of "scribble" modern art that sell for more than our homes combined. xD
yankscally#2900: I used GPT4 for a month
yankscally#2900: I _almost_ can’t live without it, its really really good. But I chose to
bread browser#3870: https://www.nytimes.com/2016/05/31/arts/sfmoma-glasses-prank.html
yankscally#2900: GPT4 is not helping me - but it has the possibility to, if i can train my own model checkpoints
yankscally#2900: Then it would be crazy. But they aren’t doing that. It’s so generalised
yankscally#2900: I need a specialised model
bc [EU]#4818: You can't train Alpaca?
bc [EU]#4818: I heard it takes like a $100 nowadays.
yankscally#2900: I have a 3060 and patience also
bc [EU]#4818: Like a $100 in the cloud.
yankscally#2900: 12GB of VRAM is nothing to laugh at
yankscally#2900: But it’s a drop in the ocean for AI
yankscally#2900: it’s at the bottom of entry level. It could train something useful to me in a few weeks
yankscally#2900: … months maybe haha
bc [EU]#4818: Not really. With fp16 you could train _maybe_ a 5b model.
So the only realistic option is training a 3b model, because 7b won't fit.
yankscally#2900: I don’t need anything over 3B personally
bc [EU]#4818: While if you trained in the cloud you could train a good 20b model and run it locally with int4.
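For anyone following along, a rough back-of-envelope sketch of where numbers like these come from. The bytes-per-parameter figures are common approximations (full fp16 finetune with Adam, versus a frozen int8 base with LoRA, versus int4 inference) and ignore activations and overhead, so treat it as a ballpark rather than a rule:
```python
def vram_gib(params_billion, bytes_per_param):
    # crude estimate: parameter storage only, no activations or framework overhead
    return params_billion * 1e9 * bytes_per_param / 2**30

for b in (3, 7, 20):
    print(
        f"{b}B params: "
        f"full fp16 finetune ~{vram_gib(b, 16):.0f} GiB (weights+grads+Adam), "
        f"int8 base + LoRA ~{vram_gib(b, 1):.0f} GiB, "
        f"int4 inference ~{vram_gib(b, 0.5):.0f} GiB"
    )
```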
yankscally#2900: I’m doing something super specific
yankscally#2900: https://gdscript.com
yankscally#2900: I need it to be an expert at a C++ binded obscure language for a game engine
bc [EU]#4818: Godot isn't that obscure...
yankscally#2900: Yeah, you're right. It was when I started using it
yankscally#2900: That’s cool
yankscally#2900: ChatGPT isn’t good at GDScript
yankscally#2900: It’s amazing at python though, like seriously rock my world amazing
yankscally#2900: Like turn my brain farts into real functioning code
bc [EU]#4818: I tested GPT-4 in Python, Bash and Rust. It's really good at Python, but mediocre at Bash and Rust.
bc [EU]#4818: It's good to know that Bash isn't hated only by me.
yankscally#2900: I think it’s because a lot of stuff is done inside one script in python, and the import library bits make it much easier to know what’s going on.
yankscally#2900: Although when the projects get to the size of LLMs it’s not the same
bc [EU]#4818: Python is pretty much the easiest programming language for humans. So no wonder it's also the easiest language for AI.
yankscally#2900: The pip stuff drives me nuts
bc [EU]#4818: Also, it's the most popular language right now, so the amount of training data is probably only behind JavaScript.
yankscally#2900: Yeah it makes the most sense and it's the most flexible, so it's harder to get wrong
yankscally#2900: So when the ai makes a code that does the thing right it just shoots itself in the foot first before doing what you need. Classic python haha
yankscally#2900: So yeah python is great and GDScript is very like python. I’d only have to change the python parts of the dataset to GDScript and I’d have a pretty good expert
yankscally#2900: 3B models seem to be absolutely high on something though, so I’d like to really put it to the test
yankscally#2900: I think that the real gold is smaller specialised LLMs that is a bunch of 3B models for whatever tasks. I’m assuming a 3B model would be the best for writing haikus and shit, while something like 5 or 7B might be good for code gen. And it seems like the generalised LLM models only seem to be ‘good’ after about 30B…
yankscally#2900: I think generalised data is making it stupid, and I believe ChatGPT are deploying lots of smaller models. I’m pretty sure I heard they are using models in the >1B range, so that gives me some hope
rocks#5239: zsh supremacy
Fessus#9563: Bash is terrible but everything else for shell scripting is effectively nonstandard and therefore worse
rocks#5239: I find myself using python more and more for automating random stuff. Still using shell scripts pretty often tho. It's a handy skill to have since I'm using linux as a daily driver.
wassname#1892: any more info on this, especially the reddit part. Are they usng this code https://github.com/CarperAI/pilev2 ?
ambivalent_case#8040: Can somebody explain to me why gpt models are good at translation? Is it due to scaling or instruction tuning? Any papers that try to explain this phenomenon?
kd90138#9368: https://arxiv.org/abs/2106.13627
kd90138#9368: There are more papers btw
kd90138#9368: Also you need to understand something. It's difficult to make an apples to apples comparison
kd90138#9368: Many production level nmt models are 200-600M parameter range
kd90138#9368: It's not fair to compare decoder models that begin at 1B and often go 100B+
kd90138#9368: Also, whether GPT models are good at translation is controversial, especially outside the Germanic and Romance families that GPT was trained on.
ambivalent_case#8040: Thanks, will look into it. For Slavic langs, the OAI API stuff works pretty well. As there are no technical details in their papers, it would be interesting to see how it works behind the scenes, meaning whether they use the original pretrained GPT model or a dedicated NMT model that does the magic.
Few-shot translation examples are impressive though.
bc [EU]#4818: Same.
bc [EU]#4818: Very simple - Shell. Anything more complicated - Python.
¯\_(ツ)_/¯#4465: Can you latent space voice models so you can tweak instead of just copying someone
Serge#0241: can someone please explain why llama seems to have ROPE (positional embedding) after the attention and not before? isn't it required for attention to know which token is close to which?
https://github.com/ggerganov/llama.cpp/blob/master/llama.cpp#L1111
Serge#0241: oh I think pos enc modifies Q and K before computing attention
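For anyone else confused by the same thing: RoPE rotates the query and key vectors by position-dependent angles before the attention scores are computed, so the dot product q·k depends on relative position. A minimal sketch of the idea (the half-split variant used by GPT-NeoX-style models; LLaMA's exact channel layout differs slightly, but the principle is the same):
```python
import torch

def rope(x, base=10000.0):
    # x: (seq, n_heads, head_dim); rotate channel pairs by position-dependent angles.
    seq, _, head_dim = x.shape
    half = head_dim // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(seq, dtype=torch.float32)[:, None] * freqs[None, :]
    cos, sin = angles.cos()[:, None, :], angles.sin()[:, None, :]
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# Only q and k are rotated; v and the rest of attention are unchanged:
# scores = rope(q) @ rope(k).transpose(-2, -1) / head_dim ** 0.5
```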
Serge#0241: sorry for posting in #general, should've probably been #research or something
kd90138#9368: That only matters to the first layer no? Also skip connection allows for the position info to survive that first layer
Serge#0241: yes, although in the interpretability papers i've been reading several attention head are essentially attending to "previous/next token", which is quite logical
Serge#0241: https://cdn.discordapp.com/attachments/729741769738158194/1102190304263688212/image.png
Serge#0241: would be impossible without positional data
kd90138#9368: I am going to give you biased and likely incorrect information, but
The first layer without the position info might as well be doing embedding processing (imbuing the embeddings with semantic information), especially with the MLP and attention in parallel
kd90138#9368: Which paper is this from?
Serge#0241: sorry, I misspoke, that was from an article https://www.lesswrong.com/posts/hnzHrdqn3nrjveayv/how-to-transformer-mechanistic-interpretability-in-50-lines
Serge#0241: almost at the bottom of it
Louis#0144: We are redoing most of pile v2
Louis#0144: FYI
StellaAthena#3530: @looking-for-work I have marked a number of issues in the GPT-NeoX and LM Eval libraries as “good first issues.” If you’re looking to gain experience with either library, they’re a great place to start!
<https://github.com/EleutherAI/gpt-neox/labels/good%20first%20issue>
<https://github.com/EleutherAI/lm-evaluation-harness/labels/good%20first%20issue>
Faldore#6973: I will look into it
login#7229: Hi I was wondering is there any way to make an AI model browse twitter for free in order to aggregate the AI safety data available over there ?
StellaAthena#3530: You may be able to hack it, but if you follow the rules, no; Twitter charges an obscene amount for API access.
login#7229: I mean if there's no other way around we're not gonna follow the rules
StellaAthena#3530: 🤷 have fun I guess.
login#7229: Not sure it's fun to do that but well
login#7229: Haven't found any decent aggregate of all the data produced irl by social networks on ai safety
Cybercrash#1468: @StellaAthena for someone who would want to try out let’s say the OOM issue on GPUs. How would I get access to compute to debug?
StellaAthena#3530: Oh hmmm. I marked that one as “good first issue” because it’s not hard, but maybe I shouldn’t have due to the compute reqs
Cybercrash#1468: I could try to look into it without actually running it but would probably be better if I know for sure that the fix worked.
Had a similar question for reproducing the weight mismatch errors, which seem to occur on systems with over 40GB of VRAM (issue 645 for gpt-neox).
Eryk#9122: maybe someone can explain this to me, why did people choose the slowest possible language (python) for machine learning?
StellaAthena#3530: Because it’s user-friendly and 99% of the time doesn’t matter
Eryk#9122: I found this somewhere https://cdn.discordapp.com/attachments/729741769738158194/1102366216665968700/image.png
StellaAthena#3530: It’s from this blog post by @chilli https://horace.io/brrr_intro.html
Eryk#9122: node.js would be better imo
not even mentioning rust
StellaAthena#3530: Said blog post also points out that this is a toy example and non-representative of actual ML workflows
StellaAthena#3530: > Given this, you might be shocked that anybody uses PyTorch at all, but keep in mind that modern deep learning models are often performing massive operations. Moreover, frameworks like PyTorch execute asynchronously. That is, while PyTorch is running a CUDA kernel, it can continue and queue up more CUDA kernels behind it. So, as long as PyTorch can "run ahead" of the CUDA kernels, most of the framework overhead gets completely hidden!
Eryk#9122: Okay that explains it, I found this pic on twitter somewhere
chilli#5665: I mean, I’d note that most of that already isn’t even python
Eryk#9122: yeah, anyway it's a shame that node.js doesn't have the ecosystem for datascience as python
StellaAthena#3530: Human hours are much more expensive than computer hours. Fixing this would be a massive undertaking and probably a net-loss of productivity
Maximum Limelihood Estimator#8915: This is my personal hobbyhorse actually! So, OK, Rust would actually be a kind of terrible language for ML. Not because Rust is a bad language--it's amazing, and I love it!--but because it is *very* labor-intensive
StellaAthena#3530: Like, I’m worth ~500 A100s on an hourly basis
Eryk#9122: flex
Maximum Limelihood Estimator#8915: When you are dealing with ML, you're talking about PhDs getting paid in the quarter million range or higher, typically. So wasting even a second of their time is bad
Maximum Limelihood Estimator#8915: But the thing is this is actually only a *partial* explanation because ML is not actually the main programming language for ML
Maximum Limelihood Estimator#8915: C++ is
StellaAthena#3530: So if it takes me 1,000 man-hours to write RuTorch it would need to *save* 500,000 A100-hours to break even. At a 10% speed-up (which won’t happen) that would mean the code would need to run for 5M A100 hours. At a 1% speed-up, it would take 50M A100 hours
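Spelled out, that break-even arithmetic looks like this (the numbers are just the ones quoted above, and "RuTorch" is hypothetical):
```python
dev_hours = 1_000            # hypothetical effort to write "RuTorch"
a100_per_human_hour = 500    # claimed exchange rate: 1 human-hour ~ 500 A100-hours
cost = dev_hours * a100_per_human_hour  # 500,000 A100-hours to recoup

for speedup in (0.10, 0.01):
    print(f"{speedup:.0%} speedup -> {cost / speedup:,.0f} A100-hours to break even")
# 10% speedup -> 5,000,000 A100-hours to break even
# 1% speedup -> 50,000,000 A100-hours to break even
```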
Maximum Limelihood Estimator#8915: people just call C++ from Python, so they can pretend it's Python.
Now, why do people use C++? ~~Because they enjoy suffering and are masochists.~~
chilli#5665: I guess the thing is, you’re also not gonna even get a 1% speedup lol
StellaAthena#3530: Yeah
Maximum Limelihood Estimator#8915: Well, hold on; that's true for like, LLM-style setups
StellaAthena#3530: Honestly the lost productivity in making people learn Rust is probably more expensive than the gains from running a tiny bit faster
chilli#5665: My take is that the extreme flexibility and ability for runtime introspection of python is the main reason why it’s successful
Maximum Limelihood Estimator#8915: Lots of other languages have that
Maximum Limelihood Estimator#8915: The main reason why it's successful is because it's successful :p
Eryk#9122: that doesn't really explain why google decided to make another library for ML in python (jax)
chilli#5665: Not nearly as much/easy as python
chilli#5665: In my experience
StellaAthena#3530: “Because people like Python and the slow-down for using it is almost non-existent”
Maximum Limelihood Estimator#8915: I mean, would you rather mess around with metaprogramming in Python or Julia? It's substantially *easier* in quite a few languages
synquid#7193: we're getting another ML framework tomorrow (2 days for americans)
synquid#7193: by Modular
synquid#7193: exciting
chilli#5665: Is it gonna be another ml framework?
synquid#7193: I assumed so
chilli#5665: Seemed pretty ambiguous to me
synquid#7193: but idk any more
synquid#7193: than whats public
chilli#5665: Hmm, I can’t answer this fairly since I haven’t seriously done metaprogramming in Julia or a lisp-like language lol
Maximum Limelihood Estimator#8915: Ahh, fair
chilli#5665: A hole I should fill at some point
chilli#5665: But compared to something like Ocaml/c++/rust, python is way easier
Maximum Limelihood Estimator#8915: Oh definitely
chilli#5665: Actually, reading their announcement post
Maximum Limelihood Estimator#8915: But that's also why I said partial answer; even though C++ is clearly not the best language for doing AI work, *most ML code is in C++*
chilli#5665: I predict it’s going to be
OccultSage#3875: You should. Learning and getting familiar with a paradigm helps you with other paradigms as well.
chilli#5665: 1. An inference accelerator/engine
2. An embedded python dsl for writing kernels
OccultSage#3875: The metaprogramming of Lisp far exceeds that of C++'s obtuse template system.
synquid#7193: seems about right
StellaAthena#3530: #2 would be nice
Maximum Limelihood Estimator#8915: Oh but anyways, besides what I said, JavaScript in particular wouldn't be a good choice of language. It's not mathematical, it's not as easy to do metaprogramming in as Python, it's still not quite as clean as Python, and the performance is middling. (Not bad, to be clear! But it's not fast at doing big math ops.) If you want fast ML, Julia is the language for that. (Arguably Nim could've been too, but it never caught on in ML so there's no data science ecosystem.)
sekstini#0069: There's Hidet I guess, but only for inference it seems? (https://pytorch.org/blog/introducing-hidet)
Maximum Limelihood Estimator#8915: *whispers* have u tried Julia yet
https://github.com/JuliaGPU/KernelAbstractions.jl
nate_k#7543: Hello, my name is Nathan. I am new in this group and trying to get involved in the research. Most of my ML experience comes from working with pytorch on a few NLP and CV tasks such as image classification and sentiment analysis. From my understanding people here are quite serious and skilled at their work, so I hope I can be of help
chilli#5665: I mean, triton is already this lol
chilli#5665: Hidet is kinda similar-ish to triton
Maximum Limelihood Estimator#8915: I imagine he's just going to walk up to the stage and reveal it's just Julia but he made it 0-indexed
chilli#5665: This is more of just “cuda but in Julia”
Maximum Limelihood Estimator#8915: I don't think so? For starters it compiles for CUDA, AMD, Intel, or CPU. On top of that it lets you just reuse old CPU code and run it on GPU
synquid#7193: I imagine it's going to involve a whole bunch of MLIR
chilli#5665: Err sorry, it seems like two parts - 1. “Cuda but in Julia”, and 2. “Array programming”
chilli#5665: I imagine #2 is what you’re referring to when you talk about reusing cpu code for gpu
chilli#5665: But 2 is also pretty analogous to cupy/Pytorch/jax I think
Maximum Limelihood Estimator#8915: It's pretty analogous yeah, but then I'm not sure what more you're suggesting. Unless I'm misunderstanding something about how Triton works--I was under the impression that it's just "CUDA but in Python"
chilli#5665: Yeah it’s quite different
chilli#5665: Triton is one level up in terms of abstraction compared to cuda
chilli#5665: E.g. you program from the perspective of blocks instead of threads
chilli#5665: (For comparison, I would view numpy/Pytorch as one level up compared to Triton - you program from the perspective of kernels instead of blocks)
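To make the "blocks instead of threads" point concrete, this is roughly the vector-add example from the Triton tutorials: each program instance handles a whole block of elements, and the mapping of that block onto threads is left to the compiler. A sketch, not checked against the current API version:
```python
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                        # which block am I?
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                        # guard the ragged tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

# launch: one program per BLOCK_SIZE-sized chunk
# add_kernel[(triton.cdiv(n, 1024),)](x, y, out, n, BLOCK_SIZE=1024)
```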
sekstini#0069: yup, and memory management is largely abstracted away
sekstini#0069: seems like Hidet is similar to Triton, but a bit lower level
chilli#5665: Mmmm… depends on what you mean by this
chilli#5665: Memory management is still pretty explicit in triton
sekstini#0069: mostly that you don't have control over shared memory
Maximum Limelihood Estimator#8915: Ahh. Then yeah I'm not sure how it differs from KernelAbstractions.jl, which tries to do the same thing (including abstracting away memory management). Apart from like, a handful of features that are already in base Julia, reading the paper it looks like it's around the same level of abstraction as KernelAbstractions.jl
sekstini#0069: but yeah I guess it's not that high level when you do pointer arithmetic and explicitly load from memory
chilli#5665: Kernelabstractions seems quite a bit higher level than triton
chilli#5665: Well, you don’t have explicit control over whether something is shared memory or registers i think
Maximum Limelihood Estimator#8915: Hmmm maybe? Like, I think it's supposed to be like the rest of Julia (high-level by default for easy prototyping, but you can use macros to perform low-level optimizations)
chilli#5665: But you do have explicit control over whether something is in global memory or shared/registers
Maximum Limelihood Estimator#8915: Like, you *can* handle memory management if you want, you just don't need to
chilli#5665: Do you have an example of a matmul?
chilli#5665: https://juliagpu.gitlab.io/KernelAbstractions.jl/examples/matmul/
chilli#5665: Since this is definitely not going to be fast haha
Maximum Limelihood Estimator#8915: Why do you say that?
chilli#5665: It doesn’t express tiling, pipelining, 2d block swizzling, tensor cores, etc.
chilli#5665: https://github.com/openai/triton/blob/main/python/triton/ops/matmul.py
Maximum Limelihood Estimator#8915: Ahh. Here you run into "I have no idea what those words mean" territory. I'm like 60% sure you just made up the word "Swizzling" :p
You can ask Valentin Churavy on the Slack, although he's quite busy so he might not be able to answer
synquid#7193: Swizzling 👀
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/1102377639362183208/IMG_2628.png
chilli#5665: It’s what this optimization does
chilli#5665: Basically, reordering the order in which you compute the output blocks of your matmul to maximize l2 reuse
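A rough Python rendering of that reordering, mirroring the grouped ordering in Triton's matmul example (GROUP_M and the block-grid sizes here are illustrative): instead of walking output blocks row by row, consecutive program IDs are packed into GROUP_M-tall column groups so the A and B tiles they need overlap more in L2.
```python
def grouped_block_order(grid_m, grid_n, group_m):
    # Map a flat program id to an (output block row, output block column) pair
    # in a "grouped" order rather than plain row-major order.
    order = []
    width = group_m * grid_n
    for pid in range(grid_m * grid_n):
        group_id = pid // width
        first_pid_m = group_id * group_m
        group_size_m = min(grid_m - first_pid_m, group_m)
        pid_m = first_pid_m + (pid % group_size_m)
        pid_n = (pid % width) // group_size_m
        order.append((pid_m, pid_n))
    return order

print(grouped_block_order(4, 4, group_m=2)[:8])
# [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2), (1, 2), (0, 3), (1, 3)]
```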
Maximum Limelihood Estimator#8915: I assume there's some way to do that kind of stuff but IDK how because I'm not a big GPU programming guy. All I know is I put code in and enough nyoom comes out
jrowe#5371: <https://en.m.wikipedia.org/wiki/Swizzling_(computer_graphics)>
login#7229: As a note, are you aware that some of the models you opensource are being used for military applications ?
login#7229: I don't see the point of opensourcing
kd90138#9368: If you don't see the point of opensourcing you came to the wrong discord. You're not going to change anybodies mind here
natedog#8669: has anyone been able to get megatron's new RETRO implementation working? I'm wanting to port it over to neox, but even getting it working with pure megatron has been a pain
𓅬 gabriel_syme 𓅬#3220: Wonder if that is the main reason RETRO didn't take over tbh. Just feels too complex..
natedog#8669: well I think it is also due to my lack of knowledge of megatron haha 😅 been having some difficulties around fused kernels not wanting to build properly
StellaAthena#3530: It took us forever to get them to build properly consistently in NeoX
natedog#8669: Any resources or advice that might help? Would be super appreciative 🤓
StellaAthena#3530: Talk to Shiv, Hailey, or Quentin? I don’t particularly remember.
natedog#8669: I've been trying to find a "Megatron" tutorial/walkthrough because the documentation/layout just doesn't grok with my brain 😅
natedog#8669: Okay I'll reach out, thx!
ILmao#5683: It's basically at the level of CUDA C, so lower level
ILmao#5683: More HAL/PAL than something like Triton.
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/1102438791525965864/IMG_2630.png
chilli#5665: There’s no reference to threads/blocks or explicit mapping to any of the levels of the cuda programming model
ILmao#5683: That's mostly a terminology thing, if you check out the docs for how `@index` works
ILmao#5683: https://cdn.discordapp.com/attachments/729741769738158194/1102441157843238992/Screenshot_20230430-204628.png
jay6860#7609: I am confused about the no_weight_tying setting of pythia model configs. Do you actually train pythia with embed and unembed layers tied? https://cdn.discordapp.com/attachments/729741769738158194/1102551849174847560/image.png
hails#6601: responded in issue! We do not tie those layers
ilovescience#3282: Wow, Geoff Hinton quit Google: https://archive.is/TgPyC
ilovescience#3282: (idk if this is being discussed in #off-topic, i can delete if so)
jay6860#7609: thank you!
tpapp157#3643: Not too surprising. We've talked about this before but we're in the middle of a big transition in the AI field and the days of prominent academics in leading business roles are rapidly coming to an end (unless those individuals can also make that transition). The leash of AI researchers in business is going to be a lot shorter with a lot more oversight to generate real business value rather than just mountains of academic papers.
tpapp157#3643: A bit of advice for those academic AI researchers in industry at all experience levels. I would start framing the success of your work in dollars (and other business KPIs), and not publications/citations. If your current work isn't generating dollars, then your priority over the rest of this year should probably be defining a roadmap for how it will start generating dollars.
Sphinx#2092: > framing the success of your work in dollars (and other business KPIs), and not publications/citations.
This is already the case for most people who are not just coasting / doing theory. No?
tpapp157#3643: Yeah I think this is true for most AI people in most companies. But my understanding is that certain organizations within tech companies like Deepmind have primarily measured organizational performance in publication numbers.
jrowe#5371: Yeah, seems like it might be important to not burn down the world with reckless commercialization of poorly understood and possibly uncontrollable technology
jrowe#5371: But hell, why not light the sky on fire, can't let Microsoft win!
Sphinx#2092: "fuck Microsoft" has always been a strong rallying cry
Sphinx#2092: though it used to be more for unintended windows updates
AI_WAIFU#2844: nah, they cornered the OS market for a long time and fucked a lot of people over
AI_WAIFU#2844: devs and user's alike
jrowe#5371: Lol, it just usually doesn't have the existential risk element
jrowe#5371: At least directly, anyway
jrowe#5371: AI is going to be one of the most ethically complex fields to work in
jrowe#5371: So if researchers have to convert to results oriented product developers in competition for a market measuring AI commercial relevance- speed, efficiency, utility, capabilities - we've got corporations with power and influence comparable to independent countries locking into a likely unwinnable arms race
jrowe#5371: My advice is to keep banging the research drums and calmly follow the lead of Hinton if necessary. People that stand up for principle could mean a helluva lot to the world.
Chad Kensington#9564: one thing i noticed working in ai is companies really don't want you to do open source stuff while working for them. And prefer to keep stuff proprietary. So in a way usual devs don't have a choice imo
jrowe#5371: Don't work for them is the easy answer, but maybe not a realistic one for most
Chad Kensington#9564: haha I suppose so
Chad Kensington#9564: but yeah money
|
jrowe#5371: An AI engineering union might be a more complex but infinitely preferable solution
StellaAthena#3530: I’m quite pro-union but I don’t see how a union would solve the problems you’re discussing.
Chad Kensington#9564: unless open source is a basic right haha
jrowe#5371: It would have to incorporate an ethical foundation with guidelines and guardrails that define the limits of what employers can ask, so if an engineer is concerned that something is potentially dangerous, they'd have the right to not work on it
StellaAthena#3530: Is there an example of such guidelines being developed and implemented by a union successfully in another field?
StellaAthena#3530: The way you get to this is through Socialism tbh. Package it as part of the basic right to control the fruits of your labor
Chad Kensington#9564: interesting
Chad Kensington#9564: but the boundaries might get iffy tbh
Chad Kensington#9564: since what tech to open source vs what tech can be considered to be company's own is a pretty old debate
jrowe#5371: Electrical engineering and building safety codes, maybe? I'm not sure how much can be drawn from existing models, given the novelty of the risks
Chad Kensington#9564: + what is dangerous like what @jrowe is thinking
jrowe#5371: And really, it might be a paper tiger unless you get industry buy in
StellaAthena#3530: I recommend reading about the history of such things, rather than theorizing
jrowe#5371: Fair enough - the peripheral notion was the ethical foundation I've seen for unions that pop up in the news every so often
Kharr#7888: Yes. Publication is near the bottom of the priority list in companies outside of big tech. Generating $ has always been top priority.
jrowe#5371: <https://www.nspe.org/resources/ethics/code-ethics> things like this that give members an ethical framework within which the profession engages employers
vikasp#7540: I've been part of a couple of unions, and managed people in a union, also. In my opinion, unions exist to protect the rights of workers, narrowly defined as the right to keep all of your limbs, and to avoid termination. Ethics aren't a focus (although I haven't seen the full breadth of all possible unions!)
At UPS (where I was a loader), this mainly came down to ensuring that people didn't get fired for refusing to do unsafe things. The union would help arbitrate disputes and award damages - https://www.upsteamstersunited.org/ups_agreements. Of course, ethics don't come into play when you're loading boxes into trucks.
At the state department (where I was a diplomat), the union was surprisingly inactive on the ethics front. The main way to debate ethics was via the "dissent channel", where you sent a cable detailing what you thought was wrong about US policy in a specific area - https://fam.state.gov/fam/02fam/02fam0070.html. These weren't read very widely, and were mostly a blowoff valve than a real way to debate policy. So again, the union was mainly a way to arbitrate pay disputes and ensure you kept your limbs (actually a concern at the state department).
One reason for this is that unions exist to protect you from bad management. But over time, union leadership becomes a kind of shadow management. And can often be equally bad. Their interests are mostly in solving for the needs of senior workers who are active in the union.
jrowe#5371: yeah, it's an imperfect solution, and usually reactive towards things that go bad, and slow
𓅬 gabriel_syme 𓅬#3220: Engineering standards perhaps? Not really the same but there are very clear guidelines for safety in the real world although I guess not enforced directly by a union
Can#7725: Has Alex' Turners presentation been recorded? Sorry if the link has already been posted, searched for it and didn't find.
Dragon God#2718: I also have this question.
bread browser#3870: is opt 1.3b better than pythia-1.4b?
StellaAthena#3530: See the appendix in the Pythia paper for evaluations on six common tasks.
bread browser#3870: I sort of wanted an opinion but ok
KatBelem8#9374: Hi everyone (:
I was reading through the ||Pythia paper|| and I have a question about the models that are made available in HuggingFace.
In particular, my question concerns the gender bias experiments that were carried over in the Pythia suite models. Is it the case that the last checkpoints for the **deduped models** made available in huggingface have gone through the gender swapping training part?
If not, is there any way of getting access to those?
Hyperion#0575: The gender swapped trained models are not the public ones on HF, but maybe we can make them public or give you access?
@StellaAthena
hails#6601: I could make those public later this evening!
KatBelem8#9374: Ohh, really?! That would be great! Thank you so much 😍
Hyperion#0575: The eval code we used for winobias is not in lm-eval harness though because it required some modification and I haven't got round to merging it yet
It's on my todo list though
KatBelem8#9374: That's ok, I'm interested in using the model for my own set of custom templates (:
StellaAthena#3530: I am most of the way through this and really wish HF had folders
KatBelem8#9374: I hope it's not taking too much time though! I really appreciate your openness to make it available (:
KatBelem8#9374: If you're ever in Irvine, leave me a message! I'll buy you guys some coffee ehehe (:
StellaAthena#3530: https://huggingface.co/EleutherAI/pythia-intervention-70m-deduped
https://huggingface.co/EleutherAI/pythia-intervention-410m-deduped
https://huggingface.co/EleutherAI/pythia-intervention-1.4b-deduped
https://huggingface.co/EleutherAI/pythia-intervention-long-1.4b-deduped
https://huggingface.co/EleutherAI/pythia-intervention-6.9b-deduped
StellaAthena#3530: The rest will be at the corresponding urls when I finish with them
KatBelem8#9374: great 😄 thank you!
StellaAthena#3530: Comment updated to include all of them
KatBelem8#9374: is the long referring to the experiment where instead of the last 7% you try the gender mitigation technique for the last 21% of the training?
StellaAthena#3530: Yes
StellaAthena#3530: The "long" intervention is the only one that should say 63B tokens, the rest should say 21B tokens
StellaAthena#3530: Woohoo it looks like I got the readmes right (in that respect, at least)
StellaAthena#3530: Will do! I actually might be in Long Beach later this year.
hails#6601: beat me to it lol
Andrew#8551: Does anyone have a reference to a catalogue/discussion of datasets?
Gifted Gummy Bee#3277: huggingface?
Andrew#8551: You're right, that's probably the best option
makya#2148: Or papers code.
DaRK#7934: I will say some controversial stuff, but hear me out: Shouldn't we first worry about abusive use of large-language models and highly realistic deep-fakes before AGI taking over humanity?
kd90138#9368: Well i think one is the method of the other.
nshepperd#2316: im sure we can find enough people on the planet to have someone worrying about both at once
Ravna#1831: We should worry about there being not enough abusive use of language models and not enough deep-fakes. A more capable model suddenly released to an innocent and naive population that's not used to less capable models is a bigger disaster.
DaRK#7934: I mean I can list many examples of very harmful uses of LLM+deepfakes now
DaRK#7934: This combination is deadly.....for example generation of Youtube videos and auto-filling them with AI comments
Ravna#1831: There should be more harmful uses while language models are still weak, like at this moment. It can help build cultural immunity.
DaRK#7934: I believe, in the near future, we will have AI-only Youtube channels
nshepperd#2316: there's already AI-only youtube channels
DaRK#7934: There is no immunity when there is no way to discriminate between human and AI
Ravna#1831: Then don't trust humans either. It's not like we have really trusted humans since internet, or since civilization.
Ravna#1831: also, let's move to #off-topic
ephemical#2302: I think people are already doing this now
DaRK#7934: This discussion is continued in #off-topic
kevin-ai#4032: Does anyone receive a notification of acceptance for ACL 2023?
StellaAthena#3530: https://twitter.com/aclmeeting/status/1653397060580769797?s=20
kevin-ai#4032: @StellaAthena Thank you for sharing!
However, the link seems to not be valid.
StellaAthena#3530: huh
StellaAthena#3530: Well, it was an ACL tweet saying there was a delay and the expected notification was friday
jrowe#5371: Tweet was deleted
StellaAthena#3530: Right
StellaAthena#3530: https://twitter.com/aclmeeting/status/1653398131180961793?s=20
kevin-ai#4032: Oh, it was delayed to **May 8**...
1 week delayed..
Thank you 🙂
Lots of recent CL conferences have delayed their deadlines, I think.
Gifted Gummy Bee#3277: But they aren't really realistic, and that abusive use of LLMs gets solved with AGI
Bayang#9929: Hi everyone, I have some instruction data. Is there any way to fine-tune a Pythia model?
Any link will be useful 🙂
Gifted Gummy Bee#3277: Could use stanford Alpaca's method
Gifted Gummy Bee#3277: For LORA
Bayang#9929: Oh thanks, but I wanted to use an open and free way, not a research-purpose-only method (Alpaca, GPT-x, Cohere, etc)
Gifted Gummy Bee#3277: But... you're using a training data format? That has nothing to do with the data?
Gifted Gummy Bee#3277: It's like saying you refuse to run because people run for research
Gifted Gummy Bee#3277: It's just a way of formatting instruct data to train on
Bayang#9929: I understand what you mean, I would like to use it for commercial purpose:)
Gifted Gummy Bee#3277: I'm quite sure that people don't have a license over how you format your training data for your model, but alright
Bayang#9929: thus, I can't use the Alpaca method because of the inheritance from LLaMA, you get what I mean
Gifted Gummy Bee#3277: No I don't
Gifted Gummy Bee#3277: The Alpaca training *format* and the data are two separate things
Gifted Gummy Bee#3277: And also separate from LLaMA
Gifted Gummy Bee#3277: These are 3 distinct things
Bayang#9929: i have already data, the problem is not the data, but how to use it
Gifted Gummy Bee#3277: Alpaca's methodology isn't under the LLaMA license; you can't really license a training format
Bayang#9929: ok
Gifted Gummy Bee#3277: using Alpaca's instruct, input, output format should be fine
Bayang#9929: okay
Bayang#9929: got it
Bayang#9929: thanks 🙂
Gifted Gummy Bee#3277: https://cdn.discordapp.com/attachments/729741769738158194/1103012552901939240/image.png
Rohan#3064: @zphang how does the finetuning flops rule of thumb work for lora adapter tuning? iirc the rule of thumb is 3 flop per model parameter per forward step, 12 flop for forward+backward, but with lora, you do still need to do backprop on all parameters, then it gets svd'd after that, or not?
zphang#7252: You're still doing a full forward+backward through most of the model (down to the lowest LoRA) from what I understand; your main savings are that your optimizer is doing less work
zphang#7252: There's no SVD it's just an added 2xlinear layer with a small intermediate dimension
natedog#8669: Are finetuning pythia models generally seen as worth it? Planning on doing a bunch for code, but someone told me they might not be that good to finetune with but then again I've been looking through the server and people seem to think doing so is pretty good soooo 🤷♂️
Swair#2790: Does anyone know why, once PyTorch inference goes out of memory, subsequent calls also throw out-of-memory errors?
Ryu#0274: @OccultSage seems to think they are (especially the deduped ones)
OccultSage#3875: Yeah, I do. Way better than anything in the StableLM/Vicuña line.
OccultSage#3875: I'm starting a finetune on the latest Pythia checkpoints dropped.
Kharr#7888: What's new with the latest checkpoint?
OccultSage#3875: +125b tokens and 4096 context.
Kharr#7888: Are these the "V1" iterations or something else?
OccultSage#3875: Ask @guac 🙂 I'm just the person to tell @guac and @Louis when things suck. 😉
OccultSage#3875: https://huggingface.co/CarperAI/pythia-2.8b-deduped-4k
https://huggingface.co/CarperAI/pythia-6.9b-deduped-4k
Kharr#7888: Thanks, I'll check them out. I was looking under EAI, not Carper
StellaAthena#3530: If it makes you feel better, I also didn’t know what he meant :berk:
OccultSage#3875: This will be interesting to compare. Though that's three variables -- different batch size (due to context memory usage), different context size, and more tokens.
natedog#8669: what are you training on? red pajama?
OccultSage#3875: Anlatan's internal literature finetune dataset.
XOR#4986: on what dataset?
OccultSage#3875: Uh ... ☝️
XOR#4986: wait, are you guys carperai?
OccultSage#3875: No.
XOR#4986: ah, because you talked about training ai then send some carper ai checkpoints
StellaAthena#3530: Those are the ones he is finetuning
XOR#4986: ah i see
XOR#4986: i got a question btw. Can't we achieve AI safety by training the AI to only do what the "character" context says? Most of the time, aren't they "escaping" the role when saying bad or unsafe things?
alstroemeria313#1694: mm i need to work out a proper small hackable huggingface finetuning harness for language models that does like. multiple gpus, keeps frozen params in int8 or fp16, nice stuff like that. does someone perhaps already have one? >_>
XOR#4986: hackable?
alstroemeria313#1694: doesn't make a lot of assumptions about what i'm doing, like i can swap in different data sources, losses, monkeypatch the huggingface model so i'm training a lora, etc
XOR#4986: i mean what do you mean by hackable
alstroemeria313#1694: "I haven't written a basic pytorch training loop in forever"
alstroemeria313#1694: But since jax doesn't really have flash attention and stuff
XOR#4986: what do you mean by hackable?
alstroemeria313#1694: this
XOR#4986: monkeypatch while running the model?
alstroemeria313#1694: basically, i can try new research ideas quickly
XOR#4986: sorry, I kind of don't have a lot of context (I know I sound like chatgpt) but I mean like, do you want to write with it & do monkey patches, or are you more into developing algorithms to make a more efficient, better trained model
alstroemeria313#1694: i assume i should use hf accelerate for data parallel training, i am not 100% sure how to keep frozen params in int8 yet
alstroemeria313#1694: i assume i should not try to use bf16 yet?
OccultSage#3875: Yes.
OccultSage#3875: @alstroemeria313 https://github.com/coreweave/kubernetes-cloud/pull/128
alstroemeria313#1694: ah ty~
alstroemeria313#1694: yeah i have spent so long in jax land, using mostly self-written transformers (which were not LLMs anyway), that i need some examples of HF best practices ^^;;
OccultSage#3875: It's intended to be as simple as possible, as it's a reference example. You can hook in at various stages.
alstroemeria313#1694: thank you!
OccultSage#3875: It shows overriding the loss function and hooking in an evaluation.
Maximum Limelihood Estimator#8915: Actually, didn't you guys have to reimplement multiple dispatch from scratch in Python to get PyTorch to work nicely 🤔
uwu1#4864: one thing is that the default hf trainer cross entropy loss doesn't zero out loss for padding tokens
alstroemeria313#1694: ooh
alstroemeria313#1694: yeah i may not stuff the context window fully all the time
alstroemeria313#1694: thanks for the heads up :)
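For reference, the usual workaround when letting the HF model compute its own loss: causal LM heads use cross-entropy with ignore_index=-100, so setting the label at padded positions to -100 keeps padding out of the loss. A minimal sketch:
```python
import torch

def make_labels(input_ids, attention_mask):
    labels = input_ids.clone()
    labels[attention_mask == 0] = -100  # -100 positions are ignored by the loss
    return labels

# outputs = model(input_ids=input_ids, attention_mask=attention_mask,
#                 labels=make_labels(input_ids, attention_mask))
# loss = outputs.loss
```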
jrowe#5371: Thank you to whoever put off-topic back at the bottom, lol
alstroemeria313#1694: ooh i got fine-tuning w/ frozen weights in int8 working
jrowe#5371: Oh, nice
alstroemeria313#1694: by parameterizing the tensors to optimize as an fp32 delta (inited to zero) from the frozen weights
alstroemeria313#1694: for ln scales/biases.
alstroemeria313#1694: after this i need to do lora the same way
alstroemeria313#1694: this will also make adamw decay the weights toward the frozen weights, for the adamw-l1 variant i want to try as well
alstroemeria313#1694: idk if this is the optimal way to do it but
alstroemeria313#1694: it's simple enough to implement
Andrew#8551: could you say more about this, I'm working through something related
alstroemeria313#1694: ```python
class Delta(nn.Module):
    # Parametrize a frozen tensor as (frozen base + trainable fp32 delta).
    def forward(self, x):
        return (self.base + x).to(self.base)

    def right_inverse(self, x):
        # Called at registration time: stash the original weight as a buffer
        # and start the trainable delta at zero.
        self.register_buffer("base", x.detach().clone())
        return torch.zeros_like(x, dtype=torch.float32),

...

params_to_optimize = []
for name, module in list(model.named_modules()):
    for param_name, param in list(module.named_parameters(recurse=False)):
        if param.ndim == 1:  # grab layernorm scales/biases etc.
            print(name, param_name, tuple(param.shape), file=sys.stderr)
            torch.nn.utils.parametrize.register_parametrization(
                module, param_name, Delta(), unsafe=True
            )
            params_to_optimize.append(
                getattr(module.parametrizations, param_name).original0
            )

for p in params_to_optimize:
    p.requires_grad_()

opt = optim.Adam(params_to_optimize, lr=1e-3, betas=(0.9, 0.99))
``` kinda like this
alstroemeria313#1694: this grabs all of the learned layernorm scales, which are small, if the model has biases it will grab them too
alstroemeria313#1694: and optimize just them
alstroemeria313#1694: base weights can be fp16 or int8
alstroemeria313#1694: i did `model.requires_grad_(False)` prior to this
alstroemeria313#1694: i *think* it should only be accumulating gradients into the fp32 deltas at this point
alstroemeria313#1694: this is so awkward in pytorch aaaaaaa
Andrew#8551: that sounds right to me, I'm freezing weights similarly, at least
Andrew#8551: why is that?
OccultSage#3875: We actually do this in my finetuner.
alstroemeria313#1694: i have to worry about what grads get accumulated where
alstroemeria313#1694: what i *want* is a pure function that takes as its first argument the deltas, as its second argument the original weights, then the model inputs
alstroemeria313#1694: then differentiate the loss function wrt the first argument
alstroemeria313#1694: because i think in jax now or something
OccultSage#3875: You've been jaxx'ed?
alstroemeria313#1694: lol yep
OccultSage#3875: Rude.
alstroemeria313#1694: anyway i can do lora the same way psure
Andrew#8551: why not use jax?
OccultSage#3875: Because it's jaxx'ed.
alstroemeria313#1694: no flash attention, no int8
alstroemeria313#1694: no int4
OccultSage#3875: But more seriously, it's not as friendly to GPUs as TPUs.
Andrew#8551: Oh, like the workload is running somewhere other than gcp?
OccultSage#3875: Yes. The focus of JAX has been TPUs.
Andrew#8551: That makes sense.
Maximum Limelihood Estimator#8915: 👀
alstroemeria313#1694: hmmm my lora works but only with fp16 base weights, hf is doing something weird in int8 which breaks it
Maximum Limelihood Estimator#8915: https://edition.cnn.com/videos/tv/2023/05/02/amanpour-leahy-ai.cnn
Maximum Limelihood Estimator#8915: @Daj congrats on achieving ultimate levels of based
alstroemeria313#1694: ok i need to rethink how i do this for int8/int4
alstroemeria313#1694: ugh
Maximum Limelihood Estimator#8915: Just Use Julia (TM)
alstroemeria313#1694: i think i will make a generic "LoRA adapted layer"
alstroemeria313#1694: where i do the matmul by the lora weights separately then add it to the inner layer's result
alstroemeria313#1694: then i don't have to care if the inner layer is int8 quantized and doing all kinds of special things
alstroemeria313#1694: baking the lora weights into the normal weights can be a separate op that i can implement later
Andrew#8551: what supports int4 now? I studied some quantization a bit ago, but I didn't know there was anything implementing it
alstroemeria313#1694: https://github.com/qwopqwop200/GPTQ-for-LLaMa
OccultSage#3875: OK, this is pretty neat. We did prior finetunes on `pythia-2.8b-deduped` and `pythia-6.9b-deduped` -- as well as `stablelm`.
The following parameters changed for Pythia-4k:
* 4k context size
* +120b tokens vs Pythia
Hyperparameters changed:
* `bs=2` vs `bs=6` for Pythia-2k.
All other parameters are the same, including finetune dataset and ordering.
* We can see a measurable loss difference with 2.8b.
* What's especially interesting is the impact on 6.8b -- it's actually hovering near 12b territory. https://cdn.discordapp.com/attachments/729741769738158194/1103135614607573012/IMG_1714.png
alstroemeria313#1694: hmmmm ```python
class LoRALinear(nn.Module):
    def __init__(self, layer, rank=1):
        super().__init__()
        self.layer = layer
        self.rank = min(rank, layer.in_features, layer.out_features)

        device = layer.weight.device
        self.b = nn.Parameter(
            torch.zeros((layer.out_features, self.rank), device=device)
        )
        self.a = nn.Parameter(
            torch.randn((self.rank, layer.in_features), device=device)
        )

    def forward(self, x):
        # Run the (possibly quantized) frozen layer, then add the low-rank
        # LoRA update computed in the LoRA params' dtype.
        y = self.layer(x)
        return y + (x.to(self.a) @ self.a.T @ self.b.T).to(y)
```
alstroemeria313#1694: then i just did ```python
for name, module in list(model.named_modules()):
    for subname, submodule in list(module.named_children()):
        if isinstance(submodule, nn.Linear):
            setattr(module, subname, LoRALinear(submodule))
```
alstroemeria313#1694: this let me fine-tune fp32 lora weights w/ the frozen base weights in int8
alstroemeria313#1694: i think i need to change the init for `a` so it's rank dependent?
alstroemeria313#1694: at least the paper did this
OccultSage#3875: The non-finetuned 7b-4k outputs are really really good. This is the best performing non-finetuned model I've seen for storytelling:
```
RESPONSE: [ Prologue ]
"You're in the kitchen!?" The girl-cooks who were carrying out a number of dishes had stopped, startled and panting from their hard work. A boy-cook, too, who worked for one of the floor-level cooking shops was staring with open eyes like that girl. They'd carried her out along with many other dishware with sweaty faces, but the cook's wits remained calm as she sat there on the chair. Though still quite beautiful, those two girls of fifteen or sixteen years old had long braids hanging down; since most of them had already put on aprons and apron sashes when they started working today, each had tied up the cloth with thick ties. And it was impossible to tell whether he wore an apron or not—not if he used his arms to cover his head and body, so wide was the man-made bed. But even with all that gaudy costume, the apron hung over the shoulders tightly enough
```
```
RESPONSE: [ Prologue ]
"H-Hey!" Gon tore his eyes from Sana, whose long hair blew in the wind as he turned her uphill, looking over her body, the woman in pink silk like a kelp bloom's stem. _Catch it. Don't break apart._ He saw what looked like dark blossom flying from the sky, and then he heard: "Heads down! Huge-headed monsters, heads for blood—Giant trees are falling."
The forest roared with the sound of it—thundered with the wind. In those moments when Gon's head and legs were lifted by the force and dust—when a tree towered over them both, branches crashing—he'd seen a vast swirl of darkness out in the thick woods: two monstrous beasts at ground level, white-black-limbed things rising through its leafy shadow. There'd been no trees on earth before; only this forest. No path or roads, not even fields.
```
```
RESPONSE: Suddenly, the floor vibrated. The shock wave must have shaken the building, but surely it wouldn't have sent this many floors tumbling? I hadn't expected it to, but I still saw a man's body come flying through, bouncing on a ceiling beam from a roof opening. He seemed to be wearing a coat of gray metal, and there were others—no more than three or four—like him: two men carrying lanterns, one with a flute; someone who had fallen in with us at Gwanle Bridge, holding a lamp to his shoulder.
"What!" I shouted, clambering up to grab my own lamp out of the firelight. "I'm here! Hold tight to the railing. They're not taking the steps down for some reason?"
Shekking was shouting into my ear—a cry that started out as a warning. It wasn't fear so much as urgency—what she said came with words slurring with heat. And
```
```
RESPONSE: She spread her wings again and flew for the door. She had been unable to close them on all those hundreds of people. They were packed so tightly in, even with no seats in their way—the ones at the front of her cabin that opened as wide as the great gates had remained, too, like a living room without any walls but with shelves bolted or carved into the thick roof. With people pouring off through it, all out across the ship's deck and still coming clumping up on it from the stairs at this moment in an endless, tattered tumbling rush, he saw one who had come flying-caged high up with him on his own ship, one who had flown from behind him, leaping and clawing his way through, and there was one, two of them! He had seen how he had held his woman down hard by both hands against her head. Now they ran, running, cringing together through the dark between these lightship walls that came flapping
```
zphang#7252: also it's scaled down by some factor
zphang#7252: which iirc helped training
alstroemeria313#1694: yeah
alstroemeria313#1694: also the thing i did patches the attention projections *and* mlp weights
alstroemeria313#1694: whereas the paper only did the former
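For anyone following along, the rank-dependent init and the scale being discussed, roughly as the LoRA paper describes them: A gets a random init, B starts at zero (so the delta is zero at step 0), and the update is multiplied by alpha / r. Exact init details vary by implementation, so treat this as a sketch rather than the reference code:
```python
import torch
import torch.nn as nn

class ScaledLoRALinear(nn.Module):
    def __init__(self, layer, r=8, alpha=16):
        super().__init__()
        self.layer = layer
        self.scale = alpha / r  # the "scaled down by some factor" part
        # rank-dependent init for A is a choice here, not mandated by the paper
        self.a = nn.Parameter(torch.randn(r, layer.in_features) / r**0.5)
        self.b = nn.Parameter(torch.zeros(layer.out_features, r))

    def forward(self, x):
        y = self.layer(x)
        return y + self.scale * (x.to(self.a) @ self.a.T @ self.b.T).to(y)
```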
alstroemeria313#1694: wow
OccultSage#3875: @guac @Louis @Emad Thanks for getting the Pythia 4k checkpoints up. They're definitely not trash, as the above results show.
guac#4716: Hehe glad you pushed me to release them 🙏
Spy#9778: I just finished implementing this in JAX!
alstroemeria313#1694: :blobcutehappy:
Spy#9778: well, separately I have a lora transform and a 4bit-ify transform but they're compatible
alstroemeria313#1694: oooh 4bit
OccultSage#3875: It's really interesting that 7b-4k is performing out of the box nearly as well as 12b-2k loss-wise.
Spy#9778: inference only, not training since I can just train full precision lora weights
Spy#9778: I can now take training steps with llama 7B on my 24GB gpu which is cool
kd90138#9368: Paperspace gradient is offering h100 access
kd90138#9368: 3 month commitment 20% down payment necessary
kd90138#9368: My calc: around 1500 USD down payment necessary for 1 H100 at this time
Spy#9778: wow, I just tried the lora transform on a huggingface Flax model and it worked first try even though I only tested it with my haiku models
Spy#9778: neato
alstroemeria313#1694: nice!
alstroemeria313#1694: how are you doing int4 in jax?
Spy#9778: the int4 thing is
Spy#9778: not nice....
alstroemeria313#1694: ahh...
Spy#9778: I implemented the gptq method
alstroemeria313#1694: ohhh
Spy#9778: for the forward pass I used jax-triton to make a custom kernel
Spy#9778: for the backwards pass I'm fully unpacking the int4 matrix into float16 then doing the matmul, which makes it really slow rn
Spy#9778: I think I might need to handwrite a kernel instead of using triton
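For the curious, a sketch of what "unpack int4 to fp16 and matmul" can look like (written in PyTorch here to match the other snippets in this thread). The packing layout and the per-column scales/zero points are assumptions about a typical GPTQ-style format, not Spy's actual code:
```python
import torch

def int4_matmul_slow(x, packed, scales, zeros):
    # packed: uint8, two 4-bit weights per byte along the row dimension
    lo = (packed & 0x0F).to(torch.float16)
    hi = (packed >> 4).to(torch.float16)
    w = torch.stack([lo, hi], dim=1).reshape(-1, packed.shape[-1])
    w = (w - zeros) * scales          # dequantize with per-column scale/zero
    return x.to(torch.float16) @ w    # the slow fp16 matmul for the backward pass
```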
zphang#7252: is there a good int4 implementation that works well with training?
the last one I tried was linear in compute time with batch size
OccultSage#3875: I'm sus of int4 for training.
Spy#9778: I feel like with lora there's not much need for full int4 training
Spy#9778: like @alstroemeria313 said above
Spy#9778: just int4 your frozen weights and use float16 lora params
zphang#7252: not full int4 training, just LoRA training against an int4 frozen model
Spy#9778: oh I see
Spy#9778: sounds like there's about to be one for torch!
Spy#9778: I'm working on that for JAX rn
zphang#7252: is there a link?
Spy#9778: I probably won't publish my int4 thing until I get this backwards pass thing figured out
Spy#9778: oh wait alstroemeria's thing is int8
alstroemeria313#1694: it's int8 rn because i haven't gotten int4 inference working in one of my scripts yet *at all*
alstroemeria313#1694: due to having to copypaste a ton of stuff in from that repo
zphang#7252: fwiw this was when I was looking into it: https://github.com/qwopqwop200/GPTQ-for-LLaMa/issues/66
alstroemeria313#1694: and llama 65b fitting into my vram in int8
Spy#9778: the gptq-llama one?
alstroemeria313#1694: yeah
Spy#9778: wait shouldn't compute time go up linearly with batch size
Spy#9778: if you're running twice as many examples
Spy#9778: seems like taking twice as long is pretty reasonable
alstroemeria313#1694: ...yes. it's only a problem if dequantization time goes up linearly or some such
alstroemeria313#1694: i think
alstroemeria313#1694: how would that even happen?
Spy#9778: possibly if the kernel was being called multiple times for each batch/layer
Spy#9778: but I don't think that happens
zphang#7252: ideally if you're running things with parallelism, it shouldn't go up linearly
Spy#9778: I was doing this too
Spy#9778: well
Spy#9778: translating to jax
alstroemeria313#1694: but if i'm going to fine-tune the bigger llamas locally i'm going to want int4
Spy#9778: but it was a miserable experience
zphang#7252: like at that rate I might as well be doing bs=1 with gradient accumulation
Spy#9778: I also had to turn up the damping to quantize LLAMA-13B
zphang#7252: it ran inference (on bs=1) just fine tho
Spy#9778: even keeping the weights on GPU is a no go for me
Spy#9778: even if they're frozen
alstroemeria313#1694: ahh
Spy#9778: if I'm keeping llama 7b at half precision that's 14GB of my 24 gone for no reason
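(Back-of-the-envelope numbers behind that: weight memory is roughly parameter count times bytes per parameter, before activations, KV cache, or optimizer state.)
```python
params = 7e9  # approximate LLaMA-7B parameter count
for name, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{name}: ~{params * bytes_per_param / 2**30:.1f} GiB of weights")
# fp16: ~13.0 GiB, int8: ~6.5 GiB, int4: ~3.3 GiB
```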
zphang#7252: oh yea this was on a GPU with headroom
alstroemeria313#1694: i should try to upgrade to rtx 6000 ada
alstroemeria313#1694: if i am going to be doing so much local stuff with bigger models
Spy#9778: that price though T.T
alstroemeria313#1694: wow i can do gradient steps on the lora with *llama 30B* locally
alstroemeria313#1694: with the main weights in int8
alstroemeria313#1694: i really do need to make int4 work...
Spy#9778: niiiiice
Spy#9778: the thing stopping me from trying is that I only have 64GB of RAM so I haven't even converted the 30B checkpoint to jax yet -.-
Spy#9778: the coolest thing about implementing lora for jax is
Spy#9778: you get to call your library lorax
alstroemeria313#1694: ahaha
Spy#9778: turns out I was wrong about this, because the Flax GPT2 calls transpose on its weight matrices (why???) and when my transform hits an op it doesn't know how to handle, it just directly computes W + BA
Spy#9778: I added transpose support though
alstroemeria313#1694: oh :/
alstroemeria313#1694: i am doing a little fine-tuning and it... seems to work but i am using an absolutely tiny amount of data and it just overfits rn
Spy#9778: time for huge weight decay maybe?
Spy#9778: I was thinking that one benefit of lora is that it makes weight decay drag your weights back towards the pretrained checkpoint instead of collapsing them to zero
Spy#9778: like weight decay while finetuning pretrained models always seemed kinda sus
Spy#9778: but it wasn't really practical to actually compute the diff to the original weights
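(A self-contained sketch of that point: if only the LoRA factors are handed to AdamW, its weight decay shrinks A and B, hence the delta B @ A, so the effective weight W0 + B @ A is pulled back toward the pretrained W0 rather than toward zero. Sizes and hyperparameters here are arbitrary.)
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
base = nn.Linear(64, 64, bias=False)
base.weight.requires_grad_(False)            # pretrained weight W0, frozen

rank = 4
a = nn.Parameter(torch.randn(rank, 64) * 0.01)
b = nn.Parameter(torch.zeros(64, rank))      # delta B @ A starts at zero

def forward(x):
    return x @ base.weight.T + (x @ a.T) @ b.T

# weight decay only touches a and b, i.e. it shrinks the delta B @ A,
# pulling the effective weight W0 + B @ A back toward the pretrained W0
opt = torch.optim.AdamW([a, b], lr=1e-3, weight_decay=0.1)

x = torch.randn(8, 64)
forward(x).pow(2).mean().backward()
opt.step()
```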
alstroemeria313#1694: i used to do it for biggan but biggan was small
alstroemeria313#1694: ok let's take the loras off the ffns and put them only on self-attention projections
Spy#9778: I've been wondering about how important it was to lora various things
Spy#9778: I was also curious if people are doing it to the embedding matrix
alstroemeria313#1694: well i'm overfitting, and the paper did it on the self-attention only, so...
alstroemeria313#1694: ```python
def cross_entropy_loss(input_ids, mask, logits):
logits = torch.nn.functional.log_softmax(logits, dim=-1)[:, :-1]
labels_mask = torch.nn.functional.one_hot(input_ids[:, 1:], logits.shape[-1])
nlls = -torch.sum(logits * labels_mask, dim=-1)
return torch.sum(nlls * mask[:, 1:]) / torch.sum(mask[:, 1:])
```
alstroemeria313#1694: hmm is this really right
alstroemeria313#1694: i don't think it is, actually, the mask is off by one token
alstroemeria313#1694: ```python
def cross_entropy_loss(input_ids, attention_mask, logits):
logits = torch.nn.functional.log_softmax(logits, dim=-1)[:, :-1]
labels_mask = torch.nn.functional.one_hot(input_ids[:, 1:], logits.shape[-1])
nlls = -torch.sum(logits * labels_mask, dim=-1)
return torch.sum(nlls * attention_mask[:, :-1]) / torch.sum(attention_mask[:, :-1])
```
Spy#9778: dang nice catch that'd be super easy to miss forever
alstroemeria313#1694: i think the fine-tune works better with attention only
chilli#5665: Why do you need a transform for this?
alstroemeria313#1694: probably bc it's so small data and was overfitting
alstroemeria313#1694: it was like 27 lines of text
Spy#9778: because I don't want to have to modify my model code
Spy#9778: for example the same thing works on my haiku llama implementation, my haiku GPT2 implementation, and the huggingface flax GPT2
chilli#5665: Yeah, but can’t you just patch your layers?
Spy#9778: well
Spy#9778: I use haiku for my stuff
Spy#9778: and I could make a lora linear layer
Spy#9778: but
Spy#9778: other people use flax
Spy#9778: and also sometimes I have 1d convs
Spy#9778: etc etc
Spy#9778: https://github.com/davisyoshida/lorax
Spy#9778: here I just made the repo public you can take a look at the example to see the degree of laziness I'm going for here
alstroemeria313#1694: @Spy huh you're using rank 64? ^^;;
alstroemeria313#1694: i'm using rank 1
Spy#9778: now that's some _Lo_RA
alstroemeria313#1694: yep
Spy#9778: I think for llama I have it set to 32
alstroemeria313#1694: you probably have more data
alstroemeria313#1694: actually let me just increase the rank
Spy#9778: in the paper they found that going up to what 16 was helpful?
alstroemeria313#1694: i want to try stuff like lora finetuning with the output layer replaced/reinited too
alstroemeria313#1694: like to try fine-tuning llama into a reward model for rlhf
alstroemeria313#1694: ...then doing another lora for cheap rlhf
Spy#9778: interesting thing with jax is
Spy#9778: you could vmap the lora param argument
Spy#9778: and run multiple separately finetuned models together really easily
alstroemeria313#1694: yep
alstroemeria313#1694: ...i can do that in pytorch too, i just have to write my layer specially for it :p
Spy#9778: reasons why I hope JAX keeps up with torch
alstroemeria313#1694: ikr
Spy#9778: everything being pure functions is so nice
chilli#5665: Why do you need to specially write your layer?
Spy#9778: like I just wrote this thing for haiku and it accidentally worked for flax
alstroemeria313#1694: well i mean i could functorch vmap but
alstroemeria313#1694: hm
alstroemeria313#1694: actually let me think about it, can you even do it with vmap if your different loras are different ranks?
alstroemeria313#1694: you can right?
alstroemeria313#1694: you just have to keep track of which batch item in the thing you vmap over goes with which batch item of the activations
chilli#5665: Hmm… no I don’t think so
chilli#5665: Well, you can I think
chilli#5665: You just need to vmap after the ab computation
alstroemeria313#1694: the first thing is to compute B A x by B (A x) so you don't form B A
alstroemeria313#1694: bc that's huge
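(Concretely, with illustrative sizes: forming B @ A materializes a full d x d matrix, while B @ (A @ x) only ever creates rank-sized intermediates, and the two results agree up to float noise.)
```python
import torch

d, r, batch = 1024, 8, 16
A, B, x = torch.randn(r, d), torch.randn(d, r), torch.randn(batch, d)

slow = x @ (B @ A).T      # materializes a d x d (~1M element) matrix
fast = (x @ A.T) @ B.T    # only (batch, r) and (batch, d) intermediates
print((slow - fast).abs().max())   # pure floating-point noise
```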
chilli#5665: Like, if it’s different ranks you can’t batch the actual low rank computation itself
alstroemeria313#1694: ...why not
chilli#5665: Because you can’t batch a 2xN @ NxN computation with a 3xN @ NxN computation trivially
alstroemeria313#1694: if you have an Nx2 @ 2xN @ N
alstroemeria313#1694: and an Nx3 @ 3xN @ N
BouncyMoon#0420: HELP! I'm holding a Q&A on AI alignment at my workplace (with a lot of silicon valley engineers) tomorrow after 5PM PT. DM me for details! I need an eloquent expert!
alstroemeria313#1694: it's late, can you batch these two
chilli#5665: Haha, I was also thinking it’s too late to think about this
Spy#9778: wdym
Spy#9778: or is this about torch vmap
alstroemeria313#1694: jax vmap
Spy#9778: jax vmap works for pytrees with whatever ranks
alstroemeria313#1694: say you're running an inference server and people hand you their loras and you want to batch them
chilli#5665: Torch vmap and jax vmap more or less have the same functionality
Spy#9778: ohhhh you mean like
Spy#9778: different rank constraint?
alstroemeria313#1694: and they give you ones that are different ranks from each other and you want to put them in the same batch
alstroemeria313#1694: yes
Spy#9778: I was thinking rank like tensor rank lol
chilli#5665: I think you … can?
alstroemeria313#1694: oh lol
Spy#9778: yeah that wouldn't be easy
chilli#5665: But not with vmap
Spy#9778: well it _could_ be easy if I added masking to my thing 🤔
alstroemeria313#1694: i think you can via linear algebra that i need more sleep to think about
Spy#9778: but it didn't occur to me
chilli#5665: Yeah, you wouldn’t batch it with vmap
alstroemeria313#1694: the adjustment a rank n lora makes to the output is a sum of the adjustments of n rank one loras
alstroemeria313#1694: given the same input
chilli#5665: Also sorry, is it b@a@w
alstroemeria313#1694: yes
alstroemeria313#1694: b@a@x
alstroemeria313#1694: x is input activations
alstroemeria313#1694: and you add that to the normal linear layer's output
chilli#5665: Makes sense
chilli#5665: Actually, no, I don’t think it’s so easy
alstroemeria313#1694: oh
chilli#5665: This is true, but a rank n Lora is not equivalent to 2 rank n/2 loras
chilli#5665: I think?
alstroemeria313#1694: so i was thinking of replicating the input activations the right number of times for each lora
chilli#5665: Actually, it’s too late for this lol
alstroemeria313#1694: getting all of the adjustments
alstroemeria313#1694: and summing according to how i replicated the inputs
alstroemeria313#1694: yeahhhhh... :/
Spy#9778: I don't thiiiiink there's a way to accomplish it without padding
chilli#5665: So if I step through each individual operation
Spy#9778: but if you're willing to pad, padding the smaller lora out with zeros will do it
chilli#5665: The 2xN @ N is easy to batch
chilli#5665: (With say, a 3xN @ N)
alstroemeria313#1694: yep
chilli#5665: But then, I don’t think there’s an easy way to batch the resulting nx2 and Nx3 matrices into a matmul with the 5xN value
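(A minimal sketch of the zero-padding idea: padding B with zero columns and A with zero rows leaves B @ A unchanged, so requests with different ranks can be padded to the batch's max rank and run through one batched contraction. Shapes and ranks here are arbitrary.)
```python
import torch

def pad_lora(a, b, max_rank):
    """a: (r, d_in), b: (d_out, r) -> zero-padded copies at rank max_rank."""
    r = a.shape[0]
    a_pad = torch.cat([a, a.new_zeros(max_rank - r, a.shape[1])], dim=0)
    b_pad = torch.cat([b, b.new_zeros(b.shape[0], max_rank - r)], dim=1)
    return a_pad, b_pad

d_in, d_out = 64, 32
loras = [(torch.randn(2, d_in), torch.randn(d_out, 2)),   # rank-2 request
         (torch.randn(3, d_in), torch.randn(d_out, 3))]   # rank-3 request
max_rank = max(a.shape[0] for a, _ in loras)

padded = [pad_lora(a, b, max_rank) for a, b in loras]
A = torch.stack([p[0] for p in padded])   # (batch, max_rank, d_in)
B = torch.stack([p[1] for p in padded])   # (batch, d_out, max_rank)
x = torch.randn(len(loras), d_in)         # one input row per request

delta = torch.einsum('bor,bri,bi->bo', B, A, x)   # batched B @ (A @ x)
ref = torch.stack([b @ (a @ xi) for (a, b), xi in zip(loras, x)])
print(torch.allclose(delta, ref, atol=1e-5))
```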
Spy#9778: can't wait until I start my job and have to write torch all the time 💀
Spy#9778: wait I forgot lora was also edward hu
Spy#9778: can't believe that guy put out two bangers so close together
zphang#7252: which was the other banger
Spy#9778: muP
zphang#7252: oh
zphang#7252: huh those seem pretty different
Spy#9778: idk what the split between him and greg yang was
Spy#9778: on the muP stuff
𓅬 gabriel_syme 𓅬#3220: Anyone thinks it would be cool to provide some compute to this effort?
https://github.com/Luodian/Otter
𓅬 gabriel_syme 𓅬#3220: or maybe finetune a model when the multimodal instruction dataset is out
𓅬 gabriel_syme 𓅬#3220: this is also a good example of data quality and its impact, this is a rather new domain as well (at least wrt limits of performance) so it's interesting to see the effect
Lucas Nestler (ClashLuke)#6301: use logsumexp(logits, dim=-1) - logits[input_ids[:, 1:]]
alstroemeria313#1694: ah to avoid the one_hot?
Lucas Nestler (ClashLuke)#6301: and the logsoftmax
Lucas Nestler (ClashLuke)#6301: this is my impl with grad: <https://github.com/HomebrewNLP/Olmax/blob/main/src/model/loss.py>
about as stable as it gets
alstroemeria313#1694: well the logsoftmax is just subtracting the logsumexp
Lucas Nestler (ClashLuke)#6301: fair. yeah, in that case purely efficiency
Lucas Nestler (ClashLuke)#6301: no need to subtract things you don't need, or multiply if you can gather
alstroemeria313#1694: this doesn't work though, something's wrong with the indexing/tensor shapes
alstroemeria313#1694: oh do you mean to keepdims
Lucas Nestler (ClashLuke)#6301: you also have to use take_along_axis rather than indexing
alstroemeria313#1694: oh
alstroemeria313#1694: what's that in torch
Lucas Nestler (ClashLuke)#6301: because for indexing you'd have to add aranges for the first n dimensions
alstroemeria313#1694: is it gather
Lucas Nestler (ClashLuke)#6301: oh, you're in torch? so much easier there. just torch.gather
alstroemeria313#1694: "fuck me, i have to use gather"
Lucas Nestler (ClashLuke)#6301: gather is so nice
Lucas Nestler (ClashLuke)#6301: torch gather
alstroemeria313#1694: "and it's nearly midnight"
Lucas Nestler (ClashLuke)#6301: torch.gather(logits, dim=-1, index=input_ids)
Lucas Nestler (ClashLuke)#6301: worst case, torch.gather(logits, dim=-1, index=input_ids.unsqueeze(-1))
Spy#9778: surely torch's *logsoftmax is fused
Lucas Nestler (ClashLuke)#6301: Seriously, have a look at the jax gather: <https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.gather.html#jax.lax.gather>
> The semantics of gather are complicated, and its API might change in the future. For most use cases, you should prefer Numpy-style indexing (e.g., x[:, (1,4,7), …]), rather than using gather directly.
I did not expect to see that in the docs
chilli#5665: btw, I find that chatgpt is quite good at this kind of stuff lol
chilli#5665: (i.e. using gather)
Spy#9778: I was asking a question about gather on the jax github the other day
Spy#9778: and their advice was basically
Spy#9778: gather is some sort of lovecraftian entity which drains the sanity of those who look upon it
Lucas Nestler (ClashLuke)#6301: can confirm
Spy#9778: https://github.com/google/jax/discussions/15696
chilli#5665: btw, I think first class dims has a really nice way of expressing gather type operations
Spy#9778: I ended up giving up on handling general gathers and only handling it for embedding lookups specifically
chilli#5665: https://github.com/facebookresearch/torchdim#indexing
chilli#5665: also see https://twitter.com/cHHillee/status/1541536631819075584
alstroemeria313#1694: ```python
def cross_entropy_loss(input_ids, attention_mask, logits):
nlls = (
torch.logsumexp(logits, dim=-1)[:, :-1]
- torch.gather(logits, -1, input_ids[:, 1:, None])[:, :, 0]
)
return torch.sum(nlls * attention_mask[:, :-1]) / torch.sum(attention_mask[:, :-1])
```
alstroemeria313#1694: how's this?
Lucas Nestler (ClashLuke)#6301: looks good, but we could make it more cursed by adding einsum :)
alstroemeria313#1694: lol
chilli#5665: first-class dims kinda unironically gives you this :^)
Harry Saini#6637: @triggerhappygandi and @TastyBucketOfRice irl https://cdn.discordapp.com/attachments/729741769738158194/1103244258992984114/PXL_20230502_180654728.jpg,https://cdn.discordapp.com/attachments/729741769738158194/1103244259303366667/PXL_20230502_180651709.MP.jpg,https://cdn.discordapp.com/attachments/729741769738158194/1103244259634720829/PXL_20230502_180715400.jpg
TastyBucketOfRice#8796: Eleuther@India meetup 🙂
triggerhappygandi#0001: height mogged again :whyy:
hazardous1222#8826: RWKV-CUDA-CPP embedded into godot. has windows and linux builds. This is using a 3GB model: quantized Raven 3B, but you can package with any rwkv model.
Game files: https://github.com/harrisonvanderbyl/RWKV-Godot-Game
Godot Builds:
Linux: https://github.com/harrisonvanderbyl/godot-rwkv/suites/12631786432/artifacts/676441022
Windows: https://github.com/harrisonvanderbyl/godot-rwkv/suites/12631786425/artifacts/676468315
Models: https://huggingface.co/nenkoru/rwkv-cuda-cpp/tree/main https://cdn.discordapp.com/attachments/729741769738158194/1103255841588650004/Screencast_from_03-05-23_193127.webm
OccultSage#3875: Neat. Some first checkpoint outputs on 4k context Pythia 2.8b, trained to 500 steps so far.
```
RESPONSE: [ Prologue ]
Rudolf's face was contorted in anguish, and his hand shook uncontrollably. "We're wasting time!" he cried. Then something extraordinary happened: He smiled! The lips of his mouth didn't move so much as open like a wide crack of the mouth. Then his eyes closed, and he sagged as though sleep had overtaken him.
"Hurrah! Victory!"
"What?" said a woman's voice, one which I've already come to think of—and perhaps dread as—as Madame de la Rouguerel. "How did you get here? Why aren't we moving faster? Where is Freya? Let us go."
"Ah," I said. "You see, it seems the ship's crew are also victims of Rudolf's madness. His mind is affected by his illness as well. But we're making good progress, Madame."
```
```
RESPONSE: [ Knowledge: Weird ]
- Name: Zora, Koursturk's father
- Age: 53
- Male (Female if biometric gender): Female
- Race/Sex?: Human
- Occupation?: Elder of the Forest.
----
Drill Sergeant Aren Hattar
AKA: Aren The Grunt
Attributes: Rank-1 Named Soldier, Alas, Praise Me Later
Occupation?: Commander, Headmaster, Captain of the Guard
Fears: Sailing, Tanks
Enemies: Highly experienced Named Soldiers in general, General Rufus, Mages and their servants, Naga, Necromancers (if she's not a mage), Zodiacs, Skaarfangers, Insect-men, Beasts who possess some element of magic.
Weapons?: Pikes
Rarity: Uncommon enough to be found in any place large enough to
```
```
RESPONSE: The tavern was full again, so I ended up sharing a table with three very different creatures: one of the most powerful mages in the realm, an elf who could not only use spells like fireball, but also earth magic and even earth and lightning protection, but she did not have any useful tricks that would be useful to a swordswoman. And then there was something far more curious, as it turned out—the bartender!
"Well, there's no harm to talk to you," she told me, after we were both seated. "I'm Zala, but call me 'Handsome'. You don't know anything about swords, do you?"
"No," I lied truthfully. "Never had any use for them." It seemed this girl was actually quite curious about blades, and perhaps a bit lonely; her whole demeanor suggested someone who didn't get much company. Still, that left me with more questions than I had answers to…
What is that spell she used on me? Was it like that one where she kept trying over and
```
```
RESPONSE: [ Author: Charles de Lint; Title: The Red Death and Other Poems, Vol. 1; Tags: haikus, first person; Genre: poetry ]
I am the Red Death…
A dead man walked the night's path. I found him by his feet.
Death called my name! My mind is a tomb—can there be life within?
The stars are cruel. Why am I left alive? Is this to keep me from going?
I am the Red Death, and if they call you out, then that calls you in, dear friend.
No star comes when I walk the sky,
Or any of those I loved goes with me,
Unless it be one such as is the last, which brings my heart back into a world of pain.
As red as blood, and as dark and cold,
As my flesh burns in my memory's place.
O Goddess, tell me what lies beyond;
To a world
``` https://cdn.discordapp.com/attachments/729741769738158194/1103272099444969482/IMG_1720.png
Emad#9608: should we do 7b to 600b tokens?
Gifted Gummy Bee#3277: do more
Gifted Gummy Bee#3277: pour 4t
𓅬 gabriel_syme 𓅬#3220: Try a code-to-language one for me please 🙂 just a small one. Think we need smth like that for design
rallio#9917: train the model until one of two things happens... train loss lower than validation loss or train loss stops going down. No need for arbitrary predetermined stopping points
Emad#9608: need to set the LR
omglumpoff#3487: pythia trained to 1T would be interesting because it would be a somewhat apples-to-apples comparison against LLaMA. basically two variables at play then: the data (pile vs. llama training set) and the architecture (swiglu vs gelu, full rotary vs. partial rotary, normalization)
rallio#9917: this is a quote from the galactica paper where they did 4.5 epochs of 100 billion tokens with model sizes all the way up to 30billion and 120billion parameters (n params =~ n tokens)
rallio#9917: ```Repeated Tokens Considered Not Harmful
We train the models for 450 billion tokens, or approximately 4.25 epochs. We find that performance continues
to improve on validation set, in-domain and out-of-domain benchmarks with multiple repeats of the corpus.
First, from Figure 6, validation loss continues to fall with four epochs of training. The largest 120B model
only begins to overfit at the start of the fifth epoch. This is unexpected as existing research suggests repeated
tokens can be harmful on performance (Hernandez et al., 2022). We also find the 30B and 120B exhibit a
epoch-wise double descent effect of plateauing (or rising) validation loss followed by a decline. This effect
becomes stronger with each epoch, and is most visible above with the 120B model towards end of training.
To investigate further, we examine the per-source breakdown of validation loss to see if there is heterogeneity
in loss behaviour. We plot example curves in Figure 23 overleaf for the 30B model. We see no signs of loss
heterogeneity: loss falls for all sources. The 120B exhibits the same relative trend of declining validation loss
for all sources until the beginning of fifth epoch, where all sources spike (see Appendix).
The next question to answer is whether this trend extends to downstream performance and out-of-domain
generalization. For this we use a 57 task subset of BIG-bench subset, a general corpus with principally nonscientific tasks and prompt types not included in pre-training (Srivastava et al., 2022). We plot results in
Figure 8. We see no signs of overfitting suggesting that use of repeated tokens is improving downstream
performance as well as upstream performance.
```
omglumpoff#3487: gpt-neox training on redpajama would eliminate the data set variable even
rallio#9917: they used linear learning rate decay to 0.1 of starting learning rate around 1e-4
rallio#9917: the difference in learning rate they used for their 120 billion param model and their 6.7 billion param model was about a factor of 2, so not much difference
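(That schedule is simple to reproduce; a minimal sketch using torch's LambdaLR, decaying linearly from the peak LR down to 10% of it over training. The peak LR and step count are illustrative, and the single-parameter "model" is a placeholder.)
```python
import torch

peak_lr, total_steps = 1e-4, 10_000
param = torch.nn.Parameter(torch.zeros(1))        # placeholder for a real model
opt = torch.optim.AdamW([param], lr=peak_lr)

# multiplier goes from 1.0 at step 0 down to 0.1 at the final step
sched = torch.optim.lr_scheduler.LambdaLR(
    opt, lambda step: 1.0 - 0.9 * min(step, total_steps) / total_steps)

for _ in range(total_steps):
    opt.step()        # (gradients omitted in this sketch)
    sched.step()
```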
omglumpoff#3487: I would lean towards architecture -- it may just be that llama's tweaks are "better", since (from what I've heard) the pile ought to be superior to redpajama
rallio#9917: i speculate that a company like facebook probably has a datascience team of dozens of people whose job is data quality control, and although they sourced the data from the public sources i am sure they upgraded and filtered it substantially
omglumpoff#3487: we'll see once together's models are done training I suppose
rallio#9917: i anticipate their model will be better than any other existing open source, but worse than llama
rallio#9917: maybe only marginally so
rallio#9917: there are very big gains to be had from properly formatting and preparing training data; those gains can be offset somewhat by brute-forcing more tokens, but GIGO is the ultimate law of machine learning
StellaAthena#3530: Do you have any evidence for the claim that their architecture is better
omglumpoff#3487: none that's why it would be awesome to see the two trained on the same data
StellaAthena#3530: My hot take is that in the long run basically no architectural changes matter
omglumpoff#3487: all non-empirical stuff so far. for example it looks like nvidia copied the llama architecture for GPT-2B-001
rallio#9917: I thought their arch was almost identical to neox anyways
StellaAthena#3530: It is
synquid#7193: the architectural changes that matter are probably like... removing linear layer biases (more efficient)
StellaAthena#3530: Most major recent LLMs are basically the same thing
omglumpoff#3487: yeah I mean these are all slight variations amongst each other, I agree it's a rounding error on any real timeline
StellaAthena#3530: This is likely a real inefficiency, but “being SOTA for its size” was never Pythia’s goal so we didn’t sweat it too much
rallio#9917: Stella do you guys have any interest in seeing if there is any batch size where training becomes worse cause its too big
rallio#9917: after seeing the 4million bs not hurt the 70 and other small pythia model training
StellaAthena#3530: 10M was too big
rallio#9917: :thinkies:
rallio#9917: for the LR chosen or if the LR keeps scaling linearly with bs
StellaAthena#3530: The LR chosen
rallio#9917: that makes sense
rallio#9917: it was a pretty high number for those small models though
StellaAthena#3530: Yeah, we played around with different LRs a little but didn’t find anything interesting
StellaAthena#3530: (Tricked ourselves into thinking we did, because I didn’t adjust the x-axis for the changing batch size)
rallio#9917: oh right the number of steps
Emad#9608: we trained a 1.2tr token 1b and 3b on rp to see how it does, should probably release it. Did 4096 context window to make @OccultSage happy
Emad#9608: happy wizard
rallio#9917: I think there is a max performance at fixed size objective that could lead to some pretty wonky seeming choices that deviate a lot from canon
OccultSage#3875: 4096 context really appears to help, even when using small contexts.
rallio#9917: didn't you already find, wbrown, that just the extra 100B Pile tokens make the Pythia models seem noticeably better
OccultSage#3875: Yes. But I can't isolate it from the 4096 token training contexts.
Emad#9608: https://huggingface.co/CarperAI/pythia-6.9b-deduped-4k
rallio#9917: a key question when people say they are training 4k or any k ctx
rallio#9917: is that with packaged examples or true long context
OccultSage#3875: So the combination of +125b and 4k context likely helped a lot.
StellaAthena#3530: I mean, that’s well known
Emad#9608: the evals are about the same
OccultSage#3875: And that's where it makes a difference for my finetune - which is nearly exclusively long form content.
rallio#9917: where the underlying document is sourced from examples above 4k length
rallio#9917: I dont know enough about it but I intuitively dont like packaging
OccultSage#3875: Evals don't show the entire picture.
rallio#9917: I'd rather there be dynamic context step to step than packaged
omglumpoff#3487: I have some longer form benchmarks that indicate this does well on 4k-long tasks that the -2k versions obviously fail horribly at
rallio#9917: for short contexts they are binned and batched and the bs is bigger for those
Emad#9608: I know just noting that https://cdn.discordapp.com/attachments/729741769738158194/1103324661435088976/Screenshot_2023-05-03_at_15.18.00.png
OccultSage#3875: Does it do worse at 2k context tasks?
omglumpoff#3487: nope, exact same or within ~1%
OccultSage#3875: That's not the 'same' -- 1% can be huge.
Emad#9608: suppose we will find out with more training and tests
Emad#9608: think its within sd but we don't have tests for larger windows etc
omglumpoff#3487: yeah I'm working on adding Scrolls (qasper, quality, narrativeqa, contractnli, qmsum, summscreenfd, govreport) to `lm-evaluation-harness`. the veracity of those tasks is still a bit unclear, but we shall see
StellaAthena#3530: Yeah I’m pretty excited to see how this investigation turns out
rallio#9917: the nature of any benchmark that isnt objectively defined will be that it gets more and more difficult to evaluate as you get closer and closer to human expert level
rallio#9917: the amount of people that can distinguish the best in the world poetry or arguments from just very good top 10th percentile is quite small
StellaAthena#3530: Yeah but like
StellaAthena#3530: A lot of benchmarks are just meaningless
StellaAthena#3530: And don’t even come close to measuring what they claim to
rallio#9917: but if it can be objectively defined then you can probably make a defined algorithm to generate it like chess
StellaAthena#3530: The design of NLP benchmarks is *terrible*
rallio#9917: yes I agree
rallio#9917: I think a potential work around is if you can decompose a complex task into subtasks that are less complex and then synthesize the subtasks and compare to the LLM generation
StellaAthena#3530: NLP benchmarks are so bad that the things you are talking about aren’t even considerations
StellaAthena#3530: There’s plenty of them where the alleged answer is wrong in a large % of the data
rallio#9917: yes, but just cause so many of those are low effort doesn't mean better ones couldn't be made now
rallio#9917: like something simple even like requesting a list of N things from a LLM. To check if N things were generated is a relatively simple task with regex and some formatting
rallio#9917: then to go item by item and say is the kth item in n items conforming with the instruction
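(For example, a crude version of that first check might look like the following; the expected numbered-list format is an assumption.)
```python
import re

def count_numbered_items(text: str) -> int:
    """Count lines that look like '1. ...' or '2) ...' in a model response."""
    return len(re.findall(r"^\s*\d+[.)]\s+\S", text, flags=re.MULTILINE))

response = "1. apples\n2. pears\n3. plums\n"
print(count_numbered_items(response) == 3)   # did the model actually produce 3 items?
```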
StellaAthena#3530: 1. I’m not saying that they’re low effort
2. Yes we absolutely could make better ones
StellaAthena#3530: Not perfect, but better? Absolutely
rallio#9917: I think a lot of the ones I've looked at are low effort. I know the life of an mturk must suck in many cases because I've read their outputs in NLP training datasets where they refer to it
rallio#9917: its hard work to build a good dataset
Gifted Gummy Bee#3277: I honestly feel that the only way to build a good dataset is to ask prompt engineers or something similar
Gifted Gummy Bee#3277: The issue is that people are going in without a good understanding of how these models behave when being asked questions.
OccultSage#3875: Amen.
Gifted Gummy Bee#3277: Someone who has used gpt-4 vs 3.5 vs 3 vs vicuna for example, for > 300h would most likely be able to tell which is which, and base their NLP evaluation dataset off that
Gifted Gummy Bee#3277: The first step, which is to figure out what makes a good LLM, well, good, is not done properly
Gifted Gummy Bee#3277: We don’t have any concrete quantification of a “good” LLM
OccultSage#3875: :sagePog: No shit? :sageMegane: We basically have datasetters whose only job is to work on the dataset at Anlatan. 🙂
rallio#9917: well the paradigm currently is that the highly paid people are the ones that tell the workers what task to do and then those highly paid people dont review the work of the workers except in some very token way if at all
rallio#9917: I'm talking about most academic NLP datasets not product datasets
rallio#9917: I think anyone actually trying to please customers understands the importance
rallio#9917: But as these models get better you actually need the very best minds to be the ones doing the actual work of generating the examples and critiquing the outputs
OccultSage#3875: Yes. And in the dataset team, it's good to have complementary focuses. Like the lead datasetter focuses on the minutiae of normalization. I tend to take a higher level view.
OccultSage#3875: Side comment: Smart/fancy quotes are annoying. As are UK single quotes for speaking. 🙂
rallio#9917: I am again talking mostly about open source academic NLP datasets
rallio#9917: so not disparaging anyone that does this work I think it is important
OccultSage#3875: 'How are y'all doing today?' :reee:
rallio#9917: yeah. I most dislike the thick directional double quotes
MicPie#9427: oh, nice image, where is that from?
skymoo#6527: > Make sure you don't use overlapping sequences as this can lead to overfitting.
when training a transformer you need to avoid overlapping sequences?
i thought when you trained one sequence you basically trained every subsequence, which is similar to overlapping isnt it?
Maximum Limelihood Estimator#8915: Oh my God when I said Chris Lattner was just going to walk on stage and announce Modular was Julia but 0-indexed *it was supposed to be a joke*
https://www.modular.com/mojo
synquid#7193: it's over for juliacels (this language is barely in a usable state)
Maximum Limelihood Estimator#8915: I am going to become the joker
Maximum Limelihood Estimator#8915: TBH I am at the cusp of just saying "this but unironically"
People have been realizing Python is not a good language and reinventing worse versions of Julia inside Python for like a decade now. I'm increasingly convinced this will never stop
Like every conversation I have about this is people going "Hmm yeah Julia sounds better than Python, but nobody else would use it" and aaaaaaaaaaaaaaa
synquid#7193: see that's the issue
synquid#7193: inside python? time to go the other direction and superset python
synquid#7193: idk why exactly but letsgo
Maximum Limelihood Estimator#8915: I mean Julia+PythonCall.jl basically does that
synquid#7193: but without using cpython at all
artem9k#7593: using emojis has to be the laziest way to make a logo
Maximum Limelihood Estimator#8915: Then that's just PyPy and it flopped too
artem9k#7593: huggingface 🤗, mojo 🔥
Maximum Limelihood Estimator#8915: Although PyPy doesn't have most of the major Julia features I like
sekstini#0069: lol they even endorse it as the file extension https://cdn.discordapp.com/attachments/729741769738158194/1103338701267947572/image.png
sekstini#0069: 🤮
Maximum Limelihood Estimator#8915: OK well at least we don't have to worry about competing with them then
artem9k#7593: :aaaaaaaaaaaaaaa:
synquid#7193: mojo will have a lot of those features I think? they're planning decent metaprogramming etc
synquid#7193: I will say the low level code is pretty ugly
synquid#7193: so far
Maximum Limelihood Estimator#8915: Multiple dispatch?
synquid#7193: yup
dmayhem93#3202: it's not even new, you can do it with python https://cdn.discordapp.com/attachments/729741769738158194/1103339631203844197/image.png
synquid#7193: ```struct Array[T: AnyType]:
fn __getitem__(self, idx: Int) -> T: ...
fn __getitem__(self, idx: Range) -> ArraySlice: ...```
ephemical#2302: static or dynamic multiple dispatch?
synquid#7193: i think its dynamic
ephemical#2302: unicode file extensions are cool but it's unnecessary complexity
ephemical#2302: there is no need to do that
Maximum Limelihood Estimator#8915: Dynamic
OccultSage#3875: Pythia 7b-4k context literary finetune results at 500 steps checkpoint.
```
RESPONSE: [ Prologue ]
When you're walking on the green fields of the woods, the only thing that matters is finding your way back to the cottage. Every path seems like a wrong turn when it's not immediately familiar and safe, and there's always the possibility of danger lurking somewhere in a shadowed woodland clearing. Sometimes it doesn't even matter which direction you head for home as long as something comes out to greet you at the end of the day and keeps asking how things went. And sometimes it's just a nice place to walk along and watch the clouds play with the sun.
It was one such afternoon when she first noticed the man. He sat alone on one of the benches outside a small village pub by a stream running through a valley and his eyes were fixed on her. It wasn't unusual in these parts for young women to pass him by, but never so close, yet he looked up as soon as he spied her. After catching a few details—wavy brown hair,
```
```
RESPONSE: [ Prologue ]
We were to leave on Sunday the seventeenth. The train had been ordered for the following day's noon. I was supposed to go home with a fellow student, but having taken a fancy to this girl, I said goodbye to my fiancée and went to town early that morning, leaving only my kit bag in her place. Thenceforth I could not write to her even if I tried; she was so worried that I might have forgotten our engagement that I would only make matters worse by reminding her of it... Well, how do you like my luck?
"There's no need to be bashful," laughed the lady to whom I had betrothed myself, but just then came in the gentleman whom she would now marry. This is what happened:
To put the matter briefly, one of those odd-looking people who had caught my attention when I first met them at the station appeared before us quite suddenly... We took their names as best we could from their badges
```
```
RESPONSE: [ Author: Elaithier; Title: The Rite of the Seven Gods, Vol. 2; Tags: magic, gods, third person; Genre: light novel ]
The rites of many cultures were born when certain priests and priestesses in charge of their faiths held ceremonies to invoke a deity's favor or to ask for help. When such invocations were performed, an unseen force was called upon to grant the desired protection. These forces were also sometimes asked to answer prayers made on behalf of people who had no faith in them, although in these cases it is usually thought that the prayers themselves acted as charms. As cultures grew and advanced in knowledge, they became more sophisticated in their rituals. Many modern humans continue to engage in religious practices such as prayerful meditation that seem to borrow from ancient rites. Often we are led to think that our gods have become distant and impersonal, but there is no doubt that today there exists power in nature that is akin to us
```
```
RESPONSE: Suddenly, in a burst of emotion, she threw herself forward. The man caught her around the waist and dragged her into his embrace.
"Good night, little kitten," he whispered. "We'll see you again soon enough."
He took his leave of Zeddie on the porch of the house by the bridge. As he walked away, it sounded as though the man was singing. He hadn't known what to say to Zeddie either and so had left him alone with his thoughts.
The next morning a cool breeze swept across the countryside toward where they now sat on a bench by the river waiting for dawn. They were almost at the end of their journey. The village that lay only two days ahead would have its first visitors for the season before the summer hordes descended from the valleys of the west. Somewhere behind them already the green-eyed giant stood at the top of the hill, ready to greet them when they arrived, but for now they were the only
```
OccultSage#3875: Roughly 4% of an epoch.
Spy#9778: I was thinking about this some more today
Spy#9778: and there are some additional benefits
Spy#9778: For example if you have something in your loss function which depends directly on the parameters
Spy#9778: you can just do `lora(my_loss_function)` instead of `lora(my_model)` and now your loss function is calculating that parameter based loss using `W + BA` instead of just `W`
Spy#9778: And then also it's better if you're doing any sort of manual parameter usage
Spy#9778: for example you can implement embedding/output head weight tying by any of:
1. Manually construct a parameter and use gather for input, matmul for output
2. Use embedding layer for inp, extract weight and matmul by it
3. Construct a linear layer for output, extract its weight and gather from it
4. Construct both embedding and linear layer, but don't use the weight from the linear layer
Spy#9778: if you want to do layer based transforms basically only 4 will work, and even then moving the weight between them probably needs to be lora aware
Spy#9778: but if you just do `lora(my_model)` it will work for any of those methods
Spy#9778: not to mention also working for other layers and even other frameworks
chilli#5665: Yeah I agree I think there are some advantages
chilli#5665: I think in Pytorch I would do this with a tensor subclass, which I think is nicer
Spy#9778: That's sorta the equiv of writing a tracer based version in JAX
Spy#9778: Is it actually possible to make it transparent enough to work with all layers? (IDK enough torch)
chilli#5665: yeah
chilli#5665: I think tensor subclasses are more conceptually powerful than tracer-based transformations
chilli#5665: but, you need to be a bit more careful about how they compose
chilli#5665: yeah it's pretty easy I think
Spy#9778: more powerful than tracer-based ones? Well, I guess in torch you can be fully dynamic
chilli#5665: well, so the big limitation of tracer-based transformations
chilli#5665: is that they don't allow for the transformed objects to "leave" their scope
chilli#5665: and so, as a result, in your example, you need to manually carry around the "frozen" and "tunable" parameters separately
chilli#5665: a function transform is (more or less)
```
def transform(f, args):
args = wrap_with_subclass(args)
out = f(args)
    return unwrap(out)
```
Aprilswind#1977: hey everyone is it possible to fine tune gpt J with my large custom data ( around half M tokens ) for free ?
Aprilswind#1977: i dont have gpus tho
Aprilswind#1977: or can someone show me where to get started ?
Aprilswind#1977: or any other gpt models
alstroemeria313#1694: mm, is this going to work with whatever weird quantization methods? like main weights in int4 or int8, lora weights in fp32 so you get gradients in fp32
alstroemeria313#1694: i need to think about how i handle this in mine
alstroemeria313#1694: the "bake the lora into the main weights" method
alstroemeria313#1694: bc i probably need to at least apply the lora in fp16 and then requantize
Spy#9778: the baking is obnoxious since I think you need to
Spy#9778: yeah exactly this
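(A toy sketch of that bake-and-requantize step, using a simple per-row symmetric int8 scheme rather than the GPTQ format actually being discussed, and doing the merge in fp32 for simplicity; the point is just the dequantize -> add B @ A in higher precision -> requantize flow.)
```python
import torch

def quantize_int8(w):                                    # toy per-row symmetric quantization
    scale = w.abs().amax(dim=1, keepdim=True) / 127.0
    return torch.round(w / scale).to(torch.int8), scale

def dequantize_int8(q, scale):
    return q.float() * scale

def merge_lora(q, scale, a, b, lora_scale=1.0):
    w = dequantize_int8(q, scale)                        # back to full precision
    w = w + lora_scale * (b @ a)                         # bake in the LoRA delta
    return quantize_int8(w)                              # requantize for inference

w0 = torch.randn(256, 256)
q, scale = quantize_int8(w0)
a, b = torch.randn(8, 256) * 0.01, torch.randn(256, 8) * 0.01   # full-precision LoRA factors
q_merged, scale_merged = merge_lora(q, scale, a, b)
```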
chilli#5665: yeah I think you can
alstroemeria313#1694: i still need to figure out how to use gptq int4 in pytorch but like... it's morning and i'm tired
Spy#9778: how is a tensor subclass going to control what shape the moments in adam are and so on though?
zphang#7252: isn't baking in lora weights only for inference?
alstroemeria313#1694: yes
Spy#9778: the reason I carry them around separately is to tell the optimizer what to do
alstroemeria313#1694: yep, also they can be fp32 and get nice fp32 gradients
Aprilswind#1977: anyone ? 🥺
synquid#7193: nothing's free, renting GPUs is pretty cheap though
Aprilswind#1977: oh really ?
Aprilswind#1977: i want to train my gpt on my college text book
Spy#9778: not just that but also prevent the optimizer from creating a full MxN moment instead of an MxK and KxN one
Aprilswind#1977: like it has around 1000 pages what would be the estimated cost to train 👀
synquid#7193: have you considered embedding it and searching?
synquid#7193: btw this is not the place for basic questions, there are better communities
Aprilswind#1977: sure thank you ill try that
alstroemeria313#1694: yep
alstroemeria313#1694: random normal * 0 is an interesting init
alstroemeria313#1694: i tried factoring the scale per rank out into a vector of length K, initing it to 0, and initing both A and B to random normal, but it wasn't as good
alstroemeria313#1694: optimizer wise
alstroemeria313#1694: i am also thinking about like... how i want to save and load the lora fine-tunes/what a sane format for them is
alstroemeria313#1694: (this is so easy to decide in jax but in pytorch there is less of one clear correct way to do it)
alstroemeria313#1694: i should probably look into how people distribute stable diffusion loras rn
alstroemeria313#1694: i could, i guess, take the model state dict and scrub out everything that isn't a lora param (keep only things ending in .a or .b)
alstroemeria313#1694: then safetensors it
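(That filter-and-save approach is only a few lines with safetensors; a sketch assuming the LoRA parameter names end in `.a` / `.b` as described above.)
```python
from safetensors.torch import save_file, load_file

def save_lora(model, path):
    lora_sd = {k: v.detach().cpu().contiguous()
               for k, v in model.state_dict().items()
               if k.endswith((".a", ".b"))}
    save_file(lora_sd, path)

def load_lora(model, path):
    # strict=False: only the lora tensors are present in the file
    model.load_state_dict(load_file(path), strict=False)
```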
uwu1#4864: peft supports this
alstroemeria313#1694: link? :)
uwu1#4864: https://github.com/lvwerra/trl/blob/main/examples/sentiment/scripts/gpt2-sentiment_peft.py
alstroemeria313#1694: ooh
alstroemeria313#1694: hmmmm
alstroemeria313#1694: no int4 yet ofc
alstroemeria313#1694: hmmm trying dropout on the lora delta_wx (the delta to the activations)
alstroemeria313#1694: it might help w/ small data
alstroemeria313#1694: hmmm what about dropout on the weights
alstroemeria313#1694: idk
chilli#5665: @Spy @alstroemeria313 btw, just quickly hacked something up for how I would do lora with parametrizations and tensor subclasses: https://gist.github.com/Chillee/a8d2070b1b7b3f97d8c87bac3c366f8e
alstroemeria313#1694: ooh
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/1103410114460926042/image.png
alstroemeria313#1694: i tried a parameterization version first but it broke with huggingface int8
chilli#5665: this is basically the core part of the API (obviously could have nicer wrappers)
alstroemeria313#1694: because it stores some state in some properties on the .weight
chilli#5665: how is HF int8 implemented 🤔
chilli#5665: ok, they should also use parametrizations lol
alstroemeria313#1694: lol but they don't ;_;
chilli#5665: because parametrizations can compose together
chilli#5665: and so you can see that the gradients are only computed for the lora components https://cdn.discordapp.com/attachments/729741769738158194/1103410595077824562/image.png
alstroemeria313#1694: *nods*
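(Not the gist itself, but roughly the shape of a parametrization-based version using `torch.nn.utils.parametrize`; the rank, scale, and init below are arbitrary.)
```python
import torch
import torch.nn as nn
from torch.nn.utils import parametrize

class LoRAParametrization(nn.Module):
    def __init__(self, out_features, in_features, rank=8, scale=1.0):
        super().__init__()
        self.a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.b = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = scale

    def forward(self, weight):                 # applied whenever .weight is accessed
        return weight + self.scale * (self.b @ self.a)

linear = nn.Linear(128, 128)
linear.weight.requires_grad_(False)            # freeze the base weight
parametrize.register_parametrization(
    linear, "weight", LoRAParametrization(128, 128, rank=4))

out = linear(torch.randn(2, 128))              # transparently uses W0 + scale * B @ A
```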
chilli#5665: maybe I should go bug the HF folks about this
chilli#5665: btw, do you happen to have a link to this?
Spy#9778: nice
zphang#7252: for LoRA or 8bit?
chilli#5665: 8bit
alstroemeria313#1694: no but try it, do `load_in_8bit=True` on your `AutoModelForCausalLM.from_pretrained()`
chilli#5665: btw, here's a gist for a slightly more permanent link: https://gist.github.com/Chillee/a8d2070b1b7b3f97d8c87bac3c366f8e
alstroemeria313#1694: ty!
zphang#7252: https://github.com/huggingface/transformers/blob/main/src/transformers/utils/bitsandbytes.py#L98
if this is what you're looking for
zphang#7252: LoRA+8bit used to be simpler but I think they made things more complicated once they started adding functionality for stacking LoRAs
https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py#L666
chilli#5665: hrmmm
kevin-ai#4032: It's my own figure. made by myself.
kd90138#9368: https://github.com/replit/ReplitLM
kd90138#9368: actual source drop
𓅬 gabriel_syme 𓅬#3220: https://www.fast.ai/posts/2023-05-03-mojo-launch.html
𓅬 gabriel_syme 𓅬#3220: Python but it's performant. Kind of exciting
sekstini#0069: oh, I didn't realize before now, but Mojo apparently compiles down to a binary. very cool
jrowe#5371: Luajit for python?
tpapp157#3643: Eh. It's all just marketing right now. A new "Python-killer" language gets announced like every 6 months. But who knows, maybe this one will be the unicorn. I'm not holding my breath, though.
𓅬 gabriel_syme 𓅬#3220: The binary is nice
jrowe#5371: Closed source
jrowe#5371: Good for them I guess
𓅬 gabriel_syme 𓅬#3220: I would be shocked if it stayed like that
𓅬 gabriel_syme 𓅬#3220: Can't ever keep up right, unless you like narrow focus in a laser beam
sekstini#0069: pretty sure they said they were going to open source it
tpapp157#3643: Yeah, the compile to binary executable is really nice. Assuming it magically works as advertised. Right now all we have are a couple of blog posts talking about future development plans.
𓅬 gabriel_syme 𓅬#3220: Yeah and a playground. It's stated in there that things are missing
sekstini#0069: In particular I think this is big if they can make type checking work properly and have things actually fail at compile time. (if you've ever written a jitclass in numba you know what I'm talking about)
jrowe#5371: Hopefully it pans out, looks pretty slick
tpapp157#3643: I remember a couple years ago when people were super excited that Julia was going to replace Python and was being touted as so much better in every conceivable way. Today, outside of a very small group of enthusiasts, Julia has largely fallen flat and seems to be on a fairly direct trajectory to slowly die off. Who knows, though, maybe Julia can turn it around.
chilli#5665: I don’t really see how this is big lol
chilli#5665: Like, sure, being able to compile stuff like numba to a binary is nice
chilli#5665: But the central question is whether it can actually be a replacement for python
tpapp157#3643: Yeah. And when your first impression is taking 3 lines of Python code and turning it into 15 lines of Mojo code, that's going to be a really really tough sell.
Spy#9778: I'm never sure how much python performance is actually even a bottleneck
Spy#9778: like well written ML code is not remotely bottlenecked by python right
Spy#9778: since everything is happening asynchronously
𓅬 gabriel_syme 𓅬#3220: The whole point is writing that code
𓅬 gabriel_syme 𓅬#3220: I can't, and the vast majority of people can't, code a flash attention or smth. That said, as those mature and become puzzle pieces you can call, it might not be a huge sell
Maximum Limelihood Estimator#8915: it's incompatible with basically every package and forces you to do manual memory management and use a borrow checker (borrow checkers are great options to have, but plz just let me use GC for non-performance-oriented code)
Maximum Limelihood Estimator#8915: so no
Spy#9778: oh is it supposed to replace C++ and python at the same time?
Spy#9778: like you write the low level stuff in it as well?
LDJ#2946: True but this is from the same guy who created LLVM and Swift, and led the world's biggest company's (Apple) developer tools division for years,
As well as having Jeremy Howard as an advisor, this seems very promising
LDJ#2946: Yes, if you watch just the first 10 minutes of the keynote you’ll get a pretty good summary of that
Spy#9778: ah being able to use existing python libraries is a huge deal
Spy#9778: the other supposed python killers didn't have that and python's incumbent advantage was just way too big
Maximum Limelihood Estimator#8915: My conversation with him in the Discord server makes it seem very unpromising
kd90138#9368: Which?
Maximum Limelihood Estimator#8915: The Modular Discord
LDJ#2946: When you say “conversation” do you mean a message he sent you asking you to not talking about Julia in the discord?
LDJ#2946: Or is there more
Maximum Limelihood Estimator#8915: Much more, several threads
LDJ#2946: :PauseChamp:
Maximum Limelihood Estimator#8915: But he seemed to not really be able to answer basic questions about why the language needed to be written from scratch instead of building on Taichi or Julia+PythonCall.jl or something similar
Maximum Limelihood Estimator#8915: In particular not being able to point to e.g. "We don't like Julia's type system," or "we don't like multiple dispatch" or "we don't think JIT is a good idea" or some kind of perceived fundamental flaw in the language; instead it felt like I was talking to Marco Rubio. Whenever I tried to get real details out of him he just went over the same soundbite a bunch of times that went like "We have nothing but love for Julia, but Mojo was created to solve a fundamentally different problem in machine learning. Please see our website for more details" (link to website that says "we want Python, but fast")
Spy#9778: I'm pretty impressed with the degree of integration with python they showed
Maximum Limelihood Estimator#8915: That's just standard
Spy#9778: that goes beyond just "can call python from another language"
Maximum Limelihood Estimator#8915: How so?
Spy#9778: like
Spy#9778: well actually I guess I don't know what calling python from julia is like
Spy#9778: but most of the time when I hear that I just think of one language allowing you to expose bindings to another language
Spy#9778: but it being pretty clunky
kd90138#9368: What's wrong with multiple dispatch
genetyx8#7543: Calling python from julia feels like magic. Literally seamless
genetyx8#7543: At any rate julia will always have DiffEq, and I don't expect any serious competitors to that any time soon
genetyx8#7543: corollary is that if Diffeq dies, Julia dies :sadge:
LDJ#2946: I think it’s hard to answer such questions without shitting on Julia so I’d say it makes sense to deflect to the already curated pr proof info on the website.
When it comes to things that Julia can’t do, I feel like the demo towards the last half of the Keynote with Jeremy Howard showed a decent amount of things that I don’t think Julia does, and makes things specifically around AI much easier and efficient
LDJ#2946: Also it seems much more focused overall on AI specific feature set than Julia
synquid#7193: https://news.ycombinator.com/item?id=35790367 Lattner replied well to the Julia question here
synquid#7193: I especially like the point about there being room for more than one thing
synquid#7193: adding competition to the space is good, not a waste
ilovescience#3282: i think learning mojo will be easier than learning julia and will be easier for folks to switch
Fleetwood#1949: I’m not sure where the debate is to be had here. If Julia was vastly superior to python it would have taken over. I’m sure it’s better than python in many ways, but not by enough.
Fleetwood#1949: Modular doesn’t have the problem of competing with 30 years of python momentum as it’s a superset
Fleetwood#1949: Rust is only displacing C++ because it is 10x better
synquid#7193: julia also does not have many resources
genetyx8#7543: > If Julia was vastly superior to python it would have taken over.
Looks at historical dominance of terrible languages :thonk:
synquid#7193: kinda hard to compete with an ecosystem where tech giants are putting billions into python libraries
Fleetwood#1949: That's true! but they're good enough, and have massive momentum. You need to be significantly better to defeat the momentum unfortunately.
Fleetwood#1949: not even significantly, you need to be paradigm shifting + reduce amount of migration friction
ephemical#2302: definitely
Fleetwood#1949: This is a big factor too
genetyx8#7543: fwiw, I don't think Julia is likely to displace Python in ML, for all the reasons stated above. That doesn't mean it will die, as it *is* in a good position to replace Fortran, and eat Matlab's lunch at the same time.
genetyx8#7543: but those are "niche" domains
synquid#7193: I think the question has mostly been "why a new language instead of improving julia?"
synquid#7193: which is fair
Fleetwood#1949: They’ll all dance around the answer but it’s obvious
genetyx8#7543: https://imgs.xkcd.com/comics/standards.png
Fleetwood#1949: 💵
synquid#7193: there's not *that* much competition for replacing python
synquid#7193: I think more is good
Hyperion#0575: This is funny: https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
archtoad#5416: Nice. Is this real?
Hyperion#0575: Probably yes
archtoad#5416: I’ve been having “we will always be too far behind openai” existential crises so good to see reverse existential crises on the flip-side
tpapp157#3643: The post makes a bunch of good points and it's worth taking a moment to applaud all the amazing things the OS AI community has accomplished. Still the post also has a very rosy and uncritical view of the accomplishments of the OS community. Both the OS community and closed source commercial community have distinct advantages and shortcomings but the grass is always greener.
OccultSage#3875: 🙄
Fleetwood#1949: It's certainly viewing things through an optimistic lens
Kharr#7888: I would discount any of the OS progress that's based mostly on fine-tuning models on ChatGPT output in an attempt to clone it. Open Assistant is the only thing mentioned that is actually really impressive.
jamesc#4183: but the point that "theres no defensive moat since people can clone us" still stands i think? even if its not very impressive
Kharr#7888: Until the cloning plays out in court I wouldn't say that. However, things like OA getting close without cloning is what really speaks to OS being a competitive force. The progress on that in such a short time is very impressive.
tpapp157#3643: Not really. There are a few key factors keeping the OS community competitive currently: Access to large commercial pre-trained models + efficient finetuning techniques, an enormous pool of cheap labor, current DL efforts are still mostly focused on simple data types which are readily available, the continued dramatic drop in the cost of compute. The alignment of these factors means that current OS efforts are able to follow very closely in the footsteps of new commercial innovations, but if/when these things change there will be a much larger barrier.
tpapp157#3643: History shows us tons of examples of new technologies that started out simple enough for amateur hobbyists to participate in sota development but then very quickly escalated in scale and complexity beyond what individuals or small hobbyist teams could replicate. I don't see any reason why DL wouldn't follow a similar path.
MicPie#9427: Oh, it is great to explain it, thank you for sharing it here! :hap:
MicPie#9427: I once debugged something and used the same visualizations to check for issues.
StellaAthena#3530: > Cerebras (not to be confused with our own Cerebra) trains the GPT-3 architecture using the optimal compute schedule implied by Chinchilla, and the optimal scaling implied by μ-parameterization. This outperforms existing GPT-3 clones by a wide margin, and represents the first confirmed use of μ-parameterization “in the wild”. These models are trained from scratch, meaning the community is no longer dependent on LLaMA.
???????
StellaAthena#3530: Their own paper shows it underperforming Pythia on a per-parameter basis by a wide margin
Hyperion#0575: yeah the author doesn't seem to have a great level of knowledge about LLMs
Perhaps trying to hype up OSS for other reasons
StellaAthena#3530: This was the only thing that stood out to me as “wrong”
StellaAthena#3530: I would have included Pythia and trlX on the timeline, and there are things about the positions the author takes I wouldn’t necessarily agree with. But I think it’s broadly speaking factually accurate? Only skimmed it though
StellaAthena#3530: Am I missing some falsehoods?
Hyperion#0575: I would say most of my other disagreements are more about overrating/underrating various things
For example, I think they are overrating LoRA's ability to compete with good finetuning, underrating the gap between OAI's models and open ones like OpenAssistant, etc
tpapp157#3643: Most of the OS models have had no where near the level of general scrutiny that would be required to surface their shortcomings and in that respect the post very charitably gives these models the benefit of the doubt.
rallio#9917: I am glad in a strange way, that at least the people within this large corporation are not in denial of reality like so many other large corps are
rallio#9917: I think the biggest mistake the large corporates made with all this, was they took the AI safety issue, which is an important issue with models of a particular size and amount of compute training, and added all the concern and hesitancy about release and commercialization to models that have essentially zero existential safety risk (like openAI withholding gpt2 for several months, google only releasing imagen as a cartoon monster generator app, etc)
rallio#9917: If google or any of the other big companies had just productized the smaller models with some common-sense boilerplate disclaimers about limited liability, and then enforced that boilerplate against people who violated it with some lawsuits, we would probably be in a much different world
StellaAthena#3530: Big news in the AI Red Teaming world: https://aivillage.org/generative%20red%20team/generative-red-team/
White House promotion of the event: https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/fact-sheet-biden-harris-administration-announces-new-actions-to-promote-responsible-ai-innovation-that-protects-americans-rights-and-safety/
KublaiKhan1#6681: This all seems like pretty good news to me?
KublaiKhan1#6681: More funding + deserved scrunity
StellaAthena#3530: Very
hails#6601: This is really cool
StellaAthena#3530: The AI Village is great (disclaimer: I used to help organize it, before I got too busy with EAI).
rallio#9917: I really like how whoever wrote this google leak also talks about LoRA. I am finding myself largely in agreement with whoever this person is
StellaAthena#3530: They've been running hands-on prompt hacking workshops at several security conferences to spread interest and awareness of it in the security community
rallio#9917: It seems like they are really hitting the nail on the head. Maybe they should leave google and be part of what is working rather than trying to reform a broken internal bureaucracy. Author of the document, if you are in this discord, consider it! 🙃
rallio#9917: ```Data quality scales better than data size
Many of these projects are saving time by training on small, highly curated datasets. This suggests there is some flexibility in data scaling laws. The existence of such datasets follows from the line of thinking in Data Doesn't Do What You Think, and they are rapidly becoming the standard way to do training outside Google. These datasets are built using synthetic methods (e.g. filtering the best responses from an existing model) and scavenging from other projects, neither of which is dominant at Google. Fortunately, these high quality datasets are open source, so they are free to use.```
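(A minimal sketch of the "filtering the best responses from an existing model" recipe described in the excerpt above; `generate()` and `score()` are hypothetical placeholders for whatever base model and reward model/heuristic is available, not any specific API.)
```
# Sketch of synthetic dataset curation by filtering model outputs.
# generate() and score() are hypothetical placeholders.
def generate(prompt: str, n: int = 8) -> list[str]:
    """Sample n candidate responses from an existing model (placeholder)."""
    raise NotImplementedError

def score(prompt: str, response: str) -> float:
    """Rate a response with a reward model or quality heuristic (placeholder)."""
    raise NotImplementedError

def build_curated_dataset(prompts: list[str], keep_top_k: int = 1) -> list[dict]:
    """Keep only the top-scoring responses per prompt for finetuning."""
    curated = []
    for prompt in prompts:
        candidates = generate(prompt)
        ranked = sorted(candidates, key=lambda r: score(prompt, r), reverse=True)
        curated += [{"prompt": prompt, "response": r} for r in ranked[:keep_top_k]]
    return curated
```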
rallio#9917: I like that title "Data Scavenger"
rallio#9917: rather than data scientist
OccultSage#3875: No focus on correctness, or dataset quality.
ILmao#5683: Also the proportion of power users who would be willing to put in the extra time to set up an open source model for use
ILmao#5683: I'm not surprised it turned out this way. Starting from whole cloth makes it much easier for them to integrate with MLIR and all the compiler/runtime tech they were developing in parallel
LouisB19#4062: Is anyone here interested in (transformer) symbolic music models like Music Transformer or MuseNet?
LouisB19#4062: It's such an unexplored area imo
synquid#7193: python dependency management is the biggest obstacle to agi, if Mojo does better than pip we're so back
LouisB19#4062: I've gotten some interesting results trying to model sheet music using a large transformer encoder.
LouisB19#4062: https://soundcloud.com/loua19/sets/bach-vs-ai-fugue
Fleetwood#1949: https://github.com/declare-lab/tango
LouisB19#4062: Audio seems like a big deal atm
LouisB19#4062: I'm actually more interested in symbolic models - they seem relatively unexplored in comparison to audio.
LouisB19#4062: But it's actually relatively easy to implement, just train a bog-standard transformer model on symbolic representations of sheet music
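(A very rough illustration of that "bog-standard transformer on symbolic tokens" idea, in PyTorch. This is not the actual model or tokenization discussed here; the vocabulary size, context length, and token scheme are invented for the example.)
```
# Sketch: a tiny causal transformer LM over symbolic music tokens.
# VOCAB_SIZE, CONTEXT_LEN, and the token scheme are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB_SIZE = 512    # e.g. pitch / duration / bar tokens
CONTEXT_LEN = 256

class TinyMusicLM(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB_SIZE, d_model)
        self.pos_emb = nn.Embedding(CONTEXT_LEN, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, tokens):  # tokens: (batch, seq)
        seq_len = tokens.size(1)
        pos = torch.arange(seq_len, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        # causal mask so each position only attends to earlier tokens
        mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=tokens.device),
            diagonal=1,
        )
        return self.head(self.blocks(x, mask=mask))

# Standard next-token cross-entropy training step on a random batch.
model = TinyMusicLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
batch = torch.randint(0, VOCAB_SIZE, (8, CONTEXT_LEN))
logits = model(batch[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB_SIZE), batch[:, 1:].reshape(-1)
)
loss.backward()
opt.step()
```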
LouisB19#4062: You can listen to some experiments here. Definitely has a lot of potential. https://soundcloud.com/loua19/sets/ai-fugues-v2
KublaiKhan1#6681: How do you convert sets of sheet music into songs though?
KublaiKhan1#6681: I couldn't find any good datasets
StellaAthena#3530: @LouisB19 I know someone who would be interested in collaborating with you, I suspect! I'll shoot them an email.
LouisB19#4062: That has been the main issue, I had to work on that problem for a while
LouisB19#4062: but as it stands I have all classical piano repertoire converted!
StellaAthena#3530: Yeah Halley has been struggling with that too
StellaAthena#3530: She has some kind of neurosymbolic approach IIRC
LouisB19#4062: I basically wrote a parser for LilyPond file format
LouisB19#4062: LilyPond is a GNU score writing file format
LouisB19#4062: So basically parse it to a piano-roll form
LouisB19#4062: So I have high quality piano-rolls of the entire classical piano repertoire
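(The LilyPond parsing itself is the hard part and is omitted here; as a small sketch, once note events have been extracted as (MIDI pitch, onset in beats, duration in beats) tuples, assembling a piano-roll might look like this. The time resolution is an arbitrary assumption.)
```
# Sketch: rolling extracted note events into a piano-roll matrix.
# The LilyPond parsing step is omitted; events are assumed to already be
# (midi_pitch, onset_in_beats, duration_in_beats) tuples.
import numpy as np

STEPS_PER_BEAT = 4   # 16th-note grid, an arbitrary resolution choice
NUM_PITCHES = 128    # full MIDI pitch range

def events_to_piano_roll(events, total_beats):
    roll = np.zeros((NUM_PITCHES, total_beats * STEPS_PER_BEAT), dtype=np.uint8)
    for pitch, onset, duration in events:
        start = int(round(onset * STEPS_PER_BEAT))
        end = int(round((onset + duration) * STEPS_PER_BEAT))
        roll[pitch, start:end] = 1
    return roll

# e.g. a C major chord held for one beat starting at beat 0
example = [(60, 0.0, 1.0), (64, 0.0, 1.0), (67, 0.0, 1.0)]
piano_roll = events_to_piano_roll(example, total_beats=4)
```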
KublaiKhan1#6681: That's awesome
StellaAthena#3530: Here's her NeurIPS paper about it https://proceedings.neurips.cc/paper_files/paper/2022/hash/f13ceb1b94145aad0e54186373cc86d7-Abstract-Conference.html
LouisB19#4062: And have pre-trained a 160m transformer encoder on it!
LouisB19#4062: The samples I linked only took 30min of finetuning on a A100
LouisB19#4062: finetuned on like 40 specific pieces of music by Bach
StellaAthena#3530: You're welcome to create a thread in #community-projects 🙂
LouisB19#4062: I might collab with StabilityAI but they are still scaling up their music team infrastructure
LouisB19#4062: I'm starting my PhD in September, generally interested in building musical foundation models
KublaiKhan1#6681: Do you have any thoughts on how you'll extend beyond piano, from the data side?
LouisB19#4062: The data side is where there is work to be done I think. Lots of optimizations to be made
KublaiKhan1#6681: I mean even finding data
LouisB19#4062: One method is to just add instrument tags to the 'notes' in the piano-roll
LouisB19#4062: Ahh actually all the music I have is labelled in terms of instrument.
StellaAthena#3530: You should read some of the BioML and ChemML lit
LouisB19#4062: Like I have labelled parts for all well known orchestral works
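(One way the "instrument tags on notes" idea mentioned above could look in practice; the token format below is a made-up example, not the labeling scheme actually used for these datasets.)
```
# Sketch: folding instrument labels into note tokens, as an alternative to
# separate per-instrument tracks. The token format is an assumption.
from dataclasses import dataclass

@dataclass(frozen=True)
class NoteToken:
    instrument: str   # e.g. "violin", "piano"
    pitch: int        # MIDI pitch
    duration: float   # in beats

def to_token_strings(notes):
    """Flatten tagged notes into strings a text-style tokenizer can consume."""
    return [f"<{n.instrument}> P{n.pitch} D{n.duration}" for n in notes]

notes = [NoteToken("violin", 76, 0.5), NoteToken("cello", 48, 1.0)]
print(to_token_strings(notes))
# ['<violin> P76 D0.5', '<cello> P48 D1.0']
```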
StellaAthena#3530: They've been designing techniques for using transformers on annotated sequence data for years
LouisB19#4062: It's honestly becoming a bit too big of a project to do all by myself, but people typically aren't that interested in neuro-symbolic music models atm. I think they will be in a year though, if you know what I mean
tpapp157#3643: I strongly suspect that data limitations mean the reverse route is the more promising route forward for music. Direct audio synthesis and then a secondary transcribing model if sheet music is necessary.
StellaAthena#3530: DM me your email and I'll introduce you to Halley
LouisB19#4062: It's true, but actually in the case of classical music in particular, many people have painstakingly labeled most of it.
LouisB19#4062: For modern music it's impossible though due to sheet music not being widely available in a consistent format.
LouisB19#4062: I think theoretically if you could collab with a large sheet music publisher, you could parse their data and train models that way.
tpapp157#3643: Yeah classical music is the low hanging fruit for sheet music but that's a pretty niche genre.
LouisB19#4062: but that data is not available publicly. I tried fine-tuning on jazz but it is too hard to get good enough data.
LouisB19#4062: It is kinda nice though, I can produce endless nice sounding classical piano music to listen to while I work.
LouisB19#4062: Would be so cool if I could do it for jazz though
LouisB19#4062: I'm actually prepping to train a transformer decoder (with a modified loss function) soon. I have too much data for the amount of compute I can afford tho lol. I'm currently training these models on 20% of it.
tpapp157#3643: Even if you got sheet music for modern music, that's not really going to be helpful because it doesn't capture texture and other digital post-processing. So an 'A' note on electric guitar can mean a million different things in terms of actual audio.
tpapp157#3643: Maybe you can treat the problem as multi-modal and approach it from both ends and try to learn a style-space to mediate in the middle.
LouisB19#4062: My feeling is that symbolic models are consistent and high quality but not very expressive, while audio models (or even MIDI models) are very expressive in terms of timbre but not consistent at all.
LouisB19#4062: So if you could combine a symbolic model with a symbolic -> audio model you could get something really good imo. Lots of work to be done.
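(A bare-bones sketch of that two-stage idea: a symbolic model writes the score, and a separate symbolic-to-audio model renders it. Both stages are hypothetical placeholders, not existing systems.)
```
# Sketch of the proposed symbolic -> audio pipeline. Both models are
# hypothetical placeholders.
def generate_score(prompt_tokens):
    """Symbolic model: returns note/duration tokens (placeholder)."""
    raise NotImplementedError

def render_audio(score_tokens):
    """Symbolic-to-audio model: returns a waveform (placeholder)."""
    raise NotImplementedError

def compose(prompt_tokens):
    score = generate_score(prompt_tokens)  # consistency from the symbolic stage
    return render_audio(score)             # expressive timbre from the audio stage
```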
KublaiKhan1#6681: Actually, I don't think this is an issue
KublaiKhan1#6681: It's a feature, not a bug
KublaiKhan1#6681: It makes deterministic mapping impossible, yes
KublaiKhan1#6681: But current diffusion models, for example, are also 'solving' a similarly intractable issue