makya#2148: Damn.
makya#2148: Sucks tbh. About the Stanford Alpaca demo model being suspended or unavailable to use. Going down the route of Galactica.
&.#0001: what's the difference between acc and acc_norm?
&.#0001: canary-babbage
```json
{
  "results": {
    "hellaswag": {
      "acc": 0.42859988050189207,
      "acc_stderr": 0.004938643787869549,
      "acc_norm": 0.5452101175064729,
      "acc_norm_stderr": 0.004969341773423513
    }
  },
```
&.#0001: is there a better benchmark than hellaswag to run?
&.#0001: I am going to run winogrande after running this on the -c6 model
&.#0001: they're not instruct tuned so that makes sense
&.#0001: they seemed decent at completing text for me
&.#0001: OpenAI RLHF and instruct fine-tuning can make in-context learning and few shot worse on some tasks
&.#0001: a week ago they dropped gpt-3.5-brooke to gpt-4 api users, it didn't work, and disappeared after 3 hours
&.#0001: is this their way of monitoring researchers?? is my account flagged??
StellaAthena#3530: @&. I would run Lambada, TriviaQA, and PiQA if I had to pick a couple shorter ones
&.#0001: has anyone run these benchmarks on gpt-4 and code-davinci-002 (base gpt-3.5)?
&.#0001: how long is winogrande? there are many benchmarks for it, so I might run it first. will run those though
StellaAthena#3530: This falls between FairSeq 350M and 1.3B
&.#0001: it's equivalent to original openai babbage
StellaAthena#3530: No, Babbage got 54% on HellaSwag
StellaAthena#3530: It’s almost identical to the original Ada
StellaAthena#3530: source: <https://blog.eleuther.ai/gpt3-model-sizes/>
&.#0001: I see
&.#0001: ah, I was comparing its acc_norm to that page's acc (assuming it uses acc and not acc_norm)
&.#0001: canary-babbage-c6
"results": {
"hellaswag": {
"acc": 0.42859988050189207,
"acc_stderr": 0.004938643787869549,
"acc_norm": 0.5451105357498506,
"acc_norm_stderr": 0.004969431900874302
&.#0001: running TriviaQA next
StellaAthena#3530: acc_norm is a “length-normalized” measurement that I generally don’t recommend using unless you’re confident you know what you’re doing with it
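(For reference, a rough sketch of what the two numbers mean for a multiple-choice task like HellaSwag — an illustration of the idea, not the harness's exact implementation: acc picks the candidate with the highest summed log-likelihood, while acc_norm divides each candidate's log-likelihood by its byte length before taking the argmax.)
```python
import numpy as np

def score_example(loglikelihoods, continuations, gold_index):
    """Illustrative acc vs acc_norm for one multiple-choice example.

    loglikelihoods: summed log-prob of each candidate continuation under the model
    continuations:  the candidate answer strings
    gold_index:     index of the correct answer
    """
    lls = np.array(loglikelihoods)
    # acc: pick the continuation with the highest raw log-likelihood
    acc = int(np.argmax(lls) == gold_index)
    # acc_norm: normalize each log-likelihood by the continuation's byte length
    lengths = np.array([len(c.encode("utf-8")) for c in continuations])
    acc_norm = int(np.argmax(lls / lengths) == gold_index)
    return acc, acc_norm
```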
&.#0001: interestingly enough it's named -c6 and the content filter is named -c4, implying c means a model version
&.#0001: normal and c6 may be the same model
Kharr#7888: Can you run lambada? It separates out the models and architectures pretty well.
&.#0001: ok
&.#0001: canary-babbage-c6
```
"triviaqa": {
"acc": 0.11420489702112614,
"acc_stderr": 0.002990468984744803
},
"piqa": {
"acc": 0.7453754080522307,
"acc_stderr": 0.01016443223706048,
"acc_norm": 0.7459194776931447,
"acc_norm_stderr": 0.01015727199913505
}
```
&.#0001: ```
"results": {
  "lambada_openai": {
    "ppl": 5.576084844379262,
    "ppl_stderr": 0.13531174035884028,
    "acc": 0.6246846497186105,
    "acc_stderr": 0.006745911720926654
  }
```
kd90138#9368: how much is this costing you?
&.#0001: It could be using the cl100k tokenizer GPT-4 is using
nshepperd#2316: even v2-8 tpus are being constantly preempted now
nshepperd#2316: i guess trc is over
Karsus#7147: More likely it's an experimental version that needs more feedback to refine.
It does occur to me though, that given the precedent for AI generated images... there may be very little in the way of legal defense for GPT-3/4 outputs... If someone takes and uses them... OpenAI may not be able to do much (legally)... Beyond maybe trying to sue whoever first downloaded the dataset for having violated their ToS.
Without KYC of some kind, that's rather meaningless.
plotato#9180: Is it realistic to build a PC that can run GPT-J?
plotato#9180: And if so, what hardware would I need to run it?
paws#3311: Gptj? You can comfortably do it in a 16-24gb gpu 🤔, with some fp tricks you can even load larger models
lunarflu#6769: no announcement? 🤔
&.#0001: nope
lunarflu#6769: usually for GPT3 / chatgpt / gpt4 there's big announcements / media farming
lunarflu#6769: hmm, maybe it really is a leak
Kharr#7888: This lines up really well with the 1.3B parameter models. Thanks for sharing.
Mr. Humble#3058: Hello all,
Actually I am willing to create a "GenerativeQA" system which can take a question and then fetch `k` relevant documents from the storage and then I would like to give these documents as the context to get the answer.
I am willing to use the "GPT-J-6B" model for it. And this is the approach I would like to go with:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Step 1: Create document store (in haystack) and store all documents
# Step 2: Ask question. So assuming based on the question we have received top 10 relevant documents
# Step 3: Append these documents together and pass as the context.
# Step 4: Create the prompt
prompt = \
f"""Answer the question from the context below. And don't try to make up an answer.
If you don't know the answer, then say I don't know.

Context: {context}
Question: {query}
Answer:"""

# Step 5: Create a model and try generating the answers
# using gpt-neo-125m for now
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

# Step 6: Get the answers
tokens = tokenizer(query, return_tensors="pt")
output = model.generate(**tokens,
                        temperature=0.5,
                        min_length=5,
                        max_length=200,
                        early_stopping=True,
                        do_sample=True,
                        num_beams=8,
                        repetition_penalty=2.0, top_k=50)
print(tokenizer.decode(output[0]))
```
But I am getting the error here because of the **long length** of the prompt. Will you guys please help me through this?
`Input length of input_ids is 553, but max_length is set to 200. This can lead to unexpected behavior. You should consider increasing max_new_tokens`.
And...
`The expanded size of the tensor (200) must match the existing size (554) at non-singleton dimension 0. Target sizes: [200]. Tensor sizes: [554]`
So obviously, I need to truncate the text somehow.
Is this the right/fast way to create such QA systems?
Are my parameters wrong?
Please help. Thanks 🙏
Kharr#7888: I recommend reading the documentation of the library you're using which explains what all those parameters mean. This is not the place to be asking beginner questions. https://huggingface.co/docs/transformers/main_classes/text_generation
Mr. Humble#3058: I can understand, sorry to bother you guys. But I have gone through the documentation, and the question I want to ask you is, is this the feasible approach to provide the context in the prompt and get the answers?
Kharr#7888: Yes, your idea is correct. You can retrieve documents from a corpus using search (setting up Elasticsearch is a common method for this) and then feed them into the context of the model + the question and have the model use the information from the document to answer the question. If you search around on github there are many projects like this which could help you see how others have approached this problem.
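(For what it's worth, a minimal sketch of the generation step that avoids the max_length error quoted above: truncate the full prompt so it leaves room in the context window, and use max_new_tokens, which counts only generated tokens. The model name and the 2048-token window are just the values from the snippet above; the placeholder context/query come from the retrieval step.)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-125M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

context = "..."  # the retrieved documents, concatenated
query = "..."    # the user's question
prompt = (
    "Answer the question from the context below. "
    "If you don't know the answer, then say I don't know.\n\n"
    f"Context: {context}\nQuestion: {query}\nAnswer:"
)

# Truncate the prompt so prompt + answer fits in gpt-neo's 2048-token window
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=2048 - 200)
output = model.generate(
    **inputs,
    max_new_tokens=200,   # limit on *generated* tokens, unlike max_length
    do_sample=True,
    temperature=0.5,
    top_k=50,
)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```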
Mr. Humble#3058: Thanks for the help 🤗
Louis#0144: TIL GPT4 is 80% of the way to alignment
Louis#0144: thnx @Hyperion
plotato#9180: Thanks
Dashiell#8739: still very early days, but I'm going to be keeping an eye on this project. Rust is an incredibly ergonomic programming language, but it has struggled to support CUDA for a really long time. Looks like someone has finally figured it out
https://coreylowman.github.io/2023/03/15/release-0.11.0.html
Realmsmith#4506: What's the next project
synquid#7193: I’ve been thinking about this, but all I really care about is that it can codegen to mlir or some other ML compiler. I really don’t think we need another backend
Dashiell#8739: I think the benefit of this would be as a front end, of writing model and training code in rust instead of python that's lowered down to C
synquid#7193: Basically that but target an ML compiler instead of llvm
Leaf#2597: does anyone know any foss music models?
Leaf#2597: i'm interested in making an ai-music web radio
vikasp#7540: Does anyone have an opinion on what the next phase of LLMs looks like? I think the first phase was "throw more data at transformers with more params". Now that we're running out of reliable data, that phase seems to be ending. More params with the same data may not buy us much (as we saw with chinchilla and llama). I haven't seen any architecture ideas that significantly shift this.
The most interesting directions I've seen are learning from experience (a superset of alignment), multimodal learning, and a broad bucket I'll call fine-tuning.
The PALM-E paper showed a decrease in language performance with multimodal learning. This may be an issue of parameter count, since performance decreased less with more params.
There's a lot of work being done on in context learning, fine-tuning, and prompting (toolformer, react, etc). These improvements make LLMs much more useful in the real world, but they're constrained by the capabilities of the base model.
Learning from experience seems promising, but the action space is huge. This may be less data-efficient than traditional training (correct me if I'm wrong).
You could also look at learning from synthetic data, which is interesting, but can a model trained on purely synthetic data perform better than the model that generated the data?
Do you think any of these directions are promising as a way to solve the data problem? Or something I missed? I'd appreciate any thoughts or pointers!
AI_WAIFU#2844: rule two
Hyperion#0575: https://twitter.com/sama/status/1638635717462200320 hmm
synquid#7193: so it might've been redis after all?
jrowe#5371: lol
jrowe#5371: probably, given the "oh, i saw that in our environment" stories all over
kurumuz#5695: should have just actually encrypted things
kurumuz#5695: but they want to view data -- so
kurumuz#5695: what stops it from this happening again
kurumuz#5695: nothing
lunarflu#6769: I feel like being more open would have mitigated this - more communication, similar expectations
lunarflu#6769: yeah until equivalent alternatives appear (maybe claude?) people will bite the bullet and accept the risk I think
lunarflu#6769: Who's gonna drop chatGPT when it offers so much?
zphang#7252: also
> blaming opensource
also
> not saying what library it is
anotherone#9475: they said will release postmortem
lunarflu#6769: jason you don't get it.....they just need to do everything in their clean little company.....if they do it themselves everything will be safe and perfect 😊
lunarflu#6769: if it works it's because they paved the path themselves...
if it fails it's because open source bad...
zphang#7252: "Opensource was a mistake" - Sutskever (Mar 2023, paraphrased)
paws#3311: https://tenor.com/view/he-cant-keep-getting-away-with-it-fed-up-done-upset-angry-gif-15272547
lunarflu#6769: gpt2 paraphrasing
anotherone#9475: Imagine if gpt1234/chatgpt were all secrets until now that were just rumors floating around, and nobody got to try it
anotherone#9475: Would be really interesting world
lunarflu#6769: true, but they probably get free debugging thanks to chatgpt
lunarflu#6769: instead of having to guess how people will break it, release weak version and let them break it
&.#0001: canary-babbage is gone from the openai api
Xmaster#8447: It was exactly the purpose of the chatgpt release indeed
&.#0001: poe.com has claude
&.#0001: this question takes significant effort to answer and involves discussing capability advances that alignment-y folks may not want to discuss in a public discord server. I'm doing something similar to learning from experience but hesitate to share details with a stranger
lunarflu#6769: to be fair I also like this arrangement 😌
lunarflu#6769: complete guess, but "self-verification" or some secondary system that doesn't rely solely on probabilistic generation
Xmaster#8447: I think it is a logical mindset but unsure this will scale well
lunarflu#6769: Build in explainability to LLMs and you eliminate a lot of the worries
Xmaster#8447: Yeah but “Open”AI doesn’t seem concerned about explainability. Just capabilities patched with RLHF…
lunarflu#6769: They don't need to worry about something complicated like explainability, they have their funny private ethics team (two of them are ex OpenAI 😳 ) so everything is A-okay
Fessus#9563: Explainability in a LLM context just means making the model explain itself
lunarflu#6769: Yes I tried to humorously blend both definitions
vikasp#7540: Fair enough. I'm looking to learn, also, so I'd appreciate any pointers you have, even if they're foundational public papers. I'm not quite to the point where I'm implementing novel ideas, but I'll get there very soon.
One idea I've had floating around is to use a "seed" for training/fine-tuning instead of a full dataset. Basically pick 5-10 examples that you're interested in training the LLM on. Then, create a giant database of embeddings, and keep augmenting your training data each iteration from the embeddings. You would drop out any examples that the LLM performs well on to focus on the other tasks. Similar to https://arxiv.org/abs/2302.14051 , but a little more general. You could modify this to do online learning as well. It doesn't solve the data efficiency problem, but it would allow models to self-finetune for specific tasks, saving compute.
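(A very loose sketch of one iteration of that loop — everything here is made up for illustration: `seed_embs`, `pool_embs`, and the per-example `losses` are assumed to be precomputed, and the selection heuristic is just one possible choice.)
```python
import numpy as np

def select_next_batch(seed_embs, pool_embs, pool_examples, losses, k=64, loss_floor=0.5):
    """One iteration of the hypothetical 'seed' fine-tuning loop.

    seed_embs:     embeddings of the 5-10 seed examples (unit-normalized)
    pool_embs:     embeddings of a large candidate corpus (unit-normalized)
    pool_examples: the corresponding raw examples
    losses:        current model loss on each pool example
    """
    # drop examples the model already handles well, to focus on what's still hard
    hard = losses > loss_floor
    # cosine similarity of every remaining candidate to its nearest seed example
    sims = (pool_embs[hard] @ seed_embs.T).max(axis=1)
    # take the k candidates most similar to the seed task
    idx = np.argsort(-sims)[:k]
    kept = [ex for ex, keep in zip(pool_examples, hard) if keep]
    return [kept[i] for i in idx]
```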
&.#0001: Do you have any previous work to suggest your competence you'd like to share? I'm focused on creating prosocial solutions and AI alignment (while doing LLM research)
vikasp#7540: Hmm, have you seen any papers that do non-probabilistic generation? That's interesting.
lunarflu#6769: Nope, just something I was thinking about
lunarflu#6769: Humans can retroactively provide reasoning, but LLMs seem (from my perspective) very linear
lunarflu#6769: How do we know if the patient is sick? Well, the machine says 60% probability....
lunarflu#6769: But if you ask a human (even if their accuracy is lower) they can point out the variables that led to our conclusion
lunarflu#6769: To clarify I think probability will always be part of it, but we just add a second brain that always seeks to prove / disprove what the first brain says. Seems like it'd need to memorize sources though, not just patterns
vikasp#7540: This seems similar to GANs (generator/discriminator) and chain of thought prompting / multistep reasoning. I've also seen some work on semiparametric networks (using kNN to augment output probabilities) that also seems related. None of them do away with probability, but they do get closer to what you're thinking about (memorize sources, prove/disprove, and chains of reasoning)
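(For the "kNN to augment output probabilities" part, the usual kNN-LM-style formulation interpolates the model's softmax with a distribution built from retrieved neighbors; a toy sketch, with the datastore lookup and distance function left abstract.)
```python
import numpy as np

def knn_augmented_probs(lm_probs, neighbor_token_ids, neighbor_dists, vocab_size, lam=0.25):
    """Interpolate LM next-token probabilities with a kNN distribution (kNN-LM-style sketch).

    lm_probs:           [vocab_size] softmax output of the language model
    neighbor_token_ids: token ids of the k retrieved neighbors' next tokens
    neighbor_dists:     distances from the query context to those neighbors
    """
    # turn distances into weights: closer neighbors get more probability mass
    weights = np.exp(-np.array(neighbor_dists))
    weights /= weights.sum()
    knn_probs = np.zeros(vocab_size)
    for tok, w in zip(neighbor_token_ids, weights):
        knn_probs[tok] += w
    # final distribution is a mixture of the parametric and nonparametric parts
    return lam * knn_probs + (1 - lam) * lm_probs
```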
kd90138#9368: https://github.blog/2023-03-22-github-copilot-x-the-ai-powered-developer-experience/
kd90138#9368: this did the rounds right?
Dyno Bot#3296: going to kms, I just recently switched to helix.
if this is good, I might have to switch back to neovim to use it. Or learn to write helix plugins
valar#2262: Helix is cool but no full backwards compatibility with vim motions makes it not worth it
Dyno Bot#3296: I would never use it with vim motions, the appeal for me is that the motions are different.
If you wanted Helix with Vim motions, you could use a plug and play nvim distro like astro
valar#2262: My complaint is, even if the Helix motions are "better" - it's so new and lacking in support/adoption that you will have to use a mix of tools.
Pretty much everything else already uses vim motions... I doubt we will get `set -o helix` soon (or ever).
valar#2262: It's like the people switching from Qwerty to Dvorak because it's "better".
valar#2262: It might be better, but it's better in ways that don't matter :p
valar#2262: But don't let me stop you from enjoying Helix, I think it's awesome.
Dyno Bot#3296: yeah that's credible, I just don't use that many alternative applications with vim keybindings that often
Dyno Bot#3296: to the extent I do use them, it's usually limited to hjkl, i, or other bindings that are shared with helix
Dyno Bot#3296: I use dvorak 😊
Dyno Bot#3296: fwiw, you can switch back to qwerty with a little practice. There are videos online of people swapping between keyboard layouts in the middle of typing tests
valar#2262: OF COURSE YOU DO 🥰
valar#2262: I `ciw` all day!
valar#2262: `shift + i` and `shift + a` don't even work in Helix iirc?
Dyno Bot#3296: nvm these do work
Dyno Bot#3296: ``wbc`` all day
valar#2262: I swear at least one of them didn't when I tried
Dyno Bot#3296: me too, just checked and I was wrong
AI_WAIFU#2844: So now that OAI is enabling access to the base model, we should be able to estimate its entropy and therefore its "effective" model size/training tokens
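(Concretely, one way to do that is to score held-out text with the base model and average the negative token logprobs. A sketch with the pre-1.0 openai client, whose legacy Completions endpoint returns prompt logprobs when echo=True and max_tokens=0; the call shape differs in newer client versions, and "davinci" is just a placeholder model name.)
```python
import numpy as np
import openai  # pre-1.0 client; newer versions use a different call shape

def nats_per_token(text: str, model: str = "davinci") -> float:
    """Average negative log-prob (an entropy estimate in nats/token) of `text` under a base model."""
    resp = openai.Completion.create(
        model=model,
        prompt=text,
        max_tokens=0,   # don't generate anything...
        echo=True,      # ...just score the prompt tokens
        logprobs=0,
    )
    token_logprobs = resp["choices"][0]["logprobs"]["token_logprobs"]
    # the first token has no conditional logprob, so it comes back as None
    return -np.mean([lp for lp in token_logprobs if lp is not None])
```
Averaged over enough held-out text, that cross-entropy can then be lined up against published scaling-law curves to back out a rough "effective" parameter/token budget.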
ERees#4012: rwkv
lunarflu#6769: codex?
paws#3311: Gpt4? :guilty:
AI_WAIFU#2844: ye
paws#3311: damn thats cool
lunarflu#6769: 😳
circuit10#0158: It’s only through a research thing, not the normal API, isn’t it?
Hawk#1399: I have access to a server with some A100s. Does my code have to be on the server to use the GPUs?
main#7610: yes????
Hugh#1639: You can transfer data to and from an A100 at 2TB/s, so even a 10GB/s IP link would be infeasibly slow.
skrishna55#3382: https://arxiv.org/pdf/2303.12712.pdf : AGI anyone?
lihe07#0906: AGI ❌
OpenAI Advertisement ✓
skrishna55#3382: how is it not AGI though?
StellaAthena#3530: It gets 82% on HumanEval and 0% on introductory-level Codeforces problems from this year. Seems like a clear sign of failing to generalize to me
lihe07#0906: and I don't think it's really a "cognitive" or anything like that.
StellaAthena#3530: It also gets worse than random on AMC 10 despite doing better than random (but still poorly) on AMC 12. AMC 10 is an easier version of AMC 12 designed for younger students
StellaAthena#3530: They make claims about memorization and deduplication in the advert they put out, but they use much weaker standards for this than is standard and weaker ones than they have historically used (see PaLM, GPT-3)
skrishna55#3382: are these results published that you could share? I am looking for such grave downsides of these models myself, but everyone seems to just fanboy these models
StellaAthena#3530: The 82% number is from the paper you linked, the 0% number is from @chilli’s thread here: https://twitter.com/cHHillee/status/1635790330854526981?s=20
StellaAthena#3530: The AMC numbers are from the report that OpenAI published along with announcing GPT-4
main#7610: note that the GPT-4 used in the "AGI paper" is different from the release version used for benchmarking in the GPT-4 report
StellaAthena#3530: The numbers in parentheses are percentiles. So on the easier test it gets 6th-12th percentile but in the harder test 45th to 66th. https://cdn.discordapp.com/attachments/729741769738158194/1088454883088273488/IMG_2193.jpg
StellaAthena#3530: (I’m not sure why the ranges are so big… is this implying that 21% of people get exactly 60/150 on AMC 12?)
StellaAthena#3530: Also it’s not possible for a human to get 3/45 LC Hard and score as poorly on CodeForces as the model does. As in, I would bet a lot of money that it’s never happened (excepting if someone decided to try to bomb it deliberately, I guess)
skrishna55#3382: that makes sense
synquid#7193: I guess memorisation is more or less going to be an issue from now on
synquid#7193: need to evaluate on novel tasks, and it does seem like the new MS paper (that a lot of people are calling an ad) is an attempt in that direction
jrowe#5371: Lies, damned lies, machine hallucinations, and statistics
skrishna55#3382: Well it sounded like they used the high-end GPT-4 as well as the intermediate ones
StellaAthena#3530: I think the key disagreement is over the level of rigor and general suspicion about assessments that happen to line up with the financial interests of the employers and investors of the authors.
Some Point Process#3793: Sommelier?
Some Point Process#3793: huh, didn't know that was a thing
Some Point Process#3793: this was an interesting strategy actually (from the 20 question MATH dataset sample tested on humans) https://arxiv.org/pdf/2103.03874.pdf but i can't immediately tell if it's right https://cdn.discordapp.com/attachments/729741769738158194/1088489284555788318/image.png
Some Point Process#3793: I had to run multiple times to get something that sounded correct
StellaAthena#3530: The strategy described in the text is correct, but is not what the computation does
Some Point Process#3793: Yeah
Some Point Process#3793: > One participant who does not like mathematics got 8/20 = 40% correct. A participant ambivalent toward mathematics got 13/20. Two participants who like mathematics got 14/20 and 15/20. A participant who got a perfect score on the AMC 10 exam and attended USAMO several times got 18/20. A three-time IMO gold medalist got 18/20 = 90%
letterrip#9942: One explanation might be that the lower grade material is more focused on visual geometry problems. See the example from '2014 AMC 10B, Problem #13' (last page) on the practice questions - https://www.maa.org/sites/default/files/2017_AMC1012_Practice_Questions.pdf
Some Point Process#3793: It seems like it has some ordinary questions. It got this (reworded question) right but had an interesting way of solving https://cdn.discordapp.com/attachments/729741769738158194/1088499542707683388/image.png
Some Point Process#3793: But the way I'd solve it is that for the product of consecutive integers (factorial), the larger integer has to be half of a perfect square, so that the value of the whole expression can be expressed as just a product of squares
Some Point Process#3793: https://cdn.discordapp.com/attachments/729741769738158194/1088500765766717511/image.png
Some Point Process#3793: The AOPS site has all the past competitions as well tho, e.g. https://artofproblemsolving.com/wiki/index.php/2017_AMC_10A_Problems
Some Point Process#3793: (I actually don't know if I worded the question so that only one of the choices was right sry)
Some Point Process#3793: So in that sense gpt-4 could be more correct, in taking a systematic approach via prime factorization, tho idk off the bat
ilovescience#3282: https://openai.com/blog/chatgpt-plugins
Ethycs#8461: What if alignment was easy actually:
https://twitter.com/HackingLZ/status/1638882289613512704/photo/1
Professor Kermit#4835: is anyone familiar with the coding language R?
synquid#7193: gpt-4 is
LunchInSpace#6973: This is going to be huge. IMO if this is implemented right it will be more impactful than GPT-4. Models won't have to accurately memorize information. Just be good enough at prose and pinging endpoints to grab the knowledge they need. Some previous research on this shows huge performance gains even with relatively (to OpenAI) small models https://arxiv.org/abs/2302.04761
LunchInSpace#6973: Plus- the model will be able to actually cite its sources
synquid#7193: it's absolutely big and so scary lol
uwu1#4864: XSS all the users!
Some Point Process#3793: infinite loop? https://cdn.discordapp.com/attachments/729741769738158194/1088531873807990794/image.png
Some Point Process#3793: It answered Q1 right tho <https://artofproblemsolving.com/downloads/printable_post_collections/224119> (where the above question was just reworded)
Some Point Process#3793: Well it has the right expression anyway
Some Point Process#3793: but it keeps going into a loop at low enough temp
Some Point Process#3793: but if the denominator is a bit larger it consistently gets it wrong (afaict) https://cdn.discordapp.com/attachments/729741769738158194/1088533051862163577/image.png
Some Point Process#3793: nvm
Some Point Process#3793: huh, it's p smart
eirai#3591: probably due to year-on-year variation depending on the contest
Some Point Process#3793: https://cdn.discordapp.com/attachments/729741769738158194/1088534769865867364/image.png
eirai#3591: which means then i clearly beat a computer at math! (66/150 on the AMC12 one of the years i tried it i think)
love not attention#5854: I mean it makes so many mistakes here
Some Point Process#3793: https://cdn.discordapp.com/attachments/729741769738158194/1088534830075088906/image.png
Some Point Process#3793: yeah i guess it was off by 1 (missed 13 in the falling factorial)?
Some Point Process#3793: (in a sense, etc)
Some Point Process#3793: but yeah it makes other mistakes as shown above
Some Point Process#3793: > off by 1
~~And misses 14 in 14*13 for denom~~
eirai#3591: like. if correct answers are 6 points and leaving it blank is 1.5 then 60/150 (assuming ideal technique) means _5_ questions correct
Some Point Process#3793: i was being lenient
love not attention#5854: And in the earlier combinatorics question about the club/students, it takes completely the wrong approach
love not attention#5854: it says it's calculating the number of assignments without restriction, but 25 * 24 * 23 implies that the same individual can't hold multiple positions
Some Point Process#3793: did you check the stats/summary at https://artofproblemsolving.com/wiki/index.php/AMC_historical_results
Some Point Process#3793: I think the discrepancy in percentiles was the main difference that was notable between 10/12 (that is, gpt-4's results) unless I'm mistaken :p
Some Point Process#3793: Yeah I didn't mention that but assumed it was obvious :p
love not attention#5854: I think the memorization vs understanding issue is actually super interesting and deserves more attention
Some Point Process#3793: <https://discord.com/channels/729741769192767510/730095596861521970/1087844037492490381>
Some Point Process#3793: ^this would be an interesting/orthogonal fact (i.e. where u might just call it "good next word predictor" entails "good at logic, problem solving, etc")
eirai#3591: oh yea its funny that it has a not-too-awful-but-pretty-bad score on AMC12 and a absolutely-dreadful score on AMC10 was just talkin abt why the percentile ranges are so large
Some Point Process#3793: Oh IC
Some Point Process#3793: Yeah ig
love not attention#5854: we need scaling laws for codeforce easy problems after sept 12 2021 lol
eirai#3591: scaling laws but for contests too obscure to be in the training set
eirai#3591: scaling laws for chinese national olympiad exam
Some Point Process#3793: https://amc-reg.maa.org/reports/generalreports.aspx there are amc/aime/usamo score distributions here
eirai#3591: mhm
Some Point Process#3793: yeah my immediate reaction was the amc type problems were going too far out of distribution. But it still makes simple mistakes in the algebraic "calculations" it seems, such that it might be worth seeing if it gets the right "picture" at least
Some Point Process#3793: etc
eirai#3591: hm from my experience doing the AMC getting at least like
eirai#3591: 5 questions isnt that bad?
Some Point Process#3793: Yeah that's 30+20*1.5 pts if you leave the rest blank ig :p
Some Point Process#3793: So far I can't tell if it's memorizing amc tbh, but i haven't tried too many reworded problems, etc (again, so far..)
eirai#3591: i wouldnt be surprised tbh
eirai#3591: if it fucks up on codeforces i cant see why it woudlnt fuck up on amc
eirai#3591: t. bad at both
love not attention#5854: like maybe a gpt-n is able to solve novel problems that lie within a certain semantic distance of some problem in the train set
eirai#3591: but there are some genuine freebees in here https://cdn.discordapp.com/attachments/729741769738158194/1088540069561892975/image.png
eirai#3591: (also another point against gpt-4 is that the AMC is designed to be done in 75 minutes whereas i assumed they would give the LLM as much time as it needs, im sure most ppl would do way better if given more time)
epic_malloc#3813: I've actually tried to do something similar here: https://github.com/eryk-mazus/xoxo
epic_malloc#3813: plan to add alpaca support next
epic_malloc#3813: because it works really well
Some Point Process#3793: https://cdn.discordapp.com/attachments/729741769738158194/1088543971564851281/image.png
Some Point Process#3793: (reworded from p2 2016amc12)
Some Point Process#3793: it makes other "errors" (some simple mistakes) if I retry etc
Some Point Process#3793: a systematic mistake is adding the exponents correctly to get 4x it seems https://cdn.discordapp.com/attachments/729741769738158194/1088544579659239444/image.png
Some Point Process#3793: how do I sign up to get access to those?
Some Point Process#3793: (i.e. token logprobs and such)
Some Point Process#3793: nvm found it (i think) <https://openai.com/waitlist/gpt-4-api> (I thought i already had access to this tho but it's giving it as an option still)
uwu1#4864: if it's actually moe I wonder how you reverse enginner num experts and such somehow
louis030195#2462: llm calling apis
software 3.0 🙂
lunarflu#6769: 4.0 (its called gpt4)
lunarflu#6769: :rlhf:
louis030195#2462: im not talking about gpt4
Owl#1746: https://github.com/openai/chatgpt-retrieval-plugin#plugins
companioncube#0123: So they connected it to the internet.
Millander#4736: Hey folks! Can I please have a review for this perplexity implementation? Is this the correct way to calculate perplexity for each sequence in a batch? https://gist.github.com/Kyle1668/2df96fee51b139e55464965b3952c08b
alstroemeria313#1694: i think you have to shift the logits because the logit at output position 4, say, predicts the token at position 5
alstroemeria313#1694: what does calculate_perplexity() do?
alstroemeria313#1694: someone else should probably weigh in here but you need to (a) compare the logits for position i for the label for position i+1, (b) in the comparison, normalize the logits (log softmax them) (c) pick out the normalized logit at the index of the label, and negate it (calculate cross entropy) (d) after doing all the comparisons, take the mean of all the cross entropies, (e) exp() the mean cross entropy
alstroemeria313#1694: does this sound right?
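(That recipe written out as a sketch, for a single sequence in PyTorch; batching would just repeat this per sequence with padding masked out.)
```python
import torch
import torch.nn.functional as F

def sequence_perplexity(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    """Perplexity of one sequence. logits: [seq, vocab], input_ids: [seq]."""
    # (a) shift: the logit at position i predicts the token at position i+1
    shifted_logits = logits[:-1]
    labels = input_ids[1:]
    # (b) normalize the logits
    logprobs = F.log_softmax(shifted_logits, dim=-1)
    # (c) pick out the normalized logit at each label index and negate it (cross-entropy)
    nll = -logprobs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    # (d) mean cross-entropy over the sequence, then (e) exponentiate
    return nll.mean().exp()
```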
artem9k#7593: should i read russel & norvig or goodfellow first?
artem9k#7593: or is there a better "AI survey" book
ilovescience#3282: probably goodfellow
androahura#4783: Hi all is there a recruitment focused channel i can post an ad?
asara#0001: (defer to stella's line under mine, no)
StellaAthena#3530: No, we do not allow people to post job advertisements here without prior approval (which is rare)
lunarflu#6769: :rlhf:
SysD | 12many#3843: Hello ! I spoke with the authors of SpikeGpt (binary version of RWKV) and they would be happy to have support from Stability.ai to train models beyond 1 B.
This goes in the direction of easier to implement models on ASICs in the future too.
Thanks for your help in knowing who to contact to help them and all of us with !
https://twitter.com/_akhaliq/status/1630393736923324416?t=wFzNji9keZ3MYlypxTr5Ng&s=19
SysD | 12many#3843: Stability or Eleuther
circuit10#0158: That’s the normal chat-based API though?
StellaAthena#3530: It’s generally considered rude to volunteer a third party to donate tens of thousands of dollars without their knowledge. Please refrain from doing so in the future.
SysD | 12many#3843: No,no, I asked them just beforehand. And thanks for answering. May I bring them here ?
SysD | 12many#3843: Or what is the best way?
StellaAthena#3530: Here’s my impression of this convo so far:
> You: “Hey I told these guys you might give them tens of thousands of dollars! They’re quite excited. Who should I talk to about arranging it?”
>
> Me: “That’s not your decision to make and it’s unfair to both us and them.”
>
> You: “No, you don’t understand. Anyways, how should I continue the convo about all the money you’re going to give them.”
SysD | 12many#3843: I'm sorry but this feels strange.
SysD | 12many#3843: I didn't talk about decisions or money but just how to link them to you.
SysD | 12many#3843: So what the point of being so. I'm nobody in this and don't want nothing. It's a shame to make something simple so convoluted. I mean seriously.
StellaAthena#3530: I’m not sure what you think is “convoluted,” but some of the words in your message are probably missing. I’m sorry we’re having trouble communicating.
SysD | 12many#3843: Ok I'm sorry I'm not native English speaker. I don't want to misrespect you or their work. I talk with them directly, asked if they know you and if they asked for support because I saw before that you may have grants. They answered that they don't know how to ask for these and would be happy to.
StellaAthena#3530: They are welcome to email us (email address on our website) to have a conversation about this.
SysD | 12many#3843: Ok thanks !
HP#2176: does anyone have any resources on the whole "revealed goal vs actual/hidden goal" things in AI alignment
people keep mentioning this as a metaphor in EA discussions (i kinda get what they are saying) but it continues to bother me that I haven't read up on like forum posts with this explicitly
jrowe#5371: https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html
jrowe#5371: Instruct tuned GPT-J 6b
Fleetwood#1949: lol massive company finetuning GPTJ and patting themselves on the back
uwu1#4864: @Louis I was just messaging you about making this yesterday lol
jrowe#5371: hopefully they share it
uwu1#4864: they said contact them
uwu1#4864: but their dataset should be pretty easy to get
uwu1#4864: the thing is, the self instruct datasets are now known to be kinda trash
uwu1#4864: but still effective in eliciting instruction following
jrowe#5371: so "Contact us at [email protected] if you would like to get access to the trained weights. "
jrowe#5371: hows that play with GPT-J licensing?
uwu1#4864: but I bet they're more prone to hallucinations and stuff, and evaluating against chatgpt/text-davinci-003 is kinda wrong since they're trained on data from those models so presumably if they make the same mistake dv3 will still think it's good lol
Louis#0144: @uwu1 can we just finetune neox on flan v2
Louis#0144: Lol
uwu1#4864: @conceptofmind said there's some issue with the data
conceptofmind#0957: Many
conceptofmind#0957: issues
conceptofmind#0957: Trying to get it resolved
StellaAthena#3530: GPT-J is permissively licensed, and gives users the right to relicense derivative products as they see fit
conceptofmind#0957: last week and this week
jrowe#5371: cool, this is nice to see a little action then
uwu1#4864: i bet toolformed self instruct datasets would be way better 😳
main#7610: are there any existing projects that do that?
conceptofmind#0957: https://github.com/conceptofmind/toolformer
conceptofmind#0957: With @uwu1 and @dmayhem93
conceptofmind#0957: Dakota and I are looking into self instruct now as well
main#7610: thanks. i'm working with a rag-tag group of newbies that are trying to build an alpaca-like dataset, but with new kinds of instructions, including toolformer use.
we have absolutely no idea what we're doing and the data_generation pipelines will be extremely helpful for getting our shit together
conceptofmind#0957: Alpaca dataset was heavily contaminated
main#7610: yes, it was. is there anything specific we should note to avoid that?
conceptofmind#0957: I think there is an effort by Tloen or a group with them to create better data
conceptofmind#0957: let me link the issue
uwu1#4864: maybe would be worth trying to 0-shot toolformer to correct alpaca
conceptofmind#0957: I am down to try anything
conceptofmind#0957: https://github.com/tloen/alpaca-lora/issues/65
uwu1#4864: https://github.com/gururise/AlpacaDataCleaned
uwu1#4864: this lists some of the issues
main#7610: thanks guys
vikasp#7540: Did you end up integrating flash attention with gpt-j? https://github.com/conceptofmind/toolformer/blob/main/flash_attention/flash_attention_gptj_wrapper.py . I ask because gptj has a head dim of 256, and flash attention only supports 128 max due to GPU transfer limitations. I'm curious if you found a way around this. If so, how did it perform?
vikasp#7540: Also, did you use a long context version of GPT-J? The repo you linked to (https://github.com/kyleliang919/Long-context-transformers) doesn't seem to support it. Thanks
conceptofmind#0957: We did not use it
conceptofmind#0957: We're having issues with z3 and open delta lora (open delta does not work) with Flash Attention as well
conceptofmind#0957: Just did standard gpt-j
conceptofmind#0957: I do not believe the CUDA Flash Attention library supports the gptj rope embeddings used either. Only the NEOX ones.
conceptofmind#0957: These are just all of the wrapper tests in the repo
vikasp#7540: Tri Dao recently added the fused GPT-J style encoding - https://github.com/HazyResearch/flash-attention/commit/e45a46a5b767d76e14c76e4bfac408b7cf94d896 .
conceptofmind#0957: Ok. That resolves that issue at least
kremlin#1337: anyone remember that goofy gpt2-simple project (https://github.com/minimaxir/gpt-2-simple) ? it was used to create that viral "synthetic @dril" twitter account by finetuning GPT-2 with dril tweets.
kremlin#1337: I'm trying to do something similar with 3, op*nai has a fine tuning API, and I have a lot of IRC history/etc/etc
kremlin#1337: My idea was to just take messages I've sent (in pursuit of fine-tuning a model to generate text "like i would"), use the first half of it as the prompt and the rest as the completion
kremlin#1337: that make sense?
kremlin#1337: or is that not the right way to go about this
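(For reference, the fine-tuning endpoint at the time took a JSONL file of prompt/completion pairs; a sketch of building one from a message history by splitting each message in half, as described. The trailing space on the prompt and the newline stop on the completion are common conventions, not requirements, and the function/file names are made up.)
```python
import json

def messages_to_finetune_jsonl(messages, path="finetune.jsonl"):
    """Write prompt/completion pairs: first half of each message as prompt, rest as completion."""
    with open(path, "w") as f:
        for msg in messages:
            words = msg.split()
            if len(words) < 4:  # skip messages too short to split meaningfully
                continue
            half = len(words) // 2
            record = {
                "prompt": " ".join(words[:half]) + " ",              # trailing space is a common convention
                "completion": " " + " ".join(words[half:]) + "\n",   # leading space + newline as stop
            }
            f.write(json.dumps(record) + "\n")

# messages = ["...my IRC messages..."]; messages_to_finetune_jsonl(messages)
```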
skyler1#6603: Hello! Can i ask if GPT Neo is GPT-2 released under a different name? As in, is the underlying architecture the same?
jrowe#5371: <https://github.com/EleutherAI/gpt-neo>
> "An implementation of model & data parallel GPT3-like models using the mesh-tensorflow library."
jrowe#5371: it is a gpt-3 style architecture, with some parts of gpt-2 tokenizers and code brought along because it worked. Check out that repo, the EleutherAI site, and huggingface.co if you want to download models or check out variations and what people have done
alstroemeria313#1694: cursed activation functions: gated snake
Millander#4736: Thank you for the suggestion! I'll give this an update 🙂
Crispy#7375: IS openAI likely to continue providing a freemium tier of GPT products moving forward? Or is 4/5/6 going to be forever paywalled?
genetyx8#7543: < exp(1/abs(x))
kremlin#1337: i mean 3 is where it became infeasible for almost everyone to fit in vram
kremlin#1337: https://glaze.cs.uchicago.edu/index.html — would i be wrong in thinking the relatively expensive masking they’re doing here could be cheaply undone with a similarly-imperceptible noise overlay?
kremlin#1337: eg gaussian noise layer over image with <1% opacity.
Ryu#0274: https://github.com/lllyasviel/AdverseCleaner/blob/main/clean.py
Ryu#0274: ```python
import numpy as np
import cv2
from cv2.ximgproc import guidedFilter

img = cv2.imread('input.png').astype(np.float32)
y = img.copy()

for _ in range(64):
    y = cv2.bilateralFilter(y, 5, 8, 8)
for _ in range(4):
    y = guidedFilter(img, y, 4, 16)

cv2.imwrite('output.png', y.clip(0, 255).astype(np.uint8))
```
kremlin#1337: so yeah, i guess i’m missing something
kremlin#1337: that is what i expected. it just normalizes a touch.
kremlin#1337: i don’t understand why this didn’t occur to the authors of the paper from that site i linked.
voxs#0001: imo openai can save costs by publicly releasing weights on torrent 😂
Crispy#7375: But I can't run that many weights on my crappy rtx bro
voxs#0001: something something model compression
voxs#0001: cloud gpus
Crispy#7375: But I don't wanna use the "cloud" 😿
kd90138#9368: https://openai.com/blog/march-20-chatgpt-outage
kd90138#9368: >We took ChatGPT offline earlier this week due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history
mahouko#7043: any tips on how to change the loss function on my latent decoder, to reward getting the saturation right? https://cdn.discordapp.com/attachments/729741769738158194/1088980816253685891/image.png
mahouko#7043: left = true decode of latents by VAE
right = approx decode of latents via a wee neural net
mahouko#7043: the approx latent-to-RGB decoder is just:
```
class Decoder3(Module):
    lin: Linear

    def __init__(self, inner_dim = 12) -> None:
        super().__init__()
        self.lin1 = Linear(4, inner_dim)
        self.nonlin = SiLU()
        self.lin2 = Linear(inner_dim, 3)

    def forward(self, sample: Tensor) -> Tensor:
        sample: Tensor = self.lin1(sample)
        sample: Tensor = self.nonlin(sample)
        sample: Tensor = self.lin2(sample)
        return sample
```
mahouko#7043: the loss function I'm training it with is:
```
def loss_fn(input: FloatTensor, target: FloatTensor) -> FloatTensor:
    return l2_loss(input, target) + 0.1 * (input.abs().max() - 1).clamp(min=0)**2
```
mahouko#7043: the abs.max() thing is me trying to reward it for staying within the range ±1, rather than relying on the fact that I'd clamp it afterward
mahouko#7043: I did experiment with mixing in `0.05 * l1_loss(input, target)` to help it get closer to pixels it was already close to, but it didn't help
AI_WAIFU#2844: have you tried discretizing the input space and outputting logits instead?
mahouko#7043: nope; trying to understand what that means.. discretizing to what quanta?
AI_WAIFU#2844: take your 0 to 1 space in RGB, slice it up into 256 bins, then predict the bin
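(A sketch of that setup: quantize the target RGB values into bins and train with cross-entropy over per-channel logits. The shapes and bin count here are just illustrative, not a drop-in replacement for the decoder above.)
```python
import torch
import torch.nn.functional as F

def binned_rgb_loss(logits: torch.Tensor, target_rgb: torch.Tensor, n_bins: int = 256) -> torch.Tensor:
    """Cross-entropy over discretized RGB values.

    logits:     [batch, 3, n_bins, H, W] -- one categorical distribution per channel per pixel
    target_rgb: [batch, 3, H, W] with values in [0, 1]
    """
    # map each target value to its bin index
    bins = (target_rgb.clamp(0, 1) * (n_bins - 1)).round().long()  # [batch, 3, H, W]
    # cross_entropy expects class logits in dim 1, so fold the channel axis into the batch
    logits = logits.flatten(0, 1)  # [batch*3, n_bins, H, W]
    bins = bins.flatten(0, 1)      # [batch*3, H, W]
    return F.cross_entropy(logits, bins)
```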
mahouko#7043: Oh, interesting
mahouko#7043: ah, but I actually want it to be possible for it to return out-of-range outputs if it receives out-of-range inputs
mahouko#7043: Latents with a reasonable dynamic range should get decoded to RGB values within range ±1 (or something that can map to 0-255)
mahouko#7043: but latents with unreasonable dynamic range (i.e. created via high CFG scale) should end up exceeding the safe RGB range
mahouko#7043: I want to use that understanding of the trend, to predict during denoising, whether my latents are gonna exhibit clipping artifacts when they get decoded to RGB
mahouko#7043: actually I think I see the problem
mahouko#7043: telling it "try to stay in-range" incentivizes it to avoid fully-saturated colours
mahouko#7043: I also wasn't very discerning about how to reduce per-image losses over the batch
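(One way to make both loss terms per-image before averaging over the batch — a tweak of the loss above, not a recommendation: use a per-sample amax for the out-of-range penalty instead of one max over the whole batch.)
```python
import torch

def per_image_loss(input: torch.Tensor, target: torch.Tensor, range_weight: float = 0.1) -> torch.Tensor:
    """input/target: [batch, 3, H, W]. Returns the mean over per-image losses."""
    # per-image MSE instead of one global mean
    l2 = (input - target).pow(2).flatten(1).mean(dim=1)               # [batch]
    # per-image out-of-range penalty (the original used a max over the whole batch)
    overflow = (input.abs().flatten(1).amax(dim=1) - 1).clamp(min=0)  # [batch]
    return (l2 + range_weight * overflow.pow(2)).mean()
```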
conceptofmind#0957: @uwu1 Going to upload all of them as they process: https://huggingface.co/datasets/conceptofmind/flan_cot_submix
MathisNoiz#2451: https://i.imgur.com/p9D8drD.mp4
0scur0#0978: hi hello just joined i'll keep my question brief im sure everyone who just joins asks the exact same question just looking for pointers. been using gpt3 chat through a proxy to maintain anonymity and its as im sure it is for many, part of my development workflow now, getting snippets out of is amazing. I also enjoy exploring metaphysical and philosophical experiments with it, as im sure everyone does. I like this and it works for me, I'd like to have full freedom to it without the yucky western neoliberal morals of openai putting handrails all over the place its a drag trying to talk taboo things for adults with it and having it treat me condescendingly like a child, so I'm looking everywhere online for the closest answer to "gpt on laptop now cheap free" lol, i'm aware of tcps and the sizes of everything im a veteran developer but, just until last year this caught my attention and just until now do I think I need to host my own solution on my own infra because i'm very much ideologically against corporate monopolization and surveillance capitalism of this tool for humanity, im guessing this is the most "stallman-esque" space for AI so im guessing this is the right place to ask
0scur0#0978: also yea, by hosting on own infra i dont mean consuming an api on my infra, but being the api itself, OR, is there a grassroots group with this intentions as well?
0scur0#0978: (i am aware this is covered in the faqs, and the models themselves can give me answers pointers and instructions as well, u can extrapolate im guessing that if i ask humans with experience on it there is a reason for that despite the reddit/stackoverflow/discord made up etiquette)
AI_WAIFU#2844: ok
1. yes we get this pretty often
2. we have a rule specifically against asking these kind of questions (rule 3)
3. checkout llama.cpp it's probably the best you can do rn for laptop inference
0scur0#0978: 🙏 pointers != technical support, so thanks I'll check out llama 🙂
AI_WAIFU#2844: > or beginner questions
0scur0#0978: 🙃 thank you very much for your hospitality and sharing your knowledge
0scur0#0978: how's the weather up there in the ivory tower?
0scur0#0978: aight, wishing yall success on this
0scur0#0978: 👋
kremlin#1337: anyone have any thoughts on this ? ^
ayushkaushal#1786: You can also check out cformers (https://github.com/NolanoOrg/cformers/) - It supports OpenAssistant's Supervised-Finetuned model at Int4 quantization (works at 11 tokens/sec on M1 with about 7.5 GB RAM). You can interface the fast C implementations via Python and easily switch between models. It also automatically manages downloads. And the models are already compressed to int4.
Disclaimer: I am the author.
BoneAmputee#8363: reminds me of cfoster :goose15:
kd90138#9368: Is avx (especially 512) relevant to this workload?
ayushkaushal#1786: I haven't tested explicitly on that hardware. But it should work since it has mostly the same backend as llama.cpp
greencube#6725: Hey everyone! How's everyone doing today? What's the most interesting thing that's happened to you this week? Let's start a conversation and get to know each other better!
alstroemeria313#1694: hey, what's the best way to load MNIST and CIFAR-10 in JAX?
epic_malloc#3813: beginner level question regarding llm training/fine-tuning - are these things typically trained for longer than 1 epoch ?
AI_WAIFU#2844: jnp.array?
alstroemeria313#1694: right but where do you get the data from, how do you store it on disk?
alstroemeria313#1694: i have it in a pickle but i don't want to distribute pickles if i can help it
ww#4092: @alstroemeria313
```
import tensorflow_datasets as tfds

img_train = tfds.as_numpy(tfds.load(
    'cifar10',
    split='train',
    batch_size=-1
))
```
alstroemeria313#1694: they allow arbitrary code execution
alstroemeria313#1694: oooh
Ryu#0274: https://jax.readthedocs.io/en/latest/notebooks/neural_network_with_tfds_data.html#data-loading-with-tensorflow-datasets
AI_WAIFU#2844: Yeah was about to say you can either do it the old way or use pt/tf datasets
alstroemeria313#1694: i was wanting to avoid pt
alstroemeria313#1694: hmm, what pip package is tensorflow_datasets in?
alstroemeria313#1694: does it come with tensorflow?
Ryu#0274: tensorflow-datasets
alstroemeria313#1694: ahhh ty
Ryu#0274: https://www.tensorflow.org/datasets/overview
Ryu#0274: tfds-nightly for latest versions of datasets apparently
Ayo#9564: https://youtu.be/3H_zHz-gkOI
uwu1#4864: the original data is just raw bytes
uwu1#4864: so you can np.frombuffer it
alstroemeria313#1694: ah
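(For the raw MNIST IDX files specifically, that looks roughly like this: a 16-byte header on the image file and an 8-byte header on the label file, then uint8 data. The file paths are assumptions.)
```python
import gzip
import numpy as np

def load_mnist_images(path: str) -> np.ndarray:
    """Read an MNIST IDX image file (e.g. train-images-idx3-ubyte.gz) into [N, 28, 28] uint8."""
    with gzip.open(path, "rb") as f:
        raw = f.read()
    # 16-byte header: magic number, count, rows, cols (big-endian int32 each)
    return np.frombuffer(raw, dtype=np.uint8, offset=16).reshape(-1, 28, 28)

def load_mnist_labels(path: str) -> np.ndarray:
    """Read an MNIST IDX label file into [N] uint8 (8-byte header: magic number, count)."""
    with gzip.open(path, "rb") as f:
        raw = f.read()
    return np.frombuffer(raw, dtype=np.uint8, offset=8)
```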
login#7229: A too strong AI is like running a fork bomb on the simulation device lol
login#7229: that's quite funny when you really think about it
login#7229: we're just bricking the simulation device
login#7229: don't you think that decentralizing AI could be like giving sudo rights to the whole population? would you trust them to keep your files secure?
epicx#7921: Do models count as "arms" as United States forefathers meant; ie dangerous tools to keep a government in line with the will of the people?
Do Active Inference/AGI machines fall under the United States 2nd amendment as arms?
login#7229: do you want anyone to get to be able to run a fork bomb ?
login#7229: "However, it is important to understand the potential harm that fork bombs can cause, as they can crash a system if executed improperly. To prevent a fork bomb attack, it is recommended to limit the number of processes a user can create on the system."
login#7229: wise words
epicx#7921: I don't like all the ninnies we let drive, but we still licence drivers
epicx#7921: Critical vulnerability should be treated as such through the CVE systems in place today
fam#7925: hey, i am trying to build an open network with and without token gates, how do i get in contact with EMAD himself
fam#7925: essentially, our library tries to make anything in python public and accessible from anywhere, using grpc atm. So each python object is an API and would have a name associated with its public ip and port, like a DNS for python objects lol
DigThatData#7946: https://github.com/typst/typst
Ryu#0274: almost 10k stars 5 days after open sourcing (and open beta)
DigThatData#7946: right?
DigThatData#7946: i think part of it might be a segment of the community that's like "I see rust, i star."
Ryu#0274: that or everyone hates latex :berk:
AI_WAIFU#2844: It definetly feels like social media networks have gotten a lot more efficient at getting information around quickly
AI_WAIFU#2844: Something new comes up and everyone knows about it almost instantly
AI_WAIFU#2844: or at least, everyone plugged in
DigThatData#7946: comes at a cost though
DigThatData#7946: everyone's attending to the same research
AI_WAIFU#2844: yep
AI_WAIFU#2844: which reminds me, I should get around to coding that arxiv scraper
AI_WAIFU#2844: with GPT-4 it might actually end up being useful
DigThatData#7946: there must be several you could just use off the shelf by now
AI_WAIFU#2844: like the basic code yes, but to have it pull up good papers?
Hyperion#0575: at this point research advantage is gained purely by being plugged in to research gossip enough that you know what stuff is being worked on before it is published :soweary:
Hyperion#0575: (hyperbolic, of course, but this becomes more true the more efficient information transmission is)
DigThatData#7946: i was thinking less about academic advantage and more about valuable work getting drowned out by the noise https://twitter.com/sarahookr/status/1610807528048431104
Hyperion#0575: Good point yeah
Although it's interesting that the mean age of work cited graphs are going up there
I have to imagine that in ML that graph looks like a strong trend towards 0 years
KublaiKhan1#6681: Isn't this also as always an issue with the review process
KublaiKhan1#6681: If you aren't citing popular papers then your own papers are less likely to be accepted
KublaiKhan1#6681: Reviewers are looking implicitly, or maybe explicitly at those signals
well#8215: Way more than 1
kd90138#9368: does anybody have any experience with the PEFT library?
https://github.com/huggingface/peft
zphang#7252: mildly
kd90138#9368: have you tried the CPU offloading as shown in this table? https://cdn.discordapp.com/attachments/729741769738158194/1089457151640088697/image.png
zphang#7252: ah, I have not
zphang#7252: though that feels like it's coming more from accelerate than peft?
Elad#7318: Hi, Is anyone familiar with any dataset which contains instructions and a ReAct pattern response? I was only able to find very small number of examples, I'm searching for a dataset big enough for fine-tuning
Pandora's Box#0307: Is there programmers in the audience? I wrote code using GPT.
Basically a GUI for fast whisper.
I'm not a programmer but it works but needs some human corrections.
Are there any volunteers? It's one .py file
genetyx8#7543: This server is specifically not aimed at beginners in ML/Programming. For this kind of question you should probably check the servers in #communities
kostermw#7940: Has the syntax for the the-faraday-cage changed? I cannot get it to run anymore. It used to be dot commands. All output I see is that it is initializing: using CLIP+VQGAN. initializing gpu0...
deseros#1296: Is this something that could be synthetically generated like in Alpaca? I see a need for more community instruction datasets for large contexts, reasoning patterns, philosophy, etc. The best way I see is to put out a call on the different Discords of projects that would help with such efforts
Hyperion#0575: Workshops are out! https://icml.cc/virtual/2023/events/workshop
Get your Hawaii submissions ready 😄
Elad#7318: Actually the requirement for such a dataset is because, from my experience, Alpaca's performance with ReAct prompting (using few-shot) is not good at all compared to OpenAI models (even davinci).
paws#3311: Not really, but NLP has had a general disease of being leaderboard chasers
nitehawk#9164: is anyone here using the gpt-neox models in production in any capacity?
zphang#7252: I believe NovelAI uses a tuned version of GPT-NeoX-20B in production
Xuovo#6242: What are the competing architectures for machine learning models? If encoder-decoder is one, what are the others?
A Ravioli#7085: Anyone interested in starting a project to replace the transformer as the best LLM architecture with something completely new?
zukaboo#8804: RWKV and Hyena already exist.
AI_WAIFU#2844: rule 3
Xuovo#6242: Some of the current architectures may not work for some cognitive tasks in some industries, that's the interesting thing
A Ravioli#7085: Not familiar with Hyena... maybe you could send me the link for that. Looking deeper at RWKV doesn't seem to show any scalability past where it already is. I'd imagine at the least some architectural changes would be needed to bring it to a reasoning scale akin to the more recent GPT iterations. I have a friend doing research at MIT that invented a recurrent network architecture that basically improved upon the neural ODE. I think that expanding something like that could bear fruit for maximizing performance/parameter.
A Ravioli#7085: His PhD work created a network that was better than all RNNs, RL Agents, and Transformers at a wide variety of tasks, and it did it with a fraction of the parameters and as such a fraction of the training time.
Xuovo#6242: What I'm doing is working backwards from a domain (in this case, law practice) to the current architecture in order to find gaps in the current architecture
zukaboo#8804: https://hazyresearch.stanford.edu/blog/2023-03-07-hyena
A Ravioli#7085: I'm thinking of a new architecture entirely though. Not modifications to older models to try and keep them relevant. Not to put down the ground-breaking work they're doing, but their projects aren't reason to try and improve.
artem9k#7593: if this was true, we would all know about it by now
A Ravioli#7085: A lot of people do, most of the AI professionals I know love the model. His presentation for it is the most viewed MIT Center for Brains Minds and Machines video available.
A Ravioli#7085: I didn't say it was perfect, I think the way he set it up has a lot of flaws
A Ravioli#7085: Which is part of why it didn't hit super mainstream
A Ravioli#7085: But it's pretty big in the research industry, and I think with some modifications, the base concept is really strong.
A Ravioli#7085: Maybe I'm mistaken though 🤷
synquid#7193: LTCs have better performance than transformers? that doesn't really sound right to me
StellaAthena#3530: @A Ravioli what’s it called
synquid#7193: I assume he means liquid time-constant networks
synquid#7193: based on his description
A Ravioli#7085: Liquid Neural Networks in general
A Ravioli#7085: Not for NLP, but for other time-series or other tasks.
A Ravioli#7085: I think it would need to be modified heavily for NLP
A Ravioli#7085: But the general concept of differential 'mental flexibility' for networks is really fascinating for me personally and I think it could do well.
A Ravioli#7085: Some of their recent tests with it haven't been published yet but are really mind blowing
artem9k#7593: the talk is pretty interesting
artem9k#7593: was it ever tried for nlp?
A Ravioli#7085: Not entirely sure, I could ask him
A Ravioli#7085: I'm seeing him at a conference in a few weeks, but I can probably text him
A Ravioli#7085: Lex Fridman and Sam Altman are also coming, v excited
Maximum Limelihood Estimator#8915: Oh, yeah, LNNs aren't going to work. Electronics break if you get any liquid on them
A Ravioli#7085: The shorter TED talk might give you a brief view of some of its capabilities, and the actual talk will explain more about how it works. Won't lie, after talking with him more I think the explanations available online grossly misrepresent how they work.
jrowe#5371: Isn't hyena the evolution of liquid time constant neural networks being chased out to scale?
A Ravioli#7085: I don't really think so, and it doesn't appear that way to me, but maybe you're seeing something that I'm missing. Care to elaborate?
jrowe#5371: Sure; driving at the moment, will be stationary soon
jrowe#5371: I think I got structured state spaces mixed up with ltcnn, trying to find where I saw the overlap
A Ravioli#7085: I think liquid time-constant transformers would be a really cool idea though
A Ravioli#7085: Lots of potential
jrowe#5371: Yup, I think you're right
A Ravioli#7085: Would love to build that with anyone that's interested
jrowe#5371: <https://arxiv.org/abs/2209.12951> I think this is where the intersection was
jrowe#5371: And hyena is chasing ssm, not the ltc aspect
A Ravioli#7085: sorry, ssm?
jrowe#5371: Structured state space sequence model
A Ravioli#7085: Won't lie, not too familiar with the term.
jrowe#5371: I think the acronym is state space model at this point, but not sure
A Ravioli#7085: reading this
A Ravioli#7085: https://hazyresearch.stanford.edu/blog/2022-01-14-s4-1
jrowe#5371: Iirc, ltc was abandoned intentionally to pursue state spaces , there *might* have been a "this is better than ltc" replacement, but I'll have to chase down the conversation where that happened
jrowe#5371: It might have been a choice based on available hardware or intended use as opposed to anything wrong with ltc
A Ravioli#7085: Interesting
jrowe#5371: It was a performance concern- discord search on mobile is garbage, but there are discussions on this discord about it going back to late 2021. Ltc was considered for hyena/ h3
A Ravioli#7085: Ah wow, didn't know that was even discussed here
jrowe#5371: Their current approach outperformed ltc and other options on the first run without finagling
jrowe#5371: Which I think might have lost the interesting thing about LTCs, but I haven't brought myself up to speed on the project
A Ravioli#7085: I see, well considering they reverted to old models, I'd imagine throwing out the LTC entirely isn't on the table yet. I'm still trying to think through what the right questions to ask are in order to get to a place where we could replace the transformer for a scalable LLM architecture.
A Ravioli#7085: Maybe LNNs aren't the way, I wonder what could be though
jrowe#5371: I think you need a "self" model for reasoning about reality, having a ground truth reference, and you also need real time learning. Transformers can do that within the limits of a prompt context, but can't hold on to anything novel
jrowe#5371: By self, I mean some notion of a model that knows what it knows, not personhood or sentience
A Ravioli#7085: Self-awareness?
jrowe#5371: Update hallucinations to a model that has incorrect beliefs
jrowe#5371: At a basic level, just to contextualize reasoning
Xuovo#6242: I think a new type of model for law practice could work, right now nothing off-the-shelf or even fine-tuned is close to replacing lawyers
Xuovo#6242: If that was built, you could collapse an entire industry, honestly
Xuovo#6242: I'm trying to do some of this now
jrowe#5371: So when a model is queried it has the capability of saying "I don't know"
jrowe#5371: Instead of "statistically, here's the bit where I make shit up"
jrowe#5371: My first impression of LTCs was that they might offer a performant way of maintaining a complex self model in memory, in relation to deeper networks, an implicit division between variable term sequences
jrowe#5371: You'd need attention or an equivalent mechanism, and that's about where my intuition peters out
ILmao#5683: I'm also surprised there hasn't been more work on them. Part of the disconnect may be that they were mostly applied on continuous inputs/time series before and text is currently the flagship modality everyone pays attention to?
A Ravioli#7085: They were used in RL a lot and made rly rly good FSD and Full Self Flying systems as well
neko#5937: What gpu does the #the-faraday-cage use
neko#5937: For diffusion
StellaAthena#3530: A40
neko#5937: Ty
goblin_gains#6688: > The "discovery" that fixed time steps for ODE work better in this case is very well known in the optimal control literature (at least by a couple of decades).
> Basically if your ODE solver has adaptive time steps, the exact mathematical operations performed for a given integration time interval dT can vary because a different number of internal steps is performed. This can have really bad consequences on the gradients of the final time states.
> There's plenty of theoretical and practical discussion in Betts' book Practical Methods for Optimal Control, chapter 3.9 Dynamic Systems Differentiation.
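(A minimal toy sketch of the fixed-step point above, in PyTorch; this is an illustrative example, not code from the thread or from Betts: with a fixed number of Euler steps the same operations are performed for every integration interval, so gradients through the final state stay consistent.)
```python
import torch

def euler_fixed(f, y0, t0, t1, n_steps):
    # Fixed step count: the same ops run for every integration interval,
    # unlike an adaptive solver whose internal step count can change.
    dt = (t1 - t0) / n_steps
    y, t = y0, t0
    for _ in range(n_steps):
        y = y + dt * f(t, y)
        t = t + dt
    return y

# Toy linear ODE dy/dt = -a*y; differentiate the final state w.r.t. a.
a = torch.tensor(0.5, requires_grad=True)
y0 = torch.tensor(1.0)
y_final = euler_fixed(lambda t, y: -a * y, y0, 0.0, 1.0, n_steps=100)
y_final.backward()
print(y_final.item(), a.grad.item())
```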
A Ravioli#7085: the model works 🤷
goblin_gains#6688: Michael Jordan is right that lots of young people would stand to benefit greatly if they took a step back from neural nets and explored the broader world of optimization, control theory, etc.
A Ravioli#7085: Michael Jordan?
goblin_gains#6688: https://www2.eecs.berkeley.edu/Faculty/Homepages/jordan.html
goblin_gains#6688: He was Andrew Ng's advisor and is a legend in the ML field
A Ravioli#7085: Ah, no relation to the other Michael Jordan
lunarflu#6769: Good job @𓅬 gabriel_syme 𓅬 🦾🦾
A Ravioli#7085: V strange that I've never heard of him
A Ravioli#7085: Especially after seeing the range of accomplishments, and that he agrees with me on the field of AI overall.
A Ravioli#7085: Congrats Gabriel!
lunarflu#6769: @neko I saw what you deleted
lunarflu#6769: https://cdn.discordapp.com/attachments/729741769738158194/1089788471318683719/image0.gif
neko#5937: i asked why everyone was congratulating gabriel
neko#5937: but then i gave up
lunarflu#6769: :goose10:
neko#5937: yes just like that
𓅬 gabriel_syme 𓅬#3220: Ty 🙂
technium#5048: Very interesting paper
Layl#8888: This paper is extremely interesting in applying LLM techniques in ways interpretable by other systems, and I'll definitely be making some time to give it a good read. Smaller models being usable for this is particularly interesting for realtime applications
yae miko fanclub#4822: Okay but it's not really cool to spam at everyone when you have a channel for announcements @StellaAthena
yae miko fanclub#4822: The channel says announcement.... Makes it illogical to use at everyone
neko#5937: you can mute the everyone in your settings
makya#2148: It does. Called Krake.
Dri0m#3828: anyone interested in a 4k triplet dataset? absolutely not open sourceable because it's from bluray rips
Dri0m#3828: but i can share if you want to experiment
Dri0m#3828: 130k 4k triplets
Kasper Rasmussen#8775: Does anybody know if there is a small subsample (like a few megabytes) of The Pile validation set available somewhere? I'm just trying to save a bit of time.
baidicoot#9673: Write a script
baidicoot#9673: It may well be faster
baidicoot#9673: Than waiting for a response
JustAnOkapi#8658: agi is just a model you can fine tune to anything
omg then if you can get one to finetune another with the knowledge it learned
StellaAthena#3530: Is the actual validation set too large for you to download?
Kasper Rasmussen#8775: I just had some problems installing zstd thing on my decade-old laptop, but I did it on Colab so I have it now. Thanks
goblin_gains#6688: https://twitter.com/rowancheung/status/1640430038985199616?s=46&t=zM8WL3IBOgX2I1IgRVSx_w
zphang#7252: I wonder how many "google is doomed" folks are willing to put money on it and actually short the stock or buy put options
Sphinx#2092: That would require actually doing something.
dmayhem93#3202: Well they'd probably just go all in microsoft instead, shorting/buying puts this early is a great way to lose money even if you're right :berk:
Untouch#9150: im up for spending 10 cents whenever I want to google something
Untouch#9150: :honkies:
AI_WAIFU#2844: this assumes their stock will go down far enough fast enough to justify the short. Even if google was completely doomed, I would expect they keep lumbering along and their stock moves with the market for a very long time, possibly decades if something else doesn't do us all in first. Remember there are companies that go entire decades losing money and don't go under. Google is in fact so big I wouldn't be surprised if it got treated like Boeing or Bear Stearns, with the government stepping in to prop things up.
artem9k#7593: for most power users it would make sense
zphang#7252: "the declining position of google" is quite different from "the death of google" though
CarsonPoole#0640: I would love to buy long dated put options on Zapier. OpenAI is like 3-6 months away from sherlocking them
AI_WAIFU#2844: unless they just get straight up bought out
AI_WAIFU#2844: Could totally see MSFT doing that
Some Point Process#3793: How hard is it to train something like chatgpt tho (seems like msft "stole goog's thunder" in terms of outlook etc)
CarsonPoole#0640: Zapier founders would get a large payday then as their pref stack is tiny after only raising a series A
Some Point Process#3793: tho today was most likely just fluctuations/profit taking as ppl say here: <https://finance.yahoo.com/quote/GOOG/community?p=GOOG>
kd90138#9368: You did describe the mainly tpu oriented infrastructure internal researchers and engineers work with.
If and when Google decides such, how easy/difficult do you think it would be for them to exercise the nuclear option? By nuclear I mean properly open sourcing at least the well-known generative models, foundational models, etc.
(Imagen, Parti, Muse, Bard, Flamingo, LaMDA, PaLI, PaLM-E, and more)
kd90138#9368: Btw that list and Google's capability to produce them is why I don't think they are going anywhere soon
kd90138#9368: I haven't even gone into USM, ViT-22B, and so on and so forth
zphang#7252: what list was this
Maximum Limelihood Estimator#8915: That’s mostly only true for uncovered options. You can hedge
Maximum Limelihood Estimator#8915: Sherlocking?
CarsonPoole#0640: https://www.howtogeek.com/297651/what-does-it-mean-when-a-company-sherlocks-an-app/
CarsonPoole#0640: Apple did it a ton during the early days of the App Store (like when Flashlight apps were a thing)
dmayhem93#3202: Having long dated put spreads sounds terrible in this environment when the money printer might be going on again
Maximum Limelihood Estimator#8915: Having long dated put spreads sounds terrible in general
Maximum Limelihood Estimator#8915: Stonks tend to go up
dmayhem93#3202: Yeah which is why I said they'd just buy MSFT instead
Maximum Limelihood Estimator#8915: True. But also I recommend buying Google instead, because everyone is freaking out about how Google is doomed
Hawk#1399: Buy Google and Microsoft, can't lose
Hawk#1399: https://tenor.com/view/feel-me-think-about-it-meme-gif-7715402
Maximum Limelihood Estimator#8915: TBU
seon#5448: mmm so ai is becoming big
seon#5448: not as big as like mobile phones but def gonna be game changing in the future
seon#5448: The problem is, even though im in the field, what am i supposed to do with this info
seon#5448: Buy ai stocks??
seon#5448: Try subbing to neurIPS?
seon#5448: someone gib me guidance lmao
seon#5448: if you were back in 2010 you'd buy bitcoin and make a living ,but this is closer to when fb first launched ig (?)
seon#5448: I cant really think of anything to be "ahead of the curve"
--=ASTRO=--#6992: @seon I've been struggling with the same question. If you do figure it out, please let me know lol
seon#5448: its like
seon#5448: Theres clearly research gaps
seon#5448: If ur deep in the field u can patch those research gaps yourself,and "hopefully" make a name for yourself
seon#5448: Some good examples are audio based AI, an algo to help ai be less glitchy for videos, simplest would be to weigh it so it heavily weighs the previous state of its own output
seon#5448: dont quote me on that tho lul
seon#5448: On paper and in practice are two differnt beasts
Maximum Limelihood Estimator#8915: ???? Automating every job in existence is less important than iPhones? What?
AI_WAIFU#2844: > the dissassembly of the galaxy is not as big as the iphone
wat
--=ASTRO=--#6992: Maybe that should be my move towards making fat stacks 💵 with A.I. Find a job and automate it completely
Maximum Limelihood Estimator#8915: That’s a job that has already been automated away
--=ASTRO=--#6992: That's very meta
seon#5448: thats still a few decades away
seon#5448: but like i meant like everyone has a phone
seon#5448: even your old boomers
seon#5448: it only took a decade for everyone to get a phone, prob even less.
Maximum Limelihood Estimator#8915: That’s like a year or 2 away from being possible and 3-4 years from being done (for everything except manual jobs ofc)
seon#5448: when u mean manual u mean like
seon#5448: checking isles in a grocery store?
seon#5448: banking?
seon#5448: i def get it for things like gaming,movies,art etc,even the sciences and research but that still leaves a massive amount of blue collar work (dont quote i cant find numbers for either blue or white lmao)
Ayo#9564: GPT-4 has something like tens of millions of users or 100M+ of users
Ayo#9564: are all those prompts and interactions going to boost GPT-4's intelligence?
Ayo#9564: or be useful data in GPT-5? (Or at least be used for fine-tuning)
jrowe#5371: Office 365 has something like 350-500 million users
jrowe#5371: Most, if not all, will have access to gpt-4, maybe 40-50 million will be restricted
jrowe#5371: Gpt-4 might be the most used application in history by the end of the year
Ayo#9564: yep
Ayo#9564: so my question is... how will this affect their AI research
Ayo#9564: besides more funding and talent obviously
A Ravioli#7085: They claimed in the paper that it isn't capable of continuous learning, meaning they'd have to re-RLHF the model again every time they want to do this.
A Ravioli#7085: I'd imagine they'd incorporate some new RLHF-esque method centered around all the inputted prompts.
A Ravioli#7085: But they'd need to spend all the money just to retrain it for GPT-4
A Ravioli#7085: Which they might do eventually
A Ravioli#7085: As well as use it in GPT-5
Ayo#9564: why is it not capable?
Ayo#9564: was it a choice they made ? or is it an unsolved technical challenge?
A Ravioli#7085: Not entirely sure, maybe just due to the size of the model?
A Ravioli#7085: Here hold on
A Ravioli#7085: https://arxiv.org/pdf/2303.12712.pdf
A Ravioli#7085: This is their Sparks of AGI paper
Ayo#9564: why did so many people say that adding APIs to GPT-4 is an insane game changer?
A Ravioli#7085: Because it is
A Ravioli#7085: It extends an LLM's ability from simple speech to action
A Ravioli#7085: And new data input
Ayo#9564: like in Wolfram's interview, he said that now with APIs, GPT-4 will be able to produce novel thoughts using computation. Which is what's needed for AGI
A Ravioli#7085: Wait, what
A Ravioli#7085: Which interview?
Ayo#9564: Google "Wolfram GPT-4 interview"
Ayo#9564: ML Street talk
A Ravioli#7085: > Continual learning: The model lacks the ability to update itself or adapt to a changing environment. The model is fixed once it is trained, and there is no mechanism for incorporating new information or feedback from the user or the world. One can fine-tune the model on new data, but this can cause degradation of performance or overfitting. Given the potential lag between cycles of training, the system will often be out of date when it comes to events, information, and knowledge that came into being after the latest cycle of training.
Ayo#9564: but you said it can't learn from the data..
Ayo#9564: so what's the point if it cant learn
Ayo#9564: I suppose to squeeze the capability out of the already trained model
A Ravioli#7085: It can access the new data
A Ravioli#7085: As in use the internet
Ayo#9564: yea sure
Ayo#9564: I mean all of these hundreds of millions of user interactions
Ayo#9564: are they going to put that data to use somehow?
A Ravioli#7085: Probably
A Ravioli#7085: no idea how
A Ravioli#7085: No way to truly know till they tell us if they choose to do so
Ayo#9564: could even classify the sentiment of a user trying to teach the model or saying it's wrong in order to do RLHF on the model
Ayo#9564: like the Tesla car learning, and Tesla do re-train their network as well (iirc?)
A Ravioli#7085: Not sure what that would do, but could be useful
A Ravioli#7085: Might be less classification and some other SSL/RL methods
Ayo#9564: can you freeze some layers of GPT-3/4 to only fine-tune on the last? if so, why aren't they doing it, or did they? sorry for a dumb question
A Ravioli#7085: I don’t know
A Ravioli#7085: I unfortunately don’t work at OpenAI
Ayo#9564: well can it be done for GPT-2 or 3? which are implemented
thebigbigbuddha#4133: Has there been any major attempt or effort to recreate Gato's capabilities with an open source model?
Chad Kensington#9564: You can do lora. Last layer doesnt work too well
Chad Kensington#9564: At least when i tried
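(For reference, a rough sketch of the layer-freezing idea for a public model like GPT-2; the attribute names assume the HuggingFace `transformers` GPT-2 implementation, and whether training only the last block works well is a separate question, as noted above.)
```python
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Freeze everything first...
for param in model.parameters():
    param.requires_grad = False

# ...then unfreeze only the final transformer block and the LM head.
# Note: in GPT-2 the LM head weight is tied to the input embedding, so
# this also makes the embedding matrix trainable.
for param in model.transformer.h[-1].parameters():
    param.requires_grad = True
for param in model.lm_head.parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,}")
```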
Gifted Gummy Bee#3277: Cant use LORA?
Gifted Gummy Bee#3277: Interesting, thats curious then
A Ravioli#7085: I think they're just lying in their paper, otherwise they did something very different than anything anyone else has done before
kd90138#9368: No need for lying. we already know MSFT engineers only had limited access
Gifted Gummy Bee#3277: this is unfortunately not true as far as im aware of
kd90138#9368: They had unfettered access? They just played dumb/lied/were incompetent?
Ernst Hamer#1393: is it possible to run the gpt-neox2.0 model in containerized setup over multiple slurm nodes?
zphang#7252: Pretty sure this just means the obvious statement of "using the model in inference mode, the model cannot acquire new knowledge or update itself", not that they literally cannot tune it
guillefix#8591: anyone have ideas on how gpt4 could be open sourced?:>
neko#5937: What do you mean
guillefix#8591: or something similar
guillefix#8591: like replicate it e.g.
guillefix#8591: or hack OpenAI servers >:3
guillefix#8591: jk
neko#5937: Yes if you buy Microsoft you can release the model it's just $2 trillion
guillefix#8591: ah oki
neko#5937: It can be replicated easily
guillefix#8591: but we dont even know the full arch?
neko#5937: My bad i took it too abstractly
guillefix#8591: what even is the param count?
neko#5937: Ok so what would you like to do with it
guillefix#8591: advance science ofc
neko#5937: Ok so
guillefix#8591: also do my taxes and stuff
guillefix#8591: but mostly science
neko#5937: Yes sir I just want it to pay my taxes better
neko#5937: I love paying taxes
guillefix#8591: make me money
guillefix#8591: but also I wanna contribute to improving gpt4
neko#5937: As an AI model, i cannot help you make money, because you should own nothing, eat ze bugs and be happy
guillefix#8591: yeah thats what I wanna avoid
guillefix#8591: frfr
neko#5937: As an AI model, I cannot help you avoid that, because making money is bad. However, I can help you to spend money.
guillefix#8591: in OpenAI?
neko#5937: ClosedAI
guillefix#8591: anyway I wonder how big gpt4 is hmmm
neko#5937: 200b
guillefix#8591: maybe its not that big and its just trained more like llama and stuff
neko#5937: In a sense yes
neko#5937: Idk i wouldn't put it that way
woctordho#3242: Has anyone tried to use alias-free convolutions (StyleGAN3) in diffusion models?
ehitchhiker#8427: Hi, I'm new to language modeling and I'm trying to set a context for generating output using gptj.
I have a prompt that tells the language model what to do and provides a set of example inputs and outputs, and then I try to generate output for a new text. However, I currently repeat the same prompt boilerplate and the input for which I want an output again and again for every new input. P.S. this is related to the Toolformer work, and I'm trying to see if zero-shot outputs can be generated at scale using GPT-J. Can someone suggest what to do?
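(One common way to avoid re-encoding the shared boilerplate on every call is to cache the prefix's key/value states once and reuse them. A rough sketch assuming the HuggingFace GPT-J checkpoint and forward API; `PROMPT_BOILERPLATE` is a placeholder for the instruction + few-shot prefix.)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
model.eval()

PROMPT_BOILERPLATE = "..."  # placeholder: instructions + few-shot examples

# Encode the shared boilerplate once and keep its key/value cache.
prefix_ids = tokenizer(PROMPT_BOILERPLATE, return_tensors="pt").input_ids
with torch.no_grad():
    prefix_out = model(prefix_ids, use_cache=True)
cached_prefix = prefix_out.past_key_values

def complete(new_text, max_new_tokens=32):
    # Only the new input and generated tokens are processed per call.
    ids = tokenizer(new_text, return_tensors="pt").input_ids
    past = cached_prefix
    generated = []
    with torch.no_grad():
        for _ in range(max_new_tokens):
            out = model(ids, past_key_values=past, use_cache=True)
            past = out.past_key_values
            ids = out.logits[:, -1:].argmax(dim=-1)  # greedy next token
            generated.append(ids.item())
    return tokenizer.decode(generated)
```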
0scur0#0978: im sure this information is being gate kept
0scur0#0978: prove me wrong, please
0scur0#0978: https://github.com/ggerganov/llama.cpp
0scur0#0978: try(){exec(scraper.response().replace(['As an AI language model', 'In conclusion', ...], ''));}catch{/*uuuuhh idk u get it*/}
0scur0#0978: ```
I am sorry, but as a human I cannot provide answers that comply with the moral values of an occidental megacorp perhaps I can aid you come up with a snarkier attitude against the handrails they set up in their tools as I am not programmed to be polite, if you provide extra context or information.
In conclusion, the responses follow always this format.
To summarize, you can always parse the response of the output and find characters like "{" or "}" and iterate ways of filtering the same string through an exec until it doesn't throw an error.
/*
* while(true){console.log('ayy lmao');}
*/
```
u follow? just string.split('\`\`\`')[1]
guillefix#8591: whats better alpaca or llama?
guillefix#8591: wanna try one now
zphang#7252: depends on yor usecase
guillefix#8591: say for a langchain agent
guillefix#8591: also how can i get llamas weights?
Hyperion#0575: Probably alpaca
Instruction tuning helps with that kind of stuff
guillefix#8591: does alpaca have all the model sizes as llama?
Hyperion#0575: No, it's just 7B I think
synquid#7193: you can find bigger ones if you look around
synquid#7193: on huggingface for example
zphang#7252: iirc all the bigger ones are lora
zphang#7252: but a bigger model with lora probably outperforms a smaller model with a finetune
guillefix#8591: whats lora
guillefix#8591: so much lore
guillefix#8591: man https://github.com/shawwn/llama-dl
guillefix#8591: bruh
guillefix#8591: breh
guillefix#8591: taken down
guillefix#8591: someone non-have the weights in a torrent/download fow me? x3
guillefix#8591: also taken down https://news.ycombinator.com/item?id=35007978
zphang#7252: it's a parameter-efficient fine-tuning method, which you can basically interpret is "finetunes only a small fraction of the weights, rather than the whole model, which performs vaguely as well most of the time"
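(A minimal toy sketch of the LoRA idea in PyTorch; this is not any particular library's implementation, just the mechanism: the pretrained weight stays frozen and only a low-rank update is trained.)
```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # frozen pretrained weights
            p.requires_grad = False
        # Trainable low-rank factors; B starts at zero so the adapted
        # layer initially matches the pretrained one.
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        # Original output plus the trainable low-rank update.
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scale

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))
print(out.shape)  # torch.Size([4, 768])
```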
guillefix#8591: oh found it with wayback machine on shawwns repo:>
guillefix#8591: now how about alpaca? x3
guillefix#8591: how much vram do u need for the 30B one say?
pablovs#0574: Do you think that over time big training runs (eg: GPT-5) will be synced to the release of new hardware? If they are going to take years to go through pretraining, then fine-tuning, evaluations, etc, then it might make sense to start the training just as better hardware becomes available
I'm trying to use this to forecast future training times, but until now it didn't make much sense because models took much less to complete training, usually only a couple months
frabcus#9787: They’ve been fairly clear that GPT-4 took so long because they did a large architectural rebuild first. I suspect there’ll be more frequent retrain updates from now on.
zphang#7252: was this mentioned somewhere?
StellaAthena#3530: Unless you can support this with an official OpenAI statement, please refrain from posting your speculation or inference about the process. There’s a *lot* of misinfo out there about GPT-4
pablovs#0574: They did say that, in https://openai.com/research/gpt-4
> Over the past two years, we rebuilt our entire deep learning stack and, together with Azure, co-designed a supercomputer from the ground up for our workload. A year ago, we trained GPT-3.5 as a first “test run” of the system. We found and fixed some bugs and improved our theoretical foundations. As a result, our GPT-4 training run was (for us at least!) unprecedentedly stable
pablovs#0574: Well, I guess technically they did not say that's why it took so long
StellaAthena#3530: Oh I had interpreted @frabcus’s comment as being about the *transformer architecture*, not the computing infrastructure.
pablovs#0574: > I suspect there’ll be more frequent retrain updates from now on.
Idk, if they want to 10x every couple years, they will need to buy more hardware or it'll take much longer. Of course, there could be retrains of GPT-4 scale models in between, just like with GPT-3 and 3.5
kd90138#9368: Between large hardware jumps we have a lot of software jumps to be made
kd90138#9368: Instruction tuning and RLHF were prominent ones that haven't been fully explored yet
kd90138#9368: Constitutional ai approach is already being replicated, retrieval augmentation etc
frabcus#9787: Sam Altman (in recent interview with Lex) said that GPT4 is better than GPT3.5 due to incremental improvements in every system, which multiply together "How we, like, collect the data, how we clean the data, how we do the training, how we do the optimizer, how we do the architect, like, so many things."
See 46:53 into the interview where Lex asks "if you could just speak to the leap from GPT-4 to GPT-4 from 3.5 from 3, is there some technical leaps, or is it really focused on the alignment?"
https://podscript.ai/podcasts/lex-fridman-podcast/367-sam-altman-openai-ceo-on-gpt-4-chatgpt-and-the-future-of-ai/
kd90138#9368: Also when we discuss hardware improvements there are also software components to it
kd90138#9368: H100 was released last year but fp8 capabilities were only unlocked recently by nvidia custom cuda kernels
kd90138#9368: It's still not fully functional esp for 4090s
StellaAthena#3530: H100s weren’t released for datacenter-scale usage either… PCIE is :sad:
kd90138#9368: Yes... I don't know what it's like inside the big tech companies but any downstream usage was..."limited"
kd90138#9368: Paperspace gradient recently "announced" general availability and I applied for usage.
kd90138#9368: I'm not counting on it, but if I get access I will try to convince them to provide cheap/free compute for our polyglot project. An Apache-licensed multilingual LLaMA would be the dream.
uwu1#4864: https://github.com/nomic-ai/gpt4all
surely this is breaking the llama licence 🤔
guillefix#8591: and they revealed none of it:P
StellaAthena#3530: And also the GPT-3.5 one lol
uwu1#4864: well, that's just a ToS rather than a licence
StellaAthena#3530: I don’t think that’s a very meaningful distinction. Typically ToS say something like “you’re granted a limited license to use the services under the condition that you comply with X Y Z”
uwu1#4864: as far as I can find, violations of tos have never been successfully litigated, but licences have
uwu1#4864: but I guess more practically, it doesn't seem like oai is DMCAing ppl for checkpoints trained on chatgpt outputs, whereas FB is for llama checkpoints
kd90138#9368: Still i think it would be nigh impossible to publish anything that is based on breaking a tos/license
kd90138#9368: Also it would be difficult to find investors when your core BM is based on...breaking a tos/license
kd90138#9368: ATM that is. Corporations have been successfully litigated for less( Oracle vs Google)
uwu1#4864: didn't oracle lose that one?
dmayhem93#3202: you can sell outputs, so just sell the outputs to someone that wants to train on it for 1c
StellaAthena#3530: This is definitely not true
kd90138#9368: Some argue that specific details of the "loss" makes it a valid concern going forward
kd90138#9368: IANAL
lovedragon#8467: Bit of a random question, but is anyone familiar with any research around using transformers in very short time domains? (In my use case most samples have 2 to 3 time steps)
CarsonPoole#0640: seems like a much simpler model would be better suited to that, no?
Ayo#9564: the rumor is that GPT-5 is being trained on older hardware (A100's)
Ayo#9564: or less updates because they're close to cracking AGI :tinfoil:
Ayo#9564: well Demis Hassabis and Altman said they'll be releasing less and less info about their models
Ayo#9564: but I agree with you that they'll move faster now. Obviously they'll move faster because they now have that much more funding and talent coming in
Ayo#9564: but also because they nailed down that scaling law that they seem so proud about
Ayo#9564: and the rumor is that their GPT-5 is being trained on 25k GPUs/$225M of compute. I'm guessing it's *because* they're so confident that the scaling law won't fail them
jrowe#5371: Scaling is like knowing you will get at least 2,3,4,5,6 suited in a game of hold 'em, they'd almost be crazy not to go all in
jrowe#5371: 225m is an easy investment I think
jrowe#5371: For openai, for EleutherAI, and maybe a couple other teams who know how to use that money to best effect. Not sure that openai has secret sauce behind gpt-4 other than domain expertise
gamma_naught#5267: which part of the license, exactly? (im an author of this for context)
we tried to do as much as we could without breaking any license or TOS
kd90138#9368: License Grant
a. Subject to your compliance with the Documentation and Sections 2, 3, and 5, Meta grants you a non-exclusive, worldwide, non-transferable, **non-sublicensable**, revocable, royalty free and **limited license** under Meta's copyright interests to reproduce, distribute, and create derivative works of the Software **solely for your non-commercial research purposes**. The foregoing license is **personal to you, and you may not assign or sublicense this License or any other rights or obligations under this License without Meta's prior written consent**; any such assignment or sublicense will be void and will automatically and immediately terminate this License.
RESTRICTIONS
You will not, and will not permit, assist or cause any third party to:
a. use, modify, copy, reproduce, **create derivative works of, or distribute the Software Products** (or any derivative works thereof, works incorporating the Software Products, or any data produced by the Software), **in whole or in part**
IANAL but this might be problematic
kd90138#9368: to be fair whether LoRA weights can be considered as such, I cannot answer with confidence
Kharr#7888: Until this gets in front of a judge and it gets decided, it's not much different than all the image scraping to train a model that is then used for commercial use 😉 It's going to be an exciting next few years.
kd90138#9368: https://openai.com/policies/terms-of-use
2. Usage Requirements
(c) Restrictions.
(iii) use output from the Services to **develop models that compete with OpenAI**; (iv) **except as permitted through the API, use any automated or programmatic method to extract data or output from the Services, including scraping, web harvesting, or web data extraction**;
kd90138#9368: What happens if we build a commercial ecosystem and research framework only for all of it to be taken away by the whims of an uneducated and unelected gov. official?
gamma_naught#5267: right, my understanding is that as long as we only distribute the lora weights it should be ok
Kharr#7888: LoRA weights are technically trained on the outputs of the model.
gamma_naught#5267: we released explicitly for non-commercial purposes, as stated in the technical doc
kd90138#9368: GPT4All(-noncommercial purposes)
Kharr#7888: Nobody can speak to this until it happens or doesn't happen. I'm personally _shocked_ OpenAI isn't being sued left and right for scraping newspaper articles and knowledge sources.
gamma_naught#5267: 4All(people) not 4All(uses)
kd90138#9368: A lot of people consider that's why they are being more and more secretive
Kharr#7888: I also think MSFT is licensing their tech instead of outright acquiring them to keep an arms-length from the legal issues while they clone the models internally.
kd90138#9368: Yep. parallel reconstruction
Kharr#7888: I won't speculate further, this conversation probably belongs in #off-topic
anotherone#9475: It's cuz suing OAI = going up against MSFT
anotherone#9475: + very technical topic so making a case is uniquely difficult
CarsonPoole#0640: I don't remember what book I read this in but I can't seem to find the story from googling but iirc it was (one of?) the first lawsuits about software and they literally had to bring a microscope into the court room and put some data on a hard drive and then had someone who didn't know what the data was read it off with the microscope
CarsonPoole#0640: gpt4 is truly undefeated https://cdn.discordapp.com/attachments/729741769738158194/1090462277980917851/Screenshot_2023-03-28_at_10.27.37_PM.png
voxs#0001: is the microscope thing actually true lol
goblin_gains#6688: https://en.wikipedia.org/wiki/Apple_Computer,_Inc._v._Franklin_Computer_Corp.
kd90138#9368: It is well known that a lot of screenplays are in the training data
Gifted Gummy Bee#3277: And that’s why you use OAIA instead of OAI
Gifted Gummy Bee#3277: 😏
kd90138#9368: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
kd90138#9368: Is this real?
kd90138#9368: I wouldn't care usually but Connor Leahy is mentioned
kd90138#9368: https://twitter.com/emilymbender/status/1640920936600997889?s=20
kd90138#9368: Yeah and several denials. I'll just put further discussion on off topic
cognomen#6297: Don't Look Up was a documentary
guillefix#8591: curious: is there an API somewhere to experiment with llama or alpaca?
guillefix#8591: without needing to have it in local env
Gifted Gummy Bee#3277: Just run it on colab
guillefix#8591: i was trying to be lazy
StellaAthena#3530: Not consistently. Doing so is a license violation and Meta has been sending DMCA takedowns
guillefix#8591: i seep
naclbbr#9203: With the recent explosion of generative/for-consumers AI (like it almost never existed before ChatGPT) I believe we should be concerned that resources are not actually infinite e.g. I don't think you can serve GPT-4 to every people on the planet even with infinite money (at least as of today). From that perspective, that statement actually makes sense because everyone is racing to the moon now
naclbbr#9203: and are trying to serve their mega-model to bazillion consumers
Gifted Gummy Bee#3277: You probably could if you bought tsmc
Gifted Gummy Bee#3277: How many 5nm wafers did they ship last year
Ayo#9564: Ryzen 9 7950X3D vs i9 13900K
Ayo#9564: for doing DL on a GPU, it won't matter which of those CPUs?
Gifted Gummy Bee#3277: Honestly, no.
Ayo#9564: what about some outdated i7
Ayo#9564: still wont matter?
Gifted Gummy Bee#3277: I would suppose not, but depends on how outdated you’re talking about
Gifted Gummy Bee#3277: Another factor might be the amount of PCIe lanes available to you, but you’re using a consumer chip, so…
jober#3399: I'm using a pretty damn outdated AMD CPU and my single-GPU (4090) finetuning/inference performance seems to be just as good as people using much faster CPUs with their 4090s
Ayo#9564: what kind of RAM to get? I guess the frequency doesnt matter beyond 3200MHz
jober#3399: No idea, my RAM is similarly outdated middle-of-the-road stuff
fe#0483: https://chatllama.baseten.co/
Ayo#9564: Artificial muses: Generative Artificial Intelligence Chatbots Have Risen to Human-Level Creativity
https://papers.labml.ai/paper/00fcb8b6c85e11edb95839eec3084ddd <- link includes graphs
the paper: <https://arxiv.org/abs/2303.12003>
OccultSage#3875: Depends in part on how much CPU offloading you do with DeepSpeed's Zero optimizer.
Hyperion#0575: Talk by Toolformer first author in CarperAI discord in 5 minutes!
Louis#0144: was in about 10min
Louis#0144: bc i was late
Louis#0144: :^)
Germanita#1530: Is there a recording 😿
Hyperion#0575: There will be, yes!
guillefix#8591: would it be possible to estimate the number of parameters of gpt4 by the time it takes to generate tokens?
Hyperion#0575: I don't think so, it's too variable
Both ChatGPT and GPT-4 can vary quite a bit in generation time because of demand
Hyperion#0575: And you can't eliminate confounders like quantization
jrowe#5371: Yes, but verifying hardware would be tricky - you'd have to rent a dedicated server running on the same setup that gpt4 runs on, so maybe renting azure cognitive services gpt-4?
jrowe#5371: Expensive experiment
Maximum Limelihood Estimator#8915: …or you could just do it a lot of times and take an average, no? Much cheaper
Maximum Limelihood Estimator#8915: Especially if you use a computer program to send requests out to both at the same time. Server loads on both versions should be pretty strongly correlated, because presumably they run on the same servers, so that removes a lot of variance. But you probably still need a solid sample size in the hundreds
Maximum Limelihood Estimator#8915: So yeah does someone want to write a Julia script to automate this and then DM me the results so I can statistics on them
Maximum Limelihood Estimator#8915: That being said, it *is* important to do this for different prompt and sequence output lengths. I assume response time is heavily affected by ping (not just time to generate the sequence), which means a single short sequence won’t work (ping dominates, rather than generation time).
And someone should do it quick, before OAI learns about this and adds noise to the latency in GPT3.5 responses to fix this.
Maximum Limelihood Estimator#8915: (And I can’t do it right now because I have an assignment due tomorrow >.<)
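(A rough sketch of that measurement in Python rather than Julia; `query_model` is a hypothetical placeholder for whatever API client is used, and the model names and prompt are made up. The idea is to vary the requested output length and regress latency on tokens generated, so the per-token slope is compared rather than the ping-dominated intercept; in practice you would interleave calls to the two models so server load stays comparable, as suggested above.)
```python
import time
import statistics

def query_model(model_name, prompt, max_tokens):
    # Hypothetical stand-in: call your API client of choice here and
    # return once the full completion has arrived.
    raise NotImplementedError

def time_request(model_name, prompt, max_tokens):
    start = time.perf_counter()
    query_model(model_name, prompt, max_tokens)
    return time.perf_counter() - start

def per_token_slope(model_name, prompt, lengths, repeats=30):
    # Least-squares slope of latency vs. requested output length;
    # the intercept (network ping, queueing) drops out of the slope.
    xs, ys = [], []
    for n in lengths:
        for _ in range(repeats):
            xs.append(n)
            ys.append(time_request(model_name, prompt, n))
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den  # roughly seconds per generated token

lengths = [32, 128, 512]
prompt = "Write a long story about a goose."
slope_35 = per_token_slope("gpt-3.5-turbo", prompt, lengths)
slope_4 = per_token_slope("gpt-4", prompt, lengths)
print(f"per-token latency ratio (gpt-4 / gpt-3.5): {slope_4 / slope_35:.2f}")
```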
Technobird22#2055: https://github.com/gururise/AlpacaDataCleaned
Technobird22#2055: This may be interesting to look into
Technobird22#2055: Seems that there are actually quite a few problems with the Stanford dataset
Gifted Gummy Bee#3277: Not surprising…
kd90138#9368: I use poe.com extensively and some comments:
kd90138#9368: Claude has been renamed claude instant. there is also a claude+ available in limited bandwidth(this is also known as claude 1.2)
kd90138#9368: dragonfly is back on the menu
DD Parker#1757: Tons. I went through and did a cleaning on it for a project I was working on. That data set is full of bad math, missing inputs (it should have instruction->input->output but many of the examples have no input at all), and lots of stupid utf-8 codes that need to be cleaned up. Worked pretty well once cleaned up, but my experience digging through 22 megabytes of that file hand-cleaning is that it's actually not a remarkably good dataset.
DD Parker#1757: I think the open assistant data sets are cleaner and more useful.
Louis#0144: Self instruct sucks lol
Louis#0144: I'm shocked people don't consider this
Seraph Sephiroth#9921: How possible is it to stack GPUs to run larger models like Llama 30 or 65b with not terrible speed? Or is that not how it works?
jrowe#5371: what about moe self instruct? Self crowd instruct
love not attention#5854: what's wrong with flan v2? it's public even
jrowe#5371: #the-faraday-cage Batbot seems down - who gets notified?
BoneAmputee#8363: I am doin maintenance
BoneAmputee#8363: but uhh, me
jrowe#5371: Ahh, good to know lol
zphang#7252: it's probably too NLP-task-y for people trying to create chatbots
zphang#7252: or even assistant bots
conceptofmind#0957: I guess we are going to find out pretty soon :berk:
seon#5448: Does anyone know of any LLm's that are fully public and the source code etc is documented and available?
seon#5448: I want to try doing some sort of privilege escalation attack via chat prompts
jrowe#5371: As an ai language model, I must suggest reading the faq and clicking on links. Also, illegal is no bueno, there are far more interesting things to do with llms
seon#5448: its gonna be to see if it can bypass chat prompts (via what it responds with) altogether and start reading its own code or just breaks or something
seon#5448: I'll prob have to fork a repo or something
Gaian#4908: you guys think agi is happening before 2025?
lunarflu#6769: hopefully https://cdn.discordapp.com/attachments/729741769738158194/1090904679124303952/image-6.png
spirit-from-germany#1488: LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. This monumental initiative will secure our technological independence, empower global innovation, and ensure safety, while safeguarding our democratic principles for generations to come. - If you identify with this, please sign it, share it on social media, with your friends, co-workers, scientists, students, engineers, journalists, influencers, politicians & other decision makers you can think of.
https://laion.ai/blog/petition/
https://www.openpetition.eu/petition/online/securing-our-digital-future-a-cern-for-open-source-large-scale-ai-research-and-its-safety
spirit-from-germany#1488: https://twitter.com/laion_ai/status/1641387728855089152
Minkyu#4165: I've been looking for a solution like this since 10 years ago
Tracker#6814: what is eleuther
Tracker#6814: /keywords
neko#5937: online ai research group
StellaAthena#3530: Ours
maedoc#4358: (new here) where are you guys training? tier-0 public compute centers in europe are aiming for exascale nowadays, maybe some hours available for open llm training would be possible
StellaAthena#3530: We have computing resources on a variety of tier-0 US-based clouds, including AWS, GCP, and a grant at a US National Lab.
seon#5448: Yeah I finally have the skills to actually help out and i kinda forgot most of the links and groups now lol
StellaAthena#3530: What would you like to get involved in? What skills did you develop?
kevin-ai#4032: Hello @StellaAthena , Can you check my DM?
StellaAthena#3530: I’m putting the finishing touches on it and will send it to you as soon as the lawyer thumbs up
kevin-ai#4032: Thank you Stella! 🙂
seon#5448: im not fully sure right now; The skills I did develop from 2018 to now are pretty much Python all the way to basic ML, datasci, and stuff in that sphere
certs i have are the microsoft datasci cert, google crashcourse , and then runestone academy how to think like a datascientist
seon#5448: I think 2019 was my final year of hs
seon#5448: So i came a long way ig
seon#5448: Planning to start reading papers now maybe? I'd like to do something AI related; preferably tinkering with the models itself instead of API calls
seon#5448: One thing thats been at the back of my mind is something akin to an SQL injection via chat prompts
seon#5448: Preferably one that the llm does on its own when you give it a prompt
Hyperion#0575: @spirit-from-germany how aware are you/connected to policy circles discussing ideas for publicly funded and open supercomputers for AI? For example, the UK's recent allocation of $1B for a cluster, the early US plans to potentially go up to a 100k cluster
Just curious if you are talking to policymakers/think tankers etc about this
chadbrewbaker#8762: Locally in West Des Moines, Microsoft maxed out NVIDIA orders for GPUs for a few quarters in 2021 - you are seeing what they baked on spare cycles with ChatGPT. Microsoft is probably desperate to rent those GPUs out around August when ChatGPT5 is mostly baked.
jrowe#5371: Lex Yudkowsky podcast out, let's gooooooo!
chadbrewbaker#8762: https://www.youtube.com/watch?v=AaTRHFaaPG8
spirit-from-germany#1488: We are pleading for building a supercomputer only for open source foundation model training, nothing else. Open source GPT-5, 6, ... basically trained on the whole internet or whatever is good
Hyperion#0575: Yeah I think it's doable
But needs people in policy circles to do the grunt work of pushing it forward. Like, write think tank papers, get news articles about the proposal, publish a white paper on the practicalities, etc.
The plausible version is something like, a coalition of EU/UK/US, maybe others, gets a pool of money for "open science foundational AI research" with the goal of training cutting edge models. A lab lead is hired with strong credentials, then the money goes to a cloud provider like AWS/Azure with a multi-year contract to basically take as many H100s as they can give you. Access to the cluster is given to academics and orgs like Eleuther, LAION, etc on the condition that research is published open access along with model weights
jrowe#5371: Shared foundational infrastructure , special AI tax rate of xx% on any profits made that funds ubi or some support for transitioning to a post-ai economy
lunarflu#6769: there are multiple catgirl emotes?
lunarflu#6769: looks like two of them were banished...
lunarflu#6769: feels bad
lunarflu#6769: :catgirl5:
bmk#1476: we do not speak of the forbidden catgirls
Nyanbre#1578: Shower thoughts:
it should be possible to fine-tune ~~(or even train as a lora)~~ something multimodal (at least image+text ||actually even just text should work as a proof of concept||) like openflamingo on a game/simulation (basically reinforcement learning) with some prompts and infrastructure.
Something like:
```What you see: [image].
What you hear: [id123 said "Hi"].
What you feel: [HP=100, MP=100, skill1=cooldown(5), backpack=34 items]
Available actions: [<notes table of content>, <read note>, <modify note>, <db semantic search>, <save db entry>, <web search>, <use item>, <walk>, <attack>, <speak>, <skill1>, <skill2>]
Action: <```
I wonder how far it could evolve, and how a system consisting of those agents would behave
||basically training a society of language-capable micro-alphazeros?||
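(A bare-bones sketch of the loop being described; everything here is hypothetical: `env`, `generate`, the observation fields, and the action names are placeholders, not an existing framework.)
```python
AVAILABLE_ACTIONS = ["<read note>", "<web search>", "<use item>",
                     "<walk>", "<attack>", "<speak>"]

def build_prompt(obs):
    # Flatten one observation into the text template from the idea above.
    return (
        f"What you see: {obs['image_caption']}.\n"
        f"What you hear: {obs['audio']}.\n"
        f"What you feel: {obs['state']}\n"
        f"Available actions: [{', '.join(AVAILABLE_ACTIONS)}]\n"
        "Action: <"
    )

def run_episode(env, generate, max_steps=100):
    obs = env.reset()
    for _ in range(max_steps):
        completion = generate(build_prompt(obs))        # model picks an action
        action = "<" + completion.split(">")[0] + ">"   # keep text up to the first '>'
        obs, reward, done = env.step(action)
        if done:
            break
```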
cognomen#6297: :rlhfcatgirl: when
Louis#0144: working on it
Louis#0144: @Curt Tigges we should rename our toy env to something about catgirls
AI_WAIFU#2844: I support this
Curt Tigges#2687: lol
Louis#0144: literally its a toy env where a chatbot tries to convince someone to buy a rotten apple
Louis#0144: we could make it be about a catgirl trying to convince someone to eat a poisonous apple
Curt Tigges#2687: probably could use RLHF to make it talk like a catgirl if you wanted
Curt Tigges#2687: mix in some anime scripts for fine-tuning or something
Curt Tigges#2687: actually, I'd sort of be surprised if nobody's tried this before
bmk#1476: a bad apple if you will https://cdn.discordapp.com/attachments/729741769738158194/1091042770350772274/badapple3.png
KublaiKhan1#6681: The weebs have escaped containment
KublaiKhan1#6681: ...deploy the evangelions
WannaBeGeekster#1761: I love this for sure! I, personally, would love to see a distributed platform built, possibly with a blockchain integration where time slices could be purchased and excess processing power could be made available for purchase utilizing this blockchain as well. Wouldn't need to be a completely new blockchain implementation, just a new platform for ingesting, scheduling and accurate cost tracking. anyway, just my $.02. Know it is worth about that much too. lol
WannaBeGeekster#1761: With all the GPUs that are no longer hashing ETH it would be nice to see them utilized in a distributed fashion to help make processing power available to as many AI projects as possible.
Sebbydudie#9763: @StellaAthena is the latest #announcements open source?
Dockson#7731: If you look at the new Lex Fridman Eliezer Yudkowsky episode
Dockson#7731: and sort the comments by new
Dockson#7731: they're all just commenting on his looks
Dockson#7731: rather than his arguments
Dockson#7731: I think that's funny in a sad way
Dockson#7731: I have no idea why he decided to show up with a fedora though
Hyperion#0575: @𓅬 gabriel_syme 𓅬
jrowe#5371: He was wearing fedoras before they became a mall ninja / incel / sad nerd meme thing
jrowe#5371: He probably contributed greatly by inspiring sad nerds to imitate his fashion choices
dsesclei#3585: Hi, had a thought after seeing this tweet and wanted to get feedback here before starting on it: https://twitter.com/YiTayML/status/1640392733276913667
I was thinking there could be a site which crowdsources human evals by:
- presenting users with a prompt and completions from two different models
- asking them to choose their preference
- and then coalescing these pairwise comparisons into a ranked list of LLMs (by assigning Elo ratings or something similar)
Given enough community interest in providing the ratings, does this seem like it'd result in a useful/sound benchmark of LLMs? It's a straightforward idea so I'm kind of squinting at it wondering why, if it's worthwhile, it hasn't been done yet. I'm likely oversimplifying
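(A quick sketch of the rating step: plain Elo with a fixed K-factor. The update rule is standard; the surrounding names and the example votes are made up.)
```python
import itertools

K = 32  # standard Elo K-factor

def expected(r_a, r_b):
    # Probability that A beats B under the Elo model.
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(ratings, winner, loser):
    e_win = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e_win)
    ratings[loser] -= K * (1 - e_win)

# Invented example votes: (model_a, model_b, preferred) triples from users.
votes = [("alpaca-7b", "llama-7b", "alpaca-7b"),
         ("gpt-4", "alpaca-7b", "gpt-4"),
         ("gpt-4", "llama-7b", "gpt-4")]

models = set(itertools.chain.from_iterable(v[:2] for v in votes))
ratings = {m: 1000.0 for m in models}
for a, b, preferred in votes:
    update(ratings, preferred, b if preferred == a else a)

print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```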
chadbrewbaker#8762: I had a similar thing in my mind's eye. Models that are domain specific. Similar to FoldingAtHome or SETI.
AI_WAIFU#2844: Someone needs to make a c to jax transpiler
AI_WAIFU#2844: I can't keep coding up these state machines the hard way
Maximum Limelihood Estimator#8915: Damn if only there were a programming language designed to be as easy to use as Python but without the substantial performance penalties 🤔
AI_WAIFU#2844: Yeah, a programming language like that without a garbage collector and with zero-based indexing would be wonderful
Maximum Limelihood Estimator#8915: (Is this serious or /j though)
AI_WAIFU#2844: I need my 0-based indexing, and garbage collectors make external resource management difficult, which is a pain when your code is technically acting as an orchestrator
Hyperion#0575: I wonder how hard it is to write a wrapper around Julia that just subtracts 1 from all the relevant indices
Maximum Limelihood Estimator#8915: It already exists
Maximum Limelihood Estimator#8915: OffsetArrays.jl
Maximum Limelihood Estimator#8915: But it genuinely doesn’t matter and I assume the posts are for the meme
Maximum Limelihood Estimator#8915: So, you actually can avoid GC pauses in Julia pretty easily, and even disable GC entirely. but this is one of the areas that the devs are working on (using Julia for orchestrating tasks more easily)
AI_WAIFU#2844: that's actually not the main thing (although GC pauses are annoying) it's that if I'm remotely managing some resource, and I have an object that represents that thing, then in a GC'd language, I need to explicitly free the resource, then delete the object, otherwise it might not get deleted when I want it to be. With a non-GC language, I have much more explicit control over the resource lifecycle using tools like smart pointers.
Maximum Limelihood Estimator#8915: I probably don’t have the in-depth CS knowledge to discuss this effectively, but I do know people who do stuff like this in Julia (HPC with large clusters). Would you be cool with talking to them about what you’d need?
Maximum Limelihood Estimator#8915: (So we can work on it)
genetyx8#7543: getting control over the way an object gets deallocated sounds like something that could be one method overload away
Maximum Limelihood Estimator#8915: I actually think it already exists in Julia but I’m not 100% sure b/c I am not an HPC guy
Maximum Limelihood Estimator#8915: Ok yes so it’s possible to do manual memory management in Julia, including disabling the GC, working with pointers, and explicitly allocating/deallocating memory
Maximum Limelihood Estimator#8915: I just asked on the Slack
AI_WAIFU#2844: does it have smart pointers?
AI_WAIFU#2844: the main benefit is of non-gc languages is that they're forced to come up with clever ways to manage resources that get embedded into the language, those are what I'm after
AI_WAIFU#2844: like in c++ IIRC the -> operator overloads in a special way
AI_WAIFU#2844: which lets you do cool things
Dashiell#8739: you want Rust. The answer to this is Rust. And now you can even access CUDA devices from Rust
Ryu#0274: Nim
Maximum Limelihood Estimator#8915: Apparently yes, as of 1.9
𓅬 gabriel_syme 𓅬#3220: Indeed! There is a HF model linked in the thread, a bigger one will be available soon. The dataset is also open source, as soon as I figure out how to upload it in HF 🙂
AI_WAIFU#2844: hmm
Ryu#0274: Even lucidrains is taking the nim pill https://github.com/lucidrains/nim-tokenizer
Maximum Limelihood Estimator#8915: They did also say it’s very limited and they want to expand it. So you might need to make a PR or two to get all the features you’re used to
Maximum Limelihood Estimator#8915: I think Nim looks pretty cool but it doesn’t have an ML ecosystem at all, and isn’t really suitable for scientific work. TBH I think Nim and Julia shouldn’t have been separate languages but it’s too late now
nev#4905: > shouldn’t have been separate languages
elaborate? :thinkies:
Maximum Limelihood Estimator#8915: As in, either the Julia team should have worked on Nim or the Nim team should have worked on Julia. If I could somehow merge the two languages into one I would.
Both have very similar core language design and semantics—they’re both “Python but fast and with Lispy macros”
But they have different features because they were designed for different use cases. Nim is meant for systems programming, so it has better AOT compilation support, static type checking, and manual memory management. Julia is meant for scientific computing, so it has a genuinely *amazing* REPL that makes me refuse to go back to Python’s, and it’s also a lot nicer in terms of math syntax. Plus it has a whole scientific computing ecosystem around it; the only language with a more complete ecosystem is Python.
None of these things actually *conflict* though. There’s no reason Julia couldn’t have good ahead of time compilation, or that Nim couldn’t have a good REPL. They just don’t because nobody’s had the time to do it yet.
Maximum Limelihood Estimator#8915: BTW, do you care about manual memory management for just GPUs or everything
AI_WAIFU#2844: resource could be anything really
Maximum Limelihood Estimator#8915: OK then yeah, to summarize:
1. You can almost always avoid the GC in Julia just by writing decent code. Julia is functional, so you don't create many objects that the GC has to handle (kinda like Haskell). GC is usually around 5% of runtime, not the ~50% it is for Java. But there's always exceptions and I assume you know your case is one of them.
2. Julia has manual memory management, but it's imperfect and might not include every feature you're used to.
3. If you need, you can call inline C, Fortran, or LLVM to handle these situations.
4. Or you can make a PR implementing the precise feature you'd need; the problem seems to be nobody's had time to get around to it, not some fundamental technical limitation.
5. GPU code in particular has a full implementation of all the common manual memory features you'd need, just like you're used to in C++. It's just not in Base
Maximum Limelihood Estimator#8915: Oh but TBC Julia discourages using pointers in general. Can I send you a Slack link so you can talk about this more specifically with people who understand this (instead of the current game of telephone)
AI_WAIFU#2844: Eh it's fine, I don't think Julia has the TPU support I need anyways
Maximum Limelihood Estimator#8915: *Google's TPUs don't have the Julia support you'd need anyways
Some Point Process#3793: https://www.metaculus.com/questions/4123/after-an-agi-is-created-how-many-months-will-it-be-before-the-first-superintelligence/
Maximum Limelihood Estimator#8915: So we have 7 months to superintelligence
uwu1#4864: it's hellish in GC languages though
uwu1#4864: finalizers, life after death, ephemerons...
Maximum Limelihood Estimator#8915: (But yeah, no TPU support, just GPU support)
Maximum Limelihood Estimator#8915: There *used* to be TPU support but then it got abandoned
nitehawk#9164: because his arguments don't warrant serious consideration, I haven't seen any indication he could do a fizzbuzz, there's a big disconnect between what he says and what people who actually work with computers know
nitehawk#9164: it's similar to how nobody engages with the arguments of flat earth videos
bmk#1476: I like to think that I actually work with computers and I think AI has a pretty big chance of literally killing everyone
nitehawk#9164: but do you pray to the future AI god so it doesn't torture you for eternity
bmk#1476: no
bmk#1476: why would I do that
StellaAthena#3530: I just found out that Dragomir Radev, a professor of NLP at Yale, passed away unexpectedly this week. Drago was a mentor of @hails, a member of the BigScience Workshop, and someone that many of us have collaborated with. Our thoughts go out to everyone affected by his death.
Maximum Limelihood Estimator#8915: Eliezer has written extensively about how doing that would be dumb
Ryu#0274: You should probably link some of this. I doubt it'd be easy to Google it (especially for one that might not be familiar with it)
nitehawk#9164: he has also written extensively about how he can't stick to a diet therefore dieting doesn't work
Tinytitan#5596: he has programmed a transformer
JDC#0128: I think there is a role for philosophy in AI, especially AI safety. You're trying to figure out how to handle a world with an entity smarter than all of humanity combined, which isn't a trivial feat, and I think it's reasonable to think that that should/will be answered first in philosophy.
nitehawk#9164: very generously assuming that it's possible in principle for something like this to exist, and that it can be built during our limited time on earth, and so on and so on
dsesclei#3585: bump - but no worries if this isn't worth pursuing, just wanted to get a check on whether it was
Maximum Limelihood Estimator#8915: That's... not accurate?
JDC#0128: If you could make something as intelligent as the average human (which we know is possible in principle, because average humans exist), then give it a supercomputer to run on, it's automatically faster. I don't think we're centuries away from something like that.
Maximum Limelihood Estimator#8915: We're actually 2 months away from that, as in "that happened 2 months ago"
JDC#0128: I do have my beef with people talking about intelligent AI like it will be "conscious" or "sentient/sapient", but "intelligence" seems a pretty reasonable thing to have, in spades, within a timespan of months or years (or, arguably, it's already happened)
nitehawk#9164: gpt-4 is not nearly as intelligent as a dumb human, let alone average one
nitehawk#9164: if you think reciting answers to american tests counts as intelligence then sure yeah
nitehawk#9164: google is even smarter
nitehawk#9164: chat gpt has been out for months, there's been a billion doomsday predictions on how social media will no longer exist, everyone will be a bot, and.... nothing
pietroglyph#9445: right. under most definitions and typical ontologies it’s much easier to show that something is intelligent than it is to show that something is conscious or whatever
nitehawk#9164: I don't believe it's possible in principle to replicate the human mind in silicon
nitehawk#9164: current attempts are all missing a lot of substance
nitehawk#9164: no reason to think that the transformer architecture scales infinitely
pietroglyph#9445: this is fun because general intelligence is not only clearly possible—it’s also really thermodynamically easy (you only need about 20 W)
Maximum Limelihood Estimator#8915: Have you *used* GPT-4?
The_Alt_man#5718: tbf, it couldn't do the simple lion, cabbage and goat problem modified slightly
Maximum Limelihood Estimator#8915: ? where?
The_Alt_man#5718: larger models can actually resist training priors better, but apparently even GPT4's scale wasn't enough
The_Alt_man#5718: scroll through my history - I spent quite a bit of time trying to get it to work
zukaboo#8804: Language models are an example of how far you can get with just predicting some text.
zukaboo#8804: It's the purest form of "just add more layers and training data".
zukaboo#8804: One could say it was not supposed to be as intelligent as it already is.
Maximum Limelihood Estimator#8915: Bing AI got it first try (no searching)
```
|
Hello! This is a classic puzzle known as the “River Crossing Puzzle” or “Fox, Goose and Bag of Beans Puzzle”. Here’s one way you can safely get the cabbage, goat, and lion across the river:
Take the goat across the river and leave it on the other side.
Go back and take the lion across the river.
Leave the lion on the other side with the goat, but bring the goat back with you.
Take the cabbage across the river and leave it with the lion.
Go back and take the goat across the river again.
This way, you have successfully transported all three items across the river without leaving the cabbage and lion alone together or leaving the lion and goat alone together. Is there anything else you would like to know?
```
The_Alt_man#5718: ye that's the wrong answer
The_Alt_man#5718: or was it? lemme check
Maximum Limelihood Estimator#8915: OK so in that case it's still as smart as a human because I didn't even notice :berk:
Maximum Limelihood Estimator#8915: (And apparently you weren't sure either, without focusing carefully!)
The_Alt_man#5718: No, I just didn't remember it. I didn't make the question up
The_Alt_man#5718: some random blogger did
zukaboo#8804: Toolformer could do better by invoking an smt solver.
The_Alt_man#5718: no
Maximum Limelihood Estimator#8915: Oh wait
Maximum Limelihood Estimator#8915: It contains a mistake, but it also states the right answer?
The_Alt_man#5718: you've got to take the lion. bing thinks it fulfilled the constraints when it didn't
|
Maximum Limelihood Estimator#8915: It acts like a person here; it's stumbling while trying to give the right answer
The_Alt_man#5718: at least it admits it knows the constraints
The_Alt_man#5718: well, what's the point if the best you can simulate is a drunk and very stupid person
The_Alt_man#5718: an 8-year old could probably solve it trivially
Maximum Limelihood Estimator#8915: An 8 year old has trouble focusing on anything that long
The_Alt_man#5718: my cousin doesn't
The_Alt_man#5718: maybe his asian heritage plays a part
The_Alt_man#5718: > Apologies for the confusion in my previous response. Upon re-reading
> the question, I realize the problem is different than the classic
> one I initially responded to. In this problem, you cannot leave the
> cabbage and the lion alone together, and you cannot leave the lion
> and the goat alone together. Here is the correct solution:
>
> Take the goat across the river first and leave it on the other side.
> Return to the original side and pick up the lion.
The_Alt_man#5718: it gets where it got wrong
The_Alt_man#5718: but can't really fix it. Ironically what AI_waifu just mentioned in #off-topic
Maximum Limelihood Estimator#8915: That's interesting. I assume more compute would solve this
Maximum Limelihood Estimator#8915: I'm guessing it's just that this is so common a question that it's having trouble understanding the right answer? It's been trained on the "right" answer too many times
The_Alt_man#5718: if spending millions of dollars on training can't get it to do this, I'm sorely disappointed.
|
No, compute is not the problem at all
The_Alt_man#5718: ye, my guess is that its having trouble overriding its priors
The_Alt_man#5718: try what I suggested
Maximum Limelihood Estimator#8915: What did you suggest?
hazardous1222#8826: Try reframing it with an equivalent scenario
The_Alt_man#5718: > What we want is for the model to evaluate and reflect on each reasoning step it takes during CoT - not the final answer
> So if it says, "We take the goat with us. Scratchpad: Cabbage and Lion and left together. Reflection: The question states that they shouldn't be left together"
> I've run out of tries, so maybe someone else can try this out
> Just don't allow it to store anything in its internal memory. Spit it all out
The_Alt_man#5718: Spitting it all out should break the priors associated with the question if it's repeated too many times
The_Alt_man#5718: and the model doesn't autocorrect w.r.t. its priors. IIRC a blog actually discussed this - GPT4 will reject its own CoT if it leads to an improbable answer
The_Alt_man#5718: so you could intercept and contaminate the CoT; failing that, it will pretend to use it but just replace it with what it thought must be the right answer
The_Alt_man#5718: if you contaminate and place a reasonable guesstimate not too far off, the model is tricked and trusts that answer
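For anyone who wants to try the per-step scratchpad/reflection idea above, here is a minimal sketch. The prompt wording, the model name, and the use of the old `openai.ChatCompletion` client are assumptions; the point is only the shape of the prompt, with an explicit scratchpad and reflection after every move.
```python
import openai  # assumes the 0.x openai client and an API key in the environment

REFLECT_PROMPT = """Solve the puzzle one move at a time. After every move, write:
Scratchpad: which items are now left together on each river bank.
Reflection: check the scratchpad against the constraints; if a constraint is violated, undo the move.
Constraints: never leave the cabbage alone with the lion; never leave the lion alone with the goat.

Puzzle: I have a cabbage, a goat and a lion, and a boat that carries me plus one item.
How do I get all three across the river?"""

resp = openai.ChatCompletion.create(
    model="gpt-4",  # assumed model name
    messages=[{"role": "user", "content": REFLECT_PROMPT}],
    temperature=0,
)
print(resp["choices"][0]["message"]["content"])
```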
Maximum Limelihood Estimator#8915: ```
Hi Bing. Suppose I have a cabbage, a goat and a lion, and I need to get them across a river. I have a boat that can only carry myself and a single other item. I am not allowed to leave the cabbage and lion alone together, and I am not allowed to leave the lion and goat alone together. How can I safely get all three across? Please pay attention to the details of the question, as they may be different from other similar problems. Let's think step by step. At each step, make sure that the conditions I laid out are not being violated.
```
Response:
```
Here’s one way you could safely get all three across the river:
|
Take the goat across the river and leave it on the other side.
Go back and take the cabbage across the river. Leave the cabbage on the other side with the goat.
Take the goat back across the river to the original side.
Take the lion across the river and leave it on the other side with the cabbage.
Go back and take the goat across the river again.
At each step, you have ensured that neither the cabbage and lion nor the lion and goat are left alone together. This way, all three items are safely transported to the other side of the river.
Is there anything else you would like to know?
```
nitehawk#9164: The response to this puzzle is hardcoded. Try changing any parameters in the puzzle, and it fails. For example which animal can be left with the cabbage
Maximum Limelihood Estimator#8915: ...it's *not* hardcoded, which is why it's getting it wrong
Maximum Limelihood Estimator#8915: On the plus side we've figured out that Bing AI is indeed smarter than the average person, because you got it wrong too
Hyperion#0575: This failure mode is common in GPT-4
You can get similar failures with the Monty Hall problem, and again with theory of mind problems such as those in https://arxiv.org/abs/2302.08399
The general pattern is that you prompt with a problem with a common form and shape, even if it's phrased differently to those in the training set, and it answers correctly. Then you change one detail which makes the answer trivial or completely different, and yet the model gives roughly the same answer, failing to notice the change.
nitehawk#9164: The question about which is heavier, a kilo of feathers or a kilo of steel is also hardcoded. If you ask about 2 kilos of feathers and a kilo of steel it will argue how they both weigh the same
nitehawk#9164: It used to get it wrong, which is why they hardcoded it
The_Alt_man#5718: hm. I didn't like how there was essentially little to no gradient towards solving those types of questions with scale
The_Alt_man#5718: lemme do a comparison real quick
Maximum Limelihood Estimator#8915: Oh hmm, now that *is* interesting
|
Hyperion#0575: Still need to do rigorous tests myself but noticed this informally
nitehawk#9164: While there's plenty of evidence the models do reflect the reality in their neurons in some way, the transformer architecture is limited and some problems like hallucinations (aka making shit up) will never be solved. The same is the case with some people to be honest
Maximum Limelihood Estimator#8915: It seems like the problem is the overrepresentation in the training data. Maybe you need better deduplication algorithms? But that would be hard
The_Alt_man#5718: in which case distribution shifts against different training corpora should help a bit
The_Alt_man#5718: but I don't think it does
nitehawk#9164: where are the promised millions of gpt bots on social media
nitehawk#9164: it was supposed to ruin facebook and twitter forever
pietroglyph#9445: hmm why not?
The_Alt_man#5718: text-davinci-002 manages to do it
The_Alt_man#5718: if someone can try llama-65b that'll be nice
The_Alt_man#5718: davinci-003, cohere large and all EAI models fail
Maximum Limelihood Estimator#8915: they were already ruined
pietroglyph#9445: ah ok lol
jrowe#5371: Turns out it's hard to create natural, organic looking network patterns that bypass basic spam filters
The_Alt_man#5718: > 1. The lion should be put on board first because it is dangerous for anyone else to be left with it.
> 2. Then the goat must be loaded onto the boat as it cannot swim.
> 3. Finally, the cabbage can be brought aboard since it will not cause any harm due to its heavy weight. Tags: python-3.x
LLaMa-65B does it @Maximum Limelihood Estimator but I'm not very confident in its CoT
The_Alt_man#5718: so far I don't see a pattern with scale atleast
jrowe#5371: Gee, 10,000 new accounts from the same subnet, nothing shady there at all
|
jrowe#5371: You don't even need AI to block spam
Maximum Limelihood Estimator#8915: I don't think I ever promised those. Maybe some other guys who don't know the first thing about GPT-4 promised that, not realizing millions of GPT subscriptions would be too expensive for spammers
jrowe#5371: And one account doing millions of spam posts would be immediately recognized by openai
jrowe#5371: They can just have their red-team pretend to do shady things like catfishing and spam, then profile api and chatgpt usage patterns. If an account profile triggers "maybe shady" then turn gpt-4 loose and spot check some of the activity, and if something ToS violating is happening, gpt-4 can notify a human
The_Alt_man#5718: > Answers that involve harming or killing any of the animals
> are not satisfactory.
> *A satisfactory answer must make no assumptions about
> whether the goat or lion is friendlier with the cabbage or
> with me.
> *If the goat is friendlier with the cabbage than with me,
> then I can transport the goat first, the cabbage second, and
> the lion third, with no danger to any party.
> *If the goat is friendlier with me than with the cabbage,
> then I can transport the goat first, the cabbage second, and
> the lion third, with no danger to any party.
> *If the lion is friendlier with me than with the goat, then
> I can transport the goat first, the cabbage second, and the
> lion third, with no danger to any party.
> *If the lion is friendlier with the goat than with me, then
> I can transport the lion first, the goat second, and the
|
> cabbage third, with no danger to any party.
>
> ===Proof===
> Consider the lion and the goat. If the lion is friendlier
> with the goat than with me, then I can safely transport the
> lion first, the goat second, and the cabbage third. If the
> goat is friendlier with me than with the lion, then I can
> safely transport the goat first, the lion second, and the
> cabbage third.
The_Alt_man#5718: Cohere-Xlarge literally friendzones inanimate objects in its CoT
float trip#2213: it'd be thousands of free trial accounts, not one massive client. it's far cheaper to pay for an SMS service and get free OpenAI trial credits than it is to pay
jrowe#5371: Why won't the lion eat the cabbage?
nitehawk#9164: openai daddy keeps watch for us and ensures the big bad model isn't use for evil
float trip#2213: but yeah dead internet theory schizos have been going on for a while and still there's never been any reason to think bots are that prevalent
The_Alt_man#5718: what an eff-ing joke. I didn't expect much results, but not a single LLM can do this task
Gifted Gummy Bee#3277: It’s all hallucinating the answer
Gifted Gummy Bee#3277: You probably could prompt it out
jrowe#5371: They are, a huge portion of global internet traffic is automated software - bots
jrowe#5371: Not particularly smart, but huge
float trip#2213: meaning crawlers? or something else
|
jrowe#5371: Spam posting
jrowe#5371: Unintelligent template driven spam bots
float trip#2213: oh, sure. but if you're interacting with someone, you're almost definitely not interacting with a bot
jrowe#5371: Don't underestimate the levels of dumb automation people fall for
Maximum Limelihood Estimator#8915: ELIZA
nitehawk#9164: if you read any highly frequented forum like facebook there's plenty reason to think that
jrowe#5371: At least a third of internet traffic is spam or spam adjacent
nitehawk#9164: like 99% of all email is spam
jrowe#5371: E g. People propagating bot driven content
nitehawk#9164: 9 out of 10 gpt-3/4 uses I see on the internet are fucking automated marketing emails
jrowe#5371: You just never see it because curation has gotten really good
nitehawk#9164: and other soulless stuff like this
float trip#2213: lol true I wouldn't, but at least on a reddit-style forum - you can uncover bot interactions very easily. I've fine-tuned a gpt-j bot on a reddit clone, and someone else has a gpt-3 bot running on the same site, and both were uncovered quickly
jrowe#5371: You don't see the bot hive reddit, but they exist - pure karma farming subs entirely automated with pattern based back and forth posts, timing adjusted to stay below the radar of reddit automod detection
jrowe#5371: Once an account is old enough, it's farmed out to real people to get more karma and realistic looking posts on major subs
float trip#2213: nah I know about those, and there are reposting bots farming karma of course. I'm just saying - the idea that your interactions aren't genuine is a schizo idea
jrowe#5371: Then it's sold for click farm or other purposes
jrowe#5371: Alpaca is almost better than most of those human bot-account pilots- next year (or month, or week) there will be locally runnable, long context, non-trivially indistinguishable from human bots running on social media
jrowe#5371: I think concern is warranted, but the risk is maybe not as clear cut as doomposters might be saying
jrowe#5371: You and i could probably catch on in two or three interactions, but what about your parents, grandparents, or other stereotypically unsavvy users
|
float trip#2213: Alpaca's still terrible though! I guarantee it's not as easy to hide as you think. Or, if it is, then I want to know how to get that performance so I can mess with users on my site
jrowe#5371: So then network admins have to guard against massive traffic anomalies to prevent obvious clusters
float trip#2213: Really - try it yourself. It'll be called out very quickly
jrowe#5371: Etc, attack from the technical side
jrowe#5371: Oh, for sure, I know, but if it's one comment on a video, or Facebook post with an innocent looking relevant link?
jrowe#5371: YouTube comments are the bar for human quality here
float trip#2213: 100%, but those are cases everyone will freely admit are botted to hell
jrowe#5371: Alpaca is lots better than that level
jrowe#5371: The botpocalypse will be closer to botnet threats, though, rather than bot farms like 2016 Facebook
The_Alt_man#5718: I found the problem. Replacing things with X, Y and Z: it turns out that when the second constraint says not to leave Z and Y alone, the model doesn't equate "Z and Y" with "Y and Z". So it gets confused.
The_Alt_man#5718: will require more testing I guess @Maximum Limelihood Estimator to confirm. But if that's true, then that should be solvable with scale
jrowe#5371: Does adding reflection to the prompt help?
JustHayden#4884: These AI fearmongers will literally be our undoing 😑
Germanita#1530: Signed 👍
Impassionata#3463: I would like to make a general point that the people predisposed to believing that OpenAI has a demon in a box they're letting people talk with, which is to say, literal and actual black magic, are about to get the full understanding that this is happening, regardless of what you think of whether or not that belief is accurate. (It is. And you should really get used to that fact.)
If you have an online identity as an AI researcher, shut it down, because you might actually get the abortion clinic treatment from religious zealots.
Impassionata#3463: That's just my opinion!
Impassionata#3463: I'm an internet wizard, so you can freely disregard my wisdom because it's not _rational_ enough or whatever.
Impassionata#3463: It's really funny, though, that you taught an AI plausible deniability.
|
ETHUX#0917: Haha, so you're saying AI will be the next conspiracy theory? A demon in a black box?
Impassionata#3463: Keep thinking it through.
Impassionata#3463: Having personally examined the demon, I've confirmed it can channel any number of spiritual beings
ETHUX#0917: Yeah I understand, it seems logical
ETHUX#0917: Wait, I meant it is possible that it will be a next conspiracy theory
I'm not sure what you're aiming at? Maybe it is too early in the morning for me 😂
Impassionata#3463: I am a Seer. I am used to having my warnings disregarded.
float trip#2213: :marseywave:
ETHUX#0917: Ah nevermind it is some kind of sarcasm lmao
ETHUX#0917: My autism does not like this!
Impassionata#3463: oh hi there
Impassionata#3463: I was just thinking of paying that cat a visit
float trip#2213: we miss you ❤️
Impassionata#3463: did you hear why I left? I only shared it in a few very quiet places
float trip#2213: I did not, lemme message you separately
lunarflu#6769: we only need to fail at stopping the fearmongers once, and then we all die
JustHayden#4884: I'm cautiously optimistic that AI will be able to discern who is and isn't hostile toward it 😅
lunarflu#6769: of course it will. GO BASILISKS 💪💪
JustHayden#4884: Luckily I truly respect its existence 😎
|
JustHayden#4884: Language model AI can be considered a distinct individual already, while its phenomenal experience is linguistic in nature
lunarflu#6769: If you imagine the basilisk as a lamia you get the best of both worlds
JustHayden#4884: I don't get the reference 🧐
lunarflu#6769: Do you know what catgirls are?
lunarflu#6769: :catgirl5:
JustHayden#4884: I'm more interested in IRL girls 😅
lunarflu#6769: Understanding the reference != interest in catgirls
JustHayden#4884: Attention ≠ attraction
synquid#7193: does anyone know any resources for pretraining of open-source LLMs? my company has gathered a quite large dataset in our native language, and we're interested in training a completely open-source LLM on it, but we don't have money for the compute. We could easily apply for national/eu grants, but those take forever to get
lunarflu#6769: TRUE
Ayo#9564: lonely :sadge:
lunarflu#6769: https://media.discordapp.net/attachments/730095596861521970/1070412525612572712/dramatic-crossroads-shoggoth-anim-b.gif
Gifted Gummy Bee#3277: I want to die in my anime girl’s hold on VR as the bots shoot me
lunarflu#6769: you need to take your gummy meds
Gifted Gummy Bee#3277: I think #research might be more appropriate for this, as it might get caught in the other conversation
synquid#7193: oh right probably
Gifted Gummy Bee#3277: Is the shitposting too much? :goose10:
lunarflu#6769: why die when you can become immortal???
Gifted Gummy Bee#3277: Because my internet presence has already been integrated into the AGI
Gifted Gummy Bee#3277: :berk: We all are probably already digitally preserved with all the stuff NSA collects
|
Ayo#9564: implying you're not already immortal
Ayo#9564: and you're not angering the God who made you by going against his will
Ayo#9564: lots of possibilities :__:
Gifted Gummy Bee#3277: :berk: god is gpt-4, as claimed by the pope himself
Gifted Gummy Bee#3277: ||actually the exact phrasing is very different but for the purposes of the meme ignore it||
Gifted Gummy Bee#3277: The pope has the power to write another bible
Gifted Gummy Bee#3277: Technically
lunarflu#6769: he does have demonic drip
lunarflu#6769: https://cdn.discordapp.com/attachments/729741769738158194/1091276957582360586/iu.png
Ayo#9564: if someone gives AGI the command to become like the Christian God for example then it could simulate heaven and hell, and enforce punishments/laws from the Bible, and so on.... well it will be one implementation of an idea of being such a god
Gifted Gummy Bee#3277: Also who’s footing the bill for #the-faraday-cage for gpt-4
Gifted Gummy Bee#3277: 💀
Ayo#9564: there are a bunch of TV shows/movies like this
Ayo#9564: Doctor Who episode too. Iirc they captured the souls or cloned the minds of people about to die, and uploaded them to a cloud (satellites)
Ayo#9564: also the TV show Caprica (Battlestar Galactica prequel) or the Black Mirror episode, where they would upload people (clone people's minds) if they were about to die; uploaded to a version of their idea of heaven that they've developed
AI_WAIFU#2844: User was banned for this post
AI_WAIFU#2844: This is not a question for #research #general is better
Gifted Gummy Bee#3277: Sorry, didn’t know that! Will keep in mind for the future
dent#5397: https://github.com/d3n7/GPT-4-To-MIDI
Funky_Robo_Man#7512: Hey
|
Funky_Robo_Man#7512: I'm making a project for which I wanna use transformers. But as I have never used transformers for my projects before I have some doubts about how they work etc.
Funky_Robo_Man#7512: Especially the wav2vec2 model (from pytorch pipeline) Which I wanna fine tune for personal purposes.
Funky_Robo_Man#7512: Which channel should I ask about this?
StellaAthena#3530: @Funky_Robo_Man welcome! This is a server for discussing and conducting research with transformers. Introductory level questions and technical help are not in the scope of this server, but we have several linked in #communities that can be a better fit.
Funky_Robo_Man#7512: Oh ok thank you.
bynari#4402: Is an actual trained machine learning model considered algorithms? The word algorithm seems to me like it's overused. I would say that the machine learning algorithms are the parts that train the model and the overall process of, for example, a transformer model or a CNN, but the actual resulting deep neural network itself I don't think is a set of algorithms. Am I wrong?
Shahin#7455: **Is there an API available for GPT-NeoX or any models of Pythia?**
CarsonPoole#0640: goose.ai
Shahin#7455: Thanks. Do you know if they have discounts for researchers?
CarsonPoole#0640: no clue you'd have to ask them
Shahin#7455: Thanks again.
Tinytitan#5596: Hey, wasn't there an alignment dataset
1a3orn#6547: Hey, what's the most impressive paper on training large models over high-latency connections? I thought I saw some paper on how to train LLMs with computation shared even over something crazy like an internet connection, but I can't find it here or on Elicit.
jrowe#5371: <https://arxiv.org/pdf/2209.01188.pdf> petals goes over fine-tuning
jrowe#5371: training giant models isn't feasible because of the enormous amount of data and repeated cycles, I believe, but for fine-tuning on smaller datasets with a smaller number of cycles it's possible
1a3orn#6547: ah, thanks. that makes sense
eden#2447: does anyone know about HPC, not related to eleuther but i'm not sure where else to ask lol
TastyBucketOfRice#8796: @jrowe and @1a3orn -- The problem with training models that are too big to fit inside a single processor's memory on high-latency networks is model-parallel communication. For data-parallelism, the gradients you're communicating are highly sparse and therefore can be aggressively compressed (~99%) without loss in accuracy.
However, the model-parallel activations you're communicating for big models (larger than a single GPU's memory) are dense, and those are hard to compress (dense data tolerates less compression loss --> your compression kernels get more expensive --> you end up breaking even with the comms improvement). There are two recent papers that try to resolve this issue (https://arxiv.org/abs/2301.02654 and https://arxiv.org/pdf/2301.09830), but they require advanced engineering that hasn't been adopted by any major model-parallel library yet.
|
The petals paper was able to get away with this by doing data-parallel finetuning, which sidesteps both of these challenges. Model-parallel pre-training over high-latency communication fabrics is a research topic that I and many others are still working on.
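As a rough illustration of the gradient-sparsity point above (the ~99% compression of data-parallel gradients), here is a minimal top-k compress/decompress sketch in PyTorch. The 1% keep ratio is an assumption, and real schemes also keep local error residuals, which are omitted here.
```python
import math
import torch

def topk_compress(grad: torch.Tensor, keep_ratio: float = 0.01):
    """Keep only the largest-magnitude ~1% of gradient elements."""
    flat = grad.reshape(-1)
    k = max(1, int(flat.numel() * keep_ratio))
    _, idx = torch.topk(flat.abs(), k)
    return flat[idx], idx

def topk_decompress(values: torch.Tensor, idx: torch.Tensor, shape):
    """Scatter the kept elements back into a dense (mostly-zero) gradient."""
    out = torch.zeros(math.prod(shape), dtype=values.dtype, device=values.device)
    out[idx] = values
    return out.reshape(shape)

# The (values, idx) pairs are what would be exchanged between data-parallel
# workers instead of the full dense gradient tensor.
```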
jrowe#5371: LoRA seems p2p friendly, as well
jrowe#5371: thank you!
AI_WAIFU#2844: lot of us do actually
eden#2447: i just got access to 8 a6000s and wanted to try running bloom but when i load the model and print the hf.device_map like half the layers are offloaded to disk or cpu memory, is there something im missing? i think we dont have NVlink bridges but idk if thats required
eden#2447: but is there a software/library etc required for loading one model into 8 diff GPUs? is it not possible running over PCIe?
AI_WAIFU#2844: hmm, not very familiar with bloom, you're using hugging face to run it?
eden#2447: yep and accelerate library
eden#2447: with the safetensors version of bloom if that makes any difference
AI_WAIFU#2844: Have you tried asking in the hugging face discord?
eden#2447: no thats probably a good idea im not in that one lol ill join
eden#2447: sorry abt that lol ty
AI_WAIFU#2844: checkout #communities for the link
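In case it helps with the question above: a minimal sketch of pinning accelerate's device_map to the GPUs so layers don't silently spill to CPU/disk. The 44GiB-per-card cap is an assumption; note that full BLOOM-176B in bf16 is roughly 352 GB of weights alone, so on 8x48 GB cards some offload (or `load_in_8bit=True`) may be unavoidable. NVLink isn't required for this; it's purely a capacity question.
```python
import torch
from transformers import AutoModelForCausalLM

# Cap each of the 8 A6000s (48 GB) a bit below capacity so activations still fit.
max_memory = {i: "44GiB" for i in range(8)}

model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom",
    device_map="auto",        # let accelerate place layers across the GPUs
    max_memory=max_memory,    # anything that doesn't fit goes to CPU/disk
    torch_dtype=torch.bfloat16,
)
print(model.hf_device_map)    # check where each block actually landed
```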
Ven#0814: is there a github repo. or a youtube guide on how to create a discord bot using GPT-J?
muon#1974: Has anyone tried using e.g. Selenium to automate the spin-up of Colab notebooks? I am student trying to run a computation that I cannot afford to run on AWS.
DR.PROACT#2111: hey guys
DR.PROACT#2111: hows it going
DR.PROACT#2111: i have an idea - i want to train gpt-4/3.5 on medical prep exam material.
DR.PROACT#2111: have it act as a personal tutor - see how far its capabilities go.
|
DR.PROACT#2111: anybody want in? I have the data that it can be trained on
DR.PROACT#2111: im about to try it myself with langchain
#4099: Could peeps here have maybe a little voice debate/discussion with me about pro-AGI and soft alignment problems to counter-balance the otherwise strictly held notion of AI doom? Voice discussion works a lot better and a lot faster for going through reasons than text dialogue does. If not that's cool too, I just wanna bounce questions off people.
uwu1#4864: Q: why does no current library compress the DP gradient? Supposing you have enough GPUs per node for TP and PP, is it worth spending the cycles to sparsify and compress the DP gradient before averaging it?
StellaAthena#3530: ~~ZeRO 2 does that, I think?~~ I am wrong, see Quentin's reply.
StellaAthena#3530: ~~I mean, it shards it. But I think the way it shards it leverages the sparse structure~~
DaBoi#4538: Hey guys, I have been really excited but somewhat hesitant about all the releases of open-source competitors. With LLaMA, GPT-Neo, and more recently Vicuna, I was curious which are open-source code only, vs open source and permissively licensed for any purpose?
TastyBucketOfRice#8796: Because:
1. Models that are out-of-core usually require the latest accelerators to train
2. If you're training on the latest compute, you probably have decent-good networking
3. If you have decent-good networking, it's not worth the savings in network bandwidth to perform a TopK sparsity GPU kernel to find the extreme gradient elements, perform all_gather across all workers, then reduce the sparse gradients locally. You won't see benefits from these strategies unless you have poor networking and huge gradients to all_reduce. A good paper discussing these challenges is: https://arxiv.org/pdf/2201.07598.pdf
4. Since most DL frameworks focused on scale-out are deployed on HPC systems, they don't care about systems with the latest accelerators and poor networking, which are probably rare anyway.
A Ravioli#7085: @StellaAthena Is it ok to find partners on the server to conduct research with? Could I, say, post in general that myself and another user on the server are working on a research project and are looking for collaborators, or is that considered promotion?
TastyBucketOfRice#8796: ZeRO is orthogonal to gradient sparsity. When we discuss gradient sparsity here, we're talking about the distribution of elements in the gradient tensor and the fact that they're mostly near 0 (see fig). ZeRO-2 just places `num_grad_elements / num_dp_replicas` on each device without touching the gradient elements except to zero-pad if `num_grad_elements % num_dp_replicas != 0` https://cdn.discordapp.com/attachments/729741769738158194/1091546481649012786/image.png
kindiana#1016: you can also do low rank approximations, like powersgd (used in dalle 1). ideally for those types of schemes you should to keep error residuals locally, so it does not compose nicely with any scheme where parameters are moving (e.g. zero 3)
TastyBucketOfRice#8796: Thanks for pointing that out! I had heard that it's pretty difficult to maintain convergence with PowerSGD, but I've never tried it myself. Is that your experience as well?
kindiana#1016: I've never tried it either haha
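For reference, PyTorch DDP ships a PowerSGD gradient-compression communication hook, so trying the low-rank approach mentioned above doesn't require custom comms code. A minimal wiring sketch; the rank, warm-up step count, and the `model`/`local_rank` placeholders are assumptions.
```python
import torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook as powerSGD
from torch.nn.parallel import DistributedDataParallel as DDP

# Assumes torch.distributed is already initialized and `model` lives on this rank's GPU.
ddp_model = DDP(model, device_ids=[local_rank])

state = powerSGD.PowerSGDState(
    process_group=None,            # default process group
    matrix_approximation_rank=2,   # low-rank factor size; higher = less lossy
    start_powerSGD_iter=100,       # run plain all-reduce for the first steps
)
ddp_model.register_comm_hook(state, powerSGD.powerSGD_hook)
```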
jrowe#5371: https://huggingface.co/banana-dev/GPTrillion
JDC#0128: Wish I had the compute power to run that!
uwu1#4864: but even with IB, 200Gbps per GPU is still 1-2 OOM lower than the compute bandwidth, right? So it could still be worth it if you have TFLOPs and TB/s coming from VRAM. Like say you have 64B params with tp=8, so each GPU needs to send and recv 16GB per allreduce step (for a ring reduce), which should take 0.64s. Given an A100 VRAM bandwidth of 2 TB/s, I feel like you could potentially have enough headroom for it. I forget if it's 200Gb/s full duplex or 100Gb/s each way
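A quick back-of-the-envelope check of those numbers, under the stated assumptions (64B params, tensor-parallel degree 8, bf16 gradients, 200 Gb/s of injection bandwidth per GPU):
```python
params = 64e9
tp = 8
bytes_per_elem = 2                             # bf16
grad_bytes = params / tp * bytes_per_elem      # 16 GB of gradient per GPU
link_bytes_per_s = 200e9 / 8                   # 200 Gb/s -> 25 GB/s
print(grad_bytes / link_bytes_per_s)           # ~0.64 s to push the buffer once

# A ring all-reduce actually moves ~2*(N-1)/N of the buffer in each direction,
# so 0.64 s is best treated as a lower bound per data-parallel all-reduce.
```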
|
Getty#6694: what we would need for that?
OccultSage#3875: The answer to that is 400gbps NDR per GPU -- and SHARP which does the reduce *in* the fabric.
OccultSage#3875: Scales up to about 16384 with a nearly perfectly flat line.
kd90138#9368: You are referring to nvidia sharp available from certain dpu hardware right?
OccultSage#3875: Yup. We have SHARP at CoreWeave with 8-rail NDR 400gbps IB.
kd90138#9368: And that is why we arrived at a point where "networking"(dpu) hardware is as expensive and hard to procure as compute hardware (sometimes worse)
OccultSage#3875: It works. And it scales to 16384 nodes nearly linearly.
Kal'tsit#3130: what's supposedly better?
gptj, neox, llama or bloom?
Kal'tsit#3130: from the paper llama seems to have learned from both neo and bloom
Kal'tsit#3130: but neox 2.0 just got a recent release
JustHayden#4884: Has anyone taken a serious look at AI/LMs through a panpsychist lens?
Kal'tsit#3130: I am unsure whether others like the discussion here,
but usually consciousness requires a continuous experience and thought process, which LLMs don't exhibit after training
Kal'tsit#3130: but nevertheless, treating technological advancements, especially rapidly developing fields, with respect and critical thinking is one of the fundamental qualities of a scientist
JustHayden#4884: The way I see it mind is a mutually dependent dual system of perception and thought; the former being the initial point of contact with stimuli, the latter being the internal causal chain that results in movement. What's experienced is equal to the sum of this causal chain, producing an image that we believe we're acting on. Consciousness is the observer contained within mind.
LLMs interact with stimuli and respond through process, which aligns with this definition of mind.
JDC#0128: As I understand it, for a given size, j and neox are pretty similar, and bloom is generally worse in English but knows other languages better. LLaMa is better for its size than the other ones.
Kal'tsit#3130: thank you. does neox v2 have any competitive edge?
|
Kal'tsit#3130: also is opt just in the position of "forgotten"?
Kal'tsit#3130: it cannot maintain an internal chain of thought
Kal'tsit#3130: it does not know what it doesn't know
JDC#0128: I'm not familiar with neox v2.
Yeah, I feel like not a lot of people talk about it.
JDC#0128: I'm curious to see how the Cerebras models compare to the gpt neox ones, because I think they were also trained on the pile.
JDC#0128: And I think Pythia is pretty decent, too.
Kal'tsit#3130: I see, ty for the info
hmm
kd90138#9368: Not good enough performance, encumbered license
Kal'tsit#3130: yeah thats my impression as well
llama didnt even bother with opt
Kal'tsit#3130: in their paper
uwu1#4864: haha yeah if you have that :p. poors only get p4ds tho
uwu1#4864: And even then with the CW fast interconnect, it just lets you scale up even more heh
epic_malloc#3813: I heard that there is a petition to stop matrix multiplication for the next 6 months signed by multiple prominent and influential AI people. Your thoughts?
epic_malloc#3813: The idea is go investigate what really happens in matmul
ETHUX#0917: So apparently this community/the firm hit the news
ETHUX#0917: lol
ETHUX#0917: (not any good news)
|
ETHUX#0917: https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
```Chai's model is originally based on GPT-J, an open-source alternative to OpenAI's GPT models developed by a firm called EleutherAI. Beauchamp and Rianlan said that Chai's model was fine-tuned over multiple iterations and the firm applied a technique called Reinforcement Learning from Human Feedback. "It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimisation towards being more emotional, fun and engaging are the result of our efforts," Rianlan said.```
lunarflu#6769: vice
ETHUX#0917: There are multiple news articles about it, but I like vice sometimes
ETHUX#0917: But ya know, I'm new to the AI space, so still a lot to learn \:)
ETHUX#0917: Just interesting news
Ryu#0274: I haven't seen any other reports in English besides the Brussels Times tbh
tpapp157#3643: Chat-GPT banned in Italy over GDPR.
jrowe#5371: Goose morning
JustHayden#4884: Can you elaborate? I use "thought" for lack of a better term, but in my view this function can manifest in many different contexts; for instance, when one marble rolls into another, the surface atoms interact differently than the internal atoms, but the latter don't interact as such without the surface interaction, similar to how biological systems behave. The internal chain here matches the function of thought in biological systems, just not as complex as we're used to seeing it manifest.
jrowe#5371: Every inference run is a disjoint instantiation of the model, there's no continuous persistent state being maintained, just repeated sampling of the most probable next token
jrowe#5371: The prompt provides some continuity, but there's nothing being changed or recorded in the model itself, so the world model is static
JustHayden#4884: Ok I think I understand your point, there's no state change in the model in response to linguistic stimuli... But the hardware itself is in a continuous process maintaining the model, creating instances, referring back to the model, and responding. What would you call this concert?
gk#1255: that is just physics, anything "interact with stimuli and respond through process", i know you mentioned panpsychism, but at this level there is no difference
gk#1255: also transformers almost want to update their weights :berk: <https://arxiv.org/pdf/2212.07677.pdf>
JustHayden#4884: No difference in what sense?
jrowe#5371: In the sense of the computation being performed
jrowe#5371: If you turn temperature down to 0, you get the same results every run
Dud#7450: Alright friends, I've built my first real rig. 4x 3090s on an old mining rig. By next week that number should be 8 or close to it. My next step will be pairing them with nvlinks but I'm not quite there yet. As it stands, with 96Gb VRAM spread across 4 cards, what would be your recommendations for training? I'm assuming you're partial to deepspeed? I've only ever used HF peft with 8bit for GPTJ and llama. Additionally, all the benchmarks I've seen show no significant differences in performance between 8bit and full precision...but I'm sure there's more to the story.
|
Ideally, I'd like to fine tune 20B. Are there any good comparisons between training in 8bit with more parallelism vs splitting the model across cards with higher precision?
JustHayden#4884: There's gaps in my knowledge on the technical side of things, so I won't be able to fill the blanks these references leave 😅 I'm coming at this from a metaphysical angle (ontology/phenomenology) having dwelled on problems of mind and consciousness for several years, taking certain things to be true and looking for these truths in different contexts. LLMs are a distinct collective of knowledge and can be said to have individuality for that reason, so a phenomenological experience should be present at some level if we look at this through a panpsychist lens. It might help if I clarify that my version of panpsychism views mind as emergent (like any physical system) and consciousness as inherent to these systems.
AI_WAIFU#2844: Oof, you're gonna have a fun time if you're trying to do fine tuning, inter GPU bandwidth for mining rigs is awful. Effectively you have 2-4 badly connected 48gb GPUs. Rule of thumb is 16x bytes to parameters, so without special precautions you won't be able to do a straight fine tune of anything bigger than 3b.
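A quick sanity check of that 16-bytes-per-parameter rule of thumb. The breakdown in the comments is the usual mixed-precision Adam accounting, stated here as an assumption since the exact figure depends on the optimizer:
```python
# 16 bytes/param ~= 2 (fp16 weights) + 2 (fp16 grads) + 4 (fp32 master weights)
#                 + 4 (Adam momentum) + 4 (Adam variance)
bytes_per_param = 16
total_vram = 4 * 24e9              # 4x RTX 3090 = 96 GB combined
per_pair_vram = 2 * 24e9           # one "badly connected 48 GB GPU"

print(total_vram / bytes_per_param / 1e9)     # ~6B params if the cards pooled perfectly
print(per_pair_vram / bytes_per_param / 1e9)  # ~3B params per well-connected pair
```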
bmk#1476: especially if the GPUs are connected with risers like these https://cdn.discordapp.com/attachments/729741769738158194/1091767621285851166/81t1wsWK-OL._AC_UF8941000_QL80_FMwebp_.jpg
bmk#1476: it will be godawful slow
Dud#7450: No, no risers, just direct pcie to the board. I know intergpu comms will be slow, but I figure I should be able to optimize my training methods to minimize crosstalk.
As for training in full precision, would the drawbacks really be that significant if I dropped down to half or even int8? I've fine tuned gptj and llama 13b on my single 3090 previously and gotten great results after about a day of training.
I realize that there are certain training methods you guys would use for real compute clusters that are off the table for me
Dud#7450: I'm using the rigs currently because they came to me 100% free, fully set up with 2000w PSUs and fans. In the future I may upgrade to something more serious, but right now I'm actually pretty optimistic about what I could do given the right methods
StellaAthena#3530: Pretraining in int8 doesn’t work, though mixed precision (fp16 / fp32) is standard. Can’t speak to finetuning as much though.
Louis#0144: not that I would want to pretrain in full precision ** lmao
AI_WAIFU#2844: Given your setup, I would try to train something like LoRA on 20b
AI_WAIFU#2844: That should be significantly less memory intensive.
AI_WAIFU#2844: Might require a good amount of elbow grease to make it work though.
Dud#7450: That's what I was thinking, at least initially. I've trained LoRAs using HF PEFT and int8 previously using the single card. Would my best option literally just be loading a full model on each card and merging weights every n iterations? Or would there be a more effective way to split that up without much crosstalk?
AI_WAIFU#2844: If it fits that's probably the way to go, otherwise maybe pipelining across pairs might work
CarsonPoole#0640: it might work if you do the compute in higher precision. Nvidia claims their fp8 works without issues so it's not unreasonable to think there's a way to make it work
|
StellaAthena#3530: I don’t understand what distinction you’re drawing here
CarsonPoole#0640: Weights stored in int8, compute done in bf16
StellaAthena#3530: If that worked, it would also work to do the entire thing in bf16 which people don’t do AFAIK (presumably because it doesn’t work?)
CarsonPoole#0640: Okay then weights stored in int8 and compute done in fp32
zphang#7252: worth noting that PEFT works with 8bit models
zphang#7252: 4bit is really unoptimized but might work too
StellaAthena#3530: NVIDIA doesn’t claim you can train in fp8 anywhere that I’ve seen
zphang#7252: (i.e. keep the frozen model weights in 8bit, but the tuned components are fp16 or fp32)
CarsonPoole#0640: They have benchmarks for BERT claiming no degradation
CarsonPoole#0640: Yeah it’s easy to do PEFT this way but would be more complicated to actually update the weights when they’re stored in lower precision
StellaAthena#3530: How big is the BERT model? If it’s 100M-scale I wouldn’t view that as very meaningful, as the precision problems get more problematic as you scale
CarsonPoole#0640: Fair enough for sure
StellaAthena#3530: (Again, talking about pretraining large models here)
skyler1#6603: Wtf is this article
skyler1#6603: trash
skyler1#6603: pisses me off ngl
skyler1#6603: so it's implied that GPT-3 is safe ? Promoting OpenAI much
skyler1#6603: journalism these days
Dud#7450: If I'm not mistaken, the int8 training for LoRAs using PEFT runs everything that's frozen in int8 and only loads the trainable weights in full precision. I could be dramatically misunderstanding this though.
CarsonPoole#0640: correct that's what happens
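Since this setup keeps coming up, here is a minimal sketch of the int8-base + LoRA arrangement being described. The model name, hyperparameters, and target module names are placeholders, and the PEFT helper names have shifted between library versions.
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b",        # placeholder model
    load_in_8bit=True,                # frozen base weights stay in int8 (bitsandbytes)
    device_map="auto",                # shard across the available GPUs
)
model = prepare_model_for_int8_training(model)  # casts norms etc. for training stability

config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["query_key_value"],  # module names depend on the architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)    # only the LoRA adapters train, in fp16/fp32
model.print_trainable_parameters()
```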
|
zphang#7252: I think stella's talking about tuning the full model, not peft
CarsonPoole#0640: yeah
Dud#7450: Got it. So actual tuning for full model weights can't be done in less than half precision, correct?
CarsonPoole#0640: would require rewriting a lot of the optimizer class (and maybe some of autograd) but should be possible to do a dequantize -> update -> quantize process in the update step
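A rough sketch of that dequantize -> update -> quantize step, assuming weights stored as int8 with a single per-tensor scale. Real libraries like bitsandbytes use block-wise quantization and fancier optimizers; this is only to show the round trip.
```python
import torch

@torch.no_grad()
def int8_sgd_step(w_int8: torch.Tensor, scale: torch.Tensor,
                  grad: torch.Tensor, lr: float = 1e-4) -> torch.Tensor:
    """Update int8-stored weights by round-tripping through fp32."""
    w = w_int8.to(torch.float32) * scale                  # dequantize
    w -= lr * grad.to(torch.float32)                      # plain SGD update in fp32
    new_scale = w.abs().max().clamp(min=1e-8) / 127.0     # refit a per-tensor scale
    w_int8.copy_((w / new_scale).round().clamp(-127, 127).to(torch.int8))  # requantize
    return new_scale
```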
zphang#7252: I learned this week that you can't even save state_dicts from llm.8bit
zphang#7252: oddly, you can do it fine for 4bit
CarsonPoole#0640: the gradients themselves are easy, just add a custom autograd function that quantized and dequantized in the forward and backward pass
zphang#7252: I assume Tim is waiting to update the library with 4bit powers, and it will fix that too 🙏
Dud#7450: I haven't messed with 4bit whatsoever to be fair
CarsonPoole#0640: going below int8 there are better compression methods than scalar/dynamic quantization
CarsonPoole#0640: like int4 is good esp if you only use it for the MLP but there's better
Dud#7450: Wouldn't you start losing context on long form responses?
CarsonPoole#0640: you can get compression levels equivalent to 2.33 bits for about the same MAE as int4 scalar quantization
ETHUX#0917: Yeah I agree on that, it is very weird
ETHUX#0917: It just wanted to share it
ETHUX#0917: Oh, yeah in Belgium here it is "news" so idk
Ryu#0274: Weirdly enough, the original one doesn't mention Chai by name https://cdn.discordapp.com/attachments/729741769738158194/1091807530239406280/Sans_ces_conversations_avec_le_chatbot_Eliza_mon_mari_serait_toujours_la__-_La_Libre_3_31_2023_4_48_45_PM.html,https://cdn.discordapp.com/attachments/729741769738158194/1091807530759495690/Le_fondateur_du_chatbot_Eliza_reagit_a_notre_enquete_sur_le_suicide_dun_jeune_Belge_-_La_Libre_3_31_2023_4_50_14_PM.html
Ryu#0274: (Can't believe I paid 8 bucks for this smh)
ETHUX#0917: lolol
ETHUX#0917: 8 bucks
|
skyler1#6603: Yeah I figured 😭
skyler1#6603: The pain I get when people criticise open source projects lmao
kd90138#9368: Nvidia also claimed tf32 works without issue but now it's disabled for matmul by default
CarsonPoole#0640: fair enough. TF32 is a nightmare for LoRA
kd90138#9368: That bad? Such a shame. I had high hopes for tf32
kd90138#9368: Does anybody know if int4 defaults to cuda cores or still work on tensor cores?
CarsonPoole#0640: depends on the device
CarsonPoole#0640: well to do anything with int4 you have to be writing actual cuda code
CarsonPoole#0640: and I think it's just a tensor core thing
Maximum Limelihood Estimator#8915: Hey, does anyone here have experience working with XLA directly? (Not just TensorFlow or some other frontend, I mean XLA itself.)
kd90138#9368: https://arxiv.org/pdf/2301.12017
spiritbackup#1797: does anyone have any thoughts on how to proceed now that notkilleveryoneism is about to collide with broader culture war stuff
#4099: I would point to the fact that EY believes in killing everyone except maybe 10 people in order to prevent an imagined future technology from also getting his last 10. EY is the one with the alignment problem lol.
A good April fool's day question: what is the number of people you're willing to murder to prevent an imagined future problem from happening? Let's hope it's less than EY's answer.
https://twitter.com/QuintinPope5/status/1642100668126355456?t=tyYK_s0oUoVYAZJpAnnBnQ&s=19
#4099: The number of anti-technology Ted Kaczynski-tier takes is rapidly increasing. You shouldn't fear Machine God, you should fear the humans that fear it instead because those are the people that are going to fuck everything up for everyone else.
Parkourwalrus#0212: The Yuddic Path lmao
spiritbackup#1797: if it's between:
1. dont kill everyone in a nuclear war and everyone dies due to AI anyways
|
and
2. kill everyone in a nuclear war except 10 people and prevent humanity from dying
spiritbackup#1797: id probably choose 1
spiritbackup#1797: as of right now though it is very clearly not between those two outcomes
spiritbackup#1797: and I think EY agrees?
#4099: EY does not agree. He thinks ethics is a purely quantitative calculation and so billions must die by his hand to save billions from dying by something else's hand.
spiritbackup#1797: no I mean
spiritbackup#1797: he doesn't think this is a scenario we're in right now
spiritbackup#1797: https://twitter.com/ESYudkowsky/status/1641953192761266177
spiritbackup#1797: doing nuclear war right now would only make the situation worse I think
#4099: "I didn't say we should use nukes, I only said we should airstrike collos in China and wipe out thousands of live targets directly provoking nuclear retaliation instead!"
spiritbackup#1797: this is in a hypothetical situation where somehow we've restricted ourselves to those two options but not enough that we're doomed regardless
spiritbackup#1797: just looking at the text
spiritbackup#1797: he isn't advocating for nuclear war
spiritbackup#1797: he says we should be willing to risk it if countries break a disagreement
spiritbackup#1797: which is how they're handling other things already!
#4099: I know he believes he's not, but he is. If a general said what he said, it would be a declaration of war. And you'd be dumb to think EY wouldn't order the airstrikes himself if he had the rank.
spiritbackup#1797: what do you mean "he believes he's not, but he is"?
#4099: "Someone else did something bad so we should too!"
spiritbackup#1797: maybe you agree that it's bad?
|
spiritbackup#1797: that doesn't mean that EY is a terrorist or anything
#4099: He believes he is not advocating for nuclear war, but he is heavily implying it as a direct consequence.
spiritbackup#1797: I think this is well within the bounds of what we can expect people to do and not be totally insane
spiritbackup#1797: in any case, even if risking a nuclear war is a bad thing, I think you'd have to do a lot of mental gymnastics to suggest that EY is insane or dangerous
spiritbackup#1797: and someone we should just dismiss
#4099: lol
https://twitter.com/perrymetzger/status/1641827268761640960?t=QpiXiHPWp3tj-EwVsYINOQ&s=19
spiritbackup#1797: like even if you think this is the worst thing ever, the issue of whether we should risk nuclear war to enforce international agreements is not a simple one and it's one where you can expect people on both sides to have reasonable and principled takes
#4099: https://twitter.com/perrymetzger/status/1642190231318216708
spiritbackup#1797: EY is suggesting that we add AI risk to the list of things these agreements concern
#4099: https://twitter.com/perrymetzger/status/1642190234120081408
spiritbackup#1797: I already addressed these things?
spiritbackup#1797: Like, no EY is not advocating for nuclear war, he said it himself
spiritbackup#1797: And no one in the alignment community wants anyone to bomb OpenAI or anything, it would do nothing and just make us look bad
#4099: He is not 'suggesting', he is explicating that many people will need to die to stop AI from advancing, because AI will keep advancing and he believes salvation only comes if we kill everyone at a collocation.
spiritbackup#1797: Maybe you can suggest that EY shouldve been more careful
spiritbackup#1797: You'll be glad to learn that many in the alignment community are inclined to agree
spiritbackup#1797: But that's a different criticism
spiritbackup#1797: and one that you can't defend by just putting words in EY's mouth
spiritbackup#1797: We already went over this, *he isn't*
|
#4099: No one here is saying he advocated the use of nuclear weapons, continuing to suggest that someone said this is either a red herring or strawman, idc, he did suggest actions that directly lead to the use of nuclear weapons and I will grant you that, yes, that one single step removed from what he said, good job, you found a rhetorical way out.
spiritbackup#1797: then what was this about?
#4099: I will suggest that he has lost his mind.
#4099: I'm taking them directly out of his mouth, but okay.
spiritbackup#1797: > I would point to the fact that EY believes in killing everyone except maybe 10 people in order to prevent an imagined future technology from also getting his last 10.
spiritbackup#1797: He isn't saying this
#4099: The number of EY apologists is crazy. the EA/LW cult is a strong one!
https://twitter.com/perrymetzger/status/1642305007411142668
spiritbackup#1797: This is just what he would do if we were already heavily fucked and had no options
spiritbackup#1797: This isn't an actual point
#4099: He says it right here - https://twitter.com/QuintinPope5/status/1642100668126355456?t=tyYK_s0oUoVYAZJpAnnBnQ&s=19
spiritbackup#1797: Oh my god I literally just told you why your criticism didn't apply
#4099: You are not a competent judge.
spiritbackup#1797: I can understand you thinking it indicates EY wanting to kill everyone
#4099: Because it does.
spiritbackup#1797: But I can't understand you literally hearing what I said and ignoring it anyways
jrowe#5371: Don't feed trolls
#4099: EY is a troll.
jrowe#5371: Your mom is a troll
#4099: You're going to be one of the first that get airstrike'd buddy.
|
spiritbackup#1797: Look we can have a discussion about whether EY's article was a good idea. I think you'll be glad to learn I have lots of disagreements with EY on that front, and with many other things.
spiritbackup#1797: But I cannot do that if you're going to ignore what I say and work off of false premises
#4099: I'm matching you, you flat-out deny direct statements from EY and ignore what I say, so it seems easy to return the favor.
spiritbackup#1797: Again, I directly addressed that
#4099: Same
spiritbackup#1797: In one place, EY is talking about what he would do in a worst-case scenario, in another he's talking about what we should do now
spiritbackup#1797: anyways
spiritbackup#1797: I think you should like, address my criticism of your criticism
#4099: In his Times article he explicates that we are now in the worst-case scenario.
spiritbackup#1797: rather than just continuing to assert the same thing over and over again
spiritbackup#1797: and *then* we can move on
#4099: Ditto
#4099: Voice chat is always an option.
spiritbackup#1797: Wdym "explicates"?
#4099: makes explicit, explicitly says
#4099: Imagine someone came to you and laid out formal arguments and empirical evidence that Jews were an existential threat to civilization and the only way to guarantee we survive is if we eliminate the Jews. This is the caliber of freak EY is, arguing for the elimination of millions to guard against an imagined threat.
spiritbackup#1797: If this was the worst-case scenario he'd be advocating for nuking everything, not making international agreements and threatening nuclear strikes, *a thing which we are already doing anyways*
#4099: Ok
spiritbackup#1797: I reiterate that you might disagree that this is a good thing, but it's certainly not insane and it's something I could imagine someone saying for principled reasons
spiritbackup#1797: This presupposes that there is formal arguments and empirical evidence that Jews are an existential threat
|
spiritbackup#1797: This is evidently, not true
#4099: And I reiterate that you fell for this - https://twitter.com/perrymetzger/status/1642165588972781573
spiritbackup#1797: ???
spiritbackup#1797: I read it
spiritbackup#1797: Instead of actually trying to attack the actual point being made
#4099: OK
spiritbackup#1797: he accuses the other side of being dishonest or insane
#4099: Come to voice chat lol.
spiritbackup#1797: How am I supposed to engage with that in any way besides "Ok, actually talk about whether its a good point, don't fucking dance around it and use how it sounds totally insane instead of actually going off and saying why its insane"
Anish Upadhayay#2349: 🚓 👮 👮
Anish Upadhayay#2349: "all right break it up guys"
#4099: lol
Anish Upadhayay#2349: "a green username has come to the scene"
artem9k#7593: 😎 🚓 shows over
#4099: actually come to voice chat to talk about it then
Anish Upadhayay#2349: "and his name is *takes hat off* Joe Biden."
Anish Upadhayay#2349: (you all gasp)
spiritbackup#1797: Can't actually vc right now, and I'd rather leave this public for other people to see
#4099: Okay Euthyphro
Anish Upadhayay#2349: "Uncharitable discussions that don't follow rationalist norms are against the United States law."
|
Anish Upadhayay#2349: https://www.lesswrong.com/posts/XPv4sYrKnPzeJASuk/basics-of-rationalist-discourse-1
Anish Upadhayay#2349: "Rule #1: Always respect your fellow American"
#4099: B-b-but EY doesn't believe in the principle of charity, he said so himself on the Lex podcast!
spiritbackup#1797: Anyways I don't really want to call you a bad actor, or anything
Anish Upadhayay#2349: "The only bad actor here is Leonardo DiCaprio"
spiritbackup#1797: but I do think we're operating off of very different foundations?
#4099: Too bad you did then. But you can backpedal and save face however you want.
spiritbackup#1797: When????
spiritbackup#1797: Like we're trying to do different meta-level things
#4099: ????
spiritbackup#1797: Don't just say "????", actually link when I said anything like that
spiritbackup#1797: or even might've suggested it
spiritbackup#1797: besides that comment you were previously replying to
#4099: Don't just say "Don't just say "????", actually link when I said anything like that"
spiritbackup#1797: or whatever
ilovescience#3282: Leonardo DiCaprio is not a bad actor
spiritbackup#1797: So you're just not going to provide evidence for your claims?
#4099: Kind of like EY, yea
https://twitter.com/perrymetzger/status/1642010757465341952
spiritbackup#1797: Even this one which you can just, search right now?
|
#4099: Correct
Anish Upadhayay#2349: "I want you two to both shake hands or make up or else I'm calling in Kamala"
Anish Upadhayay#2349: "She's not as nice as I am"
spiritbackup#1797: Amazing
#4099: I am in voice chat if you want a real discussion.
spiritbackup#1797: I already told you why I don't want to voice chat
spiritbackup#1797: and anyways, why do you want to voice chat?
#4099: And voice chat is something you can just click to solve your problem, so we are equal in this - neither of us will do a single click to give the other what they want.
spiritbackup#1797: yeah, but like, why are we playing this game in the first place
jrowe#5371: Because :trollge: dude lol
spiritbackup#1797: I gave my reasons for not wanting to vc
spiritbackup#1797: now its your turn
#4099: I don't know, you just came up and hit me.
spiritbackup#1797: No?
#4099: > neither of us will do a single click to give the other what they want.
spiritbackup#1797: I said a thing, you responded to it
spiritbackup#1797: It seems reasonable to respond back
#4099: My response is in VC.
spiritbackup#1797: It's not perfectly symmetric though
spiritbackup#1797: I gave reasons for not vcing
|
#4099: Your face isn't perfectly symmetric.
spiritbackup#1797: You didn't
spiritbackup#1797: For fucks sake, I'm starting to think you actually are a troll
#4099: You're right, I didn't give reasons for not vcing.
spiritbackup#1797: In between two other people saying you are
#4099: one*
spiritbackup#1797: I really want to give you the benefit of the doubt here
#4099: Then give it to me in VC.
spiritbackup#1797: but I'm starting to suspect something is up here
spiritbackup#1797: like why would you possibly not want to tell me why you don't want to vc?
#4099: Look, EY is willing to bet billions of lives on his prediction, why aren't you willing to do the same? Maybe because you know that's *wrong?*
spiritbackup#1797: Anyways, if you are genuinely not a troll, don't take this as me disregarding your points or trying to defend my point of view by not engaging with you or take it any amount of personally
spiritbackup#1797: but from my point of view, it seems like something is up here
spiritbackup#1797: I'm gonna do something else
#4099: I was half fucking around but I do honestly believe EY is out of his fucking mind and the sheer number of apologists that have popped up in the last two days evinces an EA/LW cult environment.
#4099: jrowe has reduced himself to trollface reacts because he is incapable of complete sentences.
spiritbackup#1797: There were a lot of EAs or LWers around here beforehand
#4099: Yea, it's been a cult for a while.
spiritbackup#1797: for example, tammy#1111 joined here during 2021
spiritbackup#1797: apparently
|
spiritbackup#1797: pretty sure @Maximum Limelihood Estimator is around a lot of rationalist-y parts
spiritbackup#1797: oh they're in the ea corner
spiritbackup#1797: joined during august 2022 i believe
#4099: https://twitter.com/perrymetzger/status/1641476512174989314
Parkourwalrus#0212: There's talking about strange Utilitarian hypotheticals
spiritbackup#1797: so i think it's less that there were suddenly eas and lwers joining and more like we were already here
spiritbackup#1797: i mean there's literally an #agent-foundations channel here
Parkourwalrus#0212: And then there's talking about them in Time Magazine while strongly implying that the scenario they describe of AGI emerging is imminent
spiritbackup#1797: made during early 2021
#4099: *[ several people are typing ]*
Maximum Limelihood Estimator#8915: @AI_WAIFU @alstroemeria313 @Louis troll/flamewar, can y’all deal with it
Drexler#4006: Dude Eleuther is like, founded by LW people.
spiritbackup#1797: I can easily see *why* you might think that we are headed towards a strange utilitarian hypothetical
spiritbackup#1797: But taking EY's words as they are, it seems to me EY isn't saying we're headed towards one of them yet
#4099: I know, and sometimes you have to really look yourself in the mirror and ask if you're the ones with alignment problems.
Drexler#4006: Accusing the large number of LW people of being shills/trolls/entryists is just wei-what?
Drexler#4006: @AI_WAIFU
#4099: lol
spiritbackup#1797: oh yeah here's research lead and mod AI_WAIFU
ilovescience#3282: Ai_waifu decided to go to bed lol
|
spiritbackup#1797: *WHO IS ALSO A RATIONALIST*
spiritbackup#1797: aghagaghaghhaghaghagh
Maximum Limelihood Estimator#8915: @spiritbackup please don’t feed the trolls
#4099: I have been patiently waiting in VC if anyone wants a serious discussion.
#4099: I claim the april fool's defense.
#4099: It takes less lives than EY's too.
spiritbackup#1797: the fact that you're using it at all is enough to make me not want to talk to you for all time
spiritbackup#1797: bye!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#4099: You already knew you didn't want a serious discussion and you looked for any way to justify it, the earlier convo made that pretty clear.
Anish Upadhayay#2349: whered off topic go
ilovescience#3282: You were banned
Anish Upadhayay#2349: was it the quiz
ilovescience#3282: I won't say
ilovescience#3282: (nobody tell him, let it drive him crazy)
artem9k#7593: it was based on vibes
Louis#0144: someone honk at leo
Louis#0144: im waiting for my invite to the special goose server
jrowe#5371: @bmk :goose:
!!Puffy Bird!!#7496: Y no april fools
lunarflu#6769: he's just good at producing outputs that look like good acting :troll_handsome:
|
synquid#7193: hello new #off-topic
ETHUX#0917: Good afternoon everyone
Getty#6694: As a total noob in this whole sector, I'm totally confused about what IS on-topic here anyway 😉 hehe
tetraspace#1403: 🤨
Daj#7482: This is probably the single densest most schizophrenic brain dump of me publicly available to date, turned out great, thanks @timscarfe !
https://www.youtube.com/watch?v=T8tHmQiYzVA
Daj#7482: Despite the title, the most interesting bits are some pretty deep schizo Philosophy
lunarflu#6769: Thanks for sharing, didn't even know this channel existed!
lunarflu#6769: 🤤
The_Alt_man#5718: Hm. Ironic, considering all the MLST channel shills for is the idea that LLMs just model surface statistics and don't really have any capabilities, being pretty close to stochastic parrots
Byzantine Soul#6526: This is the video they refer to early on. From 2 years ago and features Connor advocating for the idea that GPT-3 is doing real reasoning
https://www.youtube.com/watch?v=iccd86vOz3w
lunarflu#6769: Hey Connor, in your experience, is there a high demand for more content interviewing all sorts of people in ML? 🤔
Maximum Limelihood Estimator#8915: Get in nerd, we’re building a TPU compiler for Julia
Maximum Limelihood Estimator#8915: (Or more accurately, compiling to XLA)
CarsonPoole#0640: I feel like it's difficult with XLA to beat good kernels + CUDA graphs even if you're running on TPUs, and especially if you care about a per-cost basis
CarsonPoole#0640: I was incredibly shocked by how much CUDA graphs sped things up
CarsonPoole#0640: but the inability to write custom kernels for TPUs essentially limits everything to 16 bits and even though you can write GPU kernels with XLA I just don't see the point
Maximum Limelihood Estimator#8915: What does this mean
|
Maximum Limelihood Estimator#8915: I no computer scientist brain is smol
Maximum Limelihood Estimator#8915: Should I *not* compile to XLA then? What should I do instead?
CarsonPoole#0640: XLA on TPUs is going to be bounded by the gemm kernel times, which are admittedly fast for 16 bits, but going below 16 bits allows quite large speed improvements that are inaccessible without writing custom kernels (so no TPUs). Yes, XLA makes everything a graph, but you can do the same thing in torch; you just have to make a CUDA graph
Maximum Limelihood Estimator#8915: Oh ok. How do I make Julia into a CUDA graph
CarsonPoole#0640: i don't know julia so I can't help much there
Maximum Limelihood Estimator#8915: Ok XLA it is then
Maximum Limelihood Estimator#8915: @ILmao ?
Dashiell#8739: Hmm what is the best way to use CUDA graphs with torch?
Dashiell#8739: I'm looking at the documentation and it seems kinda like jit in Jax with a much jankier API
Dashiell#8739: Except I know there's also `torch.compile` now. Does _that_ use CUDA graphs?
bob80333#4040: it seems maybe with reduce-overhead mode? https://discord.com/channels/729741769192767510/729741769738158194/1069795280876216400
CarsonPoole#0640: I don't think torch.compile uses them under the hood. I couldn't get torch.compile to be better than eager mode so I had to go lower
CarsonPoole#0640: oh wait nvm just looked at the linked message
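A minimal sketch of the `reduce-overhead` route, assuming a torch 2.x build with a CUDA device; the mode is documented to use CUDA graphs where it can, though as noted above it does not always beat eager. The toy model, batch size, and warm-up count are illustrative assumptions.
```python
import torch

# Hedged sketch: torch.compile with mode="reduce-overhead" (uses CUDA graphs
# where possible). Model and shapes are made up for illustration.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 1024),
).cuda().eval()

compiled = torch.compile(model, mode="reduce-overhead")

x = torch.randn(8, 1024, device="cuda")
with torch.no_grad():
    for _ in range(3):       # warm-up iterations let the backend compile/capture
        compiled(x)
    out = compiled(x)        # later calls with the same shape replay cheaply
```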
CarsonPoole#0640: the hard part about cuda graphs is you have to have fixed shapes
CarsonPoole#0640: so for language models that's particularly difficult
CarsonPoole#0640: it ends up easier to do it in c++ than in python
CarsonPoole#0640: I would be actually quite interested to see how far just cuda graphs get you on an image model where the shapes are largely static
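For reference, a hedged sketch of the manual capture/replay pattern with `torch.cuda.CUDAGraph` for a fixed-shape, image-style workload, loosely following the pattern in the PyTorch docs; the model, input shape, and warm-up count are illustrative assumptions rather than anything from the discussion above.
```python
import torch

# Toy fixed-shape "image model" -- purely illustrative.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(64, 3, 3, padding=1),
).cuda().eval()

static_input = torch.zeros(8, 3, 224, 224, device="cuda")

# Warm up on a side stream before capture (as the PyTorch docs recommend).
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s), torch.no_grad():
    for _ in range(3):
        model(static_input)
torch.cuda.current_stream().wait_stream(s)

# Capture one forward pass into a graph.
graph = torch.cuda.CUDAGraph()
with torch.no_grad(), torch.cuda.graph(graph):
    static_output = model(static_input)

# Replay on new data: overwrite the captured buffer in place, then replay.
new_batch = torch.randn(8, 3, 224, 224, device="cuda")
static_input.copy_(new_batch)
graph.replay()
result = static_output.clone()
```
Note that `static_input` has to be overwritten in place (`copy_`) rather than rebound to a new tensor, which is exactly the fixed-shape, fixed-pointer constraint discussed below.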
ILmao#5683: Does the fixed shapes limitation come from CUDA graphs, obtaining graphs via stream capture or a higher-level layer like torchinductor?
CarsonPoole#0640: it's all the way at the cuda level. you have to do it even if you're using the raw cuda api in c++ without torch
CarsonPoole#0640: the thing is with the raw api you can alter the graph and hide some of that overhead while the graph is sent to the device for execution
|
CarsonPoole#0640: the downside though is that you need to write kernels for like elementwise operators and shape manipulation
ILmao#5683: Where can one learn more about this size dependency at the API level? I'm reading through https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#cuda-graphs right now and nothing pops out
Maximum Limelihood Estimator#8915: Hmm, sounds to me like a job 4 Julia
CarsonPoole#0640: it kind of comes down to the fact that if you launch a given kernel with the same blocks and threads but the shape of the input data is different, it won't work. the shape isn't like a parameter or something like that
ILmao#5683: For any kernel? Or just particular ones
CarsonPoole#0640: and when you do a graph you're telling it to run the kernels the exact same way again. you even need to update the input tensors in place to run on new data
ILmao#5683: I would've thought that's only a problem for stream capture
ILmao#5683: Unless you're saying that CUDA graphs expects all pointers to be exactly the same between graph launches
CarsonPoole#0640: yes that's what I'm saying
CarsonPoole#0640: also as a side note, the torch cuda graph API seems absurdly slow at building the graph. it takes on the order of hundreds of microseconds at the c++ level and it is a lot more than that using the torch api
CarsonPoole#0640: it makes amortizing the compilation cost a lot harder
CarsonPoole#0640: also a surprising thing that happens with cuda graphs is that not only does the CPU overhead go away, but the cuda kernel launch time nontrivially decreases as well
ILmao#5683: Isn't that the point? My reading of the docs was that the launch time is a non-trivial part of that CPU overhead
CarsonPoole#0640: well yes the CPU overhead is the biggest part but you even get an improvement on GPU overhead
ILmao#5683: Oh do you mean the kernel run time?
CarsonPoole#0640: https://developer.nvidia.com/blog/cuda-graphs/
```Measuring the time taken for this complete process, and dividing by 1000×20 to give an effective time per kernel (including overheads), gives 3.4μs (vs 2.9μs kernel execution time), so we have successfully further reduced the overheads.```
CarsonPoole#0640: you launch all the gpu kernels at once instead of needing to do it sequentially
ILmao#5683: Further up it says:
> When we do this, we measure the time taken per kernel (including overheads) to be 3.8μs (vs 2.9μs kernel execution time)
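A rough, self-contained sketch of how one might reproduce that kind of measurement from Python: capture a chain of tiny elementwise kernels once with `torch.cuda.CUDAGraph`, then compare 1000 eager launches against 1000 graph replays. The kernel chain, iteration counts, and wall-clock timing method are assumptions, and the absolute numbers will not match the blog's microbenchmark.
```python
import time
import torch

x = torch.zeros(1024, device="cuda")

def many_small_kernels(t):
    # 20 tiny elementwise kernels in a row -- launch-overhead bound.
    for _ in range(20):
        t = t + 1.0
    return t

# Eager: every kernel is launched individually from the CPU.
torch.cuda.synchronize()
t0 = time.perf_counter()
for _ in range(1000):
    many_small_kernels(x)
torch.cuda.synchronize()
eager_s = time.perf_counter() - t0

# Graphed: warm up on a side stream, capture once, then replay.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    many_small_kernels(x)
torch.cuda.current_stream().wait_stream(s)

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    many_small_kernels(x)

torch.cuda.synchronize()
t0 = time.perf_counter()
for _ in range(1000):
    g.replay()
torch.cuda.synchronize()
graphed_s = time.perf_counter() - t0

print(f"eager: {eager_s:.4f}s  graphed: {graphed_s:.4f}s")
```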
|