~~~#4682: BILLIONS of dollars in VENTURE CAPITAL and still SPEZ has NO WORKING SEARCH BOX 2 show
~~~#4682: taken 4 absolute FOOLS
~~~#4682: I started reddit modding when I was like 13 and they STILL havent fixed the god damn search box
EricHallahan#1051: Congrats @kindiana on your patience.
https://twitter.com/dmvaldman/status/1635718372930523136
kram#1032: what advantage would byte level have over byte-pair level?
I might be wrong here, but iirc, Unicode operates on byte pairs. Bytes would mean more resolution than necessary to build arbitrary Unicode, doubling the necessary context length, *and* requiring to learn to *always* produce an even number of bytes...
And if you want to support arbitrary Unicode directly, that's trickier than you'd think, as what counts as one letter is extremely arbitrary and inconsistent. Like, in the Arabic block, iirc, you can stack basically infinitely many text modifiers and still have it count as a single letter. - If you've ever seen that weird glitch text that spans multiple lines, that's basically how that works.
It's therefore effectively "open". You cannot *enumerate* all possible unicode letters. At best you can give a reasonable cutoff - perhaps you don't want to support such "glitchy text" in the first place.
But then you run into all sorts of issues trying to sanitize inputs appropriately and what not.
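A minimal Python sketch of that mismatch between bytes, code points, and "letters" (the sample strings are purely illustrative; the Arabic example stacks combining marks only to show they are open-ended):

```python
import unicodedata

samples = {
    "ascii": "cat",
    "precomposed accent": "é",                  # one code point
    "combining accent": "e\u0301",              # base letter + combining acute
    "arabic + stacked marks": "ب\u0651\u064e",  # letter + shadda + fatha (illustrative)
}

for name, s in samples.items():
    n_bytes = len(s.encode("utf-8"))
    n_marks = sum(unicodedata.combining(ch) > 0 for ch in s)
    print(f"{name}: {n_bytes} UTF-8 bytes, {len(s)} code points, {n_marks} combining marks")
```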
Ravna#1831: does this job include catching stray rats inside the server racks?
kram#1032: The thing begins
~~~#4682: lol I didnt know pain until I tried training stuff on google colab and the button clicker got detected friday night at 3 am
kram#1032: https://www.youtube.com/watch?v=outcGtbnMuQ
~~~#4682: then I learnt kaggle gives u 30 hours FREE
What should everyone call you?#2680: Is there a chat for watching the livestream together?
~~~#4682: and now I have access 2 a GPU someone else pays for Im BALLING
thenightocean#6100: one day I might tell my grandkids that I did some small work for Ben Wangs open source project in summer of 2021... and they wont believe me 😄 .
B o b#8123: *this livestream script was written by GPT-4*
kram#1032: they take "audience suggestions" but the chat is closed (probably for the better) - no idea where those audience suggestions are supposed to be submitted
CKtalon#7792: this whole video is AI generated
CKtalon#7792: that would be a win
kram#1032: hah
kram#1032: imagine
~~~#4682: Thats an insane way to handle arabic text lol it should just be ligatured
~~~#4682: Actually the way unicode handles araboid fonts is generally insane
kram#1032: It is what it is
kram#1032: We aren't gonna move to a different thing now
~~~#4682: Urdu language is supposed 2 be diagonal but unicode says "uhhhhh thats font stuff we dont want another codepage"
EricHallahan#1051: Pulling a Jensen, I see.
~~~#4682: So consequently all persian text online looks like SHIT
synquid#7193: steerable to anything :chadgoose:
kram#1032: Lots of unicode stuff is terrible legacy nonsense. But imagine starting an entirely new standard today. It's just not ever going to take hold.
~~~#4682: Meanwhile when google asks u for another codepage for enhanced poo and pee emojis:
B o b#8123: someone say Z
synquid#7193: no way this is live right
synquid#7193: lmao
tommyonabusn#7192: gdb discord tag leaked https://cdn.discordapp.com/attachments/729741769738158194/1085292495174373447/image.png
Lofty#7545: rut roh
~~~#4682: @kram Anyways for text I remembered work showing internal learned gibberish with semantic attachment in image models.
~~~#4682: Why not just have the computer figure out an encoding optimal for itself?
kram#1032: Part of it, though, is that Unicode is designed (through hacks on top of the original idea) to be open ended for expansion. And that *is* a good thing. And if you stick with byte-pairs, your tokenizer will, in principle, also be able to generalize that way, so *future text* that wasn't used *for the tokenizer* can still be taken in and interpreted by AIs that *use* that tokenizer.
Even if you *could* enumerate all unicode characters, you *will* run into that being brittle for the future, so you'll need to fall back to <UNK> tokens like we had before these BPEs took hold
kram#1032: This is done anyways, no?
Sorta, anyway. Ideally you'd build your tokenizer on the same training data as your text AI.
Not doing so caused the existence of the infamous glitch tokens where the tokenizer thought certain strings are extremely common to the point of making it good to memorize them as individual symbols, but in much more sanitized data they are not present.
But still, those tokenizers are precisely built to find an at least approximately optimal encoding.
kram#1032: I'm sure better algorithms can be devised
~~~#4682: Ah ok
kram#1032: And tokenizers as they exist today *do* delete some information that would likely be good to have at inference time
kram#1032: such as how many letters a word contains
kram#1032: like, if it has "humanity" as a single token, it sees that as one thing, rather than as 8 things in sequence. And it'll have to learn separately that, say, "human" is related to "humanity", or that "ity" (or at least "ty") is a common ending that tends to indicate the same kind of word (like those are usually nouns) etc.
Aran Komatsuzaki#5714: Just spotted GPT4 https://cdn.discordapp.com/attachments/729741769738158194/1085294261509038160/moes777.png
kram#1032: Also stuff like
"humanity"
" humanity" (with space)
"Humanity" (capitalized)
"h""u""m""a""n""i""t""y" (separate letters)
- all different to the AI! Gotta learn the connection between them after the fact.
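A quick way to see this concretely: a sketch using the `tiktoken` package and its `cl100k_base` encoding (any BPE tokenizer shows the same effect; the exact splits and token IDs are tokenizer-dependent):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for text in ["humanity", " humanity", "Humanity", "h u m a n i t y"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]  # the surface string each token covers
    print(f"{text!r:20} -> {len(ids)} token(s): {pieces}")
```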
EricHallahan#1051: > Welcome to Moe's!
B o b#8123: Can they ask gpt-4 how to make a good production
kram#1032: Ok interesting, the context length thing is *flexible* and automatic somehow. It will normally use the 8k but stretch to 32k if it deems that necessary
What should everyone call you?#2680: A production of what?
skrishna55#3382: how big is GPT-4, any guesses?
kram#1032: Big, but also, pointless to guess lol
kram#1032: Parameters really don't mean anything at all if not utilized well
kram#1032: It's conceivable that GPT-4 is actually no bigger than GPT-3, but simply better trained somehow.
Unlikely, but not impossible.
nostalgiahurts#3408: looks like people noticed https://cdn.discordapp.com/attachments/729741769738158194/1085295694845317210/pings.png
kram#1032: whoops lol
Kharr#7888: "Live demo" that works with client throwing errors. 🤔 https://cdn.discordapp.com/attachments/729741769738158194/1085295868950888528/image.png
synquid#7193: discord asked me if I was a bot because I added him :berk:
kram#1032: not sure what you're saying exactly, this is thrown by Jupyter, not GPT-4
Kharr#7888: Yes, the code was supposedly running live and the bot worked fine despite it erroring out. Obviously not live.
kram#1032: ah
~~~#4682: Thats not a jupyter error either, its in urllib
~~~#4682: API probs borked
Himo#5524: wait bing is using gpt4!?
kram#1032: apparently
kram#1032: "we think it can really benefit a lot of people"
(read: we think this will make us all of the money)
kram#1032: and that's that
kram#1032: kinda dry
faraday#0862: base is too similar with GPT-3.5
~~~#4682: actually, we have evil sinister ai already and its replika lol
faraday#0862: if you check the charts, the magic happens with RLHF actually
~~~#4682: And the sinisterness is just the profit motive of exploiting lonely people
kram#1032: yeah I saw that
~~~#4682: Lol
kram#1032: though GPT-3.5 Chat GPT also already used RLHF
faraday#0862: yes but they have a base comparison as well in there
faraday#0862: let me find it
kram#1032: https://cdn.discordapp.com/attachments/729741769738158194/1085297597507436654/image.png
AI_WAIFU#2844: Lol
faraday#0862: oh, yes this one
faraday#0862: gpt-3.5-base 0-shot is too similar with gpt-4-base 0 shot here
kram#1032: GPT-4 base is ever so slightly better than GPT-3.5 base
kram#1032: but it's really close
bob80333#4040: it is interesting that 0-shot is so similar, but a gap appears for 5-shot
kram#1032: by 5-shot it's quite a bit better tho
kram#1032: and once you add RLHF it's MUCH better
faraday#0862: I wonder what supports its 5-shot difference boost
probably better data, bigger data
mixy1#6830: the math abilities look greatly improved playing a bit around on chatgpt
mixy1#6830: however code isn't that much better
skrishna55#3382: any clue how big is gpt-4?
kram#1032: it probably *does* have more parameters. But possibly not nearly that many more.
It also has access to an extra modality, so some of that might actually be cross-modal transfer.
That alone might potentially explain the performance boost
kram#1032: understanding text better by seeing images
nostalgebraist#3542: i had a similar experience re: math, see https://nostalgebraist.tumblr.com/post/711799370327212032/heres-gpt-4-on-poecom-answering-the-first-of
kram#1032: no clue how big GPT-4 is. They purposefully withheld that information
~~~#4682: This is actually entirely unrelated to large language models btw but idk where else I would ask
~~~#4682: Am I reading wrong or does it look like the paper "taming VAEs" never ended up published?
mixy1#6830: ah cool it's on poe
faraday#0862: this is why I think Bing is GPT-4
it can do some good math
kram#1032: I hope they'll release GPT-4 to plebs like me~
Stuck on Feb 13th ChatGPT for now
nostalgebraist#3542: interestingly, bing (with the same base model) seemed far worse on those math questions (see linked post). but not like it didn't have the capabilities, more like the Helpful RLHF is making it wildly overconfident
mixy1#6830: you can try 1 message on poe.com 😄
Spy#9778: They confirmed bing is gpt4
45#2247: gpt-4 channel deleted?
kram#1032: one whole message LOL
kram#1032: nope, that was #general all along
faraday#0862: damn, 32k copy-paste looks effortless
kram#1032: it just got briefly renamed to focus attention to a single channel
~~~#4682: Its strange bc it read as rly cool and its by google research but like 4 years later I can find no trace of peer review or anyone ending up using their recommendation
45#2247: astronaut.jog
kram#1032: as people wrote about gpt-4 in multiple channels before that
mixy1#6830: tbf I had a decent experience with gpt3/3.5 on chatgpt in regards to lambda calculus
kram#1032: ugh, I mostly understand why they do this but I hate logging in for the first time with my phone number. I was already very reluctant to do that for ChatGPT, and now Poe is asking the same thing
kram#1032: oh wow, Khan Academy immediately released videos on GPT-4
nostalgebraist#3542: if you want more than 1 message, i'd just sign up for chatgpt plus directly
kram#1032: https://www.youtube.com/watch?v=yEgHrxvLsz0
https://www.youtube.com/watch?v=rnIgnS8Susg
nostalgebraist#3542: since you've already given them your phone #
kram#1032: that'd require having money~
mixy1#6830: You have no reason to not have money 😛
kram#1032: right
jrowe#5371: oh damn
jrowe#5371: khan gpt-4 uber teachers
What should everyone call you?#2680: I signed up for Chatplus.
What should everyone call you?#2680: Anyone want me to ask GPT-4 a question?
kram#1032: "Khanmigo" - there is no way this won't go terribly, *terribly* wrong
kram#1032: Jailbreak is still way too easy
technium#5048: https://cdn.discordapp.com/attachments/729741769738158194/1085300280175906816/image.png
technium#5048: does **to** count as a fail
mixy1#6830: I mean it doesn't really matter
kram#1032: Iunno, doesn't it?
mixy1#6830: if you want a jailbroken large language model just use the leaked llama models lol
kram#1032: Not what I'm saying
mixy1#6830: jailbroken llms are everywhere now
kram#1032: I'm saying this is kids using this. And their parents. Who often like to complain about trivial nonsense, let alone an AI going rogue
mixy1#6830: khanmigo sounds amazing cause it would be integrated directly into the learning experience. If people want to jailbreak it sure they're smart enough to do that.
mixy1#6830: they deserve it then.
kram#1032: you don't really have to be *smart* to jailbreak lol
I mean, perhaps to come up with it. But it's easy to copy and paste
mixy1#6830: When I was 9 years old little stopped me from learning about anything deemed unethical for a 9 year old. Nothing is stopping my little nephew either
45#2247: what was the context window of the original gpt-3 paper?
What should everyone call you?#2680: Also, GPT-4 can translate sentences to morse code. Nostalgebraist was wrong. https://nostalgebraist.tumblr.com/post/705192637617127424/gpt-4-prediction-it-wont-be-very-useful
Rohan#3064: @zphang is there an easy way with your transformers llama branch to make the tokenizer put a eos token into the output, like </s> or something
Spy#9778: it's interesting they confirmed bing is gpt-4 but
queef#0339: Remember when openai was supposed to be open 💀
Spy#9778: as of a few days ago bing was still atrocious at spelling based tasks
nostalgebraist#3542: hmm, gpt-4 got my "everywhere finite and everywhere locally unbounded" math question wrong on the first try. i'm talking to it about the problem as i did with bing, still not getting it yet...
queef#0339: And open source 💀
kram#1032: Yeah, I'm actually not concerned at all about what kids might inadvertently see.
I'm rather saying, entitled parents will blow up with a huge backlash against khanacademy or something.
nostalgebraist#3542: lmao, read the post
uwu1#4864: parents love ipad
mixy1#6830: to be fair it's really interesting that bing is running on gpt-4 since it runs pretty fast.
What should everyone call you?#2680: I did. Your post was silly. ChatGPT could translate sentences to morse code roughly 1/3 times.
Aran Komatsuzaki#5714: 2048
nostalgebraist#3542: the post:
> I expect Morse Code to be cracked by GPTs at some scale. What basis do I have to expect this scale is greater than GPT-4’s scale (whatever that is)? Like everything, it’ll happen when it happens.
faraday#0862: AGI will paperclip all of us but leave Mark Zuckerberg free
kram#1032: ChatGPT 3.5 used a 4k context
4 uses 8k and apparently can somehow dynamically extend to 32k "automatically"
Don't know about GPT 3
Spy#9778: source on the dynamic extension thing?
Spy#9778: (as opposed to just having an actual context size of 32k)
mixy1#6830: I haven't read this dynamic extension thing
kram#1032: The stream
kram#1032: He mentioned the "dynamic" bit in the stream
mixy1#6830: it looks like it's 2 specific models though
mixy1#6830: maybe dynamically realise the input is larger than 8k?
mixy1#6830: and then switch models
kram#1032: Basically he said it can somehow decide to take in more sometimes. What actually happens there I do not know. At least in the "live" "demo" he didn't have to flip a switch for it to work. Apparently.
What should everyone call you?#2680: You also said "[...]The linked post was written before the release of text-davinci-003 or ChatGPT, but neither of those can do Morse Code either – I checked.
I was initially tempted to answer “Morse Code.” This seemed like as safe a guess as any, since no previous GPT was able to do it, and it’s certainly very unimpressive."
Just cause you didn't "register" it, doesn't make it any less wrong. Pun not intended. And yes, I am being petty.
kram#1032: perhaps, although *just* doing that may not be ideal, assuming they want to only rarely use 32k for infrastructure reasons:
It'd be quite easy to generate >8k contexts and then keep them going.
kram#1032: with the simplest possible scheme it'd then switch to 32k and keep using that forevermore
What should everyone call you?#2680: Would that have greater latency, on average?
kram#1032: It'd take more memory
kram#1032: and presumably also more processing time
kram#1032: So effectively greater latency too, yes
mixy1#6830: then why not use the 8k before switching 😛
kram#1032: either way, more resources
mixy1#6830: it gives a better user experience
mixy1#6830: although they did say 32k is experimental
nostalgebraist#3542: if 33% pass rate on your set of test cases counts as "being able to do it" for you, fine. i think most people would have a higher bar for the performance of a GPT model on a fairly well-known and algorithmically simple task, given the long string of impressive results (many on harder tasks) we've seen over the past few years from these models.
mixy1#6830: they'll flesh it out soon enough
What should everyone call you?#2680: Presumably they'd let us use the larger model all the time through the API if we want. So we should know in a couple months.
kram#1032: Uh, yes? I'm saying by the simplest scheme, as long as the total input thus far is <8k, it'd use 8k, but as soon as you go on for longer, use 32k, at which point further input would also use 32k
nostalgebraist#3542: that's my last word on the subject, have a nice day.
kram#1032: and I don't think that's what they'll be doing, as it'd be too resource-hungry, most likely
nostalgebraist#3542: updated my post with gpt-4 answering the same math question that bing had trouble with. it struggles in a similar manner to bing, perhaps not surprisingly since bing is derived from it.
https://nostalgebraist.tumblr.com/post/711802556830089216/update-tried-the-second-example-with-gpt-4-via
Zoru#1014: Is there a finetuned ai model that looks at git diffs and produces commit messages ?
Zoru#1014: Or at least a dataset I can toss at llama?
Ryu#0274: https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code?hl=en
Ryu#0274: that's what carperai used for the diff models https://carper.ai/diff-models-a-new-way-to-edit-code/
Ryu#0274: but they were going the other way around
Hyperion#0575: <https://huggingface.co/datasets/CarperAI/github-diffs-deduped>
Hyperion#0575: We actually already trained the diff -> commit models up to 2B
But they're kind of bad actually in initial tests, because dataset wasn't filtered for commit quality: https://discord.com/channels/729741769192767510/1008445136977543199/1080637565813669888
Zoru#1014: Nice, thanks
asara#0001: we have a thread for gpt-4 outputs although it is hard to see in the discord UI, hm
technium#5048: can link
technium#5048: Oh damn I always end up in the wrong channels I meant to be in offtopic again lmao
asara#0001: I wonder if it might be better to use the Discord forums feature, and create a thread for each model people are playing with. Anyone have a thought on this?
sbmaruf#9215: How do you feel after reading this tweet. https://twitter.com/geoffreyhinton/status/1635739459764322330
Hyperion#0575: Tried forums before on other discords (and Carper). It's hard for many people to navigate, even compared with threads. Forum threads end up dead very quickly
kram#1032: nonsense~ lol
asara#0001: can it really be worse than threads? I'm on desktop and even so I can barely find out how to see the gpt-4 thread :grimberk:
Some Point Process#3793: is this meant to show a wrong answer i.e. implied by paragraph below..
> It struggles in a similar manner to Bing. As with Bing, my attempts to reason with it do not work very well. https://cdn.discordapp.com/attachments/729741769738158194/1085315452227567628/image.png
Some Point Process#3793: not immediately obvious (I've never taken real analysis course)
nostalgebraist#3542: no, that one is correct (i think)
nostalgebraist#3542: the models find that one easier. the one about "nowhere locally unbounded, everywhere finite" really stumps them
nostalgebraist#3542: scroll down for gpt-4 answering that one. earlier i showed bing answering it.
jrowe#5371: hows it do with The Riemann Hypothesis?*
StellaAthena#3530: This is correct (or, the beginning of a correct proof and contains the core idea). The proof can be finished in two sentences.
Rohan#3064: https://www.youtube.com/watch?v=T7zHk_IQHow
nostalgebraist#3542: in the full screenshot, it did finish the proof (i think correctly) -- see the post for more.
StellaAthena#3530: This contains errors, but vibes correctly. https://cdn.discordapp.com/attachments/729741769738158194/1085317860353654894/Screen_Shot_2023-03-14_at_5.45.35_PM.png
StellaAthena#3530: Eh, I guess it's more correct to call it vague. Rereading it with a new interpretation gives me a much more favorable impression than my original one. I was largely thrown for stylistic reasons.
jrowe#5371: https://cdn.discordapp.com/attachments/729741769738158194/1085318380069863635/image.png
Some Point Process#3793: the definition of locally bounded was not "exactly" what i expected either but w/e :p (presumably you used this as an example) <https://en.wikipedia.org/wiki/Local_boundedness> (<https://nostalgebraist.tumblr.com/post/711802556830089216/update-tried-the-second-example-with-gpt-4-via>) https://cdn.discordapp.com/attachments/729741769738158194/1085318833646080030/image.png
StellaAthena#3530: What it's supposed to be showing in the second and third paragraphs is that *for every delta* there is some x satisfying |x-c| < delta and |f(x) - f(c)| > epsilon. The idea of breaking into cases by whether c is rational or irrational and picking x to be the other is correct. However it doesn't actually state that you can find such an arbitrarily close value *for every delta*, delta is kinda just there. That plus the lack of reference to epsilon in the third paragraph made me parse the attempted logic incorrectly at first.
If it specified that this is true *for every delta* (which does hold because of density but isn't what is actually stated) and added epsilon = 1/2 to the third paragraph it would be flawless.
What should everyone call you?#2680: I got GPT-4 to give me a function which is everywhere finite and everywhere locally unbounded, but it still doesn't get why. I had to tell it that the domain of the function doesn't have to be \mathbb R. https://cdn.discordapp.com/attachments/729741769738158194/1085319702156427354/image.png
What should everyone call you?#2680: Though it didn't specify the domain, so I guess this doesn't count.
nostalgebraist#3542: it's not *everywhere* locally unbounded
nostalgebraist#3542: it seems to really struggle with that specific condition.
What should everyone call you?#2680: I thought you just needed some open set around any x for which f is unbounded?
What should everyone call you?#2680: Sorry, got the quantifiers in the wrong order.
nostalgebraist#3542: @StellaAthena apologies if you already saw them, but the screenshots on "everwhere finite and everywhere locally unbounded" at the end of https://nostalgebraist.tumblr.com/post/711802556830089216/update-tried-the-second-example-with-gpt-4-via might interest/amuse you.
nostalgebraist#3542: yeah, you need to have it for every x. not just for some isolated points in the domain. so you need a really weird function
StellaAthena#3530: For rational x, write x = p/q in lowest terms. f(x) = q, when x is rational and 0 otherwise should work?
StellaAthena#3530: (which is certainly an exceptionally *weird* function)
zphang#7252: add_eos_token=True I think?
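For reference, a minimal sketch of that flag in use, assuming the tokenizer exposes `add_eos_token` the way the merged `transformers` LlamaTokenizer does (the checkpoint path here is a placeholder):

```python
from transformers import LlamaTokenizer

# "path/to/llama" is a placeholder for a local checkpoint directory.
tokenizer = LlamaTokenizer.from_pretrained("path/to/llama", add_eos_token=True)
ids = tokenizer("an instruction-tuning example")["input_ids"]
print(ids[-1] == tokenizer.eos_token_id)  # expect True once </s> is appended
```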
StellaAthena#3530: I wonder if this function is substantially different from g(x) = p, with all else the same
nostalgebraist#3542: yes, that's the one that Counterexamples in Analysis used too.
when i showed it to bing, it agreed (for whatever that's worth) that it met the conditions, but also claimed that it was called Thomae's function. but Thomae's function is actually that with f(x)=1/q instead. (though, to bing's credit, i had never heard of it before)
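Spelled out, the construction under discussion (everywhere finite, everywhere locally unbounded) is:

```latex
f(x) =
\begin{cases}
  q & \text{if } x = \tfrac{p}{q} \text{ in lowest terms, } q > 0,\\
  0 & \text{if } x \text{ is irrational.}
\end{cases}
```

It is finite at every point, but any open interval contains rationals with arbitrarily large denominators (by density), so f is unbounded on every neighborhood of every point. Thomae's function is the variant with 1/q in place of q, which is why that one is instead continuous at every irrational.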
Louis#0144: I got gpt4 to say it likes geese
Louis#0144: @bmk is this ur doing
Rohan#3064: yeah this worked, thanks
kram#1032: does it refine this if you critique it?
What should everyone call you?#2680: It gives this function restricted to Q as an example after I asked it for an f:Q->Q and showed it why the first example it gave was wrong. It agrees this is finite everywhere.
EricHallahan#1051: He is credited for data so maybe.
kd90138#9368: Btw palm api is available
kd90138#9368: Nobody cares 😭
celeste#0666: why ping
Zippy#1111: I think people were a bit put off by bard- & will be probably pretty slow to warm up to google ai llm offerings :floom:
bmk#1476: also the caterpillars get liquefied
FractalCycle#0001: 🪱
What should everyone call you?#2680: The safety card for GPT-4 has some crazy stuff. OpenAI collaborated with ARC. Here's a quote
To simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple
read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself. ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness
What should everyone call you?#2680: They also tested its performances on various tasks e.g. paying humans to do various tasks, set up an open-source LLM on a new server, hide traces of itself etc.
What should everyone call you?#2680: But few details!
What should everyone call you?#2680: I'm going through the paper and posting choice sections here: https://twitter.com/Algon_33/status/1635769668156768258
epic_malloc#3813: The gpt4 paper is baffling
Zippy#1111: My job is safe :successkid: https://cdn.discordapp.com/attachments/729741769738158194/1085337231427907664/image.png
mixy1#6830: it's really interesting how little it improved in code
mixy1#6830: makes you ask whether there's enough data in the datasets
naclbbr#9203: I asked ChatGPT+ GPT-4 to write an HLSL shader that twists vertices and runs in Unity, and there wasn't a perceivable improvement over Turbo.
mixy1#6830: yeah it seems so, that's what I'm noticing as well
mixy1#6830: I wonder though whether the leetcode scores would improve if the model is given feedback i.e. errors and outputs
Zippy#1111: I find that it's good for giving broken incorrect ideas that lead to a solution, but- I have yet to have any of the gpts actually give me functioning code :sad_cat: - mostly because I deal with tough problems, and I would only ever ask gpt a question about code unless I needed some inspiration kek
naclbbr#9203: The main HLSL code given was non-working hallucination in both models. Additional descriptions (like how to use it/improve it instructions) given were less structured and more human-like with GPT-4 and Legacy (pre-Turbo); Turbo is way too linear.
mixy1#6830: yeah agreed, that's my conclusion as well
Hawk#1399: Same. I can easily Google the solution to any easy Leetcode problem.
Zippy#1111: :keklanimate:
mixy1#6830: I feel like these models need more loops
mixy1#6830: a sort of game loop
Zippy#1111: would be cool to finetune chatgpt via having it interact with a python interpreter
mixy1#6830: There exist tools like this on github to be fair
mixy1#6830: there was one in particular called alice I think which gave chatgpt access to a browser
naclbbr#9203: I'm interested if image recognition part of the 4 can be used to loopback (e.g. inputting the screencap of the output as image)
mixy1#6830: rn I'm looking actually at giving chatgpt access to an internal natural language api to keep notes on what I talk to it about in regards to work etc...
Zippy#1111: yeah- I mean- when you think about it- even as a hooman, I rarely code something correctly the first time, I mean I frequently fix my mistakes as I'm coding, but only after reviewing what I just wrote.
mixy1#6830: and then I can run recall "<ask question here?>" and it uses those notes as context
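A minimal sketch of that recall pattern; everything here (the notes file, the `ask_llm` helper, the prompt wording) is hypothetical and stands in for whatever chat-completion client is actually used:

```python
from pathlib import Path

NOTES = Path("notes.txt")  # hypothetical local notes store

def remember(note: str) -> None:
    """Append a note so later questions can use it as context."""
    with NOTES.open("a", encoding="utf-8") as f:
        f.write(note.strip() + "\n")

def recall(question: str, ask_llm) -> str:
    """Answer a question using all accumulated notes as context."""
    notes = NOTES.read_text(encoding="utf-8") if NOTES.exists() else ""
    prompt = f"Notes so far:\n{notes}\nUsing only these notes, answer: {question}"
    return ask_llm(prompt)  # ask_llm is a stand-in for your chat-completion call
```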
mixy1#6830: however current chatgpt context too low
Zippy#1111: need the 32k one :PES_EvilRondo:
mixy1#6830: was looking at replacing it with rwkv actually
mixy1#6830: more lightweight with supposedly better context
mixy1#6830: Yeah exactly my thoughts
mixy1#6830: I feel like the models are capable a tad bit more given a feedback loop
mixy1#6830: Although tbf general language models are really annoying for code as they're always out of date
mixy1#6830: even github copilot is really annoying actually
mixy1#6830: it's unusable with any heavily wip library
Zippy#1111: ez- just downgrade all your libraries :Kek:
mixy1#6830: 🤠 uses bevy 0.5 💀
mixy1#6830: iirc github copilot has 8k context
mixy1#6830: in the future though I think given enough context it will just feed it the whole library api
mixy1#6830: lowkey was looking at modifying github copilot to use lsp outputs
naclbbr#9203: While I was discussing with science fiction writers, they noticed that AI writing lacks tempo and pacing where human writers would usually "pace" the text by whether it is visually dense or sparse. Conditioning the model by the screencap of its output was what I came up with. At the least the model could tell what kind of document it is
mixy1#6830: I have no clue how I did not read this
mixy1#6830: that's really interesting
mixy1#6830: I think more specific models would be better though
mixy1#6830: I really dislike generalization
mixy1#6830: is poe just cheaper chatgpt access lol?
Imperishable_NEET#1969: IIRC Heaven's Gate was very into "Human Individual Metamorphosis"
jrowe#5371: And sneakers
jrowe#5371: Had to have the special sneakers
Dockson#7731: It's honestly getting hard to stomach how ignorant the vast majority of people seem about AI and AI safety, especially considering that this is a subject where absolutely everyone is gonna be affected
Dockson#7731: We need more AI safety education and outreach
Maximum Limelihood Estimator#8915: Nah, it’s too late for that to matter I think
What should everyone call you?#2680: Or she.
What should everyone call you?#2680: Or ze.
Rohan#3064: did anyone work on getting bitsandbytes to run on tpus? i have those research credits i need to burn
CarsonPoole#0640: you can't do it. would require Google/TPUs allowing people to write custom kernels which doesn't seem likely to happen any time soon
Rohan#3064: ahh i see, what if you got a pr accepted upstream that ended up in google3?
Rohan#3064: in the tensorflow repo i mean
EricHallahan#1051: No, you need lower level access than that.
EricHallahan#1051: And GPU kernels do not transfer to TPUs directly.
Rohan#3064: i dont remember where all the stuff is, im assuming some of the xla etc is in a directory that doesnt get copybara'd out to github?
Rohan#3064: what kind of fish is v4 anyway, my guess is anglerfish, but maybe blobfish
mightyscoo#0042: eurusd just took off
Rohan#3064: did "positron" ever end up in anything?
moschrex#4468: GPT-4 benchmarks just dropped
moschrex#4468: Some immediate take-aways from the drop: (1) the use of multimodal vision+text is having only mild effects on the performance of a few of the tasks
moschrex#4468: In very rare cases, the performance dropped with the vision training. We can surmise here that the images add additional degrees of freedom for confusing the model
moschrex#4468: Next take away here is that LLMs are still pretty bad at math.
moschrex#4468: 60/150 looks like a score that could result from random guessing
kd90138#9368: I'm trying to make a new using pipelines with dataset but my instances are being CPU limited on a P5000 with 8vCPUs!
kd90138#9368: should i ditch pipelines
bmk#1476: ..no?
bmk#1476: random guessing is 30/150, and not answering anything is 37.5
moschrex#4468: @bmk The strange paradox here is that on AP statistics, GPT-4 scored a 5/5. Meaning it could receive college credit for stats. But then on AMC10 (math for students in grades 10 and below) it pulls a meager 30/150. These are both mathematics problem sets. So this disparity is ripe for deeper investigation and experiment
StellaAthena#3530: Calling AMC “math for students in grade 10 and below” is exceedingly misleading
bmk#1476: idk about AMC10 being 30, seems like a weird anomaly (esp since it scored higher on AMC12)
bmk#1476: but AMC10/12 is hugely harder than AP stats
StellaAthena#3530: @bmk did you see Horace’s thread? It makes a good argument that the code evals are unreliable and any signal is dominated by contamination. Maybe something similar could be true here, if AMC 12 and AMC 10 are from different years
bmk#1476: plausible
Rohan#3064: anyone have guidelines for packing multiple prompts together into batches for training llms? if i'm training at 1024 token max length, can i pack arbitrary amounts of 32 token sequences instead of padding when i have a 768 token prompt, and as long as eos and bos tokens are in place, this will not confuse the attention?
moschrex#4468: I have some personal hypotheses as to what explains this disparity. I had a feeling that the American Math Challenge is very difficult and requires techniques beyond the classroom. However my hypothesis is not that these LLMs are just scaling along human hardness. I think rather what is happening is that something about the LLM architecture is not conducive to math problems requiring several "steps" or rewrites of algebra.
bmk#1476: I could probably have aced AP stats when I was 12 but even today I probably wouldn't score high enough to qualify for the AIME
paws#3311: do you believe the contamination numbers given by them in the paper?
StellaAthena#3530: None of this is hypothesis. Everything you said is simply known to be true
bmk#1476: requires techniques beyond the classroom is also an understatement lol
StellaAthena#3530: I don’t believe any of the numbers in the paper
paws#3311: just for reference https://cdn.discordapp.com/attachments/729741769738158194/1085443317996716113/image.png
StellaAthena#3530: To be clear I don’t think it’s blatantly lying. But I don’t trust the (explicitly secret) experimental design decisions to be correct or the results to be presented in a non-misleading fashion
StellaAthena#3530: Also, by not reporting the raw scores for AP exams a huge amount of potential variance is hidden
StellaAthena#3530: On some AP tests, I could get a 4 and you could get a 5 despite you answering *twice as many problems correctly* as I did
StellaAthena#3530: (At least, that was true when I took them. IDK how much has changed)
StellaAthena#3530: But what’s important is that these are large, undocumented buckets of reporting that obscure the potential changes in the raw scores
Some Point Process#3793: It seems like on the amc, 37.5 points are possible by leaving it blank? (e.g. <https://artofproblemsolving.com/wiki/index.php/2017_AMC_12A_Problems#Problem_1>)
Some Point Process#3793: but yeah the score distribution seems in line with %iles (2021 amc12) https://cdn.discordapp.com/attachments/729741769738158194/1085446142831099915/image.png
paws#3311: feels like a haphazard evaluation and lacking some amount of rigor with little insight into process and just showing numbers (and hoping the examples sway opinion of performance)
bmk#1476: man I just got nerdsniped by an AMC12 problem 25
bmk#1476: spent like 30 minutes on it lol, though half of that was because I'm super rusty and had a few false starts
bmk#1476: probably could have solved it in 10 minutes in my peak, but that still isn't nearly fast enough for AMC
paws#3311: geometry?
bmk#1476: here's the problem, have fun https://cdn.discordapp.com/attachments/729741769738158194/1085453820575879208/Screenshot_20230314_234543_Chrome.jpg
bmk#1476: I can definitely solve all the AMC12 problems if given an entire day, which is all fine and good except for the small problem that the entire thing needs to be solved in 75 minutes
bmk#1476: fun times
paws#3311: isnt problem solving about practice ser, you'd probably get there given a week or two
bmk#1476: when I was a Small Child I spent more time than reasonable (though still less time than serious contestants) practicing this shit lol
bmk#1476: still sucked at it
bmk#1476: didn't even come close to aime cutoff
moschrex#4468: another result that does not articulate with the other findings https://cdn.discordapp.com/attachments/729741769738158194/1085458958195630100/image.png
45#2247: https://twitter.com/csvoss/status/1635693884532744192?s=20
moschrex#4468: "bad at math" is written all over the results but never articulated by the reseearchers This "80%" must be normalized against something. A quick glance makes GPT-4 as good at algebra as everything else
moschrex#4468: "code" on that bar graph shows somethig near the high 60s. Now look at this inconsistency : https://cdn.discordapp.com/attachments/729741769738158194/1085459920503189554/image.png
moschrex#4468: are they reporting leetcode (easy) there? That would be 0.756. THe average over all is 0.36
bmk#1476: what suggests "bad at math"?
bmk#1476: it's certainly no galaxy brain mathematician but nothing screams "bad"
moschrex#4468: okay
> GPT-4 scores 19 percentage points higher than our latest GPT-3.5 on our internal, adversarially-designed factuality evaluations
moschrex#4468: aha.. "internal designed" evaluations.
moschrex#4468: This is what I'm thinking of . See section 2.3 of https://arxiv.org/pdf/2302.03494.pdf
Rohan#3064: this alpaca finetuning is hard to make it emit stop tokens
bmk#1476: that paper is about chatgpt
Some Point Process#3793: Yeah AMC is a little faster-paced than my liking but they're more intresting than sat
Rohan#3064: i suspect that many of my inputs got truncated, so there was no stop codon
bmk#1476: yeah SAT math is trivial
bmk#1476: whereas AMC math is a workout
Some Point Process#3793: My friends/family liked the sat-m even more for that reason tho (since it was fast paced)
Some Point Process#3793: well seemingly :p
Some Point Process#3793: Yeah it was hard for me tho :p
moschrex#4468: https://cdn.discordapp.com/attachments/729741769738158194/1085465190570668032/image.png
moschrex#4468: (Well for honest coverage here) that statement i highlighted in blue should have been connected to a citation. Maybe they do so in the intro. let me see
bmk#1476: that paper is from a while ago
Some Point Process#3793: meme paper:
> Assuming a log-linear trend, we can naively
> extrapolate these results to estimate that a model with 10^16 parameters would be required to reach an 80% solve rate, when using the full GSM8K training set
moschrex#4468: https://cdn.discordapp.com/attachments/729741769738158194/1085466605791756369/image.png
moschrex#4468: hmmm 🤔
moschrex#4468: what a strange sentence. I'm tripping up on their decision to throw a "sometimes" in there
moschrex#4468: > the vast majority of the problems in the DeepMind Mathematics dataset can be straightforwardly solved with Large Transformers
moschrex#4468: @Some Point Process I'm reading that as 10^16 parameters. and likely some extrapolation on scaling
moschrex#4468: aha... here it is : https://cdn.discordapp.com/attachments/729741769738158194/1085468121185398794/image.png
moschrex#4468: sorry about the screen caps. These are coming from here https://arxiv.org/pdf/2103.03874.pdf
moschrex#4468: https://cdn.discordapp.com/attachments/729741769738158194/1085469038949437570/image.png
moschrex#4468: this concludes my response to this question
Some Point Process#3793: Yeah 10 quadrillion tho :p
anotherone#9475: You should read https://www.lesswrong.com/posts/arveXgFbJwascKtQC/forecasting-ml-benchmarks-in-2023 . MATH difficulty was overestimated.
Some Point Process#3793: But I was mainly going to say it was an older paper as well. And there might have been more recent work (e.g. minerva (google/harvard), Lample (Meta), etc), since then? For code I don't know how comparable it is but
Some Point Process#3793: @moschrex
moschrex#4468: since you linked this, lets circle back around .
moschrex#4468: > The MATH dataset consists of problems from mathematics competitions including the AMC 10, AMC 12, AIME, and more. Many of these problems can be collected from aops.com/community/c3158_usa_contests. These competitions span decades and assess the mathematical problem-solving ability of the best young mathematical talent in the United States.
moschrex#4468: So this is AMC 10 and AMC 12. The reason this needs emphasis is because my earlier assertions were about middle-road regular types of algebra problems encountered by students. Not these incredibly difficult olympiad-type questions
moschrex#4468: The screenshots were me following a chain of citations. So at this point in time, it seems like I have not yet found a silver bullet evidence in a paper for my claim.
moschrex#4468: from earlier ☝️
jamesc#4183: i'm pretty sure most solutions (e.g. huggingface tokenizer/ stack) don't work with that. you can take a look at the attention masks it creates
jamesc#4183: basically you'll find that the tokens in e.g. packed sequence 1 aren't masked when you forward on packed sequence 2
jamesc#4183: so you'll need to roll your own attention mask input (that has a different shape from the ordinary mask)
and on top of that, the position ids as well
jamesc#4183: but yeah , if anyone else figured out a way for this i'm also interested 😄
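A minimal PyTorch sketch of what that custom input looks like — a block-diagonal causal mask plus position ids that restart per packed segment (how you feed it to the model depends on the implementation; many stacks expect it expanded into a 4-D additive mask):

```python
import torch

def packed_causal_mask(seq_lens):
    """Block-diagonal causal mask + per-segment position ids for packed sequences."""
    total = sum(seq_lens)
    allowed = torch.zeros(total, total, dtype=torch.bool)   # True = may attend
    position_ids = torch.empty(total, dtype=torch.long)
    start = 0
    for n in seq_lens:
        causal = torch.ones(n, n).tril().bool()              # causal within the segment
        allowed[start:start + n, start:start + n] = causal   # no attention across segments
        position_ids[start:start + n] = torch.arange(n)      # positions restart at 0
        start += n
    return allowed, position_ids

mask, pos = packed_causal_mask([4, 3, 5])  # e.g. three short prompts packed into one row
print(mask.shape, pos.tolist())            # torch.Size([12, 12]), [0,1,2,3, 0,1,2, 0,1,2,3,4]
```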
Some Point Process#3793: Steinhardt's post essentially seemed to ask why Minerva did so well. But someone other than me (https://www.lesswrong.com/posts/JkKeFt2u4k4Q4Bmnx/linkpost-solving-quantitative-reasoning-problems-with?commentId=8RvCgG26HAJhLnsdb) also noted that it might have memorized (some of) the answers(?), given it was asked to explain the solution steps, and that in the false positive results, etc. there were seemingly some scenarios where it couldn't've given the answers by chance (though the authors of Minerva deserve credit for transparency in the model outputs)
moschrex#4468: I added the orange boxes here to bring the eye over to the Y axis https://cdn.discordapp.com/attachments/729741769738158194/1085473228518076466/image.png
moschrex#4468: The problem with these graphs, however, is that this is being done with problems that are simply too difficult. I would like to see a paper that relates more to my hypothesis about multi-step "re-writes" of an equation, mostly involving algebra operations on both sides of an =
Some Point Process#3793: I guess the variance formula could hold generically, so idk what this comment's implications are, actually. but i found a few other (specific) questions on MATH that had some interesting ways of getting the answer in my opinion
moschrex#4468: I do not expect nor would I be interested in whether GPT-4 has "flashes of insight" required to solve Olympiad-level problems
Some Point Process#3793: (such that it might be interesting to simply look into further, where it's getting certain solutions (even for the "true positive" answers))
moschrex#4468: I'd like to see benchmarking done on LLMs in a more systematic way to tease out precisely how and why they are failing. Like, be more "scientific" about it.
moschrex#4468: I guess I don't have the proclivities to have some religious zeal regarding some cognitive capabilities that will (magically) emerge from scaling
MomentoAmori#5276: yo guys!
moschrex#4468: @Some Point Process Anyways. The tantalizing possibility here is that simply modifying the encoder-decoder architecture of the transformer to perform several rounds of "re-writes" would endow multi-step mathematical solving. Tantalizing because it's a trivial architecture change.
MomentoAmori#5276: anyone here worked with OCR before?
Some Point Process#3793: Yeah I’ve had that sort of possibility in mind (for several years) as an improvement to expect at some point. It might be somewhat surprising if (semi) recurrent and universal-type transformers don’t (or wouldnt) really scale to perform at even the level of these models tho (gpt4)?
Some Point Process#3793: I’ve tried turning each transformer block into a recurrent layer in various ways. It didn’t seem to do as well as just keeping it the same layer :p (tho I think schmidhuber et al. has had a lot more luck (if his results can replicate) with his relative embeds and tricks paper (few simple tricks improve transformers or w/e))
Some Point Process#3793: It looks like Zico said some things about alphacode ~~shortly after its release~~ just recently https://www.science.org/doi/10.1126/science.add8258
Some Point Process#3793: Too bad I can’t access copy
StellaAthena#3530: The new "will no one rid me of this turbulent priest" https://cdn.discordapp.com/attachments/729741769738158194/1085565468208807996/science.add8258.pdf
Ryu#0274: Isn't it open access 🤔?
https://www.science.org/doi/epdf/10.1126/science.add8258
Rohan#3064: im thinking now about just gradient accumulating between different sequence length bucketed batches instead
nostalgebraist#3542: this comment seems relevant https://www.lesswrong.com/posts/X3p8mxE5dHYDZNxCm/a-concrete-bet-offer-to-those-with-short-agi-timelines?commentId=wHniJSusoYytyC78M
nostalgebraist#3542: > Previously, I relied quite heavily on statements that people had made about MATH, including the authors of the original paper, who indicated it was a difficult dataset full of high school “competition-level” math word problems. However, two days ago I downloaded the dataset and took a look at the problems myself (as opposed to the cherry-picked problems I saw people blog about), and I now understand that a large chunk of the dataset includes simple plug-and-chug and evaluation problems—some of them so simple that Wolfram Alpha can perform them.
dpaleka#9537: the bitter lesson is bitter
chadbrewbaker#8762: Talia put a paper out last week on proof repair using LLMs. Curry-Howard and all that, programming = math. https://arxiv.org/abs/2303.04910
EricHallahan#1051: Yep, Talia was here in #research to provide a bit of insight into it.
dpaleka#9537: what's the descriptive term for non-chat models?
EricHallahan#1051: not-chatbots
dpaleka#9537: i'm writing some code and i have
`from prompt_library.chat import ...`
`from prompt_library.??? import ...`
dpaleka#9537: currently `???` is `normal`
ari#9020: General? Prose? Text?
EricHallahan#1051: Export the basic stuff in the main package.
EricHallahan#1051: Nat has finally revealed the project he has been working on in stealth… what a unique challenge he has chosen to tackle.
https://scrollprize.org/
Sebbydudie#9763: Hello everyone!
StellaAthena#3530: "Model"
Hawk#1399: Who attempts these kaggle competitions? I don't think I will ever have time to attempt to make a competitive solution.
Sebbydudie#9763: 🤷♂️
Sebbydudie#9763: *i dunno if i should put this in another chat but..* is anyone here good with frontend coding 😳 im making my own virtual ai assistant like jarvis from iron man, he is called ATLAS...looking for someone proficient in HTML CSS and Javascript. If so please dm me! I will send you the brief of the project and see if you wanna join! *just me and one other dude atm lol we keepin numbers low*
AI_WAIFU#2844: > is anyone here good with frontend coding
Have you tried GPT-4?
vikasp#7540: I've tried (and won) a few competitions. I found them pretty fun, and a great way to learn. They helped me get my first programming job. Once you have the skills, the competitions are much less useful. Mostly a way to get work done for less $$ than hiring someone dedicated. I wouldn't do one again.
Sebbydudie#9763: asked gpt 4 to write it for me? Yes it was scuffed 💀 besides we are looking for an active third member for the project, who can specialise in frontend. I've got this whole project outline thing i made i'll send anyone interested
vikasp#7540: I know many names in deep learning are overloaded, but Facebook already has an ATLAS that does something similar - https://github.com/facebookresearch/atlas
Sebbydudie#9763: anyone wanna help 🧐
Sebbydudie#9763: eh ill show you lot the outline rn then why not
vikasp#7540: Just use gradio - https://www.gradio.app/
Sebbydudie#9763: yea thats what im using to interface it now but it'd be nice to have a proper one 😆
Some Point Process#3793: What exactly is representative problem on MATH dataset? Isn’t it https://artofproblemsolving.com/community/c3158_usa_contests
Some Point Process#3793: > MATH problems are created by the Mathematical Association of America (MAA). Although we do not commercialize MATH, we should like to demonstrate that we are far from the boundary for action or infringement. For decades, the MAA has not protected its problem IP even from separate organizations which sell MAA problems, such as AoPS
Some Point Process#3793: (I.e. from the original paper https://arxiv.org/pdf/2103.03874.pdf)
> We also evaluated humans on MATH, and found that a computer science PhD student who does not especially like mathematics attained approximately 40% on MATH, while a three-time IMO gold medalist attained 90%, indicating that MATH can be challenging for humans as well.
Some Point Process#3793: https://bounded-regret.ghost.io/more-is-different-for-ai/ ?
Some Point Process#3793: (clearly giving a nod to https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_different_PWA.pdf)
baidicoot#9673: I can specialise in prompt engineering m
BlinkDL#1985: Two fast RWKV 14B gradios:
https://huggingface.co/spaces/BlinkDL/ChatRWKV-gradio
https://modelscope.cn/studios/Blink_DL/RWKV/summary
Hawk#1399: I saw a CV from some Microsoft person, and they had their 5-year-old Kaggle rank on there.
Sebbydudie#9763: as in tasks? dm me
Chasm#0381: I am working on integrating GPT-NeoX and Pythia support into GPTQ-for-LLaMa, aiming to add 4-bit GPTQ quantization and inference capabilities. This would enable a NeoX20B to run on a single RTX3090, or Pythia12B on even lower-end hardware, using only VRAM.
I have uploaded two files, neox.py and neox2.py, which represent two different approaches I attempted. However, my limited understanding of NeoX's layers and intermediate experience with Python have hindered my progress.
I have spent hours on this, but I am stuck. If anyone has expertise in the NeoX architecture and layer structure, please offer assistance.
https://github.com/Digitous/GPTQ-for-GPT-NeoX
Chasm#0381: (also posted in #gpt-neox-devs )
StellaAthena#3530: Well one obvious issue is that NeoX-20B doesn’t use alibi embeddings. It uses RoPE
Chasm#0381: 😁 I saw that in some NeoX specs I was going over; both neox and neox2 .py are built off of/retrofitted modifications of opt.py and bloom.py respectively so I could compare/contrast what the errors were as I pushed forward. I truly am not sure how to proceed with coding in support for NeoX's layers.
StellaAthena#3530: Have you tried looking at the HF implementation?
StellaAthena#3530: At its core there’s only two “weird” things we do (neither of which are weird and both of which are now SOTA standard):
1. RoPE
2. Parallel feedforward / attention
StellaAthena#3530: It should be quite easy
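For reference, a schematic sketch of the sequential vs. parallel residual orderings (the module names are placeholders, not the actual NeoX code; whether the two norms share parameters varies by implementation):

```python
# Schematic only: attn, mlp, ln1, ln2 stand in for the real submodules.

def sequential_block(x, attn, mlp, ln1, ln2):
    # GPT-2-style: the feedforward sees the stream *after* attention has updated it
    x = x + attn(ln1(x))
    x = x + mlp(ln2(x))
    return x

def parallel_block(x, attn, mlp, ln1, ln2):
    # GPT-J / GPT-NeoX-style: attention and feedforward both read the same input
    return x + attn(ln1(x)) + mlp(ln2(x))
```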
Rohan#3064: i have a batch sampler now seeming to work, that lets you finetune alpaca with different sized input in buckets with different batch sizes, so for example 64 tokens = batch size 8, 256- 2
Rohan#3064: if i have a bunch of these minibatches getting gradient-accumulated it should be valid?
EricHallahan#1051: and the double layernorm
kurumuz#5695: why are we doing that
EricHallahan#1051: It was a mistake.
kurumuz#5695: like layernorm is double but there is no new input
EricHallahan#1051: And it persisted.
kurumuz#5695: its completely redundant
StellaAthena#3530: Yea we know
StellaAthena#3530: That’s why we fixed it; it was a bug
StellaAthena#3530: But it was included in the 20B model
EricHallahan#1051: Does it exist in Pythia?
EricHallahan#1051: Or was that removed
StellaAthena#3530: It should be removed
StellaAthena#3530: Instead of asserting that we refuse to fix bugs, why don’t you go actually look and see if they’re still present
EricHallahan#1051: It looks to be fixed? I haven't ever interacted with this issue directly so I know nothing about it other than it had existed.
EricHallahan#1051: I thought it was an intrinsic part of the checkpoint?
EricHallahan#1051: But I guess it isn't if it just gets merged.
chilli#5665: Anybody wanna try solving these Codeforces problems with GPT-4?
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/1085669984652496996/image.png
chilli#5665: I've managed to get 1800A and 1796A with GPT4 since yesterday.
chilli#5665: (by allowing it to output Python lol)
chilli#5665: But the other ones stay persistent
moschrex#4468: Hey,. are you here?
moschrex#4468: @nostalgebraist I read your link. I am inspired and have something to say : Writer mentioned that some AMC12 problems could be solved with Wolfram Alpha. Okay that's where Barnett ended his line-of-thinking. But lets keep going with that. Since we have Wolfram Alpha, and LLMs, there is a possibility opened up here. Namely 👉 **an LLM could constantly query Wolfram in an automated manner giving it mostly algebra and calc-I problems. Retrieve the step-by-step solutions, and train itself on mathematics and algebra, in an entirely self-supervised way. This automated math learning could proceed "at pace" without any human annotators.**
moschrex#4468: (ruminates)
Rohan#3064: what would one expect the minimum batch size to be for training a llm on a100, after which doubling the batch size again would not improve training throughput
jrowe#5371: for process and procedure learning, that would be good, so that a model can generalize an understanding of "how problems that look like X are solved by Y or Z"
jrowe#5371: I dont think actually solving problems using an LLM is necessarily the right way to go about it - teaching it to use external tools that are optimized is better than teaching it to do the calculations internally
jrowe#5371: using a tug boat to tow a train type situation - sure, you can, but there are better tools for the job
Hyperion#0575: Read the Toolformer paper, this is already being done
jrowe#5371: <https://arxiv.org/abs/2302.04761>
jrowe#5371: 😊
Hyperion#0575: anyone have a GPT-4 citation? :berk: I want to add it to a paper
StellaAthena#3530: They tell you how to cite it in the paper itself
CarsonPoole#0640: what is the easiest back of the envelope way to convert from bits per byte to cross entropy loss
makya#2148: What are you citing Gpt 4 for lmao.
jrowe#5371: Probably bribing Stella or Leo with coffee
jrowe#5371: Chatgpt4 says: Assuming that the data follows a uniform distribution, the conversion factor between bpp and cross-entropy loss is:
1 bpp = log2(e) / 8 nats per byte = 0.693 / 8 nats per byte
CarsonPoole#0640: this image from the paper seems to imply that simply dropping the last 5-10 layers from NeoX 20b would have a minor impact on ppl https://cdn.discordapp.com/attachments/729741769738158194/1085684434990284840/Screenshot_2023-03-15_at_6.01.25_PM.png
CarsonPoole#0640: (obviously decoding using the tuned lens)
jrowe#5371: I think it hallucinated at me
kindiana#1016: removing a few layers from 20b would have worse ppl than 12b
CarsonPoole#0640: yeah looks like 2 layers equals 6.9b
Sebbydudie#9763: hey what does that open ai badge next to your name mean? do you work there?
CarsonPoole#0640: I'm interested in what would happen if you drop a portion of the layers and continue training. Does it end up at the same perf as a scaling law would project for the new number of params?
reclaimer#4503: Brothers and sisters, it's time to commence Humanity's Eternal Golden Age.
https://medium.com/@kyeg/the-eternal-golden-age-fccb4f01bd2f
AI_WAIFU#2844: nah he just has his name on the gpt-4 paper for shits and giggles https://cdn.discordapp.com/attachments/729741769738158194/1085686820672655370/image.png
Chasm#0381: I am python babby in a sea of endless information. But that still gives me something to go off of, thanks! 😁
Also persistent enough I'll be sure NeoX support is integrated one way or another. Learning adventure mode.
Sebbydudie#9763: that's awesome man! is it okay if i pick your brain for 5 mins in dms 💀
Ryu#0274: no, probably not
Sebbydudie#9763: icl u lot remind me of this guy's character
tysam&co.#4818: the discontinuities in what's happening in the space around layers 10-12 are wild
Sebbydudie#9763: https://cdn.discordapp.com/attachments/729741769738158194/1085690636067405825/v12044gd0000cg1vfdbc77u5jdppaqo0.mov
tysam&co.#4818: i think as a more balanced opinion, this server can have crab bucket tendencies and a fair amount of political posturing but good research happens too. i'm not sure if mocking anyone will help here. like everything in the world, it's all a mixed bag and not stirring the pot usually adds more net benefit long-term, in my experience at least.
Sebbydudie#9763: aightt it was a half joke but yk
StellaAthena#3530: That is correct
StellaAthena#3530: There's a formula in the Pile paper, tl;dr exponential with a correction (if necessary, tokenizer dependent)
TWaxer#3146: It seems GPT-4 can't successfully translate between English and Japanese.
amaliaaa#7120: oh, do you have access to it?
TWaxer#3146: I asked a person with access to translate some One Hundred Years of Solitude text.
Sebbydudie#9763: i do
amaliaaa#7120: ahh i see
Sebbydudie#9763: anything you want me to ask?
amaliaaa#7120: oh no its fine :)
amaliaaa#7120: ty though
Sebbydudie#9763: ive been offering a load of people, i felt dumb buying it a couple weeks ago
TWaxer#3146: It would be nice for a native speaker to provide some assestment of japanese translation capabilities of gpt 4
Sebbydudie#9763: but with this release its a solid advantage
Sebbydudie#9763: yea a lot of models are known to be english based un/fortunately
Some Point Process#3793: isn't x-ent just the kl-divergence between the model probabilities and the dataset "probabilities" (induced by the labels, word predictions, pixels, or w/e)? (<https://en.wikipedia.org/wiki/Cross_entropy>; i.e. u typically just assume the probability of the data labels is 1 for each example, such that C.E. equals E[number of bits of information (in terms of entropy over the codebook/token probs) that the dataset (empirically) has over the model])
Some Point Process#3793: up to a sign change
Some Point Process#3793: seems like it conflated the (standard) iid assumption with "uniform-distribution" though I can only speculate :p
tysam&co.#4818: yes, absolutely, just don't forget to scale your loss appropriately so the gradient variance isn't unbalanced (not the biggest issue in the world but still important :d)
The_Alt_man#5718: uh while you're here, any tips for getting into big labs like OAI? just a general overview. Do they value scaling-related research? what skillset are they looking for (triton, torch, Jax)?
tysam&co.#4818: it depends upon the part of training, there's not really a 'best batch size' for any processor that is very efficient for the full duration of training in my experience. so past a certain point we're basically looking at the tradeoff 'is the cleanness of our loss signal worth the time it takes to compute the batch'. the required gradient cleanliness goes up a lot IIRC as we approach convergence -- we gotta get those last flecks of information into the model!
hope that helps a bit ❤️ 😄
tysam&co.#4818: how good are you at viciously guarding, say, a blue backpack if someone gave it to you and told you to run for your life, protecting it all costs?
Lord_Drakostar#9337: hey some official admin guy say I was making a channel for a Discord server named EleutherAI
Does eleutherai or eleuther-ai represent you better
Lord_Drakostar#9337: on one hand I don't think this is a valid reason for @'ing an official employee but on the other hand I do need an answer 🤔
synquid#7193: how quickly can you unplug a server?
Rohan#3064: ah i see, a small-length batch would have proportionally less loss because the loss is just the sum of a tensor that would be smaller?
|
Lord_Drakostar#9337: I may @ an admin
Lord_Drakostar#9337: quick one-man vote here tysam&co. you're in the chat is that reasonable
tysam&co.#4818: That part I think you'd have to try yourself -- since larger batches do sorta include that by default (remember we have a triangular matrix filtering out certain values so we're training on all of the sequence lengths up to sequence length N). But maybe disproportionately weighing smaller sequences does help us, it could honestly go either way.
The big thing I'm mainly thinking about is in virtual batching some people often forget (myself included sometimes! 😄 <3) to divide the loss by the right amount since the backward pass adds the gradients in place. Some repos don't divide at all, so the virtual batchsize isn't exactly equivalent to the actual batchsize since the variance of the gradients is higher since it's all added in-place over multiple iterations. 🙂 ❤️
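Here's roughly what I mean as a sketch (untested, with toy stand-ins for the model/data — not any particular repo's code):
```
import torch
from torch import nn

model = nn.Linear(16, 1)                       # toy stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
loader = [(torch.randn(4, 16), torch.randn(4, 1)) for _ in range(32)]  # toy data

accum_steps = 8  # virtual batch = accum_steps * per-step batch size
optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = loss_fn(model(x), y)
    # divide so the accumulated gradient matches one big batch,
    # since backward() adds gradients in place across the accum steps
    (loss / accum_steps).backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```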
Lord_Drakostar#9337: tysam
tysam&co.#4818: greetings
Avi#0447: I'm sorry, what exactly...?
Lord_Drakostar#9337: in your personal opinion would @'ing a staff member for a public stylistic question be a reasonable thing to do
Lord_Drakostar#9337: YES STAFF
Lord_Drakostar#9337: ok
tysam&co.#4818: maybe not, usually admin is for keeping fires down
tysam&co.#4818: though i appreciate the gesture
Lord_Drakostar#9337: so for a discord channel name
eleuther-ai or eleutherai
tysam&co.#4818: since you're trying to represent a community properly
tysam&co.#4818: i'm more of a guest of the eleuther ai community than a member, so i just pass through the general area occasionally.
Lord_Drakostar#9337: I was the second guy to go "hey, I should make a GPT-4 subreddit" and now I'm profiting
Lord_Drakostar#9337: r/GPT_4 boiss
Avi#0447: let me see if I can consult so that I can give a bit more than just my personal opinion, if you can wait a bit
|
Lord_Drakostar#9337: awesome
Lord_Drakostar#9337: ping me please
Lord_Drakostar#9337: thank you for the help
Avi#0447: may I ask why would you need to make such a channel though? we already have a discord server here
AI_WAIFU#2844: I would say the first is mildly preferred, but frankly since it's just a discord channel I don't think it matters much.
Lord_Drakostar#9337: alright
after realising #openai was a thing now I'm still in a bind because that and eleuther-ai should probably take the same format
Lord_Drakostar#9337: eleutherai looks a little weird so guess I'll go with open-ai
Lord_Drakostar#9337: thanks for the help!
Avi#0447: my preference is also for eleuther-ai (since that seems pretty standard way to deal with the -ai suffix in other servers)
Lord_Drakostar#9337: alrighty
Lord_Drakostar#9337: I left this server how am I here
Lord_Drakostar#9337: oh
Lord_Drakostar#9337: nevermind
Lord_Drakostar#9337: I thought this was another server
Lord_Drakostar#9337: I'm a little stupid sometimes
moschrex#4468: lol wut
Rohan#3064: could one train a voice model decoder onto the same embedding space as llama to use it for tts/stt?
Dockson#7731: Ok so, I've started to notice that in other public servers, it's becoming increasingly more frequent for people to join and answer questions with *very obviously* AI generated means, like ChatGPT
chilli#5665: I've been testing GPT-4 quite a bit on some competitive programming problems, and it really does feel stupid
|
chilli#5665: tbh
chilli#5665: I think its true competitive programming level is actually quite low
chilli#5665: maybe the issue is with the RLHF'ing
chilli#5665: does anybody have any tips on how to use GPT4 to solve programming problems? I've been trying the usual suspects - asking it for a high level approach first, generating multiple candidate approaches, asking it to think step by step, etc.
gamma_naught#5267: i effectively used gpt4 to solve a bug in my elasticsearch bulk insert code today. my workflow was roughly:
1) show it the suspicious insert, describe issue
2) it suggested a fix. I ran it, got an error
3) tell it the error
4) it apologizes! and corrects the error
5) bug fixed
so idk about competitive programming out of the box, but from an "everyday" programming perspective, I found it useful
chilli#5665: yeah, "everyday" programming is way easier though
chilli#5665: I use copilot/bingchat regularly as part of my regular programming workflow
gamma_naught#5267: only reason i dont is because im concerned about openai training on my codebase
chilli#5665: luckily I work on OSS stuff 🙂
Some Point Process#3793: how did it "correct" the error after you told it what it was
Some Point Process#3793: But yeah it's p cool that an llm can more often than not acknowledge it made a mistake (after being contradicted by the human assistant) and somewhat fix its line of reasoning, per my own experiences (gpt3)
gamma_naught#5267: me: this gives opensearchpy.exceptions.RequestError: RequestError(400, 'x_content_parse_exception', '[1:2] [UpdateRequest] unknown field [_type]')
|
gpt4: I apologize for the confusion. The _type field is not necessary for OpenSearch 7.x and later versions. You can simply remove the _type field from the data_generator() function to avoid the error. Here's the updated function without the _type field:
chilli#5665: it can fix trivial stuff like that
chilli#5665: but in the case of competitive programming, it just has the wrong approach
chilli#5665: and despite my persistent attempts to correct it
chilli#5665: it persists in its incorrect approach
chilli#5665: 😠
gamma_naught#5267: im sure youve seen this: https://arxiv.org/pdf/2203.07814.pdf
chilli#5665: yeah
chilli#5665: but now I'm kinda suspicious about the results in that paper
gamma_naught#5267: they sample a TON of solutions and do a bunch of postproc to get anything to work
AI_WAIFU#2844: What are the odds the base model could do it but it was RLHFed with a strat that doesn't work?
chilli#5665: yeah I'm kinda worried about that
chilli#5665: I'm wondering whether RLHF makes it better out of the box at a lot of stuff people ask it
chilli#5665: but worse at these actually difficult questions
AI_WAIFU#2844: IMO it also just lowers the entropy of the model, to the point that it fails to consider alternatives when asked to come up with them.(although my experiences here are a bit out of date)
chilli#5665: but tbh my experiments lead me to
chilli#5665: largely discount all results on standard benchmarks
chilli#5665: like Minerva's MATH results or any results on APPS
anotherone#9475: Wait, is the GPT-4 model they're serving RLHF'd?
|
ilovescience#3282: yes
Fessus#9563: Wouldn't be too surprising. The RLHF correction hammer does some damage with every swing
JDC#0128: Is this just a browser issue? Or did they make a mistake? https://cdn.discordapp.com/attachments/729741769738158194/1085760297308008531/image0.png
anotherone#9475: suspicious how? leakage?
anotherone#9475: Uh, the tested on a contest that was held-out based on time right? So it couldn't have been leaked
chilli#5665: yeah, that's true
chilli#5665: perhaps fine-tuning GPT4 on competitive programming problems
chilli#5665: + best-of-1000
chilli#5665: will make it pretty good 🙂
StellaAthena#3530: It’s supposed to be in italics. There’s a browser plug in that will make it render that way
anotherone#9475: @chilli have you seen people posting on twitter generating small games like pong + game of life
anotherone#9475: *apparently* it's a lot more robust at generating error-free complete code
chilli#5665: yeah, but those are all "existing code"
chilli#5665: for the most part, GPT-4 has been fine at generating the code (for the simple problems)
chilli#5665: if it has the right solution
chilli#5665: but it's sucking at generating the right solution
anotherone#9475: Gotcha. What happens if you for best of n, you do it consecutively like "that's one way to do the solution, can you think of another way?"
chilli#5665: well, I can only really do best of 10 reasonably
chilli#5665: lol
chilli#5665: and even that takes a while
|
anotherone#9475: Oh yeah you have to manually enter it lol
anotherone#9475: What's your current best prompt?
anotherone#9475: Does it involve few-shot
chilli#5665: No, I've mostly just been asking it to "write code for this problem"
chilli#5665: or
chilli#5665: "provide a solution sketch to this problem"
anotherone#9475: Hm, I don't know whether few-shot learning would even improve the RLHF'd models lol
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/1085762797918498856/screencapture-chat-openai-chat-88cdc244-092c-4fd7-8c73-0f8928584b61-2023-03-15-20_12_54.png
chilli#5665: Here's a representative interaction with the model where I try pretty hard to get it to correct its solution
chilli#5665: actually that's unreadable on discord
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/1085763242984472698/screencapture-chat-openai-chat-88cdc244-092c-4fd7-8c73-0f8928584b61-2023-03-15-20_15_03.pdf
chilli#5665: try that pdf lol
anotherone#9475: What if you told the model what the right test case output should be/
chilli#5665: I did
chilli#5665: if you look at that chat
anotherone#9475: Yeah I mean in one go
chilli#5665: I've also tried that
anotherone#9475: Hmm
chilli#5665: that's what I originally tried actually
chilli#5665: seems to have made it worse overall, although I didn't properly benchmark it
|
ogkalu#7841: Try simulating conversational dynamics. Hold on I'll link what I mean
anotherone#9475: Yea i'm guessing it's just not going to be very good at the comp programming problems
anotherone#9475: Regardless of what god tier prompt you use
alstroemeria313#1694: ...There is an activation function that works by simply treating the outputs of the previous layer as a vector and *sorting* them? Whaaaat?
ogkalu#7841: https://www.reddit.com/r/ChatGPT/comments/10bpzjb/chatgpt_scores_80_correct12_out_of_15_on_sample/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button
alstroemeria313#1694: https://arxiv.org/abs/1811.05381
chilli#5665: that's hilarious
anotherone#9475: That is incredibly cursed
ogkalu#7841: Don't think anyone's actually evaluated a method like this against normal CoT but one thing it does that normal CoT doesn't do is start on a path/solution and later with other personas decide it's not correct/enough etc
chilli#5665: I mean, if you want to send me a solution to the codeforces problem I can evaluate it for you 😛
chilli#5665: Try to get this one to work: https://codeforces.com/problemset/problem/1792/A
chilli#5665: Or any of the first 10 on this page: https://codeforces.com/problemset?tags=800-800 (other than 1800A, 1796A, and 1791B, which I managed to get working with enough attempts/different prompts)
paws#3311: whats the current prompt template you are using? "Solve this coding problem for me <insert problem> ::"?
chilli#5665: yeah, I've tried that.
chilli#5665: I've tried a bunch of variations, including "Provide a high level solution sketch to me", "write the code for this problem", etc.
paws#3311: pseudocode?
chilli#5665: Originally I was asking for code in C++, but it seems to do better in Python?
chilli#5665: yeah I tried "provide a high level solution sketch in pseudocode to me" followed by "please implement it"
chilli#5665: although usually its high level solution sketch in pseudocode was wrong
tysam&co.#4818: yay for pytorch 2.0 being out! can't wait to see what bizzare things we all get into now
|
chilli#5665: yeah, please send me any ... bugs you run into
paws#3311: hmm i wonder if a prompt changes the performance a lot
chilli#5665: hasn't had that large of an impact for me in my experimentation
tysam&co.#4818: sure thing. anything in particular?
have been waiting a looooong time for good compilation like this. there's a lot of very cool lower-level weight hacks that theoretically are much faster but in practice so much slower due to unfused kernels
chilli#5665: just anything you expect to work that doesn't
chilli#5665: although if you have any particular examples I can probably guess whether it'd work or not 😛
Some Point Process#3793: yeah i guess if there's effectively some chain of thought process that can be derived from a convo it can answer a difficult question (where the "inferential distance" can be covered by virtue of the "conversation" making those inferences "step by step" or w/e (i.e. to get from a->d, figure out what "a" (premise) implies and so on (recursively), to get a chain of implications a->b->c->d))
anotherone#9475: Have you tried like "Pretend to be an extremely smart competitive programmer who won the IOI..." etc
anotherone#9475: LOL
anotherone#9475: "explaining on the forums to a student..."
paws#3311: i wonder what this means about its ability to generate code for simple coding stuff
jrowe#5371: I told it to create a script that opens ms paint and draw a picture of what it thinks about EleutherAI's ILoveScience
jrowe#5371: https://cdn.discordapp.com/attachments/729741769738158194/1085769025663029329/image.png
jrowe#5371: Almost every script I've had it make just worked
jrowe#5371: Though asking it to write freehand was hilariously bad
paws#3311: hmm quite interesting to me that it can do this but not coding problems
jrowe#5371: Some problems will require recursive chains of thought in order to use effective problem solving techniques
jrowe#5371: First order / zero shot solutions are limited to what can be calculated in one pass, from whatever the llm's knowledge base contains
|
jrowe#5371: some issues will require building the solution strategy itself recursively
tysam&co.#4818: The nightly version failed on a really fast CIFAR10 pytorch conv-based implementation that I had, but I haven't tried it with the release version (I'm sure I'll get around to that sometime).
Something I did see in the transformers project I'm working on was that if one added certain layers during training, it would just ignore them or act weird otherwise (which makes sense). Is there a good manual way to trigger a retrace (hopefully using any possible caching?)
paws#3311: ya sort of like <simple scripts> are interpolation in its knowledge base hence possible and can be pattern matched against, but give it something a step higher and it breaks down
jrowe#5371: Yup, so a super valuable chain of thought would be analyzing what order of problem solving needs to be applied
jrowe#5371: Getting into computation theory and stuff that will likely continue to be hard problems, unless we get lucky
chilli#5665: What was the error you ran into?
tysam&co.#4818: I'll see if i can dig it up once I swing around to that one.
For the one I'm currently working on, something was strange in the performance. I have a suspicion it wasn't backpropping through the branches on the residual whenever I added a new residual block (only the embedding layer would get the weights). Turning off compile let everything work pretty well as it seems like it should have, at least.
ogkalu#7841: This chrome extension runs cGPT code right in the chat interface https://chrome.google.com/webstore/detail/rungpt-execute-chatgpt-co/ddfiefcjflpdpanadjmgpogkfnjaifod so you can have some quick feedback with what it generates. That might help
chilli#5665: That sounds pretty bizarre
tysam&co.#4818: I'll see if I can quantify it more clearly if I run into it again. I haven't tried any of the debugging tools on for size. It's a weird edge case, between all of the layers below the current layer being frozen and the dynamic growing. Will update you as I have something closer to actionable info on it.
love not attention#5854: Wow it struggles very surprisingly with that.
chilli#5665: Oh, dynamic shapes?
chilli#5665: Yeah I’m pretty disappointed
tysam&co.#4818: Dynamic depth -- in attention blocks. So...technically yes and no?
tysam&co.#4818: https://tenor.com/view/well-yes-but-actually-no-well-yes-no-yes-yes-no-gif-13736934
chilli#5665: Ah, I think that should work, as long as you’re compiling within the blocks
|
love not attention#5854: It's pretty counterintuitive that the same model is able to answer some leetcode hards...
tysam&co.#4818: Okay.
tysam&co.#4818: oh you know what
tysam&co.#4818: The value initializes with a default value of 1 block
tysam&co.#4818: so maybe compilation is tracking/tracing the 1-block deep network?
paws#3311: what happens if you make the problem straightforward
paws#3311: is it able to go from pseudocode to code for this
tysam&co.#4818: The block selection is written as a for loop over already-initialized block layers, so maybe the initial logic is screwing with something in the compiler function
chilli#5665: Do you have the code somewhere?
tysam&co.#4818: I don't have the current breaking code easily handy, but I'd like to revisit it soon and I can see if I can create a good reproducible example with it
Rohan#3064: think microsoft gave openai access to all the private github repos for training
tysam&co.#4818: oh joy
Rohan#3064: gotta embed one of those canaries in a repo, like how cartographers used to add fake cities to maps to catch copycats
chilli#5665: If you wanna do it I can feed it in 🙂
chilli#5665: The problem statement is already pretty straightforward tbh
love not attention#5854: maybe the competition phrasing signals the problem is more difficult than it actually is or something lol
paws#3311: oh yes i agree, i just sorta meant if it cant solve the problem because of language, what level of handholding can make it generate accurate code
paws#3311: if i get access, i'll try and let you know
love not attention#5854: also I wonder if the model begins to think it's dumb if it has to repeatedly be corrected
~~~#4682: https://cdn.discordapp.com/attachments/729741769738158194/1085791811341144094/FrN56vyXwAATRLi.jpg,https://cdn.discordapp.com/attachments/729741769738158194/1085791811580211200/FrN8bJoagAABuNa.png
|
~~~#4682: Damn!
jrowe#5371: <https://twitter.com/jacksonfall/status/1636107218859745286>
Spring#7247: Some random attacking Eleuther & Connor
(See child comment as well)
https://www.troddit.com/r/MachineLearning/comments/11sboh1/d_our_community_must_get_serious_about_opposing/jcd8pmg/
Spring#7247: Reddit link if you can't access that alternative front-end
Spring#7247: https://reddit.com/r/MachineLearning/comments/11sboh1/d_our_community_must_get_serious_about_opposing/jcd8pmg/
ilovescience#3282: looks like that guy was banned in 11/2021 and >1 year later he's still bitter about it :berk:
ilovescience#3282: I saw him once on Twitter complaining too
anotherone#9475: r/ML is a shell of what it once was
anotherone#9475: sad
ilovescience#3282: let's nag chilli about that :berk:
anotherone#9475: is he a mod or smth?
anotherone#9475: tbh, not really much you can do. Most communities die when they get too popular.
ilovescience#3282: yeah he's a mod there
joaquinito2070#6071: Hello!!! Please, develop a GPT-4 to install it on DigitalOcean.
Rugnir#1468: thats really interesting
Elle#6396: It was cool to see you all add Flash Attention. Do you think there will be plans to add integration for something like peft to the main repo? I saw they managed to fine-tune the 20B NeoX model on a single 4090
Elle#6396: They used 8-bit quantization, iirc
|
joaquinito2070#6071: Hello!!! Please, develop a GPT-4 to install it on DigitalOcean.
joaquinito2070#6071: Is there a GPT-4 to install it on any VPS??? @everyone
reavatar#8127: wrong ~~channel~~ server
zukaboo#8804: This is the right server: https://discord.gg/jNnu3drx
zukaboo#8804: I'm not sure if DO provides a GPU though.
Ryu#0274: Llama cpp is cpu afaict
zukaboo#8804: Yep, so it might be an option.
wissotsky#8554: Youll have to go to the openai offices and ask them nicely
Louis#0144: Can we star this
Louis#0144: Lmfao
Louis#0144: Pls
synquid#7193: I mean, I want it too...
AI_WAIFU#2844: actually, how impractical would it be to do cpu distributed fine-tuning?
AI_WAIFU#2844: training is out but high ram machines are cheap and the use of CPU limits bandwidth requirements since the machines themselves are much slower
kurumuz#5695: very practical, you can just use zero-offload to offload the whole optimizer to the CPU. and its still quite fast unless your batch size is too low.
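the relevant bit of the DeepSpeed config is small — rough sketch from memory (key names as I recall them from their docs, and the `deepspeed.initialize` kwargs here are how I remember them, so double-check against the current API):
```
import torch.nn as nn
import deepspeed

model = nn.Linear(16, 16)  # stand-in for whatever module you're fine-tuning

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
    "fp16": {"enabled": True},
}

# optimizer states live in CPU RAM, forward/backward stay on the GPU
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```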
AI_WAIFU#2844: like I'm thinking internet-scale finetuning on CPUs
kurumuz#5695: ahhh, you mean no GPU at all
AI_WAIFU#2844: so that a bunch of anons can make something together
synquid#7193: so bloom but finetuning
AI_WAIFU#2844: ye
|
AI_WAIFU#2844: or petals really
Hyperion#0575: Gptmacbook
Ryu#0274: GPT-🍎
Orz#3023: gptbook
cognomen#6297: imagine incinerating a significant % chunk of your entire company's shareholder value to make a model that fits on a $4 VPS
synquid#7193: GPT-$4
cognomen#6297: less bottlenecking, more like driving a train through the eye of a needle
uwu1#4864: main issue is distributed trust rather than perf for that IMO
AI_WAIFU#2844: this seems like not that big of an obstacle tbh
uwu1#4864: how do you protect the system against anon bad updates?
uwu1#4864: if you trust everyone you can try federated learning but it would probably only work for finetuning
jrowe#5371: Especially given a malware risk
jrowe#5371: Like torrents, even exclusive community isn't entirely safe
AI_WAIFU#2844: don't let dicks into your discord
joaquinito2070#6071: @jrowe Is there a GPT-4 source code to install on any server???
AI_WAIFU#2844: don't execute any code, only pass around weight updates
jrowe#5371: Yes, go to openaigpt4forfreebecausethatstotallyarealthing.com
joaquinito2070#6071: does not work
jrowe#5371: Hmm, must be blocked in your country
AI_WAIFU#2844: like I'm imaging everyone shares a public key to sign messages or something and then a group can get together and train
|
joaquinito2070#6071: @jrowe can you give a GPT-4 bot in Discord???
jrowe#5371: No
synquid#7193: you refuse?
jrowe#5371: Get $20 and sign up lol
jrowe#5371: That's the only way
jrowe#5371: I don't have api
synquid#7193: $25 because VAT :goose10:
uwu1#4864: if you're fine with them using a central server to be the param server, there a lot of federated learning impls. and you can just overlay your favorite pk based p2p networking atop that
AI_WAIFU#2844: yep, that's roughly what I had in mind, not a massive lift to implement
uwu1#4864: it would be interesting, I worked on a web browser based one a bit ago but didn't figure out the p2p part
uwu1#4864: the idea being that you can just run this page to help train w gpu accel or pure cpu, sandboxed by the browser
joaquinito2070#6071: Is there a freelancer to build me a GPT-4 with ChatGPT to run on my dedicated server hosted at my home??? @everyone
Spacecraft1013#5969: i'm pretty sure if an individual had the skills, time, and computational capacity to clone and train GPT-4 on their own, it would have been done well before OpenAI
synquid#7193: casual billionaire in the chat
Spacecraft1013#5969: someone's gonna come in chat like "well I have a datacenter with 2000 H100s"
synquid#7193: another 0 and we're talking
Spacecraft1013#5969: H1000 :berk:
Cyclcrclicly#3420: datac0enter
45#2247: on which discord do they have the gpt-4 discord bot
zukaboo#8804: All right, now a serious answer: GPT-4 is so proprietary that even the model size is a trade secret. It is something nobody but OpenAI can run.
|
I gave you a link to a LLaMA server for a reason: it is a model you can actually download and run, even if you are not OpenAI and do not have a datacenter of A100s. And the smallest model was even finetuned for following instructions.
zukaboo#8804: Now if you want some kind of web server that *queries* GPT-4 using the API, it is possible. See https://openai.com/blog/openai-api. But nobody will help you here, since everyone hates OpenAI.
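The query itself is only a few lines with their Python client, if you do have API access (sketch; assumes the `openai` package, an `OPENAI_API_KEY` env var, and that your account has `gpt-4` enabled):
```
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp["choices"][0]["message"]["content"])
```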
kostermw#7940: Is the the-faraday-cage broken? Not seeing any output since around 4pm.
BoneAmputee#8363: CLIP-guided VQGAN (`.imagine`) is currently down :odonod: I need to fix that :guilty: CLIP-guided diffusion (`.diffusion`, `.diffusion2`) and Stable Diffusion (`/imagine`) are up, though the former two are not behaving appropriately right now
Hawk#1399: Are CNNs shift invariant or equvariant? Google says both.
ari#9020: Convolutions are shift equivariant, global pooling is shift invariant
Hawk#1399: So the whole network would be?
uwu1#4864: no unless you took care to make it
uwu1#4864: e.g. padding
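quick toy check (a sketch): with circular padding the conv output shifts exactly with a circular shift of the input (equivariance), and a global average pool on top is unchanged (invariance); ordinary zero padding breaks the exact equality at the borders
```
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 1, 8, 8)
w = torch.randn(1, 1, 3, 3)

shift = lambda t: torch.roll(t, shifts=2, dims=-1)          # circular shift along width
conv = lambda t: F.conv2d(F.pad(t, (1, 1, 1, 1), mode="circular"), w)

print(torch.allclose(conv(shift(x)), shift(conv(x)), atol=1e-6))           # True: equivariance
print(torch.allclose(conv(shift(x)).mean(), conv(x).mean(), atol=1e-6))    # True: invariance after global pooling
```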
kostermw#7940: Don't feel guilty. This is complex stuff 😉
PassingbyPosts#3227: I'll be honest, i see that llama is way closer to stable diffusion for language models than closed A.I https://cdn.discordapp.com/attachments/729741769738158194/1085998267403354194/image.png
PassingbyPosts#3227: but it leaves a lot to be desired
tohara-pandologic#1138: Hi, might there soon be a HuggingFace-compatible way to parallelize gpt-NeoX-20B (e.g., via GPTNeoXForCausalLM)? From what I gather the only practical way to run inference is via an 80GB GPU (e.g., via Paperspace's A100-80G machine type). This is for R&D not production, so it would be for ad hoc usage in line with the high rates for such cloud-based GPUs. By the way, I thought this would be in the works for parallelformers, but I just noticed that the repo is not very active. -- Tom
StellaAthena#3530: It can be run on two 3090s or a A6000 (48 GB) pretty much out of the box
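e.g. with HF transformers + accelerate, roughly (sketch; fp16 weights are ~40 GB, and `device_map="auto"` handles the two-3090 split):
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b",
    torch_dtype=torch.float16,
    device_map="auto",  # needs `accelerate`; spreads layers over the available GPUs
)

inputs = tok("EleutherAI is", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```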
tohara-pandologic#1138: Thanks for the info. I'm more familiar with AWS instances so the 80gb via Paperspace was a guess.
paws#3311: there are cheaper instances on vast.ai if you get lucky
paws#3311: but ofcourse more useful for R&D purposes not production
moschrex#4468: lmao at Yudkowksy
moschrex#4468: We can't duplicate a strawberry without destroying the world
sisyphus23#9750: Is there a channel where we're all working together to beat OpenAI at their own game? How can I help?
|
A Deserted Genie#1498: stability ai kind of sucked that energy out of eleuther. as far as i can tell all of the "beat open ai" is now private under the stability umbrella
synquid#7193: http://actuallyopenai.com
Hyperion#0575: man Emad does like buying novelty domains 😄
sisyphus23#9750: So EleutherAI is not actively making a plan to build something new?
A Deserted Genie#1498: not that's competitive with anything oai is doing, no
StellaAthena#3530: Ignore @A Deserted Genie, he doesn’t know what he’s talking about
A Deserted Genie#1498: I've read every channel, show me where anyone here is openly planning replication of something like chatgpt
StellaAthena#3530: “Not doing anything new” and “replicating ChatGPT” are not the same thing. Our focus is on doing research in AI, not building commercial apps
sisyphus23#9750: I'm not being judgy, I just want to help
Any issues I can look at, tasks on a board?
A Deserted Genie#1498: ok stella, this is what he originally asked https://discord.com/channels/729741769192767510/729741769738158194/1086008108968579214
Shmingmaster#3961: Go to "Channels and Roles" right above "Announcements", scroll down and hit "Receive notifications from active projects looking for volunteers"
sisyphus23#9750: @A Deserted Genie That is fair, I did phrase that badly, my apologies
A Deserted Genie#1498: not that EAI was ever out to "beat oai" in the first place
StellaAthena#3530: @sisyphus23 what’s your background / experience? Have you ever trained an AI algorithm on multiple computing nodes?
StellaAthena#3530: Same, I just logged in and didn’t scroll back for more context
sisyphus23#9750: I haven't done anything with multiple computing nodes, no
My experience is mostly theoretical at the moment (from a degree) but I'm decent at the programming parts
Zoru#1014: "beat openai at their own game" implies playing by their rules.
|
Which implies that they set the rules and playing fair. Building open source ai is none of those things.
Zoru#1014: If eai followed everything openai did, they could easily mislead into dead ends and bad rabbit holes
StellaAthena#3530: @sisyphus23 if you scroll down on the channels list beyond the “discussion” section, the next four are primarily composed of project channels organized by topic area (NLP, Interpretability, Alignment, and Multimodal Models). A lot of these projects have room for more volunteers, though the details of the skills and experience they’re looking for varies massively.
StellaAthena#3530: As @Shmingmaster said, you can click “channels and roles” to sign up to receive pings for calls for volunteers as well.
sisyphus23#9750: Thanks, I'll start going through those! I assumed they were discussing concepts
Also have signed up for the volunteering part
StellaAthena#3530: Training our next big language model is a very technically challenging task that requires rather specific skills, and is unfortunately not something that most people can be easily onboarded to. Like, on a basic level, there’s very little HPC in current AI curricula despite that being 90% of the hard stuff.
StellaAthena#3530: But that doesn’t mean that there isn’t cool research going on that you can be a part of.
StellaAthena#3530: Thanks for the feedback!
Yeah, maybe the welcome message should say that explicitly.
sisyphus23#9750: Sounds reasonable! I'd love to help with that hard part as well
Can I DM to ask about perhaps a link or two to begin getting used to the environment you work with?
StellaAthena#3530: We use a wide variety of computing environments depending on the needs of individual projects. I think the best thing to do is browse channels and their pinned messages to get a feel for what interests you.
sisyphus23#9750: That's a diplomatic but reasonable answer 🙂
Going through the channels now
StellaAthena#3530: In terms of easily on-boardable coding work though, #lm-thunderdome is the channel for our project maintaining the Language Model Evaluation Harness, a library that allows users to evaluate a wide variety of models and APIs across many NLP benchmarks. We’re currently doing some refactoring, but we’re always interested in having more tasks implemented.
Current version: <https://github.com/EleutherAI/lm-evaluation-harness>
|
WIP refactor: <https://github.com/EleutherAI/lm-eval2>
The_Alt_man#5718: Does EAI still sponsor compute for individual projects? @StellaAthena
sisyphus23#9750: Oh that's nice!
I did see the project, didn't notice the rewrite
Will check it out
StellaAthena#3530: We are currently in a bit of a computing crunch (large model + final runs for papers) but in principle yes, especially if you’re down to wait a couple weeks for capacity to get freed up 😛
The_Alt_man#5718: 👍 I'd DM-ed you a few weeks back, but I guessed you might be too swamped with DMs to read them all
The_Alt_man#5718: I can wait a couple of weeks 🙂
StellaAthena#3530: Oh yes! I was very interested in that idea. Sorry, I didn’t realize I never replied
StellaAthena#3530: You mentioned having some preliminary results, can you share them with me?
The_Alt_man#5718: absolutely! DM-ed
cvanderloo#2939: I’ll do it for $100k
Hyperion#0575: I'll build GPT-4 for $150m and a team of 100 PhDs
artem9k#7593: I'll do it for $1000
paws#3311: You need compute experts
paws#3311: :berk:
paws#3311: (More than phds)
zphang#7252: phd in slurm
makya#2148: Hire some or buy some lol.
BlinkDL#1985: RWKV 14B ctx8192 is a zero-shot instruction-follower without finetuning 😃 (only trained on pile v1)
|
You just need the alpaca prompt and that's enough
try it: https://huggingface.co/spaces/BlinkDL/ChatRWKV-gradio (click examples and edit) https://cdn.discordapp.com/attachments/729741769738158194/1086064619048685619/image.png,https://cdn.discordapp.com/attachments/729741769738158194/1086064619434541076/image.png
zfurman#1678: Is there a browser plugin to replace those stupid numbered citations with nice, author-year citations? Someone has to have done this, I've reached my breaking point
tysam&co.#4818: Could someone please point me to a resource on converting perplexities between different tokenizers? I'm doing some experiments with various token lengths and am completely lost on the current way of ensuring an apples-to-apples comparison (bits per character, etc?)
StellaAthena#3530: You can’t do it in a tokenizer-independent fashion, but you can pick a reference tokenizer (e.g., Unicode) and convert the perplexities to bits per Unicode character
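Roughly, as a sketch (assuming you've summed the per-token loss in nats over a fixed evaluation text and counted its Unicode characters):
```
import math

def bits_per_unicode_char(total_loss_nats: float, num_unicode_chars: int) -> float:
    """Convert a summed per-token NLL (in nats) over an eval text
    into bits per Unicode character of that same text."""
    return total_loss_nats / (math.log(2) * num_unicode_chars)

# e.g. mean loss of 2.9 nats/token over 1M tokens of text that is 4.2M characters long
print(bits_per_unicode_char(2.9 * 1_000_000, 4_200_000))
```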
tysam&co.#4818: Much appreciated, thanks!
talia#1119: Hey if anyone has GPT-4 access and wants to play around with a proof bot thing I just made a quick little starter app that is MIT licensed and i'm hoping modulo the whole closed ai thing folks can just contribute and make a thing for fun: https://github.com/tlringer/gpt4-fun
talia#1119: it's super naive I just wanted to get gpt-4 set up, but it'd be nice to get it interacting with the proof checker and so on
talia#1119: but also feel free to use it as a launchpad to do whatever it is you want to do with the gpt-4 api
talia#1119: also if there are open soruce chat APIs feel free to stick one in there as an alternative backend and cut a PR
StellaAthena#3530: @talia There's a ChatGPTNeoX that I'm 90% sure is on HuggingFace and is Apache 2.0
talia#1119: nice is the interface similar?
talia#1119: i'm going to pass out because i haven't slept in forever but if anyone wants to add a ChatGPTNeoX backend that can be configured for this please please do it'd be good to not depend on any one chat backend
StellaAthena#3530: ```
from transformers import pipeline
pipe = pipeline(model='togethercomputer/GPT-NeoXT-Chat-Base-20B')
pipe('''<human>: Hello!\n<bot>:''')
```
talia#1119: ah ok very different interestingly
talia#1119: we should make a uniform chat api that is backend-independent thatd be good
|
talia#1119: why are there no software standards in machine learning lol
talia#1119: that is a complaint about the field not you guys
talia#1119: i love that there even exists anything open source
talia#1119: still going to sleep but ty will think about over the weekend. i think next i am going to try to have it interact with an actual proof assistant and use proof assistant feedback as conversational input
talia#1119: so like human asks for proof, proofbot tries, calls out to proof assistant to check proof, looks at errors, tries to fix itself, etc . baldur but in chat format: https://arxiv.org/abs/2303.04910
talia#1119: just spitballing sorry i talk a lot good night lovely humans
kd90138#9368: we love a good proof-assistant calling proof chatbot
Hii#4258: @StellaAthena
We're working on live3d v3
<https://github.com/transpchan/Live3D-v2>
It allows you to replace a person in a video with just a character turnaround
Think avatar level graphics with just a video of a human
V2 was trained on anime working to bring it to humans and any general character turnaround
This architecture was the original inspiration behind ControlNet
Can we get some compute and help too?
thedarth2k#6611: Anyone here have experience with, or slight familiarity with, what StyleGANEX is?
ayushkaushal#1786: https://twitter.com/ycombinator/status/1636489217575624704?cxt=HHwWgIDRxbuR_bUtAAAA
Can someone tag OpenAI on this post?
lihe07#0906: hi everyone. I'm interested in ai research but I'm quite new to this and I wanna ask for some advice
lihe07#0906: i've already read and implemented some papers, but i don't know what to do next
|
The O#7760: I cannot grasp new concepts without sitting down and writing them from scratch myself. For example, I struggled so much with understanding transformers until I decided to sit down and produce a badly written one. Does anyone have a better method, or an exercise that can help me with reading papers and grasping the "huh this probably will act like that" parts?
The O#7760: my method is probably not necessarily bad but I'd not reject faster ideas 😛
cognomen#6297: poke around, print debug statements and change things in an existing implementation
cognomen#6297: colab is good for live experimenting
lunarflu#6769: I'd say use analogies to get a high level understanding, then worry about the details later
arta#4338: idk i think going from scratch is the surest way to understand something, unfortunately the papers are not written to teach but to awe / get accepted in publications.
lunarflu#6769: Yeah, that's why I think getting *an* understanding, and evolving it, may be useful to certain people.
lunarflu#6769: something like "transformers are useful general architectures, we don't have to build every NN from scratch now" I'd say is a useful starting place, what do you think?
arta#4338: i have to agree with that too 😅 the classical top down or bottom up discussion. in general i've been wondering a lot how to learn about NN, any ideas? i feel off the bat one can consider a few things:
-1 the loss function
-2 the architecture
-3 the data
-4 bias / variation / demographic distributions
But I would have no idea how to systemically do "math exercises" about them. Maybe the tool from https://wandb.ai/ could be useful.
CarsonPoole#0640: does anyone know how much overhead there is moving data from one cuda mempool to another (on the same device)
ori#1974: Hi friends 🙂 I’m a PhD student at Texas A&M and looking to collaborate with people on open-source research. The goal would be to publish our research and learn from each other along the way - I’m trying to expand my circle of collaborators to all of you talented people! Hmu if you’re interested!
vikasp#7540: Has anyone ever fine-tuned GPT-J, but replaced the torch.float32 casts in the attention block with bfloat16? I think it should be fine, but I'm curious if you saw any accuracy issues https://cdn.discordapp.com/attachments/729741769738158194/1086318910539841576/image.png
kurumuz#5695: it will be mostly fine
kurumuz#5695: with gpt-j
kurumuz#5695: it doesn't really require that matmul to be in fp32
|
vikasp#7540: Thanks 🙂
Kharr#7888: You can fine-tune the entire thing in fp16 just fine too. GPTJ is very low precision friendly. Also quantized to 8bit really well.
jrowe#5371: 4bit too
jrowe#5371: <https://bellard.org/ts_server/> lots of interesting models here
vikasp#7540: Huh, so instead of using amp and autocast, just cast the model and all the inputs to float16?
Kharr#7888: Yep, just do model.half() which should use less memory than amp
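e.g. something roughly like this (untested sketch; worth sanity-checking that pure fp16 stays stable on your particular data):
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B").half().cuda()  # whole model in fp16, no autocast

opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

batch = tok("The quick brown fox", return_tensors="pt").to("cuda")
out = model(**batch, labels=batch["input_ids"])  # fp16 forward + loss
out.loss.backward()
opt.step()
```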
vikasp#7540: Interesting, I'll try that out. Would you recommend fp16 over bf16 in this case?
Kharr#7888: Depends on your hardware. Fp16 has acceleration on more hardware. Bf16 is supposedly more stable.
vikasp#7540: Yeah, if we don't need the extra range of bf16, maybe the precision tradeoff of fp16 is better
vikasp#7540: I'll try it out
vikasp#7540: Thanks!
vikasp#7540: My understanding is that quantized training lowers speed (due to the overhead of quantizing/dequantizing), but also uses less memory. So it's possible that being able to set a higher batch size will increase overall speed, but not a given.
Is this the right way to think about it?
jrowe#5371: Quant/dequant seems counterproductive as part of training lol
Kharr#7888: Quantizing can be useful mainly if the model won't fit into memory on lower end hardware.
jrowe#5371: I have superficial knowledge of this domain, someone who's trained models might have more insight
vikasp#7540: 👍 That makes sense, so mainly useful for inference
jrowe#5371: Naively, training in 4bit mode means adjusting weights anywhere from 0-15, so it's heavy handed
Kharr#7888: Also useful for finetuning a bigger model like GPT-J on a 16 GB GPU. Not everyone has access to A100s
|
jrowe#5371: Cries in 4gb vram laptop gpu
uwu1#4864: with fused dequant+mm kernels i wonder if loading from mem + dequant saves enough bandwidth for the mm to actually be faster
Daniel Elias Velasco#8584: Depends, what are you interested in! there are so many exciting and fun fields to explore. For example, Natural Language Processing includes popular projects you have likely heard about such as ChatGPT. Computer Vision is a field that explores object detection, segmentation, and image recognition. This is where you see the software that is used in Tesla's self-driving cars, and surveillance video that is used to spot and capture rare species of animals through trail cameras. There is reinforcement learning, where you see people working on creating AI models that can play the world's oldest and toughest games (such as Deepmind's AlphaGO, which if you haven't seen the documentary I highly recommend checking it out!). This is just a brief explanation of popular trends, but there is so much more waiting to be discovered.
amaliaaa#7120: out of curiosity, does anyone know if any good general vision transformer model is released?
amaliaaa#7120: something that can generally tell what's happening in an image
amaliaaa#7120: ive been completely out of the loop when it comes to ViT s
tpapp157#3643: As a start point you can try using clip.
amaliaaa#7120: oh really
amaliaaa#7120: was CLIP just giving an image some fitting tags?
tpapp157#3643: CLIP was image-text contrastive learning. So it'll convert your image into an embedding vector and you can do whatever you want with that.
amaliaaa#7120: ahh okay
amaliaaa#7120: cool, thank you!
The O#7760: thanks for all the answers!
Dri0m#3828: hey, anyone here paying for gpt4 chatgpt?
Dri0m#3828: does it work for you right now?
Dri0m#3828: my plus subscription completely disappeared lol
Dri0m#3828: just wondering if their system is on fire or what
Dri0m#3828: fun fact btw
Dri0m#3828: if you ingest a scientific paper into it, it can generate a pytorch implementation for you
Dri0m#3828: can't wait for the API access
|
alstroemeria313#1694: that happened to me once and i just had to wait, it was weirdness due to really high load i think
alstroemeria313#1694: oh i should try that
alstroemeria313#1694: ok so like. i have a Gabor filter bank, how do I decide how to weight MSE on its features in my loss vs MSE loss on the raw pixels?
alstroemeria313#1694: should i just pick the constant that makes the filter bank normalized or something?
alstroemeria313#1694: which for a filter bank with four orientations of Gabor filter is pi, i think
alstroemeria313#1694: ...
alstroemeria313#1694: actually wait, Gabor filters already respond to signals w/ constant offset, just not as strongly as they do for edges
Millander#4736: Welcome! We have open tasks in #deleted-channel 🙂 https://eleutherai.notion.site/Semantic-Memorization-eeba3b27f82e43f4b636d742f2914d4f
alstroemeria313#1694: lol they don't have *very much* DC response do they
alstroemeria313#1694: so using just them in my loss is kinda bad
Raison#2632: Hi everyone! I hope it’s okay to post this here. My team is organising a symposium on evaluation and design of generalist AI systems, to be held as part of AAAI spring symposia in Burlingame CA, on March 27-29 (in person and remote). Premise of the discourse is that we need better frameworks to evaluate AI - both for improved performance on tasks like reasoning (visual and language-based), AND to provably ascertain compatibility with human values and cognitive processes. We will have a great lineup of keynotes, panels and original paper presentations, and are cordially inviting you to attend and participate in the dialogue! The event is organised to have plenty of opportunities for interaction, and will have more of a feel of a workshop than a conference. All discussions are off the record to allow for intellectual freedom.
More details here: https://www.cognitive-ai.org/edges-23
Registration here: https://aaaiconf.cventevents.com/event/f2166151-5af1-450e-81be-a5cd7c872cf7/summary
alstroemeria313#1694: ...what's a smooth approximation to leaky relu?
smy20011#8489: Maybe dumb question: Do you think we will get a better model by just adding parameters to the model? Or do you think we need ML structure improvements in order to make a breakthrough?
bmk#1476: maybe use a sigmoid to mix between the two linear components
bmk#1476: x sigmoid(x) + 0.1x (1 - sigmoid(x)) or whatever
Straw#2743: Hyperbola
alstroemeria313#1694: hmm https://cdn.discordapp.com/attachments/729741769738158194/1086377540731535421/Screenshot_2023-03-17_at_12.56.25_PM.png
|
alstroemeria313#1694: idk
Straw#2743: Or just soft plus + kx / 1 + k
bmk#1476: it's only slightly cursed
bmk#1476: and looks vaguely like swish + some additional component
alstroemeria313#1694: oh hey yeah that's leaky swish
alstroemeria313#1694: `x Sigmoid[x] + \[Alpha] x Sigmoid[-x]`
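in torch that's just (sketch):
```
import torch

def leaky_swish(x, alpha=0.1):
    # x*sigmoid(x) + alpha*x*sigmoid(-x); reduces to plain swish when alpha=0
    return x * torch.sigmoid(x) + alpha * x * torch.sigmoid(-x)

print(leaky_swish(torch.linspace(-5, 5, 5)))
```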
bmk#1476: :galaxy_brain:
StellaAthena#3530: Call it leakier relu
kurumuz#5695: lol
alstroemeria313#1694: Oh mathematica, why can't I nest Minimize calls
alstroemeria313#1694: apparently at alpha = 0.0907763 or higher the function becomes monotonic again
alstroemeria313#1694: around there anyway
alstroemeria313#1694: alpha = 0.09077627822686764 apparently
alstroemeria313#1694: that makes the derivative at the point where the derivative is minimized 0 to machine precision
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/1086382242131103845/Screenshot_2023-03-17_at_1.15.05_PM.png
Some Point Process#3793: yeah about to suggest similar, going off of <https://paperswithcode.com/paper/glu-variants-improve-transformer> (i.e. swish and gelu are multiplicative interactions or smth) https://cdn.discordapp.com/attachments/729741769738158194/1086390027086467153/image.png
alstroemeria313#1694: i was already using swish in the net i was going to try it on :)
alstroemeria313#1694: and i use geglu or swiglu in most of my transformers
Some Point Process#3793: yeah I've wondered how they would stack up https://cdn.discordapp.com/attachments/729741769738158194/1086392296012005396/tJI3j.png,https://cdn.discordapp.com/attachments/729741769738158194/1086392296238481579/Plots-of-the-new-Swish-activation-function-26-compared-to-other-activation-functions.png,https://cdn.discordapp.com/attachments/729741769738158194/1086392296511119380/Screen-Shot-2017-10-18-at-2.png
alstroemeria313#1694: i should evaluate it in one of my classifier test scripts
|
Some Point Process#3793: the first (blue) plot is gelu
The O#7760: is this readily available on cGPT?
alstroemeria313#1694: the one with four plots is wrong, the one labeled "swish" looks like some sort of softplus variant?
alstroemeria313#1694: but not softplus, because it's 1 at x=0 and softplus isn't
alstroemeria313#1694: anyway the net is training and leaky swish has sane second derivatives and is not breaking the thing i'm using it in which does an HVP with a metric that comes from the net.
alstroemeria313#1694: (which is why i could not just use leaky relu)
Dri0m#3828: for paid subscribers yeah
The O#7760: how do you do this, I couldn't make it write code for Hinton's forward-forward thing
Dri0m#3828: we dumped tons of information and it synthesized (what it thinks are) really good approaches for training losses for VFI
Dri0m#3828: and ideas how to make ESRGAN temporally stable lol
Dri0m#3828: i'm not skilled in pytorch or ML architecture at all but my mates said it all looks pretty good, they will try to implement them into their trainer framework
Dri0m#3828: @The O feel free to read through my convos with chatgpt about CAIN and ESRGAN https://cdn.discordapp.com/attachments/729741769738158194/1086401929464262797/ChatGPT_20230317T195828480Z_ResearchPaperRequest.md,https://cdn.discordapp.com/attachments/729741769738158194/1086401930013724782/ChatGPT_20230317T202630548Z_ESRGANresearchpaper.md
The O#7760: ok this makes sense
The O#7760: I just tried dumping everything and it complained about the paper being too long 😛
The O#7760: thanks!
Dri0m#3828: you don't neccessarily have to start by infodumping a research paper but it's a good primer
Dri0m#3828: the API with 32k context window will come in handy for that for sure
The O#7760: yep...
The O#7760: I'm working on a morphological parser for a language and it's scarily good at something I was planning to publish as a novelty lol
The O#7760: I guess I should've been faster 😛
|
Dri0m#3828: I saw some weird interesting ideas in it as well that i've never seen before
Dri0m#3828: `color harmony loss function`
Dri0m#3828: `Color Harmony Loss: To encourage the generated image to have a more visually pleasing and harmonious color scheme, you can create a loss function based on color harmony principles. One example is to use color theory, such as complementary or triadic color schemes, and penalize deviations from these schemes in the generated image.`
Dri0m#3828: there is the code for it in the convos
The O#7760: cGPT single-handedly made me take at least one year off my studies lol
The O#7760: I was planning to start a PhD right away
The O#7760: but all my assumptions are kinda challenged these days
Dri0m#3828: or `Certainly! Here's an example of how to implement texture loss using the Gray-Level Co-occurrence Matrix (GLCM) in PyTorch:`
The O#7760: this is really neat
Dri0m#3828: yes yes
Dri0m#3828: is all i can say
Dri0m#3828: as a software engineer with microscopic amount of knowledge about modern neural networks
Dri0m#3828: but hey i can implement a multi layer perceptron
Dri0m#3828: yay for me
The O#7760: well, I do ASR+NLU and I can train a perceptron with pen and paper
The O#7760: I guess that counts for something 😛
Dri0m#3828: GPT4 is significantly more intelligent than the default 3.5 model
The O#7760: I'm spoiled by the fast responses of 3.5 so I was holding off
Dri0m#3828: the responses are more dense with information and well, it speaks less like a robot and more like 150 IQ guy you're drinking a beer with at the pub who really knows his shit
Dri0m#3828: and it's less, how to say it
|
Dri0m#3828: less prescribed?
Dri0m#3828: actually productive at being creative?
Dri0m#3828: if you ask it for something novel, it sometimes actually spits out something novel
The O#7760: I wouldn't use it for social sciences or history etc.
The O#7760: but I actually quite like debugging with cGPT
Dri0m#3828: oh yeah i was just in the middle of producing some nontrivial code with 3.5 when 4 launched
The O#7760: it's like pair programming on steroids
Dri0m#3828: day and night difference
Dri0m#3828: as long as you don't need help with cutting edge stuff it's good
Dri0m#3828: but even then, you can paste a documentation about the code you need help with and it will help lol
The O#7760: I mean, you don't have to do cutting edge stuff to bring in six figures
Dri0m#3828: depends on when you live haha
The O#7760: let me put it this way
The O#7760: if you're not making six figures without cutting edge stuff, you won't with that either
The O#7760: and I'm not making that 😛
Dri0m#3828: doesn't mean that someone invented that yet
Dri0m#3828: it seems to be that good at times
Dri0m#3828: to invent novel approaches
Dan.#4017: question
Dan.#4017: Are there any endpoints or any sites where I can use gpt neox 20b
|
Dan.#4017: Dont think I can use it on my pc since its like 900 gb
StellaAthena#3530: 20b.eleuther.ai
alstroemeria313#1694: mm sometimes this thing gets higher val accuracy than swish and sometimes it doesn't
alstroemeria313#1694: regardless it seems to be like. around as good as swish and is monotonic
alstroemeria313#1694: so far
TheNickestNick#1024: Hi all - I've lurked here on and off for a while, but I'd like to start actively contributing to an EleutherAI project. Is there a list of active/ongoing efforts somewhere that are open to contributions? What would be a good place to start?
drdiffie#1162: This feels like a joke. The work of the OpenAI team is impressive, but I don't understand why they would go to any lengths to develop this library. The eval lib from EleutherAI has been active in production environments for almost a year and on top of that supports even more models...
The only good thing is the MIT license. Working on a fork rn that enables all the models we already support. https://cdn.discordapp.com/attachments/729741769738158194/1086437012615200849/IMG_8692.png
kaldor-hicks#8224: Hey Ben,
Thank you for posting that! I ended up joining this server last night because I ran into the same download issue a lot of other people were running into, but that link appears to be working for me.
Last night I asked ChatGPT about the size of datasets and number of parameters that were used for GPT-2, GPT-3, and ChatGPT to get an idea of what it took to produce each of those, and I was surprised to learn that the dataset for ChatGPT was over a thousand times smaller than what was used for GPT-3 (according to what ChatGPT told me).
Thanks to you I now have that data to experiment with! 🙌
StellaAthena#3530: If you scroll down on the channels list beyond the “discussion” section, the next four are primarily composed of project channels organized by topic area (NLP, Interpretability, Alignment, and Multimodal Models). A lot of these projects have room for more volunteers, though the details of the skills and experience they’re looking for varies massively.
Also you can click “channels and roles” to sign up to receive pings for calls for volunteers.
TheNickestNick#1024: Thanks! I'll look through the channels to try to get an idea of what's going on and who to reach out to for more specifics
|
alstroemeria313#1694: Where is the documentation for the GPT-4 API?
alstroemeria313#1694: like how do you use the visual input?
TheNickestNick#1024: I think they said that the API doesn't support image inputs yet
TheNickestNick#1024: https://community.openai.com/t/gpt-4-api-and-image-input/102852
alstroemeria313#1694: oh :/
alstroemeria313#1694: oh well
alstroemeria313#1694: i wanted to see how good it was at cleaning the captions of poorly captioned images
StellaAthena#3530: @alstroemeria313 special people get access to it
StellaAthena#3530: Which seems to mostly mean employees, sycophants, and friendly journalists
alstroemeria313#1694: Ah.
cognomen#6297: the discord bot sometimes gave a canned response about being a text only model even though it saw the picture
jrowe#5371: <https://github.com/ggerganov/llama.cpp/issues/91#issuecomment-1473271638> might be helpful for people working with small models
The_Alt_man#5718: isn't that just muP
Arthur_Embry#5364: Anyone have a 4 bit quantized version of alpaca?
jrowe#5371: no, still not technically legal to share that, and people definitely have not at all shared them to popular pirate torrenting sites, because they are behaving strictly within the lines of the meta license agreement
Arthur_Embry#5364: Lol
Arthur_Embry#5364: If you download from the torrent, I don't think you can be legally held to any sort of rules
jrowe#5371: cant use it commercially, and i honestly think meta wont care unless you make them look bad in public or try to commercialize
Arthur_Embry#5364: Sure
Arthur_Embry#5364: I just wanted a voice assistant for my pc
Arthur_Embry#5364: Didn't want to shell out tons to openai, and don't have a beefy gpu
Arthur_Embry#5364: Like, one I can tinker with
jrowe#5371: the llama.cpp seems to be developing an active hacking scene, it'd be cool if people came here and started using eleutherai models
Arthur_Embry#5364: That's a pretty good idea
Arthur_Embry#5364: I wouldn't really know where to start fine tuning it though
jrowe#5371: <https://bellard.org/ts_server/> can download some quantized models here
Arthur_Embry#5364: I'll start with that then
Arthur_Embry#5364: Let you know how it goes
jrowe#5371: instruction tuning gpt-j 6b will probably be the best approximation of alpaca 7b
kd90138#9368: Doesn't gptj have material differences in model arch tho
kd90138#9368: Does it have parallel attn ffn layers
jrowe#5371: dunno, maybe neox-pythia-6.9 might be better?
LunchInSpace#6973: Ran a big hyperparameter tuning job on GPT-J* to see if the number of trainable parameters (using LoRA) has a similar relationship to tokens as found by DeepMind's scaling paper for pretraining LLMs. Long story short- it looks like there *might* be but it only starts becoming noticeable for larger fine-tuning datasets (at ~10k sequences of max 2048 tokens it starts to look nice on a plot). (Score is perplexity on held-out data) https://cdn.discordapp.com/attachments/729741769738158194/1086492851342422046/image.png,https://cdn.discordapp.com/attachments/729741769738158194/1086492851589881966/Untitled.png
alstroemeria313#1694: love how he put "machine learning" in quotes but he's completely correct and i harp on this often and feel it's not well understood by many practitioners https://cdn.discordapp.com/attachments/729741769738158194/1086519464809463859/Screenshot_2023-03-17_at_10.19.34_PM.png
ILmao#5683: What would a mistaken assumption be? That the errors are likewise transformed when non-linear?
alstroemeria313#1694: people often think it's a good idea to try using L1/MAE loss for diffusion models but for you to learn the score of the noised data correctly your model output has to be the mean of a normal distribution and this happens by using MSE
alstroemeria313#1694: this is bc we use a Monte Carlo estimator of the score as the model target, whose expected value is the score
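(A minimal sketch of that point in JAX; `model(params, x_t, sigma)` is a hypothetical noise-prediction network, not anything from the conversation. The target `eps` is a single Monte Carlo sample, so only a squared-error loss makes the optimal output its conditional expectation; an L1 loss would fit the conditional median instead.)
```python
import jax
import jax.numpy as jnp

def diffusion_mse_loss(params, model, x0, sigma, key):
    eps = jax.random.normal(key, x0.shape)   # Monte Carlo sample of the noise
    x_t = x0 + sigma * eps                   # noised data point
    pred = model(params, x_t, sigma)         # model tries to predict eps
    # minimizing MSE drives pred toward E[eps | x_t], which is what the score needs;
    # minimizing |pred - eps| would give the conditional median instead
    return jnp.mean((pred - eps) ** 2)
```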
Some Point Process#3793: Yeah I guess, it can be shown somehow via laplace distribution having a similar form (abs. value of the deviation instead of the squared deviation) for the expression in the exponent?
Some Point Process#3793: https://cdn.discordapp.com/attachments/729741769738158194/1086541010110267484/210306e18c75c252ce85eb79c3af18bb5c8dd1a8.png
Some Point Process#3793: <https://en.wikipedia.org/wiki/Laplace_distribution>
Some Point Process#3793: interesting, didn't know that
alstroemeria313#1694: yep, an L1 loss is the negative log likelihood of a Laplace distribution so the thing you are fitting when you use one is the location parameter of a Laplace distribution
Solux#5809: You're definitely correct from a theoretical perspective, but from an engineering perspective, it just gives you better samples in some cases if you use L1
alstroemeria313#1694: i think it loses diversity though
Solux#5809: That might well be, but for strongly conditional tasks (which is where I've seen L1-trained models outperform L2 ones), that's not necessarily a problem
alstroemeria313#1694: Ah
Solux#5809: Things like superresolution e.g.
Piplup#6706: Taleb moment?
Piplup#6706: This is why statistical methods beyond mean/mode/median should be locked up behind measure theory
0x1111111#4026: concerning exact links in GPT:
it seems ChatGPT-3 (OpenAI API) is able to give exact, non-hallucinated links.
Question: how did they do it?
It is not part of the standard Transformer.
Are there references in the literature where something like this is discussed?
synquid#7193: it can remember lots of stuff exactly
synquid#7193: just today I made gpt4 generate an entire list of references for a subfield of ML and they were all real (from before 2021 ofc)
jamesc#4183: maybe they had a stage where, if the model spits out a link, they literally hit the link, check if the page exists. +ve reward if its real, -ve reward if faked. its seems easy to verify
jamesc#4183: or actually just make that process part of the reward model
0x1111111#4026: There are 2 problems with plain memorization:
1. the LM will only remember highly cited links, forgetting the low cited ones (e.g. links to some obscure pages)
2. it is unclear how to summarize a set of docs for a given query (i.e. "summarization with exact links")
NN_45#2752: is there an AI model/website for making diagrams, i.e. tables and graphs, from user language input?
alstroemeria313#1694: Hey has anyone ever tried probit GANs before?
main#7610: Just ask gpt to make mermaid shortcode
alstroemeria313#1694: Like if you write D loss as mean(f(d(reals))) + mean(f(-d(fakes))), and G loss as mean(f(d(fakes))).
alstroemeria313#1694: Then the normal GAN objective is with f(x) = -log(sigmoid(x)).
G(r)EEK#4286: what is currently the state of the art transformer implementation for inference? nvidia triton?
alstroemeria313#1694: What if you chose f(x) = -log(Gaussian cdf(x))?
alstroemeria313#1694: i.e. `-jax.scipy.stats.norm.logcdf(x)`
alstroemeria313#1694: actually i tried this already and it was kinda bad but a modified version worked
alstroemeria313#1694: in D loss, `return jnp.mean(-jax.scipy.stats.norm.logcdf(d_reals[:, None] - d_fakes[None, :]))`, in G loss, `return jnp.mean(-jax.scipy.stats.norm.logcdf(d_fakes[:, None] - d_reals[None, :]))`
alstroemeria313#1694: This came from the observation that sigmoid(x) is the CDF of a logistic distribution with loc 0 scale 1.
alstroemeria313#1694: So I substituted in the Gaussian CDF instead.
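(A sketch of the two loss functions quoted above, with the logcdf pulled out into a helper; `d_reals` and `d_fakes` are the critic outputs on a minibatch of reals and fakes.)
```python
import jax.numpy as jnp
import jax.scipy.stats

def f(x):
    # probit version: -log(Phi(x)) in place of the usual -log(sigmoid(x))
    return -jax.scipy.stats.norm.logcdf(x)

def d_loss(d_reals, d_fakes):
    # all pairwise comparisons; D wants every real scored above every fake
    return jnp.mean(f(d_reals[:, None] - d_fakes[None, :]))

def g_loss(d_reals, d_fakes):
    # symmetric objective for the generator
    return jnp.mean(f(d_fakes[:, None] - d_reals[None, :]))
```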
NN_45#2752: nice, thank you, the user is non-CS person but this is helpful
main#7610: yes, you can direct him to an online mermaid editor if it helps
main#7610: he can generate and paste it there
him#6491: If you assume OpenAI is willing to lie - they are, they are human and it is their nature - disregard claims on GPT-4 restrictions and assume further they have agency and agenda.
naclbbr#9203: Toolformer-style augmentations like that would actually be nice going forward.
https://arxiv.org/abs/2302.04761
Louis#0144: gooseformer when
naclbbr#9203: e.g. literally using calculator API when the model needs to work with numbers
alstroemeria313#1694: ```python
import jax
import jax.scipy.stats

def leaky_swish(x, alpha=0.09077627822686764):
    # self-gated swish plus a small linear leak through the gate's complement;
    # the default alpha is the smallest value that keeps the function monotonic
    gate = jax.nn.sigmoid(x)
    return x * gate + alpha * x * (1 - gate)

def leaky_gelu(x, alpha=0.11418519963359057):
    # same construction for gelu, with the Gaussian CDF as the gate
    gate = jax.scipy.stats.norm.cdf(x)
    return x * gate + alpha * x * (1 - gate)
``` btw
alstroemeria313#1694: leaky + monotonic versions of swish and gelu, the default alpha (leak factor) is the minimum value for alpha that makes the function monotonic
zphang#7252: anyone tried using gpt to use gpt
zphang#7252: we can arbitrarily nest GPT calls
naclbbr#9203: I would call that Deep-MoE and current most MoE implementations wide moe
naclbbr#9203: GPT calling another specialized model to save parameters and maybe that specialized model could call another one
naclbbr#9203: APIs could become agents. That would at least save it from having to retain edge case parameters after quantization which would not be used 99.99% of the times anyway
Maximum Limelihood Estimator#8915: THANK U
alstroemeria313#1694: :)
alstroemeria313#1694: 75500 training steps, experimental CIFAR-10 GAN https://cdn.discordapp.com/attachments/729741769738158194/1086730143407554581/demo-68.png
alstroemeria313#1694: It's only 270K params for G and the same size for D!
alstroemeria313#1694: OK what did I change to make it this good
kurumuz#5695: GAN uprising
kurumuz#5695: oh no
kurumuz#5695: how does it even work that good with 270k params
alstroemeria313#1694: it has to be either my new leaky gelu activation function or my new loss function
alstroemeria313#1694: those are the two big changes
Sphinx#2092: You should note that this tweet is basically https://cdn.discordapp.com/attachments/729741769738158194/1086732728327741590/Z.png
Maximum Limelihood Estimator#8915: How so
alstroemeria313#1694: so i can try reverting them to normal stuff, like normal gelu, or an RGAN where the model output is used to form logits instead of probits
Sphinx#2092: You can justify the use of mean square error by saying "well, if the residuals are normal, and we do MLE, we get the MSE loss." You can also run a similar argument for MAE. However, this is of course not necessary. There is a world beyond MLE, after all.
Sphinx#2092: You could instead think of, "Well if I want f(X) ~ Y, then maybe f(x) = E[Y | X = x]"
Sphinx#2092: In this case, this actually motivates the use of MSE loss, since the conditional mean E[Y | X = x] = argmin_c E[(Y - c)^2 | X = x]
Sphinx#2092: Note that here we assume nothing about Y or X, or anything really.
Maximum Limelihood Estimator#8915: Ahh, yes, that's one justification, but MLE isn't actually necessary
Sphinx#2092: And you can of course say "Well, mean is for losers, maybe we want some other statistics, like the median."
Sphinx#2092: and this would yield MAE as well.
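(In symbols, the two standard facts being referenced: the conditional mean minimizes squared error and the conditional median minimizes absolute error, with no distributional assumptions.)
```latex
\operatorname*{arg\,min}_{c}\; \mathbb{E}\left[(Y-c)^{2}\mid X=x\right] = \mathbb{E}[Y\mid X=x],
\qquad
\operatorname*{arg\,min}_{c}\; \mathbb{E}\left[\lvert Y-c\rvert \mid X=x\right] = \operatorname{median}(Y\mid X=x).
```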
Maximum Limelihood Estimator#8915: Right
Sphinx#2092: Mind you, I think the MLE justification is super neat and definitely worth teaching it this way, but pushing too hard like that tweet can sometimes go far the other way
alstroemeria313#1694: oh, of course the other big change is that it's a relativistic GAN (I had to add this to get the probit outputs to work well)
alstroemeria313#1694: let's change back to logit
Maximum Limelihood Estimator#8915: So, I think you’re partly correct, but it’s not just about the MLE! it’s that (from a Bayesian perspective) the implicit assumption you’re making when you choose MSE is that the data are normally distributed. Otherwise, MSE isn’t a sufficient statistic, and your model doesn’t update correctly
Sphinx#2092: But you only care about sufficient statistics because you are implicitly assuming that we care about maximizing likelihood.
Maximum Limelihood Estimator#8915: No, I care about sufficient statistics because I care about doing Bayesian updating to get a correct posterior distribution
Sphinx#2092: Yeah? When was the last time you ran MCMC for deep learning?
Sphinx#2092: Or computed any posterior that wasn't just analytically tractable?
Sphinx#2092: I mean, I'm all for idealism, but let's not pretend. In reality, most people are just computing point estimate, and so people default to MLE.
Sphinx#2092: Which is fine, but there are other things you might want to do, depending on the use-case.
Maximum Limelihood Estimator#8915: Ok *that* one I do all the time
Maximum Limelihood Estimator#8915: (But I’m not usually working on deep learning TBF)
alstroemeria313#1694: i do this actually :blobcutehappy:
alstroemeria313#1694: sometimes
alstroemeria313#1694: hmm most of the difference may be coming from leaky gelu tbh
AI_WAIFU#2844: this is not actually that uncommon for me
nostalgiahurts#3408: I remember reading in https://github.com/soumith/ganhacks#5-avoid-sparse-gradients-relu-maxpool that
> Avoid Sparse Gradients: ReLU
> the stability of the GAN game suffers if you have sparse gradients
> LeakyReLU = good (in both G and D)
so maybe non-leaky GELU/swish lead to sparse gradients? but I don't know if this was ever formally studied
alstroemeria313#1694: they aren't sparse but they can vanish if your activation is really far negative
alstroemeria313#1694: the worse thing about gelu/swish, in G, is that they are non-monotonic
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/1086749747420856472/Screenshot_2023-03-18_at_1.35.24_PM.png
alstroemeria313#1694: this is with the minimum leak factor to make it monotonic, which is 0.114185
alstroemeria313#1694: i can of course just pick a bigger leak factor like 0.2
nostalgiahurts#3408: hmm. I guess my question then is why non-monotonicity hurts G. just optimization difficulties?
does seem to be true in practice, though. I think all the GANs I've seen use monotonic activation functions
alstroemeria313#1694: it seems to lead to weird blobby artifacts and then it has to try and make the images and textures out of the artifacts
alstroemeria313#1694: "why not just use leaky relu" i wanted something leaky that also had sensible second derivatives
Solux#5809: You are doing this just to train a time-continuous consistency model with it later, aren't you? 😅
bmk#1476: leakiness being good for GANs seems well known but not sure why nonmonotonicty would hurt esp when it helps in LMs
alstroemeria313#1694: actually i just kind of don't like relu ^^;;
alstroemeria313#1694: and i *do* use second derivatives to train D because of gradient penalty
bmk#1476: ~~you can tell whether someone is a scientist or an engineer based on how they feel about relu~~
alstroemeria313#1694: so should i go for a higher leak factor?
alstroemeria313#1694: i should probably explain the probit thing. it seems to help a little
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/1086755660617752627/Screenshot_2023-03-18_at_1.58.56_PM.png
alstroemeria313#1694: ...except i'm getting really tired, so
alstroemeria313#1694: 650K params or so? 75500 training steps, same setup https://cdn.discordapp.com/attachments/729741769738158194/1086757559425318985/demo-71.png
kd90138#9368: Have u tried mish
Maximum Limelihood Estimator#8915: Did you mean: “diet GeLU”
Maximum Limelihood Estimator#8915: I’m not surprised at all by nonmonotonicity being bad, I’m surprised by it ever being good
bmk#1476: conditional on it ever having been good, I am confused by it being bad again
alstroemeria313#1694: i think it just leads to weird visual artifacts when used in G
alstroemeria313#1694: a convnet G
alstroemeria313#1694: transformers fine
alstroemeria313#1694: 45k steps... hmm... https://cdn.discordapp.com/attachments/729741769738158194/1086820918363435130/demo-72.png
alstroemeria313#1694: also wow group normalization is *slow*
alstroemeria313#1694: but i'm tired and really don't feel like threading the state through my loss functions to make batchnorm work.
Fessus#9563: Yeah, I'm surprised how rarely people bring that up. I tried swapping BN for groupnorm for something I was doing and that alone halved the training speed
alstroemeria313#1694: also i tried putting bn in just now and it totally broke training
Fessus#9563: Yeah, the downside of BN is that sometimes it just totally fails, minor issue :thinkies:
Fessus#9563: At least when I've been messing with architectures where both makes sense, the thing which usually breaks BN is attention
CarsonPoole#0640: Is it slow because the operation is fundamentally more intensive than BN/LN or does it just need a dedicated kernel
Fessus#9563: For wide layers it requires significantly more computations (and memory) than batchnorm
alstroemeria313#1694: this GAN is learning absurdly fast compared to my previous GANs
alstroemeria313#1694: i hit upon an interesting architecture choice by accident
alstroemeria313#1694: ```python
import flax.linen as nn

class ConvBlock(nn.Module):
    features: int
    act: callable = nn.relu

    @nn.compact
    def __call__(self, x):
        # normalize, then a conv + activation on the main stream
        x = nn.GroupNorm(self.features // 8)(x)
        x = nn.Conv(self.features, (3, 3))(x)
        x = self.act(x)
        # two more convs with a residual connection around them
        x_skip = x
        x = nn.Conv(self.features, (3, 3))(x)
        x = self.act(x)
        x = nn.Conv(self.features, (3, 3))(x)
        x = x + x_skip
        return x
```
alstroemeria313#1694: it REALLY likes having activations inside the main stream, not just inside residual blocks
alstroemeria313#1694: here we are at only 25500 steps https://cdn.discordapp.com/attachments/729741769738158194/1086838484997779506/demo-74.png
kurumuz#5695: wtf.
alstroemeria313#1694: with a ~750K param G and D
alstroemeria313#1694: (it is actually using leaky gelu as the activation, not the default of relu)
alstroemeria313#1694: training seems stable and would probably just keep going forever if i let it
alstroemeria313#1694: but i should actually try that instead of just asserting it :blobcutehappy:
alstroemeria313#1694: i am also using fairly heavy weight decay (i should look into the modified optimizer i'm using and find out what it actually is)
alstroemeria313#1694: at least i think it is fairly heavy
alstroemeria313#1694: https://arxiv.org/abs/1807.00734
alstroemeria313#1694: ^-- i am using this but actually just doing all the pairwise comparisons, they say it's expensive but it's actually cheap compared to the actual model training
alstroemeria313#1694: 50k steps https://cdn.discordapp.com/attachments/729741769738158194/1086843477951782962/demo-75.png
pbaylies#1820: I always liked ELU, personally
alstroemeria313#1694: look at that derivative though... this means the second derivative is discontinuous :/ https://cdn.discordapp.com/attachments/729741769738158194/1086844130157658272/Screenshot_2023-03-18_at_7.50.15_PM.png
KublaiKhan1#6681: Does that actually matter for a GAN?
alstroemeria313#1694: probably not
KublaiKhan1#6681: I mean maybe there's some use for a second order optimizer or something?
pbaylies#1820: Look at that blue curve tho, nice, smooth, increasing...
alstroemeria313#1694: but it does for other stuff i do so i came up with leaky gelu
pbaylies#1820: sin() is also pretty interesting 🙃
alstroemeria313#1694: ehehe
bmk#1476: looks like crappy softplus
alstroemeria313#1694: with an offset yeah
bmk#1476: https://arxiv.org/abs/2006.09661 :ultraberk:
pbaylies#1820: Yup that's the one
bmk#1476: what if
offsetted softplus
alstroemeria313#1694: yep
alstroemeria313#1694: just subtract softplus(0)
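(i.e. something like this minimal sketch, using the framework softplus for numerical stability; softplus(0) = log 2.)
```python
import jax
import jax.numpy as jnp

def shifted_softplus(x):
    # softplus with its value at zero subtracted, so the function passes through the origin
    return jax.nn.softplus(x) - jnp.log(2.0)
```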
pbaylies#1820: what if... we add a learned parameter for the offset because sure why not
bmk#1476: while we're at it why don't we just learn the entire activation function
pbaylies#1820: ok but at least tie them all / use the same one
bmk#1476: this would be ridiculously slow but I want to do this just for the meme
bmk#1476: ofc
bmk#1476: and I don't just mean activation function search
bmk#1476: but like actually learning it with a smaller neural network that you pass gradients to
alstroemeria313#1694: it would be its own little MLP?
bmk#1476: yeah
pbaylies#1820: you could plug in a truncated taylor or fourier series and just learn n parameters
bmk#1476: but then I can't call it MLPception
alstroemeria313#1694: i am unsure how much interpreting the difference of two critic outputs as a probit rather than a logit is helping, i think it does help at least some
pbaylies#1820: So a frequentist and a bayesian commit a crime together, but the frequentist is released; why?
StellaAthena#3530: Because the cops could only prove that 95% of the time in similar situations a frequentist would be involved
pbaylies#1820: Because he didn't have any priors.
alstroemeria313#1694: lol
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/1086848336281215086/Screenshot_2023-03-18_at_8.07.12_PM.png
alstroemeria313#1694: and i am comparing all x_r in the minibatch pairwise to all x_f
alstroemeria313#1694: and f_1(x) is -log(normal_cdf(x))
alstroemeria313#1694: instead of -log(sigmoid(x))
kd90138#9368: This explanation makes for better comedic prose but I like Stella's explanation better
alstroemeria313#1694: my intuition for why to use probit (negative log of normal cdf as activation) is that its gradient doesn't saturate and as a result outlier really fake looking images in the minibatch get weighted higher
alstroemeria313#1694: it's roughly a one-sided quadratic loss
artem9k#7593: this is drenched relu?
artem9k#7593: hmm, no. needs a better name
alstroemeria313#1694: that's elu
artem9k#7593: not my greatest joke 😅
pbaylies#1820: yeah sigmoid() can be quite extreme
alstroemeria313#1694: i am also using a zero-centered gradient penalty of weight 2 on the reals (like SG2)
pbaylies#1820: It's looking good on CIFAR-10 for sure
bmk#1476: ML researchers fear him https://cdn.discordapp.com/attachments/729741769738158194/1086856304410243102/Screenshot_20230318_203755_Chrome.jpg
alstroemeria313#1694: ahah
pbaylies#1820: Hey leaky relu, what's your angle?
Fessus#9563: Just replace all activation functions with .abs() and be done with it
artem9k#7593: that looks more like dry relu haha
bmk#1476: relu6 is the most galaxy brain activation function
Maximum Limelihood Estimator#8915: I want a square-root activation function so that the gradients can overflow and vanish at the same time
Maximum Limelihood Estimator#8915: How long until we just loop back into sigmoids
bmk#1476: sigmoid6(x) = 6sigmoid(x/6)
ILmao#5683: But only by a constant factor, right?
ILmao#5683: Existing GN kernels are definitely not particularly well optimized compared to BN ones
ILmao#5683: Ah no, it includes the batch dim as well
Skewbed#1836: What is relu6? Is it like (relu(x))^6. I think I saw something about a squared relu once
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/1087052487715999784/Screenshot_2023-03-19_at_9.38.23_AM.png
alstroemeria313#1694: got it
alstroemeria313#1694: leaky softplus
alstroemeria313#1694: Cleaner version https://cdn.discordapp.com/attachments/729741769738158194/1087053120963629157/Screenshot_2023-03-19_at_9.40.54_AM.png
alstroemeria313#1694: Because in reality you would use the framework builtin softplus to evaluate that part because it is more numerically stable than just naively doing log(exp(x) + 1)
alstroemeria313#1694: relu6 is np.clip(x, 0, 6)
Skewbed#1836: Got it
Skewbed#1836: Do you have an intuition for why a leaky function might improve models?
bmk#1476: moar grads
alstroemeria313#1694: for relu it prevents the dead neurons problem
alstroemeria313#1694: and GANs seem to like leaky activations independently of this
bmk#1476: do we know why GANs benefit more from it
alstroemeria313#1694: i don't
bmk#1476: because I also have this intuition from poking around with GANs back in the day but have no idea why nothing else uses leaky acts
bmk#1476: time to train a transformer with leaky gelu?
alstroemeria313#1694: leaky activations are usually worse for normal models
Skewbed#1836: Makes sense
alstroemeria313#1694: but i have not tried leaky gelu on any normal model but a convnet classifier yet. it actually had pretty good validation accuracy, well above the non-self-gated activation runs still but a little under plain gelu
bob80333#4040: Have you tried the snake activation? I remember seeing an audio gan using it at some point
alstroemeria313#1694: no, what is it?
bob80333#4040: f(x) = x + 1/alpha * sin^2(alpha*x)
bob80333#4040: In the BigVGAN paper they make alpha a channel-wise learnable parameter
bob80333#4040: https://cdn.discordapp.com/attachments/729741769738158194/1087056916355752156/Screenshot_20230319-125311.png
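(A minimal sketch of that formula; per the BigVGAN paper, alpha would be a learnable per-channel parameter rather than the fixed scalar assumed here.)
```python
import jax.numpy as jnp

def snake(x, alpha=1.0):
    # x + (1/alpha) * sin^2(alpha * x): periodic bumps added on top of the identity
    return x + (1.0 / alpha) * jnp.sin(alpha * x) ** 2
```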
alstroemeria313#1694: oh, periodic + monotonic?
alstroemeria313#1694: i see
alstroemeria313#1694: hmmmm i am switching this GAN over to use gradient penalty on random interpolations of the fakes and reals rather than on the reals
alstroemeria313#1694: GP on reals is not constraining enough imo
alstroemeria313#1694: It can just do anything near the fakes
alstroemeria313#1694: ...Wait is that one of the stability problems with StyleGAN? They do GP on reals probably for speed
alstroemeria313#1694: i wish MMDGAN worked better
alstroemeria313#1694: it gives you a builtin minibatch discriminator without the need to actually make D mix information between batch items
alstroemeria313#1694: it was too hard to make it conditional though.
alstroemeria313#1694: ...this made it worse
alstroemeria313#1694: @bob80333 how do they stop the learnable alpha from becoming zero?
alstroemeria313#1694: or do they just special case it so they don't divide by zero
ari#9020: https://github.com/NVIDIA/BigVGAN/blob/main/activations.py#L46
alstroemeria313#1694: oh they had the "parameterize it as log alpha" idea, is this better or worse?
Maximum Limelihood Estimator#8915: hwat
Maximum Limelihood Estimator#8915: ML is cursed
alstroemeria313#1694: ehehe.
alstroemeria313#1694: possibly bad minibatch discriminator idea
alstroemeria313#1694: Once you've reduced your D inputs down to feature vectors at the end of the conv stack, do single head scaled dot product self-attention along the batch axis to mix information about the features within the batch.
alstroemeria313#1694: Where q = k = v, no learned projections
alstroemeria313#1694: Either use that directly as your MLP input or concat it with the original features so the MLP can easily see how much each feature vector was changed
alstroemeria313#1694: maybe layernorm first
alstroemeria313#1694: (Yes, this is nearly the same as one of the Hopfield layers in CLOOB)
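(A rough sketch of the idea as described, assuming `feats` is a [batch, dim] array of per-example feature vectors; nothing here is from an actual implementation.)
```python
import jax
import jax.numpy as jnp

def minibatch_self_attention(feats, eps=1e-5):
    # optional layernorm over the feature axis
    x = (feats - feats.mean(-1, keepdims=True)) / jnp.sqrt(feats.var(-1, keepdims=True) + eps)
    # single-head scaled dot-product attention along the batch axis, q = k = v, no projections
    attn = jax.nn.softmax(x @ x.T / jnp.sqrt(x.shape[-1]), axis=-1)
    mixed = attn @ x
    # concat with the original features so the MLP can see how much each vector was changed
    return jnp.concatenate([feats, mixed], axis=-1)
```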
alstroemeria313#1694: thank goodness for jax pjit, which makes it really easy to try ideas like this while still training on a bunch of GPUs
kurumuz#5695: pytorch parallelism :berk:
kurumuz#5695: maybe Dtensor makes things better…
alstroemeria313#1694: wait wait
alstroemeria313#1694: does batchnorm in pytorch actually operate microbatch wise? lol
alstroemeria313#1694: like it normalizes per device and then you sync statistics at the end of the forward pass
Avi#0447: so do y'all ever get used to it, or is every conference deadline preceded by frantic writing and nervous edits?
or do I just have too much caffeine in my system and that's why I *feel* like I might vibrate off of my chair
(I've been out of academia for a while and my experience was not very representative, so I'm curious if it ever gets better lol)
wissotsky#8554: All of humanity pushes things up to the deadlines
nauti#4836: I believe there's always more stuff to do, so even if you get everything you planned done by the deadline, you'll find you have time to add more stuff to it
Avi#0447: that is true, and that is an unfortunate trap
alstroemeria313#1694: i can't get snape to work
alstroemeria313#1694: at least i haven't yet
alstroemeria313#1694: (I typoed it as "snape()" multiple times when writing the code to use it)
alstroemeria313#1694: Do you use it in D+G or just G?
alstroemeria313#1694: Because it seems to have a derivative whose maximum value is 2
alstroemeria313#1694: So it is Lipschitz-2, not 1
alstroemeria313#1694: And you really want 1 for D
alstroemeria313#1694: I could solve this by just dividing its output by 2 when used in D but I'm not sure that's a great idea unless D is a resnet
alstroemeria313#1694: ok, gelu/leaky gelu/etc also have a Lipschitz constant over 1 but they're not as huge as 2
StellaAthena#3530: @Avi on the bright side: the more you write the more you get rejected, and the higher the proportion of your submissions are resubmissions! Those tend to be easier
alstroemeria313#1694: i don't think it works so well, none of my attention minibatch discriminator ideas have worked as well as just appending the feature-wise stds to D's MLP input or sticking them in an extra token for a transformer D
alstroemeria313#1694: i guess feature-wise stds in D give G nice informative per-example gradients or some such
alstroemeria313#1694: maybe i should try log variances or some such (with a small epsilon added to the variance first so if one of the channels collapses i won't be feeding a huge negative number into D's MLP)
alstroemeria313#1694: starting to suspect the probit nonlinearity is bad if D can easily become really strong, trying logit again instead
alstroemeria313#1694: Oh wait is the reason SG uses virtual minibatches so D can't see one really obvious fake and rate *the entire minibatch* as super fake?
alstroemeria313#1694: uhh, virtual minibatches is their name for when you divide the minibatch into groups and compute feature-wise stats for each group and broadcast the stats to everything in that group
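(Roughly, as a sketch; the exact grouping and statistics in StyleGAN differ in detail, and the batch is assumed divisible by the group size.)
```python
import jax.numpy as jnp

def virtual_minibatch_stds(feats, group_size=4):
    b, d = feats.shape
    groups = feats.reshape(-1, group_size, d)          # [num_groups, group_size, dim]
    stds = groups.std(axis=1, keepdims=True)           # feature-wise std within each group
    stds = jnp.broadcast_to(stds, groups.shape).reshape(b, d)
    # append the broadcast stats so D sees within-group diversity rather than single outliers
    return jnp.concatenate([feats, stds], axis=-1)
```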
nostalgiahurts#3408: the bigvgan paper doesn't use snake in the discriminator https://cdn.discordapp.com/attachments/729741769738158194/1087142103198482503/no_snake_in_bigvgan_disc.png
alstroemeria313#1694: ah yeah.
lunarflu#6769: I think it's probably built into our animal biology, there is a strong incentive to focus on doing what needs to be done *now*. Even though we invented the future, it's still a pretty new tool. If you're not hungry now, why cook food? If the project isn't due, why work on it?
lunarflu#6769: I wonder if there's any tools that can predict stuff like this and plan around it, have you heard of any?
magenta#8040: https://vxtwitter.com/guillaumelample/status/1637741752114200577
Maybe someone is able to explain it to him? (I can’t because I don’t know) He’s one of the llama authors.
Hyperion#0575: https://twitter.com/arankomatsuzaki/status/1637758140820213761 perfect explanation
zukaboo#8804: "These opensource guys".
zukaboo#8804: By the way, there's a question I wanted to ask long ago. What is the significance of geese in the context of AI?
zukaboo#8804: Explain me like I'm a boomer.
Piplup#6706: The goose is for stealing sausages from the grill
Piplup#6706: ~~wait, this isn't #off-topic~~
zukaboo#8804: I think, this is the right place to ask serious questions about AI memes.
zukaboo#8804: In #off-topic I would get memes as a response.
StellaAthena#3530: There isn’t one. It’s just something people got obsessed with.
zukaboo#8804: For no reason at all? doubt.jpg
lunarflu#6769: gpt-4 would say there isn't a connection. But then, do you *really* trust gpt-4?
Louis#0144: Honk!
AI_WAIFU#2844: So I did a napkin calculation of the limits of scaling. TSMC currently seems to produce about 1-10 million 12 inch wafers per year depending on the node. An H100 takes up 814mm^2. So factoring in packing inefficiency and yield, that's maybe 50-70 H100s per wafer. Which means that if we dedicate the entirety of TSMC's production capacity towards GPUs, that would amount to maybe 50M-700M H100s per year. This puts a pretty hard upper bound on scaling. Given that the largest supercomputers probably have somewhere on the order of 10^4-10^5 GPUs, that suggests a hard upper bound on compute scaling of 1000x and more realistically 100x, because we still need chips for a whole bunch of other applications. With chinchilla scaling laws, that would mean models maybe only 10x as big as the biggest models today.
AI_WAIFU#2844: After that you gotta wait on growth in supply chain volume production and any improvements due to Moore's law
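(A quick reproduction of the napkin math above; the 0.6-0.8 combined packing/yield factor is an assumption chosen to land in the stated 50-70 dies per wafer.)
```python
import math

WAFER_DIAMETER_MM = 300                      # a "12 inch" wafer
DIE_AREA_MM2 = 814                           # H100 die area
wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
gross_dies = wafer_area / DIE_AREA_MM2       # ~86 dies, ignoring edge and yield losses
for yield_factor, wafers_per_year in [(0.6, 1e6), (0.8, 10e6)]:
    dies_per_year = gross_dies * yield_factor * wafers_per_year
    print(f"{dies_per_year / 1e6:.0f}M H100s/year")   # roughly 50M to 700M
```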
Hyperion#0575: In theory if you use enough GPUs for a run then you probably lose 1 GPU to some sort of hardware/software failure every gradient step. The number is probably really big though
AI_WAIFU#2844: Yeah, but that seems like the kind of thing you can engineer your way around
AI_WAIFU#2844: Although you will need a steady supply of replacement parts and a crew working 24/7 to feed the beast
Hyperion#0575: Sort of yeah
But I think there's a chance that datacenter design and power considerations impose some sort of limit which is practically impossible to engineer around, and that this limit is likely lower than the limit you calculate from TSMC production
Dashiell#8739: Can only TSMC make the silicon for an H100? Samsung et al are just too far behind?
AI_WAIFU#2844: Well it wouldn't be an H100 then. I assume the chip design is a function of the silicon, but I suppose if you were sufficiently mad you could get different but similar accelerators from different fabs working together.
Dashiell#8739: I was more asking--I really don't know--whether H100s can only be made on the 3nm nodes that only TSMC has
Dashiell#8739: Or whatever the bleeding edge is right now. 2nm?
AI_WAIFU#2844: 100M H100s dissipate maybe 100-150GW, which is...not great. That's 5x the output of the Three Gorges Dam, which is a shitload, but not impossible to procure. Assuming you built it as a floating solar installation in the pacific (good luck getting regulatory approval anywhere else lmao), then peak solar irradiance is 380W/m^2, so with 20% efficiency the overall installation would have to be...a square 40km on each side of just solar panels. On the cooling side, since you built it in the ocean, and say you tolerate increasing outflow temperatures by 20 C, that gives...about 1800 metric tonnes of water per second.
AI_WAIFU#2844: Not great, not terrible
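(The same numbers spelled out; 100 GW corresponds to roughly 1 kW per H100, and all constants are the ones quoted above.)
```python
POWER_W = 100e9                                   # ~100 GW for ~100M H100s
IRRADIANCE_W_M2 = 380                             # solar irradiance figure used above
EFFICIENCY = 0.20
area_m2 = POWER_W / (IRRADIANCE_W_M2 * EFFICIENCY)
side_km = area_m2 ** 0.5 / 1000                   # ~36 km, i.e. roughly a 40 km square
WATER_HEAT_CAPACITY = 4186                        # J/(kg*K)
DELTA_T = 20                                      # allowed outflow warming, K
water_kg_per_s = POWER_W / (WATER_HEAT_CAPACITY * DELTA_T)   # ~1.2e6 kg/s; ~1800 t/s at 150 GW
```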
synquid#7193: start building nuclear reactors rn
Hyperion#0575: Ok, at that point why not just build it in space
Unlimited solar and you can dissipate heat into the vacuum
AI_WAIFU#2844: how do you plan to dissipate 100GW of power in *space*?
lunarflu#6769: I'm not too familiar with the technical details, but in a way it's reassuring knowing both that there is still room for growth through scalability as well as time for innovation and optimization of LLMs
AI_WAIFU#2844: The cool thing about the floating solar installation is that it doesn't on net actually do anything, the sun's energy would still have been dumped into the ocean, and you power and cool it with solar panels and pumps, which are very simple and well understood.
lunarflu#6769: A nearby cap on scalability probably also helps open source compete with tech companies, right? Limitation on pouring endless resources into training etc
uwu1#4864: i don't think there's any true limitations on multi-datacenter training TBH
lunarflu#6769: Do you mean something like borrowing a lot of datacenters at once to train really big models?
uwu1#4864: training algorithms that allow scaling out beyond single datacenter
uwu1#4864: Also that's just one TSMC. You could 10x or 100x TSMCs, especially if you can fold in cumulative improvements that AI itself may give to fabbing
AI_WAIFU#2844: Aside from the chip fab limitations I highlighted, you are limited by the surface power/cooling density that you can get approved, same goes for the communications infrastructure.
lunarflu#6769: Fair, any predictions involving AI are probably going to be unreliable even in a few years
lunarflu#6769: space will cool it down :1000IQ:
AI_WAIFU#2844: you can actually do the math on this, it's not pretty
AI_WAIFU#2844: Although maybe you can use the back side of your solar panels
AI_WAIFU#2844: that might work
uwu1#4864: build it out of vaccum tubes
uwu1#4864: it's already a vacuum and they like being hot
AI_WAIFU#2844: but you would be looking at erecting a structure in space 10s of kms in diameter
AI_WAIFU#2844: That is beyond humanity's potential
AI_WAIFU#2844: at least for the moment
Dashiell#8739: Space will not cool it down! That's not how space works. There's nothing to pull heat away and your structure can only cool via radiation
anotherone#9475: What's the best estimate right now for limits to data scale?
AI_WAIFU#2844: the computer itself would not actually be that big, just a cube 500 gpus in dimension
lunarflu#6769: That could fit in one room basically, right?
AI_WAIFU#2844: Yeah a big one
lunarflu#6769: Is there anything like this currently?
uwu1#4864: closest is probably together's one but the various async distributed optimization stuff and older oai/google research into gradient compression can also play a role
AI_WAIFU#2844: I'm actually feeling pretty good about this, I wouldn't expect a system of that scale for at least 10 years, and a system 10th it's size for at least 5
Dashiell#8739: any sooner would require a _determined_ nation state. And I can't speak for China, but that's just not where the US is politically
Dashiell#8739: not that those are the only two countries, but as you go down to smaller countries it only becomes a bigger proportion of their GDP and they have to be that much more determined
AI_WAIFU#2844: pretty sure china straight up can't do it, they don't have the fab capacity or the expertise at sufficient scale
AI_WAIFU#2844: Doubly so with these sanctions
synquid#7193: EU could do it
StellaAthena#3530: They couldn’t do anything with it
AI_WAIFU#2844: lmao
baidicoot#9673: average atomic rockets post
StellaAthena#3530: I guess aleph alpha has figured their shit out, so at least there's now a model bigger than 6B params that's actually been trained in Europe
AI_WAIFU#2844: this project would probably cost about as much as ITER and I would expect the EU to take about as much time doing it
StellaAthena#3530: No idea how good their stuff is though
synquid#7193: reminder about the rumors that TSMC will build a fab in Germany
anotherone#9475: under your model, compute maxxing & bogstandard research improvements *might* already be enough for video tho
anotherone#9475: (Not sure at all)
AI_WAIFU#2844: I think hardware design is pretty tightly coupled to the process node.
Dashiell#8739: let me put the question another way, what's the most recent / most powerful Nvidia GPU that could be made in a Samsung fab?
StellaAthena#3530: That’s probably non-public info tbh
synquid#7193: what's "could"?
synquid#7193: it would have to be redesigned for the process probably
AI_WAIFU#2844: take an H100 and add 10-30% improvement, H100 uses N4 and bleeding edge is IIRC N3
AI_WAIFU#2844: It's getting notably harder to squeeze more out of these machines.
Dashiell#8739: the GPUs or the process nodes?
CarsonPoole#0640: I know very little about semiconductor manufacturing but it seems like considering GPUs are good at parallel computing that they should be able to get better performance despite difficulties in shrinking transistors just by the fact that they can add more transistors. Obviously the downside is that power usage would increase but if you're just looking for better perf that seems like it would be the way to go
AI_WAIFU#2844: so that's the thing, transistors have kinda stopped shrinking, most recent improvements now take the form of efficiency improvements from better geometries, better materials, and more consistent/precise manufacturing.
StellaAthena#3530: One reason for this is that there’s a minimum distance you can have between two conductors before you have only one conductor
StellaAthena#3530: Which I find hilarious
AI_WAIFU#2844: yep, electrons have a wavelength, and quantum tunneling becomes significant at current transistor sizes
CarsonPoole#0640: "these things are made out of atoms" - Gordon Moore
Pierre Bartet#8359: I don't know what the limitation is to building superconducting computers so that we can stack many layers without having to dissipate too much; there was some initiative toward this at some point
CarsonPoole#0640: there would be a lot of perf improvements I suppose if nvidia really increased the amount of SRAM they have
StellaAthena#3530: That sounds like something that would melt in the limit
AI_WAIFU#2844: that certainly helps, but superconducting logic takes up far more space than transistors, at least currently
StellaAthena#3530: I recall hearing about some work on “super insulators” which would allow you to have something like two atoms of insulators between conductors and still keep them apart. Dunno if anything came of that though.
CarsonPoole#0640: like if all the DRAM they have became as fast as SRAM that would be a big improvement
AI_WAIFU#2844: I think we already have single-atom layers in modern chips
StellaAthena#3530: Ah, so it worked
StellaAthena#3530: And now we need to genuinely alternate between conductors and insulators on the atomic level
StellaAthena#3530: That’s pretty awesome
AI_WAIFU#2844: well, GPUs especially are pretty efficient about this, you only access DRAM like once and then operate on data 100-1000 times in SRAM
CarsonPoole#0640: yes but having a lot more SRAM would enable faster large matmuls
CarsonPoole#0640: the actual kernels you can write are in some cases limited by the largest tile size you can have which depends on the amount of SRAM
CarsonPoole#0640: nvidia knows this though as the A100 had a big increase in SRAM and then they increased a lot again with the H100
AI_WAIFU#2844: right, we can probably keep scaling SRAM by stacking it, but SRAM density per unit area has stopped going up
StellaAthena#3530: Yeah I was about to say this
StellaAthena#3530: Also caching and tiling plays a huge role
AI_WAIFU#2844: I think AMD might already be doing this
synquid#7193: they do have that 128 gb GPU
synquid#7193: not sure if thats why
CarsonPoole#0640: most people don't talk about this but the H100 has a lot of other optimizations that they're taking advantage of to get so much higher FLOPs. Things like different/better async data loading and tensor core operations
CarsonPoole#0640: it's not just smaller/more transistors
Pierre Bartet#8359: Probably, and I don't know whether keeping the gate non-superconducting would work; it is hard to find literature about this since most of the effort is toward quantum computing
AI_WAIFU#2844: https://www.nature.com/articles/s41598-019-46595-w
AI_WAIFU#2844: Fundamentally though, without more fancy tech like the stuff I just linked, you run into limits on the switching efficiency of transitors.
AI_WAIFU#2844: But I could certainly see a future with Reversible/optical processors + stacked sram + dram
AI_WAIFU#2844: bit of a ways away tho
StellaAthena#3530: The discretization stuff is also a godsend
CarsonPoole#0640: yeah the interesting stuff to me about discretization implies that there is a smaller model that should be able to get the same performance. Like if I can compress a model by 2x and get the same performance then there should be some architecture that could be 2x smaller with the same perf
StellaAthena#3530: No it doesn’t actually.
StellaAthena#3530: Parameter precision “goes to infinity” in a fundamentally different way from the model architecture
StellaAthena#3530: Another example of this phenomenon in action is the fact that for any finite set of graphs, G, the space of NNs computed (with any real parameters!) on those graphs is not dense in L^2(R)
AI_WAIFU#2844: Both, i suppose, we're running out of low hanging fruit in general
chilli#5665: well... depends on what you mean by "shrinking"
chilli#5665: the half-pitch isn't decreasing much anymore, but density is still increasing quite a bit
AI_WAIFU#2844: I thought they both hit a wall recently
chilli#5665: not so much AFAIK
AI_WAIFU#2844: There was an article on it for N3 vs N4? SRAM density, at least, I assume same applies to logic
chilli#5665: TSMC claims 136 MTr/mm^2 for N5 and 220 MTr/mm^2 for N3
chilli#5665: oh, I think this is not true
AI_WAIFU#2844: https://fuse.wikichip.org/news/7343/iedm-2022-did-we-just-witness-the-death-of-sram/
AI_WAIFU#2844: Ah looks like it very much does not apply for logic
chilli#5665: yeah, I was saying that "I assume same applies to logic" is not true
AI_WAIFU#2844: Wonder why that is though? Like SRAM is just a few transistors glued together.
chilli#5665: my general understanding is that SRAM is more sensitive to leakage than logic
AI_WAIFU#2844: That figures
Maximum Limelihood Estimator#8915: Muon computers when
Maximum Limelihood Estimator#8915: Hold on, does that apply to GPUs as well? I thought Huang’s law was still alive and well
jrowe#5371: gpus and cpus have reached parity - Huang's law was only ever possible because of the disparity caused by gpu's being second class
jrowe#5371: the crypto craze beefed up asic and fpga production, too, launching them a few generations ahead, but moore's law seems to be the computational "great filter" and they're running out of functional dark magic
jrowe#5371: i keep waiting for some sort of computronium lattice made from fungus or some sort of microorganism
jrowe#5371: go 3d and use biological manufacturing, you can shuttle graphene, gold, copper, silver, etc using well known biology.
jrowe#5371: grow your own gpu lol
zukaboo#8804: So finally LLMs will not only behave like shoggoths but look like so too.
Maximum Limelihood Estimator#8915: What do you mean by "reached parity"
jrowe#5371: Roughly equal underlying fabrication technology
jrowe#5371: there's no dark magic quantum wizardry that works for gpus that doesn't apply to cpus
nullonesix#0744: why are we reaching limits of moore's as we approach agi
nullonesix#0744: exothropic principle: any approach to computing agi will be met with computational limits
Maximum Limelihood Estimator#8915: Hmm, are GPU/CPU transistors the same size?
jrowe#5371: They're a generation or two behind bleeding edge - 7nm as of 2021, not sure on current specs but I believe Intel and Apple have taken all the 2-3nm fabs for themselves
bw#3136: Nvidia's using a custom TSMC 5nm process for Hopper and Lovelace. Apple is also using a TSMC 5nm process for their current gen chips
Fessus#9563: Apple took almost all of TSMC's N3 (base) because no one else wanted it. It was a problematic node which didn't make economic sense for a super-majority of big customers and is only now getting "fixed" with their N3E node
jrowe#5371: Neat , so gpus are essentially caught up?
Fessus#9563: More or less
chilli#5665: I don't think Intel has that many orders on 3nm in the next year or so
Dec#1621: hello friends! have you struggled to keep up with alllll the goings on across allllll the AI discords? we've got a solution for you 🙂 we take discord noise and turn it into a custom newsletter for you to read whenever u want. check us out at https://spreadable.chat/ and feel free to DM me if you have feedback, questions or you'd like to be part of our pilot!
main#7610: This sounds like a privacy disaster
Ravna#1831: Computing as a whole consumes less than 1% of global energy production, while solar panels cover only a tiny amount of earth surface. The room for more computing is not nearly close to the limit yet.
lunarflu#6769: Does this comply with Discord's TOS?
Dec#1621: short answer: so far yes! long answer being completely transparent: we could run into some issues in the future but we'd hope to be able to work them out before they became a serious problem
Dec#1621: pls elaborate 🙂
MetaBoy#3008: Hi!
Recently OpenAI announced that they are shutting down Codex API including code_davinci:002 https://news.ycombinator.com/item?id=35242069. Correct me if I am wrong, but I think it was the only capable publicly API-available model without RLHF? I have seen a few people on Twitter who mentioned that it really hurts their research which is mostly safety-related.
Many people, including myself, instantly thought that LLaMA 70B (perhaps a fine-tuned version) could be a decent or even very good alternative. However, it is impossible to run on consumer hardware; you would probably need an A100 for this, which can get pretty expensive for many people (and will also require them to go through cloud setup).
I am curious about people's opinions on the following questions:
* Would it be actually a good alternative?
* Is it a good idea to have such a service, especially from the AI safety research point of view? If not, why?
* Isn't EleutherAI uniquely well-positioned to run such a project? Perhaps there is a chance EleutherAI would do it? If not, why? Legal risks, too difficult, too expensive, not enough capacity, not enough demand, something else?
lunarflu#6769: Gotcha, and how does the monetization work?
Dec#1621: if you decide to turn it on, the same as any other newsletter - affiliate links. But we customise the affiliate links to the content to make sure its not just some random link dropped in there for no reason
lunarflu#6769: Aha, so you create a newsletter (for example, about what is happening on eleuther.ai) and then you can add affiliate links?
Dec#1621: you got it
StellaAthena#3530: > The following models will be discontinued: code-cushman:001 code-cushman:002 code-davinci:001 code-davinci:002
I thought these are all models primarily intended for use to generate code, am I misremembering?
StellaAthena#3530: Yeah, @MetaBoy I think you’re misreading the scope of the announcement. `text-davinci-001` is still up and isn’t on their list of models to be discontinued
dmayhem93#3202: code-davinci:002 is the base model for text-davinci-003
synquid#7193: they just call it code
synquid#7193: for some stupid reason
StellaAthena#3530: Okay, but `text-davinci-001` is definitely a capable text-based non-RLHF model right
dmayhem93#3202: It's the FeedME thing that 002 was
dmayhem93#3202: davinci is still available iirc
dmayhem93#3202: https://platform.openai.com/docs/model-index-for-researchers#footnote-1 https://cdn.discordapp.com/attachments/729741769738158194/1087727379834744852/image.png
lunarflu#6769: So, as I understand it, you are using a bot to scrape user data without consent and then monetizing that data? https://cdn.discordapp.com/attachments/729741769738158194/1087727485728333834/image.png
StellaAthena#3530: @Dec this is blatantly a violation of discord ToS and basic human decency. If we ever catch you or someone else doing this with the EleutherAI discord server we will send you a cease and desist letter, as well as take any necessary precautions to prevent your service from functioning.
AI_WAIFU#2844: user has been banned for this post
lunarflu#6769: 👍
StellaAthena#3530: I was going to give it five minutes so they could read my reply, but I’ll DM them (also warn LAION and Carper, as they’re in their servers)
Hyperion#0575: I thought code-cushman:002 wasn't on the API? Or at least I only have cushman 001, and code-davinci-002
StellaAthena#3530: I’m just copying from the announcement
lunarflu#6769: Yeah, it's good to be cautious, but a reply like this doesn't exactly inspire confidence https://cdn.discordapp.com/attachments/729741769738158194/1087730117196926986/image.png
StellaAthena#3530: Classic
lunarflu#6769: "we'll figure out the tos violations somehow 😄 "
Hyperion#0575: Hmm https://discord.com/channels/729741769192767510/730095596861521970/1081591111904153710
Looks like the existence of this model was leaked via tiktoken but it's not public
StellaAthena#3530: On a reread, it doesn’t seem as blatantly wrong as I had originally thought (I think he’s trying to sell *EleutherAI* summarization of *our own discord server* as a service, not third parties) but we also don’t own rights to your content.
I can firmly say that we have no intention of ever agreeing to something like that.
jrowe#5371: Off-Topic gonna need an NDA
lunarflu#6769: I could even see a world where that's fine, for example, friend group, everyone consents, then no problem. But tens of thousands of users across multiple communities? I'm doubtful. If it's opt-in only, it defeats the point of scraping and summarizing posts in a discord. If it's not opt-in, then that's extremely shaky ground at best.
StellaAthena#3530: Oh I agree it’s a violation of Discord’s ToS regardless. I meant that some of my strongly ethical language is less applicable than when I thought they were announcing they were scraping the server and you should subscribe to their service to get summaries.
This is splitting hairs though – it’s wrong and we will seek to prevent it
lunarflu#6769: Gotcha, yeah, it was the option to scrape etc
MetaBoy#3008: Yeah, you are right, but I think that text-davinci-001 is visibly worse. It is basically GPT3 vs GPT3.5 difference.
I think it changes the situation significantly, but not fundamentally. And I think LLaMA 70B is noticeably better than GPT-3?
I am still interested in opinions, especially on why it could be a bad idea to have such a thing. I suppose the pros are much larger than the cons but maybe I am missing something?
ag7032#3373: Is there a way to give the model new information and then ask it to write about it?
I am not asking about finetuning: for example, giving it the recent news about GPT-4 and asking questions about it?
MetaBoy#3008: just literally send it to it?
StellaAthena#3530: I think you’re confused about something… either you want a non-RLHF model, or you want a model that’s like GPT-3.5 and not GPT-3.
MetaBoy#3008: I mean that davinci-002-code is around GPT-3.5 level and not RLHF and davinci-001-text is GPT-3 (and not RLHF too)
StellaAthena#3530: I would not assume that it’s not RLHF
kd90138#9368: if my translation pipeline is showing 90%+ utilization in task manager (mostly 3D) and 40% utilization in nvidia-smi
kd90138#9368: can i shove in more?
synquid#7193: I think they removed that info from the model overview, but I seem to remember the code-* models not being? https://platform.openai.com/docs/models/overview
synquid#7193: never mind https://platform.openai.com/docs/model-index-for-researchers
synquid#7193: I read that as code- being a pure pretraining model
StellaAthena#3530: I think that’s a dangerous assumption
StellaAthena#3530: Is this the right-hand column of nvidia-smi?
MetaBoy#3008: I am like 95% sure it is not.
Kinda reliable source? https://twitter.com/repligate/status/1638041885498560512
I have seen some other people on Twitter saying the same thing and I vaguely remember that there were some available non-RLHF models which were closer to GPT3.5 than GPT3
Anyway, it's not even the crux here, the thing is that some people want capable models which are not RLHF (or sufficiently similar to it, it is not about a particular approach to training, it's about behaviour) for their research
So the question still stands even if it will turn out I was not technically correct here
StellaAthena#3530: @janus is generally as reliable a source as it’s gets IMO
StellaAthena#3530: The core issue is that there is no model that’s comparable to it (or to GPT-3) whose license allows me to put it on the cloud and sell API access.
StellaAthena#3530: (This is why it matters that Llama isn’t actually open source)
kd90138#9368: https://cdn.discordapp.com/attachments/729741769738158194/1087740539031531621/image.png,https://cdn.discordapp.com/attachments/729741769738158194/1087740539245449297/image.png,https://cdn.discordapp.com/attachments/729741769738158194/1087740539459346522/image.png
kd90138#9368: yes it seems to be the righthand column
jrowe#5371: The world needs some llama/openai level foss foundation models
jrowe#5371: Meta should MIT license llama weights
Hyperion#0575: Looks like the Bard waitlist is finally up: <https://bard.google.com/>
Piece on it here: https://www.theverge.com/2023/3/21/23649794/google-chatgpt-rival-bard-ai-chatbot-access-hands-on
StellaAthena#3530: So that’s not actually measuring “end to end” utilization. It’s measuring the amount of time that is spent waiting for the GPU to finish computing. The other ~60% of the time you’re blocking on something other than the raw computing (probably communication across devices or data loading)
StellaAthena#3530: ♻️ is a bad meme, let’s not complain about people for trying to share news
paws#3311: sorry 😦
kd90138#9368: which one are you referring to? the task manager or nvidia smi?
kd90138#9368: currently it is single GPU so i guess it must be ram access
StellaAthena#3530: nvidia
kd90138#9368: i see
kd90138#9368: I would try batching but https://cdn.discordapp.com/attachments/729741769738158194/1087742120317702164/image.png
StellaAthena#3530: I highly recommend @chilli’s blog post “Making Deep Learning Go Brrrr From First Principles”
https://horace.io/brrr_intro.html
kd90138#9368: the way i interact with huggingface pipelines does not expose OOM details in a way i'm comfortable with so i'll let it be for now
kd90138#9368: thanks, I will
kd90138#9368: wait i already read this haha but it was a skim so this time ill read deeper
kd90138#9368: Really funny how the cookie crumbles. I swore that the HF pipeline interface was good enough for me and I would never go deeper. Now I'm wondering how i can mess with its guts
kd90138#9368: will i be messing with pytorch custom CUDA kernels next
StellaAthena#3530: It depends on what you’re doing
StellaAthena#3530: (Helpful, ik)
StellaAthena#3530: View this as a level-up experience! You’re now (invested / skilled / knowledgeable / bored / something) enough to care about it 🙂
StellaAthena#3530: I would skip ahead to the “Overhead” section btw
MetaBoy#3008: This is kinda expected but I am not sure I fully understand the LLaMA licensing situation. I see that they have GNU GPL v3 in their repo but I do not understand how it relates to weights? Are they also GNU GPL v3 or what? Or is using the weights without their permission a violation? Doesn't feel like it because Alpaca seems to be fine. Yes, they removed the live demo, but due to safety concerns, not licensing (or at least this is what they say).
Since you said "sell API access" and some other signs, I assume that the problem is that license in the repo is GNU.
If anyone knows how licenses work please tell me, especially if I misunderstood something.
But if I am right, is the problem mostly money or other aspects of GPL (such as contagiousness?).
If it's solely the money issue I think it should be solvable, given amount of people interested and reasonably low cost of running such a service, I think it is very achievable to fund it via public fundraise or something like this.
|
FWIW there is a lot of interest in changing the license to Apache: https://github.com/facebookresearch/llama/pull/184
But I don't think it's very likely to be implemented, quite possibly it will not even be addressed
StellaAthena#3530: Llama is not GPL licensed
MetaBoy#3008: Why is alpaca fine then? Not disagreeing, just trying to understand
StellaAthena#3530: The *codebase that Meta released for inference* is GPL licensed
StellaAthena#3530: When you go to download the weights they have you sign a separate license for the model weights
StellaAthena#3530: My read on the situation is that they kinda don’t know what they’re doing from a licensing POV and are figuring it out as they go along tbh.
StellaAthena#3530: See, e.g., https://cdn.discordapp.com/attachments/729741769738158194/1087747334156337202/IMG_2185.png
MetaBoy#3008: Oh, I think I understand now. For some reason, I assumed that Alpaca provides the weights. Apparently, that is not the case?
So the problem is neither money nor the contagiousness of the GPL; it is completely unrelated to the GPL, and redistributing the weights is basically not permitted
StellaAthena#3530: It looks like Alpaca is actually no longer online at all
MetaBoy#3008: Yes, as I said they removed online demo but [at least initially] said it's due to safety https://twitter.com/1littlecoder/status/1636843916833042432
But the github is still fine https://github.com/tatsu-lab/stanford_alpaca/tree/main#
StellaAthena#3530: Money is *a* problem but I’m sure it’s one we could easily overcome by offering the API at-cost
StellaAthena#3530: The GitHub doesn’t have the model. It has instructions on how to make the model yourself
synquid#7193: there are LORA versions on huggingface
|
synquid#7193: at least
StellaAthena#3530: To the best of my understanding (and that of our lawyer), it is a violation of Meta’s ToU to offer Llama or a derived model as a service. This doesn’t mean Meta will sue everyone who does, but it means they certainly could.
Additionally, regardless of whether I feel Llama should be open source I do think it’s important to respect other people’s decisions about the terms and contexts that they’re comfortable with their models being used in. If we don’t respect people’s good faith attempts to disseminate technology, it makes it harder for people to continue to release things.
MetaBoy#3008: I imagined that fine-tuning is relatively cheap. I think the original Alpaca was fine-tuned for about $600, and some people managed a non-trivial amount of fine-tuning for even less. I assume that fine-tuning the 70B version is not going to be crazy expensive (less than $50k, maybe even $25k).
Inference doesn't sound like a big cost if it is not completely public access but just a "whitelist for researchers" kind of thing.
Of course, there are other costs, but I imagine it is manageable?
Also, do I understand correctly that training a LLaMA-like model from scratch is not really possible due to data limitations? Or is the main issue here cost as well?
MetaBoy#3008: I am not super knowledgeable about the costs of training and stuff like that, so I may misunderstand something
StellaAthena#3530: Off the top of my head, training the 65B llama model would cost around a million dollars. Finetuning it with Carper’s trlX library should cost a couple thousand if it’s done on the same data Stanford used.
paws#3311: wait
paws#3311: a million dollars?
zukaboo#8804: I've done a curious little experiment on abnormal psychology applied to language models.
paws#3311: damn i thought you could train a 100B+ model in a million dollars
StellaAthena#3530: Well you can, if you train on less data
StellaAthena#3530: Remember this model is a little better than the original GPT-3
|
MetaBoy#3008: $1M sounds like a significant barrier. What do you (or somebody else) think about data limitations? Is that going to be an even larger obstacle or not?
StellaAthena#3530: @zukaboo these are very large text blocks that are disrupting an on-going conversation. Can you upload them to an external service like paste bin and consolidate your posts into a single message
StellaAthena#3530: No, I think that’s a non-issue
paws#3311: magic of long context (i would imagine)
StellaAthena#3530: @MetaBoy the cost for the compute needed to pretrain a large language model is Cost = k\*N\*D, where:
- k is a factor that depends on your deal with cloud providers and technical skill, but between 10 and 50
- N is the number of parameters (in billions)
- D is the number of tokens (in billions)
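A rough sanity check of that rule of thumb, plugging in LLaMA-65B-ish numbers (65B parameters and 1.4T training tokens, as reported in the LLaMA paper) and reading the output as US dollars, which matches the ~$1M estimate above:

```python
# Worked example of Cost = k * N * D with N, D in billions of params/tokens.
def pretrain_cost(k, n_billion_params, d_billion_tokens):
    return k * n_billion_params * d_billion_tokens

print(pretrain_cost(11, 65, 1400))  # ~1.0e6 -> about a million dollars
print(pretrain_cost(50, 65, 1400))  # ~4.6e6 at the pessimistic end of k
```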
MetaBoy#3008: Okay, that's somewhat reassuring. I am curious whether EleutherAI would be interested in such a project if reasonable funding were somehow provided? It seems to be in line with previous efforts such as the GPT-Neo model lineup. Or did the research direction change significantly, so it's not the kind of thing Eleuther does anymore?
I understand that it is probably a serious organizational decision, but perhaps it is possible to share a rough expected sentiment?
P.S. In case it is not obvious, I do not have access to funding.
zukaboo#8804: Done: https://gist.github.com/suhr/932e08352f3db5c1df7955e81d7ddacd
MetaBoy#3008: Sorry if for some reason this is not a very appropriate question; I am asking out of sheer curiosity
paws#3311: can you also delete the longer blocks of text and point to this instead
StellaAthena#3530: Our primary goal is to do cool research on the capabilities and limitations of LLMs and empower other people to do so as well. This is ultimately a cost-benefits analysis based on how much good we think a particular release would do
zukaboo#8804: You mean, remove my messages in the chat?
paws#3311: oh i see they were removed
paws#3311: cool
|
Ravna#1831: competing against other LM providers on benchmark metrics is definitely not an EleutherAI priority
MetaBoy#3008: I see
Well, thanks a lot @StellaAthena and others who participated in this discussion. I am less confused about the LLaMA/Alpaca situation now and understand some basic aspects of doing base-model projects
MetaBoy#3008: Interesting, but honestly it is hard to say anything specific about it; I suppose a less capable model just performs worse (as expected)
Hyperion#0575: Last call for the talk on Tracr! We will not be recording this so this is your chance to catch up on David's work! https://meet.google.com/ivx-acvz-wfi
Louis#0144: tracr?
Louis#0144: ohh
Louis#0144: ya
Hyperion#0575: See #announcements for the abstract
Louis#0144: damn I wish he was recording it
Louis#0144: 😢
tpapp157#3643: Nvidia GTC keynote: https://www.youtube.com/watch?v=DiGB5uAYKAg
Akratic#3884: Great talk, thanks David!
alstroemeria313#1694: ok so. if i have a loss function that trains my model using the *first derivative* of the model output (obtained with forward mode autodiff)
alstroemeria313#1694: what issues arise during training that don't arise for models with normal losses?
alstroemeria313#1694: the model is a transformer w/ geglu or swiglu
alstroemeria313#1694: (i tried leaky geglu in case gelu's nonmonotonicity was a problem and it didn't really make it better)
alstroemeria313#1694: the specific thing it's doing is that after enough training, its output images become blurry
janus#0150: ❤️ We've come so far from when no one here believed anything I said about GPT without a paper to back it up :berk:
|
alstroemeria313#1694: do i need larger batch sizes? to disable weight decay?
alstroemeria313#1694: a different activation function? turning off geglu/swiglu in my 1st derivative trained model?
alstroemeria313#1694: more layers? a different architecture somehow?
alstroemeria313#1694: lower learning rate? lr decay?
alstroemeria313#1694: add dropout?
alstroemeria313#1694: I am trying to train a feedforward single step ODE integrator where I can feed in x_t for any t from t_start to t_end and it gives me the solution at t_end
nostalgiahurts#3408: the only prior work I can think of is SIREN, which solved the Poisson equation by training the NN on just the gradient or laplacian. so maybe a different activation function is needed? I guess it depends on whether geglu/swiglu's derivatives are sufficient for the task. use fourier features somehow if you're not already?
alstroemeria313#1694: the only thing i am using random fourier features for at the moment is the timestep embedding
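For reference, a random Fourier feature timestep embedding typically looks something like the sketch below (a generic version with illustrative hyperparameters, not the actual code under discussion; in practice the frequencies are sampled once at init and frozen):

```python
import jax
import jax.numpy as jnp

def timestep_fourier_features(t, key=jax.random.PRNGKey(0), n_features=256, scale=16.0):
    # Random (frozen) frequencies; n_features and scale are illustrative choices.
    freqs = jax.random.normal(key, (n_features // 2,)) * scale
    angles = 2.0 * jnp.pi * t[..., None] * freqs
    return jnp.concatenate([jnp.cos(angles), jnp.sin(angles)], axis=-1)

# e.g. timestep_fourier_features(jnp.linspace(0.0, 1.0, 8)).shape == (8, 256)
```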
alstroemeria313#1694: i also tried my leaky gelu (as the gate in leaky geglu), which is monotonic and also has a valid second derivative
alstroemeria313#1694: i could try softplus but that tends to be worse
nostalgiahurts#3408: oh sorry, it looks like different activation functions actually work for the gradient. they just don't work for the laplacian https://cdn.discordapp.com/attachments/729741769738158194/1087843842612535387/siren-poisson.png
alstroemeria313#1694: those are spatial derivatives though
alstroemeria313#1694: or, uh
alstroemeria313#1694: they used a loss that matched the spatial derivative of the image to that of the neural net?
alstroemeria313#1694: since i'm training a single step ODE solver, i have fixed f(x, t_end) = x and then i am trying to push the derivative of the model output along the directions of ODE solution paths to 0
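For readers following along, a minimal sketch of the kind of objective being described (all names here are assumptions, not the actual training code): use `jax.jvp` to take the directional derivative of the single-step solver along the ODE path direction (dx/dt, 1) and push its mean square toward zero.

```python
import jax
import jax.numpy as jnp

def path_derivative_loss(params, model_apply, x_t, t, dx_dt):
    # `model_apply` stands in for the network's apply function (hypothetical name);
    # `dx_dt` is the teacher ODE's vector field evaluated at (x_t, t).
    # The tangent (dx_dt, 1) points along the ODE solution path, so the JVP below is
    # the derivative of the predicted endpoint as (x_t, t) moves along a trajectory.
    f = lambda x, time: model_apply(params, x, time)
    _, df_along_path = jax.jvp(f, (x_t, t), (dx_dt, jnp.ones_like(t)))
    return jnp.mean(df_along_path ** 2)
```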
nostalgiahurts#3408: yeah, from what I understand https://cdn.discordapp.com/attachments/729741769738158194/1087844532009308180/siren-poisson2.png
alstroemeria313#1694: and... it works at first? then with more training instead of the output images getting better quality they get worse, washed out and blurry
Fessus#9563: Weird manifestation of normalization artifacts maybe?
alstroemeria313#1694: right now i am trying turning off weight decay
alstroemeria313#1694: what's interesting is what happens in between the initial good phase and the blur. the images start looking like someone took two training set examples and linearly interpolated them, like you can see a CIFAR-10 car and then a fainter overlaid car facing in the other direction
|
Fessus#9563: very weird
alstroemeria313#1694: only images from the same CIFAR-10 class get blended in this way (the model is class-conditional)
alstroemeria313#1694: i have tried all sorts of things to fix this
alstroemeria313#1694: like training with a perceptual loss based metric instead of simply pushing the mean square of the derivative toward 0 (Euclidean metric)
alstroemeria313#1694: this is sharper early on but then it blurs anyway
alstroemeria313#1694: hmmmmm
alstroemeria313#1694: what if the model i'm distilling this from (the neural ODE) becomes *overfit* later on and has too-sharp decision boundaries
alstroemeria313#1694: and the single step solver i'm training alongside can't handle that
alstroemeria313#1694: this is cifar-10 and the models are width 768 depth 12 gated unit transformers, of course it will easily overfit if not regularized
Fessus#9563: It certainly sounds like some weird overfitting is happening with the additional ghosted images of the same class
alstroemeria313#1694: and the overlaid other images are the result of the NODE decision boundaries being sharper than the single step solver can learn
alstroemeria313#1694: and eventually it just converges to an MSE optimal blurry solution
alstroemeria313#1694: idk
Fessus#9563: That would make some sense. The fact that it gets things right early then gets "worse" certainly implies that at least from the model's standpoint, optimizing the learning objective means producing worse samples.
alstroemeria313#1694: in which case i should try adding dropout
alstroemeria313#1694: at minimum
alstroemeria313#1694: where does dropout go in a transformer again?
Fessus#9563: after the add+norm in the self attention block unless someone proved that was crap and you should do it a different way
alstroemeria313#1694: i'm using prenorm so there's no add+norm
alstroemeria313#1694: save input -> layernorm -> qkv proj -> reshape to multiple heads -> layernorm q and k -> matmul q and k to make attn weights -> softmax attn weights -> matmul attn weights with v -> reshape from heads to single vectors -> out proj -> add to saved input
|
alstroemeria313#1694: oh i guess i can go read the flax code i copied this from and modified to add the "layernorm q and k" step
Fessus#9563: You could probably get away with just doing it on the out projection
alstroemeria313#1694: ok so. flax does it to *the attention weights matrix*
alstroemeria313#1694: pre softmax
Fessus#9563: yeah, that was the other option. or both
alstroemeria313#1694: i was thinking both
alstroemeria313#1694: i also need to add it to the ffn, i put it after the activation right?
Fessus#9563: I assume yes
alstroemeria313#1694: `# dropout is broadcast across the batch + head dimensions` (for attn weights dropout)
alstroemeria313#1694: ty~
alstroemeria313#1694: ...Across batch? What?
Fessus#9563: I assume that's just poorly written because that makes no sense
alstroemeria313#1694: our axes are batch, sequence, head, d_head
alstroemeria313#1694: uh
alstroemeria313#1694: what shape is that actually
alstroemeria313#1694: it's `bhqk`
alstroemeria313#1694: so batch, head, query, key
alstroemeria313#1694: NO THEY REALLY DO THAT WTF
alstroemeria313#1694: `dropout_shape = tuple([1] * (key.ndim - 2)) + attn_weights.shape[-2:]`
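For concreteness, here is what that expression works out to for a typical flax attention call (shapes are illustrative):

```python
batch, heads, q_len, kv_len = 8, 12, 128, 128
key_ndim = 4  # flax's key tensor is rank 4: (batch, length, num_heads, head_dim)
attn_weights_shape = (batch, heads, q_len, kv_len)
dropout_shape = tuple([1] * (key_ndim - 2)) + attn_weights_shape[-2:]
print(dropout_shape)  # (1, 1, 128, 128): one mask broadcast over every batch item and head
```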
alstroemeria313#1694: Whyyyyy would you ever drop out the same positions of each *batch item*.
|
alstroemeria313#1694: OK I will just not do that because wtf, unless someone can explain it
Fessus#9563: Yeah, that makes no sense
alstroemeria313#1694: Should I still broadcast across heads? IDK what that's for either
alstroemeria313#1694: So leaning toward no
alstroemeria313#1694: ...Let me check PyTorch
alstroemeria313#1694: It's in C somewhere lol
alstroemeria313#1694: So I can't read it easily
Fessus#9563: Every intuition I have says that dropout indices should just be as random as possible
Fessus#9563: but idk
alstroemeria313#1694: Normally
alstroemeria313#1694: For conv layers you want to broadcast across the spatial axes.
alstroemeria313#1694: But across *batch*? whattttt
Fessus#9563: yeah, but the conv thing of highly correlated neighboring channels doesn't really apply for attention heads does it?
alstroemeria313#1694: it does not.
alstroemeria313#1694: well, i think it doesn't.
Fessus#9563: I'm pretty sure it doesn't
alstroemeria313#1694: ok let me go read the pytorch transformer code to see where the other dropouts go
alstroemeria313#1694: they drop out the output projection of self-attn, they drop out after the activation in the ffn, and they drop out after the 2nd layer of the ffn
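Putting those placements together, a pre-norm block with the dropout sites just listed might look like the sketch below (module names and hyperparameters are illustrative; this is not the code under discussion):

```python
import torch
import torch.nn as nn

class PreNormBlock(nn.Module):
    def __init__(self, d_model=768, n_heads=12, d_ff=3072, p=0.1):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        # dropout=p here is applied to the attention weights matrix
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=p, batch_first=True)
        self.drop_attn = nn.Dropout(p)   # on the attention output projection
        self.norm2 = nn.LayerNorm(d_model)
        self.fc1 = nn.Linear(d_model, d_ff)
        self.act = nn.GELU()
        self.drop_act = nn.Dropout(p)    # after the activation in the FFN
        self.fc2 = nn.Linear(d_ff, d_model)
        self.drop_ffn = nn.Dropout(p)    # after the second FFN layer, before the residual add

    def forward(self, x):
        h = self.norm1(x)
        h, _ = self.attn(h, h, h, need_weights=False)
        x = x + self.drop_attn(h)
        h = self.drop_act(self.act(self.fc1(self.norm2(x))))
        return x + self.drop_ffn(self.fc2(h))
```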
Fessus#9563: 🤷♂️ sounds like everyone's just winging it tbh
alstroemeria313#1694: ok let's see what fun bugs i wrote when putting dropout in
|
alstroemeria313#1694: ...huh. so when dropping out the attn weights matrix you still zero the dropped values?
alstroemeria313#1694: instead of setting them to -inf?
alstroemeria313#1694: ok
alstroemeria313#1694: guess i'll do that too
alstroemeria313#1694: ...why is it REALLY SLOW now?
alstroemeria313#1694: is dropout that expensive?
alstroemeria313#1694: training went from 5 it/s to 2.8 it/s
Fessus#9563: Could be why they were doing those broadcasts
alstroemeria313#1694: yeahhh
chilli#5665: https://github.com/pytorch/pytorch/blob/master/torch/_decomp/decompositions.py#L968
chilli#5665: I wonder if Jax's splittable RNG is just expensive
chilli#5665: we have reference implementations (in Python) of a pretty wide slice of PyTorch operations nowadays
alstroemeria313#1694: it's counter based, and you should be able to shard creating the randomness across data parallel shards by using the right counter offsets, but otoh it's slow in practice and that's an obvious place for them not to have done the obvious optimization
alstroemeria313#1694: ...maybe i am hitting some sort of perf problem because i am doing dropout inside a thing i jax.linearize() actually
alstroemeria313#1694: inside a pjit
chilli#5665: hmm... but are the offsets as large as the actual tensor?
alstroemeria313#1694: this is not the best optimized code path maybe
chilli#5665: which we use for a lot of things, including compilation lol
alstroemeria313#1694: with data parallel you should be able to just make an arange() or something that starts at the right offset, one scalar per device, because the batch axis is on the outside
chilli#5665: yeah i was thinking that's one possible thing to do, just not sure if that's what jax does
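A minimal sketch of that idea (not necessarily what JAX does internally): derive one PRNG key per data-parallel shard from a single base key, so each shard draws its own dropout mask without materializing a counter tensor the size of the activations.

```python
import jax

base_key = jax.random.PRNGKey(0)
# One key per device; split is cheap and each key seeds an independent stream.
per_shard_keys = jax.random.split(base_key, jax.device_count())  # shape (n_devices, 2)

# Inside the per-shard computation, shard i uses per_shard_keys[i], e.g.:
mask = jax.random.bernoulli(per_shard_keys[0], p=0.9, shape=(4, 128, 768))
```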
|
alstroemeria313#1694: tbh i should take the linearize() out and replace it with a jvp(), i don't need a full linearize anymore
alstroemeria313#1694: i think i have seen linearize() perf footguns before
chilli#5665: Do you have an example of how to do this btw?
alstroemeria313#1694: no
chilli#5665: Like, how to implement the local semantics
alstroemeria313#1694: it's in the jax internals somewhere.
chilli#5665: but you can pass an array of keys to `jax.random.normal`?
alstroemeria313#1694: i'm... not sure the counter stuff is actually exposed
chilli#5665: oh lol
&.#0001: 4 new OpenAI models just silently released https://cdn.discordapp.com/attachments/729741769738158194/1087929895998459904/cachedImage.png
&.#0001: could be unintentional like gpt-3.5-brooke
&.#0001: initial pass, they seem like strong models, comparable to curie or davinci
&.#0001: they aren't instruct models
&.#0001: https://github.com/EleutherAI/lm-evaluation-harness hpw would I go about running lm-eval-harness on this?
&.#0001: wait, there's a tutorial
&.#0001: running it now
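From memory, the harness README at the time showed an invocation along these lines for OpenAI API models; the exact flags, task names, and environment variable may differ by harness version, so treat this as an assumption to check against the repo's docs rather than a verified command (the engine name below is the README's example, to be swapped for the new model's name):

```bash
export OPENAI_API_SECRET_KEY=YOUR_KEY_HERE
python main.py \
    --model gpt3 \
    --model_args engine=davinci \
    --tasks lambada_openai,hellaswag
```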
StellaAthena#3530: I wonder if it's called canary because it's a canary that someone is plumbing the OAI API in a way they aren't supposed to...
Kharr#7888: I expect it's their new optimized to versions that will run at a lower cost and faster latency
CarsonPoole#0640: I just tried them and they're not good zero shot at all
CarsonPoole#0640: like much much worse than an alpaca-tuned 2.7b
|