genai (Immortal Discoveries)#0601: Is it what GPT-2 trained on?
StellaAthena#3530: It would probably be a good idea to consult !faq
genai (Immortal Discoveries)#0601: I'd not worry if I go with the same 40GB GPT-2 used, but other datasets have that worrying possibility that they may not give me the magic that GPT-2 attained. Do you or don't you have the same 40GB?
genai (Immortal Discoveries)#0601: And is openWebText2 closer to the 40GB gpt2 used ?
StellaAthena#3530: Our model does a lot better than GPT-2. GPT-2 didn't have particularly good training data
genai (Immortal Discoveries)#0601: Ok. So I think I was downloading from OpenWebText somehow, and not https://the-eye.eu/public/AI/pile/
genai (Immortal Discoveries)#0601: which is a different page i never saw lol
Teemochu#8740: OWT2 is close to the GPT-2 set, yes
genai (Immortal Discoveries)#0601: but those expiring offlinks....
Teemochu#8740: https://the-eye.eu/public/AI/pile_preliminary_components/ (contains one or two things not in the pile, note)
Teemochu#8740: if you just want OWT2, openwebtext2.jsonl.zst.tar
StellaAthena#3530: Welcome to the internet. Sometimes links go bad. There's nothing we can do about that
genai (Immortal Discoveries)#0601: that's why i was asking if anyone has stored the 40GB
Teemochu#8740: If you're looking for a dataset (much much) smaller than the Pile for testing models, enwik8 is always a good one btw
genai (Immortal Discoveries)#0601: no no it's not 🙂
genai (Immortal Discoveries)#0601: it has much html etc lol
StellaAthena#3530: ...
StellaAthena#3530: You linked to a website where you can download the 40GB
StellaAthena#3530: err
genai (Immortal Discoveries)#0601: huh?
StellaAthena#3530: I guess you didn't.
Teemochu#8740: the one I linked to has the 40gb (actually 66)
Teemochu#8740: openwebtext2.jsonl.zst.tar is that when unpacked
StellaAthena#3530: but presumably you found https://openwebtext2.readthedocs.io/en/latest/
genai (Immortal Discoveries)#0601: ya but is OWT1 or OWT2 even the 40GB 🙂 ?
StellaAthena#3530: and @Teemochu linked to a place to download OWT2
StellaAthena#3530: I don't know what "the 40 GB" refers to.
Teemochu#8740: plug and play version is better, "raw scrapes" includes a lot of spam sites that no one ever voted on on Reddit
genai (Immortal Discoveries)#0601: "the 40GB"...you know 🙂
genai (Immortal Discoveries)#0601: gpt2
genai (Immortal Discoveries)#0601: i'm reading the page, etc
genai (Immortal Discoveries)#0601: brb
StellaAthena#3530: If I knew what you meant I wouldn't have asked.
StellaAthena#3530: If you want the GPT-2 training data, that was never released
StellaAthena#3530: Which is kinda the whole purpose of OWT2 and the Pile
Teemochu#8740: OWT2 is an attempt to reconstruct something similar to the GPT-2 set
rom1504#5008: Do you have the compute to train large language models @genai (Immortal Discoveries) ? If not are you sure you're looking for a 40GB dataset ?
𓅬 gabriel_syme 𓅬#3220: is there a slight possibility that you also think that because your focus is language? Not trying to say you're wrong, just maybe thinking why others think of different domains (because they are probably domains they work on or like)
genai (Immortal Discoveries)#0601: you can see my project here > https://encode.su/threads/3595-Star-Engine-AI-data-compressor
Teemochu#8740: to be fair, 40GB is plausibly single-3090 territory for any model that will fit inside one for training
genai (Immortal Discoveries)#0601: my score is 19,477,251 bytes for enwik8.
StellaAthena#3530: @genai (Immortal Discoveries) I **strongly** recommend reading the documentation for the resources you're talking about. All of the information you've asked for is easily findable.
genai (Immortal Discoveries)#0601: top scores are listed here for Lossless Compression evaluation (the one i use) http://mattmahoney.net/dc/text.html
genai (Immortal Discoveries)#0601: the AI I can make (without using Transformers...) is like Matt Mahoney's; it would (if in C++ lol) be able to run 100MB of data in 10 mins
genai (Immortal Discoveries)#0601: so 1GB is very plausible
genai (Immortal Discoveries)#0601: 1GB would give somewhat comparable results
genai (Immortal Discoveries)#0601: to 40GB
genai (Immortal Discoveries)#0601: i had tried small model gpt2 and it was pretty close to larger models
StellaAthena#3530: At 100 MB every 10 minutes 40GB would take a month
StellaAthena#3530: oops extra zero there. 3 days then
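The corrected estimate can be checked directly:

```python
# Sanity-checking the throughput arithmetic above: 40 GB processed in
# 100 MB chunks at 10 minutes per chunk.
chunks = 40_000 / 100          # 40 GB expressed in 100 MB chunks
minutes = chunks * 10
days = minutes / (60 * 24)
print(round(days, 2))          # ≈ 2.78, i.e. roughly 3 days
```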
EricHallahan#1051: This is explicitly stated in the FAQ too by the way.
> However, we ask that you do not expect us to be your tech support; those who contribute to EleutherAI do so in their free time and tend to prefer contributing to projects rather than debugging your problems. **We recommend consulting the corresponding documentation before asking us for help.** If you think you have found a bug, please consider opening an issue on GitHub.
<https://eleuther.ai/faq>
genai (Immortal Discoveries)#0601: this is, using CPU (Matt Mahoney's doesn't use GPU 🙂 )
genai (Immortal Discoveries)#0601: same for mine too BTW
genai (Immortal Discoveries)#0601: Ok I will. But isn't it sometimes faster to ask a person 🙂? I still think so...
genai (Immortal Discoveries)#0601: It can depend; lots of stuff on a page can make it look lengthy to read through.
StellaAthena#3530: This is a research server. We are not trying to teach people CS 101 things. Especially when we even wrote the info down for you already
StellaAthena#3530: You should reread the #rules, as you clearly didn't pay a whole lot of attention.
genai (Immortal Discoveries)#0601: ok
bmk#1476: lmao a .su (Soviet Union) TLD in the wild
Kia#2550: Yes:mittwoch:
adrien#5042: I like how straightforward the rules are
Kia#2550: Yup 😄
Imperishable_NEET#1969: I wonder if GPT-3 (trained in part on Reddit posts) was trained on r/subredditsimulator and r/subredditsimulatorGPT2. Which were themselves generated by bots, no humans allowed.
Imperishable_NEET#1969: The original non-GPT2 r/subredditsimulator (now defunct) I think used continuously-training Markov chains, so it was learning from bot output.
Imperishable_NEET#1969: If you train on enough bot output doesn't it just become a feedback loop of nonsense? Or does something more interesting happen?
CKtalon#7792: the percentage of 'bad' data probably isn't that great
Kia#2550: Probably and probably isn't Nonsense?
jazzydag#5857: I've begun to train a GPT-Neo 1.3B on a v3-8 TPU with a 20GB text corpus (a language other than English). I wonder about the order of magnitude of the training time: several hours, a few days, several days, ... I could get a cost estimate on GCE. If someone can share their experience, it would be great! Thanks
pragmaticml#1730: https://scite.ai/ fascinating idea but seems pretty useless for the queries I've tried so far -- don't think arXiv is part of the index (or basically any ML literature)
nz#9710: I never tried Scite but was quite satisfied with connected papers (which I guess is somewhat similar?)
janus#0150: The opposite. I focus on language _because_ I think this. Before GPT-3 I would have bet on RL.
janus#0150: I'm curious to what extent people think
1) other modalities are needed to improve language ability to superhuman level (a means to an end)
vs. 2) other modalities are important because they are inherently important (an end itself)
janus#0150: 2 is what doesn't make sense to me, so I'd be interested in hearing points in favor. Probably it comes down to different takeoff stories.
Sphinx#2092: What does "inherently important" mean?
janus#0150: (I don't know :p. That's why I have to ask). I think rom1504 thinks that we want it to become embodied and it needs other modalities to become embodied?
janus#0150: Sub in for 2) "any reason for other modalities except 1)"
alstroemeria313#1694: huh, apparently you can do projected gradient descent with momentum with a weak Wolfe conditions line search
alstroemeria313#1694: you just have to like... be careful that momentum isn't making your step not a descent direction (you can just check this explicitly by taking the dot product of the step and the gradient)
alstroemeria313#1694: (You can also do Adagrad or Adam style diagonal preconditioning with a line search, since the diagonal preconditioner doesn't change any signs it still gives you a descent direction, but I tried this and it was kinda worse on the problems I tried it on)
alstroemeria313#1694: (Sometimes I come across problems that are not stochastic, that I only have a loss function for, that I want to minimize, maybe subject to some simple constraints)
alstroemeria313#1694: (And line search is king in this area)
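A toy version of the idea above: gradient descent with momentum plus a line search, explicitly checking via a dot product that the momentum step is still a descent direction. A weak-Wolfe search would also test a curvature condition; backtracking on the Armijo condition keeps the sketch short, and all names are illustrative.

```python
import numpy as np

def minimize(f, grad, x0, beta=0.9, iters=200):
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(iters):
        g = grad(x)
        v = beta * v - g                  # momentum-smoothed step
        slope = np.dot(v, g)
        if slope >= 0:                    # momentum broke the descent property
            v, slope = -g, -np.dot(g, g)  # fall back to steepest descent
        t, fx = 1.0, f(x)
        while f(x + t * v) > fx + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5                      # backtrack until sufficient decrease
        x = x + t * v
    return x
```

On a simple quadratic this walks straight to the minimum; the point of the `slope >= 0` check is exactly the caveat above, that momentum can turn the step into a non-descent direction.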
Sphinx#2092: I mean, that much is clear. There is just value (as in $$$) in other modalities. Moreover, humans interact with images. Just because some subset of them don't (e.g. blind people) doesn't negate the fact that a large portion of them do and building tools to interact is useful. Even if your goal was purely communication, there's still value in things like e.g. being able to search through images with both text (e.g. a prompt) and image (e.g. a reference image) input at the same time.
janus#0150: What if your goal was building AGI, not $$$ or improving google image search?
janus#0150: One argument is that we might want AGI to give us technical specifications for building better GPUs and maybe diagrams are useful for that? I think text might be enough to communicate the needed information though.
Sphinx#2092: Even if your goal was AGI, it's not clear to me that just using text is better than e.g. using images, making tons of money, then reinvesting that money in text.
Sphinx#2092: But I think this also ties back to the question I asked before, namely what does "inherently important" mean. If that means "towards building AGI" then maybe? I dunno. Not sure if anyone knows what are all the inherently important parts of building AGI. Though I would be surprised if additional modalities don't speed up the process. If it's not necessary, it's likely helpful.
cfoster0#4356: I guess I don't get this distinction. When you say "language ability" I think of solving problems like parsing sentences into trees, or coreference resolution, or maybe summarization. For example, I don't think writing technical specs for new GPUs is a language problem: it's a system design and simulation problem with a writing component at the end
cfoster0#4356: So if you want your general intelligence to handle general problem solving (or at least with the same task generality as us hoomans) then I think non-text IO and maybe even representations are likely
janus#0150: I'm using language somewhat broadly. I think of writing textbooks as language, including math, physics, etc. I think of writing research papers as language. I think of coding as language.
I can see a perspective that those things aren't enough for what we need AGI to accomplish, and that it needs non-text IO to be able to do what we want it to accomplish, or that language is enough for AGI to do what we need it to accomplish, but it won't be able to learn those things accurately without a physical understanding of the world and non-text IO is necessary to give it that.
janus#0150: So if the GPU designs can be described in natural language, that's a natural language problem. If drawing a schematic instead is some magnitude(s) easier, then maybe it's not.
cfoster0#4356: If you wanna train a transformer that operates autoregressively on *arbitrary bytestreams*, then yeah, I believe that's enough for AGI
cfoster0#4356: Yeah, I suspect this will be the case
janus#0150: Nah, I think there is a major divide between arbitrary bytestreams and the things I described. Maybe "things that humans can write" in linear form. Like natural language, math, code, etc.
janus#0150: I don't know enough about GPU designs and bottlenecks to know. Seems plausible
cfoster0#4356: So diagrams would not fall under that categorization because they aren't in linear form, right?
janus#0150: Right
janus#0150: But instructions in natural language, and equations describing material properties and the relevant quantum-mechanics solutions, would be 'things that humans can write in linear form'
janus#0150: My guess is that it's possible (and practical) to learn on nothing but natural language and produce something intelligent enough to communicate the important/useful ideas we need to move forward. It's possible I'm wrong on the intelligence side; it's possible I'm wrong on the usefulness side. I doubt the second because imo what it needs is to do ML research, and to us that is obviously purely language-based. But I can see other things being useful, like GPU schematics or, more so, making money on the stock market.
cfoster0#4356: I think that story is possible, but I don't assign much weight on it. Primarily because I suspect that learning other modalities will be significantly easier (ie needs less data and smaller models) than learning language, so it might end up being a nearly-free improvement, especially on the large set of tasks that aren't naturally formulated as language tasks
janus#0150: That's an interesting point. I have generally considered it not worthwhile because it's low information density, and especially low _useful_ information density. But that does imply it will be easier, and the added brain module may be worth the inclusion.
rom1504#5008: > what it needs is to do ML research and obviously to us that purely language based
I disagree with this. What is "ML research"? I don't think the goal is "producing papers". It's solving important problems using ML, and solving fundamental ML problems to help with that.
What are important problems? I would say ultimately it's acting on the physical world to improve human lives.
The physical world is not language. It's atoms.
That's why I don't see how you can build an AGI purely on language. Trying to teach an AI about atoms (and macroscopic structures) through language *alone* seems massively inefficient. Maybe not impossible, but needlessly hard.
James#6892: Why do all problems have to be in the physical world? There's ton in the digital one, and many of them are roughly language problems.
rom1504#5008: sure, not all problems are in the physical world, and plenty of problems in the digital world are interesting and useful to solve (and some can be solved with language only)
but the question we discussed above is: is it enough? Can we solve all the important problems with language alone? I think that's not the case, because there are important problems that exist in the physical world only
James#6892: I see your point, but I don't think AI can solve all important problems. Even if it works in the physical realm, there would still be other and new problems, like social, emotional, and economic ones, and new ones we haven't seen before.
rom1504#5008: "solve all important problems" is what AGI is imo. But not believing in AGI possible existence is quite a reasonable opinion
cfoster0#4356: Tbh I think you can do everything you need with vision and proprioception/motor control. Even audio is probably unnecessary
𓅬 gabriel_syme 𓅬#3220: how do I stop downloading and start making things again? I have hundreds of papers, and I do read them (slowly), but I'm not spending any time doing/training/using stuff. I really need to find a dataset that sparks my interest, the layouts are quite simple atm
AI_WAIFU#2844: sit down, shut up, and start writing code.
AI_WAIFU#2844: AFAICT Vtubers still need a good male-to-female voice converter and I dropped that project a while ago
AI_WAIFU#2844: So do that if you literally can't come up with anything better
Jonnathan#1234: Yea basically this. If you're having a tough time balancing things you can always implement a more strict schedule with certain hours for coding/projects and certain hours for reading papers/theory.
𓅬 gabriel_syme 𓅬#3220: thanks, sounds like solid advice!
𓅬 gabriel_syme 𓅬#3220: my problem is mostly that if I don't care about the application in mind I don't do it. So I need to sit down and create a dataset that matters (for me) I think
EricHallahan#1051: > my problem is mostly that if I don't care about the application in mind I don't do it.
That sounds like me.
𓅬 gabriel_syme 𓅬#3220: I think in July (when I get a spell hopefully from work and research) I'll sit down, rank the different areas by interest and then sit down and make datasets for each one of them. That should help
𓅬 gabriel_syme 𓅬#3220: it really sucks my domain has 0 open source datasets and doesn't care to change that..
chirp#4545: Some interesting tidbits from Karpathy’s talk: 20 ML people collaborating to architect a giant multi-headed model, trained on 5000 GPUs, 3 supercomputers with 15000 GPUs ($150M) all told, trained with data sourced from millions of cars, an entire team that just looks for hard examples, 1M videos with dense labels, 1.5 PB of labeled (!) data
James#6892: What’s this for?
chirp#4545: The upshot: the final model estimates the distance and velocity of the car ahead of you far better than radar ever could
chirp#4545: https://www.reddit.com/r/SelfDrivingCars/comments/o4oyn6/andrej_karpathy_tesla_cvpr_2021_workshop_on/
James#6892: Ahh Tesla, makes sense
chirp#4545: @EricHallahan if you’re curious, the video has some really convincing plots!
EricHallahan#1051: I am just highly skeptical of the "Cameras are All You Need" approach in general. (Not that I do not believe them in their claims here.) Nothing can replace a redundant sensor with an entirely different technology backing it up.
kurumuz#5695: vision is all you need for sure
kurumuz#5695: andrej is too based
EricHallahan#1051: Ranging sensors are cameras. :bigbrain:
𓅬 gabriel_syme 𓅬#3220: I was watching this really nice presentation yesterday from Kilian Weinberger, and one of the examples was about self-driving cars and using cameras vs lidar. He was showing how representation, and understanding what works and why, is crucial. I think that's a better place to focus than specific hardware or architectures
𓅬 gabriel_syme 𓅬#3220: in his example, the way they radically improved performance was a transformation from depth map to point cloud (type of) representation.
𓅬 gabriel_syme 𓅬#3220: Question: if you were trying to sketch a map of the AI-drive applications, or a subset of that, how would you structure it?
Would it be based on model architecture? Would it be based on the data modality? Or the task/real world application at hand? Or is either of these not sufficient and I need some higher dimensional map?
My first few mind maps followed the data modality, it's an easy enough way but it does have its challenges. Wonder if anyone else has done something similar and is willing to share insights.
chirp#4545: AI-drive?
chirp#4545: Nah I think that’s pretty different
chirp#4545: Assuming you’re talking about self-driving, maybe you could start by drawing out the sensor-to-actuator pipeline. Then you can situate each type of model based on what part of the pipeline it assists. For example, Karpathy’s depth detector would go next to the “slow down if there’s something ahead of you” logic
chirp#4545: Also if you’re just thinking about Tesla, I think they actually use a single neural net for all their computer vision
𓅬 gabriel_syme 𓅬#3220: I was talking much more general, but that helps too! It's a focus on processes
𓅬 gabriel_syme 𓅬#3220: so like I'm making a mind map for AI-drive design applications, and not sure what's the best way to structure it
Napolean_Solo#2907: Hi, is there a parameter to add stop sequences in GPT-J?
Louis#0144: No
Louis#0144: Would be relatively simple to add though
nev#4905: https://mobile.twitter.com/reactjpg
nev#4905: can we train a meme clip with these
Deleted User#0000: 🤔 the clip would only be able to generate memes with things that have already been shown
zphang#7252: complete non-sequitur:
I predict that within the next 1.5 years, Yejin Choi's group will release some kind of fact-checking/factuality dataset titled "No Cap"
Trainmaster9977#3932: What's the current goal in terms of the next GPT-NeoX model, if I may ask?
EricHallahan#1051: It depends upon what you mean by "goal". I assume you mean our next milestone?
Louis#0144: I fucking love yejin's group
Louis#0144: they do stuff like this constantly
Louis#0144: the DELOREAN name made me snort laughing
zphang#7252: exactly
zphang#7252: and they've done some work on fact-checking
zphang#7252: I can feel it coming
EricHallahan#1051: https://ddl.stanford.edu/marty/overview
AI_WAIFU#2844: Wait for GPUs, because they're all on backorder.
zphang#7252: other name candidates include: SHEESH, SUS, POGGERS
&.#0001: what are your research interests?
𓅬 gabriel_syme 𓅬#3220: My interests are mainly within my domain of practice: architecture, engineering and construction. My current research is about developing a generative design system that sort of fulfills the hopes and dreams of the people who first thought of them in the 60s. There are a lot of parts in it, but some central ones are semantic generation, quality diversity, preference learning (and designer modelling) along with surrogate models for performance and fancy ways of visualizing and extracting (design) intelligence from designs, latent spaces, and preferences (I've yet to work on visualization).
In a way, it's an old fashioned goal for architecture, generative design. But I see it through the lens of AI and the incredible capabilities it can bring to the table.
𓅬 gabriel_syme 𓅬#3220: On the more practical side of things, when I get back to practice (i.e. in an office) soon, I'll definitely focus on more AI-driven tools that are closer to traditional research, like Q&A, retrieval, summarization, design assistant systems for NLP, 3D space stuff (navigation, point clouds, segmentation, generation), things with structured data, agent based stuff for behavioral design, etc.
&.#0001: Are these your reading interests, or are these your "I want to research by doing experiments" interests?
&.#0001: If the latter, what mix of GOFAI and deep neural networks do you plan to use?
𓅬 gabriel_syme 𓅬#3220: that's actually my job 🙂
&.#0001: Oh, alright 🙂
𓅬 gabriel_syme 𓅬#3220: and I plan to mostly focus on DL, although some GOFAI takes place I guess if you consider QD as one
&.#0001: I'm working on some things near this domain
&.#0001: Specifically a GPT research assistant (and a couple other projects)
𓅬 gabriel_syme 𓅬#3220: oh nice 🙂 I do think that's the hardest part btw, or one of them
𓅬 gabriel_syme 𓅬#3220: but in my case I don't want it to be necessarily a human like thing, just something that aids you
&.#0001: I have a concrete– albeit draft– plan and I plan to implement it
𓅬 gabriel_syme 𓅬#3220: hehe nice
𓅬 gabriel_syme 𓅬#3220: For me, I feel that's a distraction but mostly because it's not my focus
&.#0001: Let me know if you make progress, I'd be happy to share the result of our research with each other
&.#0001: Ah, ok
𓅬 gabriel_syme 𓅬#3220: like Autodesk's research head, right? He was talking about the systems of the future and all his focus was voice
𓅬 gabriel_syme 𓅬#3220: how he'd use voice to design, semantically
&.#0001: Voice is just a UI, no?
𓅬 gabriel_syme 𓅬#3220: my first comment was, why voice?
𓅬 gabriel_syme 𓅬#3220: it's a complexity no?
&.#0001: I feel people only want Voice because it's cool/sci-fi
𓅬 gabriel_syme 𓅬#3220: yeh, I don't think he's really aware of AI tbh so
𓅬 gabriel_syme 𓅬#3220: yeah exactly
&.#0001: One could slap voice on a humanlike system
&.#0001: And it would perform really well
&.#0001: But the humanlike part is the harder part imo
rom1504#5008: I'm surprised how poor Android voice recognition still is
&.#0001: And you get better system performance if you say 'please'
rom1504#5008: Like it's still 10x faster to touch the screen than to use voice. Even if you're doing something like driving, when you can't touch the screen
rom1504#5008: So voice interface still seems to be in "not good enough" land
&.#0001: Depends on the system imo
rom1504#5008: Is there something that works better than android and iphone voice recognition?
rom1504#5008: They both work pretty poorly
&.#0001: Honestly, this goal sounds like it would be better served by programming language and compiler theory: Define an abstract system of ideas that can be converted down into architecture and engineering.
Dromarion#3383: I'm still studying but my main goal is creative use so AI that draws and writes or at least saves a lot of heavy lifting so I can create more with the same amount of energy.
&.#0001: Almost all software is written by a 'compiler'. Humans define the software on an abstract level and the computer writes the executable code
rom1504#5008: Sounds very old AI
&.#0001: Imagine declaring a set of rooms and getting a detailed blueprint output
&.#0001: Oldie but a goodie
rom1504#5008: I bet that kind of stuff is already the prod baseline
𓅬 gabriel_syme 𓅬#3220: that never worked, I feel that people were closer to that approach before AI
𓅬 gabriel_syme 𓅬#3220: (modern AI)
&.#0001: Can you cite sources? Why didn't it work?
𓅬 gabriel_syme 𓅬#3220: like sometimes what I'm trying to do is exactly the same thing they did in the 60s, only now I can lol
&.#0001: 🤔
&.#0001: Why not just give the compiler a ton of control over the details
rom1504#5008: Can't handle diversity
&.#0001: Elaborate?
𓅬 gabriel_syme 𓅬#3220: the most famous stories are Negroponte's Urban5, Alexander's HIDECS 3, and Sketchpad I guess (although the latter not so much). Also YONA from Negroponte et al.
𓅬 gabriel_syme 𓅬#3220: those were incredible systems btw, for their time
𓅬 gabriel_syme 𓅬#3220: just didn't have :brr: quite possibly 😄
&.#0001: Hmmmm, yeah, I feel like if they had more compute, maybe
&.#0001: Maybe a mixed approach works; formal semantics in, implemented by a neural network out
𓅬 gabriel_syme 𓅬#3220: Urban5 was even a design assistant, that captured constraints and generated layouts. Wild for the 60s lol
𓅬 gabriel_syme 𓅬#3220: only instead of formal, we can just do semantics 🙂
𓅬 gabriel_syme 𓅬#3220: (although I'm not sure I always get what that means exactly lol)
&.#0001: Here is what I mean by formal semantics:
```
greenery >= 15;
bathrooms >= 1;
```
𓅬 gabriel_syme 𓅬#3220: but my big step has been semantic generation and how it allows for embedding of arbitrary constraints (well, at least possibly and with a lot of work)
𓅬 gabriel_syme 𓅬#3220: ah sure, yea semantic generation can do that
&.#0001: I define 'formal semantics' as, 'given a program input and a generated output, does this always fall inside the domain of what we asked for?'
&.#0001: Neural networks naturally tend to be 'does this fall inside what we asked for 70% of the time'
&.#0001: For instance, 'BUILD MORE PAPERCLIPS' is formal semantics
&.#0001: Even if it results in a runaway AGI
&.#0001: But 'make me happy :)' is not formal semantics
&.#0001: Both are useful, depends on the context
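A hypothetical sketch of the mixed approach being discussed: a generator (e.g. a neural net) proposes layouts freely, and a formal checker accepts only outputs that satisfy the declared semantics. All names and numbers are illustrative, mirroring the constraint snippet above.

```python
def satisfies(layout, constraints):
    """True iff the layout meets every declared constraint."""
    return all(check(layout) for check in constraints)

# Declared formal semantics, e.g. greenery >= 15, bathrooms >= 1.
constraints = [
    lambda l: l.get("greenery", 0) >= 15,
    lambda l: l.get("bathrooms", 0) >= 1,
]

proposals = [
    {"greenery": 20, "bathrooms": 2},   # would come from the generator
    {"greenery": 5,  "bathrooms": 1},   # violates the greenery bound
]
valid = [p for p in proposals if satisfies(p, constraints)]
```

The generator stays a 70%-of-the-time heuristic; the checker is what makes the overall output always fall inside the asked-for domain.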
𓅬 gabriel_syme 𓅬#3220: I understand yeah, feels like an important distinction to discuss if I ever write a thesis
𓅬 gabriel_syme 𓅬#3220: I do think I have both in there, for example I do performance based design so performance is almost always formally defined
&.#0001: One simple approach is to use PLT to define the possibility space, then use a heuristic system like a neural network to search over constraints– modern compilers do this, without neural networks
𓅬 gabriel_syme 𓅬#3220: but I also want to have more open ended design, in which case I wouldn't mind ways to break that
&.#0001: Eleuther AI is a project to find out if the SCP foundation is real, and if so, how much AI we need to build to be captured by them.
&.#0001: A modern compiler applies a series of heuristics to transform a high-level formal input (language) to a low-level formal output (blueprint)
&.#0001: With ML, your heuristic is a neural network
&.#0001: Which learned things automatically from examples
𓅬 gabriel_syme 𓅬#3220: so my problem with this is that it's really hard to embed constraints formally in design. Like, the paper I was focusing on before DALLE did it with constraint satisfaction and hard-coded constraints (typical for the domain). I know next to nothing about that (constraint satisfaction etc.). It was really nice work, but DALLE approaches did it without all that.
cfoster0#4356: At some point we'll figure out how to use program synthesis with NNs well. I dunno when but eventually
cfoster0#4356: Or maybe the NNs will figure it out for us
𓅬 gabriel_syme 𓅬#3220: I wish I was smart enough for this lol
𓅬 gabriel_syme 𓅬#3220: instead I'm leaving it to the NNs, and hope they can do
&.#0001: Do you have a well-defined constraint input? What about a well-defined middle-level intermediate representation (MIR)?
&.#0001: LLVM IR is a MIR
&.#0001: Basically a language that is close to the final form, but easier to manipulate (more abstract) than it
𓅬 gabriel_syme 𓅬#3220: ehm, I guess for the (very) specific case of layouts an intermediate representation is a graph based one, although it could even be coordinates and features? but not sure about the compiler level, I'm totally clueless there
&.#0001: I think it would be graph-based– once you have a final graph-based representation, you can lay it out with coordinates and features
𓅬 gabriel_syme 𓅬#3220: yep
&.#0001: Perhaps you could have a series of heuristics that take one graph and turn it into another?
&.#0001: Or subunits of the graph
&.#0001: Perhaps NNs let you globally superoptimize the graphs
𓅬 gabriel_syme 𓅬#3220: I thought of that, it was my goal before DALLE I think (and still probably is a goal).
𓅬 gabriel_syme 𓅬#3220: there's a lot of work on that, decades long probably
&.#0001: The issue is.... with a pure NN setup, how exactly do you stop the NN from violating the constraints?
&.#0001: Writing a GPT or CLIP prompt is like a wild goose chase
𓅬 gabriel_syme 𓅬#3220: well, I don't really mind if it does given that the system is meant to be an ideation engine, concept design
&.#0001: oh
𓅬 gabriel_syme 𓅬#3220: so it's not like 'put this in production'
&.#0001: concepts.... why can't you just feed a dataset in a special format to GPT-J🙂 ?
𓅬 gabriel_syme 𓅬#3220: also, the idea is that it somehow continuously learns (that's where designer modelling comes in I guess)
&.#0001: Replace GPT with a more appropriate architecture
𓅬 gabriel_syme 𓅬#3220: I kind of did that, or I guess DALLE did that
𓅬 gabriel_syme 𓅬#3220: but yeah one extension I'm looking at is that, just do it on the base of representation vs images (where DALLE works at)
𓅬 gabriel_syme 𓅬#3220: so just create text, coordinates and shapes
&.#0001: With CLIP, the NN does the idea work, you merely influence its process
&.#0001: Cool
&.#0001: I see where you are going now
𓅬 gabriel_syme 𓅬#3220: images are an architectural fetish lol
&.#0001: You can generate an image from a text or graph format easily
𓅬 gabriel_syme 𓅬#3220: I had to start that way I guess, but I agree an intermediate representation is more useful and it allows you to do stuff like that
&.#0001: See `.dot`
𓅬 gabriel_syme 𓅬#3220: yeah exactly
&.#0001: Yes
𓅬 gabriel_syme 𓅬#3220: also this has to work at large scales, so efficiency will matter
&.#0001: IRs are far more flexible for a human and computer to manipulate
&.#0001: Yeah flexible and efficient
𓅬 gabriel_syme 𓅬#3220: I'm planning to try this with a Decision Transformer approach soon (I hope)
&.#0001: I'd be happy to hear your results
𓅬 gabriel_syme 𓅬#3220: will try both images and a representation (possibly just coordinates etc)
𓅬 gabriel_syme 𓅬#3220: I'll let you know if I have any lol, it's doubtful 😄
&.#0001: Unrelated. Is anyone here doing robotics? Building a robot to cook meals?
&.#0001: If you have the physical movement working, but need recipes and/or instructions, let me know
CRISPR IQ300#6848: Did Snapchat ever publish how they did the genderswap filter? That came out 2 years ago, runs on mobile realtime, temporally very stable, works with head turns, so my thought is that it could work as a realtime ebsynth if trained as a more general style transfer. Why haven't we seen anything like that, or have we?
CRISPR IQ300#6848: Here's a compilation to refresh memory: https://www.youtube.com/watch?v=StKXab_5C9s |
CRISPR IQ300#6848: Oh wait, this is tiktok
CRISPR IQ300#6848: Did snapchat and tiktok use the same technique?
CRISPR IQ300#6848: https://www.youtube.com/watch?v=I3Zn5xhNwuA
CRISPR IQ300#6848: Some kind of hybrid facetracking/algorithmic/realtime GAN?
dmvaldman#4711: the vfx team behind deepfaketomcruise started a company for high-production deep fakes https://metaphysic.ai/
dmvaldman#4711: let many Tom Cruises bloom
CRISPR IQ300#6848: To clarify this is a response to my post or coincidence? Is the genderswap filter basically a deepfake filter?
CRISPR IQ300#6848: I was under the impression that deepfakes are not realtime but maybe I'm overthinking this, I'll check out the article.
dmvaldman#4711: didn't mean it to be related to your earlier posts
CRISPR IQ300#6848: The video of Tom Cruise looks so good I'm not sure if I missed something and they just plain hired him lol
CRISPR IQ300#6848: The majority of people who saw Trump's final speeches and even deepfake detecting AI's were convinced but couldn't prove the speeches were deepfaked, even ML engineers still aren't sure, perhaps they used this tech. It's incredible.
CRISPR IQ300#6848: Maybe I'm missing something but the raw output looks kinda mediocre, and they're showcasing their epic labor intensive process that comes after that? https://youtu.be/BcrrQAf42qw?t=12
gdawg16#0493: GPT-J IS OFFICIALLY BETTER THAN GPT3 https://www.youtube.com/watch?v=V0pceNYgELE&list=PLqJbCeNOfEK88QyAkBe-U0zxCgbHrGa4V
AI_WAIFU#2844: cringe
genai (Immortal Discoveries)#0601: I must go now but, I downloaded the openwebtext2 file from The Pile and unzipped it to find separate jsonl.zst files. I'm disappointed as now I need to stitch them together after figuring out how to decompress them... can you help?
dmvaldman#4711: the hype is real!
dmvaldman#4711: sota!
dmvaldman#4711: should be added to the lm-evaluation harness work in #lm-thunderdome
bmk#1476: no thanks
𓅬 gabriel_syme 𓅬#3220: ~~Sota? More like Leta, amiright?~~ |
triggerhappygandi#0001: Never thought deep learning would be a big clickbait content provider but here we are
Louis#0144: Why did this never occur to you
triggerhappygandi#0001: It was a niche
Louis#0144: Never
Louis#0144: Was literally never niche
triggerhappygandi#0001: It is in the sense that compared to clickbait farms like fotnite, amogus etc its not as well known to normies
triggerhappygandi#0001: Well.. pre GPT-3
bmk#1476: ~~who cares what normies think~~
Louis#0144: I only care what the 4chaners think
bmk#1476: that's literally worse
Louis#0144: oh
bmk#1476: 4chan is not a good place for social validation lmao
Louis#0144: I accidentally became 4chan famous the other day…
Louis#0144: I don’t even post there
Louis#0144: There was a thread about me
bmk#1476: poor decision on your part
Louis#0144: I know
Louis#0144: I should have posted
Louis#0144: Ur right
bmk#1476: no |
triggerhappygandi#0001: never let yourself be known on that website.
Trainmaster9977#3932: welp somehow I forgot about this really quickly but i guess yes?
EricHallahan#1051: We are pretty much in "wait for hardware to arrive" mode.
Trainmaster9977#3932: thats fair!
Trainmaster9977#3932: tho may i ask if you know how many parameters your next model will have?
Louis#0144: Chonk
bmk#1476: more than 2.7
bmk#1476: not billion, just 2.7
bmk#1476: (we really dont know yet lol)
Louis#0144: Ur in good hands though
Louis#0144: Or uh
Louis#0144: Wings?
Trainmaster9977#3932: ok fair
Trainmaster9977#3932: imagine if you somehow end up surpassing gpt-3's biggest model in a couple years
bmk#1476: ¯\_(ツ)_/¯
kurumuz#5695: its too big
bmk#1476: its never too big
EricHallahan#1051: https://what-if.xkcd.com/imgs/a/18/bb_more.png
EricHallahan#1051: a man in a hat suggests we try more
kurumuz#5695: i dont like how i turned to an infrastructure engineer |
kurumuz#5695: i dont even know how it happened
kurumuz#5695: :berk:
kurumuz#5695: @EricHallahan i propose stacking LMs, instead of layers
bmk#1476: that sounds like MoE, but like worse
kurumuz#5695: well if we can do something worse, that is an accomplishment too
Teemochu#8740: Only a conversation within a thread. I've been screenshotted but never mentioned on the same thread.
kurumuz#5695: this is not true btw
kurumuz#5695: lol
kurumuz#5695: they like goose though that is true
AI_WAIFU#2844: You lead a ~~writing assistant~~ MLOps company.
kurumuz#5695: they call him "the mysterious KG man"
Louis#0144: I’m referring to the bad thread
Louis#0144: The one where they went on a rampage
Louis#0144: For like a solid hour
bmk#1476: ~~MLOps~~ CoomOps
kurumuz#5695: Its seriously turning into that
Teemochu#8740: Nothing but mammals
bmk#1476: do they call him KGoose
Teemochu#8740: (Sorry for goose erasure, song popped in my head)
kurumuz#5695: oh god |
kurumuz#5695: i hope they dont
Louis#0144: THEY SHOULD
Louis#0144: OMG
kurumuz#5695: NO
bmk#1476: Knowledge Goose
EricHallahan#1051: !goose
Isaac McHorse#2007: https://cdn.discordapp.com/attachments/729741769738158194/857107625664970802/goose.jpg
bmk#1476: I have created an infohazard
AI_WAIFU#2844: Have you considered branching out into B2B applications once your cash pile stops burning?
EricHallahan#1051: Thanks Isaac.
kurumuz#5695: i will write a blogpost about why KGs will never work and release it to 4chan
kurumuz#5695: then they will hate the kgoose :)
Teemochu#8740: I can think of a *far* worse goose based infohazard involving a CDPR game
bmk#1476: I need a link
Teemochu#8740: One which fits NAI *perfectly*
kurumuz#5695: well our api is already public
kurumuz#5695: but maybe yea
AI_WAIFU#2844: Like your tool has *much* more potential than what you're currently using it for. If you support things like on premise hosting, you could pull in *serious* revenue.
kurumuz#5695: We just need time
AI_WAIFU#2844: Both front end and back end |
bmk#1476: you'd become a HF competitor lol
Louis#0144: We know
kurumuz#5695: everything is really chaotic right now
Louis#0144: lol
Louis#0144: Trust me
Louis#0144: We know
AI_WAIFU#2844: No, their front end makes them arguably better than HF
EricHallahan#1051: HF sucks at efficiency too lol
kurumuz#5695: our api is better too
Louis#0144: We need engineers though
kurumuz#5695: lol
Louis#0144: It’s so hard to find engineers rn
kurumuz#5695: ye
Teemochu#8740: Hmm... what to call a huggingface competitor that's about, um, more than hugging and more than the face...
kurumuz#5695: Lmao
kurumuz#5695: our ui implements pretty much everything openai playground has
kurumuz#5695: lol
AI_WAIFU#2844: They have a full fledged writing assistant. OAI didn't want to let people use their API that way, but Kuru has no such limitations.
kurumuz#5695: some things are missing but coming soon
Teemochu#8740: From what I gather the reason I haven't gotten more interviews to give as of late is everyone wants to stay where they are until the dust settles. |
Teemochu#8740: (As a Google SWE)
kurumuz#5695: we need faster inference tbh
guac#4716: (What’s kurus project?)
kurumuz#5695: working on optimizing it but we're short on engineers
AI_WAIFU#2844: welcome to the club
Teemochu#8740: Never thought "lack of candidates" would be why I'm only giving one interview a month
kurumuz#5695: finetune is working on the secret project
AI_WAIFU#2844: do tell
kurumuz#5695: lol
Teemochu#8740: Distillation?
Louis#0144: Is it secret
kurumuz#5695: @Louis do i tell yet?
Louis#0144: Oh yeah wait it is
Louis#0144: Don’t say
kurumuz#5695: ye probably not yet
Louis#0144: lol
guac#4716: Oh I didn’t know it was top secret my bad lol
kurumuz#5695: @AI_WAIFU sorry, cant say :P
kurumuz#5695: @guac nah my project is novelai
EricHallahan#1051: KG distillation |
AI_WAIFU#2844: you fuckers better not be making a sadpanda generator
Teemochu#8740: MoE involving fanfic 6B stacked with Sigurd?
kurumuz#5695: we gave up on distillation
Teemochu#8740: Good idea!
bmk#1476: is the secret project an enormous goose statue
kurumuz#5695: still hopeful about sparsity though
bmk#1476: :goose:
kurumuz#5695: @bmk its a lm that only repeats GOOSE GOOSE GOOSE GOOSE
kurumuz#5695: 99.99% accuracy on gooseset
Teemochu#8740: (Sorry Canadians this is one space America has you beat)
EricHallahan#1051: I am going to take a long-shot guess in that it is image generation related.
bmk#1476: :ultragoose:
Louis#0144: Oh yeah Leo@and I wanna make goose LM
kurumuz#5695: @EricHallahan we're pretty slow on that actually
Louis#0144: You wanted me to do that data preproc
Louis#0144: Where I replace every name with goose
Louis#0144: Right?
AI_WAIFU#2844: I don't think they have the compute to do that.(yet)
kurumuz#5695: @Louis this fucker is making me work on KGs
kurumuz#5695: help |
Louis#0144: LMAO
Louis#0144: I NEED YOU TO OPTIMIZE
Louis#0144: IT WORKS
Louis#0144: I DID 90% OF THE WORK
Teemochu#8740: My guess is the word "stack" has something to do with it
kurumuz#5695: it sucks though
kurumuz#5695: lol
Louis#0144: Yeah
bmk#1476: overflow
kurumuz#5695: paracomet sucks the way it is
kurumuz#5695: you overhyped it
Teemochu#8740: ~~Oh something will overflow~~
kurumuz#5695: :berk:
kurumuz#5695: @Louis we will make it good dw
EricHallahan#1051: no, it will exchange.
kurumuz#5695: the potential is there
Louis#0144: It’s pretty good if we get the memory transformer working and do rejection sampling
kurumuz#5695: then what you get instead of things you rejected?
Teemochu#8740: Don't reject the waifus and everything's fine, got it? :gooseknife:
kurumuz#5695: you will be missing information |
kurumuz#5695: losing track of some things
kurumuz#5695: this is not trivial
Louis#0144: Yeet
Louis#0144: It’s fine dw
Louis#0144: I got it
kurumuz#5695: i wanna reject the geese tbh
kurumuz#5695: :shiSmug:
Teemochu#8740: smol goosgrill doesn't want to reject you though
Louis#0144: To clarify the above btw paracomet is very picky about the kind of story you give it. It’s totally fixable
Louis#0144: But it’s just weird rn lol
kurumuz#5695: you need a bigger model
kurumuz#5695: im convinced you need to scale up
kurumuz#5695: @bmk won again
kurumuz#5695: stacking more layers
kurumuz#5695: :PepeCry:
EricHallahan#1051: The bitter lesson is called that for a reason.
Louis#0144: LMAO
Louis#0144: I agree
Teemochu#8740: Is there a link to where this bitter lesson comes from?
Louis#0144: We need to scale our KG stack |
Louis#0144: But we don’t have engineers
kurumuz#5695: we will both scale our models
Louis#0144: yet
kurumuz#5695: and KG
Louis#0144: So
kurumuz#5695: i am literally begging people to pay them money at this point
kurumuz#5695: lmao
kurumuz#5695: not for KGs though
kurumuz#5695: you should do KGs for free
Louis#0144: Wut
Louis#0144: Oh
kurumuz#5695: KG engineers are like inferior
kurumuz#5695: no need to pay them
Louis#0144: @bmk coming back to Eleuther
kurumuz#5695: :berk:
kurumuz#5695: lmao
kurumuz#5695: @Louis too late
Teemochu#8740: > Here’s a simple scenario. You’re at a party and you meet a cute girl. She’s a total babe and you have a lot of things in common. You get her number and text her the next day, but she doesn’t respond. A week later, you try again and she’s still not responding. So, you try again the next week, and she doesn’t respond.
>
> In other words, you’ve asked her out four times and she hasn’t given you a response. It’s not that she’s being a bitch or mean, it’s just that she doesn’t know you. You’re not sure how to approach her. |
>
> There’s a solution to this problem: the knowledge graph.
thank you AI. (unfinetuned 6b, not revealing my prompt)
kurumuz#5695: there is a solution to this problem: prompting
Teemochu#8740: Prompt was about knowledge graphs but too weebish for Connor
kurumuz#5695: lol
jesse#7865: make this thing a copypasta
AI_WAIFU#2844: DM me
Louis#0144: @kurumuz I worry people will see us bitching about KGs here
Louis#0144: And think NovelAI is doomed now
Louis#0144: LMAO
Louis#0144: Dw guys it’s not doomed
kurumuz#5695: we're closing the store
kurumuz#5695: its totally doomed
Louis#0144: LMAO
neko#5937: If you want faster inference, what do you think of using the Tensor Parallelism method? Since it looks like it can scale quite well. Maybe you and @finetune could contribute it to huggingface, for the larger EleutherAI models? I'm genuinely curious what you think since I never saw this method discussed here
https://github.com/huggingface/transformers/issues/9766
https://github.com/huggingface/transformers/issues/10321
kurumuz#5695: (jk)
Louis#0144: We can fit the model within a GPU |
kurumuz#5695: and we dont want to stay on huggingface
kurumuz#5695: lol
neko#5937: what do you want?
kurumuz#5695: they dont accept our pull requests either
kurumuz#5695: leave their inference repo asap
Louis#0144: Something designed around KG inference
Teemochu#8740: I use finetune's branch
kurumuz#5695: get something custom
Teemochu#8740: Much better
kurumuz#5695: yea
kurumuz#5695: its what we use lol
Louis#0144: Our KG stack for reference also uses a fork of transformers
AI_WAIFU#2844: You guys should try distilling to a MoE model. They have lots of capacity, but much smaller compute requirements.
kurumuz#5695: hmm
Louis#0144: Distillation doesn't really work for our use case yet
Louis#0144: We looked into imitation learning based distillation
Louis#0144: But it’s only for seq2seq
AI_WAIFU#2844: Did you only try it on dense nets?
kurumuz#5695: do i get kicked out of the club if i said i didnt read about MoEs yet
Louis#0144: :/ |
kurumuz#5695: @Louis dont we still believe in sparsity
Teemochu#8740: I keep seeing MoE as :catgirl3:
AI_WAIFU#2844: Like what I'm saying is distill 6B GPT-J down to 20B MoE
Louis#0144: I tried some basic KL divergence distillation experiments and it murdered the creative component of the LM
Louis#0144: Was not fun
Louis#0144: MoEs are sparse
Louis#0144: lol
Teemochu#8740: And I'm imagining a site called distilled.moe that has text generation from exactly what it claims to be
kurumuz#5695: thats what im sayin
kurumuz#5695: lol
kurumuz#5695: ik they are sparse
Louis#0144: But but 6b < 20b????
Louis#0144: ;p
Louis#0144: I actually wanted to distill down to a giant LSTM
Louis#0144: For the memes
AI_WAIFU#2844: Yeah but because the latter is MoE the compute requirements of the 20B model can be smaller than the 6B model.
Louis#0144: I’m kidding
Louis#0144: Dw
Teemochu#8740: *watches as it outperforms due to magical context length*
Louis#0144: Yo the imitation learning paper actually showed when you distill into an LSTM it improved performance on summarization and QA |
Louis#0144: LMAO
bmk#1476: moes are lower class citizenship than real parameters
Louis#0144: it was so wild
bmk#1476: :schmid:
Louis#0144: Even if the LSTM is smaller
kurumuz#5695: dont believe in RNNs
kurumuz#5695: they're not real
Teemochu#8740: Wouldn't be surprised to see multi-token embeddings end up doing well though
Louis#0144: Oh
Louis#0144: LSTMs are kickass for small inference though like KGs
Louis#0144: 15x faster than a transformer
Louis#0144: lol
Louis#0144: Is that not tempting?
kurumuz#5695: it is but we need big
kurumuz#5695: bigger
EricHallahan#1051: LSTMs are useless past small scales it seems.
Louis#0144: Train 6b on paracomet
Louis#0144: Distil into LSTM
Louis#0144: Indeed
kurumuz#5695: @Louis check sentenceLVM |
Teemochu#8740: Basically learn a feedforward that begins with two concatenated embeddings, has a 4s hidden layer, and ends with one embedding representing the two-token set. Repeat for 4, 8, 16, etc tokens. Now further back in context use these in place of the original one-token embeddings.
bmk#1476: > we need to improve inference efficiency
> time to use LSTMs
:confusedwat:
kurumuz#5695: wait wrong paper name
kurumuz#5695: @bmk goose is confusing me
Louis#0144: Genuinely
Louis#0144: They’re way faster for inference
bmk#1476: good - your power as a rationalist is to be more confused by fiction than reality
kurumuz#5695: no
Louis#0144: Just slower for training
kurumuz#5695: they're recurrent
kurumuz#5695: they can't scale well
Teemochu#8740: Fiction does not confuse me. It's the reality of people who decry it that confuses me. :ZoeBlehGif:
AI_WAIFU#2844: don't listen to anything that has "LSTM" and "efficient" in the same sentence unless it comes with "graphcore" and "marketing".
Louis#0144: LOL
kurumuz#5695: lol
kurumuz#5695: when is recurrence coming back
Louis#0144: https://arxiv.org/abs/2009.07253
Teemochu#8740: After recurrence comes back |
bmk#1476: recurrence expensive
Louis#0144: 15x speed improvement for inference
Louis#0144: Over a similarly sized transformer
kurumuz#5695: at 100m
kurumuz#5695: lel
Louis#0144: We don’t need a big model for KGs?
kurumuz#5695: transformers can parallelize
kurumuz#5695: WE DO
AI_WAIFU#2844: When we get something better than BPTT
Louis#0144: BUT TRAIN A BIG MODEL THEN DISTIL
kurumuz#5695: you need good reasoning for good knowledge extraction
kurumuz#5695: @Louis maybe
Teemochu#8740: "Our newest model has 6.9 distillion parameters"
kurumuz#5695: lol
kurumuz#5695: @Louis i think we want distilneo6b for paracomet
kurumuz#5695: it is a specific task
kurumuz#5695: should work
AI_WAIFU#2844: Yall should turn GPT-J into a transformer-xl then fine tune.
Louis#0144: Ofc
Louis#0144: This is literally what paracomet does |
Louis#0144: To GPT2
Louis#0144: lol
Louis#0144: They add recurrence and then finetune
AI_WAIFU#2844: >10^5 tokens of context for almost no extra compute overhead.
kurumuz#5695: i need to think more about paracomet
kurumuz#5695: im pretty sure the architecture can be improved
kurumuz#5695: their memory scheme is dumb imo
kurumuz#5695: we dont need it
kurumuz#5695: you need a big transformer
kurumuz#5695: @Louis dont get scared of scale
kurumuz#5695: embrace it
kurumuz#5695: we need it for KGs too
Louis#0144: lol
kurumuz#5695: i got scalepilled by @bmk
Louis#0144: Go to bed
kurumuz#5695: im scaaaaaaling
AI_WAIFU#2844: https://arxiv.org/pdf/1901.02860.pdf
Teemochu#8740: Thought cats were furry
kurumuz#5695: @AI_WAIFU i will read it after i wake up, thanks
kurumuz#5695: @Teemochu say no to scales |
Louis#0144: Wait how is it no extra compute overhead
kurumuz#5695: say yes to scaling
kurumuz#5695: @Louis magic
bmk#1476: transformer-xl is surprisingly uncommon in practice
bmk#1476: I rarely hear people talk about it
bmk#1476: idk why
bmk#1476: maybe all the people using it are just staying silent about it
kurumuz#5695: seriously though we want to scale, we seemed against scaling in public for some reason
bmk#1476: the only use case for transformer-xl I've seen is in the related works section of other equally unused transformer variants
kurumuz#5695: lol
kurumuz#5695: i think we should just design a better kg extractor
kurumuz#5695: i dont like atomic btw
kurumuz#5695: i should sleep lol
Louis#0144: Probably
AI_WAIFU#2844: It's 'cause it's a PITA to implement, at least before the local attention trick + rotary
kurumuz#5695: what does @AI_WAIFU thinks about KGs
AI_WAIFU#2844: With GPT-J it should be easier.
kurumuz#5695: im curious
Louis#0144: ~~He doesnt like them~~
AI_WAIFU#2844: Not quite |
kurumuz#5695: reee
AI_WAIFU#2844: @Louis don't put words in my mouth
Louis#0144: Fixed
AI_WAIFU#2844: I think there's potential for KGs and other high-memory data structures, but I think there's still some work that needs to be done/problems need to be solved before they become actually useful.
Louis#0144: I agree
kindiana#1016: recurence breaks iid
kindiana#1016: which hurts quite a bit
kurumuz#5695: i think we're working on most of the puzzle pieces
kurumuz#5695: the glue between them is scary though
kurumuz#5695: it is feature engineering
AI_WAIFU#2844: If you're doing feature engineering I'm fairly confident you're on the wrong path.
Louis#0144: For generalized KGs of course
Louis#0144: There’s the paper I sent you kuru on self supervising KBs
Louis#0144: That doesn’t require feature eng
Louis#0144: But for our use case feature eng is fine
Deleted User#0000: Get a hold of this joke:
*Knock knock*
AI_WAIFU#2844: What if you cache the old activations at a bunch of points to disk/ram, then randomly fetch from different points, moving forwards a bit each time and recaching
kindiana#1016: then you are computing gradients wrt old activations
kurumuz#5695: feature engineering requires good engineering too |
kurumuz#5695: i would prefer end to end
Louis#0144: No one is denying this
Louis#0144: Go to bed kuru
Louis#0144: Lol
bmk#1476: I spent a year working on stuff that got obsoleted by gpt3
bmk#1476: :harold:
AI_WAIFU#2844: Presumably that's less of an issue than IID right? Once you start to converge, so should the activations.
bmk#1476: don't waste your time on feature eng lol
kindiana#1016: > implying you do more than one epoch
kurumuz#5695: well need to think more then
AI_WAIFU#2844: Actually no, you can have up-to-date graidents.
AI_WAIFU#2844: Just lay out all your data in a big line.
kindiana#1016: thats very not iid then lol
AI_WAIFU#2844: evenly put sampling points equal to your BS.
AI_WAIFU#2844: yeah you're right, nevermind
AI_WAIFU#2844: I still think your activations will partially converge tho. At worst it should be at least as good as not having access to those activations in the first place.
kindiana#1016: well then you have train-eval distribution shift
kindiana#1016: always scary
kindiana#1016: while weights do tend to converge very closely, I'm not sure activations do the same
AI_WAIFU#2844: If weights converge, why wouldn't activations? |
AI_WAIFU#2844: Activations are a function of weights
kindiana#1016: yeah, a highly nonlinear function
kindiana#1016: lmao
EricHallahan#1051: pls go to slp
AI_WAIFU#2844: That can't be right tho. Otherwise NNs would be difficult to optimize.
kindiana#1016: :thonk:
kindiana#1016: I mean, look at how small LRs are
kindiana#1016: lmao
kindiana#1016: and thats just for taking a single step
AI_WAIFU#2844: I bet our architectures specifically have the property that their activations change almost linearly with weights locally.
AI_WAIFU#2844: For a very loose definition of "local"
Louis#0144: So transformer XL is a meme then?
AI_WAIFU#2844: No.
kindiana#1016: I don't think recurrent caching is of too much value, but the pos encoding which allowed for sliding window generation is useful
kindiana#1016: (though superceded by rope)
genai (Immortal Discoveries)#0601: I downloaded from The Pile and unzipped it and got jsonl.zst files, how do i read them?
AI_WAIFU#2844: git gud
kurumuz#5695: lol
guac#4716: tar ✴️
gdawg16#0493: i watched this video so now i know about knowledge graphs https://www.youtube.com/watch?v=xop5tC9T5xM&ab_channel=stanfordonline |
cfoster0#4356: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
genai (Immortal Discoveries)#0601: why would you compress using zstd instead of just 7zip....
kurumuz#5695: because its based
kurumuz#5695: git gud
EricHallahan#1051: :smallbrain: 7z
:bigbrain: zstd
genai (Immortal Discoveries)#0601: can't even find easily the zstd software Lolz
genai (Immortal Discoveries)#0601: did the files really have to be separate? And not just 40GB in 1 file?
genai (Immortal Discoveries)#0601: cuz now I'll have to stitch em together too
genai (Immortal Discoveries)#0601: maybe on linux
neko#5937: `tar -I zstd -cvf x.tar.zstd x`
neko#5937: `tar -I zstd -xvf x.tar.zstd`
neko#5937: what do you mean?
neko#5937: it's a single zstd file
neko#5937: oh nvm you're talking about the piile
genai (Immortal Discoveries)#0601: I downloaded the openwebtext2 from the Pile and inside are hundreds of small files... maybe because I didn't use that command you gave above, which I bet resolves seeing many files now that I think about it
neko#5937: i was referring to 6b my bad
neko#5937: idk anything about the pile
genai (Immortal Discoveries)#0601: right now i'm trying this and it's haunting me with that little picture of a penguin https://sourceforge.net/projects/zstd-for-windows/
neko#5937: use ubuntu on windows? |
neko#5937: i hope windows 11 is a linux distro lol
EricHallahan#1051: It isn't.
genai (Immortal Discoveries)#0601: they said windows 10 is the last version
kurumuz#5695: would be nice
EricHallahan#1051: I ran the Dev build.
kurumuz#5695: win 11 seems pretty bad
kurumuz#5695: i dont like it
neko#5937: oof
EricHallahan#1051: I actually think it is a pretty clear-cut improvement assuming that 21996.1 is an unfinished Dev build as labeled.
kurumuz#5695: gpt-j huggingface card wen
kurumuz#5695: :P
kurumuz#5695: well wrong channel
EricHallahan#1051: I have precommitted to a few things for the event already.
kurumuz#5695: i thought that PR would get merged fast lol
EricHallahan#1051: lol
kurumuz#5695: i was really naive ig
EricHallahan#1051: It was ten days between launching Neo weights and the HF implementation getting merged.
kurumuz#5695: it got merged?
EricHallahan#1051: Well there is a `GPTNeoModel` class. :P
kurumuz#5695: https://github.com/huggingface/transformers/pull/12243 |
kurumuz#5695: 🤔
zphang#7252: Neo, not J
EricHallahan#1051: pls slp
kurumuz#5695: ohh
kurumuz#5695: yea
kurumuz#5695: need sleep
kurumuz#5695: i cant read
EricHallahan#1051: https://eleuther.ai
EricHallahan#1051: Those researchers must be working really hard to open source research. :berk:
EricHallahan#1051: It seems to be the only thing they do. :3berk:
genai (Immortal Discoveries)#0601: is zstd for windows? It won't run...
neko#5937: its a compressed file
genai (Immortal Discoveries)#0601: i cant uncompress them...
genai (Immortal Discoveries)#0601: needs zstd software
neko#5937: you can try using colab then
kindiana#1016: good luck, we are not here for tech support
genai (Immortal Discoveries)#0601: but yous just talked about non-AI stuff for hours above...
genai (Immortal Discoveries)#0601: at least, actual work i mean
CRG#8707: Also should make generation past 2048 tokens true O(n^2) instead of O(n^3). (Since no recomputation)
Gurkenglas#7362: Is there a repository of tools such as natural language shell which works as soon as you plug a language model into one place? |
Sid#2121: @Dirk Groeneveld sorry for the out of the blue ping - but do you know if mc4 is shuffled? (i.e if i take a block of mc4 supplied by allennlp - is it a truly random sample by date? or will all items in a block share a similar date range)
Dirk Groeneveld#5137: I don't believe it's properly shuffled. Documents in one file are likely to come from the same CC dump, so they will be from the same time period, give or take.
I'm not 100% sure of this though. You could look at the code to check whether there is a shuffle step.
Sid#2121: I had a little look and couldn't see one - but checked the timestamps within a single chunk and they varied from 2015 - 2019. Do you know if the timestamp is when they were crawled? I assumed that was the case
Sid#2121: there is this beam.Reshuffle() step but I can't really tell what that does
Sid#2121: the documentation is not really helping me :berk: https://beam.apache.org/documentation/transforms/python/other/reshuffle/
kindiana#1016: you gotta shuffle it yourself anyways after tokenizing
Sid#2121: yeah but i want to train a tokenizer
Sid#2121: and want to know if i can just take a few random chunks
Sid#2121: or if i need to reshuffle the whole thing
kindiana#1016: ah
kindiana#1016: does it matter that much if its not iid for tokenizing? lol
Sid#2121: as long as it's not literally all from one month / year, probably not.
Dirk Groeneveld#5137: Maybe the easiest then is to download one file and check.
Sid#2121: in one random chunk - the earliest date is 18/05/2013, and the latest is 15/08/2020. So it seems it is shuffled - or I'm just confused about what 'timestamp' means exactly.
Dirk Groeneveld#5137: They might end up accidentally shuffled by something else. If you're lucky, they are grouped by hash of the URL or something.
StellaAthena#3530: That sounds “random enough” to me
Louis#0144: anyone have a moment to let me pick their brain about beam search stuff
Louis#0144: particularly caching on transformers w/ beam search
cfoster0#4356: uhh do you wanna just ask your question? Surely there are people around |
Louis#0144: lol sorry
Louis#0144: ok so I have a batch of 32, where half of them start with prompt X and the other half start with whatever prompt. I want to do shared caching on beams where the starting prompts are identical. Is there an easy way to implement this without making me cry?
Louis#0144: or do I need to do a custom caching scheme
Louis#0144: nvm
Louis#0144: solved it
Louis#0144: lol
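For anyone hitting the same question: the shared-prompt caching can be illustrated with a toy memoized "forward pass". The cumulative sum below is a stand-in for a transformer encoding; a real version would compute the model's `past_key_values` once per unique prompt and reuse them across that prompt's beams.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def encode_prompt(prompt_ids):
    """Toy stand-in for encoding a prompt into a KV cache."""
    cache, total = [], 0
    for t in prompt_ids:
        total += t
        cache.append(total)
    return tuple(cache)

# Batch of 32 beams: half share prompt X, half share another prompt.
batch = [(1, 2, 3)] * 16 + [(9, 9)] * 16
caches = [encode_prompt(p) for p in batch]
print(encode_prompt.cache_info().misses)  # -> 2 (one encode per unique prompt)
```

The point is only that the expensive step runs once per distinct prefix, not once per beam; everything else about a real beam-search cache is more involved.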
inox#5400: why is this JAX TPU tutorial a comment in an issue? https://github.com/google/jax/issues/2108#issuecomment-866238579
cfoster0#4356: I love that format and I don't know why
janus#0150: we should get that guy to join Eleuther :berk:
Sid#2121: can't tell if shitposting but shawwn hung around here in the early days, helped build the pile, etc.
Sid#2121: he runs another ML discord from whence eleuther was spawned
inox#5400: I only just noticed the short fiction :goose2:
ersatz#0001: > At low parameter counts, the most efficient use of parameters is to just learn how to write correct words with the right frequency. But this has diminishing returns, so at a certain size it becomes more efficient to learn relationships between words. But the same applies again, at a certain point the useful direct word relationships are depleted, so instead the AI has to learn something even more higher level such as the relationships behind grammatical rules. And this continues with higher and higher levels of abstraction.
>
> So essentially, higher parameter counts force you to learn more abstract relationships because the less abstract ones have already been squeezed dry. Which would also explain why GPT-3 is so good at few-shot learning (at some point the relationships got so high level, that it has "learned to learn").
Is this correct?
EricHallahan#1051: It is a good hypothesis, yes.
ersatz#0001: so bigger = better? no ceiling?
EricHallahan#1051: Diminishing returns.
ersatz#0001: that's crazy |
inox#5400: that suggests you could make loss function that reward those abstract relationships more than autoregressive max likelihood
EricHallahan#1051: "Better" is an unconstrained objective, there will be a practical limit given the constraints of reality.
bmk#1476: citation needed
bmk#1476: line looks pretty straight to me https://cdn.discordapp.com/attachments/729741769738158194/857381511514816542/unknown.png
EricHallahan#1051: Well, assuming it doesn't find some higher level of understanding.
bmk#1476: and loss makes more sense in logspace, before you point that out
bmk#1476: and so does compute
ersatz#0001: what does this mean? more compute = better text prediction?
Sid#2121: Yes
Sphinx#2092: Can't wait till it gets negative loss.
Sid#2121: :chonk:
EricHallahan#1051: The straight line is the compute-optimal frontier.
bmk#1476: this is a logplot
ersatz#0001: how many orders of magnitude we are from a human level of text prediction?
inox#5400: isn't it already superhuman at next word prediction?
inox#5400: someone get karpathy to predict words
ersatz#0001: maybe next word prediction isn't the right metric then
ersatz#0001: the ability to predict an entire text then?
bmk#1476: if only there existed benchmarks of ability at various language tasks
bmk#1476: and if only there existed a way to easily test models on all of those benchmarks at once |
ersatz#0001: my question is if the relationship between the number of params and prediction quality remains the same how many orders of magnitude away are we from reaching human level?
ersatz#0001: based on that
Sid#2121: Gpt-3 is already human level at some tasks
Sid#2121: It really depends what you mean by human level
ersatz#0001: I hope so, and my phone is much better than I am at crunching numbers and stuff, but you know what I mean
ersatz#0001: like
ersatz#0001: how long before I can say "GPT-5, write me a prequel to Harry Potter" and he answers "Yes master, here are your 10 million words on Harry Potter's great uncle"
Sid#2121: Would an average human be able to do that?
ersatz#0001: average? no
Sid#2121: So I really don't know what you mean. 10 different people saying “human level” tend to mean 10 different things
ersatz#0001: https://cdn.discordapp.com/attachments/729741769738158194/857384470353412106/d39.png
bmk#1476: damn beat me to it
EricHallahan#1051: That scene is really satisfying.
AI_WAIFU#2844: We're already well past human level text prediction. Go ahead, try to predict the next word anywhere close to as well as GPT-J.
Teemochu#8740: The average human can't play Go but can learn how to play Go.
Teemochu#8740: The latter is *way* harder for a system to do
bmk#1476: oh my god, the volcano is erudite
ersatz#0001: yeah, a better metric might be full-text prediction
Dromarion#3383: What sort of things are expected from "average humans" anyway?
CRG#8707: GPT-3's 1.8 loss intuitively means that it guesses the next token correctly 1/6 of the time. (Vs the 1/9.5 of GPT-2) |
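(the conversion is just exp(−loss): a per-token cross-entropy of L nats means the model assigns the true next token a geometric-mean probability of e^−L — quick sanity check, nothing model-specific, with GPT-2's 2.25 back-solved from the 1/9.5 figure:)

```python
import math

# per-token cross-entropy (nats) -> "1 in N" odds on the correct next token
def token_odds(loss_nats):
    return 1 / math.exp(-loss_nats)

print(round(token_odds(1.8), 1))   # GPT-3: ~6.0, i.e. right ~1/6 of the time
print(round(token_odds(2.25), 1))  # GPT-2: ~9.5 (2.25 ~= ln(9.5))
```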
Teemochu#8740: The volcano is erotic. 😳
ersatz#0001: I guess it becomes almost impossible to get better at predicting the next token after a point
CRG#8707: Even if you get very close to the "irreducible loss / noise floor", the quality of your representation keeps improving
CRG#8707: See: irreducible loss and ImageNet classification in the second scaling laws paper.
𓅬 gabriel_syme 𓅬#3220: share it for posterity, don't be the person closing the issue with that sentence :berk:
Louis#0144: WHO DARES SUMMON ME
Louis#0144: hi
Louis#0144: LMAO
Louis#0144: im writing the code rn
Louis#0144: im p sure it'll work
Dromarion#3383: Now that I think about it if a human had the mean skill level in every skill, they'd be pretty extraordinary just from the range of things they're able to do.
Louis#0144: also i cant share code because its very specific to knowledge graph decoding
cognomen#6297: humans can't output accurate probability distributions of possible next tokens across tens of thousands of words
𓅬 gabriel_syme 𓅬#3220: I'm half kidding no worries 🙂
cognomen#6297: gpt 1 humans 0
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/857386217300426802/32d9763e2703b98bdb4bd3b847418d46-1.png
bmk#1476: the mean skill level across the entire population is a really really low bar tho
bmk#1476: in one day's practice you can probably do better than 90% of the world population on any given thing
bmk#1476: for most things, at least
bmk#1476: and ofc the more obscure the thing is the lower the bar is |
ersatz#0001: correct me if I'm wrong but basically you mean that predicting the right tokens in a text amounts to predicting the right concepts because you have to predict the concepts to predict the tokens?
Teemochu#8740: the mean skill level is a low bar, the mean skill level after some intentional learning is a mildly higher bar but overall far harder for a computer to do (the "learning" part I mean)
ersatz#0001: so you can always get better at predicting tokens by getting better at predicting concepts?
ersatz#0001: or something like that?
Teemochu#8740: "GPT-3 can play chess like a 3-year-old" is moderately surprising; "GPT-4 can learn a new game of similar complexity to chess" would be far more so IMO.
ersatz#0001: no ceiling except the very complexity of reality?
CRG#8707: I'd say it's backwards, but yeah.
Teemochu#8740: (btw tokens are grossly overencoded, by a factor of the hidden dimension)
CRG#8707: See also: https://arxiv.org/abs/2012.13255 larger models finding lower dimensional representations / compressing better than small models. https://arxiv.org/abs/2002.11794
Teemochu#8740: "You have one of 50000 things, right?"
"Yup."
"And that means you could write each one in two bytes?"
"Yup."
"And you use 24 kilobytes to encode one?"
"Eeyup."
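(back-of-envelope, assuming GPT-3-scale numbers — vocab ~50k, hidden dim 12288, fp16 — which is where the "two bytes vs 24 kilobytes" comes from:)

```python
import math

# a token *id* only needs ceil(log2(vocab)) bits...
vocab = 50_000
id_bytes = math.ceil(math.log2(vocab)) / 8      # 16 bits -> 2 bytes

# ...but its *embedding* is hidden_dim values at 2 bytes each (fp16)
embedding_bytes = 12_288 * 2                    # 24,576 bytes ~= 24 KB

print(id_bytes, embedding_bytes)
```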
ersatz#0001: to come back to my question, how many orders of magnitude do you think it would take to scale GPT-3 to get to a human level on say writing entire books?
ersatz#0001: and by that I mean the level of the best human
Sid#2121: to write whole books we might need some architectural changes, or models on top of the LM
Sid#2121: even GPT-3 only has 2048 tokens of effective memory
ersatz#0001: how hard would it be to make its memory unlimited? |
Sid#2121: very
Sid#2121: at least, I've not seen a satisfactory solution to that problem yet
Sid#2121: the softmax at the heart of the attention layer is quadratic complexity - so it gets infeasibly expensive as you scale up the length. There are some architectural tricks like Transformer-XL, and maybe some sampling tricks, but I've not seen anything work well at long context lengths
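(concretely — the score matrix attention materializes is n × n, so memory quadruples every time you double the context; a minimal numpy sketch, not any particular transformer's code:)

```python
import numpy as np

def attn_scores(q, k):
    # q, k: (n, d) -> row-softmaxed (n, n) attention-score matrix
    s = q @ k.T / np.sqrt(q.shape[-1])
    s = np.exp(s - s.max(axis=-1, keepdims=True))  # stable softmax
    return s / s.sum(axis=-1, keepdims=True)

for n in (512, 1024, 2048):
    scores = attn_scores(np.ones((n, 64)), np.ones((n, 64)))
    print(n, scores.nbytes)  # ~2 MB -> ~8 MB -> ~33 MB: 4x per doubling
```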
EricHallahan#1051: **Infinite *power!***
EricHallahan#1051: Or something like that.
EricHallahan#1051: ¯\_(ツ)_/¯
ersatz#0001: how do animals solve the problem?
Sid#2121: show me an animal with infinite memory
Sid#2121: lol
CRG#8707: There are methods for fine-tuning with a longer context after training, like linear attention (doesn't work very well) and TrXL caching (context × layers, so not unlimited, and it needs relative/RoPE embeddings)
ersatz#0001: actually it's as if you were talking about short term memory
Sid#2121: why do i never see anyone using TrXL lol
CRG#8707: The hierarchical memories transformer paper was interesting.
Sid#2121: yes - it might be better to think of the long term memory as the weights, and the short term memory as the context
Teemochu#8740: The way I've had rattling through my head recently is to learn embeddings for multi-token sequences (using concatenation of two n/2-token embeddings and a feedforward MLP) and use those for the further-back context input
Teemochu#8740: you may need to retvrn to learned positionals for that since the new "context" wouldn't have each embedding be equal size
cfoster0#4356: Hmm I mean the x axis is log scaled, so it could reasonably be interpreted as diminishing marginal returns. (ie an extra petaflop/s-day is worth less and less)
Sid#2121: gpt has a long term memory in that it can 'recall' items from its training data
ersatz#0001: so training and inference should be the same process?
CRG#8707: I think no one really used relative embeddings in autoregressive models (prior to RoPE) |
Sid#2121: no they're good questions lol, and ones lots of people are trying to figure out
CRG#8707: https://arxiv.org/abs/2105.14039
EricHallahan#1051: They are only beginner questions in that they are the ones that we naively ask ourselves before realizing that they are hard.
bmk#1476: but compute increases exponentially over time, rather than linearly
ersatz#0001: I suspect that the hardware would have to be redesigned to allow training and inference to be done in the same process and to enable a more than short term memory 🤔
cfoster0#4356: Sure sure. From the perspective of any particular user deciding how much to invest right now in training their model, it's diminishing returns, but investment in scaling by the field as a whole doesn't suffer from that
Sid#2121: so you're saying it's low hanging fruit 😛
bmk#1476: I think getting more money is also an exponential thing, personally
bmk#1476: getting 10x more money is usually less than 10x as hard
bmk#1476: and also something something economies of scale
Sid#2121: but the money:happiness payoff is a log scale lol
Sid#2121: getting 10x more money is barely ever 10x as rewarding
bmk#1476: ~~bold of you to assume I want happiness, what am I, a hedonic utilitarian?~~
bmk#1476: I mean what if I just really like geese
cfoster0#4356: damn I need you at the negotiating table
ersatz#0001: do you think that my intuition is right? that the cap is going to be the ability for the model to update its weights for long-term memory and therefore that another kind of hardware is needed to enable it?
ersatz#0001: or am I missing the point?
cfoster0#4356: the intuitions are good. I dunno if hardware is the answer, but at this point there isn't a clear path forward, so why not
ersatz#0001: we know that animals do it with neurons and so some hardware that would behave like a neuron could address the issue I guess
ersatz#0001: a simple neuron simulation would be too complex I guess |
Sid#2121: or we can do even better than nature. There's a reason cars don't have legs :berk:
ersatz#0001: you all seem to be saying that nobody has any idea how to do otherwise
Sid#2121: I'm not sure why these problems couldn't be addressed at a software level. Like idk why you're jumping immediately to neuromorphic hardware
Sid#2121: i'm sure lots of people have *ideas* but no one has shown one to work yet, at least to my knowledge. It's a pretty hard question lol.
ersatz#0001: this is the only way we know that works and no one is proposing an alternative (as far as I know) that is as efficient
AI_WAIFU#2844: We bicker about this question constantly.
AI_WAIFU#2844: Ah I see you got sucked in too
Sid#2121: idk about you but looks to me like transformers on GPUs are currently working much better than anything on neuromorphic hardware
ersatz#0001: totally but not for the problem of updating weights as quickly as inference
ersatz#0001: as neurons do
ersatz#0001: because it is the same process for a neuron
Sid#2121: can you show me some numbers that will back up your statement that neuromorphic hardware can "update weights as quickly as inference" better than GPUs? or are you just kinda pulling that out your ass
Sid#2121: because I've seen nothing that suggests that's the case
ersatz#0001: I'm talking about neurons not "neuromorphic hardware"
ersatz#0001: I know nothing at all about "neuromorphic hardware"
Sid#2121: well, we don't know how to make neurons lol. Or how they work. You're just adding an extra layer of difficulty.
Sid#2121: Before planes were invented - people tried to make flying machines with wings that flapped like birds. Look how that worked out.
Sid#2121: Nature doesn't always offer the best solution. It's just had billions of years to stumble upon some pretty good ones.
AI_WAIFU#2844: Yeah, and the people who have tried replicating the brain by copying how it works have been BTFO'd so badly by matmuls and autodiff that you need a really strong argument to justify doing anything otherwise.
ersatz#0001: if someone has an idea of how to update the weights as fast as inference (like a neuron) without mimicking the behavior of a neuron then I'm interested, maybe even the very premise is wrong and it's not necessary to update the weights of a model (to have a long term memory like an animal) |
AI_WAIFU#2844: Well there was :schmid: with fast weight updates, and other datastructures,
AI_WAIFU#2844: But that was a while ago and it comes with some problems when you try to scale it up.
janus#0150: -1 https://cdn.discordapp.com/attachments/729741769738158194/857397622087090226/Screenshot_from_2021-06-23_19-10-59.png
Sid#2121: *his pork*
ersatz#0001: What is this paper?
cfoster0#4356: https://people.idsia.ch/~juergen/fast-weight-programmer-1991-transformer.html
ersatz#0001: also why are people on pretty much all deep learning discord shitposting about this dude having a paper about everything since the 90s?
Sid#2121: because he has had a paper about everything since the 90s
cfoster0#4356: It's like, only 60% shitposting and at least 40% truth
ersatz#0001: some bot on another discord is adding (Schmidhuber, 1991) to every post linking to arXiv
Sid#2121: lmao
EricHallahan#1051: wait is Schmidhuber real? i thought that was fake
cfoster0#4356: yo be real
bmk#1476: i thought schmidhuber was a made up german person, like hegel
ersatz#0001: I don't want to post examples of memes about him on this non-meme channel, but he is very popular (or unpopular) for some reasons in the deep learning community.
ersatz#0001: even Yann LeCun posted a meme about him
kurumuz#5695: get real
ersatz#0001: I don't even know if people think he's lying or?
ersatz#0001: Is he taking credit for things he doesn't deserve?
Sid#2121: he genuinely did pioneer a lot of things in the field, but also has a tendency to overstep the line a little |
AI_WAIFU#2844: On the contrary, he was well ahead of his time, and knows some fundamental shit that has been mostly lost to DL researchers.
AI_WAIFU#2844: Hey that gives me an idea. We should replicate his factorial code paper with modern levels of compute.
ersatz#0001: like Google Colab Pro levels of compute or more?
AI_WAIFU#2844: I give it 10% chance it BTFO's contrastive representations.
AI_WAIFU#2844: Nah, 256 tpu cores.
ersatz#0001: you people are rich af
Sid#2121: not really lol
Sid#2121: just memed our way into lots of compute
Sid#2121: we don't really pay for pretty much anything
Sid#2121: our costs as an organization are very low
ersatz#0001: you must be some very convincing memers to get 256 TPU cores
bmk#1476: oh shit im rich? i never even noticed
cfoster0#4356: 👀 I can't believe I didn't know about these
AI_WAIFU#2844: Really old shit, never saw anyone do anything like it afterwards. (I have some suspicions as to why tho)
http://mediatum.ub.tum.de/doc/813184/409390.pdf
chilli#5665: Schmidhuber just exposes the unfortunate truth that many ideas in machine learning are plentiful and not very difficult.
chilli#5665: The hard part (and what you get credit for) is demonstrating convincingly to other people that it works.
chilli#5665: The ideas themselves are not worth much
𓅬 gabriel_syme 𓅬#3220: so very true, in almost every field
ersatz#0001: https://cdn.discordapp.com/attachments/729741769738158194/857412051336298496/Screen_Shot_2021-06-24_at_02.09.10.png |
bmk#1476: this citation can refer to any of dozens of things
cfoster0#4356: So far this is sounding like some kind of funky autoencoder? But where there's also, like, an adversarial loss to make code dimensions unpredictable from one another?
bmk#1476: something something annus mirabilis
AI_WAIFU#2844: :berk::schmid: :berk::schmid: :berk::schmid: :berk:
ersatz#0001: https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html
ersatz#0001: that?
ersatz#0001: also why is he referring to himself in the third person?
ersatz#0001: is this a case of https://en.wikipedia.org/wiki/Royal_we
cfoster0#4356: it's common in academic writing
ersatz#0001: okay but that's a blog post
Dromarion#3383: I encounter the "We" meme mostly from soccer fans.
cfoster0#4356: *he's an academic, writing*
ersatz#0001: back in 1990 he was already using "self-supervision" rather than "unsupervised", the terminology Yann LeCun is pushing people towards now
ersatz#0001: the math is surprisingly simple
ersatz#0001: maybe he really did invent half the field in the early 90s
bmk#1476: i choose to believe it's the royal plural
bmk#1476: even in a paper
bmk#1476: all academics are royalty
gdawg16#0493: https://cdn.discordapp.com/attachments/729741769738158194/857416732222160906/DTxYg53.png
bmk#1476: IT IS WEDNESDAY, MY DUDES
gdawg16#0493: :ultragoose:
JYT4040#2180: Ha! I tried downloading the openwebtext2 file and inside I see it also isn't "ready" to use right away either. I don't think I'm going to bother lurking here seeing how they treat other members; there are too many emojis and too much immature attitude everywhere you look.
EricHallahan#1051: I am sorry if you feel that way.
𓅬 gabriel_syme 𓅬#3220: Do you know George Grandey? That comment honestly reminded me of him. Don't let our humorous attitude make you ignore the vast amount of knowledge and expertise you can gain through this place simply by showing up
mega b#6696: *audible gasp
𓅬 gabriel_syme 𓅬#3220: (disclaimer: that knowledge is from everyone else but me lol)
𓅬 gabriel_syme 𓅬#3220: !goose
Isaac McHorse#2007: https://cdn.discordapp.com/attachments/729741769738158194/857467820522602507/goose.jpg
𓅬 gabriel_syme 𓅬#3220: damn, good pick!
EricHallahan#1051: I am loving this dataset. It is actually really high quality.
𓅬 gabriel_syme 𓅬#3220: yeah
𓅬 gabriel_syme 𓅬#3220: this one, the look in the eyes, and even chewing the little toothpick
𓅬 gabriel_syme 𓅬#3220: maybe we can hook up a bot that lets us annotate them after someone calls one image?
𓅬 gabriel_syme 𓅬#3220: semantic goose generation baby
guac#4716: Man I know that one it’s around the first 20 I was going to make it my profile pic cause it looks like a cowboy
𓅬 gabriel_syme 𓅬#3220: yeah exactly lol
𓅬 gabriel_syme 𓅬#3220: needs the hat, someone will put it on
guac#4716: I have the bot in a GitHub gist if you want to mess with it lmao
𓅬 gabriel_syme 𓅬#3220: I could try and fail
𓅬 gabriel_syme 𓅬#3220: but more importantly, I could copy all your work and make bots! |
guac#4716: https://gist.github.com/jon-tow/7fc6bbcb12e722ce0dd7890023b88269
𓅬 gabriel_syme 𓅬#3220: holy hell these are simpler than I thought lol
guac#4716: (Thank bonamputee for the hosting)
𓅬 gabriel_syme 𓅬#3220: well done discord
guac#4716: Yeah great api lol
𓅬 gabriel_syme 𓅬#3220: for real
𓅬 gabriel_syme 𓅬#3220: will try and see how they send info back in boneamputee's bots maybe
𓅬 gabriel_syme 𓅬#3220: we should be able to slowly annotate these 😄
𓅬 gabriel_syme 𓅬#3220: maybe it prints the index, and we can !annotate_goose index "text"
guac#4716: lmao oh yeah that’d probably be like 5 lines of code it’d be easy. We’d just need some server to store the annotations in
𓅬 gabriel_syme 𓅬#3220: I would imagine we can use the faraday one
𓅬 gabriel_syme 𓅬#3220: maybe?
EricHallahan#1051: I need to fix up the StyleGAN implementation so that I can shave off half of the generation time.
𓅬 gabriel_syme 𓅬#3220: man this might be a decent way of creating sort-of open ended annotated datasets of images lol.
guac#4716: What if we required new members to annotate a goose image before they can access other channels 😏
𓅬 gabriel_syme 𓅬#3220: sounds good to me, make it 50
guac#4716: Please let me know if you follow up with this lol
EricHallahan#1051: Well I have the ported source already, I just keep trying to improve the method rather than updating the notebook and fixing the bot lol
𓅬 gabriel_syme 𓅬#3220: so I think when bmk is up we can ask him if the annotate command is cool and we do it 🙂
EricHallahan#1051: What kind of annotations are you doing? |
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/857473058914435072/helo-fish-.jpg
𓅬 gabriel_syme 𓅬#3220: I was thinking if it would be interesting to whip up a bot that does image annotations (by us, in the chat). Could use it to label `!goose` perhaps? boneamputee says he already has a bot actually that does kind of that
bmk#1476: what kind of annotation?
bmk#1476: "ah yes this goose is made of goose"
moopaloo#7562: "America invented the water bed, the ice maker, the television, the car and the washing machine. But no one thinks we invented the bedpan." Original joke from GPT-J
EricHallahan#1051: That may be better suited to #the-faraday-cage-archive.
guac#4716: @𓅬 gabriel_syme 𓅬 hmm bmk is right but we’ll pivot this. We’ll pivot. Give me some thinking time
𓅬 gabriel_syme 𓅬#3220: that, or also "a cowboy looking goose, deep into thought, looking at the sunset"
𓅬 gabriel_syme 𓅬#3220: the usual
bmk#1476: .. are most of these images *that* interesting?
bmk#1476: !goose
Isaac McHorse#2007: https://cdn.discordapp.com/attachments/729741769738158194/857474869122170900/goose.jpg
𓅬 gabriel_syme 𓅬#3220: poor goose, looks so sad
bmk#1476: "shook goose"
guac#4716: The photographer left no room for the imagination
Louis#0144: We need a sad goose emote
bmk#1476: be the change etc etc
moopaloo#7562: Thanks for the pointer
𓅬 gabriel_syme 𓅬#3220: I was thinking open ended ones, maybe affective: how does the goose make you feel? :berk:
𓅬 gabriel_syme 𓅬#3220: (i'll be moving to offtopic now lol) |
simpleV8#7276: hi, anyone know where can I find an explanation about PKL files and the code inside of it? thx
8004 95f0 0101 0000 0000 008c 1464 6e6e
6c69 622e 7466 6c69 622e 6e65 7477 6f72
Deleted User#0000: How to download and extract a subset of the pile.( When I try to extract only single zst file, it shows some kind of error)
Drakkaa#3367: what error ?
Deleted User#0000: Don't know, it just shows 'killed' (probably out of memory?)
Drakkaa#3367: depends, can also mean out of diskspace
Deleted User#0000: But I have 24gigs ram, and nvme for swap, and plenty of space
GrimSqueaker#8837: Suggestion: Datasets channel? (For storing nice resources for data)
Isaac McHorse#2007: https://cdn.discordapp.com/attachments/729741769738158194/857537337852166144/goose.jpg
Kia#2550: Straight forward :thonk:
Tinytitan#5596: !goose
Isaac McHorse#2007: https://cdn.discordapp.com/attachments/729741769738158194/857543830743285781/goose.jpg
Kia#2550: Hey #the-faraday-cage-archive
mr_seeker#1337: Someone here who can tell me exactly what "per_device_train_batch_size" does in the Huggingface Trainer? Does that increase VRAM but benefit training or not? Looking at a GPU that is not fully committed yet, looking at what I can do to speed up training.
Daj#7482: Killed means you ran out of RAM and the OS killed the process
Daj#7482: We used to have this but we don't really build datasets anymore
Kia#2550: Ow yeah the #pile
Daj#7482: Another reminder, as time is up soon:
|
Hey everybody! Eleuther's **one year anniversary** is coming up soon (3rd of July)!
We are working on writing a retrospective post collecting funny anecdotes, valuable lessons learned and highlights from the last year. We would love to have input from lots of people here (but depending on level of interest I can't guarantee everything will make it into the final draft).
Please **DM me or email us at [email protected] with stories, memes, quotes** or whatever else about Eleuther and what it has been to you this past year if you wanna contribute!
Sid#2121: anyone know off the top of their head what token `Ċ` represents in a BPE vocab? is it a linebreak?
bt#7597: `Ċ` is a newline and `ĉ` is a tab
bt#7597: and `Ġ` is a leading space (a single space byte, not two)
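(this all falls out of GPT-2's byte-to-unicode trick — whitespace/control bytes get shifted up by 256 so every byte maps to a printable character; quick check, assuming the standard GPT-2 byte-level BPE mapping:)

```python
# GPT-2-style byte-level BPE shifts non-printable bytes up by 256
for byte, name in [(0x0A, "newline"), (0x09, "tab"), (0x20, "space")]:
    print(chr(byte + 256), name)  # prints: Ċ newline / ĉ tab / Ġ space
```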
quinn#9100: @jacquesthibs
inox#5400: wait so do TPU VMs mean that the GCE cost of using TPUs (because you used to have to start a VM to control them remotely) is massively reduced? Or do you still have to pay a lot to host the dataset etc?
AI_WAIFU#2844: For us I think the dataset/network costs dominate, and you still probably want a small host vm. But it can certainly help.
Avital Oliver#8700: Hi folks, just wanted to share information about an upcoming JAX & Flax community week with Hugging Face (incl. free Cloud TPU access, but sounds like many of you have that covered) -- https://twitter.com/huggingface/status/1407702538078474241. Happy to chat more if anyone wants to discuss possible projects.
(Also not sure about etiquette, in case it's considered bad form promoting such material here)
kindiana#1016: (I'll possibly be giving a talk on mesh-transfomer-jax then haha)
Avital Oliver#8700: And more generally, @nz suggested that I post more broadly--
I am on the Flax team, and I'd like to offer any help for folks trying to understand or use Flax. If there's anything about Flax that's hard to figure out, that's a bug that we should fix. In particular he suggested that @Deleted User de23c58c @cfoster0 @EricHallahan @Aran Komatsuzaki might be interested. Always happy to schedule a call or chat, whatever works better for people.
mega b#6696: Weird Flax but ok :berk:
AI_WAIFU#2844: What kind of parallelism strategies does flax support on multihost TPU setups? Are there any examples of model/data/pipeline/zero parallelism that can be used as a reference? |
Deleted User#0000: i've become a happy user of haiku
Deleted User#0000: i'll give flax a second chance if i come across a fodder project in the future
Deleted User#0000: Ross tweeted that he has successfully trained a bunch of models using pytorch XLA on TPU VMs
Deleted User#0000: something else to think about
Deleted User#0000: overall, my experience with Jax and TPU VMs have been A+
Deleted User#0000: its worth reiterating
Deleted User#0000: should have a fairly big protein language model being trained by early next week
Avital Oliver#8700: Flax doesn't imply any particular form of model parellelism -- it fundamentally just exposes pure functions. Most people that are currently using model parallelism with Flax use `jax.pjit` (which isn't quite documented well enough at the moment)
Avital Oliver#8700: Happy you're enjoying Haiku! If any point I can be helpful if you'd like to port anything from Haiku to Flax, or anything related, don't hesitate to reach out
UnsupervisedLearner#4148: Is there something that works like git for training models?
Like hyperparameters, dataset, random seed, loss fn, etc are a million branches. There are probably tools existing somewhere I can use, wondering if someone here knows
UnsupervisedLearner#4148: While I'm at it, what are the best tools for model weight statistics visualizations? Is tensorboardX still the thing?
rom1504#5008: Have a look at wandb.ai
Louis#0144: I signed up for it
Louis#0144: I want to see if I can train BART with rotary on the pile
Louis#0144: Rather than the learned embeddings Bart uses
Cade Gordon#3029: I feel like a lightning brand ambassador but grid might be something worth looking into for hparams
Eleiber#8347: Where can I find a guide or something to finetune GPT-Neo?
Eleiber#8347: ^ Nvm, I'm dumb |
Eleiber#8347: It's right on the Colab
UnsupervisedLearner#4148: thank you guys
StellaAthena#3530: Does anyone know a simple way to generate text from a HF model until a condition is met? I have a list of tokens and want to generate from a model until the model diverges from the list.
Louis#0144: Discriminator
Louis#0144: Oh a simple way
Louis#0144: Missed that word
UnsupervisedLearner#4148: Why can't you just call a break statement?
StellaAthena#3530: Like
```
output = []
placeholder = ""
while placeholder not in bad_list:
    placeholder = model.generate(...)  # one token at a time
    output.append(placeholder)
```
StellaAthena#3530: Hm
StellaAthena#3530: For some reason I thought that would come with significant overhead
kindiana#1016: why not calculate logprobs?
StellaAthena#3530: *\*shrug\**
kindiana#1016: do you want to ban tokens? |
kindiana#1016: or sequences?
StellaAthena#3530: This is the memorization thing. I’ve simplified it a bit, but what I’m really interested in doing is generating until the model deviates from the training data
kindiana#1016: you don't need to actually generate
kindiana#1016: you can just simulate it
cfoster0#4356: @StellaAthena I'm setting up some similar experiments
StellaAthena#3530: @kindiana simulate it with the logprobs? Is that going to be noticeably faster?
kindiana#1016: yes
kindiana#1016: by like, 100x
StellaAthena#3530: O
cfoster0#4356: What you could do is just use the loglikelihood function like eval harness. It comes with an `is_greedy`
cfoster0#4356: That should do what you want
StellaAthena#3530: This is a sketch of what I have in mind: https://github.com/EleutherAI/project-menu/issues/11
kindiana#1016: is_greedy is not great for entire documents though
StellaAthena#3530: Oops
cfoster0#4356: Right
kindiana#1016: so you need to compute the logits of each document
StellaAthena#3530: (Link corrected)
kindiana#1016: compute which ones are predicted correctly
kindiana#1016: mask out first 20 tokens
kindiana#1016: and do a cumprod and then a sum |
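(something like this — a sketch assuming you've already run the model once over the document and have a boolean `is_greedy[t]` per position, meaning "greedy argmax at step t reproduces the actual token":)

```python
import numpy as np

def memorized_run_length(is_greedy, prompt_len=20):
    tail = np.asarray(is_greedy)[prompt_len:]  # mask out the prompt tokens
    # cumprod zeroes everything after the first wrong prediction,
    # so the sum is how far greedy decoding tracks the document
    return int(np.cumprod(tail).sum())

print(memorized_run_length([1] * 20 + [1, 1, 1, 0, 1]))  # -> 3
```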
StellaAthena#3530: When you say the logits of each document, are you picturing the entire document or each token conditional on the previous one
StellaAthena#3530: I guess I can do it both ways
StellaAthena#3530: @cfoster0 What experiments were you thinking of doing
kindiana#1016: they are the same thing?
kindiana#1016: not sure if I'm understanding your question
StellaAthena#3530: P("is a girl" | "Stella") != P("is"|"Stella")P("a"|"Stella is")P("girl"|"Stella is a")
StellaAthena#3530: right?
bmk#1476: pretty sure those are equal?
kindiana#1016: yeah
bmk#1476: wait
bmk#1476: yeah
bmk#1476: those are equal
EricHallahan#1051: ¯\_(ツ)_/¯
bmk#1476: the (other) chain rule
kindiana#1016: if you assume softmax sampling
kindiana#1016: well actually any sampling, as long as you keep it the same between the two
StellaAthena#3530: I thought equality required that each of the expressions on the RHS be independent
bmk#1476: I don't see why it would need to be independent
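(you can check numerically that the chain rule holds for an *arbitrary* joint — no independence assumption anywhere; toy 2-token example over a 3-word vocab:)

```python
import numpy as np

# any normalized joint over (x1, x2) factors as P(x1) * P(x2 | x1)
rng = np.random.default_rng(0)
joint = rng.random((3, 3))
joint /= joint.sum()

p_x1 = joint.sum(axis=1)               # marginal of the first token
p_x2_given_x1 = joint / p_x1[:, None]  # conditional of the second

print(np.allclose(joint, p_x1[:, None] * p_x2_given_x1))  # True
```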
cfoster0#4356: A couple things. I wanna know how quickly transformers memorize synthetic data (for ex. Zipf distributed random bytestrings) and whether they become faster at it as a function of size or training iters. Also similarly interested in how big of an effect recency has. But all this is with smaller models, not pretrained.
Spy#9778: I just noticed I'd been generating from my transformer conditioned on an out-of-range token index rather than the start token :thonk:
Spy#9778: They'd need to be independent for P("is a girl" | "stella") to equal P("is" | "stella")P("a" | "stella")P("girl" | "stella")
kindiana#1016: they are independent for lms no?
Spy#9778: definitely not
Spy#9778: like if the result comes up
Spy#9778: "stella is _an_" instead of "stella is a"
Spy#9778: P("girl" | ...) goes down and P("engineer"| ...) goes up
kindiana#1016: well the rhs is fixed?
kindiana#1016: like its always P("girl"|"Stella is a")
kindiana#1016: no matter if a or an is generated
Spy#9778: oh I thought you meant in what I typed, not the chain rule one
Spy#9778: I'm not sure if there's a definition for independence when you're conditioning on different information for each rv
kindiana#1016: well either way LMs are not conditioned only on the previous token?
Spy#9778: yeah but you can still consider the 2 or 3 step conditional probabilities
Spy#9778: even though they're intractable to compute
Spy#9778: like
Spy#9778: P("girl" 3 words from now | "stella") or w/e
AI_WAIFU#2844: Are you sure about that?
StellaAthena#3530: It should have, at worst, fixed-parameter tractability
StellaAthena#3530: If we say we want to estimate k tokens in the future, then that might be intractable as k grows, but for fixed k it's tractable
Spy#9778: yeah it's probably not too bad for 2 tokens in the future |
Spy#9778: 3 in the future is vocab^2 forward passes which is getting costly
kindiana#1016: well depends on how good your model is
kindiana#1016: you only need to check the high-likelihood ones
chirp#4545: https://robotic.substack.com/p/ml-becomes-rl
chirp#4545: idk how much sense this makes but i found it an interesting analogy!
nostalgebraist#3542: i do not think it makes much sense
nostalgebraist#3542: the specific examples involved statistical biases introduced by human decision-making practices
nostalgebraist#3542: which happens in any kind of scientific activity
𓅬 gabriel_syme 𓅬#3220: for those studying Europe (or elsewhere), this was recently initiated. Perhaps it is of interest to some of you
http://www.i-aida.org/
quinn#9100: robust cooperation in oneshot opensource pd got a v2 update a few months ago https://arxiv.org/pdf/1401.5577.pdf
quinn#9100: :ultraberk: https://cdn.discordapp.com/attachments/729741769738158194/857946094485897226/Screenshot_from_2021-06-25_12-30-14.png
Daj#7482: This is an amazingly :bigbrain: argument I love it lol
alstroemeria313#1694: ...you can just make up any bad thing that could happen and also some way you could "help" the something or other and frame it as "cooperating" but you just made the things up
alstroemeria313#1694: the universe is cooperating with me by not undergoing false vacuum collapse, i'll cooperate with it back by maximizing paperclips for it
Louis#0144: Paperclip false vacuum wen
mega b#6696: i cannot understand whats going on, but it seems like everyone knows so imma pretend to know :bigbrain:
Dromarion#3383: I've been here for almost a year and I can only follow a third of what everyone is talking about.
Spy#9778: One of the theories to explain the nice features of our universe is that it maximizes the number of black holes in order to produce more universes
Spy#9778: so you should pay it back by converting all matter into small black holes |
Spy#9778: I forget who the theory is due to, penrose maybe?
Spy#9778: oh it's Lee Smolin
Spy#9778: https://en.wikipedia.org/wiki/Cosmological_natural_selection
Louis#0144: General was so quiet all day
Isaac McHorse#2007: https://cdn.discordapp.com/attachments/729741769738158194/858185529606930452/goose.jpg
Louis#0144: @bmk can we make it so that Isaac has a small chance to send a goose in any channel unprovoked
bmk#1476: idk you write the code i add it
Louis#0144: Ok
Louis#0144: Where’s the repo
bmk#1476: write it in a gist and ill hand transplant it over to the sever
Louis#0144: Kk
bmk#1476: there might be a repo but its outdated anyways
Louis#0144: Isaac is all in python?
bmk#1476: https://github.com/EleutherAI/isaac-mchorse
bmk#1476: i think it's out of date tho
bmk#1476: lemme check
Louis#0144: It should have worked by now right?
Louis#0144: Isaac wtf
bmk#1476: what are you doing
bmk#1476: stop spamming |
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/858186843278278656/image0.png
bmk#1476: ok it's up to date now
AI_WAIFU#2844: make your own discord to debug
Louis#0144: What percentage do we want
Louis#0144: 0.001?
bmk#1476: 0.0001 probably
Louis#0144: ok
bmk#1476: same probability as the other unprompted thing
EricHallahan#1051: I strongly oppose "any channel".
Louis#0144: what if I make it crazy rare
Louis#0144: lol
Louis#0144: https://github.com/EleutherAI/isaac-mchorse/pull/2
bmk#1476: random goose mode is now live
𓅬 gabriel_syme 𓅬#3220: Is it coming yet?
bmk#1476: ok i actually need to go do things now
UnsupervisedLearner#4148: So I am pretty sure there's a way to calculate, given a FLOPs budget, the ~optimal
Model parameter count
Stepsize
Batch size |
Probably something else I'm forgetting
Based on all the scaling laws papers, right? Has someone made a script for this or do I need to go back over the papers and take notes this time?
EricHallahan#1051: I think this has been discussed in #scaling-laws before.
UnsupervisedLearner#4148: Keywords I might type in the searchbar?
EricHallahan#1051: Don't need to.
https://discord.com/channels/729741769192767510/785968841301426216/855844363870666783
UnsupervisedLearner#4148: This is perfect. Thank you very much
EricHallahan#1051: It was in the pins for that very purpose. `;)`
Eigen#6630: Hello, is there anybody working on the **Open Catalyst Challenge** (NeurIPS 2021)? A pointer to anyone working on it would be useful.
alstroemeria313#1694: OK, so question
alstroemeria313#1694: How come, when training GANs, people don't save old fakes and sample randomly from them to keep D remembering what the old fakes look like to prevent cycling
alstroemeria313#1694: Like train D each iteration w/ a batch of current fakes and a batch of randomly sampled old fakes
alstroemeria313#1694: I was looking at my GAN outputs in frustration and kept going "how does D not recognize these fakes as fakes, they're obviously fakes, there are simple discriminative features that any normal classifier would have figured out a long time ago"
alstroemeria313#1694: Each iteration D tells G how to trick it in, by construction, the *cheapest way possible*
alstroemeria313#1694: This usually consists of the fakes moving off-distribution for D.
alstroemeria313#1694: A human, looking at the fakes, wonders why D seems extremely slow to generalize.
alstroemeria313#1694: This is because the human remembers the old fakes
alstroemeria313#1694: But D is constantly being fine-tuned on current fakes only.
pebbles#7130: hmm, maybe you could even train a "teaching dataset" for D simultaneously, which helps it remember the old fakes, and that's mixed into the training data? |
alstroemeria313#1694: (I mentioned this idea to someone last night and he said "why not have G try to trick multiple D states")
alstroemeria313#1694: (i.e. EMA versions of D too?)
alstroemeria313#1694: hm GPU memory used is creeping up
alstroemeria313#1694: how to handle this
marmiteCloud#5923: have you found anything interesting out? I just tried Tesseract 5 and noticed it is much better than Tesseract 4 and it reminded me of this
alstroemeria313#1694: ...does it not like allocating lots of small tensors actually
alstroemeria313#1694: ...actually, if i save a slice of a tensor does it actually keep the entire tensor around
alstroemeria313#1694: I bet it does and I have to clone the slice
chinesesoup#6725: Well instead of OCR you can directly extract text from pdfs if you'd like
chinesesoup#6725: Then use ocr only on images
alstroemeria313#1694: i am saving one fake per batch of fakes
alstroemeria313#1694: then making a batch of old fakes to train D on by sampling with replacement from the memory
alstroemeria313#1694: also the memory is saved in the checkpoints
alstroemeria313#1694: i guess if the memory gets too big i can start randomly dropping from it or smth
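A minimal sketch of the replay-buffer idea above (class and method names are made up): save one fake per batch, sample with replacement to build D's batch of old fakes, and randomly overwrite entries once the memory is full:

```python
import random

class FakeReplayBuffer:
    """Stores old generator outputs so D keeps seeing historical fakes."""

    def __init__(self, max_size=10000):
        self.max_size = max_size
        self.memory = []

    def add(self, fake):
        # Save one fake per batch; once full, overwrite a random old entry
        # so the buffer stays a rough sample of the whole training history.
        if len(self.memory) < self.max_size:
            self.memory.append(fake)
        else:
            self.memory[random.randrange(self.max_size)] = fake

    def sample(self, batch_size):
        # Sample with replacement, as described above.
        return [random.choice(self.memory) for _ in range(batch_size)]
```

Each D step would then train on one batch of current fakes plus one batch drawn from the buffer.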
alstroemeria313#1694: it's learned some basic shapes already https://cdn.discordapp.com/attachments/729741769738158194/858378766563540992/out_0005791.jpg
alstroemeria313#1694: (Class-conditional CIFAR-10 ofc)
alstroemeria313#1694: Each row is one class
alstroemeria313#1694: Also these outputs are from the training G, not an EMA G
alstroemeria313#1694: Because I didn't copypaste the EMA code in
alstroemeria313#1694: mb i should print the |
alstroemeria313#1694: *average p(real)* instead of the raw loss
alstroemeria313#1694: i keep having to paste the loss values into ipython to convert them back to p(real)
alstroemeria313#1694: the p(real) can be gotten from either exp(-loss) or 1 - exp(-loss) depending on whether the loss was for maximizing p(real) or minimizing p(real)
alstroemeria313#1694: (I am splitting the different parts of the D loss out for printing/logging)
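The loss-to-p(real) conversion above as a one-liner (hypothetical helper, assuming the losses are the standard -log p(real) and -log(1 - p(real)) forms):

```python
import math

def p_real_from_loss(loss, maximized_p_real):
    # If the loss was -log p(real) (the term maximizing p(real)):
    #   p(real) = exp(-loss).
    # If it was -log(1 - p(real)) (the term minimizing p(real)):
    #   p(real) = 1 - exp(-loss).
    return math.exp(-loss) if maximized_p_real else 1.0 - math.exp(-loss)
```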
aero#1357: https://cdn.discordapp.com/attachments/729741769738158194/858383523538534440/unknown.png
aero#1357: I figured out a thing
told it that nicolas cage stole a jar of bees around """"memory"""" 200
aero#1357: works surprisingly well
aero#1357: around 100ms to sort 8k memories (using the hidden states to calculate distance to the current conversation) and then build an optimized prompt
aero#1357: butchered that explanation but it works
aero#1357: <https://github.com/AeroScripts/HiddenEngrams> put up a little repo if anyone is curious, but its a mess atm
EricHallahan#1051: ~~Everyone knows that Nicolas Cage stole the Declaration of Independence~~
StellaAthena#3530: @aero Very cool! Is this using GPT-J or what?
aero#1357: yeah 😄
aero#1357: the hf port though
aero#1357: https://cdn.discordapp.com/attachments/729741769738158194/858385165884850176/unknown.png
aero#1357: really good at matching conceptually too, calculating distance using hidden states works surprisingly well
StellaAthena#3530: Is this totally novel, or is it based on a paper?
aero#1357: just through experimentation the past couple weeks |
StellaAthena#3530: Dope
StellaAthena#3530: The explanation on GitHub is a little hard to follow, but you’re building a set of memories and then using them to somehow design the prompt?
aero#1357: in the case of chat bots I break them up into individual messages and get the hidden states for each, then compare that against the current message to get a distance value, then sort the table by that value
aero#1357: been experimenting with stepping forward/backward in messages too which works well but havent had much time, busy with real work too 😅
EricHallahan#1051: And then run KNN?
aero#1357: yeah basically, I run a few separate passes selecting fewer each time (and checking deeper into the history/future each step)
StellaAthena#3530: Okay so there’s a list of memories $m_1, m_2, \ldots, m_k$ and a user input $x$. You compute the sequence $d_i = ||m_i - x||$ and then do what?
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/858386525925408788/193204646687408129.png
aero#1357: I originally used torch .argsort() but after some testing found that heapq.nsmallest is much faster even though it runs on CPU
StellaAthena#3530: You find the j such that $d_j = ||m_j - x||$ is smallest?
TeXit#0796: **Stella Biderman** https://cdn.discordapp.com/attachments/729741769738158194/858386803215695872/193204646687408129.png
StellaAthena#3530: I guess I’m not sure where the table is coming from
aero#1357: in encode.py I convert a Shakespeare .csv into an array of 'memories'
that calls build_engram for each which gets the hidden states for that text
aero#1357: and yeah sorry for sticking to code explanations 😅 much more familiar with that
StellaAthena#3530: I’m not asking about the code. I’m asking about what you’re doing on a conceptual level
aero#1357: in the case of chat bots:
each message the user inputs and each ouput of GPT is added to an array along with the hidden states
|
Each new message is checked against that array using the hidden states of that new message, finding the nearest matches and then building an optimized prompt using them
StellaAthena#3530: Okay
StellaAthena#3530: How is the optimized prompt built
aero#1357: right now its not very smart, it assumes things are question-answer pair. so it adds the messages in pairs to the prompt first
Then it adds n "short term memories" to the prompt and returns that. When it's tokenized, any extra tokens are discarded from the front of the array (e.g. `tokens[-2047:]`)
aero#1357: which basically means the lowest scoring memories are discarded if theres too much text
StellaAthena#3530: Hmmm
StellaAthena#3530: So it builds the string “Memory1, Output1, Memory 2 Output2, Memory3 Output3 … Input”?
StellaAthena#3530: Where Outputk is what the model spits out when prompted with Memoryk?
aero#1357: yeah
aero#1357: surprisingly fast too.. with 8k memories the sort takes about 100ms on my 6700k
StellaAthena#3530: Oh yeah sorting is free
StellaAthena#3530: Heaps are over powered
aero#1357: sort and distance calculation* but yeah its pretty cheap
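A toy sketch of the retrieval step described above (pure Python, hypothetical names; the real version compares GPT-J hidden states): compute a distance from each stored memory to the query and take the k nearest with `heapq.nsmallest`:

```python
import heapq
import math

def euclidean(a, b):
    # Distance between two hidden-state vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_memories(memories, query, k):
    """memories: list of (hidden_state_vector, text) pairs.

    heapq.nsmallest avoids a full sort: O(n log k) instead of
    O(n log n), which matches the observation that it beats argsort.
    """
    return heapq.nsmallest(k, memories, key=lambda m: euclidean(m[0], query))
```

The selected memories would then be concatenated (oldest-scoring first, so the best ones survive truncation) into the prompt.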
alstroemeria313#1694: https://cdn.discordapp.com/attachments/821173872111517696/858391832760549376/out_0019380.jpg
alstroemeria313#1694: it stopped changing on epoch 38
alstroemeria313#1694: D gradient penalty loss dropped to 0 and it's outputting p(real) = 0.5 for everything
StellaAthena#3530: @aero i have some experiments I want to run building off of this, maybe we can collab. Have you done any training with the memory bank, or only using it after the fact? |
aero#1357: never tried any training, ive had it running for about a week in a pretty active discord. also tried importing the star trek scripts which was pretty cool
aero#1357: dont have the ability to train anything currently
aero#1357: love to collab a bit though 😮
aero#1357: I don't have a ton of time this week, lot I need to get done at work
StellaAthena#3530: Gotcha gotcha. If you have ideas on things to do with training models I can spare some compute to test them
gdawg16#0493: https://tenor.com/view/national-treasure-benjamin-gates-nicolas-cage-declaration-of-independence-steal-gif-4752081
alstroemeria313#1694: I restarted it w/ self-attention taken out and a 1-centered gradient penalty instead of 0-centered https://cdn.discordapp.com/attachments/729741769738158194/858406896334798868/out_0036600.jpg
alstroemeria313#1694: It's still going
alstroemeria313#1694: It is training abnormally fast, from my previous experiences with CIFAR-10 GANs
cfoster0#4356: What model is this, again?
alstroemeria313#1694: it's a tiny cifar-10 gan
alstroemeria313#1694: It is like the original StyleGAN except with residual blocks
alstroemeria313#1694: And 128 channels at most
sweg#8920: seemingly random question but have you ever trained stylegan?
sweg#8920: from scratch i mean
alstroemeria313#1694: yes, several of them
sweg#8920: did you ever find that it would suffer from mode collapse early on but correct that later down the line?
alstroemeria313#1694: the times the modes collapsed it never recovered
alstroemeria313#1694: how bad was the mode collapse?
sweg#8920: im only at like https://cdn.discordapp.com/attachments/729741769738158194/858418640138010704/unknown.png |
sweg#8920: only 200k images in
alstroemeria313#1694: oh
alstroemeria313#1694: that's bad
alstroemeria313#1694: yeah it's p not going to get better
sweg#8920: lmao
sweg#8920: i made a mistake before where i accidentally multiplied the noise injections with image width rather than a learned parameter
sweg#8920: which shouldve been massively bad
sweg#8920: but that version went better than this
alstroemeria313#1694: oh, it's a from-scratch implementation?
sweg#8920: yeah
alstroemeria313#1694: i've trained several using the official repo and i have also trained stylegan-inspired archs that were not exact replications
alstroemeria313#1694: but like... mine have generally had different looking fakes consistently from early on
sweg#8920: possibly dataset dependent
alstroemeria313#1694: what is the dataset?
sweg#8920: custom one
sweg#8920: with only 800 images
alstroemeria313#1694: oh
alstroemeria313#1694: you have adaptive discriminator augmentation right
sweg#8920: adaptive aug wasnt working for me so i took it off and just went with a fixed one tbh
sweg#8920: i havent had any issues with augmentation leaking |
alstroemeria313#1694: oh, but it's diffaugment right?
alstroemeria313#1694: ah
sweg#8920: when i had it adaptive (i implemented from paper) it would just keep going up lmao
alstroemeria313#1694: oh
alstroemeria313#1694: with 800 images it should go p high?
sweg#8920: yeah i think intuitively that makes sense
sweg#8920: but it would also leak bad at that point
alstroemeria313#1694: one time i tried an experiment in low-data GAN training, WGAN-GP is actually decent at it
alstroemeria313#1694: Oh
alstroemeria313#1694: I could train a WGAN-GP on a single batch of MNIST reals successfully
alstroemeria313#1694: And I could train on a single real with diffaugment
sweg#8920: wait is diffaugment not ADA
alstroemeria313#1694: diffaugment is a slightly earlier paper, it's just not adaptive
sweg#8920: OH right
sweg#8920: i feel like with mnist a lot of the transformations work really well
sweg#8920: they dont cause artifacts
alstroemeria313#1694: ah
sweg#8920: wgan-gp was the first ever gan i coded
sweg#8920: but i was in high school and didnt understand machine learning at all
sweg#8920: so i trained it on a dataset i scraped of 80 eye pictures |
sweg#8920: and was really hyped even though it was just overfitting lmao
alstroemeria313#1694: wgan-gp is like the best low data GAN i've seen
alstroemeria313#1694: It's supposed to be able to give G good gradients to follow even if you *train D to optimality* between each G step
sweg#8920: right
alstroemeria313#1694: Lucky you tried it and not something else, anything else would probably have totally collapsed.
alstroemeria313#1694: I think this is because it tries to minimize the Wasserstein-1 distance and not the Jensen-Shannon divergence between the fakes and the reals
alstroemeria313#1694: And Wasserstein is still defined even if the distributions do not have the same support
alstroemeria313#1694: (If D memorizes the reals, it knows exactly the support of the reals, and the fakes do not have that support, so normal GANs just fail)
alstroemeria313#1694: (oh. apparently it *is* defined - KL divergence is the one that isn't - but it has a zero gradient in the case where the two distributions have non-overlapping support.)
alstroemeria313#1694: i.e. gradient collapses to 0 if D memorizes the fakes and reals' distributions.
sweg#8920: thats really interesting
sweg#8920: i know very little about information theory but that makes sense from what i know about divergence
alstroemeria313#1694: (...Wait, can you just substitute some other metric than Euclidean into a WGAN-GP)
sweg#8920: i dont imagine theres anything simpler than that
alstroemeria313#1694: Like. Could I extract features from VGG-16 and then use the Euclidean distance between the feature maps as the metric.
alstroemeria313#1694: I think that should work actually!
sweg#8920: thats interesting
sweg#8920: you'd leverage domain knowledge
sweg#8920: wait thats actually a really good idea
alstroemeria313#1694: i.e. it would work as normal except you would feed extracted feature maps of the fakes and reals to D |
sweg#8920: you should totally try that
alstroemeria313#1694: And apply the gradient penalty to D's gradient wrt the feature maps.
alstroemeria313#1694: So it does its Wasserstein-1 approximation in that space instead.
sweg#8920: just so we're on the same page youre minimizing euclidean distance between reals and fakes embedded with vgg-16 right
alstroemeria313#1694: Yeah, like the output of relu2_2 or relu3_3
alstroemeria313#1694: Euclidean distance between VGG feature maps is a common perceptual loss
sweg#8920: this seems like a kind of transfer learning
alstroemeria313#1694: The thing that makes it different is you apply the gradient penalty to the feature maps and not to the fakes/reals proper
sweg#8920: oh i see
alstroemeria313#1694: The thing the GP is applied to determines the space it approximates Wasserstein-1 in
sweg#8920: ok but why VGG specifically
sweg#8920: at that point why not use clip embedder
sweg#8920: lol
alstroemeria313#1694: It is particularly good for a perceptual loss
alstroemeria313#1694: 'cause CLIP throws away spatial information
sweg#8920: oh yeah i guess thats true
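A sketch of the critic objective being discussed, with $\phi$ the frozen VGG feature extractor, $\hat{x}$ the usual random interpolate between a real and a fake, and $\lambda$ the penalty weight (standard WGAN-GP pieces, just moved into feature space):

```latex
L_D = \mathbb{E}_{\tilde{x} \sim p_G}\big[D(\phi(\tilde{x}))\big]
    - \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[D(\phi(x))\big]
    + \lambda \, \mathbb{E}_{\hat{x}}\Big[\big(\lVert \nabla_{\phi(\hat{x})} D(\phi(\hat{x})) \rVert_2 - 1\big)^2\Big]
```

Since the gradient penalty is taken with respect to $\phi(\hat{x})$ rather than $\hat{x}$, the 1-Lipschitz constraint (and hence the W1 approximation) lives in the VGG feature metric.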
StellaAthena#3530: Does anyone know a good reference explaining what tokenizers are actually for? Like, why we use them, what desirable properties of tokenizers are, etc.? It took me a while to figure out their purpose and I feel like there isn't a good canonical explanation. If so, I might write one.
sweg#8920: i originally learned about them for compilers
chilli#5665: 🤔
chilli#5665: I feel like compiler tokenizers have very different properties |
sweg#8920: in that context they just seemed useful for splitting long string sequences into discrete chunks
sweg#8920: and thats useful for modelling sequences
chilli#5665: I don't really think they're similar in their goals
StellaAthena#3530: I just wrote up a 101 level explanation of what (I think) they're for in LMs in #prompting
chilli#5665: they're only nominally similar in that both split strings into chunks
chilli#5665: lol
StellaAthena#3530: The main concern is the fact that natural English text is not an efficient coding of English semantic content. Which isn't really in play for compilers AFAIK?
chilli#5665: For compilers, the main goal is to get it into some form that you can parse
sweg#8920: oh i misread
sweg#8920: what they're for not what they do
chilli#5665: ok, so the main questions I have about tokenization
chilli#5665: are
chilli#5665: or well, this is my understanding
chilli#5665: there's basically 2 parts of tokenization
chilli#5665: The first one is balancing 1. Maximizing your sequence length and 2. Minimizing the size of your vocab
sweg#8920: i mean i think this is a given cause semantic content is a representation in the brain, we just choose a word to then communicate that semantic content
chilli#5665: I'd be curious about how people worry about these tradeoffs
StellaAthena#3530: People don't worry about these tradeoffs and 99% of people use the GPT2 tokenizer
StellaAthena#3530: 😛
chilli#5665: The second issue I'm curious about is how having big tokens affects your performance |
StellaAthena#3530: Intuitively, it seems to me that having big tokens primarily increases the variance of your performance
StellaAthena#3530: An efficient tokenizer is optimal in *amortized* complexity
Sphinx#2092: BPE is designed explicitly to address this trade-off.
StellaAthena#3530: It is obviously more efficient to have a single token for "the quick brown fox jumps over the lazy dog" if that's the input
StellaAthena#3530: But you face the cost of a small degradation of performance most of the time because most passages do not contain that phrase
StellaAthena#3530: I doubt you'd see the impacts of a single large token in normal use, but if you introduce thousands of them that's a different story.
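A toy illustration of the trade-off Sphinx mentions: each BPE merge adds one vocab entry and shortens every sequence containing that pair (hypothetical minimal sketch, not a real tokenizer):

```python
from collections import Counter

def most_frequent_pair(corpus_tokens):
    # Count adjacent symbol pairs across the corpus; the most frequent
    # one is the next merge candidate.
    pairs = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(tokens, pair):
    # Replace every occurrence of the pair with a single merged symbol,
    # growing the vocab by one and shrinking the sequence.
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged
```

Repeating this until a target vocab size is hit is (roughly) BPE training: vocab size up, sequence length down, and the stopping point is exactly the trade-off being discussed.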
chilli#5665: hmm, so, let's say you were trying to learn math
chilli#5665: would it work better to tokenize every 2 length sequence?
chilli#5665: or one token for each number
Cade Gordon#3029: Can you explain this more? I thought CLIP retained some spatial information
StellaAthena#3530: Probably every token, but you'd need to be precise about the data distribution to make firm claims
alstroemeria313#1694: it throws a lot away though
alstroemeria313#1694: like too much usually to make the fakes be able to resemble the reals well if you used that as your only metric for comparing them.
alstroemeria313#1694: oh hey. https://cdn.discordapp.com/attachments/729741769738158194/858429719824171028/out_0001379.png
Cade Gordon#3029: Ahh okay that makes sense. I was wondering because I was maximizing cosine similarities for a crude "style" transfer and it managed to retain some geometry
CRG#8707: <https://arxiv.org/abs/2102.13019> https://cdn.discordapp.com/attachments/729741769738158194/858430438023495690/13-Figure3-1.png
Cade Gordon#3029: https://twitter.com/CadeGordonML/status/1368988450339368961?s=20 for reference
alstroemeria313#1694: The Gram matrices used in Gatys et al style transfer actually *do* contain spatial relationships btw
sweg#8920: isnt this just a consequence of convolutional layers
alstroemeria313#1694: This is because of the implicit positional encoding introduced by the zero padding. |
sweg#8920: i remember reading a paper that talked about a modification to convolutions
alstroemeria313#1694: In the conv layers.
sweg#8920: to fix something related to spatial variance
alstroemeria313#1694: There may be other stuff, like max pooling is very much not translation invariant
alstroemeria313#1694: I can actually reconstruct larger features rather well sometimes from Gram matrices.
sweg#8920: https://richzhang.github.io/antialiased-cnns/
sweg#8920: this is what i was thinking of
sweg#8920: kind of old now
sweg#8920: but stylegan2 incorporates this
Cade Gordon#3029: Ohh let me fix this then
alstroemeria313#1694: alias free gan?
alstroemeria313#1694: what's their blurpool
sweg#8920: it has to do with signal processing
alstroemeria313#1694: yeah but what filter
sweg#8920: like its not obvious from this post but in the stylegan implementation they use it with something called upfirdn2d
alstroemeria313#1694: i have a custom Binomial2Pool2d layer
sweg#8920: 1d FIR filter i think?
alstroemeria313#1694: That is p similar to stylegan downsampling
alstroemeria313#1694: It uses a fixed 3x3 kernel
alstroemeria313#1694: In a stride 2 conv2d |
alstroemeria313#1694: Oh, so they replace stride 2 max pooling with stride 1 window 2 max pooling then blur pooling?
alstroemeria313#1694: Alias-Free GAN actually goes so far as to do upsample 2x -> Leaky ReLU -> downsample 2x
alstroemeria313#1694: @sweg yeah my custom layer is the thing they call Triangle-3 in the paper
alstroemeria313#1694: [1/4, 1/2, 1/4]
alstroemeria313#1694: The outer product of that and itself.
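A 1D sketch of that binomial blur-pool (pure Python, hypothetical name; the 2D version convolves with the outer product of the kernel with itself, stride 2):

```python
def blur_pool_1d(x):
    """Anti-aliased downsampling: blur with the binomial kernel
    [1/4, 1/2, 1/4], then keep every other sample (stride 2).
    Edges use zero padding, matching a padded strided conv."""
    kernel = (0.25, 0.5, 0.25)
    padded = [0.0] + list(x) + [0.0]
    blurred = [
        kernel[0] * padded[i - 1] + kernel[1] * padded[i] + kernel[2] * padded[i + 1]
        for i in range(1, len(padded) - 1)
    ]
    return blurred[::2]
```

The low-pass filter before subsampling is what removes the aliasing that plain stride-2 max pooling introduces.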
Louis#0144: whats sota on coreference resolution?
Louis#0144: like for end to end
sualehasif996#8908: what is the general channel to ask questions about gpt-neo models
sualehasif996#8908: There is a certain help-wanted sort of question that I would like to ask?
AI_WAIFU#2844: Generally no. We don't offer tech support
AI_WAIFU#2844: You'll have much better luck with the hugging face api and community
sualehasif996#8908: gotcha 🙂
AI_WAIFU#2844: https://discuss.huggingface.co/
nostalgebraist#3542: i realized today that `create_tfrecords.py` behaves in an unexpected way that has adversely affected all my fine-tuning runs with gpt-neo and gpt-j thus far.
PR to fix the problem: https://github.com/EleutherAI/gpt-neo/pull/230
i dunno if this impacts any EAI stuff. (maybe the "bug" is expected behavior in the context you guys are using the script, all i know is i didn't expect it when running the Colab notebook...)
kindiana#1016: that is totally not what I would have expected either...
kindiana#1016: @bmk :thonk: |
kindiana#1016: we might need to recreate those tfrecords lmao
bmk#1476: I have no idea how create_tfrecords works
bmk#1476: it's one of the cursed pieces of neo that I always worked around when I could
EricHallahan#1051: Passive-aggressive blog post wen.
StellaAthena#3530: @nostalgebraist What is "the behavior I had expected originally."
StellaAthena#3530: Is it (each line is a document)
```
the beginning tokens of file1
the ending tokens of file1.<|endoftext|> The beginning tokens of file2
more tokens from file2
the ending tokens of file2.<|endoftext|> The beginning tokens of file3
...
```
StellaAthena#3530: ^^ that's the behavior I would have told you it had.
alstroemeria313#1694: Hey, how many fakes do people normally sample to compute FID
alstroemeria313#1694: It's on CIFAR-10 so there are 50,000 reals
StellaAthena#3530: @alstroemeria313 StyleGAN did it across half the dataset
StellaAthena#3530: 200,000 image dataset, filtered down to 100,000 before training began. FID calculated on 50,000
StellaAthena#3530: Page 2, right hand column: https://arxiv.org/pdf/1812.04948.pdf
alstroemeria313#1694: ah, ty :) |
alstroemeria313#1694: probably 50k is fine for me then
nostalgebraist#3542: yes, that is the behavior i expected
TheGamingWizardC75#9635: I lost my Dad yesterday
alstroemeria313#1694: oh no :(
Kia#2550: Sorry for your loss
aero#1357: anyone familiar with HF api know if there is a way to return "n last hidden states" ?
When calling the forward function to get hidden states, it returns one tensor for each token
The problem is if you pass in a lot of tokens it quickly runs OOM (from all the tensors in the final list to return)
Might have to create a custom parameter but really seems like there should be a way
StellaAthena#3530: @aero I would recommend asking on the HF forums or github repo
aero#1357: yeah did some digging too, seems there isn't a way
aero#1357: at about 900 tokens it OOMs at 24gb vram which isn't ideal. Hadn't noticed it before because all my chat bot messages are way shorter than that
aero#1357: not sure how accurate things are with that many tokens anyway 🤔
aero#1357: need to come up with some way to test the accuracy
chirp#4545: https://twitter.com/marksaroufim/status/1408965281775456260?s=21
𓅬 gabriel_syme 𓅬#3220: should I still expect improvement when fine tuning if the training loss is more or less flat after some iterations?
𓅬 gabriel_syme 𓅬#3220: (obviously I'll also monitor eval loss but that's not out yet lol)
alstroemeria313#1694: Hey, is there any work on what a conditional WGAN actually computes and tries to minimize? |
AI_WAIFU#2844: conditional?
AI_WAIFU#2844: Like conditional probability?
alstroemeria313#1694: Conditional WGAN-GP seems to work *in practice*, but I'm not sure it's actually minimizing the Wasserstein-1 distance between the fakes and reals anymore
alstroemeria313#1694: No, class-conditional
AI_WAIFU#2844: Isn't is just minimizing distance between fakes and reals given that reals are drawn from a conditional distribution?
alstroemeria313#1694: ...Is it?
alstroemeria313#1694: WDYM drawn from a conditional distribution exactly
AI_WAIFU#2844: the distribution of images, given that they come from a certain class.
alstroemeria313#1694: ...how can I show that it actually does this
AI_WAIFU#2844: I mean it's slightly more complicated by the fact that you're using the same network for all classes, but effectively you'd have something like
min Σ_class P(class) · W1(P(image|class), P(fake|class))
And you would just show that from a proof that WGAN minimises W1(real distribution, fake distribution).
alstroemeria313#1694: Yeah I am worried it is just some unprincipled thing that works empirically
AI_WAIFU#2844: The class conditional thing or WGAN in general?
alstroemeria313#1694: Class conditional WGAN
alstroemeria313#1694: I am trying an idea rn, which is to use VGG-16 feature maps as input to the WGAN discriminator
alstroemeria313#1694: To get it to minimize W1 in a perceptual space rather than RGB
alstroemeria313#1694: (Since for Wasserstein distances the metric of the space matters) |
alstroemeria313#1694: And it is a class conditional WGAN and IDK if that messes things up
AI_WAIFU#2844: I think it should be fine
alstroemeria313#1694: ah
alstroemeria313#1694: Well, the class conditioning is actually working
alstroemeria313#1694: current demo grid https://cdn.discordapp.com/attachments/729741769738158194/858682630974734346/out_0022750.png
alstroemeria313#1694: (It is confusing cars and trucks rn but ime it gets better w/ more training)
alstroemeria313#1694: I also want to try this with InceptionV3 feature vectors
alstroemeria313#1694: Because lol
alstroemeria313#1694: (Because FID is W2 between multivariate Gaussians which have the empirical means and covariances of the distributions of the fakes and the reals' InceptionV3 feature vectors)
alstroemeria313#1694: (I do not expect this to be as good visually as VGG (or even RGB really) because the feature vectors throw away spatial information and it's just goodharting FID for the lulz)
alstroemeria313#1694: Actually
alstroemeria313#1694: Why not do it with CLIP too
alstroemeria313#1694: I mean
alstroemeria313#1694: Well, because of the spatial information thing mostly.
alstroemeria313#1694: VGG WGAN-GP, FID=19.65 https://cdn.discordapp.com/attachments/729741769738158194/858696546699051022/out_0070000.png
alstroemeria313#1694: ...
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/858697637914345512/Screen_Shot_2021-06-27_at_6.17.47_AM.png
alstroemeria313#1694: So is FID supposed to be W2 or squared W2
alstroemeria313#1694: (Everyone, in practice, reports squared W2)
alstroemeria313#1694: (And, glancing at the graphs in the paper, if they're replacing nearly all of an image with a black rectangle and reporting an FID of ~250 between the disturbed images and the originals, they're also reporting squared W2) |
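For reference, the usual reported FID is the *squared* W2 between the two fitted Gaussians:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_f \rVert_2^2
  + \operatorname{Tr}\!\left(\Sigma_r + \Sigma_f - 2\,(\Sigma_r \Sigma_f)^{1/2}\right)
```

where $(\mu_r, \Sigma_r)$ and $(\mu_f, \Sigma_f)$ are the empirical means and covariances of the InceptionV3 features of the reals and fakes. This whole expression is $W_2^2$, not $W_2$.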
alstroemeria313#1694: ...So WGAN-TS just does an explicit optimal transport calculation between batches of fakes and batches of reals?
alstroemeria313#1694: With linear programming?
alstroemeria313#1694: And then trains a D to approximate the exact calculation that they can backprop through?
alstroemeria313#1694: ...But we can backprop through linear programming these days, can't we?
alstroemeria313#1694: Or the Sinkhorn distance?
alstroemeria313#1694: I've actually tried training a G to minimize W2 between batches of fakes and batches of reals as computed by geomloss, no D involved. It was p bad/blurry
alstroemeria313#1694: FID got down to 18.2
alstroemeria313#1694: OK let's try the InceptionV3 version
alstroemeria313#1694: ...I don't think this version is working very well yet
alstroemeria313#1694: The negative of D's loss is the current W1 distance approximation
alstroemeria313#1694: And it's not even close to sqrt(FID)
alstroemeria313#1694: I mean those are not the same thing but
alstroemeria313#1694: It's like nearly an order of magnitude different.
alstroemeria313#1694: Also FID is still p high
AI_WAIFU#2844: How well does flax/haiku handle parameter sharding in the SPMD framework?
AI_WAIFU#2844: Like how does checkpointing work
kindiana#1016: wdym?
kindiana#1016: with both you just get a big parameter matrix
kindiana#1016: up to you to serialize it
AI_WAIFU#2844: How do they coordinate to write to disk? Does it all get dumped in one big file? |
kindiana#1016: well it depends on how you dump it
kindiana#1016: it doesn't do anything automatically
AI_WAIFU#2844: So the parameters are sharded between instances and it's my responsibility to dump it to disk.
kindiana#1016: with mp<8 they are replicated
kindiana#1016: with mp > 8 they are sharded
AI_WAIFU#2844: right, I'm asking in the case where they are sharded.
AI_WAIFU#2844: Basically I'm wondering if I have to save the shards or if there's a cleaner way to go about it.
kindiana#1016: I think you just gotta save the shards
chilli#5665: what happens if you use `remat`?
chilli#5665: error?
AI_WAIFU#2844: what is remat?
chilli#5665: it's Jax's checkpointing/rematerialization API
chilli#5665: Actually, perhaps you were talking about parameter checkpointing
chilli#5665: in which case, it's not relevant lol
AI_WAIFU#2844: https://github.com/google/jax/pull/1749
AI_WAIFU#2844: ok it's an alias for checkpoint
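`jax.remat` (the alias for `jax.checkpoint`) wraps a function so its intermediates are recomputed during the backward pass instead of stored. A minimal sketch with a toy layer stack (shapes and init are made up):

```python
import jax
import jax.numpy as jnp

def layer(x, w):
    return jnp.tanh(x @ w)

# jax.remat is an alias for jax.checkpoint: the wrapped function's activations
# are recomputed in the backward pass rather than kept in memory.
layer_ckpt = jax.remat(layer)

def net(x, ws):
    for w in ws:
        x = layer_ckpt(x, w)
    return x.sum()

key = jax.random.PRNGKey(0)
ws = [0.1 * jax.random.normal(jax.random.fold_in(key, i), (16, 16)) for i in range(4)]
x = jnp.ones((8, 16))
loss, grads = jax.value_and_grad(net, argnums=1)(x, ws)
```

Trades extra forward compute for activation memory, which is a different thing from parameter checkpointing to disk.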
AI_WAIFU#2844: but yeah, the problem is how to save/load sharded models
AI_WAIFU#2844: *Ideally* it would all get dumped into one big file, and we had some autoshard protocol, that way you could train it with one config and reload it with a different setup later.
AI_WAIFU#2844: but I think for now I'll just make directories for each shard and save stuff that way.
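The per-shard directory layout could look something like this sketch; the layout and file names here are hypothetical, not what any particular framework actually does:

```python
import os
import tempfile
import numpy as np

def save_sharded(shard_params, ckpt_dir):
    """Write each shard's parameter dict to its own subdirectory."""
    for i, params in enumerate(shard_params):
        shard_dir = os.path.join(ckpt_dir, f"shard_{i}")
        os.makedirs(shard_dir, exist_ok=True)
        np.savez(os.path.join(shard_dir, "params.npz"), **params)

def load_sharded(ckpt_dir):
    shards = []
    for name in sorted(os.listdir(ckpt_dir)):  # shard_0, shard_1, ... (fine below 10 shards)
        with np.load(os.path.join(ckpt_dir, name, "params.npz")) as f:
            shards.append({k: f[k] for k in f.files})
    return shards

# Hypothetical weight matrix sharded 2-way along its output dimension.
w = np.arange(32.0).reshape(4, 8)
shards = [{"w": w[:, :4]}, {"w": w[:, 4:]}]
ckpt_dir = tempfile.mkdtemp()
save_sharded(shards, ckpt_dir)
restored = load_sharded(ckpt_dir)
```

An autoshard protocol would then be a resharding step at load time (concatenate along the sharded axis, re-split for the new topology).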
Spy#9778: So I'm reading the reward is enough paper
Spy#9778: And it just sounds like a lot of nothing
Spy#9778: Everyone already agreed that all human capabilities came from evolution, i.e. maximization of a single reward
Spy#9778: Am I missing something?
Spy#9778: Possibly unrelated but on my haiku gpt2 implementation, using remat makes my GPU oom on GPT2-XL but I can do Adam GPT2-XL without it
Spy#9778: I assume there's some JIT thing happening that it doesn't play nice with
Spy#9778: So ymmv
One#5919: hey everyone! what can i help with? what can _anyone_ help with? what are the server's current goals?
StellaAthena#3530: @One Welcome! What are you good at?
One#5919: @StellaAthena art and improvisation
StellaAthena#3530: @One I meant what AI-related things. Those are definitely useful skills... but not the most applicable to doing AI research
Louis#0144: We have an art project wrapping up
Louis#0144: I don’t know of any others planned
One#5919: @Louis is there a gallery of the art?
Louis#0144: We should do a vr gallery @alstroemeria313
alstroemeria313#1694: oh?
Louis#0144: Eleuther art
Louis#0144: In vr
Louis#0144: Kinda a cool idea
Louis#0144: Idk
alstroemeria313#1694: oh, how would that work?
kurumuz#5695: @alstroemeria313 put the art in a vrchat world
kurumuz#5695: with unity 3d
alstroemeria313#1694: oh
kurumuz#5695: would be cool
alstroemeria313#1694: ...idk what that is tbh
kurumuz#5695: its a vr metaverse or whatever
kurumuz#5695: so you can definitely do art galleries with that
kurumuz#5695: works without vr too
AI_WAIFU#2844: Eleuther VRC art gallery/conference wen?
EricHallahan#1051: The EleutherAI virtual art gallery page is on hold right now, I never got around to it because there was always something more useful to do lol
EricHallahan#1051: ~~like making the website functional~~
bmk#1476: I still want real physical tshirts
bmk#1476: meatspace above the intertubes
EricHallahan#1051: Maybe I can use the laser cutter... 🤔
kurumuz#5695: would be cool, joining as a catgirl
Louis#0144: No
Louis#0144: You either join as a goose
Louis#0144: Or you don’t attend
Louis#0144: 😤😤😤
kurumuz#5695: you are outnumbered louis
bmk#1476: :goosegirl:
kurumuz#5695: oh goosegirl
kurumuz#5695: that is fine too
alstroemeria313#1694: 🐈
One#5919: what if it's a bot/channel where we rate submissions with emoji reactions and posts the ones that pass a certain threshold on to the site
alexyz#3459: goosegirl merch :goosegirl:
alexyz#3459: who designed the emoji?
bmk#1476: the original art is by froot I believe
bmk#1476: and then we just cropped it lol
One#5919: pretty genius design to represent the beak and feathers as orange and white colored hair
One#5919: :goosegirl:
EricHallahan#1051: This has already been suggested in the past, but again, it is kind of at the bottom of my priority list right now. Maybe in a few weeks I could start on something like that.
One#5919: https://en.wikipedia.org/wiki/Convergent_evolution
Ajay sahu#2540: https://www.amazon.science/blog/amazon-berkeley-release-dataset-of-product-images-and-metadata
Ajay sahu#2540: Dataset of fashion, object and 3D Objects with text image pairs with multilingual Metadata
Ajay sahu#2540: Use case on DALL E replication cogview replication or similar custom VAE models
gdawg16#0493: finally
gdawg16#0493: i can train an AI to imagine new types of couches
Ajay sahu#2540: Anything household objects, lights, dress, clothes.. Can be trained or iterated 😅.. Is there a separate Dataset channel, which i am unaware of, as i couldn't see it
inox#5400: this is gonna supercharge avocado armchair research
EricHallahan#1051: No, no dataset channel.
Ajay sahu#2540: Ok..
triggerhappygandi#0001: currently fine tuning a bert and this https://cdn.discordapp.com/attachments/729741769738158194/858937101939376148/unknown.png
kindiana#1016: dropout?
triggerhappygandi#0001: Does that explain it?
kindiana#1016: yeah?
triggerhappygandi#0001: It's ephemeral though. Soon it gets back to normal and train loss is lower. Why does it happen early on?
nostalgebraist#3542: train loss declines really fast early on
nostalgebraist#3542: so an average is going to be a bit stale
kindiana#1016: interplay between dropout making train a constant factor worse and overfitting making training loss lower
nostalgebraist#3542: and biased upward
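The staleness nostalgebraist describes is easy to see with a toy loss curve: a windowed average of a fast-declining loss sits well above the instantaneous value early in training, and the gap mostly vanishes once the curve flattens:

```python
import numpy as np

steps = np.arange(1000)
loss = 3.0 * np.exp(-steps / 150.0) + 0.5      # fast early decline, then a plateau

window = 100
running_avg = np.convolve(loss, np.ones(window) / window, mode="valid")

# Early in training the windowed average trails the instantaneous loss badly...
early_gap = running_avg[0] - loss[window - 1]
# ...but once the curve flattens the upward bias mostly disappears.
late_gap = running_avg[-1] - loss[-1]
```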
Deleted User#0000: Hey guys, is there any gpt3 ai website that can generate scientific text paragraphs based on input scientific articles? Just wondering
Kia#2550: Currently nothing
Kia#2550: Also If That's website is a thing, You probably need to pay a high fee
Deleted User#0000: I can pay
Kia#2550: Ok..?
Kia#2550: Nonetheless that's the only thing I can help you with
Daj#7482: New project just dropped. I'm looking for ~1 person with ML dev experience that would be interested in helping me take apart a large LM to understand better how short vs long paths inside the network affect performance. See the graph at https://arxiv.org/pdf/2103.03404.pdf#page=11
https://github.com/EleutherAI/project-menu/issues/13
𓅬 gabriel_syme 𓅬#3220: cool!
Mezareph79#7685: Hi everyone and thanks for allowing me to participate in this discord server
𓅬 gabriel_syme 𓅬#3220: welcome!
Mezareph79#7685: Thanks
smallanimalfriend#4355: @-Archivist Nice work on the Alamy dataset! From the posts you've made since I was here last it sounds like it's all downloaded and just in the process of being tar-ed/gzip-ed/whatever?

Also: the guy behind pushshift (https://twitter.com/jasonbaumgartne - mostly known for the reddit dumps/api) has been on a bit of a spree recently (I think he got a grant or something) collecting all the data from several large sites like youtube, telegram, 4chan, etc. For example, the youtube dataset is currently at billions of videos and several billion comments - and ingesting in near real-time. I think it would be awesome if there were several copies of the yearly dumps of these sites around the place so that if he gets sued or whatever the data will survive. Currently for the non-reddit datasets it looks like you need to contact him via email/twitter - probably to lower the chance of being sued for publicly hosting them all.

I'm not sure whether you'd want all his data, but I'm sure there's a subset of it that would be quite useful for AI research (especially since he tends to organise his datasets quite well - e.g. mapping weird formats into sane JSON-type stuff). If it is legally possible for you to not just backup the data, but also *publicly* host it, that would be beyond awesome. That said, this is just a "this would be pretty cool" type idea - not something worth spending your time on if there are currently higher-priority things. If it's as simple as wget-ing some URLs though, and you've got some spare time (and space!), then it might be worth it
-Archivist#7336: he's not been entirely forthcoming with previous data... so good luck dealing with him to get copies.
smallanimalfriend#4355: Ah so you're already on to him - which data did you try to get? Reddit data is public but I think the dumps are a bit behind at the moment. You tried to get Gab data or something? Did he say you needed academic affiliation or something like that? And any clues on the publishing restriction versus reddit? Legal reasons?
Kharr#7888: https://arxiv.org/abs/1605.06431
Daj#7482: Yep, the paper builds somewhat on that idea, just for transformers
Daj#7482: But they only show it in tiny toy networks
Daj#7482: So I wanna see what happens if you surgically fuck around with large LMs
Kharr#7888: Sounds fun :morelayers:
Daj#7482: It would be interesting if you could disentangle different paths as smaller subnetworks and maybe they decompose into more interpretable chunks. Or not. Interesting either way
Sid#2121: How large is large? from what I can tell the experiment is just "train model on toy task - evaluate using only subsets of the layers" right?
Daj#7482: I would want to try e.g. GPT2 models
Daj#7482: or Neo 1.3B
Daj#7482: In the paper they have like three layer networks trained on sorting numbers or something
Daj#7482: Should be pretty easy to write a general script to generate that graph from any transformer
Sid#2121: does it have to be the toy task - or can we try it on LM tasks?
Daj#7482: I'm specifically interested in LMs, not toy tasks
Daj#7482: They didn't test LMs
Daj#7482: for some reason
Daj#7482: which is sus
Daj#7482: They tried some BERT variants in other experiments, but not this one
Daj#7482: So I wanna take pretrained LMs and do that path analysis
Daj#7482: and see if it also shows mostly short paths
Kharr#7888: If you use HF you can mess around with splicing BERT in encoder-decoder and check if it holds: https://huggingface.co/blog/warm-starting-encoder-decoder
Sid#2121: @Daj it should be as simple as inserting something like this (I'd have to double check the indices are correct) here: https://github.com/EleutherAI/gpt-neox/blob/main/megatron/training.py#L358 ```python
transformer_layers = model.module.forward_fns[2:-3]
input_layers = model.module.forward_fns[:2]
output_layers = model.module.forward_fns[-3:]
transformer_layers_subset = transformer_layers[n:m]  # keep only layers n through m-1
model.module.forward_fns = input_layers + transformer_layers_subset + output_layers
```
Sid#2121: on a separate note: anyone ever used RayTune?
Daj#7482: It's more complex because you want to do paths through _heads_, not through layers
Daj#7482: I'm sure it's not terribly difficult, just don't have time to do it myself and would be a good small project for someone
Sid#2121: wait what :thonk: why through heads
Sid#2121: the thing they're trying to investigate is path length, right?
Daj#7482: Read the paper lol. Basically they say that the skip connections allow tons of small paths through heads to cooperate, because straight attention leads to doubly exponential rank collapse
Daj#7482: So you have not just paths of length n_layer, but all paths of length [0...n_layer]
Sid#2121: i mean, selecting a subset of layers is still selecting a subset of heads
Daj#7482: (talk later, call)
CRG#8707: 🤔 <https://www.reddit.com/r/MachineLearning/comments/4klqq9/160506431_residual_networks_are_exponential/> https://cdn.discordapp.com/attachments/729741769738158194/859073434445283348/Screenshot_20210628-161026.png
Daj#7482: So the difference is that lets say the path [0,0]->[1,0] (first layer, first head -> second layer, first head) is a valid path, but so is [0,0]->[5,3]->[6,1]->[9,13]
Daj#7482: And the paper says that among all those paths, those that contribute most to the output tend to be very short (length <3 or the like)
Daj#7482: and removing long paths barely changes performance
StellaAthena#3530: @Daj This seems hard to reconcile with @nostalgebraist's work that shows that there are often long chains of bad answers followed but sudden shifts to good answers
Daj#7482: Yep, that's why I want to test it
CRG#8707: From my understanding, a "short path" would mean the identity path is used in most of the layers except in a few (the layers that change the result), so shouldn't it be compatible?
Sid#2121: would the path [0,0] -> [5,3] be length 1 or length 5?
Daj#7482: Length 2
Daj#7482: It's using skip connections but only goes through 2 heads
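Under that counting, a depth-L model with H heads per layer has C(L, k) · H^k distinct paths of length k: a path picks which k layers it passes through (the residual stream skips the rest) and one head within each chosen layer. A small sketch with made-up sizes:

```python
from itertools import combinations, product
from math import comb

n_layers, n_heads = 4, 3   # made-up sizes for illustration

def n_paths(k):
    # A length-k path chooses k of the L layers (skip connections bypass the
    # rest) and one head within each chosen layer.
    return comb(n_layers, k) * n_heads ** k

counts = {k: n_paths(k) for k in range(n_layers + 1)}

def enumerate_paths(k):
    # Explicit cross-check: a path like [(0, 0), (2, 1)] means "head 0 of
    # layer 0, then head 1 of layer 2", skipping layers 1 and 3 -- length 2.
    return [list(zip(layers, heads))
            for layers in combinations(range(n_layers), k)
            for heads in product(range(n_heads), repeat=k)]
```

Summing over k gives (1 + H)^L total paths, which is why even small models contain huge numbers of mostly-short paths.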
nev#4905: awww thanks
-Archivist#7336: Got history with him, he used to host everything openly then he asked me to host a lot for him then he vanished for better part of 2 years while more and more of his data started being withdrawn and made unavailable
dms#2699: Hi everybody we have a ML question related to creation of a text formatting engine...we're using GPT-J for part of the project and are hoping one of the many benevolent braniacs here can give us a tip via DM! 😀
Sid#2121: you'll have better luck just asking your question out here
dms#2699: @Sid we're looking to create a machine that takes plain text and formats it with various HTML tags. We have 10k+ pages worth of input data (examples of text w/ and w/out tags) to train it with but are having trouble finding a suitable transformer to fine tune.
most seq2seq transformers we're finding seem to be more about text translation versus formatting
Sid#2121: text translation and formatting are kinda the same task, no?
StellaAthena#3530: Why do you think this would be a serious issue?
StellaAthena#3530: Also, I think finding an "appropriate" transformer is overrated... I think you should just take one and try it. Finetuning can overcome a lot of weirdness
dms#2699: This repository seems to be the most popular starting point for fine tuning a seq2seq model, but it requires us to put a target language and source language. Perhaps we could set `English` as both target and source? https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation
dms#2699: Thanks for your help btw 😀
EricHallahan#1051: I doubt that it would matter that you use real languages. You could just use `text w/ tags` and `text w/out tags`.
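That suggestion amounts to a parallel corpus where the two "languages" are untagged and tagged text. A hedged sketch of the jsonl layout the HF translation example reads (the field names `plain`/`tagged` are placeholders for whatever is passed as source/target language):

```python
import json
import os
import tempfile

# Hypothetical parallel examples: plain text in, HTML-tagged text out.
pairs = [
    ("Hello world", "<p>Hello world</p>"),
    ("A title", "<h1>A title</h1>"),
]

fd, path = tempfile.mkstemp(suffix=".jsonl")
with os.fdopen(fd, "w") as f:
    for plain, tagged in pairs:
        # One {"translation": {...}} record per line; the inner keys are
        # whatever the script's source/target language names are set to.
        f.write(json.dumps({"translation": {"plain": plain, "tagged": tagged}}) + "\n")

with open(path) as f:
    records = [json.loads(line) for line in f]
```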
dms#2699: We'll try this!!
dms#2699: SUCCESS
alstroemeria313#1694: The InceptionV3 feature vector WGAN never got that good, I trained it for like over 24 hours
alstroemeria313#1694: https://cdn.discordapp.com/attachments/729741769738158194/859180860250652772/cifar_wgan_inception_out_0235000.png
alstroemeria313#1694: FID=39.68
alstroemeria313#1694: I wasn't expecting it to get visually good but I wanted better FID
alstroemeria313#1694: You can at least make out the ten CIFAR-10 classes in the fakes
alstroemeria313#1694: (It's class-conditional)
UnsupervisedLearner#4148: So I see hints of a lot of giant scale embeddings frameworks going on at Big Tek™ [1, 2]
What are the general properties of these embeddings? What are they achieving? How are they trained? They do not seem like a token embedding or something, so what are they?
Thank you for any insight.
1: https://arxiv.org/abs/2004.08366
2: https://arxiv.org/abs/2104.05158
cfoster0#4356: my assumption is they're designed for the ads business
UnsupervisedLearner#4148: Same.
I'm wondering at what sort of information they encode and how they are trained, because they likely have similar properties to a component of a harebrained giant multimodal scheme I have cooking up
Kharr#7888: Think embedding sizes in the hundreds of thousands, not these tiny 50k embeddings used in LMs. Same with classifiers having enormous amount of categories.
UnsupervisedLearner#4148: Yes, that's actually something I'm looking for. More like-- millions to billions at full scale actually.
Do you have any resources on what these embeddings represent? Are they similar to a latent space, like a large cross product means they're semantically similar?
And do you have any resources on how they actually work with these giant embedding tables?
UnsupervisedLearner#4148: As in, how are they trained and updated?
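One common answer (an educated guess here, not a description of any specific Big Tech system) is that updates to a huge embedding table are sparse: only the rows hit by the current batch are gathered and written back, so the full table never sees dense gradient traffic. A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, dim = 100_000, 16                     # stand-in for a huge embedding table
table = rng.normal(0, 0.01, (n_rows, dim)).astype(np.float32)

def sparse_sgd_step(table, ids, grads, lr=0.1):
    """Update only the rows that appeared in the batch."""
    np.add.at(table, ids, -lr * grads)        # scatter-add; handles repeated ids

ids = np.array([3, 17, 3, 99_999])            # id 3 appears twice in this batch
grads = np.ones((4, dim), dtype=np.float32)
row3_before = table[3].copy()
row0_before = table[0].copy()
sparse_sgd_step(table, ids, grads)
```

At billion-row scale the same idea gets distributed: the table is partitioned across parameter servers and each worker only touches its batch's rows.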
Louis#0144: Probably really useful for KGs
UnsupervisedLearner#4148: I'm building something like a distributed KG actually
When I have more coherent writing on it I'll share
Louis#0144: Pog
Louis#0144: Love me some distributed KGs
Louis#0144: Are you doing what ASCENT did
Louis#0144: From max planck
Kharr#7888: There are some papers if you search around. They're used for recommending products and stuff. Random link from Amazon https://medium.com/apache-mxnet/learning-embeddings-for-music-recommendation-with-mxnets-sparse-api-5698f4d7d8
anonymouser#4675: Hi. Is there a public repo with projects built using GPT-Neo or GPT-J? I want to build a use case with either of these models but I don't know how to get started on anything beyond prompting or basic inference. Any help would be great thanks.
GrimSqueaker#8837: that guy is crazy 😄 (In a great way)
GrimSqueaker#8837: Are they? KGs are nice and compact and semantic and interpretable. And they're a ground truth source for embeddings, not the other way round, usually (as a lazy alternative to custom symbolic features for use in a model).
GrimSqueaker#8837: I have no resources handy;
If I had to guess: LSH, vector space embedding, fuzzy similarity/NN. (e.g. libraries like ANNOY and
https://cloud.google.com/architecture/building-real-time-embeddings-similarity-matching-system
https://ai.googleblog.com/2020/07/announcing-scann-efficient-vector.html
)
You can probably guess a lot about their general architecture by looking at their guides/posts for some frameworks; e.g. Google's TF Recommenders 2 tower approach
https://www.tensorflow.org/recommenders/examples/efficient_serving
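Whatever the indexing structure (ScaNN, ANNOY, LSH), the underlying query is maximum inner product / cosine search over the embedding table; a brute-force sketch of what those libraries approximate:

```python
import numpy as np

rng = np.random.default_rng(0)
items = rng.normal(size=(10_000, 32)).astype(np.float32)
items /= np.linalg.norm(items, axis=1, keepdims=True)   # unit norm: dot == cosine

def top_k(query, k=5):
    q = query / np.linalg.norm(query)
    scores = items @ q                 # brute force; ANN libraries approximate this
    idx = np.argpartition(-scores, k)[:k]
    return idx[np.argsort(-scores[idx])]

hits = top_k(items[42])                # querying with item 42 itself
```

The two-tower setup just trains the query and item encoders so that this search returns relevant items.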
gdawg16#0493: anyone know what shungite is
téo#8356: hey all, sharing a shower thought, happy to discuss: optimizers define the set of possible architectures, e.g. no transformers without Adam - is this true?
Ravna#1831: Plain sgd also works on transformers. It just usually performs worse, but it doesn't mean a catastrophic failure mode like diverging all the time.
téo#8356: I guess `s/define/enable` would be a better statement then
CupOfGeo#1363: Hello everyone nice gpt neo notebook
Kharr#7888: While it does "work" it is very slow and sample inefficient. SGD works for finetuning large pretrained Transformers (which only need a slight nudge toward a new task), but I would not recommend it for training from fresh initialization unless your sample size is huge (to deal with all the noise).
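The difference shows up even on a toy ill-conditioned quadratic: plain SGD's step size is capped by the stiffest direction, so the flat direction crawls, while Adam's per-parameter rescaling moves both. A self-contained sketch (hand-rolled Adam for illustration, not a library implementation):

```python
import numpy as np

# Ill-conditioned quadratic: f(w) = 0.5 * (1000 * w0**2 + w1**2).
scales = np.array([1000.0, 1.0])
grad = lambda w: scales * w

def run(step_fn, lr, steps=500):
    w = np.array([1.0, 1.0])
    state = [np.zeros(2), np.zeros(2)]          # Adam moments (SGD ignores them)
    for t in range(1, steps + 1):
        w = step_fn(w, grad(w), state, lr, t)
    return w

def sgd(w, g, state, lr, t):
    return w - lr * g

def adam(w, g, state, lr, t, b1=0.9, b2=0.999, eps=1e-8):
    m, v = state
    m[:] = b1 * m + (1 - b1) * g
    v[:] = b2 * v + (1 - b2) * g ** 2
    mhat, vhat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return w - lr * mhat / (np.sqrt(vhat) + eps)

# SGD's stable lr is capped by the stiff w0 direction, so w1 barely moves;
# Adam's per-parameter rescaling drives w1 down as well.
w_sgd = run(sgd, lr=5e-4)
w_adam = run(adam, lr=1e-2)
```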
AerysS#5558: I just started using W&B recently. I am training a CNN, and it says my RAM usage is as low as this. I think this is a sign of not utilizing resources well, right? https://cdn.discordapp.com/attachments/729741769738158194/859429247604097104/unknown.png
EricHallahan#1051: Do you have a GPU?
AerysS#5558: Yes I am using a GPU
EricHallahan#1051: Then that doesn't seem concerning at all.
AerysS#5558: So that's normal? I think it's low so maybe I can use a few tricks to speed up the training process
StellaAthena#3530: You typically don't need much CPU to do DL
StellaAthena#3530: That is not telling you anything about your GPU utilization
Deleted User#0000: https://twitter.com/github/status/1409883156333879300?s=20
GrimSqueaker#8837: https://minimaxir.com/2021/06/gpt-j-6b/
(He compares code generation of GPTJ and GPT3 as well as other stuff)
cfoster0#4356: Could this be part of the promised *thing*?
alexyz#3459: Github Copilot uses GPT-3 :openai:
(i think?)
cfoster0#4356: >>> GitHub Copilot is powered by Codex, the new AI system created by OpenAI.
alexyz#3459: Ah
Sid#2121: https://www.cnbc.com/2021/06/29/microsoft-github-copilot-ai-offers-coding-suggestions.html
Ravna#1831: anyone can do similar things by fine-tuning gpt-neo on more code-like datasets
alexyz#3459: Would it be using a tokenizer trained on code?
Sid#2121: > The model at the core of GitHub Copilot, called Codex, is a descendent of GPT-3, a powerful model that OpenAI trained on large volumes of text, Brockman said. Engineers fed the model “many, many terabytes of public source code out there,” Friedman said.
Sid#2121: probably a distillation?
Sid#2121: > The underlying technology won’t be only Microsoft’s to use. OpenAI will release the Codex model this summer for third-party developers to weave into their own applications, Brockman said.
alexyz#3459: OpenAI, releasing models? Wow...
StellaAthena#3530: Note that they don't say it will be free or open source
alexyz#3459: That's true.
alexyz#3459: It could be them just adding it to their API
alexyz#3459: (which is, now after thinking about it, very likely)
Ravna#1831: Before I opened that link I was thinking "finally a usable program synthesis tool maybe?" and expecting that they curated something like a huge dataset of input-output/sourcecode pairs. But no, lol, just yet another code completion tool trained on existing github source files, a scaled-up tabnine.
cfoster0#4356: A little bit on the "is it just copy pasting existing code?" question https://docs.github.com/en/early-access/github/copilot/research-recitation
AI_WAIFU#2844: For once we're a bit ahead of the curve.
StellaAthena#3530: I'm at a small conference about AI security and someone from a FAANG company just admitted that a technology they created was cloned and deployed by a competitor with a model stealing attack on the API.
StellaAthena#3530: It happened three weeks after launch and the competitor undercut their cost by 50%.
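A model-stealing attack in its simplest form: query the victim's API on attacker-chosen inputs and fit a clone to the responses. A toy sketch with a linear "API" (everything here is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "victim API": a linear model behind an opaque predict() call.
W_victim = rng.normal(size=(4, 3))
def victim_predict(x):
    return x @ W_victim

# The attacker queries the API on random inputs and fits a clone by least squares.
X = rng.normal(size=(200, 4))
Y = victim_predict(X)
W_clone, *_ = np.linalg.lstsq(X, Y, rcond=None)

x_test = rng.normal(size=(10, 4))
err = np.abs(victim_predict(x_test) - x_test @ W_clone).max()
```

Real systems are nonlinear, so the clone is a distilled student trained on (query, response) pairs, but the economics are the same: the attacker pays only inference prices, not training costs.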
Chlorokin#6581: Hopefully this isn’t the thing.
ethan caballero#6044: I confirm codex isn't the thing.
Louis#0144: What thing
nz#9710: *that* thing
Louis#0144: Wtf
Louis#0144: This thing?
cfoster0#4356: It's been heavily rumored that OA is planning to announce something big towards the start of July
cfoster0#4356: Or at least, rumored around here
Louis#0144: Huh
StellaAthena#3530: You know you're a *big* deal when you show up in the chat at someone else's talk and people start directing questions to you instead of the speaker
aero#1357: :Ragekin: how can I compete with microsoft
aero#1357: maybe a good alternative can still get popular, lot of developers prefer open source and don't like analytics / data collection (and I bet ms will be all over that)
aero#1357: also interesting legal question: does openai own all the code they trained on and whats the legal situation using that code? It's based off other people's code that you don't own
cfoster0#4356: I'm pretty sure they trained on open source repos, so no to the first question
aero#1357: going to be interesting whenever a lawsuit related to gpt/copyright does come up, and knowing how much people love to sue, it's inevitable I think
cognomen#6297: might end with a rather big `LICENSES.MD` for users
45#2247: might be late to the party but wdu guys think about the github OAI copilot thing
Ravna#1831: From what's shown on its website and tweets, pretty unimpressive so far.
mega b#6696: I don't think its horrible, I think the idea is good when the inference is very fast
Ravna#1831: I think someone can fine-tune gpt-j and reach pretty similar results
mega b#6696: right
chilli#5665: :thonk:
mega b#6696: I would love to use a GPT-J powered code editor
Spy#9778: What thing was promised?
Ravna#1831: tabnine is a tiny 340M GPT-2
Ravna#1831: doesn't take much to make a huge leap from that
AI_WAIFU#2844: I wonder if this will drive even more demand for GPUs. Since every tech company is gonna want an on-prem copy of this.
AI_WAIFU#2844: Yeah but tabnine sucks
AI_WAIFU#2844: babby model
cfoster0#4356: I don't know what it is, just something significant
Fessus#9563: The pile only ended up using 95G out of 630GB of github data that was collected. Would be curious to see how a 6B level model exclusively trained on code would do
mega b#6696: i. would. love. this
mega b#6696: mostly to experiment and play around with
Fessus#9563: There's a realistic chance that too much focus on code could actually make performance worse
mega b#6696: ooh right
Ravna#1831: 6B model on 600G data is way off from the optimal compute curve
Ravna#1831: let's do 600B model on 600G data
Ravna#1831: :berk:
Spy#9778: By GitHub or Microsoft or who?
cfoster0#4356: OpenAI
Spy#9778: Ah okay
Spy#9778: Exciting
zphang#7252: everyone here playing scoop roulette :virgin:
Louis#0144: What’s the leading theories
Xirider#4010: They say they used terabytes of data
Louis#0144: What happened
Louis#0144: It’s probably that they’re adding DALL-E to the api
Louis#0144: lol
Louis#0144: Honestly
StellaAthena#3530: Nicolas Carlini
Louis#0144: Damn
Fessus#9563: My bet was on a very large model which mixed in non-text data
Fessus#9563: In a very DALL-E way
Spy#9778: Given the whole ai dungeon fiasco they'd better have a next generation nsfw detector ready to go
Louis#0144: lol
EricHallahan#1051: I am *highly* skeptical that it would be DALL-E.
Louis#0144: GPT4
Louis#0144: is the safest bet
Louis#0144: lol
EricHallahan#1051: Nor GPT-4.
Ravna#1831: GPT is free-form with a lot of degrees of freedom, while Dall-E is strongly restricted to the "one blob of text, then one picture only" format. I wish someone could come up with a multimodal setting with similar level of freedom as GPT.
Louis#0144: Why
EricHallahan#1051: Because I don't think it would be called GPT-4.
EricHallahan#1051: ¯\_(ツ)_/¯
Louis#0144: Oh but you think a LM is possible
Louis#0144: ?