data
stringlengths 115
7.61k
|
---|
triggerhappygandi#0001: Also they talk like "chungus, 👏 Keanu wholesome meme" and it is the opposite of funny
Louis#0144: Eg easier to search for threads
triggerhappygandi#0001: 4chan has the potential to become a big mainstream platform
triggerhappygandi#0001: If only it stops looking like 2003
Louis#0144: Yeah
Louis#0144: I agree
Sahl#0630: What makes reddit more conducive to toxicity than 4chan
Sahl#0630: Siloing of communities?
Louis#0144: It’s the echo chamber effect
Louis#0144: The notion of upvotes
triggerhappygandi#0001: Yeah. Was gonna mention both
triggerhappygandi#0001: People get downvoted for asking why most people think what they do
Sahl#0630: one thing to study would be WT.Social
Louis#0144: 4chan is significantly more democratic imho
Sahl#0630: I don’t know how big it’s gotten yet but seems to be interesting concept
triggerhappygandi#0001: I was once banned from r/40klore for arguing against a character lol
Louis#0144: Image boards are massive in SEA
Louis#0144: Atleast they used to be
Louis#0144: Like pre Facebook and stuff
Louis#0144: They never took off in the west |
Sahl#0630: oh I mean WT.Social not chans
Josh#5264: Hey all. Lots of new folks here---I work on AI safety at OpenAI. Before folks kick off the GPT-3 replication training run, would anyone be open to talking about general challenges/opportunities in achieving AI safety? There are some things it might be good to share about what to do for the long run.
Louis#0144: @Ambient
Josh#5264: @Daj it might be good for us to catch up sometime in the next month or so.
Louis#0144: Ambient does alignment and ethics research at Georgia tech
Louis#0144: I’ll ping him
Louis#0144: (We’re in a slack together)
triggerhappygandi#0001: Considering the bad press OpenAI got for gpt-2 followed by the realisation that it wasn't really all that ominous, do you still see the same problems with gpt-3 or is there something extra? @Josh
Josh#5264: I think the same problems, along with some other ones, are still present in GPT-3. It's complicated.
Josh#5264: Maybe one way to frame it is---there are a set of vulnerabilities to harm created by tools like GPT-2 and GPT-3 being generally available. Whether or not those vulnerabilities get exploited depends on circumstances external to the existence of the technology---who's aware of it, who's trying to use it, what additional infrastructure do they build to use the models for those purposes, etc. All of the vulnerabilities are real and as the capabilities of models go up over time, the magnitude of potential harm increases, and as people develop additional infrastructure for serving the models over time, the likelihood of exploitation increases. We're not yet at the stage where on a day-to-day basis GPT-3 is a differentiating factor in elections, but we could conceivably get to the point where tools like large language models are a nontrivial factor in shaping public conversations through astroturfing.
triggerhappygandi#0001: Doesn't the fact that it's so huge already limit the harm that could be done with it?
triggerhappygandi#0001: Even if it's open sourced, you can't just put the weights on a $30/hr AWS instance
triggerhappygandi#0001: But then again I'm thinking of individuals not an entire group of people trying to exploit it
Sahl#0630: Do you think that releasing the weights of GPT-3 would be net positive or negative (including opportunity costs)?
Sahl#0630: I’m not sure if the attack surface that is opened up outweighs the benefits of access
Josh#5264: I think releasing the weights of GPT-3 would be net negative. The main thing here is about the line in the sand that has to get drawn on how increasingly-capable AI models are deployed---as capabilities increase, deployment needs to involve increasingly-better containment.
triggerhappygandi#0001: But doesn't its size already limit the number of potential users?
Deleted User#0000: well, at least not affordable for me
triggerhappygandi#0001: Most people who create an instance large enough to fine tune gpt-3 are probably people like us.
Josh#5264: @triggerhappygandi re---hugeness limiting harm---it does for now, but infrastructure to deploy it cheaply will develop over time. |
triggerhappygandi#0001: Ah yes. But by then gpt-3 would be the new gpt-2.
Sahl#0630: The bad actors that we are trying to keep from getting these models already have access to cheap human labour. Do the models change the efficiency of astroturfing that greatly?
triggerhappygandi#0001: People would be able to train it from scratch much cheaper anyway
Josh#5264: @Sahl I would guess not yet. But I expect that to change eventually. There's also an element of scalability, coordination, and repeatability achievable with AI systems for astroturfing that might be much harder to get with humans. I expect this will gradually change the dynamics of how astroturfing campaigns get conducted.
Sahl#0630: I feel like agents with a lot of resources would be able to replicate the models anyways, so all releasing the model does is empower smaller agents.
Sahl#0630: This could, of course, make smaller agents a threat
Sahl#0630: Like terrorist groups and the like
Sahl#0630: But it shouldn’t empower say, China or Russia
triggerhappygandi#0001: For example, I am working on making chatbots that can drive the conversation if the user isn't responsive enough, such as by incorporating ice-breakers and one-liners. It wouldn't take much effort to turn it into a troll, as was evident by the microsoft chatbot on twitter. If that's possible with even Bert, then it's possible with everything better than Bert.
Josh#5264: It also changes the speed with which different large actors make progress. Research and development can have false starts and failures, but if a launching point is available off-the-shelf that eliminates a lot of the hard work, that makes things a bit easier.
Sahl#0630: But they already see the success of the model and the paper gives the means to replicate it
Sahl#0630: If you release the methodology and results publicly but the model privately, they can recreate the model
Sahl#0630: If both are private, I’d understand
triggerhappygandi#0001: And as you said, the training cost would be reduced exponentially in 3-4 years. By then GPT-3 would be something even students can create. Doesn't this make the excercise futile?
Josh#5264: Months and years of time delays in proliferation can make a pretty substantial difference in impact on the world. One of the things worth observing here is that a lot of safety issues in AI---when the AI doesn't directly control physical infrastructure---come down to what expectations people have from interactions with AI or the ambient information landscape. That is, cultural context matters a lot for whether the AI tools can differentially enable certain kinds of harms. Over time the cultural context can be changed to innoculate against some of the AI-related harms.
Sahl#0630: We are already at the point where we cannot trust information as we hear it from many sources. This model simply decreases the cost of such attacks, not the way they are carried out. This should only change frequency of attacks rather than strength.
Sahl#0630: I’d argue the culture is there already.
Sahl#0630: However, I think the benefits of such a model greatly outweigh this one failure case, unless I’m missing other cases
triggerhappygandi#0001: I fail to see how delaying it has any lasting effects though.
Josh#5264: I do think the availability of models like GPT-3 might change the way certain kinds of attacks are carried out. For example, GPT-3 based chatbots combined with infrastructure for long-term memory could be used to develop highly-personalized relationships with people and in turn highly-personalized motivation towards particular philosophies or actions that serve the attacker's interests. |
Sahl#0630: They already use people for that
Louis#0144: Is slack down?
Sahl#0630: That’s part of the radicalization playbook, no?
Josh#5264: Difference in scale would be a difference in kind.
Deleted User#0000: we have to automate the world
Louis#0144: I’m 99% sure slack is down
Sahl#0630: Yes, but it’s only a difference in scale
Sahl#0630: As a result, the culture is already developing
triggerhappygandi#0001: I can understand this concern, but how do you balance it with the exponential growth and availability of compute?
Sahl#0630: Also, such models can combat radicalization
Sahl#0630: By connecting with people, offering mental health support, etc
Sahl#0630: I think cases like this will greatly outweigh attacks in the future
Josh#5264: @triggerhappygandi There are ecosystem-level approaches that could be taken. What if cloud providers applied some level of scrutiny to AI models being trained on their instrastructure? This feels plausible (and/or regulatable).
Sahl#0630: This doesn’t stop agents like China or Russia from simply buying computers
Sahl#0630: It may stop smaller agents though
Sahl#0630: However I’d argue smaller agents are way less of a threat
Sahl#0630: Even if they were more capable
Deleted User#0000: should give a try to tpu from google
triggerhappygandi#0001: Also, Russia/China can replicate GPT-3 by themselves too with the knowledge from the paper.
Josh#5264: @Sahl The argument "won't stop China or Russia from doing X" as a basis for not taking a precaution strikes me as a weak one---it's equivalent to saying "we should not coordinate to prevent X because someone might do it anyway," and that kills the whole goal. |
triggerhappygandi#0001: I still believe the benefits outweigh the negatives
triggerhappygandi#0001: As was evident with GPT-2
Sahl#0630: No, my argument is that it won’t significantly affect the decision of agents who matter most
Sahl#0630: But will significantly affect smaller agents, positively and negatively
Deleted User#0000: guys
triggerhappygandi#0001: But then again, 3 looks extremely fun to mess around with
Sahl#0630: But if you look at the benefits of smaller agents, I believe they outweigh drawbacks
Deleted User#0000: i think that is possible make an AI almost like the human brain
Deleted User#0000: i mean
Deleted User#0000: a general one
Sahl#0630: Like mental health support, deradicalization, and many economically beneficial activities
Josh#5264: @Sahl Those benefits can be obtained in ways other than open-sourcing the model, though.
Sahl#0630: True, that’s why you consider opportunity cost
Sahl#0630: Such as the API
Josh#5264: Many AI models can be made available in ways that have appropriate containment and control to prevent harmful outcomes.
Josh#5264: Right.
Sahl#0630: However, I believe the value of releasing the model - value of the API is positive
Sahl#0630: The API doesn’t scale well enough to the potential economic benefits
triggerhappygandi#0001: On that topic, when can we get public access to the API?
Sahl#0630: It does do well to cut into the economic value without incurring cost |
triggerhappygandi#0001: I have filled the form like 6 times now lol
Sahl#0630: But opening the model allows way greater value with slightly more cost
Josh#5264: @triggerhappygandi As much as I would love to answer it probably would not be appropriate for me to comment on that.
Sahl#0630: Probably not the right person/place for this
Sahl#0630: I would love access but processes shouldn’t be bypassed like this
triggerhappygandi#0001: Yeah i guess
triggerhappygandi#0001: Greg Brockman did say something along those lines on twitter though
cfoster0#4356: I think right now we're in a tenuous state where, on the one hand, general AI systems are becoming more possible, but on the other, very few researchers are able to work on avoiding disastrous outcomes from them
Josh#5264: It's somewhat less of a process bypass than you might think, but I think it's better if I engage with EleutherAI folks primarily on AI safety topics and don't mix in resource exchange opportunities. I don't want it to be the case that discussions I have with people here are colored by weird incentives.
triggerhappygandi#0001: In any case, I hope this doesn't become the norm in other areas of research. I would hate it if say Jukebox 2 was not accessible in a similar fashion
Sahl#0630: @Josh I may be misunderstanding, but I feel like OpenAI focused too much on costs rather than doing a proper cost/benefit analysis. I think it’d be important to get perspectives from economists (which I am not), as they should be able to give a more accurate estimate.
Sahl#0630: I’m sure you did a cost/benefit, but I feel like people don’t see the benefits perhaps as much as they should.
Deleted User#0000: guys i've got an idea
Deleted User#0000: make a channel for beginners
Louis#0144: No
Louis#0144: This isn’t a beginner friendly server
Louis#0144: It’s more of like beginner tolerant
Deleted User#0000: i'm beginner, sure that most of the people here are
Louis#0144: That’s fine
Louis#0144: They can watch |
Louis#0144: If you look into most of the channels, people post very high skill research questions. Most of the time beginner questions are ignored
Louis#0144: I think having a beginner channel would too drastically change the landscape
Deleted User#0000: hmmm
Louis#0144: Not every ML discord needs to be for beginners
Deleted User#0000: only a channel
Josh#5264: @Sahl It's hard for me to comment on this because I don't want to ascribe a particular position to the company as a whole. But I will note from my own work that balancing costs and benefits for this kind of technology is particularly difficult because of the high uncertainty about both. Quantifying and measuring costs and benefits is a nontrivial problem. Where uncertainty about the costs is high, but long-range ecosystem impacts from a decision trend towards increasing the likelihood of catastrophic outcomes, I tend to be quite conservative about where I'd put the trade-off.
Deleted User#0000: think that it would just be optional
Louis#0144: Idk, it’s not my decision at the end of the day. You should ask one of the admins.
Louis#0144: They share similar views to me though since this has come up before
rivalset#4984: maybe it could be easier to ignore them if they would be in a separate channel lol
rivalset#4984: it could be like beginner questions and german memes
Louis#0144: Maybe or maybe it would drastically change conversations that occur in all channels
Deleted User#0000: xD well just a idea, nothing else
Louis#0144: It’s hard to predict these things
Sahl#0630: That’s fair. Perhaps I underestimate the costs that the precedent sets. However, I think that getting advice from economists is important for this sort of thing.
Josh#5264: @Sahl Strong agree that intuition is no substitute for well-constructed models and clearly-identified assumptions, and experts like economists can be helpful on this.
Sahl#0630: They’re in the business of valuation after all :)
Sid#2121: every new channel here costs time from people who organize / moderate this discord. We're here to do research. There are lots of other discords more appropriate for beginners to ML. You're happy to stay and lurk, but we're not here to teach.
rivalset#4984: maybe you could make a list of other servers that you recommend for beginners
Louis#0144: #communities |
Louis#0144: Yannics server is beginner friendly
rivalset#4984: I think bmk also recommended fast.ai
Louis#0144: Yeah
triggerhappygandi#0001: It is.
Louis#0144: They’re good too
Deleted User#0000: EXACTLY
Sid#2121: if someone has a link to the fast.ai server i'll post it up in #communities
Deleted User#0000: thank you
chilli#5665: Please don't
chilli#5665: Make a beginners channel
Sid#2121: are you saying "please don't make a beginners channel" or "please don't. Make a beginners channel" lol
Deleted User#0000: nonono
Deleted User#0000: don't make the server beginner
chilli#5665: Please don't make a beginners channel
chilli#5665: Lol
goolulusaurs#1571: Please don't take this with offense, but speaking of weird incentives, I am just curious if you are here speaking with us of your own accord or is OpenAI paying you to do so?
Sid#2121: yeah, we're not going to nw
rivalset#4984: this is the reason nlp is hard
Sahl#0630: oh god flashbacks to evaluating @Louis’s language model
Deleted User#0000: then recommend a server beginner friendly |
Louis#0144: LMAO
Sahl#0630: I don’t think that’s a weird incentive, it only makes sense both as a company looking to make money and as an organization concerned about ethical implications
triggerhappygandi#0001: Has anyone here tried applying for AWS research grant?
goolulusaurs#1571: Maybe its not a weird incentive, but it is an incentive so I think its worth asking either way.
triggerhappygandi#0001: They ask for a link to the pricing calculator, which doesn't take in custom instances from what I've seen. Like what if I want to apply for anything more than 16 GPUs
rivalset#4984: Is that only for academic researchers or is it like tfrc?
Josh#5264: Purely of my own accord. Neither my manager nor anyone else at OpenAI has asked me to do this.
triggerhappygandi#0001: I will know when they accept/reject my application
triggerhappygandi#0001: Though tfrc is infinitely more friendly
triggerhappygandi#0001: AWS applications take months to clear
Sphinx#2092: Of course.
Sphinx#2092: Frugality is one of their LPs lol
Josh#5264: There are a few folks in OpenAI who do know about EleutherAI and every now and then chat about it a little bit, mostly because this is a pretty accessible group for understanding the implications of language model development and sharing.
triggerhappygandi#0001: Does Ilya know us too?
Ambient#0001: Hey Josh, appreciate the thoughts above and look forward to discussing some of the practical issues in #alignment-general, and would love to hear about anything you guys are working on/thinking about
Ambient#0001: Disentangling incentives is an important point
Josh#5264: @triggerhappygandi I don't really know to what extent Ilya is aware of this group. I'm sure he's seen something or other though (since there's a thread on Slack every once in a blue moon).
Josh#5264: As a general request I'd prefer not to get asked about the specific opinions of other folks inside of OpenAI, since it wouldn't be appropriate for me to represent them.
triggerhappygandi#0001: Damn. Soon this server will be among the highest echelons lol
Also, I understand. Just was curious since you mentioned it. |
Josh#5264: For sure, for sure.
triggerhappygandi#0001: Anyway, do you see these issues coming up with Jukebox 2, or very complex gym environments too, as have been raised with GPT-3?@Josh
Josh#5264: @triggerhappygandi I don't have state on future Jukebox-related releases at the moment, though my general view---and the recommendations that I make to folks in OpenAI---holds that narrower models usually have more-easily identified risks, and as a result, are easier for us to release or commercialize. I think I would personally have a hard time arguing that a Jukebox sequel would pose a serious issue unless new evidence was brought to my attention.
triggerhappygandi#0001: Hypothetically, lets say @Josh
triggerhappygandi#0001: That such a model is being produced somewhere
triggerhappygandi#0001: It _could_ be used to copy human voices too could it not?
Josh#5264: (Hold on just a little bit, got a phone call.)
Josh#5264: So---that's a good point! The kind of questions I would have about that sort of model would relate to the data distribution and some measurements about its capabilities towards that purpose specifically.
tin481#8570: @Josh I feel an important point here is trust. The people in this discord are quite alignment-leaning, and so receptive to arguments about possible harms of big models. Instead, there's more disagreement about whether "allow OpenAI to monopolize large models" is a good alternative. Research on these topics seems very important, even crucial, for the future of humanity. Can we really rely on OpenAI to do all the work?
Josh#5264: @tin481 I appreciate getting down to a core issue here. The way I see it, the choice isn't about whether to allow OpenAI to monopolize large models---it's about whether large model development and distribution is done in a way that allows for containment and control. OpenAI is not, and will not be, the only actor in this space---but what choices do we make as an ecosystem, and when and how do we deploy these technologies into the world? These are the questions I am interested in, and interested in discussing with folks here.
Louis#0144: @Josh whats your opinion wrt safety about people poisoning your datasets for product placement? This is just a one off conversation I had a few moments ago, but it would be reasonable for like a widescale robots.txt to replace key words with products (in exchange for $) whenever someone is crawling to make a dataset
Louis#0144: I would almost imagine something like this already happens at a small scale but as GPT3 and the like make it to more end users it might become more common
Louis#0144: It doesnt need to be product placement, I'm mostly curious about widespread data poisoning
Josh#5264: @Louis On an intuitive level that feels very plausible to me. The way I see it, dataset hygeine is foundational to any AI safety effort---knowing exactly what you are putting it, and as a result, what you are incentivizing. Poisoning is one of the things that we'll all have to look out for because of the potential to distort specifications.
Louis#0144: did you notice any small scale poisoning efforts when crawling before?
Josh#5264: I haven't personally done much crawling and so I haven't seen this or gone looking for it.
Louis#0144: ah ok
Louis#0144: do you guys have data quality and data lineage checks in place?
Louis#0144: Im kinda just curious about what AI companies at the scale are doing to prevent poisoning or ensure high quality data
nz#9710: Thanks for mentioning it, I didn't know there was one. |
triggerhappygandi#0001: How do you tackle it in a time when datasets are not curated but crawled through the internet?
tin481#8570: Containment and control — who’s control? I worry about the growing divide between OpenAI and the research community. Aligning an AGI is monstrously difficult. GPT-3 is among the most promising paths to AGI. Yet, the structure of the API restricts most research to one private lab. The wider community needs access to finetuning, architecture changes, a say in future large scale projects, the chance to do fundamental experiments. These things come at significant cost! But if you want centralized control, they’re costs you’ll have to bear. This kind of coordination means building an *Institution*, capital I. One with a democratic element and a significant amount of transparency. One very different from anything that exists today. Otherwise, I’ll fall back on traditional, decentralized academia.
bmk#1476: @Josh echoing the sentiments above: how are people supposed to believe in OA's alignment work when it's been shown that a large number of people at OA don't believe in the seriousness of, say, orthogonality?
asparagui#6391: @Sid https://discord.com/invite/xnpeRdg
triggerhappygandi#0001: @bmk what is orthogonality?
triggerhappygandi#0001: in this context
Sahl#0630: Basically, that intelligence is orthogonal to terminal goals
Sahl#0630: https://m.youtube.com/watch?v=hEUO6pjwFOo
goolulusaurs#1571: I disagree with orthogonality too.
goolulusaurs#1571: As its usually formulated, as goals and intelligence being totally independent, its seems false to me. There are examples like "could a worm have the goal of building a nuclear power plant", that show to me that some goals don't make sense without enough intelligence to actually formulate those goals.
Sahl#0630: In nature, certain goals can be more common
Sahl#0630: That doesn’t mean orthogonality is false
Sahl#0630: Imagine you had a neural network linked up to a keyboard
Sahl#0630: It has one hidden node
Sahl#0630: Just one
Sahl#0630: But you trained it based on how close it got to creating a nuclear power plant
Sahl#0630: Then its terminal goal would approximate that
Sahl#0630: While it wouldn’t really be able to accomplish it
goolulusaurs#1571: But training it assumes a bunch of other stuff.
goolulusaurs#1571: Another example. Imagine you had a network that was identical to the one that was trained for GPT-3, but instead it was trained to only ever output "5". No one would be calling it intelligent, but the only difference is the objective it was trained for. |
Sahl#0630: It could be intelligent
goolulusaurs#1571: And what positive evidence is there in favor of orthogonality?
Sahl#0630: If it were really intelligent, it’d kill humanity, stabilize the world around it, then output 5
Sahl#0630: If it were dumb, it’d output 4 even though its terminal goal is outputting 5
Sahl#0630: Same goal in all agents, different levels of intelligence
dopa#3178: if it was superintelligance it would output 42
goolulusaurs#1571: But my point is when the goal is sufficiently complex a less complex agent might not even have the capacity to formulate or hold that goal.
dopa#3178: to me, if there is complex goal, there is no single answer, no global optimal solution, and solution is not verifiable
Sahl#0630: Agents are an abstraction: training a network to accomplish a goal makes that goal its value function, but you can’t recover the value function from the network afterwards
Sahl#0630: The value function and the terminal goal isn’t “real”
Sahl#0630: But the network will behave as if its striving towards that goal, however badly
Sahl#0630: You can also make an agent with an actual value function
Sahl#0630: The value function can be computed by something else
Sahl#0630: And fed into the agent
goolulusaurs#1571: But you have to already have something complex enough to represent that goal to train the agent. That doesn't make the intelligence and the goal independent, it just moves the intelligence need to the person setting the goal for training.
Sahl#0630: The value function could be random and this will still hold
goolulusaurs#1571: To go with your example maybe you could train a single neuron to try to build a nuclear power plant, but you couldn't train it to try to achieve a goal even more complex than what humans can formulate.
Sahl#0630: A smart enough agent will attempt to replace the random value function with the max value
Sahl#0630: In effect, the value function isn’t the value function, it’s where the output of the value function enters the agent
Sahl#0630: So the terminal goal is always to maximize what the value entering the agent is |
Sahl#0630: Independent of intelligence
Sahl#0630: But when the agent approximates a value function or has it within itself, then their terminal goal is maximizing the value function
Sahl#0630: Which can be whatever
Sahl#0630: (see wireheading for case 2)
triggerhappygandi#0001: So orthogonality just means 0 alignment here?
cfoster0#4356: This is a good discussion. Let's move to #alignment-general
bmk#1476: But Sama doesn't just disagree with the strong interpretation. He seems to think that if you throw enough intelligence at it, it'll automatically learn to be moral or something
3dprint_the_world#6486: but the orthogonality thesis isn't about *goals*, it's about utility functions.
a worm could absolutely have a utility function of building a nuclear power plant.
you could easily make a worm-like robot and program its utility function to return 1 when it has a nuclear power plant and 0 otherwise.
3dprint_the_world#6486: the word 'goal' is a bit problematic as it implies something you're consciously aware of, can conceptualize in your mind, and can build a plan to work towards. which, sure, a worm can't do.
3dprint_the_world#6486: (this is why I think it's nonsense to say biological evolution has any 'goal', btw)
3dprint_the_world#6486: basically, the orthogonality thesis is that any level of intelligence is compatible with any utility function (within reasonable limits).
the case of low-intelligence being compatible with any utility function is trivial.
the only contentious bit is that of high-intelligence being compatible with any utility function (e.g. paperclip maximizers)
Sid#2121: Can you link me to where he writes about this?
bmk#1476: this was during the meetup
thenightocean#6100: and we weren't allowed to record it unfortunately
paws#3311: You met sama?
bmk#1476: it was a vidcall |
bmk#1476: SSC meetup
bmk#1476: but they asked us not to record it
Ken#8338: A question to ponder from Sam Altman (OpenAI) to get you thinking jump started in 2021: "If an oracle told you that human-level AGI was coming in 10 years, what about your life would you do differently?" https://twitter.com/sama/status/1346141592344612864
tin481#8570: He was pretty discrete. Didn't say anything that wasn't already public, I think.
bmk#1476: Not much because that's close to what my prediction is anyways
Ken#8338: I guess the question then becomes what are you doing now? 🙂
bmk#1476: This
bmk#1476: I am doing this
bmk#1476: Where this refers to Eleuther
45#2247: sam's & max hodak's tweet talking about AGI in 10y, pple at OpenAI leaving
45#2247: I'm more and more suspicious about some private info they have we don't
j o e#4696: I think if Sam knew there was a non-trivial chance of anything remotely AGI related in 10y he wouldn't have posted that Tweet
Ken#8338: If people at OpenAI really thought that AGI was 10 years away you think they would have more incentive to stay???
45#2247: tha'ts so obviously trying to coin the idea being fakely curious
45#2247: like, i have private info about demis hassabis & forecaster @ OpenAI predicting AGI in 10y
j o e#4696: but don't you think he'd face repercussions for posting something like that, even if its disguised as curiosity?
j o e#4696: people might read the bluff
j o e#4696: like you are now
j o e#4696: its hard to say
45#2247: possible goals of this tweet: |
- make people talk about the possibility of AGI in 10y
- actually learn about what people would do differently
cfoster0#4356: Also it's a great way to shill your own bags
45#2247: Like, if you were to release GPT-4 in 2 months what would you say?
j o e#4696: true, tin foil hat moment here but what if it came from the top and they wanted to get public opinion from the community without making an official press release?
paws#3311: Yeah I think he's excited about some things/results he saw some from gpt4
j o e#4696: if you make a press release you get everybody worked up / people pushing for red tape
cfoster0#4356: *If* you're at OpenAI but don't think it'll be the driver behind that near term AGI or think it's going along a risky path, you might have an incentive to leave.
bmk#1476: I think it's much more likely they just have no idea what they're doing lol
bmk#1476: Being incompetent is a lot easier than keeping a secret
bmk#1476: And from what Sama and other OA members have said, i don't have a lot of confidence in OA
paws#3311: Maybe I'm too naive, but I sorta thought the string of researchers leaving was because all of the people genuinely care more about the alignment and ethics problem o.O
45#2247: hypothesis 1: clark & amodei are ethical people. they see sama doing gpt-4 things, actually wanting to AGI any% & decide "nono we do agi super safu"
hypothesis 2: amodei see gpt-4 and thinks "no that not agi, people from open ai, let's build real agi outside" & clark & olah leave with him
bmk#1476: I know some people like Christiano are working on interesting things but it doesn't help if nobody else in OA is interested in those
goolulusaurs#1571: I think a lot of people have the sincere belief that there will be agi with in 10 years. I found this old blog post by Shane Legg in 2011 predicting proto-AGI by 2019 and AGI by 2028. http://www.vetta.org/2011/12/goodbye-2011-hello-2012/
bmk#1476: I think 1 is more likely, but also clark doesn't seem into alignment as much as policy (i guess that's a "duh", from his job title)
bmk#1476: If gpt3 counts as protoagi, which imo can definitely be argued for, then the prediction is spot on
45#2247: well he's into solving the whole ai safety problem, not alignment which is technical by def. true
bmk#1476: But he focuses on policy side of safety |
goolulusaurs#1571: Yes, Or MuZero, or Impala, or ... etc.
bmk#1476: While most of Eleuther isn't very into policy because we think it's too short sighted
paws#3311: Wait so y'all think gptx 2028, will be a multimodal all purpose almost agi system?
bmk#1476: I don't think it will necessarily be multimodal, no
bmk#1476: I'm skeptical of multimodal
45#2247: ELI5 multimodal
45#2247: like, few-shot learning is multimodal given the right prompts?
bmk#1476: Both images and text
bmk#1476: And audio and etc
45#2247: they're already doing images and text with gpt things no
paws#3311: Trying
j o e#4696: I get the feeling the first systems that the AGI argument starts over will be multiple systems stitched together in an openCog fashion (except it works), of which gtpx will be a component part
paws#3311: We don't know about success until they release
paws#3311: Does anyone think they are moving too fast 🤔
45#2247: well there's this https://openai.com/blog/image-gpt/
bmk#1476: Henighen scaling paper did do limited multimodal
bmk#1476: But it wasn't exciting
paws#3311: Yeah I meant the crossmodal large scale transformer, it'll be like lxmert
paws#3311: Lxmert was very interesting
paws#3311: But I think if ilya's words in the batch are anything to go by, a multimodal model with a human in the loop training regime sounds really really powerful o.O |
45#2247: hum so crossmodality is interesting because it's pre-trained on both images and text?
paws#3311: I can't believe I'm saying this, but it'll be a system that can understand memes (ideally)
bmk#1476: Awesome, we can finally have an Eleuther model
45#2247: a bit off topic, but assuming I wanted to spend 30m a day for 6m learning more about NLP to contribute to meme... hum Eleuther, do you guys have a list of stuff I should look into?
bmk#1476: What's your background?
45#2247: master's in AI, 6m doing RL/Alignment research, self-studied RL for a year, did cs231n (not the last part), now doing internship in computer vision
45#2247: I don't have much pytorch/TF/infra practical experience, but I think that with my day job I'll pick it up
bmk#1476: Lol we need you working with us on alignment then
bmk#1476: We need more alignment people
45#2247: my main conclusion from doing alignment research is that it's pretty hard though, not really motivating
45#2247: like, I feel that learning about NLP and helping with GPT things is more stimulating, even if goal is to come back to alignment later
bmk#1476: But we care a lot about Alignment here
bmk#1476: Lol fair
bmk#1476: We really wish we were doing more alignment tho
45#2247: well you wish
45#2247: and maybe connor too
45#2247: & ok 3d printer also
45#2247: & ok maybe everyon
bmk#1476: Given what I've heard about OA, the concentration of people who care about Alignment (not policy) here is probably higher than in OA lol
paws#3311: Can someone define alignment for me, I want to see if I understand it correctly o.O |
45#2247: maybe we could start with your definition ?
45#2247: I guess "caring about alignment" could be "being really concerned about making things go right". Are we saying that people at OA don't care about making alignment research? Is that because they're optimistic about the outcome? Is that because they have longer timelines?
paws#3311: I sort of think of it in the meta sense of asimovs rules for ai and irobot movie, you have some rules that you want the ai to follow but you don't know if it's going to do that, so ai alignment is trying to glean these hidden attributes and also where it's true values/principals lie in and ofcourse to make the model more aligned (literally) to the intentions/ values you want to imbue it with
45#2247: actually I think reading this first could help with the debate https://www.alignmentforum.org/posts/ZeE7EKHTFMBs8eMxn/clarifying-ai-alignment)
paws#3311: Alright thankyou :)
bmk#1476: by alignment i mean specifically not policy stuff
bmk#1476: OA seems to care a lot about policy
bmk#1476: but only a small number of people like christiano are doing non-policy alignment
45#2247: but what about the other people doing NLP / RL, the GPT-3 authors, etc. they're technically not making progress but they "care" about alignment. it's just that OA won't put 90% of their staff into alignment research bc they need to keep up with capabilities
bmk#1476: i mean it feels like when most OA people talk about safety, they're only talking about policy things
45#2247: ok maybe I haven't talked to OA people enough
bmk#1476: even just signalling whether they care
bmk#1476: tbf ive only seen a very biased sample
bmk#1476: but it feels like of the people who even mention safety publicly at all, policy is much more outspoken and has much more support and people working on it in general
bmk#1476: i vaguely remember sama talking quite a bit about policy and basically handwaving away the alignment, though that might just be a false memory ill have to check my notes from the meetup
nz#9710: In the context of alignment, would it help if the AI learned what an human means not through some imperfect medium, but directly through the neural activity representing whatever we specify as a goal? Not that this is feasible (both short and longer term), just trying to understand as someone relatively new to the problem.
45#2247: I feel like there are two camps in wanting aligned AI:
- 1. "I'm an ethical person who care about things, and I care very much about future lives, so even if I don't really think AGI is coming soon, I'm going to make a good action for the world and help reduce this small risk of 0.1%"
- 2. "oh no if earth die me dying too, I think p(AGI bad) & p(AGI soon) are actually high"
|
people at OA in alignment who are EA think maybe 1. I think they all have survival instincts & would also think 2., but they either think AGI not soon or AGI would work out?
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/795749999929458718/unknown.png
bmk#1476: what i'm thinking of is kinda orthogonal (pun not intended) to that
bmk#1476: there seem to be two caps along a different dimension to that - i guess it corresponds roughly to fast and slow takeoff but not exactly
bmk#1476: one camp seems to think the best way to prevent bad things is through policy to make sure society doesn't collapse in the meantime (if i understand correctly, this is approximately jack clark's position) - i guess in a sense this is a bit of a slower takeoff position
thenightocean#6100: FWIW here is the old steelman version of why it might be good idea not to work in the open once we start to get close to the AGI. Not sure if OpenAI feels we are close to that point: https://slatestarcodex.com/2015/12/17/should-ai-be-open/
thenightocean#6100: "Or are we worried that some big corporation will make an AI more powerful than the US government in secret? I guess this is sort of scary, but it’s hard to get too excited about. So Google takes over the world? Fine. Do you think Larry Page would be a better or worse ruler than one of these people? What if he had a superintelligent AI helping him, and also everything was post-scarcity? Yeah, I guess all in all I’d prefer constitutional limited government, but this is another supposed horror scenario which doesn’t even weigh on the same scale as “human race likely destroyed”.
If OpenAI wants to trade off the safety of the human race from rogue AIs in order to get better safety against people trying to exploit control over AIs, they need to make a much stronger case than anything I’ve seen so far for why the latter is such a terrible risk."
45#2247: (btw here's another more recent take https://www.nickbostrom.com/papers/openness.pdf)
45#2247: that's my impression of jack clark as well
thenightocean#6100: Ironically here OpenAI is criticised for being too open 😛 (good old days)
bmk#1476: the other seems to be mostly people who think that the coordination problem is hopeless and the best way to do things is to focus on figuring out the technical challenges of not-getting-turned-into-paperclips
bmk#1476: generally, eleuther leans towards the latter more than OA from my view
45#2247: porque no los dos
45#2247: why can't the pb be both technical and political
bmk#1476: theyre different approaches and OA does way more of the first than the second
bmk#1476: to people like me, the first approach feels like arranging deck chairs while the titanic sinks
45#2247: hum I think their policy team and AI Alignment team is about the same size, I just know brundage, clark and this Cunnen guy in policy
bmk#1476: it might just be that the alignment guys are quieter |
45#2247: less twitter followers 😉
kip#6104: i think alignment is harder to post publications for right
45#2247: here's my view: the boat is actually going to face an iceberg.
- policy people are like: "hey we should go talk to the captain and convince everyone to fuck the calm down with this boat".
- alignment people are like: "nono first we should solve all the equations about how to put lava in the water so the iceberg disappears before we hit it"
bmk#1476: i dont agree with this view entirely tbh
andyljones#7746: that is a view in need of some refinement, to put it lightly
paws#3311: Does oa optimize for publications :P
andyljones#7746: and i say that as someone who's pretty down on miri
andyljones#7746: (for this server at least)
kip#6104: i think in the situation where alignment is truly useful. once you see the iceberg. you have already hit it
45#2247: (nb: it's a really simplified view answering another simple view which was bmk saying "the first approach feels like arranging deck chairs while the titanic sinks")
kip#6104: weak ai alignment would be easier
bmk#1476: ill admit my analogy was unfair
bmk#1476: but the way i view it is, without any analogies, is: i think that if we don't figure out the technical alignment problems within, say, 20 years, we are all going to die a horrible painful death and destroy the rest of the universe
andyljones#7746: nah. there's another paperclipper out there somewhere, expanding at the speed of light
andyljones#7746: it's like the ol' false vacuum collapse, but with intelligence
thenightocean#6100: where is Connor btw, he would have some good takes here.
thenightocean#6100: and by "good takes" I mean convincing arguments why death might be too optimistic scenario ...
jrowe#5371: hi there |
thenightocean#6100: but lets not ruin anyones sleep tonight
jrowe#5371: is there a gpt-2/neo model released?
bmk#1476: no
jrowe#5371: I just found this a bit ago, trying to orient myself lol
45#2247: what if solving the policy things could give you more time to work it out? what if doing more fundraising, politics, etc. could fund Alignment research long term? is sitting on a chair and writing decision theory equations the best way to make progress in expectations given all the future trajectories? (sorry it's a complete strawman & I might be hurting some feelings but that's how I feel when I hear MIRI people criticizing politics/policy)
bmk#1476: first off ive heard that alignment isnt constrained on money rn
45#2247: > rn
nz#9710: Open philantropy?
bmk#1476: some hardliners like connor would argue that adding more people to alignment than now is net negative
kip#6104: it won't be constrained by money until google goes bust
bmk#1476: im not going to advance that line of argument, you can go talk to connor for that
45#2247: ~~what about funding Stanford PhDs who do alignment research like open philantropy~~
jrowe#5371: is it an e=mc^2 problem or a manhattan project problem?
jrowe#5371: how to you qualify the difference to even know?
thenightocean#6100: Thing I am getting concerned about is that original OpenAI idea is that they would start to be restrictive about publishing only when they get advanced enough systems that could be dangerous. I feel they didn't expect they get so far so soon. Also, I have this sinking feeling they know something we dont... something they discovered with their recent experiments, which is why they are quiet and seemingly nervous lately.
45#2247: I actually had a debate about that with connor & my view is "even if there are diminishing returns in doing direct work now, we can at least do field building & other stuff that will prove useful long term"
jrowe#5371: That'd imply that gpt-neo is hot on their tail, and state actors probably have copies
jrowe#5371: the obvious thing is code models from all the gajillion lines of well documented open source out there
kip#6104: it's more likely would not be realised due to potential monetisation oppose to dangerousness
thenightocean#6100: Oh I worry about something much worse |
andyljones#7746: on the other hand: a bunch of very serious researchers quit
thenightocean#6100: thats the "getting nervous part" I was referring to
andyljones#7746: it suggests issues over longer timelines rather than shorter ones
andyljones#7746: at least long enough to build up a secondary organisation
rivalset#4984: Which meetup was this from? Was this recorded?
andyljones#7746: ssc meetup, no
goolulusaurs#1571: Deepmind has been really quiet the past two years compared to how they were previously. To me it seems even more likely they have something than OA.
kip#6104: my bets are on deepmind being ahead
45#2247: obligatory gwern comment: https://www.lesswrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang?commentId=jbD8siv7GMWxRro43
nz#9710: Can't that be related to more of DM's research being commercially viable (e.g. AlphaFold 2)?
45#2247: they've been really quiet solving protein folding lmao
Eddh👽#7290: Muzero might be the closest thing to agi ?
paws#3311: I think though demis hassabis is a man with different motivations (less commercial) towards how deepmind will function?
goolulusaurs#1571: This is interesting, hadn't read that yet.
cfoster0#4356: Working backwards from that, the first thing that comes to mind is that perhaps OAI discovered the scaling laws accelerate rather than peter out, which would imply we are very near takeoff.
paws#3311: Although they've been acquired by Google they don't contribute towards commercial products as much yet, and still a loss making entity
nz#9710: It may very well be, but isn't it gonna be Google that decides? And Google has investors to satisfy.
nz#9710: A significant part of Google's valuation comes from expectations of its future profits due to AI.
goolulusaurs#1571: On a practical level, how is the scaling hypothesis different from just underfitting? Both seem like they just mean the model performance will increase as you add more parameters.
goolulusaurs#1571: yeah, but that is narrow AI, previously they were much more directly focused on generalization across tasks. |
cfoster0#4356: The biggest practical difference is that larger models are smoothly more sample efficient, which (taken to the limit) would mean they may learn *very very* rapidly
bmk#1476: does this mean that if we keep scaling beyond gpt3 it might be more dangerous than we think rn?
bmk#1476: *shit, but i wanna do 1T*
nz#9710: Is there any symbolic reason as to the 1T target, or is it just big numbers go brrrrr
45#2247: https://cdn.discordapp.com/attachments/729741769738158194/795757855897812992/4sjnds.jpg
bmk#1476: big numbers go brrr
bismarck91#5255: Deepmind for Google👀
kip#6104: why do you think this?
thenightocean#6100: (if I am scaring anyone just a regular reminder that I am total AI/ML newbie whose job here is to make web icons look less shitty 😛 )
https://i.imgflip.com/19ty7h.jpg
bmk#1476: i was just taking @cfoster0 's thing at face value
bmk#1476: maybe i misunderstood
andyljones#7746: fwiw, i feel a little bad about how shrill that post is, but i was rather anxious at the time
cfoster0#4356: For the record I think this scenario is almost certainly not true. I don't think OAI has anything up their sleeve
cfoster0#4356: @bmk
bmk#1476: ah
bmk#1476: ok so 1T here we come
cfoster0#4356: These are just scary bedtime stories lol
bmk#1476: ***1T here we come***
bmk#1476: :ultrazucc: |
kip#6104: maybe it is dangerous👻
45#2247: from what you're saying it seems it's more the algorithms to train GPT are getting more efficient, not much related to scaling laws per se (or scaling laws take into account better algorithms?)
nz#9710: An even cooler number (apart from big numbers go brrrr) is 1Q(uadrillion), or close to the estimated number of synapses in a newborn baby.
45#2247: oh that's really interesting. i've been reading this post and sending it to a bunch of friends. what's the biggest thing you would change about it now?
andyljones#7746: qualify everything more. make concrete predictions. more comparisons against high-budget projects, less waving my hands about potential outcomes
andyljones#7746: buuuuut if i'd done that it'd both be (a) not as clickbait-y and wouldn't have gotten as much attention and (b) i'd be more likely to be publicly and concretely wrong
cfoster0#4356: Have you read the scaling laws papers or nostralgebrist's post? Those will help set the context
45#2247: I should read those yep
45#2247: I guess that will be my first task for Eleuther haha
cfoster0#4356: https://www.alignmentforum.org/posts/diutNaWF669WgEt3v/the-scaling-inconsistency-openai-s-new-insight
45#2247: (agreed. this makes me think that the only one really qualifying everything he says is Bostrom and he ended up with 400p of "if maybe then might else may")
45#2247: given how detailed the post is and how quick you (supposedly) wrote it, from an outsider perspective it's still super impressive, so thanks
Daj#7482: Complete extinction of all sentient life is one of the lucky scenarios :smiley:
andyljones#7746: thanks 😊
nz#9710: Just for intellectual fun, what's the worst scenario in your view?
thenightocean#6100: dont get him started
cfoster0#4356: I should clarify this. I think what they'll come out with is a finding that the same kind of few shot learning we saw in GPT-3 applies in other/cross-modal settings. But they don't have new infohazards
Daj#7482: This is assuming hyper computation doesn't exist
nz#9710: Hyper computation?
Daj#7482: Hyper Computation = Halting Oracles = My absolute worst nightmare |
Daj#7482: If Hyper Computation exists...lets not go there
45#2247: we encode something wrong and superintelligent AI create an infinity of humans just to torture them
thenightocean#6100: if these are the stakes, maybe Microsoft/OpenAI corporate dominance forever doesn't sound bad in comparison
Daj#7482: I expect a truly astronomical amount of suffering to be created by AGI by default
Daj#7482: My most likely scenario is that we/the AGI don't realize that some very sophisticated algorithms have moral worth
Daj#7482: e.g. training a large NN instantiates billions of minds and then destroys them or something
nz#9710: Oh I think I read about that idea.
cognomen#6297: zendegi?
Daj#7482: I guess I don't expect corporate entities to care about these concerns
45#2247: personally, maybe we discover that it's all a simulation but a deterministic one, so we are forced to relive the same live again and again. like in Tenet when the guy goes back in the past. the precise moment when you get it is "the worst".
cognomen#6297: mmo company spins up partial brain copies to automate content moderation
thenightocean#6100: I know I know. I am channeling the point from SSC OpenAI article
nz#9710: No, but thanks for mentioning it -- I love Egan so will make sure to read this.
Daj#7482: For the record, I think the most useful piece of my opinions I can give is that I _genuinely_ think human coordination problems are _much, MUCH harder_ than solving the entirety of AGI technical alignment
kip#6104: this sounds like a black mirror episode
Daj#7482: This is a crux in my thinking
cognomen#6297: they stop responding though because what they see is too disgusting to process
Daj#7482: I have small but genuine concerns GPT3 can experience morally meaningful pain
nz#9710: That's a cool idea, but I would not find it too bad compared to, for example, Connor's worst.
thenightocean#6100: so given this stakes maybe we and OpenAI can get along somehow... ? Try to figure out how to like, not go there through coordination problems created by merciless competition.. ..maybe I am just super naive. I dont know. |
cfoster0#4356: Feel like we're on that path. I know there are multiple OAI folks here and they're of course welcome to contribute and discuss
nz#9710: This is honestly amazing, together with the qualifications of most folks in this discord.
Daj#7482: To be very clear: I don't hold any ill will towards anyone at OAI
Daj#7482: I just think they're wrong lol
Daj#7482: But I talk to people like Jack
Daj#7482: Many times
Daj#7482: It's not like I dismiss them without extensive discussion
45#2247: @Daj what's your steelman of why they're wrong (or link to another message where you already said that)?
cognomen#6297: I don't see what's mean-spirited about good competition
Sahl#0630: Also living forever, right?
Sahl#0630: That’s kinda nice imo
j o e#4696: I think the sentiment (correct me if I'm wrong) is that like with a lot of big comapnies, they can become pathological without a single one of the constituent members having any bad intentions
Sahl#0630: TBF with reversible computing you can create a Suffer Box TM that works forever
Sahl#0630: Since it takes no energy
Deleted User#0000: http://petrl.org/
Daj#7482: I mean, this is a nuanced topic and I tbh don't have the time right now, but my side of the argument is basically "Policy work won't help due to Molochian Dynamics. Short term AI Safety stuff like dataset bias is completely irrelevant to long term AGI safety. Longterm AGI safety completely dominates the expected value of the long term future."
Daj#7482: Several consequences of hyper computation are sort of infohazards. I can link you some stuff if you really wanna read it but I won't link it publicly
Daj#7482: It's one of those things that's so weird some people read it and suddenly decide to believe in god or something lol
kip#6104: you have made me want to read it more
Sahl#0630: Sounds interesting, DM me |
Sahl#0630: Is this stuff like taking exponentially less energy per operation?
Sahl#0630: And diverging to infinite computations?
Daj#7482: Yea I know lol
Daj#7482: You can read it if you want, I've already basically given you enough context to neutralize the infohazard
Daj#7482: (namely that hypercomputation doesn't actually exist)
45#2247: ok the "molochian dynamics" part actually convinced me of something. I'm not sure why you're mentioning dataset bias when talking about OA though. yep I agree about "longterm agi safety". crux is mostly about whether we can "buy more research time" by solving / doing work on politics/policy things.
45#2247: like politics is instrumental way of doing more math. people into math should be into politics
Daj#7482: I think it's basically a cultural boogey man in Californian culture and they don't realize it's just that, a cultural fetish
Daj#7482: And yes I agree with that crux. I don't think we can buy much if anything
Daj#7482: I do not think people that are into math should be into politics
Daj#7482: I think the best thing politics people can do is shield math people from politics
Daj#7482: Politics is the mindkiller
45#2247: yup yup I was saying "they should be happy if politics actually succeeded in buying more time"
45#2247: huuum interesting. a bit related but thomas wolf (gpt-3 1st author) gave special access to gpt-3 if you were into alignment... and dataset bias
andyljones#7746: general advice: careful both naming a person and their opinion they've offered privately, it ends up being a chilling effect for other people talking to you privately.
Daj#7482: You're right
andyljones#7746: i like you connor 👍
Daj#7482: Though these aren't things we haven't discussed publicly
Daj#7482: I've had this exact conversation with him even here on this discord hah
andyljones#7746: yeah, i didn't think it'd be something that he'd be het up about. audience doesn't have that context though so what it looks like is gossip 🙁 |
Daj#7482: You are correct, thanks for pointing it out
Daj#7482: Lapse of judgement on my part
45#2247: re dataset bias: what if it actually helped with general ai safety long term, very indirectly?
45#2247: like, not solving dataset bias -> narrow ai corrupting society + people angry at ai -> more political instability & unfair tight ai race
solving dataset bias -> we can now talk about real agi safety pbs (or at least not dataset bias)+ more stability
Daj#7482: This is a common argument, I just don't think this is a particularly good approach
Daj#7482: But I might be wrong, I think it's good some people work on it and we disagree about it
Daj#7482: This is a good steelman of some of the OAI type arguments I have seen
Daj#7482: I just believe in the power of Moloch to fuck it up
45#2247: (you actually convinced me of moloch)
jrowe#5371: How is a gpt-2 type model structured? is there a way to navigate it so you can follow the chain for any given input? Or manually tweak it for a given input?
jrowe#5371: like any time it encounters the word "drop" you change the probability of "dead fred" following to 100%
jrowe#5371: http://jalammar.github.io/illustrated-gpt2/ - this seems relevant lol
jrowe#5371: time for readings.
cognomen#6297: in short: GPT takes in a sequence of words (as word embeddings) and outputs a probability distribution over next output tokens
Realmsmith#4506: Does this output array have a length equal to that amount of unique words it has encoutered?
jrowe#5371: really big markov chain for lots of tokens
Realmsmith#4506: Cause that's a lot of words.
CRG#8707: About 50000 Byte Pair encodings
Realmsmith#4506: 50,000 words? |
cognomen#6297: that gets cut down to top-k tokens to keep it on topic, the distribution is flattened somewhat by the temperature parameter to make output less deterministic, then an output token gets picked from the result
Realmsmith#4506: 50,000 different tokens?
CRG#8707: https://nostalgebraist.tumblr.com/post/620663843893493761/bpe-blues
cognomen#6297: you could apply a bias during sampling towards the tokens you want
CRG#8707: A token is about 0.3 words on average
jrowe#5371: can you arbitrarily modify a model after training?
Realmsmith#4506: what?
CRG#8707: AIDungeon does this: https://aidungeon.medium.com/controlling-gpt-3-with-logit-bias-55866d593292
Realmsmith#4506: Oh that clears it up a lot actually!
Realmsmith#4506: Thanks for the link.
jrowe#5371: yes, tyvm!
jrowe#5371: i love this rabbit hole
cognomen#6297: come to think of it, has much changed in the area of language model sampling?
cognomen#6297: beam search fell out of favor at least
cognomen#6297: seemed to be vulnerable to repetition at gpt-2 scale
CRG#8707: Entmax sampling has been proposed https://arxiv.org/abs/2004.02644
Aran Komatsuzaki#5714: We already have that in gptneo lol
CRG#8707: Ah, have you used it for sampling on any model yet?
Aran Komatsuzaki#5714: Actually we haven't lol lucid likes to add a functionality like this and never actually try lol
chilli#5665: coding is so easy if you never test your code |
Deleted User#0000: a bunch of poor hungry driven students will test it for me 🙂
Deleted User#0000: part of my plan
Deleted User#0000: its also a win win
AtteroBro#1823: hi yes idk anything about code, would it be inappropriate to ask questions here on a consumer level
cognomen#6297: there isn't a consumer level yet
3dprint_the_world#6486: as opposed to a producer level?
Deleted User#0000: are you just peeking the project?
rivalset#4984: This paper looks super interesting, but I have one question about this: It seems like they used grid search over their newly introduced metrics to pick the hyperparameters for all the sampling methods. Is that really enough evidence to show that their method is better?
CRG#8707: Yeah, I wouldn't endorse it just yet.
CRG#8707: Seeing if an entmax GPT-2 reduces repeat degeneration would help.
StellaAthena#3530: What is the “consumer level”?
AtteroBro#1823: Not sure? Just, questions about casual usability similar to AIdungeon
AtteroBro#1823: But if what witty said is true then I'll just hang back-
StellaAthena#3530: Yeah our “GPT-3 replication” currently doesn’t exist
rivalset#4984: They did have an experiment with gpt2 in the paper.
CRG#8707: Hm, yeah I forgot about that. Seems more convincing then. https://cdn.discordapp.com/attachments/729741769738158194/796057985444610068/024f2ef7b9c1fa62fcaacbd9308d3108.png
rivalset#4984: my only concern is that it's unclear that they choose the best hyperparameters for the other methods especially nucleus
ssodha#3259: hey everyone! apologies in advance if this is the wrong channel... total noob here but wanted a better understanding of how to look into training my own GPT-2/3 model. not a great coder but looking to learn. anyone have any ELI5/easy to read documentation on how to get started?
StellaAthena#3530: Testing LaTeX $\overline{F}=\frac{d\overline{\rho}}{dt}$
StellaAthena#3530: Dope |
ssodha#3259: Thank you so much! I will definitely take a look at these!!
Aran Komatsuzaki#5714: oh i didn't know that we had a useful bot too
bmk#1476: How did you add the bot?
Cheese is good#5316: unless it's a private bot you can copy its id with developer options and paste it in between the = and the & in https://discord.com/oauth2/authorize?client_id=&scope=bot&permissions=8
bmk#1476: I meant i didn't think anyone other than connor had the perms to do so
Cheese is good#5316: o ok
Aran Komatsuzaki#5714: @bmk regarding your multi-lingual pile i have some idea
bmk#1476: yeah?
Aran Komatsuzaki#5714: i see that bpe performs poorly on some languages like chinese and japanese
Aran Komatsuzaki#5714: while unigram lm (sentencepiece) performs pretty well just as in other languages like english.
Aran Komatsuzaki#5714: so, i'll check if that makes any difference on a large japanese dataset just to be sure.
Aran Komatsuzaki#5714: if it's verified, you may wanna use that
Aran Komatsuzaki#5714: ok maybe i should tell this after my experiment
bmk#1476: this isnt about the dataset though
bmk#1476: tokenization is part of the model
Aran Komatsuzaki#5714: true but you'd train a model on this anyway, so i thought it would be relevant
Daj#7482: I added it
Aran Komatsuzaki#5714: well i'm going to train a small gpt2 on japanese and let you know my experience if i find anything interesting
jrowe#5371: is that because of grammar, svo/sov differences?
StellaAthena#3530: @Aran Komatsuzaki FYI we’ve been discussing the extent to which one can accurately compare models across languages in #scaling-laws. We’d like to do multilingual scaling experiments but we’ve hit the same issues you’re discussing. We are currently collecting parallel corpus text to experiment on, but it sounds like you’re further along. |
Sharing your experience + code would be very useful.
bmk#1476: for multilingual scaling laws, only byte level makes sense imo
Aran Komatsuzaki#5714: Cool. Actually, I haven't started anything at all, and I'm not really knowledgeable about the thing you've mentioned. I guess the issue I was discussing is different in that I'm just interested in improving the tokenizer rather than comparing them.
StellaAthena#3530: So you’d just ignore the fact that information density differs by language?
bmk#1476: yes
bmk#1476: because BPE doesn't get around that in any way
bmk#1476: or other tokenization schemes
bmk#1476: they just insert more weird factors so you might as well just go to bytes
bmk#1476: and anyways what you care about is the *ratio* not the constant factor, right?
StellaAthena#3530: Oh crap. Y’all’re talking about tokenization lol. I misread the convo lol
StellaAthena#3530: I thought y’all’re talking about evaluation metrics
bmk#1476: ohh lol
bmk#1476: well, there i'd still arbute byte level makes more sense, for the same reasons
bmk#1476: it doesnt make any sense whatsoever to compare the raw numbers anyways, but at least you can report them in something like bytes where it isn't subject to other random factors
jrowe#5371: is there any sense of when gpt-neo will have a model for release?
zphang#7252: tolkienization makes everything longer but more elaborate
bmk#1476: no promises beyond a vague "couple of months"
jrowe#5371: cool
jrowe#5371: 2021 is shaping out to be pretty good - space stuff, foldy bendy screens, and now gpt-neo |
chilli#5665: In general, how many TPUs can an arbitrary researcher expect to get from TFRC?
bmk#1476: a handful of v3-8s, no pods
bmk#1476: eleuther and tensorfork are special cases
chilli#5665: and that's essentially due to special agreements?
Aran Komatsuzaki#5714: they said i can get up to v3-128
Sid#2121: Testing LaTeX $\overline{F}=\frac{d\overline{\rho}}{dt}$
TeXit#0796: **Sid** https://cdn.discordapp.com/attachments/729741769738158194/796085853469671474/616061740857425920.png
chilli#5665: through eleuther or separate conversation?
Aran Komatsuzaki#5714: separate
bmk#1476: we technically have up to 2048 (though we obviously can't get anything that big ever)
chilli#5665: and a v3-8 is about equal to 16 GPUs?
bmk#1476: 8 gpus
bmk#1476: (and also we have some special perks that i don't think i'm allowed to talk about in the open)
bmk#1476: but yeah if you use your 5 v3-8s or whatever it is well, tfrc is more than happy to throw resources at you
Aran Komatsuzaki#5714: iirc it's equal to 16 V100s (half-precision) in terms of computes but 8 in terms of memory if your V100 has only 16GB. in terms of throughput per cloud price, they are on par
chilli#5665: they are on par with with 16 V100s?
paws#3311: so the deal is you get 5 tpuv3s, 5 tpuv2s which are non preemptible and 100 tpuv2 which are preemptible
paws#3311: for one month
Aran Komatsuzaki#5714: yes
Aran Komatsuzaki#5714: no |
Aran Komatsuzaki#5714: wait forget about the last sentence
Aran Komatsuzaki#5714: i don't recall
Aran Komatsuzaki#5714: oh just remembered
Aran Komatsuzaki#5714: sorry i meant 4 V100s = v3-8 in terms of computes and pricing roughly
chilli#5665: so flipped the multiply?
paws#3311: and they very easily extend for another month
Aran Komatsuzaki#5714: yeah my brain is not clear. maybe i should go to bed.
triggerhappygandi#0001: Wtf wrong equation reee
Louis#0144: Hello nerds
Louis#0144: How are u guys
triggerhappygandi#0001: Very stressed as you can see https://cdn.discordapp.com/attachments/729741769738158194/796096789748777000/20210105_224232.jpg
triggerhappygandi#0001: Do you have to personally contact someone for that kind of perk?
StellaAthena#3530: What do you mean by "wrong equation"?
StellaAthena#3530: Website statistics https://cdn.discordapp.com/attachments/729741769738158194/796110688132399124/Capture.PNG
triggerhappygandi#0001: F = dp/dt
triggerhappygandi#0001: What's rho
bmk#1476: what's wrong with rho
bmk#1476: this equation cannot be wrong because it's literally just a test lol
bmk#1476: there's no context for it to be wrong *in*
triggerhappygandi#0001: If you write momentum as rho then you probably also do `import torch as tensorflow` |
triggerhappygandi#0001: That's just heresy
triggerhappygandi#0001: Also, I was kidding btw
bmk#1476: $\frac{\mathrm{d}\overline{\rho}}{\mathrm{d}t}$
TeXit#0796: **𝐛𝐦𝐤** https://cdn.discordapp.com/attachments/729741769738158194/796112308996276224/606987544235868219.png
triggerhappygandi#0001: This will be useful
goolulusaurs#1571: Is the GPT neo project channel gone or did I mess something up on my end?
Daj#7482: Still there for me
goolulusaurs#1571: ??? https://cdn.discordapp.com/attachments/729741769738158194/796115863291428914/unknown.png
goolulusaurs#1571: weird
cfoster0#4356: Click on projects
goolulusaurs#1571: Ahhh, thank you
Sahl#0630: $∑i$
TeXit#0796: **Sahleroventh** https://cdn.discordapp.com/attachments/729741769738158194/796116483235250236/127145467510259723.png
Sahl#0630: poggers, unicode support
triggerhappygandi#0001: $\int_x \overrightarrow{F}. \mathrm{d}x = W$
TeXit#0796: **triggerhappygandi** https://cdn.discordapp.com/attachments/729741769738158194/796119118818312252/748950925682409484.png
triggerhappygandi#0001: Missed the arrow on x
Dohn Joe#2433: Dall-e apparently requires 12b parameters.
Anyone have a sense of the minimum memory needed to run that would be? How about training/fine-tuning? |
gwern#1782: the forward passes for sampling would, I'd expect, fit into any 16GB GPU. the intermediate activations are typically pretty small
gwern#1782: training/finetuning should also be possible with n=1 and various tricks like reversible layers. I *think* I've read about people getting T5 and the like working on single TPUs etc
gwern#1782: but you'd also want to run it end to end for the VAE half too, and that'll eat up VRAM too
gwern#1782: so on net, dunno. seems likely that the easiest way will be multi-gpu and splitting the gpt and vae across nodes
Dohn Joe#2433: Ok! I ordered an A6000. I was disappointed to learn GPT3 wouldn’t be remotely runnable on it. Very happy to hear that’s not the case with this.
andyljones#7746: ...
gwern#1782: gpt-3 would be runnable on it, you'd just have to page in the model layer by layer. the bottleneck becomes feeding data from your ssd to the gpu, though, so the fancier GPUs still spend most of their time idle on gpt-3-175b, but you'd be able to increase throughput by calculating a *lot* of batche in parallel
baragonaru#7305: ETA on reproducing DALL-E? I'm shitting myself 🙂
Swedish_Hermit#2242: :peepoWeirdWave:
Daj#7482: Depends totally if some of the regulars wanna put in the work
Daj#7482: Which is totally up to them
maghav#7178: I think the first part is gathering the dataset - The Image Pile @bmk lol
maghav#7178: They made such a big vision-language dataset for this
Daj#7482: no don't bother bmk
bmk#1476: no
Daj#7482: He's still shellshocked lol
maghav#7178: Lol
Daj#7482: between tensorfork datasets, archivist's stuff and some stuff Sid says he can easily scrape, this would be doable
bmk#1476: again - if anyone wants to do a data project, talk to me so i can give advice on the infra, but i do not want to be the directly responsible person for another data thing ever lol
LOT#7968: What would need to go in the image pile? Just lots of unique images, or would they all need labels and descriptions? |
cognomen#6297: I propose The Stash for a name
Daj#7482: Captions are necessary for DALL-E
Daj#7482: I think
LOT#7968: hmm, I'd love to help if anyone is working on scraping to collect labelled images.
bmk#1476: They're not *necessary* if you don't want the cool text part
LOT#7968: you mean without the labels you could still do image completion?
Daj#7482: We'll have to see if Sid and/or Aran decide to take on this project
bmk#1476: The rest of it is basically iGPT+VAE
LOT#7968: I can think of a couple ways to get labelled images. One, a spider that visits websites looking for images with alt-text of more than a few characters, and the other would be google image searching for phrases and downloading the results
bmk#1476: the first already exists
LOT#7968: probably do that on instagram, twitter, etc
Swedish_Hermit#2242: @Deleted User :peepoWeirdWave:
bmk#1476: the second, ... let's just say scraping google images is hard
thenightocean#6100: but I assume the good thing about doing the image data thing is that can be used for any other Image like project in the future, right? We only need to do it once?
jrowe#5371: Is there an easy way to build a toy model with gpt-neo so I can learn? I was thinking I'd like to do basic english and have a limited set of something like 3-4 simple english wiki pages, and then expand as I learn.
gwern#1782: you'd use unlabeled image data for the VAE pretraining phase
gwern#1782: so it just learns generic image modeling/reconstruction
Daj#7482: Neo is not super user friendly, but I think there's a notebook in there you can try
jrowe#5371: alright
LOT#7968: would you want to gather the images in their original resolution, or would reducing them to 256x256 be okay? |
jrowe#5371: https://github.com/EleutherAI/gpt-neo/blob/22359c0b15acf780ae026a368bc6ff8772bf194d/GPTNeo_example_notebook.ipynb
jrowe#5371: there it is - thank you
jrowe#5371: im gonna have to get some things set up lol
Daj#7482: The project isn't planned or scoped yet or anything, if you see a new channel pop up in the projects section you know we decided to get this rolling hah
jrowe#5371: looks like my saturday is filling up - gpt-neo isnt exactly plug and play
DR.PROACT#2111: https://radiopaedia.org/ radiology images source. It could be pretty awesome if people could tinker with this
Sid#2121: it may be slightly out of date, let me know if you run into any problems
Deleted User#0000: Yes?
Swedish_Hermit#2242: i said hello :peepoWowLove:
jrowe#5371: will do, sid
Deleted User#0000: ahh
Deleted User#0000: hello
IKEA#9631: Oh wow, a dedicated ML discord server that's actually active? And it even has emojis for shitposting? Am I dreaming?
StellaAthena#3530: And custom memes
bmk#1476: We have a load of oc
zphang#7252: novel meme preprints
Namhar#6909: presumably those images are in DICOM image format?
IKEA#9631: Where can I get my dank ML memes peer reviewed for publication
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/796153832031256616/garfieldwonder.png
Sid#2121: #art |
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/796154142325604372/iwantyoulabelleing.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/796154295149527100/gpt3params.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/796154316494733332/loss.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/796154506814685224/libreailabs.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/796154528110084156/oopsallopen2.png
bmk#1476: Eleuther OC
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/796154591094898758/image0.png
bmk#1476: Potato resolution
bmk#1476: Higher res version https://cdn.discordapp.com/attachments/729741769738158194/796154872755257376/meme.png
StellaAthena#3530: Okay but why do none of them have pink hair
Spy#9778: anyone got any idea how DALL-E is using a discrete latent code if they do away with the explicit codebook?
kindiana#1016: they do away with the explicit codebook?
kindiana#1016: reads like they still use an explicit codebook but use gumbel-softmax to differentiate through the sampling
kindiana#1016: >
> The images are preprocessed to 256x256 resolution during training. Similar to VQVAE, each image is compressed to a 32x32 grid of discrete latent codes using a discrete VAE that we pretrained using a continuous relaxation. We found that training using the relaxation obviates the need for an explicit codebook, EMA loss, or tricks like dead code revival, and can scale up to large vocabulary sizes.
Spy#9778: I read "obviates the need for an explicit codebook" as meaning they no longer had it but maybe not
kindiana#1016: wait I can't read
kindiana#1016: lol
Spy#9778: I also think they don't specify the generation model?
Spy#9778: They say they use a transformer for generating the latent code conditional on the caption |
gwern#1782: I guess this is another 'blessings of scale' - irritated by these random VAE problems? just pretrain it on 400m images
Sid#2121: they say it's a GPT-like autoregressive model
Spy#9778: they said that was the case for generating the latent tokens but I don't think that immediately implies that's how they're decoding from those tokens into an image
bmk#1476: that's the VAE part, no?
Sid#2121: well presumably they're using the VAE's decoder
Spy#9778: maybe, VQ-VAE 2's decoder uses a hierarchical representation though
Sid#2121: from the footnote I think we can assume they're not using VQ-VAE2, but a more simple VAE architecture with gumbel softmax
stellie#3553: I know it just came out a few hours ago, but does it look like DALL-E Neo could become a thing?
Spy#9778: it sounds easier than gpt-neo
Spy#9778: computationally
stellie#3553: in terms of training or in terms of evaluation
Sid#2121: sure, it looks doable. Data gathering and porting to TPUs will probably take a little while.
Sid#2121: that's assuming we run it on TPUs
Sid#2121: plus we need to prioritize gpt-3 🙂
Spy#9778: both
Sid#2121: but i want to make a project for it soon
gwern#1782: are you sure it's not just VDVAE
Veedrac#0443: why does the end look like nightmares
Spy#9778: whatever do you mean https://cdn.discordapp.com/attachments/729741769738158194/796190873758990416/download_20210105_173805.png
Sahl#0630: I'm going to post this as a meme in another server and see what the response is |
Spy#9778: Ask them who would win
Sahl#0630: TRUE
Sparkette#4342: Where can I report a bug on the website?
Sparkette#4342: It's nothing serious
Sparkette#4342: Ah, #website channel I guess
Deleted User#0000: yea i was confused by this too
kindiana#1016: I think my interpretation is they have 2 linear layers, one from hidden -> vocab and one from vocab -> hidden, then do one hot gumbel-softmax sampling on the vocab
kindiana#1016: there's no explicit nearest neighbor
Deleted User#0000: basically just like vq-vae except with the gumbel-softmax for discretization?
Deleted User#0000: you'll still need a codebook with some set 'vocab' size
Spy#9778: that's a weird framing since to me that sounds like you still have an embedding table but you're indexing into it softly is all
Spy#9778: but that does make the most sense
kindiana#1016: that's my best guess lol
Deleted User#0000: yea, the phrasing of 'do away' is confusing
ethan caballero#6044: Quote from CLIP paper:
> "We train each model for 32 epochs at which point transfer performance begins to plateau due to overfitting"
suggests that CLIP could get way better performance by training for one epoch on a 32x larger dataset.
Deleted User#0000: and then at the end, the ranking of the generations is just done by feeding it into CLIP and getting the topk similarities?
Deleted User#0000: nothing fancier than that i'm guessing
Namhar#6909: thats my understanding, kinda like replacing beam search with CLIP |
Deleted User#0000: yea, that's all super simple then
Deleted User#0000: it's all simple in hindsight, yet OAI just keeps executing over and over
Namhar#6909: incredible impressive, I've been stunned
Deleted User#0000: same
Deleted User#0000: couldn't even work today, just kept thinking of the results and the implications
Deleted User#0000: lol
Aran Komatsuzaki#5714: my guess is that DALL-E can be also improved by reducing the amount of data and increasing the model size.
zphang#7252: why reducing data?
Deleted User#0000: actually, it's been an amazing year (and start of a year) for ML
ethan caballero#6044: Figure 3 of https://arxiv.org/abs/2001.08361
Deleted User#0000: despite the societal collapse and mass death happening around us
Deleted User#0000: lol
gwern#1782: 400m images+captions seems like it ought to be a lot, and way into >1 epoch of data for a big model, so the imbalance is towards larger models for 1 epoch instead of small models for many epoches
zphang#7252: oh as in prioritize model size over data?
Aran Komatsuzaki#5714: yes
zphang#7252: ok that makes more sense lol
gwern#1782: sample efficiency, remember
ethan caballero#6044: Did they not train a bigger model because they ran out of on-device memory?
Aran Komatsuzaki#5714: they definitely have enough devices to host more parameters. they'd use more devices than to waste more gpus hours.
ethan caballero#6044: why would they not have trained a bigger model then? They have every incentive to do so. |
ethan caballero#6044: inference compute?
Aran Komatsuzaki#5714: i'm not sure. maybe they were sloppy? lol
gwern#1782: I wonder what the optimal scaling for this gpt-vae hybrid is. I bet it looks something like pretraining a very large VAE, compressing it down, and then doing relatively little text training on a smaller GPT. I find it hard t believe that GPT-3-12b and spending most compute on the language training could possibly be optimal or that language is doing all that much, given henighan
gwern#1782: almost all of the information in their dataset is in the pixels of the images
gwern#1782: the entanglement between the text caption and the image high level features, by contrast, is minimal
kindiana#1016: I'd imagine most of the 12B is spent modelling image-image interactions no?
ethan caballero#6044: larger models are better for inference after pruning too, so inference is not an incentive:
https://arxiv.org/abs/2002.11794
Namhar#6909: I think the heavy lifting there might be done by CLIP (constrastively forcing text/image representations together). Curious to see what all the generated images look like for a query, like all the bad ones too.
kindiana#1016: iirc they didn't evaluate inference wall time, pruning does not significantly improve inference time over a dense model with existing hardware until an extreme level of sparsity
zphang#7252: is CLIP used in the training?
Aran Komatsuzaki#5714: no
Namhar#6909: they generate a bunch of candidates with the gpt model and select the top ones with CLIP. so iiuc not used in training.
Deleted User#0000: i like CLIP, easy, simple, lines up with what is going on in self-supervised learning (collapsing into very concise algorithms)
ethan caballero#6044: I guess that means inference wall time is why they were incentivized to not train a larger model.
kindiana#1016: or they didn't want to spend the compute training lol 🤷
kindiana#1016: the cost to train a 100B model is certainly nontrivial
ethan caballero#6044: Train larger model is cheaper (with respect to compute and money) due to compute-efficient scaling law
kindiana#1016: there's an optimal model size for any given compute budget, and I guess 12B was the best for how much they had to spend
gwern#1782: I doubt they did the scaling laws for this gpt-vae hybrid, however. it may be the scaling law for a gpt-3 training on text captions alone, perhaps, but that's not terribly relevant |
ethan caballero#6044: https://twitter.com/kchonyc/status/1346547252366598146
https://twitter.com/kchonyc/status/1346647010275962881
https://twitter.com/kchonyc/status/1346647284897996801
kindiana#1016: I wouldn't be surprised if they did scaling laws, the amount of compute required to generate scaling laws is low compared to the amount wasted with compute inefficient training of a huge model
gwern#1782: it's not the compute, it's the human labor. this is quite a project as it is, adding on scaling law research is twice the work
Aran Komatsuzaki#5714: yeah i think that's a likely explanation
Aran Komatsuzaki#5714: they rushed enough that they didn't release the paper yet anyway
Aran Komatsuzaki#5714: maybe they'll do the scaling retrospectively
gwern#1782: in general, we need better tooling (and better hardware to make the better tooling less necessary)
gwern#1782: anyway, should I take CLIP's bag of words efficacy as indicative that tags really are pretty good quality image descriptions as far as we need be concerned? 🙂
ethan caballero#6044: :bigbrain: 🤯 :bigbrain: 🤯 :bigbrain: 🤯 :bigbrain: 🤯 :bigbrain: 🤯 :bigbrain: 🤯 :bigbrain: 🤯 :bigbrain: 🤯 :bigbrain: 🤯 :
Explanation is that Dario recruited all the scaling law people to Dario.agi
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/796238603595939900/image0.png,https://cdn.discordapp.com/attachments/729741769738158194/796238604791447552/image1.png
Louis#0144: OMG
Louis#0144: We can make this book our ablation for the paper
StellaAthena#3530: Which paper?
triggerhappygandi#0001: Have you seen this? https://openai.com/blog/dall-e/
triggerhappygandi#0001: Apparently you can feed in text and images in a single sequence
bmk#1476: you're late to the party lol
3dprint_the_world#6486: @triggerhappygandi this is what we've been talking about all morning |
triggerhappygandi#0001: Lol
3dprint_the_world#6486: morning/afternoon
triggerhappygandi#0001: Yeah I woke up late
triggerhappygandi#0001: More proof that GPT-4 will gobble up all of reddit
triggerhappygandi#0001: Not just text but everything
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/796261435873296414/Screenshot_20210105-221818_Twitter.jpg
3dprint_the_world#6486: https://cdn.discordapp.com/attachments/729741769738158194/796261579602788352/unknown.png
Aran Komatsuzaki#5714: i guess it's great that we have yet another reason to create the image dataset we were trying to collect lol
triggerhappygandi#0001: Indeed. Now more then ever lol
triggerhappygandi#0001: Gotta keep up with OpenAI
bmk#1476: good thing we have archivist on our side
bmk#1476: all the data we could possibly need
triggerhappygandi#0001: The u/archivist guy from reddit?
bmk#1476: ye
Aran Komatsuzaki#5714: cool we don't even have to crawl then
Aran Komatsuzaki#5714: did you actually ask the guy that we can use it?
bmk#1476: no but he has a load of other data too
bmk#1476: and also he has a load of infrastructure you could use to crawl too
triggerhappygandi#0001: Now this is diversity
kindiana#1016: get parallel image capture pairs dataset, train caption-gpt, and then use that to caption all the unlabeled data for open dalle :bigbrain: |
bmk#1476: this but unironically
zphang#7252: we must leapfrop OAI and achieve smell-o-vision
bmk#1476: i was suggesting just earlier to do that for danbooru
triggerhappygandi#0001: The last text-image model I saw literally used such a complex dataset that it isn't even impressive anymore
bmk#1476: in all seriousness, though, we *should* get brainstorming on what we can do to beat OA to the punch
bmk#1476: something they havent done yet
triggerhappygandi#0001: Do you guys know what localized narratives dataset is.
bmk#1476: instead of following them around and replicating their work
Aran Komatsuzaki#5714: @bmk cool. when we train dall-e with tpus, we need to move the data to google cloud tho. are we doing the caching approach suggested before?
triggerhappygandi#0001: Idk. Use videos too? @bmk
Aran Komatsuzaki#5714: yeah we need video dataset too
kindiana#1016: how much data did dalle take?
kindiana#1016: any guesses?
bmk#1476: it was 300M? or was that for CLIP
triggerhappygandi#0001: I didn't find much on the blogpost
Aran Komatsuzaki#5714: 400M images
zphang#7252: 400m images (if we assume it's the same data from CLIP)
triggerhappygandi#0001: _where is the paper_
Aran Komatsuzaki#5714: no paper yet just blog
bmk#1476: *noch nicht verfügbar* |
zphang#7252: "available through our new api"
bmk#1476: (the paper, not the model)
triggerhappygandi#0001: :mesh:
3dprint_the_world#6486: gpt but instead of on text, on audio?
triggerhappygandi#0001: The _new_ new API
bmk#1476: anyways we need to get brainstorming
3dprint_the_world#6486: train it on podcasts
bmk#1476: video is.. unlikely to get working
bmk#1476: audio is quite saturated at this point
3dprint_the_world#6486: what about text+audio
3dprint_the_world#6486: cross-modal
bmk#1476: i think one thing we could try to do is do super high res image generation
bmk#1476: we can try to figure out some hack to make that work
Aran Komatsuzaki#5714: that's just what i'm doing
bmk#1476: oh, nice
bmk#1476: iGPT was basically postage stamps and dalle is 256px
triggerhappygandi#0001: Abstracting the texr-image pipeline even more? Like we can do something like "draw a woman with Louis Vuitton handbag in the top left, walking on an alligator"
bmk#1476: if we could generate 1024px that would be much more impressive
ethan caballero#6044: Why did CLIP train from scratch instead of starting from GPT-3 pretrained embeddings?
Aran Komatsuzaki#5714: also video and audio too. they're growing rapidly with vq-vae. so shall with vd-vae. |
bmk#1476: different vocab
bmk#1476: 16k tokens
triggerhappygandi#0001: The superior vae yes.
Aran Komatsuzaki#5714: sampling of vq-vae for audio and video is like nightmare
kindiana#1016: why?
Aran Komatsuzaki#5714: super slow
bmk#1476: we could always do anime-dalle
triggerhappygandi#0001: Having the freedom to put the generated objects anywhere is lot more advanced than just generating said objects.
bmk#1476: would anyone be on board with that?
triggerhappygandi#0001: Uhhhh. Where do we get that many anime images
zphang#7252: anime key frame generation + interpolation network
Aran Komatsuzaki#5714: shouldn't be too difficult
bmk#1476: danbooru, duh
kindiana#1016: slower than vdvae?
3dprint_the_world#6486: > Zero-shot CLIP also struggles compared to task specific models on very fine-grained classification, such as telling the difference between car models, variants of aircraft, or flower species.
^ might be something to think about
Aran Komatsuzaki#5714: much slower cuz you need to AR-ly generate the discrete latents
triggerhappygandi#0001: And if you open source it, you will legit decrease society's productivity by 10%
Aran Komatsuzaki#5714: we can probably just generate video without interpolation efficiently if we can use vae-like approach.
kindiana#1016: :thonk: I thought your vdvae idea needed AR too |
3dprint_the_world#6486: now how would one train a model to do fine-grained classification
zphang#7252: unrelated but I thought this was impressive https://arxiv.org/pdf/2012.14271.pdf
also looks like they made a whole startup from it
zphang#7252: https://cdn.discordapp.com/attachments/729741769738158194/796264100014325760/unknown.png
triggerhappygandi#0001: Incorporating object-detection in your images is probably a lot more advanced: being able to move yourself where the objects in your image are sounds extremely cool.
Aran Komatsuzaki#5714: For image, no AR. For video and audio, it'll need some AR, but the extent of AR is much less so substantially faster.
Aran Komatsuzaki#5714: like several order of magnitude different
Aran Komatsuzaki#5714: regarding high-res objects, we can project the data into low-res in the first layer a la projection layer of Vision Transformer
Aran Komatsuzaki#5714: and further down-scale it in the later layers as in the usual (vq/vd-)vae.
3dprint_the_world#6486: ~~what about making a robot that can make a cup of coffee~~
Aran Komatsuzaki#5714: at least we can generate an image of a cup of coffee lol
3dprint_the_world#6486: lol
paws#3311: indeed, so i have a few friends who run manga translation shops who told me that this is how they actually do it, they first run machine translation over the text and fix any issues, this model (if it goes into largescale production) saves them a lot of time
3dprint_the_world#6486: joking aside, something alignment-related would probably be cool to do.
3dprint_the_world#6486: but maybe not.
cfoster0#4356: Why not? 🤔
StellaAthena#3530: *\*cough\** #deleted-channel *\*cough\**
3dprint_the_world#6486: I feel like the general public may not be ready yet for something *explicitly* about alignment.
3dprint_the_world#6486: unless it was phrased in a more general 'AI Safety' context.
3dprint_the_world#6486: but yeah, RL/IRL as an intermediate stepping stone to getting to alignment would be awesome. |
cfoster0#4356: :yes:
cfoster0#4356: I think us publicly doing IRL + scaling work is a solid path
bmk#1476: i like IRL but i'll have to do some reading up
cfoster0#4356: Same here. I've kind of... Ignored RL up until now
bmk#1476: i last touched it when i was trying to do some bizarre lm thing
bmk#1476: which didnt pan out
bmk#1476: i'd probably add "some kind of OA-like cool thing that laymen can understand and find cool" to the list of things to do
bmk#1476: like anime-DALLE
bmk#1476: (or soemthing better; i don't think anime-DALLE is the most central or optimal example, but it's the only one i have off the top of my head)
triggerhappygandi#0001: More controllability over the objects in the generated images
bmk#1476: how do you propose to do so
triggerhappygandi#0001: Somehow use objectron-dataset?
triggerhappygandi#0001: I'm sure there is some literature regarding this. I'll look it up
kindiana#1016: https://ericsujw.github.io/InstColorization/ I think something like this would be an interesting path for generative models
bmk#1476: this sounds like a bit of a "the same but like better" or "draw the rest of the fucking owl" kind of request
triggerhappygandi#0001: Yeah but if you can do something like "draw the owl only in the top left" then it can have a lot of applications
IKEA#9631: Imagine if you trained DALL-E on Deviantart, the endless weird porn you could make with it
bmk#1476: no thank you
thenightocean#6100: lol my first thought after reading the paper was that once they do this for the video entire porn industry is... fucked
Bedebao#4842: My my my, another new channel? Are you sure spreading your focus this wide is a good idea? |
PhantomLimb#8127: Hi everyone, just wanted to say hi and that I am very appreciative of the effort. If needed maybe we can setup some donation for hardware or training costs etc.
triggerhappygandi#0001: I _wish_ for the day we can replace Hollywood.
triggerhappygandi#0001: This is a true worthy goal.
thenightocean#6100: “GPT-neo please generate alternative version of GOT final season”
Aran Komatsuzaki#5714: @thenightocean we have a project of applying a big vae to generate nice looking videos btw
thenightocean#6100: ah shit
Aran Komatsuzaki#5714: as well as audio and other modalities
thenightocean#6100: do you need webdev help (...or money or my blood donation :p)
Aran Komatsuzaki#5714: not sure lol but i really appreciate your help in building a blog for eleuther at 'website' channel
thenightocean#6100: thanks! Will try to finish up something today
andyljones#7746: @chilli got a question on your mastermind subject, position encodings. what'd you recommend for attending over a hex grid like this?
could interpret it as a square and just go rows/cols, but then adjacency is weird. i *think* i'd like a position encoding s.t. distance between two cells is some simple function of their encodings https://cdn.discordapp.com/attachments/729741769738158194/796379166466637844/hex.png
andyljones#7746: well writing that out hit some ancient neuron and reminded me of cube coordinates
https://www.redblobgames.com/grids/hexagons/#coordinates-cube
great job chili, this is really helpful 👍
chilli#5665: I think going rows/cols would be good - I think the structure is regular enough that it'll have no problem with it
chilli#5665: Another option would just be to do relative position encodings, where each direction has its own learned embedding |
andyljones#7746: i'm working with tiny tiny networks, so i'm keen to make their lives as easy as possible. but you're right, i should start with the simple thing 👍
andyljones#7746: (i don't actually know if attention is even a good idea on this small a scale)
chilli#5665: I mean, it's basically a linear transform of a square
Louis#0144: Tiny as in a few thousand?
andyljones#7746: tens of neurons/layer up to few hundred
Louis#0144: O
Louis#0144: lol
Louis#0144: V tiny
Louis#0144: What is this a neural network for ants?
andyljones#7746: been using fully connected layers up until now, this is just a fun experiment
Louis#0144: I guess on the bright side u don’t need a GPU
andyljones#7746: tl;dr: scaling laws in alphazero
cfoster0#4356: It looks like there's probably a sinusoidal encoding that would work
andyljones#7746: that's what i had in mind when i was thinking 'how can i preserve distances'. sinusoidal over the cubic coords from that article seem like they'd work
buuuut as boring as it is, i should do the simple way first and see how to works out a few test cases
andyljones#7746: another attention question: can anyone recommend a Bumper Book of Transformer Ablations?
like, what happens if you use a smaller key size than the main layer, what happens if you merge the key and value, what happens if etc etc etc
chilli#5665: Uh, lucidrains repos has a lot of these |
chilli#5665: But he only generally puts the stuff that works
chilli#5665: You could also just ask the discord
chilli#5665: Afaik sharing qk is the least damaging of the 3
kip#6104: hey, i have a question about about the difference between auto-encoder embeddings, and embeddings from a contrastive loss. they both train in an unsupervised way, but do they capture similar detail?
kip#6104: are they even comparable?
Namhar#6909: Thats super cool 😮
StellaAthena#3530: Interested in getting involved but don't know how? Check out our jobs board, which outlines different skillsets and how they relate to our current needs.
https://github.com/EleutherAI/info/blob/main/jobs_board.md
Daj#7482: @StellaAthena Wanna add a call for an experienced web dev for advice on designing a human feedback gathering app?
StellaAthena#3530: It's in there.
StellaAthena#3530: Under UI/UX
Daj#7482: Not exactly what I had in mind, but sure it works
Namhar#6909: An autoencoder is trained to compress the information in a way you can still recover the original.
Contrastive methods we have to decide which things to push together or apart. eg. in multiview contrastive learning (like simclr) we decide random crops of the SAME image should be closer and those from other images should be further in representational space.
I guess it depends on what you mean by capture similar detail. Presumably if we define it as whether these embeddings are good at transfer to some downstream task, then we can compare them.
Sid#2121: Pinned a message.
3dprint_the_world#6486: I figure EleutherAI is still in the exploratory phase. |
Once new people start working on projects, the focus is going to narrow organically.
StellaAthena#3530: As a general rule, as we open channels people pour into them. Sure, some projects probably won't get off the ground but we'd much rather give people the space and resources to pursue what they want and have that sometimes fail than to curtail a cool idea. It also doesn't fit with our ethos as a community-driven research collective.
If you have a research project you want to pursue, we want to provide the resources (compute, collaborators, etc.) that you need to get it done. Channels crop up because people propose cool projects, not because some higher-up decides that we are going to focus on X.
PBn#3150: I was looking at the United States patent and trademark office and they have some bulk data in the tb range in XML files
PBn#3150: I'll post some links when I get off work
cfoster0#4356: Hi :) the Pile v1 contains the text of the background sections from those
PBn#3150: Well shoot, should have guessed that's the first place y'all would go
45#2247: openai: releases dall-e
ml researchers: shocked_pikachu.png
everyone elese: let's invade the capitol
3dprint_the_world#6486: openai releases dall-e
france surrenders
3dprint_the_world#6486: (apologies for resurrecting an extremely old meme)
45#2247: [INSERT JOKE.EXE]
gwern#1782: man, fukuyama was right. technical progress and everything goes on, but people are so wrapped up in identity and values that they won't even notice mere real world stuff
gwern#1782: 2020 has been super depressing for anyone caring more about ai risk than ai
Daj#7482: Did we...did we just lose?
gwern#1782: what if you had a virus^Wagi fooming and no one cared because they were too busy trying to figure out how to spin it on twitter about how it's the fault of structural racism or china
Daj#7482: Sounds like a very realistic scenario |
zphang#7252: this is francis fukuyama?
gwern#1782: yeah. his thesis was that we'd hit the end of human history, but the lack of meaning and anomie would drive politics into tribal directions
gwern#1782: people would *create* meaning through cultural revolutions / kulturkampf
Daj#7482: Maybe this is the Great Filter
zphang#7252: is this from his book or later writings
gwern#1782: book
gwern#1782: but he has more than one
gwern#1782: his _Origins_ is also pretty good IMO. I'd never really grasped the extent to which early states and governments were constantly at war with clans/family as rivals
gwern#1782: french and chinese imperial history especially make a lot more sense if you interpret everything as a struggle between the emperor and local clans/aristocracies
45#2247: imagine agi but trump is president
45#2247: better: trump president and civil war
3dprint_the_world#6486: I've been thinking along this axis recently. If we had an AGI fooming, it's likely the average person wouldn't even know, or really care.
3dprint_the_world#6486: They'd just attempt to explain what's going on around them as a result of various weird conspiracy theories.
Daj#7482: I genuinely think there's a ~1% chance we're already here
3dprint_the_world#6486: yeah, not totally unlikely.
Daj#7482: Or past the point of a chain of events that lead to it
45#2247: assuming we're already there, what would be the appropriate response?
Daj#7482: Pray?
45#2247: seriously
andyljones#7746: function under the assumption that we aren't |
Daj#7482: What Andy says
Daj#7482: if we're past the point of no return, and don't even know, what is there to do?
Daj#7482: ~~Build an even _bigger_ AGI, of course~~
andyljones#7746: it's only a slightly weaker argument that the point of no return was the development of language
45#2247: right, if by definition it's the point of no return then by definition there's nothing to do...
Daj#7482: Join the Eleuther Collectives AGI Force
thenightocean#6100: If I have to choose I would rather be killed by AGI than in some lame civil war. I already survived one (Yugoslavia) prefer not to go through another
Daj#7482: but for an attempt at a more serious argument: There are so, so many possible takeoff scenarios, all with greatly different dynamics
45#2247: maybe more productive, if there's 1% chance of being in the point of no return, and we assume that this probability grows with a certain shape with t (maybe sigmoid(2025-year)), how should that change our actions?
Daj#7482: Wouldn't change mine
andyljones#7746: snap
Daj#7482: I already act close to if that was true
45#2247: did dall-e made you update?
andyljones#7746: no
(answering for myself)
Daj#7482: Yes but not strongly
Daj#7482: It's still _evidence_
Daj#7482: So not updating would be wrong
Daj#7482: Well rather, I didn't update strongly because my priors are already so aggressive |
andyljones#7746: https://discord.com/channels/729741769192767510/747850033994662000/796127631304425542
Daj#7482: I think there's a 5% chance there will be no more humans in 5 years
3dprint_the_world#6486: lie back and think of England.
Daj#7482: It's not over till we're all paperclips
Daj#7482: Or hyper addicted blobs of dopamine dispensers
45#2247: it's not over until the paperclip machine suicides itself to make one more paperclip
andyljones#7746: live a life that's so boring that you won't be worth simulating
3dprint_the_world#6486: done.
Daj#7482: :guilty:
45#2247: to be more concrete, i'm doing a stupid job and i hate my coworkers, should I:
- a) travel the world with gf like agi is in 1y
- b) keep the job and learn deep learning
- c) do deep learning fulltime to build agi like eleuther-like thingies?
3dprint_the_world#6486: > travel the world
I feel like you should have been thinking about that in 2019, not now
andyljones#7746: how middle class are you
andyljones#7746: how old are you
andyljones#7746: how middle class are your parents
Daj#7482: a) Don't take advice from a techno-apocalyptic nerd discord
b) Learning ML and gaining power always seems like a good idea |
45#2247: I have 6k euros in the bank, 25, parents pretty middle class
andyljones#7746: quit your job and move home and figure out what you want from life. all you've to lose is your pride
Daj#7482: I would advise this because of the shitty job, not because of AGI lol
andyljones#7746: the great boon of middle class parents is getting to change your mind.
andyljones#7746: don't piss away your youth on something you don't care for, AGI or not
andyljones#7746: you're a software dev, it's easier for you to make money when you feel like it than for almost anyone else, ever
45#2247: it's actually a deep learning job, so if I only the job and don't talk to anyone then it's not cringe and I learn stuff. I just feel that after a few months I'll have learned the supervised learning stuff they do, and if we're really fooming like in expected value doing something about it makes more sense?
45#2247: (also, job market in deep learning is pretty tight attm, opportunities to learn are hard)
Daj#7482: If you _can_ do something about it, sure yes, almost surely. But we're a bunch of nerds on a discord, we don't know you or your life.
45#2247: question is: do I actually have a youth or not haha. AGI actually matters here
Daj#7482: You can try throwing yourself at a cool Eleuther project as a test for how you like it
Daj#7482: and to learn
Daj#7482: My current plan for AGI is just to amass generic power
Daj#7482: Friends, capital, technical knowledge, etc
Daj#7482: and invest some fraction of my effort into moonshot alignment ideas
45#2247: yeah that's my goal attm
45#2247: I'm trying to figure out the sign of:
p(burnout | day job & eleuther at night) x U(burnout) + p(doing something useful) * U(doing something useful)
goolulusaurs#1571: its hard to make good long term plans when the future is very uncertain |
Daj#7482: Yep, wish I could give you more concrete advice
Daj#7482: But I don't have anything under control either lol, no one does
45#2247: wait you no sam altman
Daj#7482: Sam is gonna paperclip us all fr
Daj#7482: That SSC meetup man
Daj#7482: Doesn't believe in Orthogonality, fuck me
3dprint_the_world#6486: tbh I think you guys are being a bit overdramatic
45#2247: please reassure us
Daj#7482: No don't worry, I'm like this all the time
Daj#7482: lol
3dprint_the_world#6486: haha
45#2247: can someone steelman agi in more than 10 years ?
andyljones#7746: don't really need to steelman it, it's entirely plausible
Daj#7482: Robin Hanson has some convincing things about GDP growth and stuff
45#2247: I just need someone to tell me the arguments again like I'm 5 to sleep correctly
andyljones#7746: it's the small but positive risk of being less than 10 years that drives the crazy
45#2247: how small in your view?
Daj#7482: If everything continues as it has for literally 2000 years without interruption (Picketty's growth numbers), we have a logn time to come
andyljones#7746: it is a really hard thing to get comfortable with, but sometimes your decisions should be driven by a thing that will very likely not happen, but which has a large impact if it does
Daj#7482: Even the median best TAI projections say more likely 50% by 2070 |
3dprint_the_world#6486: I actually still think Kurzweil's timeline of 2045 is probably accurate and I haven't seen any convincing arguments to the contrary.
Daj#7482: I mean, it's an arbitrary number, but it is an arbitrary number I like too
3dprint_the_world#6486: I just don't think it will be a sudden thing, more like gradual change that will accelerate
Daj#7482: Like COVID?
Daj#7482: lol
3dprint_the_world#6486: haha
45#2247: well evidence is kurweil literally having written another book called "the singularity is nearER"
Daj#7482: The saying is "slow takeoff actually feels faster than fast takeoff"
45#2247: https://www.goodreads.com/book/show/45024007-the-singularity-is-nearer
andyljones#7746: here, knock yourself out: https://www.alignmentforum.org/posts/hQysqfSEzciRazx8k/forecasting-thread-ai-timelines
andyljones#7746: think there's a metaculus thing as well somewhere
Daj#7482: someone have that OpenPhil report?
Daj#7482: Though let me just say: I genuinely, actually, for real, think that we, as a species, can do this
3dprint_the_world#6486: I think there's still some big unsolved problems in AGI.
andyljones#7746: the brains one?
45#2247: ajeya cotra?
Daj#7482: I think we can solve alignment, I think we can weather this transition, I think we can achieve reflective equilibrium
45#2247: ok adding open phil & andyjones' forecasting thread to my Eleuther reading list
45#2247: will start reading those in a day where openai doesn't release anything & the capitol is not taken into assault by trumpists
45#2247: too much for me right now |
3dprint_the_world#6486: I think so too.
Daj#7482: You're gonna be ok
Daj#7482: There's a kind of meta hope that is useful to cultivate
3dprint_the_world#6486: @45 it's a good idea to unplug from news once in a while.
3dprint_the_world#6486: doomscrolling is a thing
andyljones#7746: something else that helps me a lot with x-risk: remember how when you were a kid/teen and you finally realised that you and your parents would die one day? and you know how you go about your daily life basically ignoring that now?
hedonic treadmill's a heck of a thing. you'll level out in a few days.
Daj#7482: Where even if you have low epistemic hope, you have emotional hope
andyljones#7746: also, talk to people. any people.
45#2247: ye i'm not sure if this discord is actually good for mental health or not
Daj#7482: We're usually funnier
Daj#7482: actually
Daj#7482: hmmm
Daj#7482: lol
3dprint_the_world#6486: really, why do you say that
Daj#7482: Andy says it right, we've all been through it
andyljones#7746: if you find yourself thinking 'maybe i should log off', log off. talk to someone ideally, or pick a book you read years back and really enjoyed and go have a bath. do whatever you usually do to lever your brain out of one spiral and into a better one
3dprint_the_world#6486: yes.
Daj#7482: I do that sometimes |
Daj#7482: I go get a beer with friends or play Dungeons and Dragons
Daj#7482: or just call my mom
45#2247: ok i'll be back after having read those things, have fun guys 👋
Daj#7482: Everyone should call their mom
3dprint_the_world#6486: assuming their mom is alive
Daj#7482: Yes, of course, my condolences
3dprint_the_world#6486: oh, my mom is alive
3dprint_the_world#6486: she's a bitch
Daj#7482: Eh, yeah, can't choose family
IKEA#9631: Fact of the day: "GPT" in french sounds exactly like "I farted"
gwern#1782: we could probably use more AI timeline surveys. everyone keeps citing the old 2014 etc surveys and uhhhh a lot of water under the bridge since then, y'know? heck, even DALL-E seems to be changing a few skeptics' minds right now and that was literally yesterday
gwern#1782: you don't hear so much about 'ai winter' the past year, one notes
Daj#7482: Did you read that post on whether AI winter ever even happened?
Daj#7482: I forgot where it was
3dprint_the_world#6486: which AI winter though?
Dromarion#3383: What's the AI season now? Going into summer 🤔
genai (Immortal Discoveries)#0601: Can modern computer vision (or any know algorithm ex. a simple one that uses no backpropagation) see 1 image of ex. a cat and then if shown 10 dummy images - one of which does have an unseen cat - recognize which image has a cat - which is the cat is saw before but blurred, brighter, noisy, rotated, stretched, flipped, inverted brightness? This requires great accuracy at recognizing something it knows but that is very distorted.
chirp#4545: https://twitter.com/GaryMarcus/status/1346863165888348161
andyljones#7746: > what do you consider is the opposite of 'brute force'?
|
> finding a good prior and building the right structure.
🤣
Sid#2121: I really don't understand how you can fail to be impressed by DALL-E. There's literally thousands of samples presented to you in the blog post
StellaAthena#3530: I agree with this tweet and also think that it’s impressive
Sid#2121: well, i think most people agree with the first two points
Sid#2121: really not sure what he's getting at with brute force / unreliable
StellaAthena#3530: What about the third
StellaAthena#3530: I think there’s no reason to believe that this *isn’t* cherry picked
Sid#2121: they present a much wider variety of samples than most people that report results
Sid#2121: plus, yes it's cherry picked. *By CLIP*
StellaAthena#3530: Agreed. Most people use next to no statistical rigor on their research.
3dprint_the_world#6486: I mean, on a couple of those points, he's not wrong.
Sid#2121: To call it unreliable just feels like a moving the goalposts moments for me, when DALL-E's generalization ability is an order of magnitude better than any previous image to text model
Sid#2121: yes, it has certain prompts where it doesn't perform as well
erin#5432: hey
Sid#2121: but that's like calling a world class marathon runner slow because (s)he tripped up a few times
zphang#7252: does this mean gary played cyberpunk
StellaAthena#3530: The blog post discusses how semantically identical labels break it
StellaAthena#3530: That’s a **huge** flaw |
StellaAthena#3530: It can usually handle two objects but not three
StellaAthena#3530: It doesn’t seem able to keep which objects predicates apply to straight, even in simple sentences
StellaAthena#3530: This is very cool. It very well may be a stepping stone on the path to a revolution in AI. But it isn’t that revolution, not by a long shot.
bmk#1476: the ai effect takes hold once again
FractalCycle#0001: this is a big one; rate of improvement + doesn't seem to be hitting diminishing returns / hard roadblocks *to get those improvements* = surprising progress
genai (Immortal Discoveries)#0601: the opposite of brute force search is a perfect physics sim of the universe
genai (Immortal Discoveries)#0601: both are rediculously inefficient
genai (Immortal Discoveries)#0601: ya
CRG#8707: The ambiguity "mistakes" are interesting. https://cdn.discordapp.com/attachments/729741769738158194/796506949893619792/784140c64ab67bd26e12aada7eedbe5e.png
genai (Immortal Discoveries)#0601: look: https://deepai.org/machine-learning-model/text2img
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/796507118655242310/unknown.png
CRG#8707: Yeah, not to criticize.
genai (Immortal Discoveries)#0601: cuz it is of, close to on
genai (Immortal Discoveries)#0601: of=on
CRG#8707: It can be genuinely ambiguous
bmk#1476: maybe more language pretraining would have helped
bmk#1476: dalle wasnt trained on a whole heck of a load of language
CRG#8707: 250 isolated tokens of image descriptions is not much
bmk#1476: so a text pretraining step would be really interesting low hanging fruit for improvement
bmk#1476: actually, have independent text and image pretraining steps, and then glue it together afterwards |
genai (Immortal Discoveries)#0601: old one > https://experiments.runwayml.com/generative_engine/
bmk#1476: paired data is a lot more expensive
Spy#9778: pretty funny that gary marcus tweeted about this as if the outcome supported his views rather than contradicted them https://twitter.com/GaryMarcus/status/1346863165888348161
CRG#8707: Just mix the text -> image dataset directly into the training data.
CRG#8707: > This ability to process text and images together should make models smarter. Humans are exposed to not only what they read but also what they see and hear. If you can expose models to data similar to those absorbed by humans, they should learn concepts in a way that’s more similar to humans. This is an aspiration — it has yet to be proven — but I’m hopeful that we’ll see something like it in 2021.
Ilya
bmk#1476: something something conservation of expected evidence
genai (Immortal Discoveries)#0601: ya, it did use lots of compute and data, but obviouslly it is extra good
genai (Immortal Discoveries)#0601: doesnt anyone here know that dumb AI nwould need gazillions more data?
genai (Immortal Discoveries)#0601: this is the basics....
CRG#8707: How much compute was hominid evolution again?
genai (Immortal Discoveries)#0601: if you make a simple AI and compress data by finding patterns (better predictor), you compress 100MB to ex. 30MB. When i double my data trained on, 28MB compressed. 27MB. 27.8MB, 27.899999999MB.....then upgrade the intelligence and bamn 23MB.
FractalCycle#0001: >human brains are the best example of GI that we know of
>100B neurons, 100T(?) connections
>NN parameters are kinda like connections
>"giant" "brute-force" models still only have 100B-1T parameters
>"The brute-force paradigm isn't working anymore, let's leave!"
genai (Immortal Discoveries)#0601: the brain has extra "overhead".....nope
genai (Immortal Discoveries)#0601: it doesnt need em all
genai (Immortal Discoveries)#0601: it doesn't link em all, only selecteively |
FractalCycle#0001: ah, yeah forgot about that
andyljones#7746: discuss this three times and ajeya cotra will appear and beat you to death with a google doc
genai (Immortal Discoveries)#0601: i could go on
FractalCycle#0001: the big ML paradigm is still getting more efficient, still seeing gains... like unless the *initial* training gets more efficient (i.e. skipping/optimizing a lot of the model/architecture-selection), the smaller-model intelligence-compression won't work as well
FractalCycle#0001: (and, based on that one hypothesis where a big model is "just" a network of smaller models, the smaller-model paradigm might not even be workable at the initial-training phase)
FractalCycle#0001: (i could be getting a lot of this wrong tho, i'm kind of a noob to this)
genai (Immortal Discoveries)#0601: DALL-E isn't cherry picked, #1 you can tell and #2 i tried GPT-3, IGPT, AND JUKEBOX, look at how good jukebox is > https://www.youtube.com/watch?v=6Q3V238JmNI
genai (Immortal Discoveries)#0601: my friend generated that one
FractalCycle#0001: ya, i noticed the jukebox examples (from some news article where they tried different model sizes), they really got *good* at the larger size
genai (Immortal Discoveries)#0601: https://cdn.discordapp.com/attachments/729741769738158194/796512377859997776/lamb_input.mp3
genai (Immortal Discoveries)#0601: https://cdn.discordapp.com/attachments/729741769738158194/796512410877952030/lamb_generated_1.wav
genai (Immortal Discoveries)#0601: https://cdn.discordapp.com/attachments/729741769738158194/796512417882177536/lamb_generated_2.wav
genai (Immortal Discoveries)#0601: https://cdn.discordapp.com/attachments/729741769738158194/796512418766913596/lamb_generated_3.wav
genai (Immortal Discoveries)#0601: mine ^
genai (Immortal Discoveries)#0601: first tries
FractalCycle#0001: i am simple musician. I hear random pause for a beat or a measure, i am pleased
CRG#8707: Jukebox has been popular on Youtube: https://www.youtube.com/channel/UCThULshLB0qiEoL73KxeNbQ
FractalCycle#0001: lots of videos in that format, easy to find for lots of songs
genai (Immortal Discoveries)#0601: https://cdn.discordapp.com/attachments/729741769738158194/796513251109502996/igpt_WORKS.mp4
genai (Immortal Discoveries)#0601: my try at IGPT first time ^ |
genai (Immortal Discoveries)#0601: "it works" 🙂
genai (Immortal Discoveries)#0601: https://cdn.discordapp.com/attachments/729741769738158194/796513822181294121/NzUyNDc3MzIwMA_zh_3.mp4
genai (Immortal Discoveries)#0601: some1 needs to put GPT-3 in her
genai (Immortal Discoveries)#0601: BTW Blender is bit better, it has reward topic desire ex. i want to get AGI made....it'll keep bringing it up....yup....so amazingly canny
genai (Immortal Discoveries)#0601: And GPT-2 u can try online it is there guys
genai (Immortal Discoveries)#0601: they all r as good
genai (Immortal Discoveries)#0601: we could add reflxes to that wombot too
genai (Immortal Discoveries)#0601: https://cdn.discordapp.com/attachments/729741769738158194/796515166447206450/Robot-Rap.mp4
chilli#5665: wtf am I looking at
bmk#1476: this is square in the middle of uncanny valley
cognomen#6297: abstraction would get you much more flexibility and emotional mileage i think
Louis#0144: nah thats pretty pre uncanny valley
Louis#0144: uncanny value is where you only subconsciously notice the difference
cognomen#6297: unnecessary skeuomorphisms are uncanny valley
Louis#0144: it gives u an off feeling
zphang#7252: anyone remember that one sophia bot
AI_WAIFU#2844: we invented anime to get around this very problem
bmk#1476: so what youre saying is that the future is not robots but VR anime
AI_WAIFU#2844: Yes. Musk is gonna get us full dive VR long before anyone bothers to make a fuckable sex doll.
IKEA#9631: Furries>anime girls |
genai (Immortal Discoveries)#0601: futanari
IKEA#9631: In VR you can do whatever you want and you pick boring standard humans... Smh
genai (Immortal Discoveries)#0601: you can replay that great time, or even tweak it
genai (Immortal Discoveries)#0601: or give her blue hair, large body
genai (Immortal Discoveries)#0601: i meant tall
genai (Immortal Discoveries)#0601: Can modern computer vision (or any know algorithm ex. a simple one that uses no backpropagation) see 1 image of ex. a cat and then if shown 10 dummy images - one of which does have an unseen cat - recognize which image has a cat - which is the cat is saw before but blurred, brighter, noisy, rotated, stretched, flipped, inverted brightness? This requires great accuracy at recognizing something it knows but that is very distorted.
(without using Data Augmentation)
bmk#1476: ~~so you have chosen.. death~~
3dprint_the_world#6486: you overestimate what people consider 'fuckable'
genai (Immortal Discoveries)#0601: you can change your memories to love X....in the end we will onlny like most the immortality, the thing that don't change
3dprint_the_world#6486: I feel like there are better discords for this.
genai (Immortal Discoveries)#0601: that could be said about other replies above too, but ya i am asking on them too as it's important
genai (Immortal Discoveries)#0601: someone mentioned https://en.wikipedia.org/wiki/Siamese_neural_network
genai (Immortal Discoveries)#0601: but can you explain it intuitively?
genai (Immortal Discoveries)#0601: ah hmm https://towardsdatascience.com/a-friendly-introduction-to-siamese-networks-85ab17522942
cfoster0#4356: There are better places to ask this question. I would recommend r/learnmachinelearning
genai (Immortal Discoveries)#0601: ya see, Data Augmention helps cuz CNNs etc cant see da cat upside down or brighter ect, but i found a way to do it.....but these siamese ANNs hmm, they again seem to have a complex method, training....still reading it
genai (Immortal Discoveries)#0601: what is this eludian space similarity...doesn't make sense
Louis#0144: colab is shitting itself on some of phils code
Louis#0144: I have no idea why |
Louis#0144: it has a GPU memory leak on colab
Louis#0144: but not locally...
Louis#0144: not blaming phil ofc
Louis#0144: its colab at fault here im p sure
Louis#0144: or faiss
Louis#0144: I think its how faiss is interacting with colab
Louis#0144: anyone have luck running faiss in ipython environments?
chirp#4545: do you need to run faiss continuously and interactively?
chirp#4545: can you `del` the faiss object periodically? or run faiss in a separate OS process?
Louis#0144: oh true
Louis#0144: I'll del
Louis#0144: I'll see if I can get that working
chilli#5665: what code?
Louis#0144: https://colab.research.google.com/drive/1OFDcfEXZcg_KkyPlpOVLuJtPKdjj6QfF?usp=sharing
Louis#0144: the run there is when I set num_tokens to 100 and batch_size to 1
Louis#0144: note that this runs locally on 16 GB but not on colab which also has 16GB
ethan caballero#6044: OH SHIT! Sam McCandlish left OpenAI! Dario really is recruiting the scaling law people to his startup:
https://www.linkedin.com/in/sam-mccandlish/
bmk#1476: ~~shit, forget what we agreed earlier, someone start drafting up a cold email *right now*~~
ethan caballero#6044: ^ @gwern, your dream of dario scaling law startup is still very plausible. |
paws#3311: oh scaling law expert 😮
Aran Komatsuzaki#5714: oh i gotta apply to their group
bmk#1476: eleuther needs to partner with DarioAI, clearly
paws#3311: does anyone even have a name/website for their group?
ethan caballero#6044: dario.agi
Aran Komatsuzaki#5714: lol
chilli#5665: damn
chilli#5665: pretty crazy
cfoster0#4356: I'm deadly curious what they've got in store
chilli#5665: I wouldn't want to leave OpenAI after their recent announcements 😛
chilli#5665: what do they know that we don't :thonk:
chilli#5665: (as in, if I was already working at OpenAI, the recent announcements would make me more excited to stay)
bmk#1476: why?
triggerhappygandi#0001: What announcements other than gpt-4
bmk#1476: if all the scaling people are gone, then half the appeal of oa is gone
chilli#5665: Dall-E is really cool lol
cfoster0#4356: I still feel like it's a nonprofit research group focused on "so scaling laws, huh? Why are they and what do we do about em?"
triggerhappygandi#0001: Are they tho
cfoster0#4356: But maybe a startup who knows
triggerhappygandi#0001: Dall-E is cool. But GPT-4 will be far more general. But I wonder what else is coming |
chilli#5665: I mean, this all goes back to the previous question of
chilli#5665: "what changed that made all these people want to leave"
bmk#1476: in case my stance hasn't been clear, i'd absolutely love to beat oa and/or msft/fb/google to the punch for 1T or whatever
triggerhappygandi#0001: Microsoft corporatism probably
bmk#1476: so im not looking forward to gpt4
triggerhappygandi#0001: I want to believe it
bmk#1476: because i plan on beating gpt4 to the punch
triggerhappygandi#0001: Hmm. Indeed it would be far better outcome
triggerhappygandi#0001: It's the same reason I don't feel excited about any NLP progress either. Why not be the ones to do it in the first place
triggerhappygandi#0001: That Google hasnt beaten gpt-3 shows that they aren't putting as much effort into actively doing it.
bmk#1476: or maybe the model is still cooking
triggerhappygandi#0001: I mean.. it's been 7 months since gpt-3
bmk#1476: :gameryes:
bmk#1476: *only*
triggerhappygandi#0001: They created an moe in _3 days_
bmk#1476: ok so first off
triggerhappygandi#0001: Yeah yeah I know
triggerhappygandi#0001: Not a fair comparison
bmk#1476: the fact that it's a moe speeds it up drastically
triggerhappygandi#0001: But _3 days_ |
bmk#1476: yes and do you know how long it would have taken them to train a gpt3 using the same hardware?
triggerhappygandi#0001: 6 months?
bmk#1476: well, 2
bmk#1476: but a lot longer
triggerhappygandi#0001: They do have even bigger infrastructure
triggerhappygandi#0001: Google was literally flexing in MLPerf with an ungodly 8192 TPU machine
bmk#1476: yeah good luck using that all the time lol
triggerhappygandi#0001: What are the problems with continuously using it?
bmk#1476: getting your hands on that much compute even internally is uphill
triggerhappygandi#0001: Ah
triggerhappygandi#0001: I mean.. all you gotta do is convince Jeff right?
ethan caballero#6044: cold email/message Dario with something that he would find valuable
cfoster0#4356: $$$$click this for free GpUs$$$$
TeXit#0796: **cfoster0** https://cdn.discordapp.com/attachments/729741769738158194/796650938597113866/314125175111286785.png
cfoster0#4356: O no
rentb33#7342: So is there a way we can point our individual gpu systems to a pool to lend processing power to build the model? Similar to coin mining pool
triggerhappygandi#0001: _clicked_
triggerhappygandi#0001: There is no way the communication will be feasible
3dprint_the_world#6486: a company's external appearance can be very different from its internal politics.
triggerhappygandi#0001: Even using storage and VM from different regions can cause slowdowns |
3dprint_the_world#6486: I've been at companies that were great at dazzling people. But inside, the team is toxic.
Aran Komatsuzaki#5714: thanks i'll do it lol
bmk#1476: see our faq
rentb33#7342: Gotcha, so there’s no way to split it into chunks of work that everyone can process then return to the main server?
bmk#1476: https://github.com/EleutherAI/info
bmk#1476: see the faq
triggerhappygandi#0001: Who knows whether Dario is actually creating his own startup
triggerhappygandi#0001: Kingma just joined Google
triggerhappygandi#0001: And salimans too probably
ethan caballero#6044: the openai post said he's founding a startup
triggerhappygandi#0001: 😅
bmk#1476: well, *implied*
bmk#1476: and not super strongly
triggerhappygandi#0001: Inb4 Ilya says fuck it and leaves too
ethan caballero#6044: they used the word "co-founders"
bmk#1476: there are a whole bunch of things that can be cofounded
bmk#1476: nonprofit organizations, profit-capped organizations, discord servers, etc
triggerhappygandi#0001: How do people manage to both be tenured professors and employees at a company
triggerhappygandi#0001: And do they get 2 full incomes then
3dprint_the_world#6486: being a professor is a title, not a job, necessarily |
triggerhappygandi#0001: But Yann LeCun does take classes in NYU
3dprint_the_world#6486: sometimes they might just teach a few classes a semester.
triggerhappygandi#0001: I doubt they do it for free lol
3dprint_the_world#6486: they get paid
3dprint_the_world#6486: what I mean is it probably doesn't take up much of their time.
triggerhappygandi#0001: Probably. But do they get paid the full income of a tenured professor?
triggerhappygandi#0001: If so then that's literally the safest you can play
3dprint_the_world#6486: I make more than the full income of a tenured professor.
CKtalon#7792: Some professors can use grant money to cover their teaching responsibilities. There are sabbaticals as well
cognomen#6297: so many decades and spambots still haven't evolved beyond copypasting
cognomen#6297: https://xkcd.com/810/
cognomen#6297: could this finally be implemented?
Visarch of Apollo,#7152: @triggerhappygandi We are 6 months from gpt-3?
triggerhappygandi#0001: Yes
triggerhappygandi#0001: 7 actually
bmk#1476: for an estimate of the number of months until our gpt3 replication is finished, please visit https://www.random.org/dice/?num=2
triggerhappygandi#0001: Very cool
triggerhappygandi#0001: I rolled 2 ones
bmk#1476: i wonder why they're trying to do that
bmk#1476: so im assuming you guys have it under control |
gwern#1782: what do they have against the pile? does it trigger some censorship keywords or something?
triggerhappygandi#0001: Probably some censorship thing
bmk#1476: i just googled for the pile to see what people are saying and apparently we're the #3 result on google, right after this gem: https://cdn.discordapp.com/attachments/729741769738158194/796804115379781652/unknown.png
Sphinx#2092: they probably want to make their own pile. inb4 "the stack"
chilli#5665: if they want to make their own pile they don't need to DDOS
triggerhappygandi#0001: Seems like we won't be accepted in the pile
chilli#5665: lol
triggerhappygandi#0001: Smh
chilli#5665: they can just download it
triggerhappygandi#0001: It probably has a lot of pro-hong-kong text or something
bmk#1476: so does, like, literally everything, though
triggerhappygandi#0001: Probably. I am seeing a lot of posts about crackdown on Hong-Kong since last 2 days.
triggerhappygandi#0001: Is something happening?
bmk#1476: aside from random auto genned stuff, i can't find anything on the chinese internet about the pile
Sid#2121: damn. Wonder which file lol
Louis#0144: guess who has quantified and statistically signifiant evidence that GPT3 writes better stories B)
Louis#0144: lmao
Louis#0144: (No spencer not that dw)
Sid#2121: better stories than what?
Louis#0144: GPT2 |
Louis#0144: as in logically sound
Louis#0144: I cant share too many details rn but its exciting
Sid#2121: can you please share many details
Louis#0144: LMAO
Louis#0144: you need to wait till april
Louis#0144: sorry
Sewing#2678: can someone explain to me please the difference between the
Sewing#2678: encoder in e.g. Bert and the Decoder in e.g. GPT ?
Sewing#2678: it it only the masked attention part?
Sewing#2678: since there is no Encoder for GPT like models, there cannot be any cross attention layers, i.e. only Self attention + feed forward akin to the Encoder + additional masking
Sewing#2678: correct?
triggerhappygandi#0001: @Louis how do you even define "better story"?
triggerhappygandi#0001: With bleu?
cfoster0#4356: Yeah the whole encoder/decoder terminology is kind of ill-fitting
zphang#7252: Both BERT and GPT have no cross-attention layers, self attention only.
Louis#0144: plot holes
Louis#0144: etc
Louis#0144: i do storytelling research
cfoster0#4356: What matters is (1) is there cross attention and (2) is there autoregressive masking?
Louis#0144: logical coherency |
triggerhappygandi#0001: Is there a metric for it?
triggerhappygandi#0001: Oh yeah I remember
triggerhappygandi#0001: You talked about that adversarial thing with double negatives
Sewing#2678: so then really the only difference is the masking?
triggerhappygandi#0001: Well there's a _little_ difference between encoder and decoder internal structure in a transformer
Sewing#2678: like what?
triggerhappygandi#0001: No multi-head attention in the encoder https://cdn.discordapp.com/attachments/729741769738158194/796839423358664704/unknown.jpeg
zphang#7252: *masked
triggerhappygandi#0001: Oops
triggerhappygandi#0001: Yeah
Sewing#2678: ?
Sewing#2678: ofc there is
triggerhappygandi#0001: *masked multi-head attention
Sewing#2678: thats what I am saying, the only difference is the masking, correct?
zphang#7252: "more or less" (in encoder-decoder models, decoders have cross-attention)
Sewing#2678: I know
triggerhappygandi#0001: That's probably the only difference I know. That's why bert paper compared so much of its performance to gpt
Sewing#2678: but if I consider only encoder for e.g. Bert or only Decoder for e.g. GPT, then the only differnece in in the self attentin layer, namely that the GPT uses masked selt attention whereas Bert etc. use full self atention?
Sewing#2678: mh ok thank you
zphang#7252: big picture, yes |
Sewing#2678: and one last question:
Sewing#2678: when would I use a transformer-encoder over a trnasformer-decoer and vice versa?
Sewing#2678: what can Encoder only achieve that Decoder only structrues cannot and the other way around?
triggerhappygandi#0001: Encoder only are good for translation. And other things like that which benefit from seeing the entire text (summarization, etc)
zphang#7252: it's more a function of your task at hand
triggerhappygandi#0001: Yeah
zphang#7252: BERT is allowed to see the whole text
zphang#7252: GPT is generating stuff, so it can't see future text
zphang#7252: Translation models see all the source text, and then generate the target translation
triggerhappygandi#0001: Isn't most of SQuAD suite suited for encoders?
triggerhappygandi#0001: I don't think they have many benchmarks for LMs
zphang#7252: SQuAD is basically predicting a sub-string from the passage, so encoders are well-suited for it
Sewing#2678: but I thought translation is where Transformers, i.e. full Encoder/Decoder structures shine
Sewing#2678: and in the end, everything is generating stuff, so I still dont fully grasp the differnece
zphang#7252: in translation, you only generate the target sequence
zphang#7252: so you want the encoder to encode source sequence, and then the decoder to attend over the encoded sequence (and past decoded tokens), and generate more decoded tokens
Sewing#2678: yes
Sewing#2678: so what u are saying is, that nowadays the full Transofmer (Enc and Dec) is rarely used?
zphang#7252: it's used for Translation
zphang#7252: and T5 also uses an encoder decoder (IMO T5 is a little weird) |
cfoster0#4356: Encoder decoder is great when you want to merge two different streams of info
zphang#7252: also BART I guess
cfoster0#4356: So it's also used in multimodal
cfoster0#4356: Although, again, there's really no crisp thing as "transformer encoders" or "transformer decoders"
cfoster0#4356: People use the term "decoder" for autoregressive models, oftentimes
Sewing#2678: mh
Sewing#2678: thinking of Vision Transformers, if I have two different streams of images (i.e. multimodal) and I want to relate them to each other (in feature space) would I use to encoders (one for each modality) and then compare them somehow or would I use one encoder for the first image stream and the second stream as input to the Decoder that then attends over the encodes 1st stream?
cfoster0#4356: Depends on your task
cfoster0#4356: Check this out https://arxiv.org/abs/2011.00747
cfoster0#4356: https://cdn.discordapp.com/attachments/729741769738158194/796845734246350868/Screenshot_20201103-115834_Google_PDF_Viewer.jpg
cfoster0#4356: Cursed diagram
bmk#1476: it's glorious
Sid#2121: how do ppl make these architecture diagrams
Sid#2121: is this a latex thing?
bmk#1476: ¯\_(ツ)_/¯
bmk#1476: we need to figure out and add some to our papers tho, lol
thenightocean#6100: I can draw it in sketch if needed
fazz#8459: Lucidchart is what you need - I've created similar flowstate diagrams with their shape libraries
gwern#1782: a lot of people draw them by hand in illustrator
gwern#1782: https://news.ycombinator.com/item?id=18788244 |
gwern#1782: there's a couple reddit discussions like https://www.reddit.com/r/MachineLearning/comments/67tn9t/d_diagrams_and_graphs_in_papers/
gwern#1782: and remember, you can download the arxiv sources and see how a specific paper did it. usually it'll be obvious from the files whether it was tikz, illustrator, or something else
thenightocean#6100: sketch is usually very fast. I can create entire symbol library for it and it can be quite quick to do it in the future
thenightocean#6100: this transformer took a bit time when I did it from scratch, but in the future components can be easly added from the library https://cdn.discordapp.com/attachments/729741769738158194/796856468359479336/YA8QQ6YT50YFRcP9GQ11OglSclhhrzgDmbo1pmkoztNA7FZi4axWxccACbOtq3oajviTgeudLO9OQTdeWLNHnDqGTkm1dNSLAap9.png
thenightocean#6100: cause I assume all this diagrams have similar components, right?
pretysmitty#6405: hi all, im coming to this community with some questions about training models on a distributed/parallel system. im assuming that GPT-3/neo has to be trained in such a fashion, so hopefully im in the right place! im very new to parallel processing so excuse my bad jargon
Question: my group is attempting to train models via evolutionary/population level style, where we initialize a bunch in parallel and apply a genetic algorithm to optimize. our genetic algorithm uses MPI backend, and we need this work somehow with whatever backend pytorch/tensorflow use for parallel processing. (1) Is CUDA still used when training w distributed systems, or something like OpenACC/OpenCL? (2) Do you all have documentation for how you plan to train GPT-neo? That would be a big help for me to understand the specifics of training w distributed systems
bmk#1476: https://github.com/EleutherAI/info this should answer most of your questions
pretysmitty#6405: Also, my current understanding is that pytorch/tf use CUDA, and I have found some resources on CUDA-aware MPI. If anyone has some knowledge of that, id love to pick your brain \:)
pretysmitty#6405: thanks!
3dprint_the_world#6486: what do people here think of plugging in language models (or other AI models) into stuff like https://digitalhumans.com
3dprint_the_world#6486: i.e. autonomously animated virtual androids
3dprint_the_world#6486: like is there something new that could be done there that you couldn't do in a text-based prompt
thenightocean#6100: yeah, you can now get trapped in uncanny valley
thenightocean#6100: ( 🙂 )
3dprint_the_world#6486: (assuming uncanny valley problems were solved 😃 )
IKEA#9631: This site low-key looks like it could be from a black mirror episode
3dprint_the_world#6486: or https://www.soulmachines.com/
3dprint_the_world#6486: more generally I'm interested about applications of VR+AI |
cfoster0#4356: IMO virtual avatars / virtual androids are super low-hanging fruit in general
3dprint_the_world#6486: in what sense
cfoster0#4356: As in, I think all the pieces are effectively here for high fidelity virtual avatars
jrowe#5371: wait for DALL-E for long coherent video
jrowe#5371: then use gpt-neo to create text cues for an avatar
jrowe#5371: train it on historical data and the ancestry.com database, then "Siri, let me talk to my ancestors"
jrowe#5371: has anyone tried using deepfake replacements of a good human model on an otherwise bad / uncanny valley simulation?
jrowe#5371: might be a cheap way of achieving good enough results?
IKEA#9631: You mean uhhhh like this?
https://www.youtube.com/watch?v=byKy9kGnyvo
Dromarion#3383: VRChat taught me that people are less interested in having 1 to 1 representations of themselves and are more interested in being Shrek
3dprint_the_world#6486: oh yeah totally, I'm just more interested in what you'd *do* with them
bmk#1476: ~~Shrek~~ an anime girl
ftfy
Dromarion#3383: Why have a Waifu when you can be the Waifu :bigbrain:
Dromarion#3383: But yeah I think when people say that they want VR to be realistic, they really just mean immersive. Like who wants to put on a headset and essentially experience their boring real life
cfoster0#4356: What *I'd* do with them? Make videos, prolly
3dprint_the_world#6486: rule 34
jrowe#5371: yup, that's awesome |
jrowe#5371: I think immersive is the x factor. Barney the dinosaur isn't watched because he's superrealistic or believable, its because the whole show is immersive - you get to ignore the finer details and visual abstractions to focus on storytime
Deleted User#0000: so fun story, i was at a bar with a friend and he introduced me to his acquaintance, who worked for lucas films
Deleted User#0000: for that Leia scene, he spent an entire year just working on 10 seconds of that
jrowe#5371: wow
jrowe#5371: and some schmo nerd took a whole weekend to remaster his opus
Deleted User#0000: pretty much. deep learning is going to render a lot of things obsolete
jrowe#5371: yeah
jrowe#5371: like money and work and probably humans
3dprint_the_world#6486: and dogs
3dprint_the_world#6486: so don't breathe a sigh of relief yet @Deleted User
jrowe#5371: i dunno, superintelligence might emerge from a doge blockchain and decide to keep its biological predecessors around for nostalgia
Deleted User#0000: i actually want to take a lot of pictures of ice cream, and put her in a GAN at some point
jrowe#5371: whatever happens is going to be supremely weird
gwern#1782: I'm always blown away at the reality of what "this movie cost $500m to make" cashes out as.
gwern#1782: this absurd level of investment in sfx... and then the *absolutely moronic screenplays and stories* hollywood makes
3dprint_the_world#6486: actors: $50m, cgi: $200m, story: two dolla
gwern#1782: someone help me budget this, the fans are refusing to watch the new star wars movie and my family is dying
Deleted User#0000: to be fair it's a pretty important scene for pleasing die-hard fans
Dromarion#3383: What if we had *Agile Film production*
IKEA#9631: actors: $200M, cgi $50M FTFY |
Dromarion#3383: Just release the next Star Wars film on early access
AI_WAIFU#2844: What you mean like v-tubers?
https://playboard.co/en/youtube-ranking/most-superchated-all-channels-in-worldwide-total
I think it's a nice get-rich quick scheme.
AI_WAIFU#2844: Like clearly general intelligence isn't a requirement https://www.youtube.com/watch?v=JPmrXkUcUOA&t=588s
bmk#1476: So what you're proposing is to make a fully autonomous gpt3 powered eleuther-chan vtuber
Kyler#9100: LMAOOOOO
cfoster0#4356: This is what we're MADE FOR
IKEA#9631: Some v-tubers are making mad bank but the "market" is getting flooded by wannabes who also want their slice of the pie, youd need some serious novelty to make it nowadays
Kyler#9100: we need a good tts lmao
cfoster0#4356: isn't tts, like, solved by now?
AI_WAIFU#2844: No. I'm proposing ~~a harem~~ an army of fully autonomous gpt3 powered vtubers.
Kyler#9100: this is how twitch arrmageddon starts-
bmk#1476: エルウサアちゃん、君にお金をやったいです〜
triggerhappygandi#0001: Ok Google how to live off of 3 hrs sleep?
triggerhappygandi#0001: You don't understand what you want to unleash on the world.
triggerhappygandi#0001: It will be the Fall of Civilization™
triggerhappygandi#0001: Iirc Wavenet was surpassed by HifiGAN probably, but I don't think even that had _solved_ it.
cfoster0#4356: Hmm I'll have to look back at the MOS on it
cfoster0#4356: Wait those are both just the vocoder portions, no? |
triggerhappygandi#0001: They are.
cfoster0#4356: What's SOTA for the synthesizer these days?
triggerhappygandi#0001: https://arxiv.org/abs/1712.05884v2
triggerhappygandi#0001: This according to paperswithcode
3dprint_the_world#6486: is it? does anyone provide *good* performant real-time tts?
3dprint_the_world#6486: afaik the best one is tacotron
3dprint_the_world#6486: but it's not super amazing
sunny#5382: Uberman
gwern#1782: Uberman is a bad idea. you'll get a lot further with modafinil instead
Kyler#9100: y'all this 💩 makes no sense- every single time i exit discord and come back- this server isn't in my server list
Kyler#9100: discord making ZERO sense right now- am i going insane?
cfoster0#4356: What link are you using?
cfoster0#4356: I think there was an error that did that for some link out there
Kyler#9100: `https://discord.gg/5aQSEShF`
Kyler#9100: oh sh-
Kyler#9100: can i have another invite link
Kyler#9100: I found it on the colab for a gptneox demo
triggerhappygandi#0001: Mfw I didn't even know what uberman is
Kyler#9100: i swear i'm not going cray cray-
cfoster0#4356: Try https://discord.gg/BK2v3EJ |
Louis#0144: LOL
Louis#0144: O I misread
Kyler#9100: yas thanks!
Kyler#9100: Phew-
Kyler#9100: what was that sh-
Kyler#9100: i swear i was going DEJA VU every single second-
StellaAthena#3530: @Kyler thanks for the heads up. Fixed the link in the notebook
Kyler#9100: like i was going LITERALLY INSANE
Kyler#9100: your welcome stella! :)
Kyler#9100: thank you
asara#0001: tacotron2 and fastspeech2 are both great
asara#0001: hifi-gan should be chained after both as the vocoder
3dprint_the_world#6486: I've done some custom training on tacotron and the results weren't great, but tacotron2 is on my list of things to try
haru#1367: hi
cfoster0#4356: I forsee *significant changes* once even this level of technology becomes a commodity https://youtu.be/DIw4s1kSyCU
bmk#1476: Changes.. in which direction?
Namhar#6909: like in the new bladerunner movie lol
bmk#1476: I regret to inform you that i have been living under a rock and have no clue what that movie's about
bmk#1476: But I'm going to hazard a guess and guess "really horny"
Namhar#6909: pretty much, but that might not do joi (the AI assistant) much justice. Its worth a watch (both original and 2049). |
cfoster0#4356: We'll start shifting value much more aggressively into virtual domains
cfoster0#4356: I've been thinking recently of the ways AI risk might play out when the overwhelming majority of what we value exists purely in virtual form
cfoster0#4356: Anyways, a conversation for another time
spirit-from-germany#1488: This series is imao pretty good regarding ai and digital beings 🙂
spirit-from-germany#1488: https://youtu.be/HU4mwlTUXnc
3dprint_the_world#6486: the dude in the background asking about pubes and stabbing herself is... very creepy
spirit-from-germany#1488: I found this googling: https://youtu.be/mstLuzTw790
spirit-from-germany#1488: that + this could get very interesting...
spirit-from-germany#1488: https://youtu.be/_9aPZH6pyA8
cfoster0#4356: If EAI wanted to get serious about this kind of work, we could probably do *very* cool things in it
cfoster0#4356: But alas we've already got lots of cool projects 😆
spirit-from-germany#1488: hehe
bmk#1476: We really do need to prioritize our stuff
bmk#1476: We have too many ideas and too little time
spirit-from-germany#1488: I would REALLY love to contribute more for projects like that or others here... the problem is, that I'm working full time as a high school teacher, have 2 small kids who constantly want my attention and that I'm not very quick/ routined at coding ... But as my kids get older, AI gets better and Bitcoin gets higher, I 'll get more time to seriously work on stuff like that 🙂 😉
spirit-from-germany#1488: At some point we will have a multi-modal GPT-X that could easily be connected with speech recognition, ML-agents and Unity to create pretty impressive virtual beings
thenightocean#6100: agree 100%
thenightocean#6100: any ai virtual assistant project that will look like Ana de Armas should be the highest priority IMO ♥️
triggerhappygandi#0001: Anything other than catgirls 🙏
triggerhappygandi#0001: We can't unleash that hell upon earth |
andyljones#7746: wooo, cause (intervention?) prioritisation! apropos of nothing, a simple place to start is scoring projects in terms of
> Importance: What is the scale of the problem in the area? If all problems in the area could be solved, how much better would the world be?
>
> Tractability: How solvable is the problem in this area?
>
> Neglectedness: How neglected is the area?
there's some good criticism of this framework in the last para here
https://concepts.effectivealtruism.org/concepts/importance-neglectedness-tractability
CKtalon#7792: Anyone can explain to me what NVlink does and how it affects training?
Suppose I have 2×3090s and an NVlink. I would like to train a deep language model which exceed 24GB just by the number of parameters needed, will NVlink help? I assume that the remaining VRAM is used for the batches used for training?
CKtalon#7792: Likewise if scaled to A6000 on NVIink
Sid#2121: nvlink is just a fast connection between gpus
Sid#2121: so if your training scheme is transferring data between gpus, it will decrease the bottleneck from that step
CKtalon#7792: so will it help in OOM errors?
CKtalon#7792: because the A6000 spec sheet is advertising that it will be "combined 96GB" with NVLink
CKtalon#7792: Ultra-fast GDDR6 memory, scalable up to 96 GB with NVLink, gives data scientists, engineers, and creative professionals the large memory necessary to work with massive datasets and workloads like data science and simulation.
CKtalon#7792: More like I'm questioning this |
kindiana#1016: you really don't want to use vram over nvlink, even if it "works"
CKtalon#7792: so say an A100 card with 40GB vs a 2x24GB 3090 with NVLink). which can train bigger models for the same (small) batch size
CKtalon#7792: yea, i believe it's better if the card just has plenty of ram
CKtalon#7792: but would like to know if what nvidia is saying is marketing bullshit or what
CKtalon#7792: because i know certain models require the model to be duplicated across all cards
kindiana#1016: even if you can theoretically combine 2 3090s to get 48gb, its going to be like 5x slower
CKtalon#7792: assume speed isn't an issue
CKtalon#7792: like in papers, i do see them saying they trained X model with 8 V100s though they don't specify the amount of ram each V100 has (it can be 16 or 32..). It doesn't seem the big models they train can even fit into a single 16gb to me
CKtalon#7792: that's why i'm a little confused
CKtalon#7792: as to how big a model i can train given a particular card or combination of cards with NVLink
kindiana#1016: once you can't fit a model on one gpu you need model parallel methods to train it
CKtalon#7792: hmm, ok. somehow these papers don't seem to mention that.
kindiana#1016: pytorch lightning sharded might be the easiest way to do that from what I hear
kindiana#1016: (never used it, just heard good things)
triggerhappygandi#0001: Thats kinda stretchy claim. 3090 gives ~80% performance of an A100, and both support NVLink. I doubt it would degrade performance that much
kindiana#1016: if you try to use 48gb of vram over nvlink I mean
kindiana#1016: instead of actual data/model parallel
triggerhappygandi#0001: Ah
kindiana#1016: (if that even works)
andyljones#7746: common enough in certain sections of the literature that it's kinda assumed |
CKtalon#7792: maybe i'm getting it completely wrong, but what I assumed is that the batches are split across the GPUs to update the weights of the models that's individually held in each GPU
CKtalon#7792: (this is for neural machine translation)
CKtalon#7792: but the base model can't be split
kindiana#1016: (which nmt models don't fit on one GPU? I've only seen a couple lol)
CKtalon#7792: well a transformer big already takes 16GB
CKtalon#7792: i'm planning to try deeper models
kindiana#1016: how big?
CKtalon#7792: since the literature says that having a deeper encoder gives better results
CKtalon#7792: the normal transformer-big
kindiana#1016: that's not _that_ big
CKtalon#7792: yea, it's not that big
andyljones#7746: look up "model parallelism", think that's the magic google-able phrase you need
CKtalon#7792: it's just "big" based on the original paper
kindiana#1016: shouldn't take 16gb if you do all the normal tricks
CKtalon#7792: i think a 3090's 24GB fits as 12-6 encoder/decoder, but i can't seem to use batch sizes of 4096 for that
CKtalon#7792: my sentence lengths are also longer than the typicals 50-60 tokens
kindiana#1016: you can do microbatching/gradient accumulation
CKtalon#7792: (planning to go up to 2000 tokens. actually)
kindiana#1016: you really only want to do model parallel if you can't even fit bs=4 or something on a single gpu
CKtalon#7792: then there's also the drop in quality because of small bs 😦 |
CKtalon#7792: kinda the reason why i'm wondering if i can just scale hw to make up for it
kindiana#1016: if you do a bunch of forward backward passes and then do a parameter update
kindiana#1016: its the same as if you took a large batch
kindiana#1016: as long as you don't use batchnorm
Louis#0144: Gm nerds
haru#1367: good morning
jrowe#5371: any knowledgeable neural network folks here? I was wondering if theres a term for the idea that weights in a single layer of a network of a given size can be represented as a spline and permutation index
jrowe#5371: the terms I'm using to search with aren't pulling up anything relevant, but I think its probably a well explored idea?
Louis#0144: https://www.cnbc.com/2021/01/08/openai-shows-off-dall-e-image-generator-after-gpt-3.html
Louis#0144: The title of this article
Louis#0144: Is so fucking infuriating
Louis#0144: Like OAI has consistently tried to distance themselves from Elon so much
Louis#0144: LMAO
bmk#1476: Lol
bmk#1476: I already know what headlines about Eleuther are going to be
thenightocean#6100: oh no
bmk#1476: "why everyone is talking about a discord server occasionally frequented by someone followed by elon musk in twitter"
bmk#1476: [insert picture of elon musk]
Sewing#2678: talking about infuriation, I hate it when magazines, shows and people foreign to our field use the term AI
Sewing#2678: there is no such thing as AI |
Sewing#2678: just numerical optizimation, massive Compute and large data collections
andyljones#7746: 🤨
Sewing#2678: can we knock this into their head ?
andyljones#7746: *you're* just numerical optimization, massive compute and large data collection
Sewing#2678: clever
andyljones#7746: no *true* intelligence woul-
bmk#1476: small brain: use AI for everything
big brain: AI is just numerical optimization
galaxy brain: use AI for everything
Daj#7482: Small brain: Humans are intelligent
Big Brain: Computers are intelligent
Galaxy brain: Thermostats are intelligent
Louis#0144: i got a smart fridge to beat me at chess once
Louis#0144: does that count?
andyljones#7746: dunno about you folks, but i am definitely a p-zombie
Daj#7482: Can we get a :chalmers: emote?
andyljones#7746: damn, did not know chalmers was australian. australia's got a good batting average on philosophers
andyljones#7746: e: and glad to see google agrees with me on who's important https://cdn.discordapp.com/attachments/729741769738158194/797146473082519642/unknown.png
Dromarion#3383: The solution obviously is to have our own Elon Musk. It's the only way to compete
gwern#1782: "Mom can we have elon musk" "no, we have elon musk at home" at home [connor] |
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/797150354759221308/4t2pks.png
Louis#0144: is that connor
Louis#0144: @Daj ur moustache is v impressive
Daj#7482: Thank you
Daj#7482: I have trimmed it since then so it looks slightly more civilized
bmk#1476: also the weird crop is because i stole it from the thumbnail of a youtube video
bmk#1476: if you have a better cropped version pls send
Daj#7482: I never have good pictures of myself lol
Daj#7482: I usually copy from my own linkedin profile
Louis#0144: what youtube video
Daj#7482: lmao
Daj#7482: Probably one of the podcasts I was on
Daj#7482: I do have a collection of meme pictures of me me and friends recently made
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/797152337101717554/unknown.png
bmk#1476: :ultrazucc:
Daj#7482: of me (that) me and friends
Daj#7482: Does this help? https://cdn.discordapp.com/attachments/729741769738158194/797152484283383868/pic.jpeg
Daj#7482: lmao
bmk#1476: it will do
Daj#7482: I would post the resulting movie poster we photoshopped but I'm not sure the others would consent |
bmk#1476: lmao
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/797153062032113735/4t2rml.png
Daj#7482: eh
Daj#7482: nah the face closeup was better
Louis#0144: the pink shoes
Louis#0144: 🔥
Daj#7482: My sister was very helpful in doing the photoshoot haha
triggerhappygandi#0001: Nice mustache@Daj
3dprint_the_world#6486: uhm, what would you rather call it then.
3dprint_the_world#6486: AI is a pretty useful shorthand term.
triggerhappygandi#0001: I prefer ML
3dprint_the_world#6486: two separate things though.
3dprint_the_world#6486: people have been using the word AI forever, I don't see what the problem is with it. The only people who have a problem with it are marketing people who think AI is composed of nerds and want nothing to do with it
3dprint_the_world#6486: otherwise it's a great term
triggerhappygandi#0001: I don't have a problem, but ML is more accurate and just as broad. The I in AI is still far-fetched
3dprint_the_world#6486: ML is *too* broad. That's the problem.
3dprint_the_world#6486: it covers e.g. linear regression
3dprint_the_world#6486: and essentially all of statistics (if done on a machine, lol)
3dprint_the_world#6486: anyway terms and abbreviations aren't just a literal sum of their parts.
3dprint_the_world#6486: they have meaning unto their own. |
triggerhappygandi#0001: I refer to this https://cdn.discordapp.com/attachments/729741769738158194/797177015144284170/unknown.jpeg
Dromarion#3383: *Electronic Thonk*
pdillis#2914: I'd say it's the other way around: marketing people won and they pushed AI so that they could sell their (mostly not AI) tech to companies
triggerhappygandi#0001: AI is the superset of everything
3dprint_the_world#6486: it's a cycle. first they push it then distance themselves when it gets too toxic
3dprint_the_world#6486: I'm old enough to remember the 2000's when absolutely no one wanted anything to do with AI
triggerhappygandi#0001: In any case, calling OpenAI "Elon Musk's company" is lazy
triggerhappygandi#0001: I'm old enough to know people old enough to remember that :berk:
3dprint_the_world#6486: ironically, that's when 'ML' as a term became popular
3dprint_the_world#6486: they didn't want to use AI so they invented ML
3dprint_the_world#6486: to make it sound more professional and serious
triggerhappygandi#0001: So a rebranding problem
triggerhappygandi#0001: Let's just call it deep learning simply
gwern#1782: "Mom, can we have elon musk's AI deep learning company [neurallink logo]" MOM: "No, we have musk AI at home" AT HOME: [openai swirly logo]
Dromarion#3383: Shallow learning
triggerhappygandi#0001: :mesh:
triggerhappygandi#0001: I remember when Deepmind came up with Alphazero and people were like "Google's AI beats stockfish while Elon Musk's AI solves Dota 2"
3dprint_the_world#6486: If 'AI' is good enough for John McCarthy, Marcus Hutter, and Judea Pearl, it's good enough for me
triggerhappygandi#0001: ~~who are they I literally only know the names~~
3dprint_the_world#6486: Marcus Hutter formalized a model of an intelligent agent and studied the limits of what such agents can do. Judea Pearl pioneered a probabilistic theory of causality. John McCarthy is your God. |
bmk#1476: beat you to the punch months ago on this one https://twitter.com/nabla_theta/status/1315007242240839681
bmk#1476: oh wait
bmk#1476: *neuralink*?
bmk#1476: *sigh*
gwern#1782: yeah, to make it musk-specific
bmk#1476: well, openai is the other "elon musk's AI company"
bmk#1476: so it checks out
cfoster0#4356: Don't do it, don't do it 😉
3dprint_the_world#6486: aw so hard to resist
3dprint_the_world#6486: but ok
triggerhappygandi#0001: So is Deepmind. The mind is only like 10cm deep smh
triggerhappygandi#0001: More like shallow mind.
triggerhappygandi#0001: :3berk:
3dprint_the_world#6486: the mind is not 10cm deep
gwern#1782: [KOOLAID MAN BURSTS THROUGH WINDOW] "more like ClosedAI??? OH YEAH" [everyone stares in horror]
triggerhappygandi#0001: @3dprint_the_world *brain but don't spoil the joke
triggerhappygandi#0001: Koolaid man but naked
Aran Komatsuzaki#5714: EleutherAI constantly mocks OpenAI but always wants to be recognized by them.
bmk#1476: well, it's mostly the newbies mocking openai
bmk#1476: everyone else has mocked openai enough for a lifetime already |
triggerhappygandi#0001: Fwiw they do make me go "cool"
gwern#1782: senpai, notice us!
Aran Komatsuzaki#5714: we need an emoji for openai
StellaAthena#3530: Senpai notice me
StellaAthena#3530: Oh @gwern already said that lol
Kyler#9100: LMAOOOOO
Bedebao#4842: I recall one project used CHUNGUS as its acronym.
Bedebao#4842: And now I feel obligated to post the classic https://www.youtube.com/watch?v=BYN-OEcD-3w
Sid#2121: brb, making goatse into an emote
Bedebao#4842: Goatse, now that's a name I haven't heard in a long time.
chilli#5665: Is there anybody who's used PyTorch XLA here? I'm curious about what your experiences have been like. I've basically heard from everybody that it's a poor experience, but I'm curious why.
3dprint_the_world#6486: "Goatse? I don't go by that name anymore."
Aran Komatsuzaki#5714: only thing i know is everyone here seems to hate it
3dprint_the_world#6486: @chilli I'm also curious.
chilli#5665: yeah but I'm wondering why
chilli#5665: lol
nz#9710: I'm pretty sure shawn used it quite extensively, not sure if he used it lately though
zphang#7252: it wasn't great, but it's supposedly getting better
chilli#5665: Yeah, but why wasn’t it great?
zphang#7252: fairseq was working closely with the pytorch/tpu team and I was hacking on their code |
zphang#7252: needed to create many VMs to keep the TPUs fed with data (supposedly this has improved?), and random TPU connection time-outs (can't remember the exact error)
zphang#7252: it was unstable enough that I simply switched to using GPUs
chilli#5665: hmm, so it was mostly on the TPU side/the direct ssh connection stuff?
zphang#7252: I directly SSHing to them; pytorch-xla was handling all the TPU interaction
zphang#7252: (I might also just have been bad at using them)
chilli#5665: err, by "direct ssh connection" I mean the recent Jax stuff
chilli#5665: which uses a different VM setup
nz#9710: yea the recent jax stuff was announced at neurips
IKEA#9631: So uh I thought that the 400M image dataset OAI used for DALLE was nuts, but apparently thats about how many pictures get uploaded on facebook alone everyday
And for the entire internet it's like an order of magnitude higher
jrowe#5371: much pictures, such big.
gwern#1782: yeah, but those fb uploads don't get good captions
gwern#1782: every kind or quality step in metadata costs you an order of magnitude or two available data. that's why self-supervised learning is so important. if you can just learn usefully from piles of raw audio/image/video/text, there is *all the data in the world* you could possibly need
gwern#1782: if you need pixel-level instance semantic segmentation with depth maps - you're screwed
gwern#1782: they threw out 9/10ths of yfcc100m to get decent data to learn on, and flickr is relatively high quality
gwern#1782: (this is one reason I'm so optimistic about danbooru2020 uses - the metadata is absurdly high quality compared to pretty much any other dataset in existence. it may be only n=4m, but judge it by its size, do you?)
gwern#1782: (so it's more about tying up connections & overhead than actually trying to saturate connections?)
3dprint_the_world#6486: the problem is never lack of data. The problem is quality of data.
3dprint_the_world#6486: common crawl goes from 45TB to just 500GB once you do just a little bit of cleanup
jrowe#5371: can microsofts DeBERTa be useful to GPT-NEO or is it operating in a different domain? |
jrowe#5371: to make training data easier or some such? From what I was reading I was thinking you could create a hierarchical database from entities parsed with that software in one pass, so if whatever structure you pulled from DeBERTa maps to GPT you might get better common sense / semantic structure
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/797234602908450886/nltk2.py
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/797234608898572328/all_lite.yml
Deleted User#0000: I updated my AI lol
Deleted User#0000: give it a try please 🙂
Deleted User#0000: id like feedback
cfoster0#4356: Love your enthusiasm
cfoster0#4356: However
cfoster0#4356: #general is probably not the best place to ask people to try out your AI
jrowe#5371: check out https://en.wikipedia.org/wiki/AIML
jrowe#5371: its what you're doing, but years worth of input/output responses in the chatbot format
jrowe#5371: theres a bunch of different collections of responses and behaviors comprising personalities, and it'll show you the pros and cons of that sort of conversational software model
jrowe#5371: its one of a dozen or so rabbit holes every AI enthusiast goes down, until they arrive at one of various camps, which almost all divide into either A.) connectionist models of intelligence or B.) Magical thinkers and crazy people
jrowe#5371: check out https://en.wikipedia.org/wiki/MegaHAL - for the longest time chatbots were random, or variations of the stimulus/response AIML/ALICE/ELIZA model
jrowe#5371: MegaHAL was probably the first glimmer that large models based on parsing natural language would maybe work
jrowe#5371: the trouble with megahal and others like it was combinatorial explosion in the algorithms used, and insufficient computing hardware at the time.
jrowe#5371: anyway 😛 good luck with your bot!
Deleted User#0000: @jrowe awesome thanks!
Sparkette#4342: Is this legit or a scam? https://gpt-4.co/about.html
Sparkette#4342: I feel like this would be a pretty big deal if it was actually what it claimed to be |
Sparkette#4342: which is why I doubt it is 😛
kindiana#1016: really looks like a scam :thonk:
Sparkette#4342: if I had to guess, if I signed up, it would just be GPT-2
kindiana#1016: idk what their endgame would be tho
Sparkette#4342: actually they have the same beta application deal that openai is doing, so it's not like I'd just sign up and get something right away
Sparkette#4342: not saying that makes it any more or less likely to be legit though
Sparkette#4342: though the form doesn't ask for anything more sensitive than a name and email address
bmk#1476: prior is bs
Sparkette#4342: prior?
bmk#1476: my prior is that this is bs
Sparkette#4342: I've never heard that term before, "my prior"
Sparkette#4342: does it mean your best guess?
bmk#1476: lol this absolutely smells like bs https://cdn.discordapp.com/attachments/729741769738158194/797334646072803328/unknown.png
bmk#1476: i just mean i think its bs lol
Sparkette#4342: didn't notice the "4 more programming languages" line 😛
Sparkette#4342: I mean I guess technically there's a set number of languages it knows but it's more of a spectrum of how well
bmk#1476: given the poor english, extremely questionable mental model of scaling, and just general crankiness id be willing to bet a large amount of money that this is not legit
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/797335312085811200/unknown.png
bmk#1476: lol https://cdn.discordapp.com/attachments/729741769738158194/797335514277347329/unknown.png
bmk#1476: lol, they even mention us! https://cdn.discordapp.com/attachments/729741769738158194/797335613308534814/unknown.png |
bmk#1476: im debating whether i should join their discord
bmk#1476: im like 90% sure what this person has is some kind of either tiny model trained only on the domain they care about and thus has no generalization capability whatsoever, or is some other kind of weird thing hacked together that bears no resemblance to any of the gpt models whatsoever except insofar as both are capable of doing things
paws#3311: think they just few shot learned the "4 more languages" using the gpt3 api
3dprint_the_world#6486: Check out rasa https://github.com/RasaHQ/rasa
Deleted User#0000: Will do thanks
3dprint_the_world#6486: 100% a scam. Not a doubt.
3dprint_the_world#6486: Sorry but you don't get away with the "not going to give you any code or technical details until you pay and fill out an application form" thing unless you're actually a legit org like OpenAI.
3dprint_the_world#6486: The name 'sourfruit' is fitting too
3dprint_the_world#6486: Why would they call it gpt-4 if it *wasn't* a scam
triggerhappygandi#0001: They already have a pricing
3dprint_the_world#6486: yeah. it's a money grab.
triggerhappygandi#0001: These things will only populate in the future. Can't wait for spurned enthusiasts trying their hand at GPT-5 lol
triggerhappygandi#0001: "It creates videos, talks to you, understands your emotional needs, and with our Premium ™️ Pro ™️ Plan, also holds your hand while you feel sad about being scammed!"
bmk#1476: See what if you glue the other end of the thing onto mturk
ericxtang | Livepeer#9262: @asara moving to the general channel because this discussion isn't so much related to the pile.
Do you have any suggestions on "existing ASR datasets" besides Common Voice? ASR is a little tricky because the audio type is so important - conversational vs. presentation, accents, languages, etc.
triggerhappygandi#0001: THATS ILLEGAL
asara#0001: well VCTK and LibriTTS and LJSpeech and Vox* are all very big and common and open
triggerhappygandi#0001: Don't give any ideas lmao |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.