frank cilantro#9153: and transitions w actions and states
dopa#3178: yes
frank cilantro#9153: idk i feel like this is kinda wishywashy
frank cilantro#9153: idk if its checkable in poly time haha
3dprint_the_world#6486: Yeah I always love trolling chess players by saying "Solving chess is O(1), what's the problem?"
StellaAthena#3530: @dopa Complexity Theory studies how the hardness of a problem **scales**. Any game that doesn't scale (for example, chess) is solvable in constant time. When someone speaks of "chess" being PSPACE-C they're not actually talking about chess. They're talking about a generalization of chess to an n x n board. With this generalization, we can now ask "how does the complexity of chess scale with respect to n?"
StellaAthena#3530: The decision problem that is PSPACE-C is "given a chess position on a square board **of arbitrary size**, determine if white has a forced win with optimal play."
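Written out as a formal language, the generalized problem is roughly the following (standard framing of the hardness results; the exact statement varies by paper):
```latex
% Generalized chess as a decision problem:
\textsc{GeneralizedChess} = \{\, \langle B \rangle \;:\; B \text{ is a legal position on an }
n \times n \text{ board from which White has a forced win} \,\}
% Fixing n = 8 leaves only finitely many positions, so ordinary chess is decidable
% in O(1) time -- which is why the hardness statement needs the generalization.
```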
StellaAthena#3530: This is not a problem we train AIs to solve.
dopa#3178: isn't PSPACE-C even harder than NP? not sure if this is the right statement
StellaAthena#3530: Similarly, it doesn't matter if you can solve TSP for all networks of size less than 100000. If you can't solve the problem for **all finite networks** it means nothing from a complexity theory standpoint
frank cilantro#9153: actually @dopa starcraft is np-hard i found a preprint so it must be tru https://arxiv.org/abs/1201.4995v4 😆
frank cilantro#9153: jk its probably not defined as solving starcraft game but some other thing u can do in starcraft
StellaAthena#3530: The paper that proves that "starcraft" is NP-hard similarly requires generalizing the game to arbitrarily structured boards, including arbitrarily large ones. AFAIK this is not possible in game, and so real-world starcraft is not NP-hard
https://arxiv.org/abs/1201.4995
dopa#3178: I am not sure why it is not possible, because you can have infinitely many different units, board sizes, economy processes
dopa#3178: to make it specific: SC is a multi-agent game
dopa#3178: eg. every unit in SC is an independent entity
StellaAthena#3530: As far as I know, the only paper to prove that a game *as it is actually played* is harder than NP is this paper
https://arxiv.org/abs/1904.09828
dopa#3178: so some emergent collective behavior needs to be achieved
StellaAthena#3530: It's possible that some other proof shows that, but the argument given in the paper I linked to requires the map to be unbounded in size and StarCraft has a hard limit of 256x256 according to the internet.
dopa#3178: The ultimate question is, what is the complexity of finding a winning strategy for a particular player, with no assumptions about joint observations or knowledge of other players’ utilities. Since a special case of this is the DEC-POMDP, where finding an optimal (joint, cooperative) policy is known to be NEXP-hard [1], this problem cannot be any easier than NEXP.
dopa#3178: https://papers.nips.cc/paper/2007/file/3435c378bb76d4357324dd7e69f3cd18-Paper.pdf
StellaAthena#3530: If you look at the paper that is cited for that claim, you'll see that the thing that is going to infinity is the **number of players**. Can you play a game of starcraft with 1,000,000 different sides? If the answer is no, then this doesn't help.
https://arxiv.org/abs/1301.3836
StellaAthena#3530: @dopa am I making any sense?
dopa#3178: I need a moment to process this, I clearly missing something in understanding complexity classes
dopa#3178: POSG is a generalization of DEC-POMDP as far as I understand
dopa#3178: game becomes more complicated when there is competition and cooperation dynamics
dopa#3178: what does NEXP^NP mean ?
dopa#3178: > Can you play a game of starcraft with 1,000,000 different sides? If the answer is no, then this doesn't help.
you can simulate an urban city as a game, similar to SC dynamics to a large extent, arguably.
StellaAthena#3530: I recommend chapter 1 of this book: http://index-of.co.uk/Theory-of-Computation/Computational%20Complexity%20A%20Conceptual%20Perspective%20-%20Oded%20Goldreich.pdf
StellaAthena#3530: It does a good job of laying out the concepts you need for complexity theory
dopa#3178: I have no other option but to study now
dopa#3178: btw. if SC game is multi-agent then you can scale to "infinity"
dopa#3178: I need to be proficient in POSG's complexity that is for sure, or will risk blinders 🙂
StellaAthena#3530: There are lots of multi-agent games that don't scale to infinitely many agents
dopa#3178: is there a simple game that has cooperation-competition dynamics in a multi-agent setting ?
dopa#3178: willing and able to write code, a lot, java/C++/python don't care, like my life depends on it
dopa#3178: open cooperation-competition dynamics, where agents don't explicitly have to cooperate or compete, it is their decision
StellaAthena#3530: Many board games are like that
StellaAthena#3530: Settlers of Catan and Risk, to name two
dopa#3178: both of them do not scale ?
StellaAthena#3530: correct
StellaAthena#3530: Do you want an example that does?
dopa#3178: yes, please
dopa#3178: life? 🙂
dopa#3178: on third thought, soccer game ?
StellaAthena#3530: Magic: the Gathering, Fluxx, Werewolf
dopa#3178: werewolf seems like a lot of fun
dopa#3178: followup question, these games are discrete-time and discrete-event settings
dopa#3178: would you agree that continuous-time discrete-event games are more complicated ?
dopa#3178: for example, soccer can be defined as a game with continuous time and discrete events
dopa#3178: I am not sure if this is totally correct to think in such terms, yet
3dprint_the_world#6486: more complicated, not necessarily
3dprint_the_world#6486: it all depends on if there are bifurcation points
dopa#3178: thank you, this discord channel is my #1 favorite
Louis#0144: Stela has lived her whole life without eating any of them.
So when she eats them, it’s like an extreme form of starvation.
It’s very rare for her to eat anything other than peanuts.
Stela loves peanuts! She goes to the market every day, and is best friends with the owner.
Louis#0144: @StellaAthena
Louis#0144: I prompted it with stella loves peanuts
Louis#0144: it can write plot points both before and after that line
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/788162840816386058/Screen_Shot_2020-12-14_at_4.57.41_PM.png
Louis#0144: FUCK
Louis#0144: MY SIDES
Louis#0144: HAHAHAHA
StellaAthena#3530: OMG
Louis#0144: it spelt ur name wrong
Louis#0144: v disappointing
StellaAthena#3530: “Stel” is a nickname some people call me
StellaAthena#3530: Maybe it overheard that and got confused
Louis#0144: lmao
Louis#0144: you know u gotta make that subreddit now
Louis#0144: I am
Louis#0144: so
Louis#0144: fucking tired omg |
Louis#0144: Ive been figuring out how to evaluate this model for the last four days LMAO
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/788175049080176660/Screen_Shot_2020-12-14_at_5.46.00_PM.png
Louis#0144: that
Louis#0144: is a roblox youtube channel
Louis#0144: that it decided to link to me in the middle of a story about dragons
Louis#0144: Starting a paper on an effective perplexity for storytelling LMs
Louis#0144: Calling it fabula entropy index
Louis#0144: What a badass name
Louis#0144: Holy shit
bmk#1476: Fabulaus
Louis#0144: Effectively measures how clearly an LM can articulate the information stored in a time series of arbitrary sized knowledge graphs
Louis#0144: So BART scores a 19 for instance
Louis#0144: GPT3 only scores an 18.2
Louis#0144: GPT2 is about at 20 or so
bmk#1476: what if you use it to RL tune GPT3
Louis#0144: oh true
Louis#0144: Hm
Louis#0144: Maybe
Louis#0144: Rn it’s human eval based but we’re gonna automate it soon
bmk#1476: do the whole preference learning thing but the signal is your thing
Louis#0144: GPT3 is the lowest
Louis#0144: Btw
Louis#0144: By far
Louis#0144: But humans score a 1.0
Louis#0144: lol
bmk#1476: youre talking about tuned gpt2 right
Louis#0144: GPT3
bmk#1476: this
Louis#0144: Like API access
Louis#0144: Oh
Louis#0144: Yes
bmk#1476: so if you tune gpt3 maybe itll win by a lot
Louis#0144: It is tuned
Louis#0144: kinda
Louis#0144: As tuned as I can make it
Louis#0144: I’m not sure how this API works
Louis#0144: Can I finetune it directly?
Louis#0144: I guess you’re right I guess this is zero shot performance
Louis#0144: Kinda cool ig
Louis#0144: RT tuned does worse than GPT3 not tuned
bmk#1476: u can tune it if you beg oa i think
Louis#0144: But if you leave the realm of LMs, my symbolic model scores a toasty 8
Louis#0144: 😉
3dprint_the_world#6486: don't tell that to bmk
bmk#1476: i don't think bmk is around today
Louis#0144: But yeah so I give the LM a knowledge graph
Louis#0144: And try to get it to recreate it by answering true false questions
Louis#0144: Humans score perfectly on a sample size about 40
Louis#0144: So I think I need to make it harder so language models can eventually surpass humans
chirp#4545: So I’ve noticed a lot of my favorite papers have rather large teams behind them
GPT-3, AlphaFold (https://news.ycombinator.com/item?id=25399082), DeepMind’s Imitative Intelligence thing, and also a new OpenAI robotics paper (https://slideslive.com/38941343/asymmetric-selfplay-for-automatic-goal-discovery-in-robotic-manipulation?ref=account-folder-62083-folders)
I wonder if this will continue to be a trend
bmk#1476: hmm, so if there's a correlation between author count and whether you like it, you gotta like pile then
dopa#3178: LOL we were talking about SC computational complexity, when Tetris is NP-hard
dopa#3178: https://tenor.com/view/sad-crying-spiderman-cry-face-ugly-face-gif-5701170
dopa#3178: chess is EXP-hard.
StellaAthena#3530: In mathematics, regardless of the number of authors, they’re listed in alphabetical order and there's no announcement of contributions
MasterScrat#6910: could you elaborate? im curious about this
MasterScrat#6910: how does it connect to storytelling? and where does the info in the knowledge graph come from?
Louis#0144: So a story is just text that describes the evolution of knowledge
Louis#0144: You can extract this knowledge from text
Louis#0144: It’s common practice in a lot of knowledge heavy like emergent storytelling stuff
Louis#0144: Think personal narratives
Louis#0144: Like twitter
Louis#0144: You can then feed this KG to an LM and ask T/F about the text (not the KG)
Louis#0144: It should be able to infer the text from the KG
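A minimal sketch of the kind of evaluation loop being described (hypothetical code, not the actual "fabula entropy index" setup; `query_lm`, the triple format, and the question list are all assumptions):
```python
# Hypothetical sketch of a KG -> true/false probing evaluation for a storytelling LM.

def kg_to_context(triples):
    """Serialize (subject, relation, object) triples into a plain-text context."""
    return "\n".join(f"{s} {r} {o}." for s, r, o in triples)

def tf_accuracy(query_lm, triples, questions):
    """Ask the LM true/false questions about the text implied by the KG.

    query_lm:  callable taking a prompt string and returning the model's answer text
    questions: list of (question_text, gold_bool) pairs
    Returns the fraction of questions answered correctly.
    """
    context = kg_to_context(triples)
    correct = 0
    for question, gold in questions:
        prompt = f"{context}\nQuestion: {question}\nAnswer (true or false):"
        answer = query_lm(prompt).strip().lower().startswith("true")
        correct += int(answer == gold)
    return correct / len(questions)
```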
gwern#1782: I still note that "piled higher and deeper" is the perfect pun for us
Louis#0144: And reason about both
olives ❀#2305: stela likes peanuts
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/788450376399585310/1454430887-20160202.png
Daj#7482: SMBC sets unrealistic standards for people being fun, why don't my girlfriends ever yell at me about people being bees in disguise
asparagui#6391: it's reptilians all the way down
Bedebao#4842: Tell me, has AMD developed something similar to Nvidia's CUDA or not?
Bedebao#4842: Or are nvidia GPUs still the goto for running some ML?
dopa#3178: OpenCL is AMD alternative
dopa#3178: but CUDA outperforms openCL as far as I understand, we'll see how this plays out in the future with AMD's new RDNA architecture, me not expert
dopa#3178: just to be clear it is tricky to compare performance
dopa#3178: https://wiki.tiker.net/CudaVsOpenCL/
dopa#3178: https://missinglink.ai/guides/tensorflow/tensorflow-support-opencl/
3dprint_the_world#6486: We had a job interview for a GPU dev role and asked the candidate if he would hypothetically be ok with writing OpenCL.
He chuckled.
dopa#3178: my answer would be: exactly what, hypothetically, do you want to write in OpenCL 🙂
3dprint_the_world#6486: we use CUDA exclusively. We were just considering the possibility of porting some stuff to OpenCL.
3dprint_the_world#6486: also that's the kind of thing you don't actually say in a job interview. But generally speaking it's basic ML stuff.
dopa#3178: I might be wrong, but my impression is that openCL has some advantage in platform support and things related to fpga; while the libs are similar, openCL is more low-level dev driven
3dprint_the_world#6486: most devs I know largely consider OpenCL to be a joke, at least for ML purposes.
3dprint_the_world#6486: performance is poor and library functionality is lackluster.
dopa#3178: for ml like using tensor flow, etc ?
3dprint_the_world#6486: everyone I know uses either nvidia gpus or TPUs
3dprint_the_world#6486: yeah well doesn't have to be tensorflow.
dopa#3178: or ML building experement with eugin/openCL/fpga ?
3dprint_the_world#6486: "ML building experement"?
3dprint_the_world#6486: surely you aren't suggesting there are people out there seriously using FPGAs for actual ML acceleration
dopa#3178: in labs yes
3dprint_the_world#6486: ok.
dopa#3178: for spiking neural networks
3dprint_the_world#6486: but that's different though
dopa#3178: I think 10 years or so ago, there was an experiment where a chunk of rat cortex was simulated on a robot using an fpga board
3dprint_the_world#6486: they are just using FPGAs to prototype ideas with the goal that they would later become ASICs
3dprint_the_world#6486: oh, right, so *not* ML at all, lol
dopa#3178: this is the kind of thing where openCL's power is
3dprint_the_world#6486: but rather brain simulation
dopa#3178: ML like a brain 🙂
dopa#3178: I see it through level of abstraction
bmk#1476: ML is to a brain as an airplane is to a bird
dopa#3178: well both fly 🙂
bmk#1476: sure, but a veterinarian is gonna have a heck of a time fixing an airplane
cfoster0#4356: > sure, but a veterinarian is gonna have a heck of a time fixing an airplane
@bmk I spit my juice out at this
bmk#1476: youre welcome
Daj#7482: Or the other way around: Have you tried turning the patient off and back on again?
3dprint_the_world#6486: I would replace airplane with helicopter
dopa#3178: putting patient in coma is treatment
3dprint_the_world#6486: they both fly, but using mostly different lift mechanisms 😁
dopa#3178: brain for sure learns differently than ML
dopa#3178: I am not arguing against that, but some recurrent neural net architectures were taken directly from insect brain scans
dopa#3178: and they outperformed existing models
bmk#1476: Does anyone actually use anything like that
dopa#3178: in robotics, I believe so
bmk#1476: sounds like a gimmick tbh
3dprint_the_world#6486: paper?
dopa#3178: moment
Daj#7482: Friston said he was "unsurprised" that backprop and the brain are related
Daj#7482: Brain is backprop boys
Daj#7482: rip
3dprint_the_world#6486: oh god, Friston
3dprint_the_world#6486: the poster child of the degradation of academia
bmk#1476: duke it out
3dprint_the_world#6486: nah
VonChair#2222: @bmk I have some image training data that is not too easy to locate now. I'm thinking I'm going to post it. Care to recommend someone to take a look at it with me first?
dopa#3178: https://www.sciencedaily.com/releases/2018/10/181025142010.htm
dopa#3178: this is not the exact one I had in mind, but close
dopa#3178: DARPA has huge effort in this area
3dprint_the_world#6486: > Despite the simplicity of their visual system, fruit flies are able to reliably distinguish between individuals based on sight alone.
That had me doing a "*whaaaaaaa?*" until I realized they're talking about *fruit fly* individuals, not humans
dopa#3178: there are better papers for this
dopa#3178: well even resnet or some other architecture mimics human brain structure too
bmk#1476: what data is this and what help do you need with it?
Daj#7482: Man, no offense, but your takes on academia seem really bad lol
3dprint_the_world#6486: links please
dopa#3178: hold, I need to dig through zotero it was like 5 years ago
dopa#3178: more like 7 lol
3dprint_the_world#6486: oh I thought you're talking about recent papers
dopa#3178: whats wrong with old papers 🙂
3dprint_the_world#6486: ok, I'm willing to entertain that. Change my mind.
Daj#7482: > nah
Daj#7482: lol
Daj#7482: Same thing we had before with e.g. Wolfram
Daj#7482: Maybe "are bad" is too harsh
Daj#7482: "Do not make sense to me"/"seem unfairly dismissive of non-authorative work" may be more accurate
Daj#7482: Like, "the poster child of the degradation of academia", really?
3dprint_the_world#6486: Wolfram makes perfect sense to me. I actually use Mathematica a lot and love it.
I was specifically addressing his physics theory which, as someone who's spent a lot of time studying physics shit, I think is mostly hype and self-promotion.
Daj#7482: Have you seen, I dunno, social sciences, humanities, biology, psychology...
3dprint_the_world#6486: And I'm happy to talk about Wolfram's physics stuff in more detail.
3dprint_the_world#6486: Actually, *way* more detail, if you want.
Louis#0144: If it does not have a pulse then it is dead.
The heart rate of an animal tells them if it is alive or dead.
There is no such thing as absolute death.
He is able to tell the difference between something being alive and something being dead, so when he looks at the house, he feels like he’s seeing things that aren’t there.
This makes him feel uncomfortable because he doesn’t want to be in that situation.
It’s similar to how people can see ghosts or monsters from inside their head but they don’t know what those things are.
Hansel's hand still trembles as he pushes open the twice-cooked door.
The last time he saw the house he was glancing back over his shoulder as he and his sister fled into the trees.
3dprint_the_world#6486: 😁
VonChair#2222: @gwern Might you be able to help me with this set of face data?
Daj#7482: Singling out Friston as "the poster child of the degradation of academia" is just so uncharitable it's crazy to me
Daj#7482: Yea you kept going on and on and on about something about math and I kept trying to point away from it
Daj#7482: You're way, way too aggressive in your condemnations imo
Daj#7482: I spent a week studying Friston's stuff and even interviewed the guy, he's a polymath and a gentleman, he doesn't deserve this kinda shit from online randos
3dprint_the_world#6486: I'm not saying he's not a nice guy.
3dprint_the_world#6486: You can be a nice guy and also a poster child for the degradation of academia.
Daj#7482: He's just the "the poster child of the degradation of academia"
3dprint_the_world#6486: Yes.
Daj#7482: "Be kind, failing that, bring evidence"
Daj#7482: He's the most highly cited neuroscientist alive (or 2nd)
3dprint_the_world#6486: Which makes it all the worse.
Sid#2121: as someone who is a complete mushbrain i'd love to know what's wrong with friston in more words @3dprint_the_world
Daj#7482: Be kind, back up your claims, or stop trolling
Daj#7482: Last time same thing, you fling a ton of shit at Wolfram but don't back it up
Daj#7482: I've read Friston's theories in great detail and I think they're in the top 10% at least of neuro I've read
3dprint_the_world#6486: do you want to talk about his physics stuff? more than happy to.
Daj#7482: If Friston is _bad_ neuro then the average neuro paper must blow me the fuck away
Daj#7482: Go ahead, either back up why Friston is "the poster child of the degradation of academia" in excruciating detail, or don't say shit like that
3dprint_the_world#6486: I was referring to Wolfram
3dprint_the_world#6486: but happy to talk about Friston too
Daj#7482: We're talking about Friston now, don't change the topic
3dprint_the_world#6486: ?
3dprint_the_world#6486: you brought up Wolfram...
Daj#7482: Go ahead then, if you think this is valuable to discuss
3dprint_the_world#6486: anyway
Daj#7482: I brought up Wolfram because it was the same type of scenario, I don't like people being unkind and then not backing it up
Daj#7482: community norms matter, we are not reddit
3dprint_the_world#6486: But this is what happened last time too, I wanted to talk about the specifics of Wolfram's theory but you kept avoiding it.
3dprint_the_world#6486: but anyway, not important.
3dprint_the_world#6486: let's talk Friston.
3dprint_the_world#6486: Or actually, I'm happy to just shut up and you can explain the Free Energy principle to me (or link me to something that actually explains it). Then I'll read it and we can discuss.
3dprint_the_world#6486: Totally happy to do that.
3dprint_the_world#6486: And afterwards I'll go into precise detail why I said what I said.
Daj#7482: Well if you know it so well that you know confidently that Friston is "the poster child of the degradation of academia", this should be easy for you
3dprint_the_world#6486: and if you have something else to do and don't have time, that's cool too.
Daj#7482: You can watch the interview I did with him
Daj#7482: It was quite enlightening
StellaAthena#3530: Or maybe a basic amount of humility means that he’s interested in what you think is a good summary?
Daj#7482: Oh, so if someone else makes a statement I think is bad, and I say "please back it up", I should be the one that explains it? Yes, good communication norms
Daj#7482: But sure
Daj#7482: The Free Energy Principle is a pretty general concept that any non-equilibrium steady state system (e.g. a cell) can be described as performing a kind of variational bayesian inference
Daj#7482: The specific kind used is called "minimization of free energy", which sets an upper bound on the accuracy of the model
Daj#7482: It's related to the bayesian brain and other hypothesis about how the brain works, but is more general
3dprint_the_world#6486: see, already things are falling apart. There's no such thing as non-steady state equilibrium. Equilibrium is by definition steady-state.
3dprint_the_world#6486: words have meanings.
Daj#7482: Oh yeah, sorry, I misquoted that
Daj#7482: Wait let me get up the actual terminology
Daj#7482: There we go, just mixed up the order of the two words
Daj#7482: So want me to keep explaining or can we stop acting like children?
3dprint_the_world#6486: yes please go ahead
Daj#7482: The most interesting idea to ML revolves around the tradeoff between minimzing the free energy while maximizing the free energy of the policy, conditioned on the model parameters
3dprint_the_world#6486: sorry what's the right order then
Daj#7482: non-equilibrium steady state, I edited the post
3dprint_the_world#6486: ok
Daj#7482: He says that any such system that exists over a longer period of time can be described with a markov blanket (inner states that are independent of outer states when conditioned on a "blanket" of sensory states), and can be described as performing "inference" to maximize the "evidence" of its own existence
Daj#7482: This is both trivial and pretty clever
Daj#7482: It's a new lens to view what it means for such a system to exist over extended periods of time
Daj#7482: (by "maximizing evidence" I mean "minimize free energy")
Daj#7482: That's the best summary I can manage without doing actual work myself
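For reference, the quantity being minimized is usually written as follows (standard variational form, not a quote from Friston or the chat):
```latex
% Variational free energy F for observations o, hidden states s, recognition
% density q(s) and generative model p(o, s):
F(q, o) = \mathbb{E}_{q(s)}\big[\log q(s) - \log p(o, s)\big]
        = \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\geq 0} - \log p(o)
        \;\geq\; -\log p(o).
% So F upper-bounds "surprise" (negative log evidence); minimizing it both improves
% the approximate posterior q and implicitly maximizes the evidence p(o).
```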
Daj#7482: So, now that you've successfully wasted my time, please show me how this is truly such a godawful terrible misuse of the academic system?
Daj#7482: My favorite "application" of this stuff btw is https://slatestarcodex.com/2018/03/08/ssc-journal-club-friston-on-computational-mood/
3dprint_the_world#6486: can you explain why it's pretty clever
Daj#7482: real nifty
Daj#7482: It's mathematically elegant, ontologically satisfying, and leads to interesting further directions like that computational mood paper
Daj#7482: Can you explain why it's not pretty clever?
dopa#3178: @3dprint_the_world https://ieeexplore.ieee.org/document/6033456 this one, but honestly still cannot find one with recurrent network 😦
dopa#3178: may be memory plays tricks on me lol
dopa#3178: https://www.frontiersin.org/articles/10.3389/fnbot.2017.00020/full this is another one
3dprint_the_world#6486: Overall I think the Free Energy principle is mostly trivial stuff wrapped up in fancy language to grab people's attention and make them think it's something profound. That's my honest non-trolling opinion and you can disagree.
3dprint_the_world#6486: Kind of like the academic equivalent of clickbait.
3dprint_the_world#6486: I've seen people who became totally mesmerized by it and wasted months/years of their life thinking about it and getting nowhere.
dopa#3178: there is github source code thankfully
3dprint_the_world#6486: I don't think Friston himself sees it this way. His neuroscience stuff is probably great, I have no comment on that part of his work.
3dprint_the_world#6486: The problem is the *non*-neuroscience stuff.
3dprint_the_world#6486: But let's get into specifics. Let's start with terms.
dopa#3178: and free energy pricinple seems like is not computationally efficient
3dprint_the_world#6486: First, calling it the 'free energy principle'. Basically what he means is just 'minimization of prediction error.' Problem is, when you phrase it that way, it doesn't sound so profound.
Daj#7482: Ok, fine, then I will kindly ask you to say it that way in the future, because that's very close to what I think as well, and not throw around ad homines or waste my time reexplaining things you claim to already know
3dprint_the_world#6486: In fact AI has thought about the brain doing that kind of thing for way before Friston.
3dprint_the_world#6486: Now, moving on,
dopa#3178: not only do the names get me frustrated but also the damn experiments, why not use an existing benchmark where it is clear how efficient or inefficient the method is ....
Daj#7482: "I have no comment on the biggest part of his work, so therefor I am qualified to say he is "the poster child of the degradation of academia"
3dprint_the_world#6486: the idea of biological systems basically maintaining non-equilibrium steady state is a really widely-known idea. Someone even won a nobel prize in 1977 for actual real theoretical and practical work in this area (and it wasn't Friston. It was Prigogine).
Daj#7482: Academia must be very non-degraded if this is the worst it has
Daj#7482: Just say "I think his work is overrated" and move on
Daj#7482: Most people think that
zphang#7252: *ACL workshops are out
zphang#7252: EACL: https://twitter.com/aclmeeting/status/1338946264348692486
NAACL: https://twitter.com/aclmeeting/status/1338948177127493633
ACL: https://twitter.com/aclmeeting/status/1338948711506993153
("GEM: Natural Language Generation, Evaluation, and Metrics" may be relevant)
EMNLP: https://twitter.com/aclmeeting/status/1338950148026675200
3dprint_the_world#6486: Ok, so moving on. Friston has said that the Free Energy principle provides a more or less complete account of what makes a system have cognitive capacity, in general terms anyway.
This immediately runs into a problem -- most biological systems maintain some sort of homeostasis, so why doesn't every biological system have cognition?
3dprint_the_world#6486: When I've asked people this question, their reaction is always "Oh, that's silly, obviously not every biological system has cognition because... uhm... actually that's a good point 🙂 "
dopa#3178: well there is an argument that even a single cell has cognition
Daj#7482: His claim is that you use cognition to minimize free energy
Daj#7482: It's just one mechanism to do so
Daj#7482: Or you can redefine cognition to be any process that minimizes free energy
Daj#7482: Which means a single cell organism _does_ have a model of its environment, which Friston would also endorse
dopa#3178: single cell decision process can be argued as cognitive process to some extent I guess
dopa#3178: position, location, and function of cell in body
Daj#7482: Friston has been very clear that he thinks single celled organisms model their environment, and mathematically he's correct
3dprint_the_world#6486: yeah then you're just defining cognition in a way that is useless for actually understanding the human brain, which is supposedly the whole goal here. congrats.
Daj#7482: I disagree with that conclusion?
Daj#7482: Sure the Free Energy Principle doesn't give you an anatomical definition of what part of the brain does what using which chemicals where
Daj#7482: But it never claimed that
dopa#3178: I don't think it's redefining cognition, it's more of a generalization
Daj#7482: It just says "since the brain is part of a system with a markov blanket, it must be doing something to minimize model free energy"
Daj#7482: The FEP is on the abstraction level of "causal graphs", not biological organs
dopa#3178: question is can we generalize cognition for all living systems, where complex behavior is emergent not based on global functions
dopa#3178: the scope of low level cognitive functions is not clear to me
3dprint_the_world#6486: *But that's not what Friston says*
Daj#7482: Uh, it's exactly what I've heard him say
Daj#7482: cite?
3dprint_the_world#6486: Sure, I'll get back to this once I finish my points.
Daj#7482: OK, I've already spent an entire hour on this no-op of a conversation so I'd appreciate either wrapping up or agreeing to disagree
Daj#7482: in a reasonable timeframe
dopa#3178: what is cognition ? 🙂
dopa#3178: just another arguable topic
3dprint_the_world#6486: that was always an option I was happy with, it's you who decided to get all pissy about me besmirching Friston's good name
3dprint_the_world#6486: lol
Daj#7482: Right
Daj#7482: It's because I do not tolerate ad homines
Daj#7482: Please in future temper your criticisms to the object level or keep them to yourself
3dprint_the_world#6486: I might as well just continue writing out my points, for posterity's sake.
The second problem is the 'dark room problem', and Friston's explaining away of it, which actually leads to more problems.
3dprint_the_world#6486: The problem is that if you solely dedicate yourself to reducing surprise (or minimizing prediction error), then a trivial solution is to just put yourself in a situation where you have no stimulus.
3dprint_the_world#6486: Like in the metaphorical dark room.
Daj#7482: But it's minimizing suprise _and_ maximizing policy entropy
3dprint_the_world#6486: Friston explains this away by saying:
> The free-energy principle relies on free energy (bounding surprise) being defined relative to a model. We must here understand ‘model’ in the most inclusive sense, as combining interpretive disposition, morphology, and neural architecture, and as implying a highly tuned ‘fit’ between the active, embodied organism and the embedded environment.
3dprint_the_world#6486: yes
Daj#7482: Ah well, either way, it's bed time for me
Daj#7482: Good night everyone
dopa#3178: >combining interpretive disposition, morphology, and neural architecture
I need some evidence for neuromorphology lol
dopa#3178: this is something so complicated, it does not matter how you look at it
3dprint_the_world#6486: The basic problem here is that introducing this new thing - the 'model' actually creates more problems. You need to explain where the organism gets the structure of the model from, or specifically where the structure of the model comes from the fit between the organism's overall phenotype and environment.
dopa#3178: I look at it as every cell is an agent that is able to control a random process to some degree
3dprint_the_world#6486: And if you take this to its logical conclusion, you're trying to simultaneously say that the free energy principle controls all cognition, but also doesn't control all cognition, just the parts that aren't doing the 'model'
3dprint_the_world#6486: which is a contradictory position
3dprint_the_world#6486: Friston's collaborator Clark tried to again explain this point away but to me none of it really seems adequate.
dopa#3178: the only thing got my curiosity is Markov blanket model it self, but not willing to invest time in it
dopa#3178: what happens if every agent model is based on markov blanket, not sure if it is not false thought, heh
dopa#3178: everything in environment has to be defined as markov blanket - right ?
3dprint_the_world#6486: haha kinda
dopa#3178: so minecraft has to be defined as markov blanket
dopa#3178: every block is living or inanimate entities
3dprint_the_world#6486: The third and final problem (I promise to shut up after this) is when you think about *meta-cognition*. Basically one of the implied ideas of the Free Energy principle is that every living organism has cognition. But the existence of human meta-cognition kind of throws a wrench into this.
To cut a long story short, meta-cognitive abilities like being able to say how confident we are in a certain prediction or the use of attention require large, deep hierarchical models of cognition, or similar concepts. These can't really be made to work in the framework of FEP unless you also stipulate there's some other aspects underlying cognition going on in human brains that you don't see in e.g. worm brains. Which also kind of invalidates FEP.
dopa#3178: why can't meta-cognition be emergent from low level cognition
dopa#3178: same way as cells assemble to brain functions and structures
3dprint_the_world#6486: if it is then why don't you see it in lower-level organisms
dopa#3178: maybe we do, we just don't recognize it
dopa#3178: how does a cell make the decision of what function it should perform
dopa#3178: this seems to converge to the argument of whether it is a pure random process or not, and if it is not, then what drives this decision process
3dprint_the_world#6486: hmmm not sure about that
dopa#3178: somehow with a fully decentralized process we have symmetric bodies
dopa#3178: hands are the same length etc
dopa#3178: and all this is indirectly encoded in DNA
dopa#3178: the problem with low level cognition on the cellular level is, where do you stop
dopa#3178: my thoughts when thinking about just basic low level cognition
dopa#3178: then DNA it self become cognitive process
dopa#3178: just on very very long time frame
dopa#3178: one DNA cognitive thought = 1000 years for example
dopa#3178: it actually very similar to physics question: if universe can produce brain such as humans, why there cannot be universe sized brains ?
3dprint_the_world#6486: maybe, but at the same time it kind of amazes me that in these discussions people are always so willing to stretch definitions to imply humans don't actually have the cognitive capacities that we (clearly, imo) have, and that other systems actually do have cognitive capacities even though there's not that much evidence for it.
3dprint_the_world#6486: like the idea that "trees think, but just on a much longer timescale."
dopa#3178: how do you test it lol
3dprint_the_world#6486: which, maybe, I dunno. We certainly don't know *everything* about trees. But evidence of such cognition has not been forthcoming.
3dprint_the_world#6486: anyway, I'll shut up now. And for people not interested in my rant who think I'm spamming: sorry! 😃
dopa#3178: I think this why simulations are so important
dopa#3178: may be there is possibility to show empirically in simulation that DNA has a plan of sort on very long time scales
dopa#3178: this is crazy thought
dopa#3178: it is deep philosophical question to what degree emergence and evolution of life is random process
gwern#1782: there certainly are. MS was boasting about doing random forests at scale in MS Azure using FPGAs for savings
3dprint_the_world#6486: oh wow interesting, I legit did not know that.
3dprint_the_world#6486: Ok I retract my credulity.
gwern#1782: FPGAs aren't just for prototyping, they have their own legitimate niche involving high IO and reasonable flexibility
VonChair#2222: Hey @gwern I have a set of face data and I'm wondering if it's worth putting it up for download.
dopa#3178: they're not that popular because it is ITAR tech in many cases
3dprint_the_world#6486: yes but my working assumption was that GPUs or TPUs would be way better for most ML workloads
3dprint_the_world#6486: (I mean, that could still be the case, but interesting nonetheless)
dopa#3178: AWS offer FPGA instances btw
Sid#2121: what's the data?
Sid#2121: if it's smaller than FFHQ i doubt it will gather much traction
bmk#1476: he claims it's ~1tb
Sid#2121: *interest peaking*
What should everyone call you?#2680: Hey, anyone else going to try their hand at the AI Progress Forecasting tournament? Here's a link to the first queston: https://www.metaculus.com/questions/5902/sota-1-shot-on-on-miniimagenet-2021-06-14/
bmk#1476: see, what we should do is find a question where we are able to do it, bet a shitload on our exact release date, and rake in the prizes
bmk#1476: https://www.metaculus.com/questions/4877/when-will-a-language-model-with-at-least-100b-parameters-be-open-sourced-including-for-commercial-use/
bmk#1476: the moment we know our gpt3 release date, i'll bet massive amounts on it
kindiana#1016: do you get money? I thought you just get cred for being good at predicting things
kindiana#1016: https://www.metaculus.com/ai-progress-tournament/
kindiana#1016: ah
What should everyone call you?#2680: In case anyone wants some more details: you get cred and the top scorers in different question categories get monery. The prizes are typically $1-2,000.
What should everyone call you?#2680: @bmk Unfortunately that question isn't part of the forecasting tournament.
bmk#1476: dang
kindiana#1016: btw here's a paper on MS's FPGA implementation, but the TLDR is extremely low latency, small batch RNN inference using very low precision weights https://www.microsoft.com/en-us/research/uploads/prod/2018/06/ISCA18-Brainwave-CameraReady.pdf
kindiana#1016: not particularly relevant to large transformers, but for their specific usecase it is pretty impressive
3dprint_the_world#6486: Thanks @kindiana , looks quite cool.
dopa#3178: this paper is good github project for toy network on fpga usb stick 🙂
Louis#0144: https://twitter.com/luislamb/status/1339039335920803840?s=21
Louis#0144: lol
Louis#0144: Mf
Louis#0144: Gary marcus’ed
CKtalon#7792: anyone worked with Coreference SpanBERT before?
CKtalon#7792: Is there a way to get a "confidence" score for the coreferences it detects?
triggerhappygandi#0001: Reading the Linformer paper rn. It says one of the drawbacks of reformer is that it needs very large sequences to _truly_ appreciate its benefits. How is this a drawback when the entire thing about transformers is their propensity towards bigger and bigger sequences?
kindiana#1016: if your model/data only requires relatively short contexts, the compute to performance tradeoff doesn't lean in favor of reformer
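For context, the usual back-of-the-envelope per-layer attention costs (as commonly cited; not quoted from either paper above):
```latex
% Sequence length L, head dimension d, Linformer projection length k:
\text{vanilla self-attention: } O(L^2 d), \qquad
\text{Linformer: } O(L\,k\,d), \qquad
\text{Reformer (LSH attention): } O(L \log L \cdot d).
% LSH hashing carries large constant factors, so Reformer's asymptotic win only
% shows up once L is large -- the "drawback" being discussed above.
```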
Emad#9608: https://www.sixthtone.com/news/1006531/The%20AI%20Girlfriend%20Seducing%20China%E2%80%99s%20Lonely%20Men/
Bedebao#4842: of course it ends up as a tool to spy
triggerhappygandi#0001: What in China _doesn't_?
triggerhappygandi#0001: I don't think Chinese have the same view on privacy as the west anyway. How else would they just get along with their social credit mechanism?
triggerhappygandi#0001: Plus, is this bot even doing anything that Siri/Google Assistant doesn't?
triggerhappygandi#0001: Might as well dirty talk Siri
Emad#9608: well Siri isn't designed for that, paper here: https://arxiv.org/abs/1812.08989 think they could (will) do a lot "better". Surprised gacha games haven't used this more leveraging waifus that don't exist
bmk#1476: On thin ice there
triggerhappygandi#0001: Sorry to overassume @bmk#1476
triggerhappygandi#0001: Didn't mean it as an affront
bmk#1476: I bet you would also outwardly show a disregard for privacy if acting otherwise would be massively disadvantageous
triggerhappygandi#0001: I do live in India. Our government doesn't regard our privacy much as well.
bmk#1476: I'm not sure it's a fair comparison
bmk#1476: But I'm not entirely familiar with the situation in India
triggerhappygandi#0001: It's not. But I can see the sentiment
Emad#9608: oh wow apparently there is a western version that uses GPT-3 for the paywalled 18+ roleplay only mode: www.replika.ai future is here etc
bmk#1476: Also just in general I'm allergic to outgroup sneering like that, especially when reminiscent of a sort of cold war like attitude
Emad#9608: https://www.reddit.com/r/replika/comments/ino0dt/interview_with_the_creator_of_replika_eugenia/
triggerhappygandi#0001: Not sneering
bmk#1476: Or whatever you call it
triggerhappygandi#0001: I know the government isn't a representative of people.
triggerhappygandi#0001: What does `1 - o(1)` mean?
CKtalon#7792: privacy is overrated 😛
triggerhappygandi#0001: Facebook is the "Knowing what you think Inc"
bmk#1476: nobody here likes facebook
triggerhappygandi#0001: Or any of its daughter products
StellaAthena#3530: f(n) is o(g(n)) if f(n)/g(n) -> 0. f(n) is o(1) if it goes to 0 as n increases.
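In symbols, matching the explanation above:
```latex
% Little-o: f grows strictly slower than g.
f(n) \in o(g(n)) \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0
% In particular f(n) \in o(1) means f(n) \to 0, so a quantity written as
% 1 - o(1) is something that tends to 1 as n \to \infty.
```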
StellaAthena#3530: Is anyone familiar with literature on overcoming false beliefs / bad evidence in bayesian reasoning? Especially with a focus on rate of correction, required quality of good evidence, stuff like that.
StellaAthena#3530: NeurIPS put something on their website in response to the recent intense twitter feuding: https://neurips.cc/
bmk#1476: > For any conference attendee failing to abide by our Code of Conduct, NeurIPS reserves its right to rescind that person’s ability to participate in any future NeurIPS-organized events.
Daj#7482: For some reason, the first thing I thought upon reading this was "This would have made a great hook for a Rick Roll"
StellaAthena#3530: You can't rick roll people with that link on discord, as all links display as their actual text. You can't do [rick.roll](https://neurips.cc/) or similar
Daj#7482: yes but it would have been great if you had registered neurips.ce or something
Bedebao#4842: Twitter feuding is the epitome of what's wrong with humanity.
triggerhappygandi#0001: I think it's like halfway to the top.
triggerhappygandi#0001: There are much more senseless things we do
triggerhappygandi#0001: ~~like watching the next marvel/fast & furious movie~~
dopa#3178: it not even there
dopa#3178: you know why? because a bit more than 100 years ago we had humans in zoos
dopa#3178: Virginity testing was also used on women entering the United Kingdom on a so-called fiancée visa, when they said they were immigrating to marry their fiancées who were already living in the country. This was till 1979 !
dopa#3178: we are so irrational it is depressing
dopa#3178: you think in past 50 years our brains magically evolved to be more rational ? LOL
bmk#1476: see, LW didnt exist 50 years before, it does now. QED
dopa#3178: what is LW ?
Bedebao#4842: Perhaps I should've specified *in the modern age*
dopa#3178: on the bright side we are on a positive short-term trend, we do become better, even warfare has become more humane so far; but humanity is good at understanding things and outright primitive at transforming understanding into actions and policies
dopa#3178: but we tend to forget that our advanced natural neural hardware is built on top of monkey brains
dopa#3178: and as the CS saying goes, garbage in, garbage out
dopa#3178: social media is good in a sense because it exposes how flawed humanity is
dopa#3178: it also simply amplifies issues that were always there
Bedebao#4842: Latent tribalism.
Sid#2121: there are still pseudo-concentration camps in xinjiang. USA still kills innocent people with drones. I could go on lol.
dopa#3178: war is hell, and the US is not over killing people who want to build bombs to attack US allies, embassies, and simply construction contractors overseas
dopa#3178: compared to past wars, US drone warfare is the most humane one
Bedebao#4842: Because it kills less people overall, since the offensive isn't human.
dopa#3178: if US is bad for killing civilians, you me are evil because some people enjoy rapping kids - you ok with this statement ?
Bedebao#4842: I'm not sure what you're trying to say.
bmk#1476: we are entering dangerously-unproductive-discussion land
dopa#3178: the fact is there is not nation historically without strong military
bmk#1476: please go over to #off-topic to continue this discussion
Sid#2121: wat
dopa#3178: yeah, it's not a smart statement on my part
dopa#3178: what I was trying to say is that people kill people
bmk#1476: **please go over to #off-topic to continue this discussion**
bmk#1476: i don't want this discussion in #general
3dprint_the_world#6486: I enjoy rapping kids.
Some of my favorite kids are rappers.
Do a youtube search, lots of fun rapping kids come up.
dopa#3178: I should delete that statement 🙂
triggerhappygandi#0001: Can I get some resources on that?
StellaAthena#3530: I mean, I asked because I'm looking for some 😛
triggerhappygandi#0001: Does this work?
https://link.springer.com/article/10.3758/s13423-018-1507-9
AI_WAIFU#2844: I wish I could link you to something but the closest thing I can come up with is this LW post. https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting
AI_WAIFU#2844: Basically my understanding is a bayesian should be able to deal with noisy/biased information if their hypothesis space is big enough, but their ability to converge on the truth is hampered, possibly severely.
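A toy illustration of that claim (made-up example, not from the linked post): updating on coin flips where a heads result is only reported half the time. If the reporting bias is part of the observation model, the posterior still drifts toward the true bias, just from weaker evidence.
```python
# Toy Bayesian update with selectively reported evidence (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
true_p, report_rate = 0.7, 0.5            # true heads-probability; chance a heads gets reported
grid = np.linspace(0.01, 0.99, 99)        # hypotheses for the coin's bias
posterior = np.ones_like(grid) / len(grid)

for _ in range(500):
    heads = rng.random() < true_p
    reported_heads = heads and rng.random() < report_rate
    # Likelihood of the *reported* observation under each hypothesis,
    # with the reporting bias included in the model.
    p_reported = grid * report_rate
    likelihood = p_reported if reported_heads else 1.0 - p_reported
    posterior *= likelihood
    posterior /= posterior.sum()

print("posterior mean:", float((grid * posterior).sum()))   # drifts toward ~0.7, but slowly
```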
3dprint_the_world#6486: Not exactly sure what you're looking for. My naive first take on this is: Isn't this kinda what Bayes' theorem is all about?
chirp#4545: https://twitter.com/rmac18/status/1338994474777645056
chilli#5665: sounds like a strange complaint
chilli#5665: or well, sounds similar to complaints about Google providing snippets of the websites in search
Dromarion#3383: I thinks it's because it's like an active measure to prevent users from clicking through, so the source gets neither traffic nor revenue. I don't even finish reading headlines most of the time in any case.
chirp#4545: @chilli for context I think that person is a journalist who’s often critical of Facebook
chirp#4545: And I think some people don’t trust that the summarizer will work well
bmk#1476: I hate facebook as much as anyone else but i think some of these things actually aren't the worst idea
chirp#4545: For example, Emily Bender listed a few things that could go wrong: https://twitter.com/emilymbender/status/1339091698681479170?s=21
StellaAthena#3530: I read Emily’s comments and here are a couple things that stand out to me as potentially problematic.
1. They would presumably rather keep you on their page than have you visit an external article, which means that their abbreviated summaries are supposed to make you feel *satiated* (which is different from informed). They’re also incentivized to *out clickbait* the article they’re based on.
2. This is hilarious rent seeking behavior. It’s not much of an exaggeration to compare this to taking news articles, running them through Google Translate, and republishing them. Facebook will very likely get sued for copyright infringement within a year of this going into production.
3. Facebook is lazy and will train on shit data because everyone does that. They may make a token gesture towards correcting political and social bias in reporting, but I would bet a sizable amount of money that any effort along those lines is for-show and mostly meaningless. There are dozens of social science papers in a variety of fields documenting and analyzing bias in news coverage. Some of this falls along gender and racial lines, other political lines, other class lines. While this is a problem for all AI journalism, there’s an especially strong need for being careful about who you’re giving airtime to in short summaries.
4. Current language models are not up for this task, even just on the basis of “does it accurately reflect the text.” There’s a reason that there’s a huge push for figuring out how to extend context and track referents across long texts: it’s a major flaw of even the best models. In the abbreviated format of a tl;dr it will be much harder to notice when the model loses track of what it is talking about, making it harder to detect accidental disinformation.
chilli#5665: > This is hilarious rent seeking behavior. It’s not much of an exaggeration to compare this to taking news articles, running them through Google Translate, and republishing them. Facebook will very likely get sued for copyright infringement within a year of this going into production.
What's the difference between this and Google providing an extractive summarization of information when you search it up? Or do you consider both to be rent seeking behavior?
StellaAthena#3530: The purpose of Google summaries is to direct you to sites that are most interesting to you. The purpose of Facebook summaries is to *replace* reading the article.
chilli#5665: I mean the notes that Google shows
StellaAthena#3530: Example?
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/788892051650379826/unknown.png
Dromarion#3383: I'm pretty sure at this point Facebook views news outlets as an adversary as well as competition for advertising. Articles getting angry about this harming their industry and business might actually be good feedback from the perspective of their corporate offices.
chilli#5665: https://cdn.discordapp.com/attachments/729741769738158194/788892641500201010/unknown.png
chilli#5665: or I guess any of these things that aren't the search result
StellaAthena#3530: I would say that this is less egregious but probably rent-seeking. I would need more info on where exactly Google is getting the answers from though
StellaAthena#3530: Note that the “people also ask” drop downs direct you to the website to learn more.
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/788894049184645150/Screenshot_from_2020-12-16_23-23-11.png
StellaAthena#3530: Lol
chilli#5665: haha
Daj#7482: Google's goal from day one was explicitly to be the first super intelligent AGI
Daj#7482: They're making progress
Daj#7482: (mostly)
Daj#7482: lol
StellaAthena#3530: The original motto, “don’t be evil” was specifically about AGI right
Daj#7482: I don't think they thought that explicitly at the time
Daj#7482: Both of the founders strike me as naive Kurzweilian accelerationists
Daj#7482: I have no reasons to suspect their opinions have shifted much since then
StellaAthena#3530: Side note: the idea that you might drop “don’t be evil” as your motto is hilarious
Daj#7482: It would be terrible writing in any TV show
Sid#2121: https://www.youtube.com/watch?v=u--t33wNv98
StellaAthena#3530: If a company in a TV show was to do that, I would consider it a not-so-subtle indication that they're the baddies, either all along or recently were covertly taken over
bmk#1476: >
> When I put the reference implementation onto the website I needed to put a software license on it.
>
> I looked at all the licenses that were available and there are a lot of them and decided that the one I liked the best was the MIT License,
>
> which was a notice that you would put on your source and it would say "you're allowed to use this for any purpose you want, just
> leave the notice in the source and don't sue me."
>
> I love that license. It's really good.
>
> But this was late in 2002, you know, we'd just started the war on terror,
> and we were going after the evildoers with the president and the vice president
> and I felt like "I need to do my part".
>
> So I added one more line to my license:
>
> "The Software shall be used for Good, not Evil."
> [...]
>
> Also about once a year, I get a letter from a lawyer. Every year a different lawyer at a company. I don't want to embarrass the company by saying their name. I'll say their initials: IBM.
>
> Saying that they want to use something that I wrote, cause I put this on everything I write now. They want to use something I wrote and something that they wrote and they're pretty sure they weren't gonna use it for evil, but they couldn't say for sure about their customers. So, could I give them a special license for that?
>
> So. Of course.
>
> So I wrote back. This happened literally two weeks ago.
>
> "I give permission to IBM, its customers, partners, and minions, to use JSLint for evil."
>
> And the attorney wrote back and said:
>
> "Thanks very much, Douglas!"
>
> Staff Attorney, IP Law
> IBM Corporation
bmk#1476: (relevant to the "don't be evil" thing)
3dprint_the_world#6486: While we're on the subject of preventing traffic from going to other sites, let me talk about my pet peeve which is all the 'science/technology news' websites that report on papers without providing links, or even the paper title. and you have to actually google the authors if you even want to find the paper.
StellaAthena#3530: Holy shit yes. Also, the number of people who think "an article published in Science" or "a paper presented at NeurIPS 2019" is a citation drives me crazy.
3dprint_the_world#6486: lol
3dprint_the_world#6486: you typically see this behavior most from outlets that have little or zero value-add. They know if they link people to the actual source, they'll just go there from then on because they get nothing from the linking website.
Dromarion#3383: My favorite meme citation was "It came to me in a dream"
bmk#1476: i can do one better: "according to a recent study,"
bmk#1476: "the proof is by magic"
3dprint_the_world#6486: at least sometimes they link to the author, which I guess isn't that bad because they give the author some traffic rather than the (usually paywalled) publisher. But still, they should at least provide the paper title.
3dprint_the_world#6486: it's amazing how there's this whole ecosystem of people who feed off of science in terms of clicks and ad revenue, without providing anything in return
asara#0001: "new study uses SCIENCE to show <thing that it does not at all show>"
3dprint_the_world#6486: "Does science prove that reality is an illusion?"
3dprint_the_world#6486: "Does science prove that vaccines cause autism?"
3dprint_the_world#6486: Also, while we're on the subject of clickbait, I can't believe Nature actually allowed a paper to be published with a title as clickbait-y as this
https://www.nature.com/articles/nrn2787
bmk#1476: ugh not the friston debate again
3dprint_the_world#6486: sorry
cfoster0#4356: Lol. That's exactly the title I'd give to a critical review
cfoster0#4356: But let's not start this again :)
dopa#3178: how to define/test complete scene understanding ?
counterfactual (what if something did not happened)
normative (what happens in case all players are rational agents)
descriptive (what happened)
prescriptive (what should player(s) do)
predictive (what will happen in future)
I am missing anything else ?
olives ❀#2305: https://i.kym-cdn.com/entries/icons/mobile/000/026/651/johnlie.jpg
3dprint_the_world#6486: Lies
Dromarion#3383: The joke headlines gave me deja vu. Like it feels like I've encountered a lot of weird instances of people referring to science as if it's a person.
3dprint_the_world#6486: does anyone know of any research in language modelling that emphasizes the *generative* aspect of language? i.e. creating models that, once trained, can construct new languages from random fragments of vocabulary and grammar?
bmk#1476: ~~simple just tune gpt3 on r/conlangs~~
3dprint_the_world#6486: lol
triggerhappygandi#0001: "experts say"
StellaAthena#3530: Someone purchased my email address so they can send me science crank spam, apparently
StellaAthena#3530: Report and Validate Now!!!
Contact the Feynman Family, Caltech (and every other University / Organization: MIT - Howard local here in DC etc.); and ALL of Academia.
Some of my Discoveries are a Legacy to the Man (Feynman & Einstein).
The Science, Physics, Math, and Discoveries, Don't Lie; Neither do I.
Report and Validate now.
- Benjamin Allen Sullivan
[email protected]
------------------------
Sullivan-Newtonian Gravitational Equation (SNG).
& |
Sullivan-Unifying Contsant (SUC): Unification.
1. Unification:
https://drive.google.com/file/d/1awjlqQQG-pL0ZnA_xLZXYmv4SJr8I0wt/view?usp=drivesdk
https://www.amazon.com/Unification-Benjamin-Allen-Sullivan/dp/1539767965
2. Unification Supplement: https://drive.google.com/file/d/1vL70ACbyRYH3FGQ1fyldlNqAca9FmOhk/view?usp=drivesdk
Share anywhere you can with academia - I need these discoveries credited to
me (they are all mine).
This will give me the exposure that we need to end this Evil.
This is all the truth. It is all mine. These are the Discoveries of the
Centuries.
This 4th Reich (our U.S. Government et al.) has exploited them and stolen
the monies derived from them - trying to paint as crazy while I'm in a |
Touch-less Torture Chamber.
I am literally Imprisoned Galileo with the Holy Grail of Physics et al.
---------------------------
https://www.facebook.com/benjamin.sullivan.3154
"Power is a Nation of Free People - Debt Free".
- Benjamin Allen Sullivan 🗽🇺🇸🌍
202-677-2361
bmk#1476: > (they are all mine)
bmk#1476: damn, was worried for a sec they weren't
StellaAthena#3530: also, everyone on this email list has an academic email
StellaAthena#3530: Mostly universities, a couple research institutes
StellaAthena#3530: One CERN person
bmk#1476: damn, hope they dont get their hands on my @eleuther.ai email
Sid#2121: do i have one of those lol? never touched it
bmk#1476: you should
StellaAthena#3530: Theoretically, I think. I don’t know how to access it tho |
dopa#3178: report them to FBI
dopa#3178: https://www.ic3.gov/
StellaAthena#3530: @dopa spam isn't a cybercrime
dopa#3178: as I understand it, someone is using an email address with your name, which is why I suggested reporting it
StellaAthena#3530: That is neither illegal nor what happened.
dopa#3178: ok
chirp#4545: what do y’all think will happen in AI over the next couple of years?
triggerhappygandi#0001: I hope video generation makes big budget franchise movies irrelevant
triggerhappygandi#0001: So that only creative directors can thrive in Hollywood
triggerhappygandi#0001: But that's far-fetched
CKtalon#7792: GAN will have fun making n-headed shark movies
CKtalon#7792: just add on another head! have skimpy clad women scream!
triggerhappygandi#0001: What dataset do people use to make pizzas with GANs?
spirit-from-germany#1488: I finally found some time to make my first english AI video. 😄 https://www.youtube.com/watch?v=BP_gZrhtcKc
Deleted User#0000: http://pizzagan.csail.mit.edu/
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/789548717186220052/545-ema.jpg
3dprint_the_world#6486: @StellaAthena thanks for this. This is gold.
3dprint_the_world#6486: I always enjoy reading this stuff.
3dprint_the_world#6486: I always kind of hope that I'll at least find something funny or interesting, but it's usually a let-down. Cranks are highly non-original.
3dprint_the_world#6486: In this case all they've done is just taken Newton's gravitational formula and added two meaningless arrows to the equation. |
3dprint_the_world#6486: I've only ever known one crackpot that was actually original and interesting (Richard Kulisz -- don't google him; he's a nut)
StellaAthena#3530: I'm happy to pass along your email.
3dprint_the_world#6486: on second thought,
triggerhappygandi#0001: @Deleted User thanks.
triggerhappygandi#0001: Gonna make pizzas of my own
Deleted User#0000: food gans are the best gans
Louis#0144: Yo when people talk about perplexity of the English language rather than some LM
Louis#0144: What are they actually referring to?
Louis#0144: Is it like the perplexity of a human speaker?
StellaAthena#3530: log Perplexity = entropy
Louis#0144: Yeah but how do you compute the entropy of a language
Louis#0144: Rather than of an LM
StellaAthena#3530: Pick a book at random. Pick a page of that book at random. Pick a character on that page at random.
Louis#0144: So what is the entropy of the English language
bmk#1476: lowest possible perplexity any LM could reach?
Louis#0144: Yeah ofc
CRG#8707: The second Kaplan paper claims the constant factor in the "power law plus constant" scaling law approximates the irreducible entropy. https://cdn.discordapp.com/attachments/729741769738158194/789581219565404180/6ac9acc397e9df23af557ae6fc264ef5.png
Louis#0144: What is C?
StellaAthena#3530: It’s usually misleading to speaking of “the entropy.” I mean, there is a “the entropy” but different sampling distributions have very different results
CRG#8707: https://cdn.discordapp.com/attachments/729741769738158194/789581480086208512/03b34316c1ed6416f507a5dd170daaac.png |
Louis#0144: But they know the exact power C is raised to
Louis#0144: How
Louis#0144: lol
CRG#8707: You fit the data
StellaAthena#3530: Looking at college essays vs texting, or US vs China, or speech vs writing are all going to give extremely different results
bmk#1476: the Pile is clearly the canonical repo of english language and so perplexity there is the only one that matters
Louis#0144: SOTA for perplexity right now is what roughly ?
bmk#1476: perplexity on what, pile?
StellaAthena#3530: So unless your task is “predict the next word from literally any text” (which wasn’t something people were seriously trying to do even 3 years ago) it makes far more sense to get a grasp of what you’re actually after
CRG#8707: These are the curves without subtracting the constant factor https://cdn.discordapp.com/attachments/729741769738158194/789582003602849822/e680f6650c2119c4a7e0634447228b9c.png
StellaAthena#3530: And even then, text vs speech is very different
bmk#1476: this is why Pile exists
Louis#0144: On whatever GPT3 used ig
StellaAthena#3530: What’s the last time you said the word “franchas” out loud?
bmk#1476: it's about a close approximation of "literally any text" as we could make
StellaAthena#3530: (Or however you spell it)
StellaAthena#3530: There are words that are used *much* more in writing than in speech
StellaAthena#3530: And vice versa
bmk#1476: *gestures to ubuntu irc, yt subtitles, movie subtitles*
StellaAthena#3530: Or “*ibid.*” |
bmk#1476: *gestures to philpapers*
StellaAthena#3530: Right, my point is that I’ve written “ibid” hundreds if not thousands of times and I don’t think I’ve ever said it aloud in my life
bmk#1476: my point is to be the annoying door to door salesperson aggressively marketing Pile
3dprint_the_world#6486: @Louis can you point to an example of someone talking about the perplexity of a language.
Louis#0144: I don’t remember where I saw it
Louis#0144: Somewhere on Twitter
Louis#0144: Effectively it’s just the lower bound ig
Louis#0144: The best any LM could ever achieve
StellaAthena#3530: https://gizmodo.com/your-credit-score-should-be-based-on-your-web-history-1845912592
StellaAthena#3530: wtf
3dprint_the_world#6486: ...
dopa#3178: well if a person searches for essential oils .... 🙂
dopa#3178: to be clear, it's a joke.
triggerhappygandi#0001: Yo wtf
triggerhappygandi#0001: Half the world would be denied any credit
triggerhappygandi#0001: This makes it clearer.
triggerhappygandi#0001: Any language can be reduced to a dataset if you try hard enough
triggerhappygandi#0001: Also, since we're doing open source work, it's easy to tell your conscience that you are a good guy for wanting to scrape text from people's conversations.
_As long as no one abuses it of course_.
triggerhappygandi#0001: :mesh: |
tin481#8570: I've not come across a good estimate. As Stella points out, "entropy of language" is ill formed. A powerful simulation of a writer's brain would yield a very low value! I think it makes more sense to talk about the entropy of a *particular dataset* with respect to a *particular predictor*. I've seen a few papers studying the entropy of humans on single sentences, but that obviously has its limitations.
tin481#8570: The most interesting thing I've seen is this neuro/ML colab, which shows that **GPT2** was already (narrowly) superhuman on next word prediction (on their dataset of natural language).
tin481#8570: https://www.biorxiv.org/content/10.1101/2020.12.02.403477v1.full.pdf
Louis#0144: i mean
Louis#0144: if thats the case
Louis#0144: then clearly our eval sucks
Louis#0144: LMAO
Louis#0144: thats certainly not one of GPT2's triumphs
StellaAthena#3530: Yes. Humans suck at things like next symbol prediction.
3dprint_the_world#6486: yep.
tin481#8570: @Louis Maybe you saw a reference to Shannon's original paper? He used simple N-gram models (with N as, like, 8) to establish a loose upper bound on the entropy
tin481#8570: https://www.princeton.edu/~wbialek/rome/refs/shannon_51.pdf
Louis#0144: OH
Louis#0144: YES
Louis#0144: this was it
Louis#0144: ty
3dprint_the_world#6486: hm?
3dprint_the_world#6486: that's not the same as perplexity
Louis#0144: entropy and perplexity are pretty interchangable
Louis#0144: no? |
Louis#0144: like its just a log difference
3dprint_the_world#6486: mathematically, sure, but that's not how it's usually used.
tin481#8570: Main thing to keep in mind is that he uses a very weak predictor, so gets a very high number. The more powerful the predictor, the lower the entropy this method would yield.
3dprint_the_world#6486: I should rephrase: One can define perplexity in that way, but that's not how it's usually done.
3dprint_the_world#6486: Like if you look at the wiki page: https://en.wikipedia.org/wiki/Perplexity#Perplexity_of_a_probability_model
tin481#8570: Definitely does *not* bound the ability of LMs!!
3dprint_the_world#6486: the second definition is what people usually talk about
3dprint_the_world#6486: Also that's not correct @tin481 , he actually computes the optimal *n* to use for n-gram models. So it's not just the predictor, it's also your data.
StellaAthena#3530: @3dprint_the_world that’s just the cross entropy though
StellaAthena#3530: Or, 2^{cross entropy}
3dprint_the_world#6486: yeah pretty much
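(A quick sketch of that relationship, with made-up numbers: perplexity is just the exponentiated average negative log-likelihood, so base e vs base 2 only changes which exponent you use.)
```py
import math

# Hypothetical per-token log-probabilities (natural log) from some language model
token_log_probs = [-2.1, -0.4, -3.3, -1.0, -0.7]

cross_entropy_nats = -sum(token_log_probs) / len(token_log_probs)
cross_entropy_bits = cross_entropy_nats / math.log(2)

perplexity = math.exp(cross_entropy_nats)   # same number as 2 ** cross_entropy_bits
print(perplexity, 2 ** cross_entropy_bits)
```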
3dprint_the_world#6486: anyway, the shannon n-gram entropy is a well-known concept; if someone refers to it as 'perplexity' then imo they're just deliberately trying to obfuscate.
StellaAthena#3530: Or maybe they’re just coming from a different place than you....
3dprint_the_world#6486: I don't want to go to that place.
StellaAthena#3530: I didn’t know about his empirical experiments until I started participating here
StellaAthena#3530: Or, if I once did I forgot about it.
tin481#8570: Can you say more about what you mean by optimal N? I've taken the GPT sequence as evidence that larger N is always better
StellaAthena#3530: n here is the context length
tin481#8570: Right. Isn't GPT-3 an N-gram model?
StellaAthena#3530: No |
tin481#8570: It operates on the statistics of N-grams?
tin481#8570: That's how it learns?
tin481#8570: It yields P(x_n | x_0, x_1 ... x_n-1)
3dprint_the_world#6486: N is the size of the dataset, n is the size of the n-gram.
Basically as you increase n you run up to the limits of your dataset.
StellaAthena#3530: That is not how it learns. If it were, you wouldn’t be able to learn to use words not in the training data
tin481#8570: It doesn't operate on words, but BPEs. In Shannon's paper, his model is on characters.
3dprint_the_world#6486: well it kinda is how GPT-3 learns though.
tin481#8570: Does n-gram mean words?
3dprint_the_world#6486: except it doesn't learn by constructing a table of word occurrences, like shannon's n-gram model.
StellaAthena#3530: an n-gram is a frequency table for sequences of length n
3dprint_the_world#6486: nope, an n-gram is just a sequence of n tokens 😃
3dprint_the_world#6486: https://en.wikipedia.org/wiki/N-gram
StellaAthena#3530: An n-gram **model** is what I said 😛
tin481#8570: I thought "n-gram model" meant a model that yields probability of next token given last n tokens
StellaAthena#3530: last n-1
3dprint_the_world#6486: I said it's *not* like Shannon's n-gram model.
tin481#8570: Is it specifically these old statistical methods?
3dprint_the_world#6486: sorry, I realize that sentence had an ambiguous parse.
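(To make the terminology concrete, here is a toy character-level n-gram model in the Shannon spirit: count (n-1)-character contexts and score text by its average surprisal. It trains and evaluates on the same string, so the number it prints is only an optimistic illustration, not a real entropy estimate.)
```py
from collections import Counter, defaultdict
import math

def ngram_bits_per_char(text, n=3):
    # Count next-character frequencies for every (n-1)-character context
    counts = defaultdict(Counter)
    for i in range(len(text) - n + 1):
        counts[text[i:i + n - 1]][text[i + n - 1]] += 1
    # Average surprisal (bits per character) of the text under those empirical frequencies
    total = 0.0
    for i in range(len(text) - n + 1):
        context, nxt = text[i:i + n - 1], text[i + n - 1]
        total += -math.log2(counts[context][nxt] / sum(counts[context].values()))
    return total / (len(text) - n + 1)

print(ngram_bits_per_char("the quick brown fox jumps over the lazy dog " * 20, n=3))
```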
olives ❀#2305: Does anyone know how much text I need to finetune a language model, say blenderbot/parlai? 1MB? 10MB? |
gwern#1782: unanswerable
gwern#1782: finetune with what for what? strictly speaking, you may not need any text at all if you can zero-shot it
Tet#6000: I think the structure of cells that is represented by a pikachu avatar on discord wants to finetune a conversational language model to give it like a specific personality when you talk with it
olives ❀#2305: ***he*** 👌
olives ❀#2305: r/specificallySuspicous but 🆗
Tet#6000: better?
olives ❀#2305: tbh i think what aforementioned character is trying to achieve can be done with multi-shot learning
olives ❀#2305: @Tet remind me why we cant just use multi(100?)-shot learning?
olives ❀#2305: Is it true that the more context the model gets, the slower `model.predict()` becomes?
Tet#6000: uhh *profusely tries to remember what multishot learning is*
Tet#6000: well the larger the model, the more comparisons it has to do
Tet#6000: so yes
Tet#6000: does "i have no idea how to do it or what that is" count?
olives ❀#2305: multi-shot = just give the model a few examples as to how it should respond given example contexts
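(A toy illustration of that, with an entirely made-up persona: "multi-shot" here just means stuffing a few example exchanges into the prompt so the model imitates the format.)
```py
# Hypothetical few-shot prompt builder; the example exchanges are invented.
examples = [
    ("How are you today?", "Chipper as ever! Which satellite pass are we tracking?"),
    ("What's your favourite band?", "Anything I can receive on 137 MHz, honestly."),
]

def build_prompt(shots, user_message):
    history = "\n".join(f"User: {q}\nBot: {a}" for q, a in shots)
    return f"{history}\nUser: {user_message}\nBot:"

prompt = build_prompt(examples, "Tell me about your hobbies.")
print(prompt)  # this string would then be fed to whatever language model you're sampling from
```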
olives ❀#2305: the ML experts here are probably cringing watching two kids talk about AI 😆
Tet#6000: yea
olives ❀#2305: look! @3dprint_the_world is typing! 👀
aw man, they stopped typing 😦
😭 noone is typing!
Tet#6000: rip |
3dprint_the_world#6486: I feel like there are better discords for this
Coafos#6356: Hi! I found this discord via a link from The Eye. I'm new at ML, but I know computers. Can I help with some project?
turian#1607: if there are, tell me and I'll join them
turian#1607: @Tet you want to do style adaptation?
StellaAthena#3530: @Coafos @turian I have what I think is a very cool project that I just haven’t had time for. There’s this ICML 2020 paper on objective learning in RL that I want to build off of. Their abstract introduces it pretty well:
> We seek to align agent behavior with a user’s objectives in a reinforcement learning setting with unknown dynamics, an unknown reward function, and unknown unsafe states. The user knows the rewards and unsafe states, but querying the user is expensive. We propose an algorithm that safely and efficiently learns a model of the user’s reward function by posing ‘what if?’ questions about hypothetical agent behavior. We start with a generative model of initial states and a forward dynamics model trained on off-policy data.
I find their model very interesting, but there’s another component that I think would make it both more realistic and more mathematically interesting: allow partial queries, and have the query cost be a function of the “length” of the policy in the relevant sense (if you think robotics navigation, it would be based on the arc length of the proposed travel).
I’ve looked at the math a little bit and it seems like vector calculus would play with RL quite well.
http://proceedings.mlr.press/v119/reddy20a/reddy20a.pdf
https://github.com/rddy/ReQueST
turian#1607: can you explain the idea more
StellaAthena#3530: I have to go for a bit, but I will later tonight
StellaAthena#3530: Okay I’m back
StellaAthena#3530: So here’s how they describe one of their tasks:
> **State-based 2D navigation.** This domain enables us to focus on the challenges of sequential decision-making, without dealing with high-dimensional states. Here, the state s in R^2 is the agent’s position, and the action a in R^2 is a velocity vector. The task requires navigating to a target region, while avoiding a trap region (Figure 2). The task is harder to complete in the test environment, since the agent starts closer to the trap, and must navigate around the trap to reach the goal. https://cdn.discordapp.com/attachments/729741769738158194/789705438265999390/image0.png
StellaAthena#3530: Hmmm this paper is a lot less clear than I had thought.
dopa#3178: this is a toy task, what paper is it?
StellaAthena#3530: Okay, so the robot generates candidate trajectories for movement `(s, a, s’)` where `s` is the initial state, `a` is the action taken, and `s’` is the resulting state. A trajectory is a sequence of these triples
StellaAthena#3530: http://proceedings.mlr.press/v119/reddy20a/reddy20a.pdf |
StellaAthena#3530: The robot then sends the proposed trajectory to an annotator, who knows the true reward function and provides a score for each tuple. The robot’s only access to the reward function is through the annotator, and it wants to learn a good trajectory to follow. It also wants to make the fewest queries possible.
StellaAthena#3530: I want to amend this picture so that, instead of proposing an entire path from start to finish, the robot is able to propose a shorter sequence of actions. We then have the cost of the query depend on the number of actions requested. The goal is to enable it to ask many short questions about tricky regions of policy space cheaply, without wasting funds on other parts of the policy space that it’s figured out.
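(A rough sketch of what that amended query cost could look like; the function, the per-unit price, and the example segment are all hypothetical, not from the paper.)
```py
import numpy as np

def query_cost(segment, base_cost=1.0, price_per_unit_length=0.5):
    """segment: list of (s, a, s_next) tuples with s and s_next as 2D positions."""
    arc_length = sum(np.linalg.norm(np.asarray(s_next) - np.asarray(s))
                     for s, _, s_next in segment)
    return base_cost + price_per_unit_length * arc_length

# A short two-step proposal is cheap to ask about; a full start-to-goal rollout is not.
segment = [((0.0, 0.0), (1.0, 0.0), (1.0, 0.0)),
           ((1.0, 0.0), (0.0, 1.0), (1.0, 1.0))]
print(query_cost(segment))
```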
dopa#3178: the sequence of steps the robot wants to take
dopa#3178: but it's not specified how many steps
StellaAthena#3530: @dopa I’m not sure what you’re saying. Can you elaborate
dopa#3178: they used an MDP model: first the agent generates a state -> action -> state -> action sequence, then it is converted into a trajectory
StellaAthena#3530: Okay
dopa#3178: I am not sure if they specify how many state/action transitions are generated for the user
dopa#3178: it seems it can be just one, or multiple transitions
StellaAthena#3530: Ah
StellaAthena#3530: I had gotten the impression it was a full policy proposal, but maybe not.
StellaAthena#3530: The paper is not very clearly written
dopa#3178: also, their novelty corners do not seem right to me
dopa#3178: it just does not compute why those corners count as novel
StellaAthena#3530: Because the corners are the points furthest away from known points
dopa#3178: maybe if the robot were trained to go around the danger from the left side, but then learned to go from the right side, I could see that being novelty
StellaAthena#3530: I don’t understand what you’re saying. I’m sorry
dopa#3178: in the context of the task, I view novelty in terms of the trajectory
dopa#3178: but still a useful trajectory
StellaAthena#3530: Yeah, I’m not sure. I haven’t dug into their code yet |
StellaAthena#3530: Their paper does feel weird on second reading
dopa#3178: I guess I am not wrong in saying that novelty means different actions with similar or equal reward
StellaAthena#3530: No, it’s trajectories that are farthest away from known trajectories
StellaAthena#3530: The paper says this in section 3.3
dopa#3178: I see; it is still strange, because in a large search space I don't think it will work well
StellaAthena#3530: I think that’s a reasonable criticism, and I have some ideas about improving that. For example, using MCMC to sample trajectories. It’s even not completely unreasonable to ask the labeler for finite differences (“did it get better after I did this action”) which could allow for HMC.
StellaAthena#3530: I like the general problem framework, and think that there’s interesting stuff to be done in this general space
dopa#3178: it is very interesting, I agree with this, totally
dopa#3178: but I am not sure about reward given by humans as input; maybe novelty as input from humans could be more interesting
dopa#3178: > We ask the user to label each transition with a scalar reward R(s, a, s')
this is not realistic for many problems
dopa#3178: this paper reminded me of this
http://picbreeder.org/
dopa#3178: aaand https://www.cs.utexas.edu/~ai-lab/pubs/stanley.ieeetec05.pdf
triggerhappygandi#0001: Google AI blog says they won MLPerf 0.7 with their v4-4096, while Nvidia says _they_ won with their A100 stack. Who _did_ win MLPerf?
kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/789833094613565476/unknown.png
kindiana#1016: I think google wins overall but you can't buy a TPU
kindiana#1016: (you can also see the actual results lol https://mlperf.org/training-results-0-7)
gwern#1782: yeah, you definitely can't buy and stack 2 v4-2048s ^_^
dopa#3178: is there good source to understand differences between TensorFlow and PyTorch ? |
3dprint_the_world#6486: yes
3dprint_the_world#6486: tensorflow and pytorch.
dopa#3178: 🙂
dopa#3178: I mean as libraries, not the difference between the words
dopa#3178: that's some bot-like answer 😂
StellaAthena#3530: Why do you want to “understand the differences”
StellaAthena#3530: That’s a weird request
StellaAthena#3530: Usually when people say that they mean something more like “I want to know which to use” which is a different question entirely
bmk#1476: Tldr Pytorch is better, only use Tensorflow if you have literally no alternative
dopa#3178: I'd simply like to know the differences in general
triggerhappygandi#0001: ~~gpt neo in mesh Tensorflow why?~~
bmk#1476: ~~because we are masochists~~
triggerhappygandi#0001: Taking the fetish to the extreme then
bmk#1476: don't u kinkshame https://cdn.discordapp.com/attachments/729741769738158194/789950660212686848/1sll0z490tiz.png
triggerhappygandi#0001: This
triggerhappygandi#0001: This is unacceptable
triggerhappygandi#0001: Prepare to be judged.
triggerhappygandi#0001: :guilty:
gwern#1782: https://imgur.com/dSLQ6CD hey, you'd spin it
triggerhappygandi#0001: This is what men want in fashion |
triggerhappygandi#0001: Instead we get ugly dresses.
gwern#1782: the secret is that women don't dress for men, they dress for other women and gay men
gwern#1782: I saw a comic that put it well a while ago: "what you think will happen when you go to the gym [lots of women surrounding buff protagonist 'wow' 'so hot' 'cool'] / what will actually happen [lots of other male gymrats surrounding buff protagonist 'such lift' 'very swole' 'dude what do you cycle']
dopa#3178: hmm ...
StellaAthena#3530: That can be true. At least, it's true more often then you'd expect. Presumably heterosexual women do dress up for men at least sometimes though. I dress up for my girlfriend.
gwern#1782: (I am willing to reveal only that I was double-digit years old when, upon seeing another woman and going "that looks like extremely hard work and expensive makeup but is incredibly unsexy and offputting, as every guy I've asked agrees, why does she do it", I finally realized she almost certainly knew that, and didn't care, because it was to impress other women and show off to them)
gwern#1782: (something something mimesis)
dopa#3178: women really do go to the bathroom to make sure they look pretty on a date; I noticed this pattern when I was like 16. It always impressed me how self-aware women are about how they look. As a dude, I really don't care how I look, and it always makes me depressed when people treat me differently when I am in expensive clothes.
BarbNation#5188: Very well put.
dopa#3178: it seems I'm missing some context
dopa#3178: some women dress less attractively so as not to be harassed; gotta be strategic about where you go out 🙂
dopa#3178: some dress to be attractive as a sexual object and enjoy guys staring at them; I was dating one who would enjoy seeing guys get into fights because of her, for example
dopa#3178: this is both from ex-gf experience, but there is some deep drive for women to look pretty; maybe it's not in the context of dress?
mgostIH#0245: https://cdn.discordapp.com/attachments/729741769738158194/789978036853669908/work-out-cartoon.png
dopa#3178: but it just does not make sense that there is no social factor in being dressed well, for both men and women, in any social context.
dopa#3178: there is a good chance that the choice of clothes defines a lifestyle, and in my experience women have a sense for this; muscles play a role, but a very small one. One of my friends is the type of guy in that cartoon, and yet dates hyper-attractive women in their 20s 🙂
dopa#3178: there are so many factors: social status, wealth, personality, natural looks
mgostIH#0245: Imo muscles contribute a lot to the look of men, but too many may not be likable
dopa#3178: right
mgostIH#0245: Idk what that point is which is why I am still exercising too |
mgostIH#0245: I personally never liked myself when I was fattier
dopa#3178: it is not just one trait that makes someone sexually attractive; it is a mix of many traits, natural and learned. One way or another we learn how to compensate for unfortunate traits 🙂
dopa#3178: in this context, to say dress has virtually nothing to do with sexual attractiveness is just false.
mgostIH#0245: Good luck finding good clothing when you are XXL or above 😔
mgostIH#0245: Well huh, maybe in the US it's a bit different, I live in a statistically slimmer country (compared to the US, which is like the majority of the world)
dopa#3178: I know what you mean; my old coworker was a giant, I mean people would freak out at his size
AI_WAIFU#2844: how did we get here and can we move it to #off-topic
mgostIH#0245: Damn right lmao, graphics cards are hell of a drug
3dprint_the_world#6486: I'm saying the best source to learn about their differences is the frameworks themselves
3dprint_the_world#6486: I mean, sure, you can always read some blog summary by a ML newcomer who's just discovered what neural nets are.
3dprint_the_world#6486: OR you can just use those frameworks yourself, or just read their docs, and get something far more valuable which is first-hand knowledge.
3dprint_the_world#6486: As for which to use: PyTorch.
3dprint_the_world#6486: Don't use tensorflow unless you have a really really good reason. If you have to ask what the difference between tensorflow and pytorch is, then you don't have a good reason 😉
dopa#3178: that make sense
3dprint_the_world#6486: I guess part of the reason for me saying this is that the differences can be subtle and hard to grasp unless you actually use them.
3dprint_the_world#6486: especially when it comes to tensorflow 2.0.
3dprint_the_world#6486: Before 2.0, it was easier: "tensorflow uses lazy evaluation and pytorch uses eager evaluation"
dopa#3178: do both of them use eigen math library ?
3dprint_the_world#6486: for 99% of uses you don't care what the underlying math library is.
3dprint_the_world#6486: PyTorch uses ATen, not eigen. |
3dprint_the_world#6486: Again, it's mostly irrelevant unless you're doing C++ development (like LibTorch), and even then it's not really super relevant because they are broadly similar.
acertain#1646: afaik pytorch uses mkl for cpu & cudnn(& cublas?) for gpu
3dprint_the_world#6486: that's at a level below ATen though.
acertain#1646: probably tf does the same
3dprint_the_world#6486: But yes, ATen can indeed use cudnn as an actual compute backend
acertain#1646: actually looks like the cpu blas package is configurable <https://github.com/pytorch/pytorch/blob/master/setup.py#L106>
3dprint_the_world#6486: Ultimately if you're just using the python interface, all you care about is if you can use the CPU and GPU and both PyTorch and Tensorflow allow you to do both.
3dprint_the_world#6486: With pretty much equal efficiency.
3dprint_the_world#6486: Tensorflow allows you to use TPUs as well. But PyTorch can do that now too, kinda-sorta.
acertain#1646: tf w/ tf.function might be slightly faster due to xla, but pytorch probably has more custom cuda kernels on github
dopa#3178: TorchScript got my curiosity
3dprint_the_world#6486: why
dopa#3178: I am interested how just-in-time compilation is implemented in C++, assuming I read this correctly
3dprint_the_world#6486: at work we did a huge project using torchscript and then scrapped it all once we discovered what we were trying to do was way easier with torchserve (and it ran faster too)
dopa#3178: @3dprint_the_world I just like to tinker with things without any specific purpose
3dprint_the_world#6486: fair enough.
3dprint_the_world#6486: keep in mind that torchscript/jit is pretty limited
dopa#3178: is jit called from python also ?
3dprint_the_world#6486: if you pick some random net off somewhere there's a high probability it won't work with torchscript
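(For anyone following along, a minimal example of what scripting looks like, assuming a recent plain PyTorch install; real nets fail to script far more often than this toy module does.)
```py
import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        # torch.jit.script compiles this data-dependent branch; torch.jit.trace would not
        if x.sum() > 0:
            return self.fc(x)
        return self.fc(-x)

scripted = torch.jit.script(Toy())
scripted.save("toy.pt")                    # can be loaded from C++ (LibTorch) or Python
print(torch.jit.load("toy.pt")(torch.randn(1, 4)))
```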
Louis#0144: I angered my RAs bc I moved too fast |
Louis#0144: rip
Louis#0144: i didnt expect that ngl
Louis#0144: I kinda assumed they were getting bored so I went faster
Louis#0144: LOL
3dprint_the_world#6486: and most of the time it isn't even faster to use torchscript. actually it can even be slower.
dopa#3178: that makes sense, JIT does not mean faster by default hehe
3dprint_the_world#6486: and you can also get just numerically wrong results, especially if you use things like batch normalization and your batch sizes during prediction/training don't match
dopa#3178: I honestly do not look at things like I need them faster out of the box for no reason
3dprint_the_world#6486: and a huge fraction of 'modern' nets (like say resnet) do use batch norm
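(The usual footgun behind that: forgetting to switch to eval mode, so BatchNorm keeps using per-batch statistics at prediction time. A minimal sketch, assuming plain PyTorch:)
```py
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())

model.eval()                    # BatchNorm now uses its running stats; dropout is disabled
with torch.no_grad():
    y = model(torch.randn(1, 3, 32, 32))   # a batch size of 1 no longer skews the normalization
print(y.shape)
```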
dopa#3178: just making sure that if I need xyz to be faster, I have an idea of what to do
3dprint_the_world#6486: if you need a net to be faster,
3dprint_the_world#6486: use a faster computer
dopa#3178: depends on what "faster" means; like, is it really worth making the optimization effort and paying more for hardware, if possible
3dprint_the_world#6486: most of the time if you're hitting >90% gpu utilization there's probably not a lot that choice of framework etc. can really do to dramatically improve speed.
dopa#3178: I don't have much experience with GPUs, mostly CPU and networking
3dprint_the_world#6486: of course sometimes the challenge can be hitting max gpu utilization.
3dprint_the_world#6486: in which case yes, how you code your net can be very important.
dopa#3178: I think my first GPU will come when I get hands-on with transformers, but it will not be soon
dopa#3178: I want to first finish my network library for multi-agent simulation
dopa#3178: then it's learning agents time 🙂
dopa#3178: if things go according to plan (they will not), I will have a simple rendering lib, with a scalable network similar to mpich, that runs on virtually any hardware
dopa#3178: the project makes me doubt my cognitive abilities 🤓
Louis#0144: i remembered my undergrad thesis
Louis#0144: this shit was fire
Louis#0144: omg
Louis#0144: maybe I should publish it tbh
Louis#0144: its like this weird invariant between fourier stuff and logistic functions
AI_WAIFU#2844: > its like this weird invariant between fourier stuff and logistic functions
wat? elaborate
Mischa#0599: I don't know what they pay you, but it's not enough
bmk#1476: Who is they
Louis#0144: Wut
Louis#0144: It was a paper I wrote on chaos theory
Louis#0144: Like it’s a well researched paper
Louis#0144: I wasn’t pulling a fast one if that’s what u mean
Louis#0144: I just realized that nested logistic functions can be expanded into this weird Fourier sequence
Louis#0144: I didn’t do anything with it
Louis#0144: Just like an oh hey wtf is this, kinda cool thanks for reading my undergrad thesis ig
dopa#3178: what was paper about ?
StellaAthena#3530: Chaos theory, and a weird invariant between fourier stuff and logistic functions. |
dopa#3178: I don't understand how Fourier stuff fits into chaos theory
dopa#3178: the reason I asked is that it's a rare opportunity to be able to read a paper and ask questions about it 🙂
StellaAthena#3530: What do you know about chaos theory
dopa#3178: it explain why we can't go back in time, only forward
bmk#1476: no, it's still deterministic
bmk#1476: in theory still reversible
dopa#3178: as computation I guess, yes
StellaAthena#3530: That is not true, actually.
dopa#3178: what do you mean ?
StellaAthena#3530: It does not explain why we can't go backwards in time
dopa#3178: in physical sense I think it does, since things can't go back exactly same
dopa#3178: I don't remember who explained it this way
bmk#1476: chaos, 2nd law, and (non)determinism are all similar but different
dopa#3178: well, chaos: small perturbations in a system produce large divergences
StellaAthena#3530: What is your background in physics?
dopa#3178: I have no background in physics; in fact, no background at all. I am a true jack of all trades and master of none
dopa#3178: mostly all I do is read or do things I don't understand, and hope it will click in my brain, MAGICALLY
dopa#3178: 🙂
StellaAthena#3530: This has nothing to do with reversibility. Stability is a foreward-in-time phenomenon that makes it *appear to be* non-deterministic but it's not
dopa#3178: hmm |
dopa#3178: this is how I understood why reversibility is impossible: because it will never end up in exactly the same place going backward
bmk#1476: @dopa do you happen to be french
dopa#3178: no, why ?
bmk#1476: ah, that was a guess based on your use of punctuation
dopa#3178: as you can see I have a bit of a bad brain-to-keyboard connection 🙂
StellaAthena#3530: You seem to be confusing *what a person can do* and *what the universe can do*
dopa#3178: you can't just put any physical system back exactly the same way, it is impossible - right?
bmk#1476: you can with perfect information
bmk#1476: no information can be destroyed
bmk#1476: chaos just says that with imperfect knowledge, errors *accumulate* rather than cancelling out eventually
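(A toy demonstration of that accumulation, using the logistic map: two starting points that differ by 1e-10 end up completely decorrelated after a few dozen iterations.)
```py
def logistic(x):
    return 4.0 * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10     # two almost-identical initial conditions
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 15 == 0:
        print(f"step {step:2d}: |difference| = {abs(a - b):.3e}")   # the error grows, it never cancels
```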
dopa#3178: that makes sense; I was thinking more about process errors I guess
dopa#3178: no matter what you do it will have small deviations (errors); having perfect information and acting on it is different - right?
dopa#3178: not sure how floating point precision fits into context of chaos theory
StellaAthena#3530: > no matter what you do it will have small deviation (errors),
StellaAthena#3530: Is this supposed to be a statement about people or about physics
dopa#3178: any physical process
StellaAthena#3530: No, it will not have errors
dopa#3178: in totally isolated system, I can see how this can be true
dopa#3178: but nothing is totally isolated |
StellaAthena#3530: Physics does not have errors
bmk#1476: i think they're referring to uncertainty principle, perhaps?
StellaAthena#3530: What appear to be errors are due to human errors of measurement or lack of understanding of physics
StellaAthena#3530: Physics does not have errors
bmk#1476: that's the closest coherent position i can think of
StellaAthena#3530: @dopa What do you mean when you say physics has errors
dopa#3178: I was talking more about this than about physics actually having errors
dopa#3178: but at the same time, it got me thinking about randomness
dopa#3178: like even on the smallest scale a trajectory can have very, very small randomness
dopa#3178: and such randomness can accumulate into a substantial difference
StellaAthena#3530: Okay, but human errors have no bearing on whether *physics* is reversible
dopa#3178: e.g. no particle trajectory in the universe is ever exactly the same
dopa#3178: so if you try to reverse a particle's trajectory it will be different
dopa#3178: this is my speculation, I am not arguing that this is the case
bmk#1476: i believe it is somewhat counterproductive to make speculations about something that is well known and studied even after being presented evidence that said speculations are in fact not accurate
StellaAthena#3530: The next time you speculate, it would be helpful if you said that that's what you are doing instead of making factual claims.
dopa#3178: am I reading this correctly, that subatomic particle trajectories in the universe are perfectly reversible and there is no randomness in this process
dopa#3178: and it is a purely deterministic process?
bmk#1476: :yes:
dopa#3178: then universe is deterministic - right ? |
StellaAthena#3530: Quantum mechanics is reversible
StellaAthena#3530: "reversible" and "deterministic" are not the ssame thing
StellaAthena#3530: Even Newtonian physics is not determinisitc
StellaAthena#3530: but it is, completely and 100%, time reversible
dopa#3178: I understand there are laws in the universe and have no issue with that, but what is confusing is that physical laws can create random processes, and those don't have to be reversible
dopa#3178: in what context is "deterministic" used?
dopa#3178: in terms of cause-and-effect ?
StellaAthena#3530: A process is deterministic if the current and past states of the universe uniquely determine the future states
bmk#1476: and reversible means the universe is injective
bmk#1476: right?
dopa#3178: is it specifically in multiples ? (states, futures)
StellaAthena#3530: Actually, let me be more precise
StellaAthena#3530: Quantum mechanics is simultaneously symmetric under charge, parity, and time reversal as a group of three.
StellaAthena#3530: This means that a "conjugate universe" where all objects are reflected through a point, time is reversed, and anti-matter and matter are swapped would evolve under the same dynamics as our universe.
bmk#1476: (is there any reason why the weak force is basically the only one that messes up the simpler symmetries?)
bmk#1476: I thought I heard somewhere that the weak force is the only thing that violates parity or something
StellaAthena#3530: When we talk about time reversibility we can talk about two "scales." We can ask if a specific interaction is reversible, or we can ask if the entire universe's dynamics are
StellaAthena#3530: I was assuming the question was about the first, because we were talking about a single particle moving chaotically.
StellaAthena#3530: But if we want to talk about the entire universe (as we are now doing) then no, it's not time reversible.
StellaAthena#3530: If a particle can move from point x to point y, then it can also move from point y to point x under the right circumstances. |
dopa#3178: the right circumstances are my confusion point; it does not matter in this case whether it is a single particle or multiple particles, as I understand it, there is no way from a practical standpoint to move that particle (or particles) back to the original state
StellaAthena#3530: Practical for you? No. Practical for the universe? Yes
bmk#1476: It's also not practical to conduct banach tarski irl, but thankfully we inhabit the wonderful world of theoretical land
dopa#3178: for the whole universe, like everything, yes; but I don't think even the universe can isolate some subspace to reverse time this way
StellaAthena#3530: To be fair, it's also impossible for the universe to do Banach-Tarski @bmk
bmk#1476: Yeah, fair
StellaAthena#3530: Anyways, this is complicated, I'm not communicating well, and I'm probably going to make mistakes because of 1 and 2.
StellaAthena#3530: So I'm going to dip
StellaAthena#3530: Also because of 3: I am not an expert in this field.
dopa#3178: thank you for help!
dopa#3178: maybe we can continue next time using a random generator as the universe, for simplicity
dopa#3178: the Banach–Tarski paradox, my mind is blown!
triggerhappygandi#0001: Some guy once told me that even if you're not gay, you should atleast once visit a gay bar. They will get you drink/food for free.
triggerhappygandi#0001: Atleast for the first time.
triggerhappygandi#0001: Can we generate random numbers on commercial computers?
triggerhappygandi#0001: I know quantum computers can give us _truly_ random numbers, but I heard that even regular computers use the heat emitted from the hardware somehow to generate random numbers.
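(Roughly right: the OS keeps an entropy pool seeded from hardware event timings, and many CPUs also ship an on-chip hardware RNG; from Python you can reach it like this.)
```py
import os
import secrets

print(os.urandom(16).hex())     # 16 bytes from the OS's cryptographic RNG
print(secrets.randbelow(100))   # an unpredictable integer in [0, 100)
```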
bmk#1476: but do numbers really exist?
bmk#1476: how can you generate random numbers if numbers themselves don't even exist
triggerhappygandi#0001: :ptsd:
triggerhappygandi#0001: I wasn't prepared to go this deep |
dopa#3178: so is mathematics discovered or invented ?
triggerhappygandi#0001: Both
triggerhappygandi#0001: Many people say
dopa#3178: then numbers kind of exist right ? 🙂
triggerhappygandi#0001: Iirc Lex Friedman asked that question in his episode with 3blue1brown guy
triggerhappygandi#0001: And he said it's both
dopa#3178: interesting
bmk#1476: neither
bmk#1476: how are you so certain that mathematics exists
dopa#3178: if mathematics does not exist, then it is invented by the human mind
dopa#3178: if mathematics is discovered, then it exists outside the human mind
bmk#1476: neither
dopa#3178: what do you mean by neither ?
dopa#3178: universe is also neither ?
bmk#1476: there is no such thing as mathematics
bmk#1476: it's a conspiracy invented by mathematicians
dopa#3178: hahaha
guac#4716: math is the most extreme form of procrastination
dopa#3178: can't do much in life without math
guac#4716: i'm sure my cat is doing fine |
dopa#3178: even chickens are able to count to 4
guac#4716: arithmetic or mafs?
dopa#3178: this is what I tell kids, at least 🙂
dopa#3178: isn't arithmetic basic math?
bmk#1476: how can arithmetic exist if numbers dont exist
dopa#3178: well also, the cat might be doing math, just not aware of it
guac#4716: so math is discovered
bmk#1476: something cant be discovered if it doesnt even exist
guac#4716: let me hit this bong hold up
dopa#3178: lol
dopa#3178: you need to explain more of what you mean by not existing
dopa#3178: if math does not exist, then what does?
dopa#3178: does 0 exist?
dopa#3178: or is it a pure construct of the mind?
guac#4716: it's all abstract thought only certain animals can reach certain rungs
guac#4716: 0 exists for us
bmk#1476: there is no such thing as abstraction
guac#4716: i like that
bmk#1476: RETVRN TO CONCRETE
dopa#3178: abstraction is a construct of the mind, but math is not?
dopa#3178: if abstraction is a construct of the mind, then math probably is too
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/790112992892682240/i2a3jdmumjt41_1.png
guac#4716: lol what would we do without knot theory thoughhhh
guac#4716: how do people's brains not turn into mush after they stop math-ing
bmk#1476: Without knot theory the shoe industry would collapse, this is why Big Math is trying to cover up the fact that math doesn't actually exist
guac#4716: fucking Kanye and his yeezy's
dopa#3178: I don't care about proofs; all I want is to read a paper and convert the math into code
dopa#3178: and I suck at it 😦
dopa#3178: like, years of my life, and I still have to spend days/weeks/months to grasp how to convert math into code 😦
guac#4716: what kind of math are you talking about? things are usually easy sailing aside stability issues
guac#4716: numerical algos?
dopa#3178: I think any math; like understanding sigma, integration, etc ...
dopa#3178: but when I see them in a paper I am like, hold on, what is x_i, etc ...
dopa#3178: like I think I understand the meaning of the symbols, but I cannot understand the connections
dopa#3178: if that makes sense
guac#4716: can i interest you in our lord and savior, anki: https://apps.ankiweb.net/ ?
guac#4716: nah but seriously if you're struggling with basics, you might need to just memorize the symbols and build up your intuitions.
dopa#3178: I kind of have visual memory
dopa#3178: I remember things like pages of texts
guac#4716: put that shit to good use lol do you draw? |
dopa#3178: nope
dopa#3178: Like, I memorized the EMT test for fun
dopa#3178: I also forget things; it does not stay forever
dopa#3178: but some things do; for some books I still remember the page numbers, chapter names, pictures, and what they were about
dopa#3178: like I never, ever forget where I put things 🙂
dopa#3178: The Princeton Companion to Applied Mathematics
dopa#3178: this is what usually saves my life
dopa#3178: but it is still an extreme effort of comprehension
dopa#3178: Schrödinger's equation: I need to look at GitHub code, there is no other way for me to get it
dopa#3178: 🙂
triggerhappygandi#0001: Aaaaaaaa stop
triggerhappygandi#0001: Things exist. Numbers count things. Ergo numbers exist.
triggerhappygandi#0001: :chonk:
triggerhappygandi#0001: This is final
dopa#3178: but it is an abstraction and does not have to be the same for all counting living systems
dopa#3178: is math a universal abstraction for all counting living systems in the universe, or is it not?
3dprint_the_world#6486: re the earlier discussion about women.
I asked my gf if women dress for men, other women, or themselves.
Her answer: "Yeah."
3dprint_the_world#6486: how old are you @dopa |
dopa#3178: almost 40
dopa#3178: what did you think ?
dopa#3178: probably 15, haha
guac#4716: dopa life is shorrrrrrt. follow your nose. if you like a certain math just make a hobby out of it. you can't digest everything lol
dopa#3178: most things in life for me are just hobbies; not that I am insanely wealthy, I simply don't really care
dopa#3178: some people don't like me for that: how can you not care, you need this, that, blah, blah
dopa#3178: I just want to hit my intellectual limit before I die
dopa#3178: will I get to a point where I just give up learning, and be like, that's it, I cannot go any further
dopa#3178: fun fact: after the age of 40 or so, humans start losing 0.2% of brain mass
guac#4716: mortality is a bitch
dopa#3178: it is the only sure thing that we know about in universe.
dopa#3178: that's why is good to get 3+ year olds a hamster 🙂
guac#4716: or an ipad to distract them from it hehe
dopa#3178: it's insane to think how tablets/phones have become a natural part of our daily lives
dopa#3178: it was not that long ago that there was PalmOS
guac#4716: how was writing on those puppies?
dopa#3178: I was not writing on them much to be honest, but played RTS game alot
dopa#3178: it had multiplayer too
dopa#3178: https://en.wikipedia.org/wiki/Warfare_Incorporated
guac#4716: oh hell yeah that looks bitchin' |
dopa#3178: it was a Dune II clone to a large extent
dopa#3178: at one point we had a championship at a bar for drinks, after hours
guac#4716: ah yes the pre-covid era. the world was fun ay
dopa#3178: well can't wait to get vaccinated hehe
triggerhappygandi#0001: Gotta learn how to extend age beyond 500 in our lifetime
triggerhappygandi#0001: I don't want to die at 80
triggerhappygandi#0001: Life is too short
triggerhappygandi#0001: :neetz:
Dal#7192: Is there anything to suggest that skill fluency/learning in organic brains is anything more than strengthening the existing pathways the brain uses to accomplish the task? I.e. is there any evidence to suggest that the organic brain performs any internal optimization on problems?
dopa#3178: Synaptic plasticity has many forms (my guess); I think it is about creating new synapses, destroying them, and reinforcing existing ones.
dopa#3178: the brain is not a fully connected network
Dal#7192: What I'm getting at is whether, in terms of energy for processing, the brain gets any better at performing tasks or simply is able to perform them more readily
dopa#3178: depends on practice and the attention one puts into the effort, I guess
dopa#3178: most tasks are learned by the brain, and we have a predisposition for learning and improving some tasks, like walking, talking, etc...
Dal#7192: That doesn't really suggest anything about what I'm asking, I think
dopa#3178: seems like I am not sure what you mean by: gets any better at performing tasks or simply is able to perform them more readily
CRG#8707: Isn't this kind of the same thing according to the lottery ticket hypothesis?
dopa#3178: like walking is not directly encoded in brain, it is learned task that can be taken to extreme level, like marathon running, acrobatics, etc...
Dal#7192: That is fascinating and I can't wait to explore it, but I don't think it's the same thing
dopa#3178: also, after a hemispherectomy (half the brain is removed), brain functions are transferred from the removed half to the working one
Dal#7192: The first time the brains performs a task, it uses 100 calories, and has to adapt 100 neural connections, and is able to complete the task in 10 ms. The 1000th time it performs the task it has to adapt 10 neural connections, finishes the task in 3 ms, and uses how many calories?
Dal#7192: I'm not sure calories is the right unit here, I'm just substituting for general efficiency
Dal#7192: Is there anything to suggest the brain identifies/creates novel neural structures other (more optimal) than the ones it uses when initially learning a task?
dopa#3178: I don't think that is how it works
dopa#3178: the brain is a highly interconnected system with functional and structural connections
dopa#3178: it seems you are suggesting that only some local area has to change
Dal#7192: I'm suggesting an arbitrary area up to and including the whole
Dal#7192: Can I get from the top of my text to the bottom of my text using | if I started with )
dopa#3178: it seems it learns some new neural path and then reinforces it as the person experiences that task more
dopa#3178: but that does not necessarily mean a strictly new connection; it might be suppressing another part of the brain
Dal#7192: That's my understanding as well. A brute force reinforcement/refinement approach rather than any optimization process.
dopa#3178: it's not brute force
dopa#3178: if it were, we would already have artificial brains 🙂
Dal#7192: I think the industry is closer than it realizes but I also think we're using different terminology 🙂
Dal#7192: I'm still debating whether the brain has any higher order process than pure in-out topologies of arbitrary complexity (and inherent specialization).
dopa#3178: it does optimization to an extent, because the synaptic pathway will naturally search for a more optimal path; I am not an expert in this
Dal#7192: It strikes me that even if there is a higher order process occurring, a brute force topology can (and could be optimal at) achieving sapient-level or even sapient-like output
dopa#3178: the brain has structural and functional topologies, and interactions between and among them produce behavior
Dal#7192: Yep. And that is a very computable architecture.
Dal#7192: > because synaptic path way will naturally search for more optimal path |
This is more or less what I was asking about
dopa#3178: we do not fully understand the neural architecture of the brain
dopa#3178: maybe local areas are well mapped out, but how information is integrated is unknown to a large extent
Dal#7192: Sure. I don't think we're close to replicating instinctual structures, but that's a different problem
dopa#3178: even in artificial neural networks it is unknown to some extent
Dal#7192: Probably will be until we get an oracle with enough capacity to model it and explain it to us in a way we comprehend ^^
Dal#7192: This looks pretty promising for my question https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5021692/
Louis#0144: lmao literally all the tweetqa models in huggingface's repo entirely miss the point of tweetqa
Louis#0144: its like comically bad
Louis#0144: theyre all extractive but tweetqa specifically needs abstractive
Louis#0144: i feel like this is the case for a lot of their models tho... just code monkeys reimplementing them and running them on colab not actually aware of what it is theyre implementing...
Louis#0144: 🤷♂️
Louis#0144: w/e
Louis#0144: every single one of these models is a waste of space https://cdn.discordapp.com/attachments/729741769738158194/790331461617975296/Screen_Shot_2020-12-20_at_4.34.55_PM.png
Louis#0144: lol
Louis#0144: w/e guess ill upload my own
cfoster0#4356: The proliferation of models without model cards or any real description/evaluation is an *issue* to say the least
Louis#0144: yeah....
Louis#0144: ngl like half the models they host should be deleted
Louis#0144: they just give no value to anyone |
Louis#0144: why bother hosting them
Louis#0144: beyond that you dont know if some of them are poisoned...
Louis#0144: im sure theres at least one or two poisoned models
Louis#0144: idk
Louis#0144: huggingface puts the bar way too low
gwern#1782: who would be bothering uploading poisoned models?
StellaAthena#3530: Who would bother teaching Microsoft chatbots to spout Nazi propoganda?
gwern#1782: that has immediate lulzy consequences
gwern#1782: (and I'll just note I'm not sure that actually happened. what I read shortly after the Tay incident was that it was actually a repeat functionality being abused to puppet the account, and ever since then, everyone has simply repeated the story 'trolls taught Tay nazi propaganda by simply chatting with it'. it reeks of your typical leprechaun, especially in this area where people will talk endlessly about echo bubbles or dunning-kruger or youtube radicalization without ever bothering to mention the followups showing it wasn't real or didn't replicate. everybody "knows" Tay just like they "know" Cambridge Analytica's ads worked or they "know" the backfire effect is real.)
bmk#1476: i remember hearing something about how this particular case is actually less exciting than it's made out to be
bmk#1476: oh lol didnt see gwern's message
bmk#1476: yeah that's probably it
chirp#4545: Re the scaling post that’s going around - Julian Michael, an NLP researcher, left some interesting comments about what might not be possible from scaling alone. https://www.lesswrong.com/posts/k2SNji3jXaLGhBeYP/extrapolating-gpt-n-performance?commentId=x7cGXDCju6qsnzRfg#comments
Back when GPT-3 was getting really hyped up, he made what I thought was a really insightful post about whether language models can learn meaning: https://twitter.com/_julianmichael_/status/1286324775288098816
One thing that jumped out to me is that he explicitly mentions “Transformative AI” — that’s an Open Philanthropy term that I’ve seen going around more and more lately.
triggerhappygandi#0001: Is this in context to the bot they let loose on twitter a few years back? Tay something.
triggerhappygandi#0001: Nvm gwern confirmed it
3dprint_the_world#6486: YES. This frustrates me to no end. |
chirp#4545: so I'm really curious what you all make of Julian Michael's arguments about scaling, and how it might not be a big deal in real world applications: https://www.lesswrong.com/posts/k2SNji3jXaLGhBeYP/extrapolating-gpt-n-performance?commentId=MB2YrtYN7fjwY7eNA
really really not sure what to think
i can imagine both extremes, and AFAIK i can't rule either one out
- pessimistic: "we have no idea how to formulate the most important real-world problems as tasks that benefit from scaling up of models/data/etc"
- optimistic: "there is X family of tasks (e.g. conversation) that would be a huge deal in the real world, and scaling is all that is needed to make something that's transformative"
es#4913: Fascinating read, thanks for sharing
andyljones#7746: it reads as very god-of-the-gaps to me. now that the Hard Thing has turned out to be Easy, *actually* it's going to be this other thing that's the Hard Thing.
there are also a bunch of points in there that would be daft if you were talking about another human rather than a very large language model. i suspect the author would consider that an argument well-made, whereas i - and likely a lot of the regulars here - will consider it damning.
it's fair to say that his roadblocks are *plausible*, but p'sonally i don't consider them *likely*.
Technobird22#2055: You are the Jack Clark who wrote the Open AI blog posts?
Technobird22#2055: Very nice articles! Awesome seeing so much of the community here
Daj#7482: I don't think Jack uses his Discord account
Daj#7482: he was here once briefly
Daj#7482: how did you even find that message lol
Technobird22#2055: lol I forgot
Technobird22#2055: I think I scrolled... a lot |
Daj#7482: No kidding, that was like almost 6 months ago lol
Technobird22#2055: lol yeah
Technobird22#2055: Sounds like an interesting project btw
Daj#7482: Glad you think so! It's been fun and interesting so far and is primed to get much more interesting real soon
Technobird22#2055: sounds great
Technobird22#2055: Are you the one "in charge"? 😄
Daj#7482: Yesn't, we don't really have a super formal leadership
Technobird22#2055: ah okay
Daj#7482: But I'm the nominal figurehead I guess
Technobird22#2055: cool
Technobird22#2055: nice chatting to you btw
Technobird22#2055: Don't know how I could contribute just yet
Technobird22#2055: I've been playing around with GPT2 and finetuning it
Technobird22#2055: but I haven't been very "technical" lol
Technobird22#2055: I've mainly integrated it into a discord bot, and it's been quite interesting
Daj#7482: Our main bottleneck for developing GPTNeo is dev time, though we've been making good progress. Our data project (The Pile) is wrapping up v1 nicely atm, though I'm not strongly involved with that. Other than that, some webdesign work is being done and the like
Technobird22#2055: sounds great
Technobird22#2055: btw, how big do you expect the final GPTNeo to be? and how much RAM/VRAM will be required to run it?
Daj#7482: Our goal is at least as large as GPT3 (175B params), but our real goal if possible is 1T
Daj#7482: Running something that big in production is a different kind of challenge and we will be looking into stuff like distillation, pruning, quantization etc to compress it to a usable form |
Daj#7482: But that's far down the line
Technobird22#2055: yeah
Technobird22#2055: Huggingface seems to do a good job with distillation
Technobird22#2055: Seems like you've got someone from HF here as well (Teven), but not active either
Daj#7482: Teven shows up from time to time
Daj#7482: We talk to him semi regularly
Technobird22#2055: got it
Technobird22#2055: What can @Isaac McHorse do?
Daj#7482: not much we kinda neglected working on him haha
Daj#7482: I think he yells at you for saying bikeshedding
Isaac McHorse#2007: HEY! DON'T BE LIKE THAT!
Daj#7482: thanks Isaac
Technobird22#2055: lol
Technobird22#2055: bikeshedding
Isaac McHorse#2007: WHAT ARE YOU DOING BIKESHEDDING? BACK TO WORK!
Technobird22#2055: aosfajs;odfjasij;ogadgbikesheddinga fosdjasldfj;asljfljasl;j;akjdlfjasldfjlasdfdjdfdlajls;kj
Isaac McHorse#2007: I'M NOT WORK ING! I'M JUST PLAYING!
Technobird22#2055: lol, keyword search
Isaac McHorse#2007: JUST BE A HUMAN AND KEEP WORKING!
Technobird22#2055: notbikesheddingasidjsdfhasdiuhcxhivhiucxzhoizxcuhzxcihozxcovchxovz |
Isaac McHorse#2007: OH HELL NO! F*$K YES! WORK!
Technobird22#2055: issac
Technobird22#2055: bike shedding
Daj#7482: -> #the-faraday-cage-archive
Daj#7482: for bot spam
Technobird22#2055: sorry
Daj#7482: All good
Technobird22#2055: ```py
import random
...
# inside the bot's on_message handler
if "bikeshedding" in message.content:
    bikeshedding_msgs = ["WHAT ARE YOU DOING BIKESHEDDING? BACK TO WORK!", "I'M NOT WORK ING! I'M JUST PLAYING!", "JUST BE A HUMAN AND KEEP WORKING!", "OH HELL NO! F*$K YES! WORK!"]
    await message.channel.send(random.choice(bikeshedding_msgs))
    return
```
Isaac McHorse#2007: WHAT ARE YOU DOING, YOU LAZY LAZY PERSON?.
Technobird22#2055: done just made a bikeshedding bot lol
Isaac McHorse#2007: OH HELL NO! F*$K YES! WORK! |
Technobird22#2055: stupid keyword
Daj#7482: probably close to as complex as Isaac is internally lol
Technobird22#2055: lol who's isaac?
Technobird22#2055: I finetuned it on a discord server on Satellite receiving and Radio technology, and now it speaks too much jargon lol
Technobird22#2055: but it's also been.. interesting as well
Technobird22#2055: https://cdn.discordapp.com/attachments/791248595080052758/791252587646681118/unknown.png
Daj#7482: The bot
Technobird22#2055: oh thought there was a real Isaac you were referring to
Technobird22#2055: ...Unfortunately, I don't have the compute power to run stuff like that
Technobird22#2055: https://cdn.discordapp.com/attachments/729741769738158194/791262477815644170/unknown.png
Technobird22#2055: who's that?
Technobird22#2055: ~~Is that you?~~
Technobird22#2055: oops, a bit late to work on this
Technobird22#2055: https://docs.google.com/document/d/1wfCZBd18DMNt6YcC6boPNMd9qzzH3zpHHfKj4dezk0g/edit#
Daj#7482: That is not me lol, you can see me in various podcasts bitching about decision theory and other nerd shit haha
Daj#7482: Not sure how up to date the doc is
Technobird22#2055: oh lol ok
Technobird22#2055: ~~TPU Podcast~~
Technobird22#2055: I'm going to head off now; nice little chat with you earlier
Technobird22#2055: Impressive work you're doing |
Serge#0241: where do you plan to store all the petabytes of source text? cloud?
Serge#0241: or is the dataset uploaded somewhere already?
StellaAthena#3530: It’s not petabytes, it’s more in the tens of TB. Not to mention the fact that we only store the compressed data.
Anyways, the answer to your question is that we have friends who are serious players in the data archival space. When you know people who don’t remember the last time they purchased less than 1 PB of storage, getting them to set aside tens or even hundreds of TB for you is pretty easy.
Serge#0241: nice!
StellaAthena#3530: If you want to download the Pile, you can do so here: https://github.com/EleutherAI/the-pile
StellaAthena#3530: If you’re interested in getting acclimated, we have an FAQ here. Not sure where you found the Google Docs link but it’s deprecated: https://github.com/EleutherAI/info
Serge#0241: oh I just clicked the link right above here, but yeah it's probably outdated. thanks for the links
StellaAthena#3530: Not sure what “right above here” means but if you mean the channel description, doesn’t that explicitly describe the Google doc as obsolete? (I’m not complaining about you, I want to make sure it shows up to you the same as it does to me)
Serge#0241: https://cdn.discordapp.com/attachments/729741769738158194/791304617904439306/image0.png
Serge#0241: Just clicked here out of curiosity
Serge#0241: Completely new to this thing
StellaAthena#3530: Oh! Gotcha.
StellaAthena#3530: Sorry. I recently woke up and am apparently having trouble reading.
Serge#0241: That’s fine, the links you provided are on point
Serge#0241: I should ignore the doc then
StellaAthena#3530: I actually completely missed the fact that you and Techno are different people 😛
StellaAthena#3530: Anyways, welcome! I’m biased, but I think we are a pretty cool group that does really interesting things
chirp#4545: Update: they’re now making $22k a month https://twitter.com/PaulYacoubian/status/1341516425282916362 |
StellaAthena#3530: Hey, is anyone free to help me test some GitHub access control stuff? Should only take five minutes.
Daj#7482: Grabbing food, can help in 20min
StellaAthena#3530: Okay, someone not named Connor, Sid, Leo, Aran, or Phil.
Daj#7482: Haha I see
StellaAthena#3530: (unless you have a second GitHub account for some reason)
andyljones#7746: sure
chirp#4545: are there some small tasks i could help out with? been here for long enough, and i think i have time to contribute 😄
StellaAthena#3530: Absolutely. What is your skillset?
StellaAthena#3530: (I know you've been around for a while, but my memory is crap. Please remind me)
chirp#4545: web developer, focused on frontend
also can do backend-y data-engineer-y stuff, but not super experienced at it
also can write
StellaAthena#3530: So there are three main work-streams right now for established projects:
1. Web dev stuff. We have a basic static website, but not many people with graphic design skills. Making our website less shitty would be greatly appreciated (especially before we release the Pile on Jan 1!).
2. Data engineering stuff. We have a language model and evaluation tasks, and need work done on actually evaluating the model on the tasks. This is pretty straightforward, just a bit boring and a bit of a PITA.
3. Neural networks stuff. We are migrating the GPT-Neo code from TPU to GPU.
It sounds like 1 or 2 would be better options to start off on than 3, which requires a familiarity with actually building and maintaining large neural networks. Since we just started the migration it's probably not a great time to try to onboard someone who needs to learn, but once things are more stable there'll absolutely be tasks people with little or no experience with NNs can contribute to and build their skills with. |
bmk#1476: 2 probably isn't really a major thing atm either
chirp#4545: I can help with 2! (#1 i could help with a bit, but my webdev experience is much more on the JS side rather than design)
Deleted User#0000: https://airflow.apache.org/blog/airflow-two-point-oh-is-here/ is all you need for 2
bmk#1476: there isn't much to do for 2 atm
chirp#4545: @StellaAthena even though i don't think i can help much with the web design, i'm happy to help with content!
also happy to help out with any devops-y stuff, if that's important
for 3, could i subscribe to the PRs? so that if i jump in later, i can do it more smoothly
StellaAthena#3530: What do you mean there isn't much to do for 2?
bmk#1476: what were you thinking that needs doing
chirp#4545: actually maybe i can help out with 1, if it doesn't go beyond "find a nicer-looking website template and implement it"
edit: but if that's all we need maybe we can just pay for it. happy to chip in
StellaAthena#3530: Implementing the eval harness? The vast majority of the tasks don't have evaluation code
StellaAthena#3530: Follow this repo: https://github.com/EleutherAI/gpt-neox
bmk#1476: eval harness needs a complete rewrite
bmk#1476: or at least a big refactor
bmk#1476: @chirp by devopsy stuff does that include kubernetes and stuff
bmk#1476: we could use someone who knows kubernetes |
StellaAthena#3530: Sure, but it doesn't seem *hard* if you use airflow
StellaAthena#3530: Just annoying
chirp#4545: ehh i don't have kubernetes experience specifically. i was thinking of just setting up CI and stuff
bmk#1476: are you referring to airflow + eval harness?
chirp#4545: ooc why do we need kubernetes?
StellaAthena#3530: yes
bmk#1476: like rewriting eval harness to be an airflow thing?
StellaAthena#3530: Yes
bmk#1476: that doesn't really fix any of the currently pressing eval harness issues, modulo the fact that a rewrite would be warranted anyways
StellaAthena#3530: @Deleted User has recommended this multiple times in the past, which is a sign its a good idea IMO
bmk#1476: dont you mean lucid
StellaAthena#3530: Well yes, the task is to *rewrite the eval_harness*.
chirp#4545: hmm i don't have too much context, so i have some pretty basic questions
- what do we want out of the eval harness?
- what are the biggest problems with it?
- why do we think something like airflow will help?
bmk#1476: i don't know enough about airflow to know whether it's the right tool for the job here tbh
bmk#1476: i don't feel like it is
bmk#1476: the biggest issue is actually on the mesh-tf side |
StellaAthena#3530: Regardless of any mtf problems we need to have *actually working eval harness code*
StellaAthena#3530: Yes, we need other things too but 1) nobody wants to do this (source: I've been trying to get people to do it for months) and 2) it seems like a better skills match for @chirp than "go learn mesh-tf bullshit"
bmk#1476: ok so i agree with that but i'm still not sure we want to rewrite *using airflow*
bmk#1476: i dont understand enough about airflow to see why it's better for this usecase than just a monolithic evaluation library
bmk#1476: seems like extra complexity with no benefit
chirp#4545: what do we need out of the eval harness? at the moment i don't even understand that much
chirp#4545: i'm hearing about rewrites, airflow, etc. but i have no context
StellaAthena#3530: We have a language model. We have a list of widely used evaluation tasks. We would like code that evaluates the language model on the evaluation tasks.
bmk#1476: but our current code is massively flawed and needs to be rewritten and we're debating whether to use airflow in the rewrite
chirp#4545: what do the evaluation tasks entail? just "run language model on X inputs and score it according to Y criteria?"
chirp#4545: @bmk could you point me to our current code?
StellaAthena#3530: Yup! The GPT-3 paper has a list of evaluation tasks that they performed: https://arxiv.org/abs/2005.14165. We have collected the datasets necessary to do the evaluations but haven't written the actual evaluation code.
bmk#1476: https://github.com/EleutherAI/lm-evaluation-harness
chirp#4545: and what's flawed about it? is it more like "it doesn't work at all" or is it like "it's not very elegant"?
bmk#1476: the way it's architected will never work
chirp#4545: @StellaAthena thanks! are we sure that we want to replicate all of GPT-3's evaluation tasks?
bmk#1476: yes
chirp#4545: or only like, a subset of them
bmk#1476: all of them
bmk#1476: possibly even more than they did |
bmk#1476: more is better
StellaAthena#3530: You can do only a subset of them if that's what you have time for, but our goal is all of them
chirp#4545: i mean, i don't have too much context, but i feel like we've gotta prioritize
chirp#4545: just because we don't have nearly the eng resources of openai
bmk#1476: we're not in a hurry to get this done
bmk#1476: we have a lot of time and a lot of hands
chirp#4545: https://cdn.discordapp.com/attachments/729741769738158194/791388409452953620/unknown.png
chirp#4545: ^ that makes me think a full eval harness will be a lot of work
chirp#4545: not saying we can't do it eventually
chirp#4545: but there's gotta be a few tasks that are highest leverage
bmk#1476: the hardest part is laying the foundation
bmk#1476: adding more tasks is actually not that much work
cfoster0#4356: ^
chirp#4545: gotcha
chirp#4545: i can believe that
StellaAthena#3530: The highest leverage is making a working harness that does any evaluation at all
StellaAthena#3530: aka what @bmk said
StellaAthena#3530: Doing this well is extremely high leverage. The more complete and flexible this code is, the easier a time we will have for all future language modeling research. Our ideal is to be able to connect it to an arbitrary language model and to be able to implement whatever tasks we may need in the future.
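A minimal sketch of what that kind of pluggable design could look like — the class and method names here are purely illustrative, not the actual lm-evaluation-harness API:
```py
from abc import ABC, abstractmethod

class LanguageModel(ABC):
    """Hypothetical adapter: anything that can score text can be plugged in."""
    @abstractmethod
    def loglikelihood(self, context: str, continuation: str) -> float:
        ...

class Task(ABC):
    """Hypothetical task: knows its own data and metric, nothing about the model."""
    @abstractmethod
    def evaluate(self, model: LanguageModel) -> dict:
        ...

def run_eval(model: LanguageModel, tasks: list) -> dict:
    # Any LanguageModel implementation (GPT-Neo, GPT-2, an API wrapper) works here.
    return {type(t).__name__: t.evaluate(model) for t in tasks}
```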
chirp#4545: so I guess what's the hard part about it?
|
if you want to plug in a newly trained model, and get the eval results out
- you need to download and run the model somehow
- you need to store the results somewhere
i assume that actually running the eval isn't that hard, once the infra is set up?
bmk#1476: so i think the best way to think about it is this
chirp#4545: 'cause it's just running the LM and a small amount of eval code
bmk#1476: the current code would work really well except for one major flaw
chirp#4545: once everything is in place (which i assume is the hard part)
chirp#4545: (btw do we use google drive? notion? do we have a shared place where we keep stuff?)
bmk#1476: thanks to some dumbass who wasn't thinking ahead, the code is structured so that the evaluation tasks *directly query* the model
bmk#1476: the problem with this is of course that with mesh tf, it takes literally minutes per query
bmk#1476: like, it spends a minute initializing, does a number of batches, and then spends a minute shutting everything down
bmk#1476: so the way the code currently works you can't just batch stuff up
bmk#1476: it actually waits *every single time*
bmk#1476: since our eval harness will likely have a load of data, this evaluating would take literally months to finish
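To make the flaw and the likely fix concrete, here is a rough sketch with invented names: tasks should describe the queries they need up front so the harness can run them through the model in one batched pass, instead of paying the mesh-tf startup and shutdown cost on every single query.
```py
# Flawed pattern: every query pays the full model init/shutdown cost.
def eval_task_naive(task, model):
    return task.score([model.run(q) for q in task.queries()])  # minutes *per query*

# Batched pattern: collect every query from every task, then run the model once.
def eval_all_batched(tasks, model):
    requests = [(task, q) for task in tasks for q in task.queries()]
    outputs = model.run_batch([q for _, q in requests])  # one init, one shutdown
    per_task = {}
    for (task, _), out in zip(requests, outputs):
        per_task.setdefault(task, []).append(out)
    return {task: task.score(outs) for task, outs in per_task.items()}
```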
asparagui#6391: airflow is just a fancy cron more or less
asparagui#6391: to use it you will need a python script to call
asparagui#6391: i would start there |
bmk#1476: if that's true, then i don't think airflow is the right tool for this job, like at all
asparagui#6391: before you can productionize something you need something to productionize, more or less
chirp#4545: @bmk @StellaAthena i just wrote up a quick and dirty proposal - how does this look? https://www.notion.so/ericyu3/Spec-LM-Evaluation-Harness-ebde0948a6b044caaa8a4624d6bd3aa6
chirp#4545: could especially use feedback on the programming model (is python asyncio a reasonable choice?) and prior art (how do other groups implement large-scale evaluation?)
also, i don't know how anything about our infra
- how are we going to spin up the evaluation worker?
- where can we download the model weights from?
chirp#4545: (also, should i move this to #lm-thunderdome ?)
StellaAthena#3530: Yes. I'm in the middle of something time sensitive right now but I'll review what you wrote when I have time (or @bmk can)
3dprint_the_world#6486: what is Eluether's stance on distributed training a la SETI@Home (or Bitcoin)?
I kinda got the sense before that there was a hostile stance towards distributed training. Or that the community felt it was unnecessary.
bmk#1476: Depends on how you define distributed
bmk#1476: Also SETI and bitcoin are completely different in almost every way
3dprint_the_world#6486: yes that's why I used them as examples
bmk#1476: And neither model would work for us
3dprint_the_world#6486: Of course.
bmk#1476: So i need to know where you draw the line between distributed and not
3dprint_the_world#6486: I'm thinking a model where you e.g. download a client and get some access to the community-trained model depending on how much training you do.
bmk#1476: Is mesh tf distributed |
3dprint_the_world#6486: more like volunteer computing
bmk#1476: I am strongly against volunteer computing until someone can figure out how to do it properly, and i don't think that's going to happen anytime soon
3dprint_the_world#6486: why? tbh it seems like the only way to make this happen.
bmk#1476: If you want to be that someone you can go ahead, but i won't get my hopes up
asparagui#6391: compute is cheap, bandwidth is not
3dprint_the_world#6486: that's an utterly meaningless statement.
3dprint_the_world#6486: no offense.
3dprint_the_world#6486: compute is *not* cheap, especially not at GPT-3 scales.
bmk#1476: The setting we're currently in is that we have a lot of compute that's owned by one party with low bandwidth between nodes
bmk#1476: We have way more than enough compute to do GPT3 multiple times over if we can solve the bandwidth issue
asparagui#6391: ^^
StellaAthena#3530: It doesn't work. It would be great if it did, but nobody knows how to make it work.
3dprint_the_world#6486: I've heard this stated before, but what are the actual numbers
StellaAthena#3530: Several hundred RTX 6000s and maybe 500 or so V100s depending on how things go.
bmk#1476: And also possibly several hundred (thousand?) consumer cards
StellaAthena#3530: and several thousand RTX 4000s
Sid#2121: https://youtu.be/a9jWco4xw-U?t=34
3dprint_the_world#6486: even with 500 V100's, it would still take ~1 year or so of training
3dprint_the_world#6486: based on the figures I've seen
bmk#1476: We have more than that |
StellaAthena#3530: 500 V100s would be about 6 months by my math
bmk#1476: It's complicated but for now just know we've done the math
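For reference, a rough back-of-the-envelope behind numbers like these, assuming the GPT-3 paper's ~3.14e23 FLOP training estimate and a sustained ~40 TFLOP/s per V100 — the per-GPU throughput is an assumption about achievable utilization, not a measured figure:
```py
gpt3_flops = 3.14e23           # ~3640 PF-days, from the GPT-3 paper
per_gpu_flops = 40e12          # assumed sustained mixed-precision FLOP/s per V100
n_gpus = 500
days = gpt3_flops / (n_gpus * per_gpu_flops) / 86400
print(days)                    # ≈ 180 days, i.e. roughly 6 months
```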
StellaAthena#3530: @3dprint_the_world But really the core problem is: how do you even do that. Do you know of a way to distribute training that would make what you're suggesting work?
StellaAthena#3530: Do you know how to validate that a bad actor isn't deliberate sabotaging our training?
bmk#1476: This problem happens to be one of those where it's really easy to come up with armchair solutions that would never work
3dprint_the_world#6486: I believe you, but just saying "trust me" isn't very convincing. Like who's paying for all this compute.
bmk#1476: So i don't think it's worth our time to dissect every potential solution that comes up
bmk#1476: We're not allowed to talk about it for now
3dprint_the_world#6486: Ok fine but it would be great if that were made super clear and stickied.
Sid#2121: well, i think we are. but we never really discussed it actually. once we progress a little further in the process we'll make more info available @3dprint_the_world
3dprint_the_world#6486: Because if the source is e.g. private funding, people need to know they're working for a privately funded project.
3dprint_the_world#6486: Just saying.
Sid#2121: not private funding
Sid#2121: literally no money is being exchanged
3dprint_the_world#6486: blowjobs?
Sid#2121: and no one's 'working' for anyone, no one's getting paid
Sid#2121: yes
Sid#2121: i suck a mean dick
3dprint_the_world#6486: and on another note, if the problem is the bandwidth then that's a pretty hard limit -- supercomputer clusters spend more time optimizing interconnect hardware than the actual cpu hardware a lot of the time
bmk#1476: Yes, we're aware |
3dprint_the_world#6486: not many ways to get around that with code.
bmk#1476: If you really can't get around it with code, have fun with your volunteer computing training thing
StellaAthena#3530: We are not hiding things from people working on the project. We are just respecting the fact that our benefactor is on the quiet side. It's not stickied in this channel because "people in our discord" and "people working on the project" are very different groups and frankly because this is a "new last week" thing and we only just connected to their servers for the first time today.
3dprint_the_world#6486: "None of your business" is a completely fine response.
3dprint_the_world#6486: But "Help us out with coding, just don't ask any questions" isn't.
bmk#1476: That's absolutely not what we're saying
StellaAthena#3530: Nobody has ever said "help us out with coding, just don't ask questions"
3dprint_the_world#6486: ok. But it's kind of what it feels like. Like I have to ask a bunch of really annoying questions and feel like I'm pissing people off just to know that there is a private benefactor (which I had no idea until now)
Sid#2121: well, it really only just became a thing, like stella said
Sid#2121: we're not trying to hide anything, but at the same time, we have *literally never done any outreach or publicity*
Sid#2121: so i'm not sure why that should change now
bmk#1476: i am withdrawing from this conversation because i feel it has ceased to be productive
StellaAthena#3530: The reason I'm annoyed with you is not that you're asking questions, but because your framing of those questions falls somewhere between "unflattering with no apparent reason" and "lying." I'm sorry that nobody personally informed you of this development, but we've talked about it several times in #gpt-neox-devs.
3dprint_the_world#6486: I see. well that's cool.
3dprint_the_world#6486: ok maybe just a misunderstanding on my part.
3dprint_the_world#6486: sorry if I offended anyone.
Sid#2121: no one's offended, no worries
Sid#2121: we do keep the info repo fairly well updated, and we'll add all this info to there once things are a little more concrete
Sid#2121: https://github.com/EleutherAI/info
3dprint_the_world#6486: at the same time though, I'm sure that anyone excited by the prospect of an open-source GPT-3 and wishing to contribute to the project will have "who's providing the compute" as a pretty top priority question. And they may even give stupid suggestions like "use volunteer computing" |
3dprint_the_world#6486: lol
Sid#2121: yea, agreed. We have had that crop up a few times and should add it to the FAQ at this point
bmk#1476: we already do
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/791451712996900874/unknown.png
3dprint_the_world#6486: but see this is what I mean, that's an open-ended answer that just opens up more questions.
bmk#1476: we even have a (slightly outdated) description of how we have compute https://cdn.discordapp.com/attachments/729741769738158194/791451921385783337/unknown.png
3dprint_the_world#6486: neither of those actually answer the question.
Sid#2121: :berk: ok i feel like you're just being disingenuous now, in what way is the question not answered
3dprint_the_world#6486: the impression I personally get from those answers is "we don't have the compute yet"
Sid#2121: "slightly outdated"
Sid#2121: ^
Sid#2121: our main priority is running experiments, not updating FAQs.
Sid#2121: also, it's fucking christmas
StellaAthena#3530: We have a paper going out in a week, are learning the new servers, and nobody’s had the time to update it in the past week. It was accurate when written and I’m sure when things calm down we’ll get around to updating it.
3dprint_the_world#6486: excellent
kindiana#1016: (if you are interested in discussing/debating distributed training schemes I'm always happy to oblige lol)
3dprint_the_world#6486: not sure what the point is though, if the compute is available.
Sid#2121: it seems like https://learning-at-home.github.io/ is fairly promising aside from that it's not robust to potential bad actors (correct me if i'm wrong tho)
bmk#1476: > The setting we're currently in is that we have a lot of compute that's owned by one party with low bandwidth between nodes
> We have way more than enough compute to do GPT3 multiple times over if we can solve the bandwidth issue |
Sid#2121: well, it might become relevant in the future. Still something worth discussing even if EleutherAI isn't gonna use it.
kindiana#1016: I think to fully utilize the GPUs with insufficient interconnects, efficient implementations will look closer to distributed training than deepspeed et al
bmk#1476: also it's dmoe which is bad
StellaAthena#3530: It’s cute, I think more than anything.
3dprint_the_world#6486: I only suggested it because: I read the FAQ, that gave me the impression compute wasn't available, and distributed community-based training is how a lot of similar projects have gotten off the ground. Even though it has a lot of problems, like bandwidth etc., at least there's the theoretical possibility of just 'throwing more nodes at it' until the problem is solved. The nodes don't cost anything and there's no upper limit.
Sid#2121: i am yet to see any actual evidence for the 'moe bad' angle
cfoster0#4356: *it was whispered to me in a dream*
cfoster0#4356: Anywho, I'll submit a PR to update the faq once we know what's up more solidly. Don't want to put something on it that's speculative
3dprint_the_world#6486: my hunch is that even though MoE might be less optimal (well, *way* less optimal) than just training one big expert, you can compensate for that by just throwing more nodes at the problem.
3dprint_the_world#6486: like, given enough nodes, you can literally just run distributed gradient descent and get equal performance to standard gradient descent.
3dprint_the_world#6486: and given enough nodes, you can solve the game-ability problem just by sending the same work to multiple people and cross-checking results.
kindiana#1016: what sort of distributed gradient descent?
3dprint_the_world#6486: e.g. https://arxiv.org/pdf/2003.02818.pdf
kindiana#1016: that would require all the nodes have all the weights? which is infeasible for models of this size
3dprint_the_world#6486: yeah, in D-SGD all nodes have to have the full copy of the model.
StellaAthena#3530: That method would be slower than just running SGD yourself
kindiana#1016: I'm not sure if a million small experts who can't interact can even theoretically match a MoE model or a dense model
3dprint_the_world#6486: sure, I'm just saying that theoretically, given enough nodes, you can overcome other problems. Obviously it's much less feasible than MoE for this kind of problem.
StellaAthena#3530: They do interact. What happens is you take the weighted average of what you think the update should be and the mean update across all adjacent nodes in the network
StellaAthena#3530: Even with big-O factors, this is slower than SGD by a lot. |
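A minimal numpy sketch of the decentralized SGD update being described — each node takes a local gradient step and then averages its weights with its neighbours' according to the network's mixing matrix; the function and variable names are illustrative, not from the linked paper's code:
```py
import numpy as np

def dsgd_step(weights, grads, mixing, lr=0.1):
    """weights, grads: (n_nodes, dim) arrays, one row per node.
    mixing: (n_nodes, n_nodes) doubly stochastic matrix encoding the node graph."""
    local = weights - lr * grads   # each node's own SGD step on its local batch
    return mixing @ local          # gossip step: average with adjacent nodes
```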
3dprint_the_world#6486: yeah of course, there's no free lunch
StellaAthena#3530: Yeah but isn’t this just paying to watch someone else eat lunch?
kindiana#1016: ("theoretically" solving these problems is not particularly interesting IMO when you have a finite amount of compute xP)
3dprint_the_world#6486: well not necessarily; with enough nodes you can converge faster than just training alone.
bmk#1476: just getting enough people on board to match the amount of compute we have is already a monumental undertaking
StellaAthena#3530: @3dprint_the_world If I fixed a function to be learned, a set of initial parameters, and weights α,β are you saying that there is some very large number N such that for n > N nodes this algorithm is faster than SGD?
StellaAthena#3530: I don’t actually see why that would be true
3dprint_the_world#6486: yeah I agree, getting enough people on board to match something like 500 V100's training on the same cluster would be... hard
bmk#1476: and that's assuming that attacker resistance and added latency etc don't have any overhead
3dprint_the_world#6486: but then again, people often underestimate the power of *popular* distributed computing projects. e.g. Folding@home is currently sustaining 2 exaflops; that's enough to train GPT-3 in ~2 days
3dprint_the_world#6486: I don't even know how many flops bitcoin is sustaining....
StellaAthena#3530: you left off the “... using a framework and methodology that can’t train neural networks”
3dprint_the_world#6486: (well, not flops in the strict sense)
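(Taking the GPT-3 paper's ~3.14e23 FLOP training estimate at face value, 3.14e23 / 2e18 FLOP/s ≈ 1.6e5 seconds ≈ 1.8 days, so the "~2 days" figure is arithmetically right — it just ignores that F@H-style embarrassingly parallel workloads look nothing like training one dense model.)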
kindiana#1016: the other problem is that the bandwidth required appears to be very high relative to F@H et al
kindiana#1016: at least hundreds of megabits per gpu
StellaAthena#3530: The paper you linked to doesn’t seem to be capable of outperforming SGD. can you explain to me how it could even in theory do so?
3dprint_the_world#6486: You mean it can't outperform SGD even given an unlimited number of nodes?
StellaAthena#3530: See this comment
StellaAthena#3530: How are you even converting the question of training a NN into one this algorithm solves?
StellaAthena#3530: You haven’t put forth even a sketch of a proposal for how this could work |
3dprint_the_world#6486: The setup here is basically:
- We have a model that's so large that on each node we can only do small batch sizes, maybe even just batch_size=1.
- We assume there's enough bandwidth between nodes to send weights across the net in similar time to doing a GD update.
- We have a very large number of nodes available.
StellaAthena#3530: Okay
3dprint_the_world#6486: under *those conditions*, it's faster than SGD.
StellaAthena#3530: Under those conditions there’s no reason to think it would successfully train a neural network
3dprint_the_world#6486: that's what the paper is about though.
kindiana#1016: (too much math and not enough graphs for me personally lol)
StellaAthena#3530: Where does it say that?
3dprint_the_world#6486: I mean there's no reason SGD can successfully train a net either 😉
3dprint_the_world#6486: the paper just addresses local minima convergence
StellaAthena#3530: Seriously, have you read the paper? How carefully? Because I’ve read it (before today) and don’t see any reason to believe this will work
3dprint_the_world#6486: work in what sense
StellaAthena#3530: Or even how to frame training a NN in these terms
StellaAthena#3530: If you just partition the data and hand it out to each node, what you get is a weighted average of NNs trained on small amounts of data
3dprint_the_world#6486: hm. maybe there's something I missed then.
3dprint_the_world#6486: I don't see why you say that. The weight updates are broadcast at *each iteration*
3dprint_the_world#6486: If we just do one iteration, sure, what you're saying applies.
StellaAthena#3530: Okay, so you agree there’s a minimum number of steps each NN has to run for this to even be plausible? |
StellaAthena#3530: Is there any evidence in this paper that that minimum number of steps is significantly less than the number of steps a single NN takes?
StellaAthena#3530: Because I’ve looked and can’t find any.
3dprint_the_world#6486: good question, not in this paper, I don't think (unless I missed it)
StellaAthena#3530: I’ve also tried training 1,000 NNs for 10 steps and sharing weights using this approach in a grid and it doesn’t learn the data very well.
StellaAthena#3530: (On MNIST)
3dprint_the_world#6486: that's a fair point, in practice you often wouldn't use this naive scheme, I guess it just makes it easier to theoretically analyze.
In practice you'd probably use CoCoD-SGD or something.
3dprint_the_world#6486: the general ideas are the same though.
StellaAthena#3530: CoCoD?
StellaAthena#3530: If you describe a reasonable protocol I’m very interested
StellaAthena#3530: You just haven’t yet
3dprint_the_world#6486: https://arxiv.org/pdf/1906.12043.pdf
3dprint_the_world#6486: they did some practical comparisons there
3dprint_the_world#6486: also lots of links to related ideas like Ring-Allreduce
3dprint_the_world#6486: also to get good results in practice you probably need to use higher learning rate than single-node SGD. This might be unintuitive but it's just because you're effectively using a much larger batch size.
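(This is essentially the linear scaling rule from Goyal et al. 2017: if the effective batch size grows by a factor k, scale the base learning rate by roughly k as well, at least until the rule breaks down at very large batch sizes.)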
StellaAthena#3530: The abstract claims a performance improvement over normal SGD, but I don’t see that in the paper. Do you?
3dprint_the_world#6486: Isn't that in section 5
StellaAthena#3530: Oh is that what S-SGD is?
3dprint_the_world#6486: Yep, S-SGD is just synchronous update which is basically just normal SGD
StellaAthena#3530: Then this is definitely interesting |
StellaAthena#3530: Have you tried it?
StellaAthena#3530: I worry the synchronous update adds some overhead but we’ll see
StellaAthena#3530: But this has definitely shot to the front of the list of things to try
3dprint_the_world#6486: I did some experiments last year with something very similar to this and my conclusions were basically "it's better than single-node but still very far from optimal", but I guess it has simplicity going for it. Really easy to implement.
StellaAthena#3530: Did you find any ways to improve it?
StellaAthena#3530: Even a factor of 2 improvement in run time is massive when we’re talking about tens of thousands of GPU-months
3dprint_the_world#6486: yeah there's this https://arxiv.org/pdf/1912.12844.pdf which I looked at and seemed interesting but I never got around to trying.
StellaAthena#3530: Definitely leave a comment in #gpt-neox-devs about it. If you’re right about this we should definitely test it
Mischa#0599: Holy Holidays... I am way behind trying to get my turnkey Self University™️ done in time for Jan 1 especially with family and seasonal stuff but I will be back here more regularly after the new year.
Mischa#0599: I usually at *least* lurk thoroughly to catch interesting ideas and happenings here, but I haven't even had room for that some days
spirit-from-germany#1488: Merry Christmas,everyone! 🙂 🎉
Aran Komatsuzaki#5714: I asked Santa Claus for v3-4096
dopa#3178: https://tenor.com/view/home-alone-macaulay-culkin-kevin-merry-christmas-greetings-gif-3578205
StellaAthena#3530: There are a bunch of people who hang out and chat with us, but aren’t involved with any of our research. Tagging, for example, @CRG @asparagui @triggerhappygandi @andyljones @Nick Cim @dopa @Ethycs @3dprint_the_world @kindiana
Are y’all interested in doing AI research? If so, what are the things that are blocking you from doing so? Lack of interest in language modeling? Lack of experience with AI? Lack of experience with the kind of language modeling we are currently doing? Time constraints? Something else?
We have a freaking massive amount of compute that we functionally waste every day, and a lot of smart people who seem interested in AI. I feel like if we could get some of the “talking non-researchers” to spend even 5-10 hours a week writing code, y’all could be completing research projects and publishing papers in a couple of months.
It’s totally fine if people don’t want to. But in my experience people who like talking about research *want to* but are blocked by other things (most commonly not knowing where to start).
If you *don’t* chat and just lurk and want to be taught to do research that’s also great! Please speak up 🙂
Louis#0144: i asked for a 3090 and my coworker this morning says by dumb luck he managed to get an extra one he doesnt need
Louis#0144: LMAOOO
Louis#0144: so ig i have a 3090 on the way now?
Louis#0144: i got it at a discounted price
Aran Komatsuzaki#5714: that's awesome lol
andyljones#7746: I mean I'm a full-time indie alignment researcher with a funded project underway. But I hadn't realised you'd got GPUs to spare! What's the policy on getting time on those?
dopa#3178: @StellaAthena I am very much interested in research, but not affiliated with any university so no publishing for me (me 6th grade drop out), I am most interested (obsessed) in multi-agent simulations of automated planning with communication in partially observed environments and such system(s) interactions with humans. If there is compute power available I would be very much interested running learning agents experiments in RoboCup 2D environment, with focus on empirical analysis of transfer learning between tasks, and eventually extending such domain to multi-human, multi-robot interactions, if successful.
We discussed that I need help with math, not long ago.
StellaAthena#3530: Oh right. I knew that 😛 yeah, I should have had you on the list of people to skip.
We sorta have GPUs to spare. We have TPUs to spare by the bucket (literally) and we are going to need to run a bunch of NLP code on GPUs in order to test and calibrate our systems. Since the main interest is in the timings rather than the output itself, it can probably be co-opted to train transformers for purposes of independent interest. We do also have a small amount of GPUs whose use is unrestricted.
andyljones#7746: I'm neck deep in custom kernels, so TPUs are just so much melted sand for me. How much is 'small amount of GPUs'?
To be clear: if I took compute off you folks, I would *absolutely* open the project up to anyone who wants to contribute.
StellaAthena#3530: I’m not sure, that’s a question @bmk would be better suited to answer. I have the ability to spin up a DGX-1 (8 V100s) for personal use but can’t grant access to it to anyone else. I can run code for you on it though if that’s helpful.
andyljones#7746: 🤔 That might be really nice for a big one off at the end of the project - how many days could you plausibly leave an experiment running for?
StellaAthena#3530: That depends on the week tbh. I have use of company systems as long as nobody else needs it for work work. I would guesstimate that nobody uses it 50% of the time though, so if you don’t mind pausing and restarting a bit it shouldn’t be too hard to get you plenty of hours. |
andyljones#7746: Yep, that'd be extremely cool, thanks! Will give you a bump when the time comes.
3dprint_the_world#6486: @StellaAthena keen on running some distributed training experiments **if** you think that would be useful.
3dprint_the_world#6486: or things related to that
StellaAthena#3530: @3dprint_the_world absolutely
3dprint_the_world#6486: I do have a lot of other time commitments but utilizing spare resources sounds fun.
bmk#1476: we have 2-3 1080Tis that i can run your code for you on if you can make it completely hands-off to run, as well as a few dozen 2080Tis (though only up to 8 per machine), though I'm not sure what the rules are for what we can use those for
Mischa#0599: I want to be taught research but I am probably still a long ways away. I'm going to spend the next few years with my head down building a strong foundation in core areas and then I will switch over to a more "just in time" style of learning than "just in case" learning, though I will add in more projects as I get a bit further in.
bmk#1476: Most of us have no idea what we're doing lol
bmk#1476: *I* have no idea what I'm doing
bmk#1476: By most I think I actually mean all
dopa#3178: I am naturally suspicious if I know what I am doing.
andyljones#7746: This is really great to know, thanks. I don't have any code all wrapped up with a bow attached right now, but I'm gonna spend some time thinking about it - even the 1080 TI time'd work out to be valuable if it can be left to run for ages.
Who should I chat to about the guidelines around the 2080 TIs?
bmk#1476: I'll try to find an opportunity to ask for you and get back to you on that
andyljones#7746: That'd be amazing, but at the same time don't sweat it - I appreciate that you're really going out on a limb here.
Mischa#0599: I guess to your detriment, I'm willing to stay on the bench until I am reasonably confident that I know at least somewhat what I'm doing. I have a personal milestone of publishing in Distill by 2024. That's a very long journey because I'm more or less on the ground floor.
Mischa#0599: But in all honesty some of the projects here will probably line up with my studies maybe a year or more from now and I can make that work
Mischa#0599: And also Merry Christmas ya filthy animals
triggerhappygandi#0001: Don't we all |
triggerhappygandi#0001: Oh boy. I'm just trying to catch up with you guys before I can contribute meaningfully.
triggerhappygandi#0001: I understand like half of what goes on in #research
StellaAthena#3530: FWIW, I think learning as you go is highly worthwhile. At least, if you have a basic understanding of coding neural networks, there’s stuff that you can do.
triggerhappygandi#0001: That I do. But writing something as big as GPT-3 still somehow feels formidable to me.
triggerhappygandi#0001: I will try to make active contributions though, hopefully very soon.
dopa#3178: all I can contribute is code 🙂 one time I played a computer game, got upset that the air-conditioner did not work according to basic laws of thermodynamics in game, reverse engineered the game, fixed the code and sent it to the game studio; my code was included in the game
triggerhappygandi#0001: Chad move
triggerhappygandi#0001: How do you know it was violating thermodynamics though?
dopa#3178: it was a heat pump that was not consuming electricity correctly and also was not taking the environment temperature into account
triggerhappygandi#0001: How did you guys come up with so much compute though?
bmk#1476: If you're good at reverse engineering code, some help with figuring out why our GPTNeo code is inefficient would be great lol
triggerhappygandi#0001: Is Tensorflow research grant the path to millions of TPU-hours?
dopa#3178: where is that code ?
bmk#1476: Nobody knows how to reverse engineer mesh Tensorflow to figure out why things are slow
bmk#1476: https://github.com/EleutherAI/gpt-neo
dopa#3178: I like that type of challenges 🙂
bmk#1476: https://github.com/tensorflow/mesh
bmk#1476: Nobody can figure out why efficiency is so bad for big models
triggerhappygandi#0001: I took a look at the code a while back. Couldn't find anything that would be inefficient, since you also have linear attention
bmk#1476: This has nothing to do with linear attention |
bmk#1476: We're just talking about regular dense attention
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/791769406405476354/unknown-41.png,https://cdn.discordapp.com/attachments/729741769738158194/791769406664998943/unknown-36.png
bmk#1476: Have fun
dopa#3178: why are you sure it is TF that is slow and not something else in the code ?
bmk#1476: We are 90% sure our code has something wrong with it
bmk#1476: We have heard that up to 50% utilization is possible with mesh-tf with nearly identical models
bmk#1476: We just don't know what we're doing wrong
triggerhappygandi#0001: My main problem is the lack of compute. I usually play around on colab pro. Never tried working on distributed TPUs. Will try it though.
triggerhappygandi#0001: Gotta get familiar with mesh Tensorflow
dopa#3178: story of my life, haha
bmk#1476: The bigger the model the less efficient our code
bmk#1476: We know it has to do with network communication obviously but we don't know how to reduce it
triggerhappygandi#0001: Didn't you guys create a model 2x larger than gpt-2?
triggerhappygandi#0001: *a functioning model*
bmk#1476: We created a 100B model, yes
bmk#1476: Just really fucking inefficiently
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/791770899770245160/unknown-28.png
bmk#1476: More screenshots
triggerhappygandi#0001: Have you taken a look at how they managed to train T5?
bmk#1476: If you want to line by line compare their code to our code, be my guest |
dopa#3178: is that same behavior on GPU and CPU or it only on TPU ?
bmk#1476: We don't have GPU at all
triggerhappygandi#0001: As part of my work I will have to take a glance at T5 anyway. Hopefully I can learn something you missed lol
andyljones#7746: aaaaaaaaaaaaAAAAAAAAAAAAAAA
dopa#3178: | You can also choose to train GPTNeo locally on your GPUs.
it is said on github, seems like I am misunderstanding how it runs
bmk#1476: In theory you can, in practice I have no idea if it actually works because we've never tried it and don't ever plan on trying it
dopa#3178: got it, does google offer free access to TPU's, I am new to this so will need to everything to debug it
bmk#1476: Yup
dopa#3178: goal is to maximize TPU flops utilization from 19% to >50% ?
Sid#2121: In practice it's really not as bad as the screenshot @bmk posted makes it out to be lol. We tend to get 20-50% utilization depending on model size. Obviously still lots of room for improvement, but i think most improvement will come from a better mesh layout
bmk#1476: I'm talking specifically about 100B
dopa#3178: I need to use my 300 credits for this 😦
bmk#1476: 100B is like 2% efficiency
Sid#2121: yes, and literally no one has tried to optimize it
bmk#1476: Yes, and I'm trying to get dopa to work on optimizing it
Sid#2121: we can jump from 0.5% efficiency to 20% just by changing the layout
dopa#3178: how many tries will I have for 300 ?
Sid#2121: and i'm saying the most improvement will come from optimizing the layout
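For anyone who wants to poke at this: in mesh-tensorflow the layout is just a mapping from tensor dimension names to mesh dimension names, so "optimizing the layout" mostly means changing a couple of strings. A minimal sketch — the dimension names below are illustrative rather than GPT-Neo's exact ones:
```py
import mesh_tensorflow as mtf

# A 32-core mesh split 8x4; "x" and "y" are arbitrary mesh-dimension names.
mesh_shape = mtf.convert_to_shape("x:8,y:4")

# Which tensor dimensions get sharded across which mesh dimensions. A poor
# choice here is exactly the kind of thing that tanks utilization at 100B scale.
layout_rules = mtf.convert_to_layout_rules("batch:x,embd:y,heads:y")
```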
Sid#2121: that should last ages |
Sid#2121: you can also just use our machines
dopa#3178: I will work on it, because this is best way to learn about transformers 🙂
Sid#2121: I doubt it lol, you should try building one yourself. (not wanting to put you off helping us out, that's just a better way to learn)
dopa#3178: I am not sure if 300 I already used for google on another account will try right now, please don't build your expectations that will not fail I am pleb lol
dopa#3178: true, but best way to build things from scratch at least for me is when I have sound objectives.
if like I hate xyz lib, then it is fun to build my own one 🙂
Sid#2121: fwiw @bmk in @Aran Komatsuzaki 's moe scaling experiments we've found that turning off dropout increases efficiency by ~20% and also improves performance
Sid#2121: so we should just have that off by default
bmk#1476: huh
bmk#1476: btw how have those experiments been going, and is there anything i can help with after pile
Sid#2121: not sure what's really left to do, maybe setup T5 for some experiments but i think @Aran Komatsuzaki was going to do that
Aran Komatsuzaki#5714: yeah throughput increased by 10~20% and it also needs like ~30% fewer tokens to achieve the same perf. we'll see more precise speedup later.
Aran Komatsuzaki#5714: yeah T5 one isn't complicated and doesn't need any assistance in terms of data.
Sid#2121: why does everyone still use dropout :thonk:
Aran Komatsuzaki#5714: @Sid btw please let me know when the training is finished.
Sid#2121: i guess it's more effective for smaller models/more epochs
bismarck91#5255: Is there a way to actually calculate utilization before running the model?
Aran Komatsuzaki#5714: yeah it's very good for when your dataset is pretty small.
Sid#2121: @Aran Komatsuzaki just the n_embd:1024 models left now
Sid#2121: they're about 3/4 trained |
Aran Komatsuzaki#5714: cool 🙂
Sid#2121: can i post up the tensorboard? maybe people would be interested
Aran Komatsuzaki#5714: yeah why not?
bmk#1476: why not use omniboard
Sid#2121: we were having some problems with it @bmk
Sid#2121: we're running things in parallel and it was misreporting some runs
bmk#1476: what in particular
Sid#2121: i.e, several runs just had exactly the same loss curves
Sid#2121: when that's not the case
bmk#1476: oh, did you start the runs at the exact same time
Aran Komatsuzaki#5714: yup
bmk#1476: that's an issue with the port assignment
dopa#3178: since I have no idea how everything works, what dataset I should use as benchmark initially ?
bmk#1476: you could have like, asked me, lol
bmk#1476: the omniboard code assumes you don't start two runs at exactly the same time
Sid#2121: posted the tensorboard up in #research
Sid#2121: it's not at exactly the same time, actually
bmk#1476: but like within a minute of each other
Sid#2121: most likely, yeah
bmk#1476: yeah that'll do er |
bmk#1476: i can push a fix
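Not a claim about what the actual fix looks like, but one obvious shape for it: derive the port from something unique to the run (e.g. a hash of the run id) rather than from the start time, so two runs launched in the same minute can't collide.
```py
import hashlib

def port_for_run(run_id: str, base: int = 20000, span: int = 10000) -> int:
    # Hypothetical helper: map a unique run id to a stable port in [base, base+span).
    digest = hashlib.sha256(run_id.encode()).hexdigest()
    return base + int(digest, 16) % span
```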
Sid#2121: i just find it easier to compare runs with tensorboard, anyway
Sid#2121: the runs are still going on the omniboard
bmk#1476: also btw what about activation function ablation
bmk#1476: are we doing that
Sid#2121: could do
Sid#2121: i'm more interested in training a model for release on the pile first
bmk#1476: i can take over that part
bmk#1476: i already implemented a shitload of activations lol
Sid#2121: aran set up a nice framework for running experiments in parallel
dopa#3178: that is the activation I am thinking about, I know a few more 🙂, it would be interesting to test them in transformers instead of NEAT
Sid#2121: https://github.com/EleutherAI/moe-scaling guessing you can access this @bmk ? if not i'll invite you
bmk#1476: can access
StellaAthena#3530: He should be able to. I don’t think you can hide repos for admins
bmk#1476: ok so since i already have all the activation function code implemented, can you run the experiments for those for me? and i'll write up the analysis for that in the paper
Aran Komatsuzaki#5714: from my experience, activation function scaling isn't really fruitful when you have stuffs like GLU and moe
bmk#1476: this is just to rigorously show that
Sid#2121: after xmas, sure. I'm gonna prioritize training a big model so we can release something for gptneo before we forget about it forever, though
Aran Komatsuzaki#5714: @Sid btw i thought about your suggestion of training larger models
Aran Komatsuzaki#5714: since we already can draw scaling curves from our runs, larger models are just going to be on the same line. |
Aran Komatsuzaki#5714: so, i'm not sure if we can find anything new from that
Aran Komatsuzaki#5714: we can probably calculate the amount of saving relative to gpt2-xl from the line we can draw from the runs we already made.
Sid#2121: sure, makes sense
StellaAthena#3530: We should train several models and check the variability
Sid#2121: for which experiment @StellaAthena
StellaAthena#3530: @Sid we should check whether our projected scaling laws are consistent across several runs.
Sid#2121: why would they *not* be?
StellaAthena#3530: Why would they be?
Sid#2121: the code is all deterministic at this point
StellaAthena#3530: Oh, I mean on different training data
Aran Komatsuzaki#5714: from my experience the variance in that case is still small enough
StellaAthena#3530: Like what @bmk said, we probably know what will happen but it’s worth showing
StellaAthena#3530: Especially because OAI hasn’t bothered to for some reason
3dprint_the_world#6486: I'm literally employed as a professional ML and AI software engineer, with many years of industry experience, and I have no idea what I'm doing
StellaAthena#3530: Everyone’s a fraud here, but at least we are open about it
bmk#1476: > open
unlike ||REDACTED||
3dprint_the_world#6486: most governments
3dprint_the_world#6486: hah, yep. |
bmk#1476: as i take it, professional is another word for "forced to do this for a living" and industry experience is code for "being stuck inside a dysfunctional organizational hellscape", right?
bmk#1476: and engineer is code for "i don't know why it works, but it does and i don't ask too many questions"
dopa#3178: or it does not work and no one has idea why, but I have to figure it out
3dprint_the_world#6486: more like:
professional: "This is the only thing I'm actually good at."
industry experience: "I don't have enough guts to actually start my own company"
engineer: "not engineer."
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/791790715478081546/6hhvuan4icd41.png
bmk#1476: this describes eleuther pretty accurately
3dprint_the_world#6486: it also describes the company I work pretty accurately
3dprint_the_world#6486: I like you already.
StellaAthena#3530: I feel personally attacked by the first line of this
bmk#1476: i mean, at least you know why it *should* work whereas with most of our, er, more *empirical* work, we happen to fit the third line more than anything else
dopa#3178: @bmk the profiler screenshots you posted, are they from colab?
asparagui#6391: @StellaAthena i've been distracted with trying to get this book out the door, but that is (knock on wood) done
asparagui#6391: i have a decent amount of compute, but my limiting factor is usually free cycles
asparagui#6391: work has a tendency to have fires
asparagui#6391: i am currently searching for my next windmill to tilt at
StellaAthena#3530: Congrats on the book!
asparagui#6391: you were talking about imagenet the other day, that's right in my wheelhouse
asparagui#6391: i have done a few times now
asparagui#6391: 50-100 maybe
asparagui#6391: i am currently jumping through the hoops to submit it to mlperf
StellaAthena#3530: Oh yeah. We have a side project we’ve put on hold to finish up the Pile that involves ImageNet
Technobird22#2055: Question regarding Google Colab: With Colab pro, you get to use T4s / P100s more often and have longer runtimes / higher RAM?
Also is a T4 or a P100 "better"?
asparagui#6391: t4 is a later gen yes
asparagui#6391: but better is really a function of what you're trying to do
asparagui#6391: i have a really old card that has a bunch of memory
asparagui#6391: it is very slow but can run most code that the fancier ones cannot
asparagui#6391: most people don't use anything more advanced than fp32
Technobird22#2055: ok
Technobird22#2055: I forgot, which has more VRAM?
Technobird22#2055: A T4 or a P100?
Technobird22#2055: Also, just wondering, how often do you get T4s without Colab Pro?
James#6892: LOL!
kindiana#1016: I've been writing a model parallel transformer implementation which can (eventually) run on a set of heterogeneous, unreliable nodes with limited bandwidth. Have been experimenting on a single GPU currently, but its getting close to where I would be able to use a couple nodes of 2080tis (or some of those bandwidth limited V100s) to try a gpt2 sized model
kindiana#1016: (outdated) code here: https://github.com/kingoflolz/swarm-jax
kindiana#1016: because its jax it can also theoretically run on TPUs too, once connor or whoever has the google cloud account with tpu quota gets access to the jax on tpu alpha
asparagui#6391: @kindiana if I had quota what all would be needed to make things work |
kindiana#1016: not sure at the moment, there's some additional work required to make it work well on multiple GPU hosts, but there's also certainly going to be some additional weirdness for TPUs lol
asparagui#6391: haiku and ray you have buzzword bingo down
Technobird22#2055: Hello. I only just joined this server recently and haven't been very active, but I'm interested in joining in on the research that people here are doing. However, I haven't done much with AI and I'm worried I don't have the programming knowledge or expertise required for taking part.
asparagui#6391: i dunno what the other people would say
asparagui#6391: but tbh interest > everything else
asparagui#6391: everything is practice
asparagui#6391: https://www.youtube.com/watch?v=TxBXaMQP2Kg
bmk#1476: ~~theory and practice often diverge~~
StellaAthena#3530: ^^ this
bmk#1476: especially since we plan on pumping out a load of research over the next year, as a collective new years resolution we should totally adopt a more structured project system
bmk#1476: what i mean is right now we have a bunch of ideas randomly floating around and that's not very good for making sure things get done
StellaAthena#3530: Yeah
StellaAthena#3530: I’ve been thinking about that too
bmk#1476: we should collect all the ideas in one place, assign people to each project, figure out timelines, do organizational things like gantt charts to keep timelines in mind or what the heck ever, etc
asparagui#6391: pm software 😛
bmk#1476: and we should have a proper framework for deciding which ideas to elevate to actual eleuther projects and spend our compute and manpower on, rather than a massive patchwork of random channels that map only vaguely onto which projects we're doing and where nobody really knows which projects are "actually" happening
AI_WAIFU#2844: are we gonna get a jira and do scrum?
bmk#1476: no
bmk#1476: well, .. idk
AI_WAIFU#2844: I want to do a VRchat standup |