researcher2#9294: What about specific tpu?
researcher2#9294: https://www.amazon.com/HP-R0W29A-Tesla-Graphic-Card/dp/B07PGY6QPT
Louis#0144: I did LM training for two years and I never needed more than like 20GB of VRAM since I used super computers for larger tests
Louis#0144: (which were infrequent)
kindiana#1016: $0.2/hr for a 2080ti is the going rate on vast.ai, I'd expect 3090s to start out a bit higher but eventually level out to be about the same price
bmk#1476: a single V100 is $1k/mo on gcloud
bmk#1476: to match performance id be breaking even in 2 months
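For anyone following the arithmetic, here is a minimal break-even sketch in Python. Only the $1k/mo V100 rate comes from the chat above; the build cost and the throughput equivalence are placeholder assumptions:
```
# Cloud-rental vs. buy-your-own break-even. Placeholder numbers; only the
# $1k/mo-per-V100 figure comes from the chat.

def breakeven_months(hardware_cost: float, cloud_cost_per_month: float) -> float:
    """Months of cloud rental that would pay for the hardware outright."""
    return hardware_cost / cloud_cost_per_month

build_cost = 4 * 1500 + 2000   # hypothetical: 4x 3090 + CPU/board/PSU/case
cloud_rate = 4 * 1000          # hypothetical: matching throughput ~ 4 V100s
print(breakeven_months(build_cost, cloud_rate))  # 2.0 months, as bmk estimates
```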
Deleted User#0000: > no one is using 24gb for gaming
@Louis but can it run crysis on max settings?
researcher2#9294: THERE IT IS
Louis#0144: 1) partner with a university. uChicago gave me soooo many CPU hours
Louis#0144: I dont even go to uchicago
Louis#0144: lol
Louis#0144: nor did i
Louis#0144: 2) figure out how to run small unit tests before a large deployment to a cluster
bmk#1476: this isnt for work with a university
Louis#0144: Colab works well for that
Louis#0144: its research?
Louis#0144: it doesnt matter if its not with a uni
Louis#0144: a lot of them will give u CPU hours
Louis#0144: no problem
bmk#1476: no strings attached?
Louis#0144: uToronto hands GPU time out like its candy
Louis#0144: Beluga
researcher2#9294: Louis makes a good point, depends on use case I think. If you're running this thing 24/7 for months, buying your own could be good. Otherwise cloud.
Louis#0144: tons and tons of students that have never been in the canadian education system use beluga
researcher2#9294: Factor in power bill also.
Louis#0144: @bmk not no strings attached
Louis#0144: I got audited twice
Louis#0144: and they did a background check
bmk#1476: are there restrictions
Louis#0144: but besides that
Louis#0144: all good
Louis#0144: yes theres restrictions
StellaAthena#3530: Not ones we would care about though
Louis#0144: you have to do most of your compiling on their systems
bmk#1476: can i do it for profit
Louis#0144: you cant really bring packages onto their server pre-compiled
bmk#1476: i.e an AI company
Louis#0144: yeah but you need to disclose that
Louis#0144: nonprofit will get priority
Louis#0144: but im sure its fine
Louis#0144: > Not ones we would care about though
@StellaAthena not sure what you mean?
bmk#1476: so i cant reliably get gpu time?
Louis#0144: uh I had a friend who had a startup and got a lot of time from stanford a few years back
Louis#0144: it was pretty reliable
Louis#0144: just not during peak hours before conferences
StellaAthena#3530: Oh I thought this was for us
bmk#1476: no it's not
StellaAthena#3530: Like, for EleutherAI
bmk#1476: no it's for a different thing
StellaAthena#3530: What’s it for?
bmk#1476: i want to buy a 4x 3090 for a different project
Louis#0144: im telling him its a bad idea and he should just ask an institution with a lot of money for server time
Louis#0144: lmao
StellaAthena#3530: Yeah, probably. What’s the project @bmk
Louis#0144: I ran an AI startup for a few years and server costs murdered us
bmk#1476: i don't think i can disclose but it's LM related
StellaAthena#3530: That’s going to make University partnership more tricky
Louis#0144: lambda labs offered me a partnership at one point
Louis#0144: The project fell through though
Louis#0144: It was a for profit project
Louis#0144: you might wanna contact them
Louis#0144: they do all CV tho
bmk#1476: look i dont need *that* much compute
Louis#0144: they wanna get into NLP last time I checked
Louis#0144: we were talking about training LMs
bmk#1476: also im not sure what my employer's opinion on that would be
Louis#0144: lambda labs wanted some IP ownership
Louis#0144: if I recall
bmk#1476: yeah probably not gonna fly
Louis#0144: ask your employer for money
Louis#0144: if they think its a good idea
bmk#1476: to what
Louis#0144: theyll pay for the GPUs
Louis#0144: lol
bmk#1476: yes i already have the money im asking for help choosing hardware
bmk#1476: please
Louis#0144: tbh
Louis#0144: I would do a rack of 1080tis
Louis#0144: lol
Louis#0144: theyre cheap and easily replaceable
Louis#0144: dont go for the newest stuff
Louis#0144: I have a server rack of like 6 GPUs in my basement
Louis#0144: I got them super cheap
Louis#0144: with the money you would spend on 3090s
kindiana#1016: 10 and 20 series are going to get really cheap if 30 series supply is good
Louis#0144: you can probably buy two whole racks
Louis#0144: ^
Louis#0144: exactly
Louis#0144: maybe 3 racks
Louis#0144: if you buy the CPUs used too
Louis#0144: 6 - 8 GPUs each
kindiana#1016: I dont know if you actually want to have multiple racks though lmao, the power and noise will be pretty bad
bmk#1476: i need the ram and the fp16
Louis#0144: more VRAM than youd ever need
Louis#0144: dude its for a company
Louis#0144: who tf cares about noise
Louis#0144: it can be loud as fuck
Louis#0144: put it in a sound proof room
Louis#0144: ez
Louis#0144: Also
Louis#0144: loud fans are cheaper
Louis#0144: get cheap loud fans
bmk#1476: please i just need a recommendation for motherboard
Louis#0144: dual socket imo
Louis#0144: tons of PCIE lanes
Louis#0144: get cheap old xeons
kindiana#1016: theres no motherboard with 4 pcies spaced 3 slots apart afaik
Louis#0144: server boards
bmk#1476: that's what i need
Louis#0144: dual socket
Louis#0144: has that
Louis#0144: lol
bmk#1476: link pls
Louis#0144: https://www.servethehome.com/asrock-rack-3u8g-c612-8-way-gpu-server-review/
Louis#0144: this one is good
Louis#0144: expandability in the future
Louis#0144: redundant power
Louis#0144: loud as fuck but once again who cares
bmk#1476: cant fit a 3090
Louis#0144: sure it can
Louis#0144: those 2 slot plates are removable
bmk#1476: oh
Louis#0144: or you can run a water loop
Louis#0144: this is one of the situations where a water loop actually makes sense in enterprise
bmk#1476: this is going in my bedroom
Louis#0144: you can run your loop through the entire rack
Louis#0144: oh
Louis#0144: LOL
Louis#0144: dont put this in your bedroom dude
Louis#0144: I used to have a server rack in my bedroom
Louis#0144: its loud and hot
Louis#0144: and youre going to regret it
Louis#0144: quiet servers are double to triple the price
bmk#1476: i dont care if its server hardware
bmk#1476: im fine with consumer hardware if it fits
Louis#0144: youre gonna care when its keeping you awake and youre sweating to get anything done
Louis#0144: thats the thing
Louis#0144: you are gonna want redundant power for something at this scale
Louis#0144: and youre gonna want dual CPU
Louis#0144: you cant get that at the consumer level
bmk#1476: i dont need redundant power
bmk#1476: i checkpoint often
Louis#0144: whats the TDP of a 3090?
bmk#1476: and i dont need dual cpu
Louis#0144: x4
Louis#0144: its like
Louis#0144: 350W
Louis#0144: lol
Louis#0144: youre gonna be running dual power supplies
Louis#0144: and with 4x 3090s with their memory bandwidth
Louis#0144: theyre gonna be starved without more PCIE lanes
bmk#1476: threadripper
bmk#1476: https://www.newegg.com/evga-220-t2-1600-x1-1600w/p/N82E16817438041?Description=1600w%20power%20supply&cm_re=1600w_power%20supply-_-17-438-041-_-Product
bmk#1476: 1600W
Louis#0144: you gotta atleast take my word on that, I did super computing for ages
Louis#0144: not enough
Louis#0144: youd want 1800 or 2000
Louis#0144: the GPUs alone are 350 each
Louis#0144: the CPU is 180
Louis#0144: mobo is probably 10 - 15
bmk#1476: https://www.newegg.com/evga-supernova-750-g-120-gp-0750-x1-2000w/p/1HU-00J7-006T1?Description=2000w%20power%20supply&cm_re=2000w_power%20supply-_-9SIAHT8BHT5519-_-Product
bmk#1476: ok 2000w
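A rough power-budget sketch for the build under discussion. The GPU and CPU draws are the chat's own estimates; the rest is assumed:
```
# Rough PSU sizing for a 4x 3090 box. Transient spikes can exceed TDP,
# hence the headroom factor.
parts_w = {
    "gpus": 4 * 350,                 # ~350 W TDP per 3090, per the chat
    "cpu": 180,                      # chat's estimate
    "board_ram_fans_storage": 100,   # assumption
}
total_draw = sum(parts_w.values())           # 1680 W
recommended_psu = round(total_draw * 1.25)   # ~20-25% headroom rule of thumb
print(total_draw, recommended_psu)           # 1680 -> 2100, so 1600 W is tight
```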
Louis#0144: storage is its own nice chunk if you do PCIE storage
Louis#0144: why cant your employer hold the server
Louis#0144: you dont want that liability
Louis#0144: all youre gonna be doing is SSH
Louis#0144: lol
bmk#1476: look it's complicated
bmk#1476: i just need help finding a mobo that would work
bmk#1476: heck, im fine with risers
bmk#1476: as long as it's not too difficult to set up
Louis#0144: I still think dual socket is best tbh
Louis#0144: better TDP for one
bmk#1476: why not just use a threadripper
Louis#0144: expandability and upgradability is a key benefit imo
Louis#0144: you arent using ECC memory but
bmk#1476: i dont need ecc
Louis#0144: lets say you needed to leave something training for a month
Louis#0144: youd want ecc
Louis#0144: lol
Louis#0144: ive been fucked by bitflips
bmk#1476: thats not necessary i checkpoint often
Louis#0144: it sucks
Louis#0144: at the end of the day its more pcie lanes for cheaper than a threadripper
Louis#0144: since dual socket CPUs are older
Louis#0144: you can go with a CPU from 2013/2014
bmk#1476: so id get a board&cpus off ebay?
Louis#0144: yeah
Louis#0144: CPUs basically never die
Louis#0144: or if they die theres lots of flames
bmk#1476: ok i might do that, and buy one of them mining rig frames
researcher2#9294: lol
Louis#0144: another thing to add is that server PSUs are insanely efficient
researcher2#9294: Another word of warning, that stuff stinks, large activated carbon air filter highly recommended.
Louis#0144: much much better than gold
Louis#0144: oh yeah
Louis#0144: thats true
Louis#0144: @bmk server PSUs are pretty cheap if you know where to look
Louis#0144: even 2000w
bmk#1476: where do i look
researcher2#9294: how many amps can you pull from a household circuit?
researcher2#9294: getting up there now
Louis#0144: oh fuck
Louis#0144: actually
Louis#0144: thats true
Louis#0144: you might need an electrician to help with installation
Louis#0144: oof
bmk#1476: meh ill just avoid using my microwave for the next year
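The circuit question deserves real numbers. A minimal sketch, assuming a standard North American 15 A / 120 V branch circuit and the usual 80% continuous-load rule:
```
volts, breaker_amps = 120, 15   # typical NA household branch circuit
continuous_factor = 0.8         # code rule of thumb for continuous loads

max_continuous_watts = volts * breaker_amps * continuous_factor
print(max_continuous_watts)     # 1440 W -- well under a 2000 W PSU at full tilt
# A sustained 4-GPU training load realistically wants a dedicated 20 A
# (or 240 V) circuit, i.e. an electrician.
```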
researcher2#9294: I cooked a wall socket doing bitcoin mining
researcher2#9294: dw im not smart
Louis#0144: I fucked my breaker up doing traveling salesman
Louis#0144: LMAO
researcher2#9294: hahahaha
researcher2#9294: who says nerds dont live on the edge
Louis#0144: I would honestly set your server up to exhaust its heat directly out the window
Louis#0144: or have a shed with radiators in your yard
Louis#0144: Ive done the latter
Louis#0144: :^)
bmk#1476: i dont have a yard
Louis#0144: thermal throttling is gonna be a serious issue
bmk#1476: this is alberta
Louis#0144: also you should let your city know, they look for people cooking meth by photographing their roof during the winter
Louis#0144: dont know anyone whos had police show up with a warrant bc of a server
Louis#0144: but dont be the first
bmk#1476: i dont think thats a serious concern for me
researcher2#9294: Do you own, can you run a duct extractor directly into roof?
Louis#0144: ^
Louis#0144: thats a good point
researcher2#9294: though it does come back down
bmk#1476: i dont have anything illegal except for the several hundred tb of illicit materials on my nas
researcher2#9294: slowly
Louis#0144: you can put radiators in your attic
Louis#0144: LMAO
researcher2#9294: roff
Louis#0144: tbh though if youre in alberta
Louis#0144: put the radiators on your windowsill
Louis#0144: and make a jank AC unit
Louis#0144: LOL
bmk#1476: look
researcher2#9294: make a video with Linus
researcher2#9294: profit
Louis#0144: use your computer as the condenser
bmk#1476: i think we're slightly past the point of useful suggestions at this point
Louis#0144: :^)
Louis#0144: ok but the duct idea is legit a good recommendation
Louis#0144: I did that for a while
Louis#0144: its gonna give off a LOT of heat
Louis#0144: especially if its in your room
bmk#1476: eh im used to my room being a few degrees hotter than the surroundings from my single 1080ti
bmk#1476: how much worse could it be
Louis#0144: oh boy
Louis#0144: LOL
Louis#0144: I was sweating in northern NY in January
Louis#0144: no heat on
Louis#0144: dude
Louis#0144: its way worse
Louis#0144: LMAO
bmk#1476: how cold does it get there anyways
Louis#0144: pfft
Louis#0144: it was like -10C or so?
Louis#0144: p cold
bmk#1476: thats nothing
researcher2#9294: Not going to make any more suggestions. Now questions for louis, any advantages to having all in 1 motherboard vs distributed?
bmk#1476: -10 is not cold
Louis#0144: I lived in Waterloo for a bit where it regularly hit -40c
Louis#0144: its not that cold compared to waterloo
Louis#0144: waterloo is fucking awful
Louis#0144: im so happy I left
bmk#1476: its even colder up here, im pretty sure
Louis#0144: @researcher2 its a space vs speed trade off
bmk#1476: another 10 degrees north
Louis#0144: distributed is often faster but takes up more space
Louis#0144: 1 mobo takes up less space, denser, allows for different cooling architectures
Louis#0144: distributed means its easier to repair when something goes wrong
Louis#0144: no down time
Louis#0144: and distributed most of all means higher redundancy
Louis#0144: which was useful for me back in the day since i had a 3TB datset
Louis#0144: dataset*
researcher2#9294: I will get to understanding of scaling soon hopefully. Just got my code working on 1 gpu, next is distributed. Working with pytorch btw. Next step is pytorch distributed, but I see facebook using slurm in their code too?
Louis#0144: look into super computer schedulers
researcher2#9294: You need to load the full model on each gpu?
Louis#0144: your life will be controlled by the scheduler
Louis#0144: depending on where you deploy
researcher2#9294: right, ok
Louis#0144: I like CRON
Louis#0144: it allows for CUDAoverSSH
Louis#0144: LMAO
Louis#0144: meaning I can distribute cuBLAS over SSH
researcher2#9294: cron, not linux cron?
Louis#0144: pretty low latency
Louis#0144: theyre the same thing
researcher2#9294: kk
Louis#0144: you know screen under linux right?
researcher2#9294: yus
Louis#0144: basically all that CRON does is remotely create a new screen and then output some result back over SSH
Louis#0144: latency is super low
researcher2#9294: sounds very easy
Louis#0144: ya
Louis#0144: CRON is the idea that tasks are small enough that you can basically rely on the scheduler linux directly provides
Louis#0144: unlike SLURM
researcher2#9294: most of my distributed stuff has been sockets, but using existing linux stuff would be much faster (to develop)
Louis#0144: which is its own entire system
Louis#0144: SLURM basically schedules an appointment for u
Louis#0144: LMAO
Louis#0144: whereas CRON tries to break down your task as much as possible
Louis#0144: Sage uses CRON
Louis#0144: since Sage is actually super easy to distribute
researcher2#9294: when you say cron, your setup is whacking stuff in the local linux scheduler (cron), that just runs a script to ssh and return remote result.
Louis#0144: kinda
Louis#0144: last time I used CRON i had 4 CPUs and I set it up so that each node would be receiving some subset of BLAS instructions
Louis#0144: the results were then gathered back
Louis#0144: but the tasks were small enough such that anyone else trying to do the same thing could easily find space
Louis#0144: the idea with CRON is to break the task up into super small steps
Louis#0144: higher bandwidth requirements but lower latency
researcher2#9294: er
researcher2#9294: doesn't smaller jobs = more requests = more aggregate latency?
Louis#0144: latency as in how long till you can start computing
Louis#0144: with CRON you need to know the exact set of instructions that are going to occur
Louis#0144: and you schedule them at fixed intervals
Louis#0144: idk if Im explaining this well
researcher2#9294: I think I get a general idea
researcher2#9294: if you have a github with this it would complete the picture
researcher2#9294: or someone elses
Louis#0144: but the idea is you break it down as far as you can, schedule them to run at fixed times, and then any gaps between those fixed times can be taken by someone else
Louis#0144: the wiki page is more than enough
Louis#0144: look at multi-user capabilities under cron
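A minimal sketch of the pattern Louis is describing: cron fires at a fixed interval, and each firing ships one small, pre-planned chunk of work to a remote node over SSH. The hostnames, paths, and worker script are all hypothetical:
```
#!/usr/bin/env python3
# dispatch_chunk.py -- invoked by cron, e.g. with the crontab entry:
#   * * * * * /usr/bin/python3 /home/me/dispatch_chunk.py
# Each run sends exactly one queued work unit to the remote node and stores
# the result, so gaps between runs stay free for other users of the machine.
import subprocess
from pathlib import Path

QUEUE = Path("/home/me/chunks")    # one file per pre-computed work unit
DONE = Path("/home/me/results")

def dispatch_one() -> None:
    chunks = sorted(QUEUE.glob("*.task"))
    if not chunks:
        return                     # nothing queued this interval
    chunk = chunks[0]
    # hypothetical worker.py on node1 reads the chunk name and computes it
    result = subprocess.run(
        ["ssh", "node1", "python3", "worker.py", chunk.name],
        capture_output=True, text=True, timeout=50,
    )
    (DONE / chunk.name).write_text(result.stdout)
    chunk.unlink()

if __name__ == "__main__":
    dispatch_one()
```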
researcher2#9294: fixed times, so you're not doing pool type setup, but a whole bunch of jobs that just pick at a data source?
researcher2#9294: I c
Louis#0144: ya
Louis#0144: exactly
Louis#0144: CRON isnt good for DL
Louis#0144: lol
researcher2#9294: OK
Louis#0144: or anything stochastic
researcher2#9294: I thought we were talking about DL lol
Louis#0144: no
researcher2#9294: hence my 🤷
Louis#0144: do not use CRON for anything stochastic
Louis#0144: at all
Louis#0144: use CRON for traveling salesman
Louis#0144: or linear programming stuff
Louis#0144: where you need to update a portfolio at fixed intervals
Louis#0144: things like that
researcher2#9294: tbh, it would actually work as long as your setup has identical hardware and jobs
Louis#0144: which is basically never...
researcher2#9294: k
Louis#0144: not in DL
researcher2#9294: Thanks for the rundown, I now know who to harass for distributed compute questions 😄
Louis#0144: oof
researcher2#9294: dw my first lecturer said to spend hours on your own first
researcher2#9294: only ask if stuck
researcher2#9294: google does 98% of problems
Louis#0144: ANYWAY
Louis#0144: everytime I check up on intel
Louis#0144: the dumpster fire rages on
Louis#0144: apparently like not even their chief officers knew of the 7nm delay
Louis#0144: LMAO
Louis#0144: well like not all of them
Louis#0144: what a fucking joke
researcher2#9294: wtf is going on there, I bought the stonks after crash
Louis#0144: its a mess
researcher2#9294: after second crash
Louis#0144: they dont know how to compete with AMD
researcher2#9294: (7nm)
Louis#0144: and bob swan
Louis#0144: has his head
Louis#0144: so far up his ass
Louis#0144: that Im worried hes become a torus
Louis#0144: swan needs to step tf down
Louis#0144: hes running intel into the ground
researcher2#9294: likelihood?
Louis#0144: they arent gonna go bankrupt if thats what you mean
Louis#0144: lol
Louis#0144: but like
researcher2#9294: I'm thinking long term
researcher2#9294: I want a duopoly not monopoly
Louis#0144: long term they need to stop making enthusiast chips
Louis#0144: they need to focus on budget
Louis#0144: theyve lost the enthusiast market
Louis#0144: they have almost zero chance to come back
Louis#0144: all the fabs are denying them contracts
Louis#0144: and they cant get their own fabs up
Louis#0144: its a mess
researcher2#9294: 😒
Louis#0144: as it stands right now if they dont focus on budget theyre gonna be hurting a LOT in a few years
Louis#0144: they need to do what AMD did back in like 2004
Louis#0144: no more enthusiast CPUs at all
Louis#0144: focus low power
Louis#0144: and low power enterprise
Louis#0144: intel management is too fucking stuck up to admit it
Louis#0144: they arent even gonna try to save themselves
Louis#0144: theyd much rather go down in a ball of fire
researcher2#9294: I have no clue about the chip industry. I always thought it was basically an underlying architecture which is progressively crippled going down the price tiers.
Louis#0144: the issue is if they admit it their stock is gonna tank so hard
Louis#0144: like theres gonna be another big crash soon
Louis#0144: tbh at this rate it might be 3 or 4 years before intel 7nm
researcher2#9294: plz no
Louis#0144: at that point AMD will be past 7nm....
Louis#0144: they might be at 5 by then
Louis#0144: I think their 5nm fabs go up in 2024
Louis#0144: lol
Louis#0144: last time I checked
researcher2#9294: Surely this isn't just a ceo issue?
Louis#0144: no
researcher2#9294: are they brain draining or something?
Louis#0144: its management
Louis#0144: its not the CEO
Louis#0144: its all of the management
Louis#0144: intel has the best engineers in the business
Louis#0144: you mean to tell me that some of the most brilliant people in the world cant make CPUs that stomp out AMD?
Louis#0144: intel has so much more man power too
Louis#0144: theyre like almost a magnitude larger
researcher2#9294: so they're not focusing on long term architecture improvement but rather trying to maximize now (enthusiast gear)?
Louis#0144: the issue is that they need a massive restructuring
Louis#0144: but if they admit this their stock is gonna plummet again
Louis#0144: lol
bmk#1476: idk but as an enthusiast i must say
bmk#1476: intel enthusiast chips suck
Louis#0144: they do!!
Louis#0144: thats part of the issue
bmk#1476: so idk what you mean
bmk#1476: if theyre focussing on enthusiasts too much im not feeling it
Louis#0144: intel enthusiast chips have fallen behind because for so long they had zero competition
Louis#0144: so now theyre trying to catch up to AMD
Louis#0144: but theyre failing
Louis#0144: because AMD has such a massive headstart
Louis#0144: so the main thing for them to do is restructure, forget enthusiast chips, and focus on budget and low power
Louis#0144: intel has major rooted issues
Louis#0144: this isnt just a "oh AMD is doing better than them, finally some competition"
Louis#0144: this is a "intel is really in a tough spot... this might entirely change their future"
Louis#0144: intel is getting fucking curb stomped right now
Louis#0144: theyre scared shitless
Louis#0144: they have absolutely zero response to AMD and they wont have a response for ATLEAST another 4 years
bmk#1476: can amd please do this to nvidia too
Louis#0144: LOL
Louis#0144: intel NEEDS to bend over and take it in the butt from AMD
Louis#0144: give up the enthusiast and server market
Louis#0144: focus on super low power
Louis#0144: that would fix this a lot
Louis#0144: thats what AMD did back in 2004-2015
Louis#0144: maybe in a decade they can come back who knows
Louis#0144: but if they brute force enthusiast hardware theyre going to hemorrhage money
Louis#0144: oh and swan needs to step down
Louis#0144: bringing on swan was an awful idea
Louis#0144: idk wtf they were thinking
bmk#1476: why cant amd's gpu wing do as well
bmk#1476: amd gpus still suck for ML
bmk#1476: its almost as if they dont give a shit about ML
asparagui#6391: when you say using cron as a job scheduler what are you talking about?
Louis#0144: Fixed time scheduling
Louis#0144: Not useful in DL
Louis#0144: useful for other stuff
asparagui#6391: kk yeah that's what i thought you meant
asparagui#6391: you don't like slurm?
Louis#0144: I like slurm
Louis#0144: It’s what I use
Louis#0144: I was asked about cron
asparagui#6391: must have misread things
Louis#0144: I like both CRON and SLURM
Louis#0144: they’re both good
Louis#0144: Both have their benefits
Louis#0144: I use SLURM for DL stuff when needed
Louis#0144: I use CRON for when I need to distribute lots of interrupts
Louis#0144: Not that useful for the kind of stuff I do anymore
Louis#0144: But it used to be useful for me
asparagui#6391: what do you mean by interrupts?
Louis#0144: CRON issues CPU hardware interrupts at fixed time intervals
Louis#0144: That’s how it works
Louis#0144: It tells the CPU to change frames and do whatever CRON tells it to do
asparagui#6391: scheduling something every five seconds, you mean?
Louis#0144: If you need lots and lots of interrupts spread over a network
Louis#0144: CRON is perfect then
Louis#0144: Well five seconds is long
Louis#0144: I was doing stuff like every millisecond
Louis#0144: Typically
asparagui#6391: ahh kk
Louis#0144: It’s good for high volume trading stuff for instance
Louis#0144: It’s good for updating portfolios in real time
Louis#0144: It’s good for GPS stuff
Louis#0144: It’s good for operations optimization (like manufacturing plants)
Louis#0144: Anything that needs to be real-time
Louis#0144: A thing to note is that CRON tasks can be skipped and timing can dynamically change
_harias_#9907: Just came across this on HN: https://learning-at-home.github.io/
kindiana#1016: its pretty cool, but to train gpt3 sized networks the bandwidth required is almost prohibitive, on the order of tbps for months
Liminal_Warmth#8151: Hi Folks!
Liminal_Warmth#8151: I just got invited 🙂
Liminal_Warmth#8151: Excited to join the conversation
Louis#0144: Welcome to the party
Louis#0144: 🎉
Daj#7482: Hey @Liminal_Warmth ! Welcome!
Daj#7482: Check the onboarding doc in the #deleted-channel description and don't hesitate to ask questions, the regulars are happy to help and know what's going on
Liminal_Warmth#8151: Uh... while I'm here, I don't suppose anyone has a link to the Toronto fiction dataset you could slide my way? I was planning to painstakingly reconstruct from smashwords bit by bit but figured it was worth asking
Liminal_Warmth#8151: Ideally I'd like the unprocessed text... but I'm happy to roll my sleeves up and get scripting too (that was my plan for today and tomorrow)
Liminal_Warmth#8151: How are you guys financing this right now? Entirely on research credits?
Louis#0144: TRFC
Louis#0144: TFRC*
Sid#2121: yes, and some private donors
Sid#2121: I'd never heard of that specific corpus but I'd wager we could provide you with *many* more books hah
Sid#2121: but we err on the line of legality, slightly
Liminal_Warmth#8151: Smart
Liminal_Warmth#8151: Isn't the smashwords corpus legal though?
Louis#0144: all corpi are legal
Liminal_Warmth#8151: It's obtaining books that are currently free
Louis#0144: lol
Liminal_Warmth#8151: Granted it hasn't been challenged in court
Louis#0144: You can give any amount of copyrighted information to an AI
Louis#0144: perfectly legal
Liminal_Warmth#8151: ehhh
Louis#0144: It has been challenged before
Liminal_Warmth#8151: it is legal-ish
Louis#0144: I can find the case later
Liminal_Warmth#8151: the google ruling in 2015 applied to search cases, not generative
Liminal_Warmth#8151: that hasn't been tested yet
Liminal_Warmth#8151: the author's guild has opinions about that
Liminal_Warmth#8151: but the precedent is good
Liminal_Warmth#8151: I've been doing a lot of research 😄
Louis#0144: you are entirely within your right to take copyrighted text and train LMs on it though, my lab does that constantly w/ legal advising
Louis#0144: you can even show copyrighted content to turkers
Liminal_Warmth#8151: hmm interesting
Liminal_Warmth#8151: that's useful and encouraging
Sid#2121: I'm pretty sure there's no precedent though is there @Louis
Louis#0144: yeah, we get to use harry potter for research with no fear of legal ramifications
Liminal_Warmth#8151: well...
Louis#0144: little to no fear
Liminal_Warmth#8151: haha
Liminal_Warmth#8151: it's all gravy until someone sues you and you lose
Liminal_Warmth#8151: however!
Liminal_Warmth#8151: that doesn't seem likely
Liminal_Warmth#8151: all the big players are obviously doing it
Louis#0144: theres no way they could justify that if its contained within common crawl
Liminal_Warmth#8151: and that outcome would be heavily opposed by anyone with an interest in AI... which is everyone
Louis#0144: most human literature is contained within common crawl
Sid#2121: @Liminal_Warmth seems like smashwords has now imposed download limits (https://github.com/sgraaf/Replicate-Toronto-BookCorpus/issues/2) , and I can't find Toronto fiction dataset anywhere, but yeah, we should have hundreds of GB worth of books soonish
Liminal_Warmth#8151: yeah they have
Sid#2121: if you want us to send some to you, ping @bmk or @shawwn
Liminal_Warmth#8151: have to rotate vpn every 50 downloads
Louis#0144: what you can get in trouble for is accidentally DDoS'ing when you scrape
Sid#2121: I'm not sure if we can easily divide out fiction or not unfortunately
Louis#0144: which Ive done before
Louis#0144: :^)
Louis#0144: luckily the company was cool about it
Liminal_Warmth#8151: If you guys CAN parse out the fiction I'd appreciate it. I'm assembling as big of a set as I can from available sources.
Liminal_Warmth#8151: But it's cool if not
Louis#0144: do you have FFO
Liminal_Warmth#8151: I need to think about how to clean it
Sid#2121: we just found 52GB worth of fanfiction
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/751528273707794552/Stories.zip
Louis#0144: I use this dataset for my work rn
Sid#2121: depends how good a quality text you want lol
Liminal_Warmth#8151: I purposes avoid fanfiction
Louis#0144: 250 short stories
Liminal_Warmth#8151: purposely!
Louis#0144: not fanfiction!
Louis#0144: flash
Sid#2121: hahaha, fair
Liminal_Warmth#8151: 😄
Louis#0144: Im a narratologist, I avoid fanfiction too
Louis#0144: 😛
Louis#0144: narratologist in training*
Liminal_Warmth#8151: I want the data to be published quality (or at least independently published quality) to improve the outcomes since I'd like to make this available to working authors who will want well-edited text and minimal spelling/grammar errors coming out
Liminal_Warmth#8151: not that fanfiction is bad
Liminal_Warmth#8151: just... it's not all good 😄
Louis#0144: What I sent you is 491 fiction short stories
Louis#0144: all cleaned
Louis#0144: lol
Liminal_Warmth#8151: Oh!
Liminal_Warmth#8151: Thank you
Louis#0144: np
Liminal_Warmth#8151: I appreciate that
Louis#0144: Theres weird stuff with footers "If you enjoy Flash Fiction Online, consider subscribing or purchasing a downloadable copy. Your donations go a long way to paying our authors the professional rates they deserve. For only $0.99/issue that’s cheaper than a cup of coffee. Or subscribe for $9.99/year."
Louis#0144: but that can be parsed out
Daj#7482: tbh if we scraped like top rated AO3 or something it'd probably be publishing quality
Daj#7482: Or, you know, the pirate way
Liminal_Warmth#8151: Yeah I had considered that!
Louis#0144: AO3 has a LOT of meme content
Liminal_Warmth#8151: but still nervous about Ao3
Louis#0144: you either want to scrape story prompts, flash fiction, or writing competitions
Daj#7482: Guess it depends on how big of data you want
Louis#0144: I have a much bigger dataset of short stories on my computer
Louis#0144: 4.6 GB
Louis#0144: close to 16k short stories
Louis#0144: theyre all good quality
Liminal_Warmth#8151: I'm also trying to figure out if I'm better off training a super-large diverse set or specializing corpi by genre and letting a user select what type of book they're writing
Daj#7482: Neat
Sid#2121: @Liminal_Warmth yeah ok so our data should be easily separatable into fiction / non fiction
Sid#2121: it's from here https://www.reddit.com/r/DataHoarder/comments/fyb6gt/4_million_txt_books_in_537gb/
Liminal_Warmth#8151: jeeeez
Liminal_Warmth#8151: that is large
Sid#2121: we have it downloaded already, so we should probably be able to send you the fiction stuff?
Daj#7482: We do large here lol
Liminal_Warmth#8151: Would love that
Louis#0144: books isnt the way to go if you want to do anything besides language modeling
Louis#0144: lol
Louis#0144: short stories are better
Liminal_Warmth#8151: Why?
Louis#0144: fewer issues with keeping track of frames, fewer issues with long range reasoning, fewer issues with coreference. SoTA for story generation is about ~20 sentences and is *still* rule-based
Louis#0144: lmao
Liminal_Warmth#8151: (Also I really appreciate your guys' advice, I'm pretty new to ML)
Louis#0144: LMs get stomped when it comes to story generation
Louis#0144: it isnt even a close comparison
Louis#0144: you basically are required to use symbolic or neuro symbolic
Daj#7482: I don't think longer or shorter books make a difference to liminal's use case
Liminal_Warmth#8151: The most powerful aspect of GPT-3 I've seen is that if you feed it 1000 words and ask for 10-30 more it retains both the writing style of the text and the context of the scene
Liminal_Warmth#8151: that's what I want to replicate
Daj#7482: Since if I understand correctly it's a LM task
Daj#7482: Yea
Liminal_Warmth#8151: hang on I did a few write-ups
Louis#0144: it doesnt retain context of the scene sadly
Louis#0144: it has massive issues with that
Daj#7482: GPT3 does amazingly well with that
Liminal_Warmth#8151: https://www.patreon.com/liminal_warmth/posts?filters%5Btag%5D=GPT-3
Louis#0144: ITs better than GPT2
Louis#0144: but its still not as good as ASTER or something from 2018
Louis#0144: or Dungeons2DQN
Liminal_Warmth#8151: I'm currently poking at GPT-2 again
Daj#7482: I strongly disagree there Louis
Liminal_Warmth#8151: Should I use a different one?
Louis#0144: lmao
Louis#0144: I will fight tooth and nail over this
Louis#0144: Ive been doing this for almost 9 yrs
Daj#7482: I...don't care that much lol
Louis#0144: Just started grad school on this specific topic
Daj#7482: GPT2 is probably the best open LM model @Liminal_Warmth
Sid#2121: @Louis interested in this rule-based story generation SoTa
Liminal_Warmth#8151: cool
Daj#7482: GPT3 will soon be available on a commercial basis
Daj#7482: Or at some point maybe our version of GPT3
Louis#0144: https://twitter.com/mark_riedl/status/1301551778509455366?s=20
Liminal_Warmth#8151: I'm experimenting with Colab first but eventually I need to figure out how to get google cloud set up for the 1.5B model
Louis#0144: this is current sota for story generation
Louis#0144: still symbolic
Liminal_Warmth#8151: I can't use GPT-3 for my use case right now
Daj#7482: I see. Well you can try Louis' symbolic based stuff but I was never impressed by kt
Daj#7482: But it's super subjective
Sid#2121: @Liminal_Warmth we'll be releasing > GPT2 models soon
Louis#0144: @Liminal_Warmth if you are set on using GPT2 you probably want to use a switching dynamic to track world states (its end to end, just stores a seperate variable for world state- not actually symbolic)
Sid#2121: and trained on better data 🙂
Sid#2121: with colab notebooks, hopefully
Louis#0144: https://arxiv.org/abs/2004.03762
Liminal_Warmth#8151: haha that would be awesome
Sid#2121: I think it's still to be seen at what number of parameters finetuning becomes pointless and prompt engineering becomes the way to do things
Daj#7482: fwiw I've worked with a company that makes writing help for authors too and most authors are blown away by GPT2/3 out of the box and don't really need complex world state
Liminal_Warmth#8151: I'm a product person at heart, I just pretend to be a dev... so I'm often in way over my head on ML
Louis#0144: oh yeah GPT2 is good for collaboration
Liminal_Warmth#8151: But I've _really_ upleveled my python skills playing with all this 😄
Louis#0144: since you can cherrypick
Louis#0144: but it isnt good without cherrypicking
Louis#0144: need I bring out the unicorn example?
Louis#0144: ;P
Louis#0144: the unicorn example was touted when GPT2 came out but tbh it was awful
Daj#7482: Yea that's why I'm saying for this use case LM and good UX is fine
Liminal_Warmth#8151: Collaborative writing is the model I want anyway
Liminal_Warmth#8151: the author community isn't ready for guided auto-gen
Liminal_Warmth#8151: gotta ease people in
Daj#7482: Yea, im Sure people around here are happy to give advice, especially once our model works
Liminal_Warmth#8151: even collaborative AI writing, when I demo'd our beta to some author friends, was EXTREMELY anxiety producing
Sid#2121: > Yea, im Sure people around here are happy to give advice, especially once our model works
@Daj it does work lol!!
Daj#7482: Though tbh you should mostly just use Hugging Face's stuff lol
Louis#0144: but it isnt autogen... Stuff like Dungeons2DQN takes the pre-existing story, constructs a world state variable, and then suggests the next event
Sid#2121: it's just slow 😦
Daj#7482: > @Daj it does work lol!!
@Sid sampling? Of the big model? On GPU?
Sid#2121: oh, on TPU
Daj#7482: Yea slow hah
Sid#2121: but colab has tpus
Daj#7482: But I saw the loss
Liminal_Warmth#8151: The presentation and messaging around stuff like this is really key... it's going to scare a lot of people
Sid#2121: and my latest push speeds it up quite a bit
Daj#7482: Did the model die at 40k steps btw?
Sid#2121: no, it's still going
Daj#7482: Huh I think the tensorboard is stuck?
Daj#7482: Or it was earlier
Daj#7482: > The presentation and messaging around stuff like this is really key... it's going to scare a lot of people
@Liminal_Warmth agree btw, more a psychological problem than a technical one
Louis#0144: @Liminal_Warmth Speak with Chris Martens at NCSU or Max K. at Santa Cruz. Theyre both doing stuff like this. Chris is taking a 100% symbolic approach, Max is using GPT2 to help collaborate on video game scripts
Liminal_Warmth#8151: > but it isnt autogen... Stuff like Dungeons2DQN takes the pre-existing story, constructs a world state variable, and then suggests the next event
@Louis This is cool... where can I read more about this?
Louis#0144: Soon, the final paper comes out rly soon
Louis#0144: in like a few days
Louis#0144: lol
Liminal_Warmth#8151: Please send the link when you can!
Sid#2121: collaborative writing > autogen @Louis
Liminal_Warmth#8151: I'd love to read that
Louis#0144: kk
Daj#7482: Neat, post it here when it does Louis
Sid#2121: > Huh I think the tensorboard is stuck?
@Daj yep looks like it :/ idk
Daj#7482: But loss looks really good
Sid#2121: it's at ~60k
Daj#7482: We did it boys
Sid#2121: i'll grab some samples
Louis#0144: https://www.youtube.com/watch?v=YemciyRtYeI
Louis#0144: Heres the talk
Louis#0144: paper comes out soon
StellaAthena#3530: @Louis is this you?
Louis#0144: My lab
Louis#0144: not me
StellaAthena#3530: Nice
Louis#0144: thats my advisor
Liminal_Warmth#8151: ooh thanks for sharing! I'll watch this now
StellaAthena#3530: @Louis do your have access to a data set of text from text games?
Louis#0144: yes
Louis#0144: uh
Louis#0144: I might be able to send it to u
Louis#0144: but I doubt I can share it around to everyone
Louis#0144: Id need to ask mark
StellaAthena#3530: That could be very helpful for the Pile, depending on what size it is and what exactly it contains
Liminal_Warmth#8151: god there's so much to read about this and not enough hours in the day lol
Daj#7482: Welcome to machine learning
Daj#7482: Where Google publishes a paper that completely revolutionizes your field on average every 3-6 months
Daj#7482: Or 4 times in 4 weeks
NB-21#6298: Hey I'm curious, are you guys generating any synthetic data for the pile? Could be feasible to generate large amounts and interesting for things like math word problems and maybe some simple reading comprehension
Daj#7482: Not at the moment afaik
Daj#7482: The idea has been batted around a few times
Daj#7482: But we just have _so much_ real data and so little dev time
StellaAthena#3530: We have a lot more real data than we have developers or compute time
StellaAthena#3530: Data is not a bottleneck
NB-21#6298: What's the main bottleneck with existing datasets then?
Louis#0144: developers
Louis#0144: lol
NB-21#6298: Makes sense
shawwn#3694: > The most powerful aspect of GPT-3 I've seen is that if you feed it 1000 words and ask for 10-30 more it retains both the writing style of the text and the context of the scene
> that's what I want to replicate
Here you go: https://github.com/shawwn/gpt-2/blob/stream-tokens/src/generate_samples.py
shawwn#3694: @Liminal_Warmth
shawwn#3694: GPT-2 seems to be able to do that task
Liminal_Warmth#8151: Excccccellent
Liminal_Warmth#8151: I have to go record a podcast but I will sift through all of this later today and come back with some questions
Liminal_Warmth#8151: Thank you all so much for being so helpful!
shawwn#3694: sure!
Liminal_Warmth#8151: quick question: Is it possible with GPT-2 to give it a larger prompt context than the 1000 tokens or so GPT-3 allows?
Liminal_Warmth#8151: Like could I feed in 10k words for more context on the next line?
shawwn#3694: No, GPT-2 is generally half the size of GPT-3 in that aspect
Liminal_Warmth#8151: ah unfortunate
shawwn#3694: 1024 tokens vs GPT-3's 2048
Liminal_Warmth#8151: that's big enough to work
Liminal_Warmth#8151: but not ideal
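If the source text runs past the window, the usual workaround is to feed only the most recent ~1024 tokens. A sketch assuming the HuggingFace GPT-2 tokenizer (any GPT-2 BPE tokenizer behaves the same way):
```
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")

def tail_prompt(text: str, reserve_for_completion: int = 100) -> str:
    """Keep the last (1024 - reserve) tokens so the model has room to write."""
    ids = tok.encode(text)
    return tok.decode(ids[-(1024 - reserve_for_completion):])
```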
shawwn#3694: If you want to run the sampler, you can do so like:
```
git clone https://github.com/shawwn/gpt-2
cd gpt-2
git checkout stream-tokens
python3 download_model.py 117M
python3 src/generate_samples.py
```
and it'll start spitting out a rolling context window https://cdn.discordapp.com/attachments/729741769738158194/751541090070560899/unknown.png
shawwn#3694: you can give it a prompt with --prompt somefile.txt
Liminal_Warmth#8151: You guys assume a lot of my technical skills but I'll do my best ^_^
Liminal_Warmth#8151: Still very much a newbie
shawwn#3694: no worries. We all were
Liminal_Warmth#8151: Is there a good guide anywhere for setting up the 1.5B model in google cloud?
Liminal_Warmth#8151: I assume that's much faster than using Colab TPUs?
Daj#7482: Speed will depend on your hardware and implementation
Liminal_Warmth#8151: Right so costs scale |
Daj#7482: Yep
Liminal_Warmth#8151: But I had a tech savvy author tell me he got a decent model working in Google cloud with about 10 hours of training for $40
Daj#7482: Yea that sounds about right
Liminal_Warmth#8151: That's totally worth it 😄
Daj#7482: You can get a pretty high end GPU for that price
Liminal_Warmth#8151: Especially with the $300 free credits
Daj#7482: Unfortunately I'm totally out of the loop as to what beginner material would be good
Liminal_Warmth#8151: I just don't want to burn them before I know what I'm doing
Liminal_Warmth#8151: which is why I'm looking at Colab first
Daj#7482: So can't help much there I'm afraid, I'm too far into the deep end lol
Daj#7482: Yea Definitely get a colab working first
Daj#7482: tbh you can probably do equivalent training for free on colab with some more patience and tweaking
Liminal_Warmth#8151: I'll play with a bunch over the weekend
Liminal_Warmth#8151: probably best to start on 117M first, get it down, and then move up
Liminal_Warmth#8151: And I still have to clean my data set a bit anyway
Daj#7482: Yea that sounds like a good plan
Sid#2121: @Liminal_Warmth I've found colab pro to be good for my needs
Sid#2121: you can get pretty much all the high-ram GPUs and TPUs you would need for like <$10 a month
Liminal_Warmth#8151: But what if I'm very impatient? 😄
Sid#2121: refresh the page until you get a V-100 lol
Sid#2121: you get assigned a GPU pretty much at random. there's P100s, some other ones, and V100s
Sid#2121: the V100s are pretty speedy
Louis#0144: You can combine your LM with some persistent knowledge base
Louis#0144: there’s a ton of papers on doing that with BERT
Louis#0144: Basically once you hit the maximum of your window, you start putting data into a KG that your LM can query
Louis#0144: The KG can be literal sentences
researcher2#9294: KG, is that like this MARGE the brains trust has been playing with?
Louis#0144: Knowledge graph
researcher2#9294: Oh right, is that symbolic reasoning?
Louis#0144: Kinda
Louis#0144: Pseudo symbolic
Louis#0144: They’re not really symbols
Louis#0144: They can literally be text
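A toy version of that idea, for concreteness: text that falls out of the window goes into a plain-sentence memory and gets pulled back into the prompt by relevance. Retrieval here is crude word overlap; the BERT papers Louis mentions use learned retrievers instead:
```
class SentenceMemory:
    """Stores overflow sentences; returns the k most prompt-relevant ones."""
    def __init__(self):
        self.sentences = []

    def add(self, sentence: str) -> None:
        self.sentences.append(sentence)

    def query(self, prompt: str, k: int = 3) -> list:
        words = set(prompt.lower().split())
        ranked = sorted(self.sentences,
                        key=lambda s: len(words & set(s.lower().split())),
                        reverse=True)
        return ranked[:k]

mem = SentenceMemory()
mem.add("Alice lost her sword at the northern bridge.")
mem.add("The innkeeper owes Alice three silver coins.")
# prepend retrieved memories to the next prompt before generating
context = " ".join(mem.query("Alice returns to the bridge"))
```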
researcher2#9294: Louis, halp
researcher2#9294: 😄
researcher2#9294: https://discuss.pytorch.org/t/saving-and-loading-optimizers-in-distributed-data-parallel-situations/49036
researcher2#9294: how tightly coupled is the optimizer to nn.dataparallel
researcher2#9294: ?
researcher2#9294: Apparently I can just save and load the underlying model which is easy, but I didn't get what they were saying about the optimizers.
researcher2#9294: The optimizer must be initialized on the DataParallel model before loading the state_dict so my guess is yes (tightly coupled)?
researcher2#9294: Not that I plan on shuffling GPUs, but it's nice to not have to ever worry
researcher2#9294: https://cdn.discordapp.com/attachments/729741769738158194/751648676035821568/unknown.png
researcher2#9294: This confuses me because I've never had to move the optimizer to a device before with single gpu?
researcher2#9294: Does it automatically go onto the same device as the model is on before the optimizer is created?
researcher2#9294: Yeah, confused
researcher2#9294: ok nevermind, looks like the guide I'm reading is a bit out of date
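For the record, the standard recipe for the question above is to save the unwrapped model (`model.module`) plus the optimizer state dict, and remap devices at load time. A sketch for nn.DataParallel (DistributedDataParallel is analogous):
```
import torch

def save_ckpt(model, optimizer, path):
    torch.save({
        "model": model.module.state_dict(),  # strip the DataParallel wrapper
        "optim": optimizer.state_dict(),
    }, path)

def load_ckpt(model, optimizer, path, device="cuda:0"):
    # build the model, wrap it in DataParallel, and create the optimizer over
    # its params *before* calling this; the optimizer is coupled to those params
    ckpt = torch.load(path, map_location=device)
    model.module.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optim"])
```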
researcher2#9294: https://cdn.discordapp.com/attachments/729741769738158194/751650389861990470/unknown.png
Louis#0144: Oh god
researcher2#9294: And NCCL missing on windows too
Noa Nabeshima#0290: this is gold https://cdn.discordapp.com/attachments/729741769738158194/751657962497507348/image0.jpg
bmk#1476: `???`
bmk#1476: `pls halp`
Noa Nabeshima#0290: https://arxiv.org/abs/2009.01325
Noa Nabeshima#0290: Last page
Noa Nabeshima#0290: oh I might have misunderstood
Noa Nabeshima#0290: that's where it's from anyhow
Louis#0144: Nice dog bro
Sid#2121: @Daj the model is already spitting out D&D text
Sid#2121: ```Dungeons & Dragons is a roleplaying game. What does this mean exactly? It means that D&D can be used to create everything from a game of Dungeons & Dragons, to seven-legged monsters that bear a striking resemblance to a three-dimensional version of the character in your favourite action movie. But more importantly, Dungeons & Dragons bridges the gap between miniature gaming and roleplaying with a richly detailed world and vivid characters. This established the rules for the roleplaying medium and made mass-market paper and dice games a thing of the past. But as many of us were suddenly reminded last week when EA announced it was ending development of Dungeons & Dragons 3rd Edition in order to focus its efforts on the MMO/online shooter Battlefield 4, a story of dependency is being told.
```
Sid#2121: `A little earlier this month, a friend emailed me to tell me she'd got a panic attack after playing a D&D session via the internet.`
Sid#2121: ```
This author knows what it's like to have an online session where people eat your face and play with dolls and league tables.``` is this what d&d is lol?
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/751814490353565746/predictions_70000.txt
Daj#7482: Yes that sounds very accurate
Daj#7482: I think AGI is achieved
Sid#2121: ok, we can go home now
Ravna#1831: > this is gold
Ravna#1831: so it learned that clickbait equals good summarization
Ravna#1831: not entirely wrong
AI_WAIFU#2844: > this is gold
This is social commentary.
StellaAthena#3530: Nice nice
Deleted User#0000: lel
jmmr#8671: i heard this was a good place to talk about agi
bmk#1476: yup you've come to the right place
Noa Nabeshima#0290: Hey, so I think we need better AGI terminology
Noa Nabeshima#0290: Here's a concept that I think is nice
Noa Nabeshima#0290: LGI for "language general intelligence"
Noa Nabeshima#0290: 80% LGI would be a language model that can answer any question at the 80-th percentile of humans for that question
Noa Nabeshima#0290: So then 90%, 100%, 50% all follow
Noa Nabeshima#0290: GPT-3 is not a very high percentage LGI because there are straightforward questions you can ask that make sense in context that it has trouble answering. But it is higher than 1%.
Noa Nabeshima#0290: It's ambiguous to what extent the definition includes things like "includes appropriate context"
Noa Nabeshima#0290: I think a good way of defining it would be the LM is set up such that it can be prompted with a question and it understands that you're asking it for information for the LGI task.
Noa Nabeshima#0290: Or maybe even, "in the appropriate context, can this LM answer questions at p% of humans for any question?"
Noa Nabeshima#0290: And you can extend the term to other modalities
Language Vision General Intelligence would be LVGI
Noa Nabeshima#0290: Although it's not clear how to benchmark it
Noa Nabeshima#0290: And you could imagine language models that are very high percentile in most domains but fail in some small percentage: if you don't know Mandarin and 15% of alive humans speak Mandarin you're capped at 85% LGI despite being intelligent overall
Noa Nabeshima#0290: But most humans can't for example predict the next pixel of an image given the binary representation, so you don't need your language model to be very good at that to match the criterion.
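One way to make this definition operational, sketched under the assumption that each question can be graded numerically for both the model and a human pool: take the model's percentile per question and, since the definition says *any* question, report the minimum:
```
def lgi_percentile(model_scores: dict, human_scores: dict) -> float:
    """model_scores[q] -> model's graded answer on question q;
    human_scores[q] -> list of human scores on the same question.
    Returns the worst-case percentile across questions."""
    percentiles = []
    for q, m in model_scores.items():
        humans = sorted(human_scores[q])
        rank = sum(h <= m for h in humans) / len(humans)
        percentiles.append(rank)
    return min(percentiles)   # 0.8 here would mean "80% LGI"
```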
chirp#4545: personally my favorite AGI-adjacent concept is "Transformative AI"
chirp#4545: focuses on what the impact will be, rather than the inner workings
cfoster0#4356: Not to clog this with even more terminology, but I also like Drexler's notion of "Comprehensive AI Services" (CAIS) which would probably frame GPT-N as one implementation of an English language AI service
VonChair#2222: Hi
bmk#1476: hey @VonChair !
Liminal_Warmth#8151: Hey I had a question I wondered if anyone here has looked at--I've been researching server costs to run GPT models and I saw someone say "just 50 requests/second for an instance running a 1gb model [of GPT-2] generating 50-100 words can crush a machine." Does this match what you guys have seen? Do you know if there are any ways to optimize this?
StellaAthena#3530: What was the context of that sentence? Setting aside if it’s computationally reasonable, the *whole point* of running it on a server is to... well... do the computation on the server.
Liminal_Warmth#8151: It's from writeup.ai's description of the limitations of a textgen model on the server the creator built
Liminal_Warmth#8151: I'm trying to work backwards to figure out the cost of hosting your own model that you've trained
Liminal_Warmth#8151: for the 1.5B parameter model
Liminal_Warmth#8151: But I'm surprised it would struggle so much with requests post-training which is what I would think the really resource intensive period would be
WAUthethird#4977: Inference is less intensive than training
StellaAthena#3530: Inference is a lot less intensive than training
StellaAthena#3530: If you have the ability to train a model, you can absolutely do inference on it.
gwern#1782: of course, you aren't training GPT-2 from scratch on 1 GPU... so even talking about serving on 1 GPU implies that
VonChair#2222: Why not just try it?
VonChair#2222: Can it be computed by a CPU?
VonChair#2222: Or does it need a GPU?
gwern#1782: yeah, you can do it on CPU, assuming it's a good CPU. I think I benchmarked my threadripper at like 15x slower than my 1080ti
gwern#1782: fabrice bellard, iirc, had a public gpt-2 CPU demo, written in C++ or something, the mad man
WAUthethird#4977: The more threads you have, the better, but it does work
VonChair#2222: I have a 64 core Dell R820 doing nothing right now.
Liminal_Warmth#8151: But like let's say I set up GCP to get it trained and then once we have the model, I'm trying to build service to throw requests at it
VonChair#2222: Do you have an OVA file I can import to see what happens to the CPUs?
Liminal_Warmth#8151: How many requests should I be able to throw per second at a V100
Liminal_Warmth#8151: for example
WAUthethird#4977: I wouldn't say requests/sec is a particularly good measurement
Liminal_Warmth#8151: When you're trying to calculate concurrent users an app could support?
Liminal_Warmth#8151: What would be a better metric to look at?
WAUthethird#4977: generation time per token?
kindiana#1016: with an efficient implementation you should be able to generate 1000s of tokens per second, assuming sufficient batching
Liminal_Warmth#8151: then my next question is how many parallel threads can it do 😄
gwern#1782: (but isn't that conditional on the size of the prefix? a full 1023 BPE token prefix will take a lot longer to generate the next token than a 0 BPE empty prefix)
StellaAthena#3530: If the server is just for you, or for a small group you won’t be running it until it breaks.
StellaAthena#3530: You care more about tokens per second
Liminal_Warmth#8151: and yeah Gwern is right, I'd be pushing toward the high end of the token submission
Liminal_Warmth#8151: probably 800+
kindiana#1016: if you only need context+completions less than the context size you can do it a lot more efficiently
WAUthethird#4977: but assuming it's for a service it'll probably be more variable than that
Liminal_Warmth#8151: The server would be for a commercial app I'd want to scale up with users to avoid disappointing people... I'm trying to project costs
Liminal_Warmth#8151: I think we had 20 alpha testers on GPT-3 so I'd need to handle at least that... but that's not 20 requests per second
Liminal_Warmth#8151: It could be spiky but with our design I wouldn't expect more than 10-20 requests per second even with well over 200 users
Liminal_Warmth#8151: and that seems high, but I'm erring on the side of caution
WAUthethird#4977: what GPUs would you be planning on running?
May be best to benchmark them with your setup to gain a more realistic approximation of your needs, rather than relying only on requests per second
Assuming of course you don't already have that data
StellaAthena#3530: Oh commercial use is a different ballgame
StellaAthena#3530: > May be best to benchmark them with your setup to gain a more realistic approximation of your needs, rather than relying only on requests per second
^^ This
Liminal_Warmth#8151: I do not have the data already 😄
Liminal_Warmth#8151: But hope to go start experimenting soon
Liminal_Warmth#8151: I just wondered if anyone happened to have tried to do this already and had a ballpark idea of load
Liminal_Warmth#8151: But yes, that makes a lot of sense thank you
gwern#1782: just benchmark the worst-case of completing 1024 tokens and calculate from there
Liminal_Warmth#8151: We'll try that! Thanks folks
Liminal_Warmth#8151: Appreciate your thoughts 🙂
gwern#1782: (it won't be as conservative as you think it will be because you'll lose performance to batching and overhead and stuff like that even if the average token length turns out to be more like 800)
Liminal_Warmth#8151: that's a good point
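Putting gwern's suggestion into numbers: benchmark worst-case full-context completions and divide. Every figure below is a placeholder to be replaced by an actual benchmark:
```
bench_tokens_per_sec = 2000   # measured aggregate throughput (hypothetical)
tokens_per_request = 1024     # worst case: full-length completion
efficiency = 0.6              # batching/queueing overhead (assumption)

requests_per_sec = bench_tokens_per_sec * efficiency / tokens_per_request
print(round(requests_per_sec, 2))   # ~1.17 req/s per GPU with these numbers
```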
Louis#0144: The guardian needs to take down their trash article on gpt3
Louis#0144: It’s so poorly done
Sid#2121: Link @Louis ?
Louis#0144: https://twitter.com/MirowskiPiotr/status/1303313733528293376?s=20
Louis#0144: ok in the guardian's defense
Louis#0144: the guardian opinions is already trash
Louis#0144: LMAO
Louis#0144: so like this is their normal level of quality
ethan caballero#6044: When GPT-2 was published, was it largest language model ever trained at the time?
Aran Komatsuzaki#5714: @ethan caballero Not really. The largest one was LSTM+MoE (2017) by Shazeer et al.
Aran Komatsuzaki#5714: It was 137 billion.
Aran Komatsuzaki#5714: I heard they trained a 600B or so model sometime around 2018, but I don't know for sure.
ethan caballero#6044: Was GPT-2 trained on more (relatively clean) data than any language model ever trained at the time?
Aran Komatsuzaki#5714: I'm not really sure, but if you take into account the amount of clean data and the size of the model together, then GPT-2 definitely was the best atm.
bmk#1476: Ranking moe along with gpt2 is really unfair
bmk#1476: Rankings should be based on the mean number of parameters updated per training iteration
ethan caballero#6044: @bmk so GPT-2 used more FLOPs than LSTM+MoE (2017)?
bmk#1476: I'm not actually sure
bmk#1476: Also, flops isn't necessarily proportional to number of parameters updated per pass either
bmk#1476: Local attention is less flops but same params
bmk#1476: Convnets are more flops for same params
Louis#0144: FLOPs can 99% of the time be disregarded
Louis#0144: Unless you’re talking about like magnitude
Louis#0144: It’s a useless metric on its own
Louis#0144: Actually wait no even when discussing training in a situation that isn’t literally a super computer
Louis#0144: Flops is useless
gwern#1782: GPT-2 was the largest *dense* non-embedding model, I think I would phrase it
gwern#1782: you can find much larger mixture/ensemble things, and much larger embeddings, but you can't find anything public which is a straightforward feedforward end-to-end trained NN, afaik
gwern#1782: (MoEs skip training almost all of the net, so they're not dense, and embeddings aren't deep, just extremely wide, and various randomized things don't train end-to-end)
bmk#1476: does local count as dense?
Louis#0144: dense is a weird word
Louis#0144: theres a lot of grey area tbh
Louis#0144: I think dense is anything thats multi-partite
Louis#0144: but multi-partite NNs can be made nondense via local competition
kindiana#1016: at gpt3+ scale the actual softmax(qk) operation doesn't consume very many flops compared to the feedforwards, so I'd totally classify local attn as dense
kindiana#1016: flops is pretty proportional to parameters at this scale lol
Louis#0144: Density has nothing to do with performance though
Louis#0144: What
Louis#0144: Density is 100% a different kind of model than sparse NNs
Louis#0144: Softmax can create sparse structures but it doesn’t do so in massively wide windows
Louis#0144: That has nothing to do with FLOPs
Louis#0144: It entirely has to do with the fact that softmax is emulating local competition in a way
Louis#0144: And when you have many neurons that are neighbors, competition behaves differently
kindiana#1016: i feel like we're talking past each other lol, I'm just saying that the feedforwards/qvk calculations in transformers take most of the computation, and if those are dense, the whole network should be classified as dense
Louis#0144: But how do you define dense
Louis#0144: I define a lack of density as pronounced level sets in the connection weights or overall sparsity (weights missing)
Louis#0144: In which case flops has nothing to do with density
Louis#0144: They aren’t related
kindiana#1016: for a n dim -> n dim layer, do you have a weight that goes from every input dim to every output dim
kindiana#1016: you can have dense weights which get sparse when training
kindiana#1016: but you are still doing dense matmuls
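A small illustration of that distinction, assuming numpy and scipy are available: the weight *values* can be mostly zeros while the computation is still a dense matmul, unless you switch to a sparse kernel:

```python
# Dense architecture vs sparse values: a weight matrix full of zeros is still
# computed with a dense matmul unless you change the kernel.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
W = rng.normal(size=(1024, 1024))
W[np.abs(W) < 1.5] = 0.0  # "training" drove ~87% of the weights to zero

x = rng.normal(size=1024)
y_dense = W @ x                       # dense matmul: FLOPs unchanged by the zeros
y_sparse = sparse.csr_matrix(W) @ x   # sparse kernel: work scales with nonzeros
print(np.allclose(y_dense, y_sparse))  # True: same result, different cost model
```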
Louis#0144: Ok so your latter def is my earlier def |
Louis#0144: And your earlier def is my latter def
Louis#0144: In which case wtf
Louis#0144: I don’t think sparsity wrt softmax has anything to do with FLOPs
Louis#0144: Or you can’t inherently say a network is dense by saying it doesn’t use many FLOPs for softmax
kindiana#1016: if most of the flops in a network are from doing softmax(qk), and you use sparse qks, I'd say its sparse
Louis#0144: > at gpt3+ scale the actual softmax(qk) operation doesn't consume very many flops compared to the feedforwards, so I'd totally classify local attn as dense
@kindiana that isn’t the same thing as what you said here though
kindiana#1016: the feedforwards are dense and the feedforwards are most of the flops, not the attention operator, so gpt3 is dense
kindiana#1016: even if it does local (or sparse) attn
Louis#0144: Yeah GPT3 is dense
Louis#0144: I agree
Louis#0144: Well actually we don’t know if GPT3 is dense
Louis#0144: We don’t have the connection weights
Louis#0144: Nor have we profiled it
kindiana#1016: I think the difference is I see density/sparsity as only the property of the architecture, from an engineering perspective of if you do dense or sparse matmuls, and not if the weights converge to zeros during training
kindiana#1016: I do see the other perspective as a valid one though, esp for things like model compression
kindiana#1016: but for training big nns using tpus, it doesn't care what values the weights are lol
ethan caballero#6044: How many times more data was 175B GPT-3 trained on than 1.5B GPT-2 (from GPT-2 paper)?
kindiana#1016: ~10B tokens vs 300B tokens
kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/753080311629480107/unknown.png |
rook#2597: https://cdn.discordapp.com/attachments/729741769738158194/753116586717741126/IMG_20181216_150704-EFFECTS.jpg
ethan caballero#6044: Approximately how many bpe tokens is a KB of text?
kindiana#1016: ~250
ethan caballero#6044: @kindiana is that for compressed or uncompressed KB of text?
kindiana#1016: uncompressed
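That works out to roughly 4 bytes per BPE token for English prose; a quick sanity check, assuming the HuggingFace transformers package and its GPT-2 tokenizer:

```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
text = "The quick brown fox jumps over the lazy dog. " * 100  # ~4.5 KB of prose
tokens = tokenizer.encode(text)
print(f"{len(text)} bytes -> {len(tokens)} tokens, "
      f"~{len(text) / len(tokens):.1f} bytes/token")
# Typical English prose lands near 4 bytes/token, i.e. ~250 tokens per KB
```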
ub8697#3037: Has anyone made any headway in creating a twitter dataset? Seems like threads would be the most interesting content. Could scrape threadreader site, perhaps, but according to the homepage there are less than a million threads on there.
ub8697#3037: Twitter allows one request per 4.8 seconds per IP.
ub8697#3037: IPs are $0.65 per month from Luminati - could likely get cheaper elsewhere. Also, those are datacenter IPs, not residential. I'm assuming Twitter won't make the distinction.
ub8697#3037: Bandwidth is about $0.10/GB
ub8697#3037: Each tweet in the returned JSON is about 1kb due to all the stuff that comes with it. Each request to the search api (non-authenticated) returns 20 tweets. Not sure if you can manipulate params to get more.
ub8697#3037: 60\*60\*24/5 = 17280 requests per day per IP (rounding the 4.8 s limit up to 5 s) ==> 345,600 tweets per day per IP
ub8697#3037: $0.03456 per day in bandwidth, although now that I think about it, it should be cheaper due to gzip
ub8697#3037: $0.0216/day per ip
ub8697#3037: So about a dollar per month per IP, and you get 10 million tweets
ub8697#3037: So spend a thousand bucks and get 10 billion
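Spelling the arithmetic above out in one place (all figures are the assumptions from these messages, before the gzip saving):

```python
# Back-of-envelope Twitter scraping cost, per the assumptions in the thread.
requests_per_day = 60 * 60 * 24 // 5       # 4.8 s limit rounded up to 5 s -> 17,280
tweets_per_day = requests_per_day * 20     # 20 tweets per search response -> 345,600
tweets_per_ip_month = tweets_per_day * 30  # ~10.4M tweets per IP per month

ip_cost = 0.65                                      # $/month per Luminati IP
bandwidth_cost = tweets_per_day * 1e-6 * 0.10 * 30  # ~1 KB/tweet at $0.10/GB
print(f"{tweets_per_ip_month:,} tweets for ~${ip_cost + bandwidth_cost:.2f}/month")
# roughly $1-2 per ~10M tweets/month, so ~10B tweets for on the order of $1,000
```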
kindiana#1016: internet archive is already scraping a lot of twitter: https://archive.org/search.php?query=twitterstream&sort=-publicdate
ub8697#3037: Which seems too cheap...
ub8697#3037: Oh wow! 2 gig per day!
ub8697#3037: That's awesome
ub8697#3037: I wonder if that's firehose, or just the subset. Guessing it's just the subset via the normal API |
kindiana#1016: I'm not sure if twitter is particularly high quality, given tweet length limitations lol
kindiana#1016: its the "Spritzer" stream, about 1% of all tweets
ub8697#3037: Ah okay - random subset or "top" 1%? Guessing random since it's streaming?
kindiana#1016: think its random
kindiana#1016: meant to be 1% of the firehose
ub8697#3037: Yup
ub8697#3037: RE quality, I wonder how many decently long threads get posted per day relative to other content on the web. I wouldn't be surprised if there were more twitter thread words than books published... ever?
ub8697#3037: Words in threads vs words in books, I mean
ub8697#3037: I could be way off there though
kindiana#1016: 200gb of tweets per day if 2gb per day is just 1% of it
ub8697#3037: Apparently there are 200 billion tweets posted per year, so it'd cost about $20,000 to scrape all of 2019, if the above calculations are correct, and if one found a way to efficiently grab them all via the public search api
ub8697#3037: You can search for "1/n" and stuff like that to return threads only, but there are pagination limits, so you'd have to get creative
ub8697#3037: I was playing with using the top 10k english words to get all the top tweets for a particular day
ub8697#3037: Each word as an exact match search
ub8697#3037: With min_faves condition
genai (Immortal Discoveries)#0601: Why are yous recreating GPT3? To better understand it or to simply be able to use it?
Sid#2121: https://tenor.com/view/why-not-both-gif-11478682
genai (Immortal Discoveries)#0601: So, correct me if wrong, but are Transformers finding many many many rare-ish patterns? For example, one not so often come across is last names, Tom Larse has a mom named Jenny [Larse]. With markov chains or PPM or simple old fashioned text predictors, we model only a few but VERY COMMON patterns, like Syntax and Semantics, ex. word2vec, but they won't work when they reach last names etc, because they are rare patterns. If we used more data the accuracy would improve of course.
How, does, GPT, find out the last names pattern? How does it know to put Larse after Jenny? We don't teach it it, it doesn't invent that rule, it finds it in text. How does it find it? |
Daj#7482: The truth is we don't know how GPT works, or NNs in general (some brave souls in #interpretability are trying to shed some light)
genai (Immortal Discoveries)#0601: Yes, the Clarity Team, like OpenAI's hehe 🙂
Daj#7482: We're basically budget OpenAI yes lol
kindiana#1016: intuitively that example is something which the attention operator would be good at learning, referencing previous text with context based and/or position based queries
Daj#7482: There is also https://arxiv.org/abs/2008.02217
Daj#7482: imo it seems likely transformers at least partially are huge associative memories
genai (Immortal Discoveries)#0601: When Google invented the Transformer, they must have known what they were doing, right? They understand not what it'll come up with but they understand exactly how it works, no?
Daj#7482: I can write down the code of AlphaGo but that doesn't mean I can predict what move it will play
Daj#7482: Same thing I can build a very big calculator and not predict what the result of a very big equation will be
genai (Immortal Discoveries)#0601: GPT seems to use mainly Attention to do everything. And BPE, off the top of my head. Erm, how does GPT model semantics, I mean how did they force it to model that pattern? I tried studying it but I don't fully understand it.
Daj#7482: No one does
Daj#7482: You feed text into the model, it learns it
kindiana#1016: if modelling semantics helps loss goes down it does it 🤷
Daj#7482: The state of NN theory is very sorry
genai (Immortal Discoveries)#0601: Hmm but the Google devs must have known, for at least one pattern, how it can pick up patterns like the last name one I mentioned above?
kindiana#1016: you can look at the qk vectors to see what things it "looks" at to predict a token
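A minimal sketch of doing that with the public GPT-2 weights via HuggingFace transformers (assumes torch and transformers are installed; the prompt reuses the surname example from above):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Tom Larse has a mom named Jenny", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions: one (batch, heads, seq, seq) tensor per layer; average the
# heads in the last layer and read off where the final position attends
attn = out.attentions[-1][0].mean(dim=0)[-1]
for tok, w in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), attn):
    print(f"{tok!r}: {w.item():.3f}")
# Weight landing on " Larse" is the kind of "looking back" being described,
# though this only shows *where* the model looks, not *why*
```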
genai (Immortal Discoveries)#0601: It's one thing making word2vec, but it's another thing letting it discover word2vec on its own lol.
kindiana#1016: but you can't really "know" how
Sid#2121: @genai (Immortal Discoveries) with neural nets it's all interpretability after the fact
Sid#2121: we can know empirically what architectures 'work well' but it's mostly through trial and error |
Sid#2121: that said, there are several interesting ways to visualize how the multihead attention works and selects options - https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens
Sid#2121: https://aletheap.github.io/posts/2020/07/looking-for-grammar/
genai (Immortal Discoveries)#0601: so that's why these simple old nets, like word2vec or markov chains / BackOff, are understood to me / interpretable all the way even during and after training, but the NNs of today are often finding their own rules/patterns, like the last name one, or semantics....?
genai (Immortal Discoveries)#0601: (and hence not so interpretable)
genai (Immortal Discoveries)#0601: ?
Sid#2121: https://github.com/jessevig/bertviz for visualizing attention too
genai (Immortal Discoveries)#0601: Yes I've seen the 2 upper links many time 🙂
Sid#2121: but yeah, that's correct. Deep networks form their own 'rules' and ways of processing input -> output, and it's up to us to figure out how they work after they've been trained
genai (Immortal Discoveries)#0601: it's interesting that a markov chain solves many patterns, ex. dogs usually bark, beds are usually soft or sold.....you make one thing and it solves many problems.....but then you add another algorithm to it ex. word2vec, and you can now solve all sorts of (rarer) questions/problems. Eventually we run out of algorithms to code in pre installed, and need it to be taught them or find or invent them. What sort of algorithm would, like a markov chain or word2vec, not solve many patterns, but be taught or find or invent many patterns? Interesting to think about.
genai (Immortal Discoveries)#0601: You guys sure yous can't answer how GPT can find out the last names pattern on its own, without being taught it or inventing it? How does Attention do this?
genai (Immortal Discoveries)#0601: There must be a pattern ! 🙂
Sid#2121: intuitively, it has some way of 'recognizing' the context. So it can 'tell' when a name is mentioned, because names are mentioned in specific contexts usually, and it can also tell when it would be appropriate to put a last name after the name. If the name has been repeated before in the context, the attention mechanism looks back and gives a higher weight to the tokens that came after the name previously, repeating the surname.
genai (Immortal Discoveries)#0601: I mean, when we look at an example to learn from "Tom Larse has a mom named Jenny [Larse]" we can see a pattern if look hard....Larse appears twice.....and in another case we see the last name used twice as well ex "Hanna Beth has a dad Ron Beth"
Sid#2121: notice this is all very handwavey, because it has learnt the rules itself. We can't tell you how it decides which token should be given the most weight, because all of those rules are stored in the weights of the network, which are pretty opaque to us humans.
genai (Immortal Discoveries)#0601: Ya I see that now hehe.
genai (Immortal Discoveries)#0601: I plan to look at many of these small micro patterns and look for a pattern in the finding, inventing, and teaching of them.
genai (Immortal Discoveries)#0601: Have yous heard of Blender and PPLM?
Daj#7482: > I plan to look at many of these small micro patterns and look for a pattern in the finding, inventing, and teaching of them.
@genai (Immortal Discoveries) I'm sure the people in #interpretability would be interested in hearing your results and helping out
genai (Immortal Discoveries)#0601: Ok thanks. |
genai (Immortal Discoveries)#0601: Have yous heard of Blender and PPLM?
Sid#2121: Blender as in the 3d modelling software?
genai (Immortal Discoveries)#0601: no, it's essentially GPT but smarter, as far as i can see
Sid#2121: link?
genai (Immortal Discoveries)#0601: 1 sec
genai (Immortal Discoveries)#0601: see the second half of this video to see it in action
genai (Immortal Discoveries)#0601: https://www.youtube.com/watch?v=wTIPGoHLw_8
genai (Immortal Discoveries)#0601: and for PPLM:
genai (Immortal Discoveries)#0601: https://eng.uber.com/pplm/
genai (Immortal Discoveries)#0601: also this sums it up somewhat, helps to read it:
genai (Immortal Discoveries)#0601: https://distill.pub/2020/bayesian-optimization/
genai (Immortal Discoveries)#0601: I was onto this before these came out. And I'm still a step ahead of them. What this does is force your "GPT" to talk about certain things and not talk about certain things, namely not harming us and saving all life. You give it goals, then it thinks about them. The real fun though hasn't started, it's when it learns food = money, and then starts now talking about money where before it used to not. This updates its rewards. It exploits domains, collecting specific data it wants from specific thought experiments or real experiments. Exploration is the opposite, you collect as diverse data as possible so to accurately model the world correctly, but it isn't things you'd naturally predict ex. football or semantically cats are stones, because you don't know much about those domains yet. I mean they both help, selling Mario games and trying new games is just as fruitful in some sense, but I think exploitation is most powerful / fastest.
researcher2#9294: > Has anyone made any headway in creating a twitter dataset? Seems like threads would be the most interesting content. Could scrape threadreader site, perhaps, but according to the homepage there are less than a million threads on there.
@ub8697 Nice analysis, love a back-of-envelope calculation!
researcher2#9294: I imagine twitter would get a little upset about this though haha
researcher2#9294: anybody know how much the firehose costs out of interest?
researcher2#9294: Personally I might actually avoid training an AI on twitter if worrying about alignment, so much toxic in there.
researcher2#9294: But I guess it has to learn all the dark things eventually
Louis#0144: I’ve written about the hopfield stuff myself |
Louis#0144: But from more of a cog neuro perspective
Louis#0144: If anyone has questions
bmk#1476: > I imagine twitter would get a little upset about this though haha
@researcher2 *mumbles* not the most illegal thing
Eddh👽#7290: Interesting. Do you have a link @Louis ?
Louis#0144: https://www.louiscastricato.com/post/joint-representations-of-connectionism-vs-symbolism-via-attractor-networks-and-self-attention
Louis#0144: @Daj might be reviewing some HoTT papers next month
Louis#0144: i agreed to be the reviewer for an alg top conference
Louis#0144: and Im realizing now how many papers are HoTT...
Daj#7482: Hah good luck
Louis#0144: ty
bmk#1476: Is HoTT still a hot topic (pun not intended)?
bmk#1476: (trying to prioritize rn)
StellaAthena#3530: Yes, but don't prioritize it.
StellaAthena#3530: It has massive prerequisites and getting to the point where you can have an intelligent conversation about it is very time consuming.
gwern#1782: the firehose is "if you have to ask, you can't afford it"
gwern#1782: you'd be better off going the research route. I believe they offer a very limited firehose access for researchers
Louis#0144: lol
Louis#0144: I have a friend scraping twitter
Louis#0144: do you guys know nitter? |
Louis#0144: the GPL twitter thing
gwern#1782: yeah, people have been using it for irc bots because it doesn't make it a nightmare to get the title/text and twitter keeps breaking things
gwern#1782: I hadn't heard of it until a few months ago tho
Louis#0144: hes a good friend of mine
Louis#0144: the person who made that
Louis#0144: on the backend its literally a scraper
Louis#0144: loll
gwern#1782: better that one man should suffer for the sake of all of us than all of us
Louis#0144: nah
Louis#0144: hes got backing from FSF
Louis#0144: lol
Louis#0144: hes getting legally advised by FSF I should say
Louis#0144: hes safe
Louis#0144: twitter cant do shit
gwern#1782: I meant more in terms of losing sanity points to twitter's constant modifications
Louis#0144: oh yeah
zphang#7252: speaking of IRC, are there worthwhile IRC datasets?
Sid#2121: there's ubuntu IRC, which is already in the pile I believe
bmk#1476: There are quite a few that I haven't gotten around to scraping
bmk#1476: See #data-sources |
StellaAthena#3530: Ubuntu IRC is in the data set, yes
StellaAthena#3530: Here are some other IRC links from #data-sources https://github.com/ckan/irc-logs/blob/master/ckan_freenode_irc_logs_2014-2018.log.gz
http://eavesdrop.openstack.org/irclogs/
https://ghostscript.com/irclogs/
https://irclogs.thegrebs.com/debian/
http://crimsonfu.github.io/irclogs/
3dprint_the_world#6486: @Louis Do you have a link to the cog neuro/hopfield stuff you mentioned?
Louis#0144: above
Louis#0144: attractor networks are hopfield networks
Louis#0144: same thing
Louis#0144: different name
Louis#0144: like theyre *genuinely* the same thing
3dprint_the_world#6486: I see
3dprint_the_world#6486: have you proved this? Or is it a conjecture?
Louis#0144: That attractor networks are the same as hopfield?
Louis#0144: It follows directly from definitions
Louis#0144: They’re both continuous time n cliques
Louis#0144: LOL
Louis#0144: There’s nothing to prove in that regard
3dprint_the_world#6486: maybe I'm missing something obvious, but it's not immediately apparent to me what you mean by a hopfield net being a continuous time n-clique |
3dprint_the_world#6486: perhaps I'm missing the context here
Louis#0144: The topology of a hopfield network
Louis#0144: Is an n clique of neurons
Louis#0144: The neurons are simulated in continuous time until the network converges
Louis#0144: It’s a Boltzmann machine
Louis#0144: An attractor network is the same notion
Louis#0144: It’s an n clique of neurons that are simulated in continuous time
Louis#0144: Attractor networks predate hopfield networks
Louis#0144: Just different contexts entirely
Louis#0144: Attractor networks are from bio neuroscience
Louis#0144: Where hopfield networks are from statistical modeling
Louis#0144: It’s the same thing
Louis#0144: They’re *literally* the same thing
3dprint_the_world#6486: Are hopfield nets simulated in continuous time? iirc they have a discrete-time update rule.
3dprint_the_world#6486: also the update rule in hopfield nets is deterministic, and there's no notion of temperature
3dprint_the_world#6486: Boltzmann machines have a temperature parameter
3dprint_the_world#6486: In the limit of zero temperature though, sure, Boltzmann machines 'reduce' to hopfield nets
Louis#0144: Boltzmann are discrete time
Louis#0144: sorry brain fart
Louis#0144: Hopfield is discrete time |
Louis#0144: But the way it’s simulated is still with PDEs basically
Louis#0144: If you want to think about it that way
Louis#0144: It’s like a weird pseudo discrete time thing
Louis#0144: That’s mostly because hopfield networks have really weird attractor points
Louis#0144: Where as attractor networks don’t hide that they’re dynamical chaotic systems
Louis#0144: It’s literally in the name
Louis#0144: And they literally use PDEs
3dprint_the_world#6486: on an unrelated note, is there anything in the docs of gpt-neo talking about how much compute it will need?
Louis#0144: I can actually give references to this stuff if you’d like, worked in a lab that studied hopfield networks for a few years
Louis#0144: But it doesn’t prove attractor networks are equivalent
Louis#0144: It’s kinda assumed
3dprint_the_world#6486: like how much compute will be needed to train it to a gpt-3 level
StellaAthena#3530: Probably over 900
StellaAthena#3530: And the GPT-Neo docs are hugely lacking tbh. #gpt-neox-devs is talking about that currently.
shawwn#3694: yeah, see the doc in the channel description https://cdn.discordapp.com/attachments/729741769738158194/753420191383289947/unknown.png
3dprint_the_world#6486: ah I see, there seem to be multiple versions of that doc, and the one I had didn't have that section. Thanks.
StellaAthena#3530: Again, the GPT-Neo docs are hugely lacking despite my best efforts to fix them.
3dprint_the_world#6486: So what does that translate to in $$ terms?
kindiana#1016: $0 with TFRC
kindiana#1016: for the TPUs at least |
3dprint_the_world#6486: oh cool, has EleutherAI joined TFRC?
kindiana#1016: specifically daj/shawwn has a big tpu quota from tfrc
shawwn#3694: quota is often not the same thing as "being able to use that quota" though
shawwn#3694: both daj and my quotas are now suffering from unable to create TPUv3 pods
shawwn#3694: (I just made a TPUv3-32 that preempted in less than 15min)
3dprint_the_world#6486: isn't that indicating that your quota is exceeded?
shawwn#3694: nah, quota is the total theoretical number of TPUs you can create
shawwn#3694: "capacity" refers to the ability for you to actually create those TPUs
shawwn#3694: "preempting" just means they garbage collected some TPUs
shawwn#3694: (TPUs also preempt after a maximum of 24 hours)
3dprint_the_world#6486: gotcha
genai (Immortal Discoveries)#0601: Anyone read yet the Blender/ PPLM/ mining gold links above?
genai (Immortal Discoveries)#0601: What do you think about it? AGI needs to be forced to think about certain things. Do you have another way to do it?
Daj#7482: They're neat methods, but obviously not strong enough to control actual AGI
Daj#7482: When we talk about alignment we mean stuff like what MIRI or Paul Christiano's group is doing
genai (Immortal Discoveries)#0601: No it's not about controlling AGI, Blender / PPLM are just essentially, we could say, a GPT but with 1 new idea added to it that brings us closer to AGI; forcing it to say certain things/questions. The reason I believe it is an improvement for our GPT tech is because it makes it talk about food and survival over other things, and can evolve its goal too ex. now it talks about AGI all day (me haha).
genai (Immortal Discoveries)#0601: I mean, otherwise, how could an AGI/Transformer evolve its goal to AGI and start saying AGI all day? Why would it say AGI all day now whereas before it didn't? AGI isn't a common word. If you look deep into the semantics and data of physics I'm not sure you would see intelligence is the most common thing, or is it?
genai (Immortal Discoveries)#0601: Of course most people don't talk about AGI but....we're closer to the key to the universe 🙂
genai (Immortal Discoveries)#0601: With me my goal started out (very naively I must say) wanting to go to Heaven, I soon later wanted immortality by a real means though and became a true atheist. I was looking at known ideas, like cryonics, repairing yourself, AI, to make us not die. I didn't know much about AI at that time, as if it didn't exist at all really. I latched onto human AI once I saw an article talk about how we learn by trial and error motor actions, learning how to walk etc, that led me to see the deeper truth behind our magical selves and from that moment I grew into the field very very fast learning so much. So why did I make AGI images and words my favorite thoughts? Hmm, first of all, linked to them is that they can be smarter than us and save us from ageing, so naturally, it's a really good thing, not because I looked at molecules and saw intelligence is the underlying common feature, but simply because AI gives me infinite food etc and no pain etc or death, so I naturally want what gives me my rewards, so AI it was. But I don't think it all day, if it looks like a path that won't make me immortal. Before my realization I thought AI was a very un-understood thing, I didn't think we could make it in time / how to. So I didn't, say it as much. I'd say cryonics, AI, ageing drugs, etc. But once I saw how AI works a bit I saw more and more, we can do it, and in time, it's going to happen, and I saw just how powerful intelligence is, its way way way more powerful than cryonics or ageing drugs. AI is something we can do soon I thought and also much powerful, so it's a good word on my mind. Of course cryonics is a seemingly easy way that already half-works! I honestly would love to try thousands of experiments to improve the information retained.
Daj#7482: re Blender/PPLM: I guess I'm confused why you bring it up, say it's not about AGI, then immediately start talking about AGI again. I think these methods may have some marginal value to AGI but they don't seem like the most promising avenues of research here, I'd rather look into efforts to combine transformer world models with RL (e.g. DeepMind and OA both came out with papers doing stuff like this recently) |
re Journey to AGI: Yea that feels relatable, my journey towards AI was somewhat similar, but more driven by curiosity/a search for the "most fundamental" on my side, before I became fully utilitarian and it became clear this was the best approach to ending suffering
genai (Immortal Discoveries)#0601: The Blender algorithm is the RL version of GPT though. Of course it lacks the external body RL part.
genai (Immortal Discoveries)#0601: it's just a small but important improvement
genai (Immortal Discoveries)#0601: What would make a GPT start saying/thinking about AGI all day? It's not because it was prompted on AGI for an hour, you can tell someone about AGI all day and they'll forget about it, because they have their own rewardful words.
Daj#7482: Fair I guess I only skimmed the paper
Daj#7482: > What would make a GPT start saying/thinking about AGI all day? It's not because it was prompted on AGI for an hour, you can tell someone about AGI all day and they'll forget about it, because they have their own rewardful words.
@genai (Immortal Discoveries) I don't see why I would want GPT to do this, you could just train it on appropriate data or find an appropriate prompt and turn down the temperature
genai (Immortal Discoveries)#0601: All humans take up a life career....my mom is a believer in Jesus and loves working a job for money. Some people love drawing art along with other hobbies. And food, sex, houses, a car. Things that keep us alive longer. And we learn new desires (as I explained above how it happens). While one man works on smarter batteries another man works on smarter cars, another on smarter homes. We each take up some high priority job available.
While you could train it on a dataset of DNA or um, cryonics data, or ML, you do want it to know some of the other fields and thinking like we talk naturally about all things ex. Earth, galaxies, animals, tables, cells, shapes, texture. The thing you're really meaning when you think about an "ML/other dataset" is that you wish the AGI to have more weight on that area of space, but that's exactly what I'm talking about, without having to restrict it to a narrow dataset though.
Daj#7482: I think what you're thinking about in a technical sense is reward modeling
Daj#7482: Finding the correct type of reward signal for a strong RL model (GPT is not RL, it's an unsupervised prediction model, which is interesting for other reasons)
Daj#7482: I think this might be interesting to you: https://openai.com/blog/learning-to-summarize-with-human-feedback/
genai (Immortal Discoveries)#0601: I saw that article. It didn't interest me though because it uses our human intelligence to do the summarization. When I say reward, I mean another type of neural weight, just like frequency and relationalism and recency/expectancy. You can summarize something by removing common words or words that don't interest you (food, AGI, cash).
"Finding the correct type of reward signal for a strong RL model"
How do you propose we learn new rewards, if not by something like word2vec?
genai (Immortal Discoveries)#0601: What do you mean by Reward Signal?
Daj#7482: I think you may want to read up on some of the literature on RL, I think a lot of your questions/ideas have been addressed in the literature
Daj#7482: RL is a mature and big field, I'm not sure I remember how I even got into the field so I can give you recommendations haha |
genai (Immortal Discoveries)#0601: I learn faster though when have a mentor, I can ask questions and get the info I really need.
Daj#7482: Well yeah if you have a mentor with the time and willingness haha
Daj#7482: You'll probably have to enroll in a university to get that though
genai (Immortal Discoveries)#0601: Do you mean RL for text prediction, or RL for those robots?
Daj#7482: Reinforcement Learning is a general technique of optimizing the expectation of a reward signal
Daj#7482: I unfortunately don't have the time to give you a full intro to RL, I'd recommend checking some of the fantastic MOOCs on RL that exist
Daj#7482: I have fond memories of https://www.udacity.com/course/reinforcement-learning--ud600
Daj#7482: Though it's not super up to date since all of RL is deep learning nowadays
genai (Immortal Discoveries)#0601: But how would it work for text prediction? I want to ignore robotics for now here IOW.
Daj#7482: > I think this might be interesting to you: https://openai.com/blog/learning-to-summarize-with-human-feedback/
This is the best paper on RL for text prediction
Daj#7482: I'm sorry but your questions seem a bit confused, I think you need to catch up on some of the basics of the field before I can help you
genai (Immortal Discoveries)#0601: You mean the RL+text question?
Daj#7482: There are also other discord servers more geared towards beginners that might help you
Daj#7482: > You mean the RL+text question?
@genai (Immortal Discoveries) You mentioned you didn't know what "reward" meant in this context, this seems to imply you're not too familiar with the background of RL and AI research, so I'm recommending you read up on that first before trying to jump right in the deep end :)
genai (Immortal Discoveries)#0601: How can the openAI AI that learns to summarize by human RL be AI, it's using humans, the humans know something it doesn't know.....
Daj#7482: That's...not the point
Daj#7482: Did you read the paper?
Daj#7482: The humans provide the reward signal, not the training data |
Daj#7482: Of course the humans "know something", they know _what they want_
genai (Immortal Discoveries)#0601: If you mean what RL means, isn't it mostly about robots learning to walk etc?
Well I read some of it....hmm.....yes but why doesn't the AI have that reward signal itself instead of getting it from the humans
Daj#7482: The whole point of AGI is to get it to do what we want, so we have to communicate that at some point
Daj#7482: > If you mean what RL means, isn't it mostly about robots learning to walk etc?
>
> Well I read some of it....hmm.....yes but why doesn't the AI have that reward signal itself instead of getting it from the humans
@genai (Immortal Discoveries) Yea I'm sorry my friend but I think this is the wrong discord for you, we're happy to have you but we're pretty strictly not for beginners
Daj#7482: I could recommend the communities in #communities , they might be able to help you more to get started
Daj#7482: But to answer your question: RL is a broad family of techniques that tries to solve a problem defined by a reward signal, that's it. It can be _any_ kind of task. It's the most general case of learning (i.e. supervised and unsupervised learning can be cast as RL problems with dense reward)
And the AI cannot have the reward signal because it's arbitrary. You have to tell the AI what you want at some point
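As a concrete minimal instance of "a problem defined by a reward signal", here is a toy epsilon-greedy bandit sketch; the payout probabilities are arbitrary, which is the point: the designer chooses what gets rewarded.

```python
import random

true_payouts = [0.2, 0.5, 0.8]   # hidden reward probability of each arm
estimates, counts = [0.0] * 3, [0] * 3

for step in range(10_000):
    # epsilon-greedy: mostly exploit the best estimate, sometimes explore
    if random.random() < 0.1:
        arm = random.randrange(3)
    else:
        arm = max(range(3), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payouts[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print([round(e, 2) for e in estimates])  # approaches [0.2, 0.5, 0.8]
```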
genai (Immortal Discoveries)#0601: Oh then do you mean, by RL, the unsupervised text predictors like GPT? Or even for ex. word2vec?
genai (Immortal Discoveries)#0601: cuz we tell it what to do...
Daj#7482: I think we're talking past each other
Daj#7482: I think you're not super familiar with the terminology of the AI world, which makes understanding and answering your questions rather hard
Daj#7482: Either way, I must return to my dayjob now, see you around
genai (Immortal Discoveries)#0601: ok
Sid#2121: @genai (Immortal Discoveries) https://www.youtube.com/watch?v=2pWv7GOvuf0&list=PLqYmG7hTraZDM-OYHWgPebj2MfCFzFObQ |
FractalCycle#0001: quick question:
I noticed in #deleted-channel that some (many?) people in this group have API access to the existing GPT-3. Any advice on how to get API access? (I think my application was poorly-thought-out, and I'm not sure about "trying again")
Daj#7482: Since the beta is going public in less than a month and the backlog is huge, it's probably unlikely you can still get in
Daj#7482: What were you planning to do with it?
FractalCycle#0001: basically playing around with it like Gwern has done, trying random ideas. E.g., i heard about that one webapp that makes simple code from descriptions, so trying to recreate that seemed cool. Also applying it to other domains (like when they built a chess engine from GPT-2)
FractalCycle#0001: neat that beta going public soon!
Kazumi#1297: What is the architecture of gpt-n? Is it just BPE => transformers *n => BPE? I want to make my own small scale gpt-n style network
Kazumi#1297: I have an idea for incorporating long term memory that I want to try out
Kazumi#1297: Also, are the q and the k vector symmetric in it's function?
StellaAthena#3530: > I have an idea for incorporating long term memory that I want to try out
@Kazumi I would love to chat about this. I've been wondering about "temporal modeling," where we tell the model when the text was written so it can distinguish between current and historical parlance. Is this the kind of thing you have in mind?
Kazumi#1297: I want to have it save either the query or the key vector of an input and the text as a dictionary in a file, and add an attention term early on in the model to pick up which text is relevant to the current task
Kazumi#1297: thinking of only adding new text to the dictionary when the new text's query or key has low correlation to every other text in the memory, meaning it's new information it hasn't seen yet. And replacing a text in the dictionary when the query or key has a high correlation with a particular text, meaning there is a newer information available
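A toy sketch of that scheme: store (key vector, text) pairs, retrieve by cosine similarity, and use correlation thresholds to decide when to add or replace. The class name and threshold values below are hypothetical illustrations, not a tested design:

```python
import numpy as np

class KeyedMemory:
    """Toy memory: store (key vector, text) pairs, retrieve by similarity."""

    def __init__(self, add_below=0.5, replace_above=0.9):
        self.keys, self.texts = [], []
        self.add_below = add_below          # below this: treat as new info
        self.replace_above = replace_above  # above this: newer version of old info

    def _sims(self, key):
        k = key / np.linalg.norm(key)
        return np.array([k @ (m / np.linalg.norm(m)) for m in self.keys])

    def update(self, key, text):
        if not self.keys:
            self.keys.append(key)
            self.texts.append(text)
            return
        sims = self._sims(key)
        if sims.max() > self.replace_above:   # near-duplicate: refresh the entry
            i = int(sims.argmax())
            self.keys[i], self.texts[i] = key, text
        elif sims.max() < self.add_below:     # sufficiently novel: store it
            self.keys.append(key)
            self.texts.append(text)

    def retrieve(self, query, k=3):
        if not self.keys:
            return []
        order = np.argsort(-self._sims(query))
        return [self.texts[i] for i in order[:k]]
```

One caveat, which Aran raises below: stored key vectors go stale as the encoder keeps training.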
StellaAthena#3530: It sounds like the focus is on allowing NNs to "sort out chaff" and ignore text that doesn't add significant new content. Is that a fair charaterization?
Kazumi#1297: idea comes from this, but I want to make it so the text that's retrieved isn't static
https://arxiv.org/abs/2002.08909
Kazumi#1297: yeah
Kazumi#1297: for now, the only task I can think of that would require very long term memory are chatbots, but that's what I'm working on anyways so I want to try it out
StellaAthena#3530: Errr
StellaAthena#3530: Does anyone know what this means |
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/753655621290819764/image0.jpg
Kazumi#1297: uhhhh, guessing it's not particularly good news
StellaAthena#3530: Yeah
StellaAthena#3530: Me too 😦
Kazumi#1297: ~~did you try turning it off and on again~~
Aran Komatsuzaki#5714: @Kazumi if you save query/key vectors for a REALM-like method, they'll get stale, which is why REALM retrieves text instead of dense vectors.
Kazumi#1297: I'm still thinking of exactly how I'm going to do this
bmk#1476: > neat that beta going public soon!
@FractalCycle I don't think the beta is going public
bmk#1476: It's going to *start costing money* in a month
bmk#1476: But still only for beta users
Kazumi#1297: > > I would love to chat about this. I've been wondering about "temporal modeling," where we tell the model when the text was written so it can distinguish between current and historical parlance. Is this the kind of thing you have in mind?
> @Kazumi
@StellaAthena
could you not do that by putting a date and time in the context text? that's what I do with my conversational chatbot
StellaAthena#3530: It’s pretty different between general language models and chat bots.
StellaAthena#3530: Chat bots effectively work with time series data
Kazumi#1297: hm, how could you give it a time, if not by plain text? using one number that could represent something small to something large isn't that efficient, I'd imagine, so I'm not sure what you can do
gwern#1782: > But still only for beta users
@bmk correct. the FAQ specifically says that they will not be opening up to more users. not like hundreds of thousands of GPUs are going to magically drop out of the sky once the calendar rolls over to October, after all |
FractalCycle#0001: @bmk @gwern ah, okay, thanks for the clarifies!
Kazumi#1297: https://twitter.com/hardmaru/status/1301362995356774401
Kazumi#1297: it doesn't look cheap
FractalCycle#0001: ya, damn
Deleted User#0000: well, Sid, Daj, and BMK have already hit 1.3B
Deleted User#0000: 🤷‍♂️
Deleted User#0000: yea, this is what happens when you answer to investors
Kazumi#1297: making bigger networks with more expressive power is fine, but what I want are smaller networks that are reasonable to run with minimal compromise
FractalCycle#0001: Eleuther GPT-Neo in general will need lower resource requirements than GPT3, both for training and for running (since most of us don't have much money or hardware)
Daj#7482: Depends on how much hardware papa Google bestows on us
Daj#7482: (or whatever other sugar daddy we pick up)
Daj#7482: ¯\_(ツ)_/¯
Aran Komatsuzaki#5714: @Daj If we promote our results, we can probably find sugar daddies
Daj#7482: Yea I'm buckling down for a shitstorm after we release
Daj#7482: Hope you guys are too
Daj#7482: I'm already the resident politician behind the scenes lol
Aran Komatsuzaki#5714: I'm specialized at promotion
Aran Komatsuzaki#5714: lol
FractalCycle#0001: what type of shitstorm? like total noobs joining, media getting the wrong idea, something else?
Daj#7482: All of the above plus unknown unknowns |
Daj#7482: If my experience with GPT2 is anything to judge by, this will cause some _weird_ things to happen
Aran Komatsuzaki#5714: We already have hundreds of noobs here
Aran Komatsuzaki#5714: They are gone after several days
Daj#7482: There's already a lot of very weird things happening behind the scenes
Aran Komatsuzaki#5714: I mean inactive
Daj#7482: Noobs/inactive I'm not worried about
Daj#7482: It's hard to put into words what I'm worried about
Kazumi#1297: I try to follow this project but it's a lot to take in
FractalCycle#0001: i may be a noob, but i do want to help (once i gather up more energy in general, looking into ADHD treatment soon)
Daj#7482: It's kind of a vague nebulous concept of weird/unpredictable events
Daj#7482: Don't worry @Kazumi and @FractalCycle ! we're very lurker and observer friendly :)
Daj#7482: And you've have added some pleasant chit chat, which is appreciated
FractalCycle#0001: thanks! I saw like "noob-friendly" issues/PRs in one of the repos i think
FractalCycle#0001: the global a.i. stage is speeding up in general, which feeds into tons of risks
Daj#7482: I've had some funky nightmares lol
Daj#7482: but yeah don't worry about not contributing, but if you do, just ping the regulars and we'll be happy to help
FractalCycle#0001: as a ~~sketchy~~ security-minded person myself, i can think of tons of wacky evil uses for any number of the a.i. things that have come out lately.
FractalCycle#0001: thanks!
FractalCycle#0001: not sure how many evil uses would actually work, or be more effective than non-a.i. evil things. But it's certainly something to be wary of.
Kazumi#1297: has gpt-n been used for spam/political interference yet? |
Daj#7482: the thing is I _genuinely don't care about "evil" people using GPT3_
Aran Komatsuzaki#5714: I don't know about Delip much, but doesn't he have a lot of compute resources?
Daj#7482: a) Security by obscurity never works (evil people always get it)
b) Much more importantly: https://youtu.be/EUjc1WuyPT8?t=4280
FractalCycle#0001: > a) Security by obscurity never works (evil people always get it)
this
Daj#7482: I've written like 30k words worth of essays on this lol
FractalCycle#0001: oh dank, i gotta read those
StellaAthena#3530: > what type of shitstorm? like total noobs joining, media getting the wrong idea, something else?
@FractalCycle speaking as someone who has had a paper they’ve written be tortured by the popular press in a dozen countries, you’d be shocked at how far people will go to egregiously misrepresent your work
FractalCycle#0001: i was just discussing on another discord (i run an effective altruism club at a college), about different movements misunderstanding each other.
Daj#7482: @FractalCycle Part one of my rambling manifesto blog posts, if you dare read them lol: https://towardsdatascience.com/gpt2-counting-consciousness-and-the-curious-hacker-323c6639a3a8
FractalCycle#0001: > rambling manifesto
that's like one of my favorite genres lol!
Daj#7482: > @FractalCycle speaking as someone who has had a paper they’ve written be tortured by the popular press in a dozen countries, you’d be shocked at how far people will go to egregiously misrepresent your work
@StellaAthena Yea, I got _super_ lucky last time that people cast me as a kind of David vs Goliath. It could have gone _so much worse_
Kazumi#1297: there are such wildly different views of gpt-n, from people saying it's just memorizing text and how far we are from AI, to here
Daj#7482: Pinned a message.
StellaAthena#3530: I got off easy too. I wrote a technical game theory paper that nobody understood the point of so they said it was an AI breakthrough (mostly)
StellaAthena#3530: There’s one article I have saved though because 90% of the sentences contain a falsehood |
Daj#7482: lol
StellaAthena#3530: Like, actually 90%. I counted
Daj#7482: One of the things I'm most scared of is that AI is probably going to be classified as weapon's technology at some point
FractalCycle#0001: i say the key thing about how "intelligent" it is, is how much meta-learning it's done. Children can learn a lot by reading (although they have other brain "modules" already). So how much world-modelling is GPT3 doing from its text?
The thing that made GPT3 stand out to me, is all the tests people do on it where it sounds approx. as creative as a midlevel-creative person.
Daj#7482: And once nationalism and military gets involved things get ugly, fast
Daj#7482: "Rogue hackers support terrorists"
Deleted User#0000: maybe market ourselves as a fancy markov chain?
shawwn#3694: crypto went through that phase in the 90's
Deleted User#0000: skeptics will be happy, the public won't believe
Daj#7482: Yea I have to say it's shocking how much AI atm feels like closed source software and crypto in the 90s
FractalCycle#0001: oh yeah, good book about accelerating military tech rn is *The Kill Chain*. Not sure i agree with it advocating more autonomous lethal weapons, but it at least gives info about where weapons are headed no matter what.
Deleted User#0000: and we get to do things in peace
Daj#7482: I've read a good amount of history of the hacker culture from back then
Daj#7482: And the parallels are eerie
FractalCycle#0001: note to self: don't promote Eleuther AI too hard until later i guess
Daj#7482: > oh yeah, good book about accelerating military tech rn is *The Kill Chain*. Not sure i agree with it advocating more autonomous lethal weapons, but it at least gives info about where weapons are headed no matter what.
@FractalCycle Thanks I've been looking for a good argument pro-autonomous weapons, will check it out
Deleted User#0000: yea, we know enough of Gary Marcus' lines to just say what he says |
Deleted User#0000: and downplay the whole endeavor
Daj#7482: > note to self: don't promote Eleuther AI too hard until later i guess
@FractalCycle I genuinely think we're among the most idealistic, cooperative and honest hackers the world could have asked for
FractalCycle#0001: i hope so
Daj#7482: I'm breaking my back trying to be in contact with and please everybody, but man
Daj#7482: Man
Daj#7482: Politics is hard
Daj#7482: Ethics is hard
FractalCycle#0001: yes and yes, and add movement-building and dealing-with-media
Daj#7482: mhm
Daj#7482: we could get lucky and things turn out fine like last time for me (although that resulted in a pretty significant nervous breakdown too)
FractalCycle#0001: (i'll tell some of my more techy friends about the server, not sure they'll agree with this ethos. I'm here to help on tech, alignment, and to play with anything we build (although i also need more energy for that))
Kazumi#1297: there's an out of place meme about AI alignment that I'm dying to share, but I don't feel like I've contributed enough to have meme privilege
Daj#7482: Our goal is AI alignment and making the world a better place/reduce suffering
Daj#7482: everything else is instrumental
StellaAthena#3530: Here’s the article that’s almost entirely falsehoods if anyone is interested: https://www.ballerstatus.com/2019/05/14/magic-the-gathering-too-complex-for-pc-gaming-motherboards/
The actual paper was a game theory paper about the computational complexity of a board game.
FractalCycle#0001: what happened "last time"? (Sorry if i'm not informed on recent a.i. politicking history). A link to an article is fine
Daj#7482: > there's an out of place meme about AI alignment that I'm dying to share, but I don't feel like I've contributed enough to have meme privilege |
@Kazumi You better post it right now or I'll ban you
Daj#7482: lol
Daj#7482: > what happened "last time"? (Sorry if i'm not informed on recent a.i. politicking history). A link to an article is fine
@FractalCycle Ah yes sorry
Kazumi#1297: I think I found the solution for the alignment problem https://cdn.discordapp.com/attachments/729741769738158194/753684871192510524/kBP2BZFP_400x400.png
Daj#7482: Read the post I linked plus https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62f3c1490f51
Daj#7482: it was so obvious
Daj#7482: > Read the post I linked plus https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62f3c1490f51
And I can give you some insider takes that were left out of the public facing article if it interests people hah
FractalCycle#0001: thanks, i'll bookmark your article and catch up on it ~~after a month or two once i get treatment for what I think is ADHD, because i don't have the spoons to look into it in-depth right now~~ soon!
Daj#7482: I feel you my dude
Daj#7482: Good luck with that, it can be tough
StellaAthena#3530: I’m curious why you are now okay open sourcing GPT-Neo
Daj#7482: You'll get through it
Daj#7482: > I’m curious why you are now okay open sourcing GPT-Neo
@StellaAthena Are you asking me?
Kazumi#1297: I have a list of links to read, it never seems to shrink even though I've spent the last few weeks reading through them
StellaAthena#3530: Yes
FractalCycle#0001: thanks @Daj , good to have support.
In the meantime, i'll let a few others know about the server. You're probably right that security through obscurity is not going to work for a.i. safety, at least not for very long. |
StellaAthena#3530: (I haven’t gotten to the end of the article)
Daj#7482: Well open sourcing was the plan from the start so it's not a recent opinion I came to, but it has evolved a lot over the months
Daj#7482: And it's...hmm well I can give you the argumentation in many different lengths hah
StellaAthena#3530: Shameless plug: if anyone’s interested in AI + Security, I am a board member of DEF CON’s AI Village. We have a year round discord server where we chat about research ideas, computer security, and organize reading groups: https://discord.gg/2xeADS
Daj#7482: Neato
FractalCycle#0001: o thanks!
FractalCycle#0001: i def have multiple cybersec friends
Kazumi#1297: seems like my 90+ server list will also grow
StellaAthena#3530: We have a broad range of security interests, from crypto to malware detection, to ethics and privacy (that’s how I got into the security scene)
WAUthethird#4977: I'm in 100 servers, every time I want to join another one I have to leave another
FractalCycle#0001: i'm interested in... i'm not sure how to call it, maybe "pure alignment" or "agent alignment"? Whatever yudkowsky and christiano argue about, but that altman doesn't talk about.
Kazumi#1297: yeah, I haven't hit the discord hard limit yet
FractalCycle#0001: what's the hard limit?
Kazumi#1297: 100
Kazumi#1297: unless you're a bot
FractalCycle#0001: thx
StellaAthena#3530: Yeah I keep deleting severs lol
StellaAthena#3530: I have three I am active in
StellaAthena#3530: This, the AI Village, and one for competitive *Magic* players.
FractalCycle#0001: i stay in some mainly for emotes and announcements |
Kazumi#1297: I'm usually only active in quiet servers
FractalCycle#0001: some to keep tabs on if anyone uses them ever again
Daj#7482: > i'm interested in... i'm not sure how to call it, maybe "pure alignment" or "agent alignment"? Whatever yudkowsky and christiano argue about, but that altman doesn't talk about.
@FractalCycle Alignmentforum master race
Daj#7482: 100 servers lol
Daj#7482: I try to have just enough so they fit on my screen
Kazumi#1297: by the way, any server I can ask about things I thought about research papers?
Daj#7482: What do you mean?
Kazumi#1297: well, I can ask the question right now to see where someone can direct me to
Daj#7482: Sure, we do have #research or here in general for many kinds of paper discussions
Kazumi#1297: I'm confused about this part in attention is all you need, it's doing (Q.W_q).(K.W_k)^T, which is Q.W_q.W_k^T.K^T, isn't that just the same as Q.W.K^T? why is it using 2 separate linear layers? https://cdn.discordapp.com/attachments/729741769738158194/753688744753889362/unknown.png
zphang#7252: I am interested in your insider takes
Daj#7482: Ah I'm sure many people can answer that here @Kazumi , not sure if/when people will be around to help
Daj#7482: > I am interested in your insider takes
@zphang Mine?
zphang#7252: yep
not necessarily now, whenever you feel like sharing
Daj#7482: Oh yeah sure I'm happy to share some of the juicy embarrassing inside takes of being famous for a week hah
Daj#7482: Probably not tonight since I need to finish that email but I'm sure I'll talk about it sometime
Daj#7482: Ask me during my talk! |
StellaAthena#3530: > I'm confused about this part in attention is all you need, it's doing (Q.W_q).(K.W_k)^T, which is Q.W_q.W_k^T.K^T, isn't that just the same as Q.W.K^T? why is it using 2 separate linear layers?
@Kazumi this is (QW_q)(KW_k)^t right
zphang#7252: Oh right, is there a link to that somewhere
Daj#7482: https://old.reddit.com/r/slatestarcodex/comments/ik0rhz/next_ssclesswrong_meetup_guest_speaker_connor/
StellaAthena#3530: I find the period notation really hard to read
Kazumi#1297: yeah, composition of multiple linear multiplications is the same as a single multiplication, right?
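The identity Kazumi points out is real; here is a hedged numpy illustration of the usual explanation for keeping the two matrices separate anyway: per head, W_q and W_k map d_model down to a much smaller d_head, so the merged matrix is low-rank and the factored form is the cheaper parameterization.

```python
import numpy as np

d_model, d_head, n = 512, 64, 10
rng = np.random.default_rng(0)
Q = rng.normal(size=(n, d_model))
K = rng.normal(size=(n, d_model))
W_q = rng.normal(size=(d_model, d_head))  # per-head projections: d_model x d_head
W_k = rng.normal(size=(d_model, d_head))

scores_factored = (Q @ W_q) @ (K @ W_k).T
scores_merged = Q @ (W_q @ W_k.T) @ K.T
print(np.allclose(scores_factored, scores_merged))  # True: same scores
print(np.linalg.matrix_rank(W_q @ W_k.T))           # 64, not 512: low rank
# Two d_model x d_head matrices: 2*512*64 = 65,536 params,
# vs a full d_model x d_model merged W: 512*512 = 262,144 params
```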
StellaAthena#3530: Here let’s move to #research
Daj#7482: Would love to see lots of you guys at the meetup, I think it'll be a really nice turn out
Daj#7482: I'll try to make the talk entertaining
Daj#7482: (it would be a great marketing stunt to release the code during my talk but that is probably too early lol)
bmk#1476: Way too early
Daj#7482: Yea I know
researcher2#9294: > I think I found the solution for the alignment problem
@Kazumi I requested an emote be created for this before, BUMP
Kazumi#1297: (totally didn't steal the idea from you)
researcher2#9294: no finger pointing here, I just want some woolooloo
researcher2#9294: ayoooooooo
bmk#1476: I don't get the joke
StellaAthena#3530: https://youtu.be/l0xh4qopQpk
researcher2#9294: I don't know whether Kazumi was suggesting we give AI religion or we bow down to it as a god. |
researcher2#9294: @StellaAthena Can't wait for the nightmares
StellaAthena#3530: There's a whole album where this came from
StellaAthena#3530: The creator is quite prolific
StellaAthena#3530: This one is my favorite I think https://www.youtube.com/watch?v=JZIVmKOdrBk
StellaAthena#3530: But they're all great
researcher2#9294: hahaha
researcher2#9294: want some cake, well none for you
researcher2#9294: 😦
RorroArt#7226: so, how are you guys gonna build gpt-3?
StellaAthena#3530: Out of LEGOs, mostly
StellaAthena#3530: A little K'NEX
StellaAthena#3530: And some bubble gum
RorroArt#7226: lol
StellaAthena#3530: That’s a very general question, so I’m not really sure what you’re looking for. Do you have a more specific question?
RorroArt#7226: I mean, are you collecting the data from scratch, and how are you gonna get the computing power for training it?
StellaAthena#3530: You can see the data we are currently collecting here: https://github.com/EleutherAI/The-Pile
StellaAthena#3530: We have almost 300 GiB done and another 500 GiB in the pipeline
StellaAthena#3530: Our goal is 1 TiB of curated and processed text data with as little reliance on Common Crawl as we can get away with.
shawwn#3694: @RorroArt https://twitter.com/theshawwn/status/1302239161861062657
FractalCycle#0001: > (it would be a great marketing stunt to release the code during my talk but that is probably too early lol) |
i legit thought elon would say he had a neuralink installed during that demo lol
gwern#1782: @RorroArt the summary is: they are hand-collecting existing datasets (plus creating a few themselves) from past work, mostly, to clean and merge together; the GPT-3 codebase will be based on tf-mesh to shard the models across TPUs; the TPUs will be provided for free by TFRC which has granted people here access to up to TPU-2048 (subject to availability - currently extremely low). work is being staged off GCP and the occasional Hetzner or OVH dedicated server (for cheap bandwidth). no one knows how the final trained model will be made available - probably they won't be offering an API since can't guarantee TPU availability. may just throw it over the wall as a download/torrent seeded from a Hetzner/OVH dedicated.
gwern#1782: https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/?OCID=msr_blog_DeepSpeed3_tw
Aran Komatsuzaki#5714: HF and Microsoft released blocksparse kernels almost simultaneously lol
zphang#7252: it's gumbel-softmax/concrete all over again
gwern#1782: nah, the blocksparse guy is external, university, and preceded HF considerably: https://www.reddit.com/r/MachineLearning/comments/iq55ig/p_pytorch_extension_for_gpuaccelerated_block/g4qdx1c/
gwern#1782: "The trillion-parameter model has 298 layers of Transformers with a hidden dimension of 17,408 and is trained with sequence length 2,048 and batch size 2,048." <-- would you rather fight one 1t-parameter Transformer or 15 66b-parameter experts?
Kazumi#1297: has anyone worked on an image transcriber yet?
Kazumi#1297: I'm starting to think that MSCOCO's text detection dataset is not good at all, I've looked through about 50 samples and most are not really great examples of images with text in them, and the bounding box is usually off, either being in the wrong place or straight up where there is no text. and only 42% of the images have any text in them
https://vision.cornell.edu/se3/coco-text-2/#download https://cdn.discordapp.com/attachments/729741769738158194/753876045945634876/img_1.png
Kazumi#1297: https://cdn.discordapp.com/attachments/729741769738158194/753876050869616721/img.png
Kazumi#1297: working with COCO dataset has always been weird https://cdn.discordapp.com/attachments/729741769738158194/753876750798291044/unknown.png
Noa Nabeshima#0290: Why does it make sense for Google to develop TPUs but not Microsoft?
Noa Nabeshima#0290: or Amazon for that matter
shawwn#3694: it doesn't. Facebook's software + Microsoft's hardware would be a killer combination.
shawwn#3694: I think the world will be horrified to realize that TPUs are going to dominate ML
kindiana#1016: amazon does have their own tpus
kindiana#1016: "amazon inferentia"
kindiana#1016: its more like tpuv1 though, only supports 8/16 bit math
kindiana#1016: I'm waiting for micron or one of the other hbm/gddr vendors to make a tpu lol |
kindiana#1016: its memory bandwidth all the way down
shawwn#3694: Yeah.. I wonder if the world can catch up
shawwn#3694: TPUs aren't a panacea, and in particular they're slower than GPUs in terms of latency (inferencing GPT is noticeably snappier on GPUs), but in terms of raw flops + mem bandwidth, TPUs are a clear winner
kindiana#1016: I dont think theres anything inherently slower about tpus for inference, its just how google has set it up with tensorflow grpc as the interface to them instead of having them be locally attached
kindiana#1016: hardware wise I think others will certainly catch up with google, tpus are relatively simple compared to making cpus/gpus because you can have software managed caching and latency hiding
kindiana#1016: nobody has made an accelerator with good sw support though, even amd is barely getting there lol
Sid#2121: > has anyone worked on an image transcriber yet?
@Kazumi take a look at ASTER
Kazumi#1297: aster?
this?
https://github.com/bgshih/aster
Sid#2121: yeah
gwern#1782: I speculate that nvidia has been giving MS a sweetheart deal to not buy into custom hardware. MS is not afraid of ASICs or weird hardware, they boasted about their FPGAs for random forests, iirc, but note their close collaboration with nvidia for DeepSpeed with the Megatron people
Ravna#1831: It also seems that a decently-sized cache is enough to keep the memory overhead within an acceptable percentage. Going all-in on huge SRAM-on-chip like Graphcore or Cerebras do may just be premature optimization on the wrong thing, one that's already past the diminishing-returns threshold.
gwern#1782: yeah, that's something I've been wondering for a while about cerebras: it seems optimized to iterate a relatively small non-attention model as fast as possible, but both of those choices are bucking the trend
gwern#1782: still no public benchmarks or usecases...
jmmr#8671: how do you build agi
shawwn#3694: pretty simple
Aran Komatsuzaki#5714: all it takes is to wait for five years
3dprint_the_world#6486: Feed the following prompt to GPT-3: "Build AGI." |
Noa Nabeshima#0290: Has anyone looked into why the SOTA on all the benchmarks in the GPT-3 paper are so much better than fine-tuned BERT?
3dprint_the_world#6486: What do you mean
kindiana#1016: if you're only targeting a couple benchmarks its much easier to tune the model structure, pretraining/evaluation scheme and hyperparameters to perform better on a single task
3dprint_the_world#6486: I doubt the model structure has actually been tuned for the downstream tasks. Given the cost of training the model. But I could be misunderstanding.
kindiana#1016: model structure as in autoregressive vs masked language modelling, large memory layers, or retrieval based components
kindiana#1016: those tend to be different for sota of different benchmarks
kindiana#1016: its usually a transformer core but theres also usually some tricks in the model structure to get there
3dprint_the_world#6486: But I think Noa is asking about *fine-tuned* BERT.
kindiana#1016: I thought the question was why is SOTA better than fine tuned bert
3dprint_the_world#6486: i.e. "Why does GPT-3 do better without fine-tuning, than BERT with fine-tuning"
3dprint_the_world#6486: At least that's how I interpret the question. But I could be wrong.
Ravna#1831: The most obvious answer is that the BERT NN used for benchmarking is too small to have the capability to hold all this information, no matter how much fine-tuning you do.
zphang#7252: And the follow-up question is: "Which benchmarks?"
zphang#7252: GPT-3 surely does much better than one might expect, but it generally far underperforms the SOTA
zphang#7252: also e.g. they compared against BERT rather than RoBERTa for SuperGLUE, and RoBERTa pretty much showed that BERT was just undertrained
zphang#7252: There're other tasks like TriviaQA which I think a bigger model trained on more diverse data (GPT-3) will outperform on
zphang#7252: But as for why the SOTA comparison (not-GPT) numbers are much better than vanilla BERT, besides the RoBERTa factor, as Ben mentioned a lot of SOTA models have random tweaks and tricks added
zphang#7252: like specific data augmentation, ensembling, and other stuff
bmk#1476: I have a truly marvelous method to build agi, but this discord message is too small to contain it
Eddh👽#7290: write a paper |
Sid#2121: you're still thinking in pre-agi terms
Sid#2121: build the AGI, get it to write the paper for you
Sid#2121: https://youtu.be/B5oGJTUpbpA?t=82
bmk#1476: @Eddh👽 i honestly can't tell if you're meming off my meme
bmk#1476: i'm going to assume you are
Noa Nabeshima#0290: https://cdn.discordapp.com/attachments/729741769738158194/754410837178974258/unknown.png
Noa Nabeshima#0290: https://cdn.discordapp.com/attachments/729741769738158194/754410873241600000/unknown.png
Noa Nabeshima#0290: I'm confused how fine-tuned SOTA is so much better for SuperGLUE in particular
zphang#7252: BERT is undertrained
Noa Nabeshima#0290: even with tricks: how are your tricks getting you up to human level, much higher than few-shot GPT-3 AND fine-tuned BERT??
zphang#7252: for comparison, look at the benchmark results for RoBERTa on SuperGLUE: https://super.gluebenchmark.com/leaderboard
Noa Nabeshima#0290: Wow, got it
zphang#7252: their benchmark submission also includes ensembling
zphang#7252: it looks like the GPT-3 people took the BERT-fine-tuned baselines from the original SuperGLUE paper, which you would expect would be the least impressive baseline
zphang#7252: that said, I think it's worth characterizing which tasks have what profiles of fine-tuned >>> GPT and GPT > fine-tuned
Noa Nabeshima#0290: That was clarifying, thank you.
I'm still confused how fine-tuning makes the models that much better, though. It looks like few-shot pre-training w/o fine-tuning won't reach fine-tuning performance on certain tasks for a long time, which is confusing to me.
zphang#7252: Well, fine-tuning does use many, many more training examples than few-shot, and can allocate relevant model capacity to solving the task.
zphang#7252: Or if you want to be more cynical, fine-tuning probably picks up on artifacts from the data-creation procedure that help to solve a task, that few-shot doesn't have enough examples to pick up on. |
Daj#7482: Just a quick reminder I'll be giving a talk about GPT3, AI alignment, Eleuther and more tomorrow at 10:30AM PDT in case anyone wants to attend
https://old.reddit.com/r/slatestarcodex/comments/ik0rhz/next_ssclesswrong_meetup_guest_speaker_connor/
Daj#7482: My current cut of the talk is like 3 hours long hopefully i can cut it down until tomorrow haha
bmk#1476: ~~just talk faster~~
Daj#7482: The most difficult part is that I know some non AI people will attend so I want to make it accessible
Daj#7482: Which means I can't use words like "unsupervised", "instrumental convergence" and "paperclip maximizer"
bmk#1476: hmm
Daj#7482: I also don't want to end the talk with "what should we do? I dunno lol good luck"
Daj#7482: haha
Daj#7482: I think I've found solutions
Daj#7482: Just needs more practice
bmk#1476: i mean i want to hear solutions
zphang#7252: moar parameters + moar words = moar power
bmk#1476: so please, write a post or something
Daj#7482: Solution: Get really good at math and read AF
bmk#1476: i really want to read about it
Daj#7482: Yea I was thinking about turning things into a longer post
bmk#1476: >really good at math
gimme another decade or two |
Daj#7482: If I can find the time
bmk#1476: i really dont want you to get limited by the format and only talk about AI 101
Daj#7482: The problem is that there is a lot of uncertainty about what is the right thing to do, or useless or even harmful
Daj#7482: And how much someone not on the far tail end of intelligence can _really_ do
bmk#1476: you should maybe reserve the last 30% or whatever to not worry about making it accessible
Daj#7482: > i really dont want you to get limited by the format and only talk about AI 101
@bmk Yea this will not be a full enumeration of my beliefs
zphang#7252: I feel like compared to most advanced areas of research, AI/ML is generally surprisingly intuitive
Daj#7482: > you should maybe reserve the last 30% or whatever to not worry about making it accessible
@bmk We'll see, I think I actually may have a clever compact way to do the ending, at the cost of being very dramatic/quixotic
bmk#1476: ooh
Daj#7482: Which is my writing style anyways and I should just own it
bmk#1476: haha
bmk#1476: well, please do write a followup, i dont want all the fun bits jammed into the last bit
Daj#7482: > I feel like compared to most advanced areas of research, AI/ML is generally surprisingly intuitive
@zphang Funnily enough, I have never had any problem explaining AI alignment to normies
Daj#7482: Even my mother gets it intuitively basically immediately
Daj#7482: It's ML experts that don't seem to get it
zphang#7252: normies also thought the singularity was arriving in 5 years, 5 years ago
Daj#7482: so did Minsky |
bmk#1476: i think that one miri post explains this phenomenon well
Daj#7482: "Never trust someone to understand something when their livelihood depends on them not understanding it"
bmk#1476: "i spent many dozens of hours tuning GAN parameters by hand to get the cool results, so there's no way the stuff you're describing will happen!"
Daj#7482: that too yea
Daj#7482: "It's just a flu virus in China, I've seen many flu viruses, it's fine"
bmk#1476: "do you have any idea how hard it is to get RL to do anything at all? there's so much tuning of hyperparams needed!"
zphang#7252: I would put it more charitably. Researchers would rather lean toward being more conservative than overselling results
zphang#7252: (Except when it's specifically their own work)
Daj#7482: Their papers don't give me that impression lol
Daj#7482: Yea
Daj#7482: I've become exponentially more cynical about institutions over the years
Daj#7482: And individuals as just incentive optimizers
Daj#7482: I wrote this whole blog post justifying why I am leaving academia and never published it because it was too cynical haha
bmk#1476: https://intelligence.org/2017/10/13/fire-alarm/
>The author thinks it is really very hard to do the impressive things that modern AI technology does, they have to slave long hours over a hot GPU farm tweaking hyperparameters to get it done. They think that the public does not appreciate how hard it is to get anything done right now, and is panicking prematurely because the public thinks anyone can just fire up Tensorflow and build a robotic car.
Daj#7482: I should just link that essay as my talk and call it a day lol
bmk#1476: this quote resonates too damn hard
bmk#1476: haha
Daj#7482: I'll never be able to put it better |
bmk#1476: >The author does not know how to build AGI using present technology. The author does not know where to start.
:guilty:
Daj#7482: I'll be giving a 3 step guide to building strong AGI in my talk :D
Daj#7482: Paperclip maximization or your money back!
bmk#1476: awesome
bmk#1476: now i just need one more to complete the rule of threes and write a blog post about how there are more ways to build AGI than there are Portal games
Daj#7482: Haha
Daj#7482: One of the best things about preparing a talk is I get to do one of my favorite things: Go through my legendary notebook of bad jokes and observations
Daj#7482: It's probably the best thing I have made tbh
Daj#7482: Each talk I pick 2-3 of them to use lol
bmk#1476: my twitter feed *is* that book
Daj#7482: haha, nice
bmk#1476: it's a handy journal of what i'm currently learning about too
bmk#1476: so many things in math have extremely memeable names
zphang#7252: I still lean on the more conservative side, but I guess I'll wait to be more convinced tomorrow lol
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754423352474599494/Screenshot_from_2020-09-12_21-28-56.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/754423424029425664/EgXx1jRUcAIOPmN.png
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754423591302332446/Screenshot_from_2020-09-12_21-29-53.png
bmk#1476: hahaha |
Daj#7482: Technically true, the best kind of true
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754423805136470227/Screenshot_from_2020-09-12_21-30-44.png
Daj#7482: Some of these not even I know what the context was
bmk#1476: that is amazing
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754423929837453483/Screenshot_from_2020-09-12_21-31-15.png
bmk#1476: someday you need to give a talk that consists entirely of memes
bmk#1476: just squish them all together
Daj#7482: Oh boy that would be a fun challenge
bmk#1476: no substance, only jokes
bmk#1476: -1 point for every substantive statement you make
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754424339285147658/Screenshot_from_2020-09-12_21-32-51.png
Daj#7482: I miss audience interactions
bmk#1476: i miss ~~audience~~ interactions
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754424535704404098/Screenshot_from_2020-09-12_21-33-40.png
bmk#1476: X=monad Y=burrito
Daj#7482: Some of these are just dumb lol https://cdn.discordapp.com/attachments/729741769738158194/754424721021468821/Screenshot_from_2020-09-12_21-34-18.png
bmk#1476: what if you use all of them as a prompt to gpt3
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754424843709054976/Screenshot_from_2020-09-12_21-34-53.png
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754424926697423018/Screenshot_from_2020-09-12_21-35-13.png
bmk#1476: wait there's a perfect meme for this one |
bmk#1476: i need to find it, one sec
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754425103974006904/Screenshot_from_2020-09-12_21-35-54.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/754425217299906601/k4qtf978myg31.png
Daj#7482: I want this as a poster on my wall https://cdn.discordapp.com/attachments/729741769738158194/754425224274903040/Screenshot_from_2020-09-12_21-36-20.png
bmk#1476: i'm puttingall these into gpt3
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754425473265827880/Screenshot_from_2020-09-12_21-37-22.png
Daj#7482: please do
Daj#7482: If there's ever a biography about me I want these to be in it https://cdn.discordapp.com/attachments/729741769738158194/754425722935705681/Screenshot_from_2020-09-12_21-38-13.png
Daj#7482: I will pass on my list of bad jokes as a family heirloom
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754425923499196526/Screenshot_from_2020-09-12_21-39-10.png
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754426106492223600/Screenshot_from_2020-09-12_21-39-55.png
Daj#7482: I need to stop
Daj#7482: Lifehack https://cdn.discordapp.com/attachments/729741769738158194/754426305121878166/Screenshot_from_2020-09-12_21-40-40.png
zphang#7252: > After decades of digging, archaeologists have finally discovered a lost notebook of famed researcher Connor Leahy. The public awaits the unveiling of the contents of the notebook this Sunday, and the lost discoveries that may be contained within.
Daj#7482: Y E S
Daj#7482: I need to leave copies of this in obvious locations
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/754426593870479460/unknown.png
bmk#1476: ;-;
Daj#7482: hahaha
bmk#1476: `If you're feeling down, just remember that it could always be worse. Something could be wrong with you, for example.` |
Daj#7482: That's actually great
bmk#1476: `The world's most expensive cookie is made with ants.`
Daj#7482: adding it to the notebook
bmk#1476: `What's the point of putting goalposts on a football field? They'd only get in the way and cause unnecessary injuries.`
bmk#1476: `All potatoes are communist.`
bmk#1476: `My theory: Pac-Man is an allegory for the dangers of capitalism.`
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754427047014564011/Screenshot_from_2020-09-12_21-43-37.png
bmk#1476: `I like my coffee like I like my women: ground up.`
Daj#7482: mood https://cdn.discordapp.com/attachments/729741769738158194/754427231928975420/Screenshot_from_2020-09-12_21-44-22.png
Daj#7482: > `I like my coffee like I like my women: ground up.`
@bmk Yikes hahaa
bmk#1476: `The best way to cut down a tree is with a plane.`
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/754427302716375050/unknown.png
bmk#1476: thanks OA
Daj#7482: I have some off the top of my head too
Daj#7482: Friends are like trees. They fall down when struck multiple times with an axe.
bmk#1476: `I'm not a vegetarian because I love animals. I'm a vegetarian because I hate plants.`
Daj#7482: Friends are like snowflakes. They disappear when you pee on them.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/754427526813712424/unknown.png
bmk#1476: spooky |
bmk#1476: does gpt3 know the date
Daj#7482: It shouldn't I think
bmk#1476: `The glass is always half full... of vodka.`
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754427740374958130/Screenshot_from_2020-09-12_21-46-22.png
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754427860621721660/Screenshot_from_2020-09-12_21-46-53.png
bmk#1476: `There are three kinds of people in this world: those who can count and those who can't.`
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754428012488949770/Screenshot_from_2020-09-12_21-47-27.png
bmk#1476: `My favorite number is 0.1. It's almost zero.` just got this from gpt3
Daj#7482: These are not bad tbh
bmk#1476: `Theoretical physicists are just like regular physicists, but with more theory.` :bigbrain:
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754428266210787438/Screenshot_from_2020-09-12_21-48-28.png
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754428431462170714/Screenshot_from_2020-09-12_21-49-08.png
bmk#1476: `Tell a man that there are 400 billion stars in the universe and he'll believe you. Tell him a bench has wet paint on it and he has to touch it to be sure.`
Daj#7482: :bigbrain: https://cdn.discordapp.com/attachments/729741769738158194/754428556058296391/Screenshot_from_2020-09-12_21-49-37.png
Daj#7482: :ultrazucc: https://cdn.discordapp.com/attachments/729741769738158194/754428714992926730/Screenshot_from_2020-09-12_21-50-13.png
Sid#2121: where has @Daj ended and gpt3 begun lmao
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754428856814796961/Screenshot_from_2020-09-12_21-50-47.png
bmk#1476: `Algebraic expressions are the highest form of poetry.` tbu
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754428992785875014/Screenshot_from_2020-09-12_21-51-21.png
bmk#1476: `Everything is connected in one way or another. Except turtles. They're discrete.` |
Deleted User#0000: > Just a quick reminder I'll be giving a talk about GPT3, AI alignment, Eleuther and more tomorrow at 10:30AM PDT in case anyone wants to attend
> https://old.reddit.com/r/slatestarcodex/comments/ik0rhz/next_ssclesswrong_meetup_guest_speaker_connor/
@Daj will it be recorded?
Sid#2121: > `Tell a man that there are 400 billion stars in the universe and he'll believe you. Tell him a bench has wet paint on it and he has to touch it to be sure.`
@bmk if this is gpt i'm ending this project right now
bmk#1476: `If you don't like me, your opinion is invalid.`
bmk#1476: @Sid it's regurgitated
bmk#1476: just google it, it's a carlin quote
Daj#7482: > @Daj will it be recorded?
@Deleted User If the tech doesn't fuck up, yes
bmk#1476: `Everything is connected in one way or another. Except turtles. They're discrete.` this however is gpt3
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/754429254053265479/Screenshot_from_2020-09-12_21-51-59.png
bmk#1476: and afaict it's original
Deleted User#0000: cool, i'll get to see how imagined you and you differ
Deleted User#0000: lol
Daj#7482: Physically or intellectually haha
bmk#1476: i absolutely love this turtles one
Deleted User#0000: both lol
Daj#7482: The turtles one is pretty great
bmk#1476: `If the square root of -1 is imaginary, what's the square root of the imaginary number?` |
Daj#7482: Well I hope I don't disappoint lucid haha
Daj#7482: My haircut probably will
Daj#7482: Great thing to say to linux nerds with a Tux sticker on their laptop: "Nice club penguin sticker"
Daj#7482: sqrt(i) is -1 I think?
Daj#7482: I don't remember
Daj#7482: or was it 1
Daj#7482: god I suck at math
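For the record, polar form settles it; the other square root is just the negative of this one:
```latex
i = e^{i\pi/2} \quad\Longrightarrow\quad \sqrt{i} = e^{i\pi/4} = \frac{1+i}{\sqrt{2}} \approx 0.707 + 0.707\,i
```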
Deleted User#0000: lol nah, i'll be cheering you on
Daj#7482: I do need a haircut though
Daj#7482: I still have quarantine hair
Sid#2121: nah quarantine hair is the way forward
Sid#2121: haircuts are unpleasant
Daj#7482: tbh I have some pretty glorious hair when long
Daj#7482: but that's work too
Sid#2121: either go bald or go full on homeless
Sid#2121: the only two haircuts
Daj#7482: Well I have the homeless look pretty down
Daj#7482: Or homeless Don Quixote
bmk#1476: `The moon is just a cool rock.`
Sid#2121: not wrong |
Daj#7482: hahah
bmk#1476: `I don't know if you're familiar with the term 'web 2.0', but it's when your mom goes on the internet.`
bmk#1476: `The only reason I'm still on Facebook is so that I can tell people I'm deleting Facebook.`
bmk#1476: `Mountain Dew will dissolve a mouse.`
bmk#1476: `I'm sure we can agree that the best kind of peace is the peace where everyone is dead.`
Sid#2121: :guilty: comforting that our AI is telling us this
Daj#7482: :firealarm:
bmk#1476: `The number one cause of divorce is marriage.` :bigbrain:
Daj#7482: I am once again shocked by how good these are
bmk#1476: `If you think the world is too cruel, you're probably just bad at it.`
bmk#1476: afaict most of these are original
bmk#1476: `What if there were no hypothetical questions?`
Sid#2121: https://www.reddit.com/r/Showerthoughts/comments/9li6is/the_number_one_cause_of_divorce_is_marriage/ i'm willing to bet a lot are from shower thoughts lol
bmk#1476: ah, hmm
Daj#7482: Probably a lot aren't original
Daj#7482: But almost none of mine are too
bmk#1476: `The order of operations is PEMDAS. P is for parentheses, E is for exponents, M is for multiplication, D is for division, and A is for addition. S is for stupid.`
Daj#7482: actual gold
bmk#1476: `What is the last thing to go through a fly's mind? Its ass.` this one is funny in a surreal way
Daj#7482: maybe if you squish it in a certain way |
Daj#7482: Or flies are just vain
Daj#7482: Scheming little fuckers
Daj#7482: Can we take a moment to appreciate what a phenomenal name for an animal "Fly" is btw?
bmk#1476: `A computer is like a cold. It's not good for you, and you can't get rid of it.`
Sid#2121: > Can we take a moment to appreciate what a phenomenal name for an animal "Fly" is btw?
@Daj surprised that username wasn't taken already
Daj#7482: I like that we didn't even call birds flies
bmk#1476: `The sun is a giant space heater.`
Daj#7482: A Fly is the _essence_ of flying
Daj#7482: Nothing else but fly
bmk#1476: `I have multiple personalities, but they're all me.`
Daj#7482: A fly is the platonic ideal of flying :bigbrain:
Daj#7482: One from the notebook: Frogs are just mouths with just enough limbs to fling themselves at food.
bmk#1476: `The reason fire trucks are red is because that's the color of the people running away from it.`
bmk#1476: `A little pencil can write a large check.`
bmk#1476: `All roads lead to Rome, but I'm not sure Rome wants them to.`
bmk#1476: `A lot of people are afraid of heights. Not me, I'm afraid of widths.`
Daj#7482: We should write a paper on deepities generated by GPT3 and try to track down if they are original or not
bmk#1476: haha yes we actually should
bmk#1476: i'll get on generating, like, 1k of em |
Daj#7482: Imagine writing an actually fun paper lol
Daj#7482: Can we cite my joke notebook
Daj#7482: please
bmk#1476: haha yes
bmk#1476: is that single experiment worth an entire paper?
zphang#7252: we should actually have a search system for the pile
bmk#1476: i'd strongly doubt it
bmk#1476: awesome idea!
Daj#7482: We can publish it on April 1st
bmk#1476: i dont know anything about search but
Daj#7482: > we should actually have a search system for the pile
@zphang Yea, seeing as OpenAI kinda fucked this up and maybe had validation data in their train data...
Daj#7482: If we wanted to do this right this should be high priority
bmk#1476: do you know how to set up a search engine?
Daj#7482: Do I look like a DevOps person
bmk#1476: i have no clue
bmk#1476: haha
Deleted User#0000: @bmk just use elasticsearch
bmk#1476: how do that
Deleted User#0000: it's been commoditized |
Deleted User#0000: so, there's cloud services
StellaAthena#3530: I know how to do like a web search engine
Deleted User#0000: but it's easy to set it up at the scale you'd like
zphang#7252: there are "parody" conferences, if you actually want to get published
zphang#7252: like http://sigbovik.org/2020/
Deleted User#0000: it's not like you are expecting 100k hits a day
Daj#7482: How flexible are these search engines? Just keyword matches?
Daj#7482: > like http://sigbovik.org/2020/
@zphang This looks awesome lol
Daj#7482: > We especially welcome the three neglected quadrants of research: joke realizations of joke ideas, joke realizations of serious ideas, and serious realizations of joke ideas
bmk#1476: yes
Deleted User#0000: yup, they can do all sorts of custom indexing, starting with keyword
bmk#1476: i need a sigbovik paper
Deleted User#0000: all the way up to vectorized search (with BERT) if you'd like
Deleted User#0000: https://medium.com/analytics-vidhya/quick-start-elasticsearch-with-python-a2578cd87339
Deleted User#0000: it's amazingly powerful
Deleted User#0000: and almost all startups use this
bmk#1476: hmm, how much would it cost to set up?
Deleted User#0000: Instacart led the charge some years ago
bmk#1476: because we're gonna be searching, what, 1TB of data? |
bmk#1476: not too big but i just want a feel for how much hardware that is
Daj#7482: "Is GPT3 like...conscious...you know what I mean, dude? - A systematic analysis of 'deep' content generated by an autoregressive language model"
Deleted User#0000: i'm not sure, you'd have to do some research there
zphang#7252: given that we have no latency requirement, we'll find a way
bmk#1476: `I'm a human being. I'm not a human doing.`
bmk#1476: ah, ok
Deleted User#0000: most of the work i've done at companies were with migrating to new indices
bmk#1476: can/should we just dump whatever we have so far into es?
bmk#1476: or should we wait till the pile is done
zphang#7252: wait till the v1 I guess?
bmk#1476: also we should write a nice explorer UI
Deleted User#0000: yea, you can, you just have to define a schema
Daj#7482: It probably has hosting costs? Or can it be run offline?
bmk#1476: 1TB offline?
Deleted User#0000: {id, source, url, text}
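A minimal sketch of that schema with the open-source `elasticsearch` Python client, assuming a single local node; the index name, mappings, and sample document here are illustrative, not a settled design:
```python
# Minimal sketch: index Pile-style documents into a local Elasticsearch node.
# Assumes the `elasticsearch` Python client and a node on localhost:9200.
from elasticsearch import Elasticsearch

es = Elasticsearch()  # defaults to localhost:9200

es.indices.create(index="pile", body={"mappings": {"properties": {
    "source": {"type": "keyword"},  # unanalyzed, for exact filtering
    "url": {"type": "keyword"},
    "text": {"type": "text"},       # analyzed, for full-text search
}}})  # the schema's `id` goes into the document _id rather than the mapping

es.index(index="pile", id="doc-0", body={
    "source": "example", "url": "http://example.com", "text": "hello pile",
})
es.indices.refresh(index="pile")  # make the new doc visible to search

# the same match query a train/validation contamination check would run
hits = es.search(index="pile", body={"query": {"match": {"text": "hello"}}})
print(hits["hits"]["total"])
```
`keyword` fields are stored unanalyzed for exact filtering, while the `text` field gets tokenized for full-text matching; that match query is also the shape a validation-data contamination check would take.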
bmk#1476: errrrrrrrr
Daj#7482: 1TB cloud on GCS is like 100€
Daj#7482: per month
Daj#7482: or 50€
bmk#1476: def not gcp |
Daj#7482: Something like that
bmk#1476: maybe hetzner
Deleted User#0000: yea, get a cheap hetzner
Daj#7482: That's why I ask if it's hosted only
bmk#1476: `The problem with the gene pool is that there is no lifeguard.`
Daj#7482: Or it's OSS
Deleted User#0000: it's open source
Daj#7482: I could have just clicked on the link
Daj#7482: Apologies
Daj#7482: haha
Deleted User#0000: np lol
3dprint_the_world#6486: > A lot of people are afraid of heights. Not me, I'm afraid of widths.
That's a Steven Wright quote
Deleted User#0000: there's also the good ol' Lucene
Deleted User#0000: postgresql has ok indexing on text fields, some people just use that for search
bmk#1476: yeah some are unoriginal
3dprint_the_world#6486: But to be fair, Steven Wright was probably an early incarnation of GPT-3.
Deleted User#0000: but yea, elasticsearch is safe and easy
bmk#1476: i have a twitter thread of gpt3 jokes that i've done a cursory google on, though if someone could help verify their originality that would be great
bmk#1476: https://twitter.com/nabla_theta/status/1304871133510942720 |
bmk#1476: (the prompt, for reference:)
bmk#1476: ```A list of my favorite jokes:
I'm in the top 100% of people.
People in the 50's didn't care too much about pollution, because they hadn't yet invented the environment.
If you don't understand recursion, read this sentence.
X is like Y except they're different in every imaginable way.
If a baby is born underwater, it can live its whole life underwater.
Is that in metric seconds?
Oh sorry, that's supposed to be an uppercase 7.
Humans eat more bananas than monkeys. Only a few thousand monkeys a year!
Things happen for a reason, but sometimes that reason is that you're stupid.
The more you shake it, the more the color changes. Just like a baby.
Fertility is hereditary. If your parents didn't have any children, chances are you won't either.
Isaac Newton died a virgin. So I have one up on the greatest scientific mind of history! I'm not dead.
Apply freshly cut garlic to a freshly open wound or burn to immediately intensify the pain.
Friends are like trees. They fall down when struck multiple times with an axe.
With sufficiently bad math, we are within error margins.
French is Spanish in cursive.
``` |
Daj#7482: I'm so proud
bmk#1476: `You are actually a rectangle, disguised as a person.`
Noa Nabeshima#0290: https://www.reddit.com/r/NoStupidQuestions/comments/a6hfcr/can_a_baby_survive_underwater_if_it_never_takes/
Daj#7482: 99% of my jokes are stolen if that wasn't obvious btw
Daj#7482: Mostly from Half As Interesting videos lol
bmk#1476: HAI has the greatest jokes ever
3dprint_the_world#6486: > My reality check just came back positive.
bmk#1476: Wait I just got an idea
bmk#1476: So you start with a one shot prompt
bmk#1476: "here is a list of my favorite jokes"
bmk#1476: Generate 64 candidates
bmk#1476: Have a tournament bracket to select the best
bmk#1476: Add to prompt
bmk#1476: Repeat
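A sketch of that loop, assuming the 2020-era `openai.Completion.create` API used elsewhere in this chat; `pick_better` is a hypothetical stand-in for whoever (or whatever) judges each match:
```python
# Sketch of the idea above: generate 64 candidates, run a single-elimination
# tournament, append the winner to the prompt, repeat.
import openai

def pick_better(a, b):
    """Human judge stub: show a pair, return the preferred joke."""
    print(f"1) {a}\n2) {b}")
    return a if input("winner? ") == "1" else b

prompt = "A list of my favorite jokes:\nFrench is Spanish in cursive.\n"
for _ in range(5):  # each round grows the prompt by one tournament winner
    resp = openai.Completion.create(engine="davinci", prompt=prompt,
                                    n=64, max_tokens=40, stop="\n")
    bracket = [c.text.strip() for c in resp.choices]
    while len(bracket) > 1:  # 64 -> 32 -> ... -> 1
        bracket = [pick_better(bracket[i], bracket[i + 1])
                   for i in range(0, len(bracket), 2)]
    prompt += bracket[0] + "\n"
print(prompt)
```
Single elimination halves the bracket each round, so each iteration costs 63 comparisons plus one generation call, and the winner feeds back into the prompt as an extra few-shot example.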
Daj#7482: "Florida is a PvP enabled zone"
Daj#7482: That's a fun idea bmk
bmk#1476: im too lazy to put it together rn but if there's enough interest i might do it later
3dprint_the_world#6486: > If you're gonna do something wrong, at least get paid for it.
researcher2#9294: > "Florida is a PvP enabled zone"
@Daj oof |
cfoster0#4356: Was that GPT-3?
cfoster0#4356: Oh it's a meme.
Kazumi#1297: I'd imagine making jokes is hard; it's not funny unless the punchline is unpredictable from the context, except with hindsight
bmk#1476: "Everything is connected in one way or another. Except turtles. They're discrete."
bmk#1476: @Kazumi if this isn't humor, i dont know what is
Kazumi#1297: I feel I hear a woosh on me
bmk#1476: that was generated by gpt3
bmk#1476: and afaict it's completely original
Deleted User#0000: @bmk could it be a reference to Discworld?
Deleted User#0000: in the books, the world is a turtle
bmk#1476: turtles all the way down?
Deleted User#0000: oh right lol
3dprint_the_world#6486: All it takes for GPT-3 to produce python code is this short prompt: `#!/usr/bin/python` 😄
Kazumi#1297: I wonder how good raw source code is for part of the dataset
3dprint_the_world#6486: yeah
zphang#7252: gpt russian roulette:
```python
import os, random, openai
resp = openai.Completion.create(engine="davinci", prompt="Run this bash command: ", n=100, stop="\n")
os.system(random.choice(resp.choices).text)  # run one of the 100 completions at random
```
bmk#1476: better prompt: |
bmk#1476: ```
"""TOP TEN BASH COMMANDS (YOU WON'T BELIEVE NUMBER 7!)
1. rm -rf / --no-preserve-root
2."""```
researcher2#9294: A guy at my old work did this once; he still gets shit for it
zphang#7252: Does anyone have a link to Daj's talk? I didn't realize you had to sign up ahead of time
bmk#1476: https://meet.google.com/ndo-jzks-hur
asparagui#6391: https://docs.google.com/presentation/d/1OIyJ5XFKu-hWT9MuzRHYW70uoYJoC2Ecj6NTOu9NDlY/edit#slide=id.p
asparagui#6391: @Daj you talk good
Daj#7482: Thank you :)
bmk#1476: That was absolutely amazing
bmk#1476: You're really good at conveying the urgency of the situation
thenightocean#6100: Yeah, you got me quite scared actually
Daj#7482: Thank you! I'm glad it wasn't too cringey hah
Sid#2121: Was it recorded? I wasn’t at home to watch 😓
StellaAthena#3530: Yeah it was
Sid#2121: Is it uploaded anywhere yet?
StellaAthena#3530: Idk
CRG#8707: Will probably be uploaded here: https://www.youtube.com/channel/UC99aUzBOquAHJbgUC5eJoZg |
Daj#7482: Well that was super fun! (and exhausting)
Daj#7482: Thanks to everyone that attended!
Neuromancer42🔮#6297: Thank you for creating the space and sharing the link publically!
RorroArt#7226: can i ask a random question?
RorroArt#7226: do you think that rl is the key to agi?
3dprint_the_world#6486: Matrix multiplication is the key to agi
bmk#1476: I know connor thinks so @RorroArt
bmk#1476: I personally think RL may not be necessary for something highly agenty
bmk#1476: Well, ok, connors idea is the whole world model neocortex thing
bmk#1476: Personally I think a world model plus search (no reward signal, no credit assignment) is already AGI
RorroArt#7226: thats interesting
RorroArt#7226: and what about neuroevolution?
bmk#1476: What about it?
kindiana#1016: backprop is just too good tbh
3dprint_the_world#6486: I once heard something along the lines of "evolution is a good approach if you can make billions of brains for free, train each one in a real world environment for years, and can wait hundreds of millions of years to get any results"
StellaAthena#3530: Yup
StellaAthena#3530: Sounds about right to me
bmk#1476: evolution is basically just noisy sgd with momentum
RorroArt#7226: makes sense
bmk#1476: by the way, if anyone here has read iterated amplification and can actually math, a translation into normalspeak would be nice |