chilli#5665: ...
jrowe#5371: 1% of 1%
Deleted User#0000: > do you think you *really* understand something like, say, the ZeRO optimizer?
@chilli you all really underestimate yourselves
gwern#1782: I'm a little puzzled why people are so interested. the official OA 1.5b has been out forever, and it's far from the biggest publicly available model. I guess it's a mix of people not realizing that 'EA is releasing its GPT-3 models' doesn't mean GPT-3-*175*b but something vastly less useful, and hopes being raised for a GPT-3-175b+ EA release in the near-future ('wait, EA is for real?!')
Deleted User#0000: Very few people know what you know
EricHallahan#1051: I don't understand how any of GPT-NeoX works at all. The only reason I am a dev is because I did some stuff with docker.
EricHallahan#1051: :berk:
chilli#5665: Like, there's areas that I've published research in, where for something very closely related I don't think I understand it.
chilli#5665: hmm, maybe...
jrowe#5371: what bigger models are available that can do generation?
chilli#5665: but like, as another example, how many people *really* understand VAEs?
chilli#5665: Like, to the level of this article: http://ruishu.io/2018/03/14/vae/ ?
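(For reference, the punchline that article builds up to is the ELBO, the bound a VAE actually maximizes: log p(x) ≥ E_{q(z|x)}[log p(x|z)] − KL(q(z|x) ‖ p(z)), i.e. a reconstruction term minus a KL penalty on the approximate posterior.)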
Deleted User#0000: Deep learning is like at the same time popular, and yet the talent is still sparse
EricHallahan#1051: Lurkers: Yes, we are for real.
Deleted User#0000: Is what I've realized
gwern#1782: T5, Megatron, I think allen and salesforce released 1 or 2 models comparable to or larger than 1.5b?
chilli#5665: right, I think there's just a ton of people who don't really understand things.
jrowe#5371: ah, right - ty
chilli#5665: Who kinda just skirt by on the parts that they do understand.
chilli#5665: Who might even be providing real value to their employers or whatever
jrowe#5371: yeah, interested bystanders
gwern#1782: (I mean, aside from connor's 1.5b replication ofc. probably a decent number I'm forgetting at this point)
Deleted User#0000: I think there's just too many niches to get lost in in deep learning. So to know a subject really well like say Aran about transformers
chilli#5665: but don't really understand the underlying technologies they're using.
Deleted User#0000: Is quite rare still
marcus.llewellyn#2923: Some people like sausage. Some like making sausage. You guys are the latter. 😉 Others just wanna put it on a bun and put condiments on it.
asara#0001: agree, but this is true of most areas, e.g. think of webdev, and how many webdevs actually understand anything from HTTP to the DOM to TCP to encryption to OWASP 10, etc
chilli#5665: eh, that's different.
chilli#5665: what I'm trying to work on is not any of those.
chilli#5665: Like, I do research in machine learning/work in the field of machine learning more generally.
asara#0001: right I was thinking on the product/employer side, not the research side
chilli#5665: If I was working as a network engineer I feel like I would really want to understand HTTP
chilli#5665: I guess another reason I wonder about these things.
chilli#5665: Is that something like the ZeRO optimizer really does feel ... obvious?
chilli#5665: Like, I feel like if I had a *real* understanding of the space before the ZeRO optimizer came out.
chilli#5665: I think I could have come up with the ZeRO optimizer.
jrowe#5371: depends, that's kinda my bailiwick, most of what you do is level 3 and below
chilli#5665: level 3?
chilli#5665: :thonk:
chilli#5665: is this networking terminology
jrowe#5371: osi model
chilli#5665: still lost
chilli#5665: (although I see what it is from a google search)
jrowe#5371: physical, data link, network, transport, session, presentation, application
chilli#5665: I'm curious, do other people feel this way too?
chilli#5665: It's possible I'm just falling prey to hindsight bias
chilli#5665: but I feel similarly about stuff like GPipe
Aran Komatsuzaki#5714: yeah i could've built Transformer-XL, for example
chilli#5665: Although possibly, perhaps the real moat here isn't the actual idea itself
chilli#5665: rather it's the actual knowledge/expertise to build the system itself from the underlying compute primitives
chilli#5665: (which I don't know how to do).
bmk#1476: yeah actually implementing it is the hard part
chilli#5665: hmm, but maybe that's not really true either.
Aran Komatsuzaki#5714: there are various bottlenecks: computes, implementation, knowledge etc
chilli#5665: Since there are certainly papers (including my own) where I feel like anybody who really understood a set of baseline facts would be able to do what I did.
bmk#1476: I have no idea how to actually implement gpipe in practice
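(For the curious: the core scheduling idea of GPipe fits in a few lines. A toy, single-process sketch with made-up stage sizes; the hard part bmk means is doing this across real devices, with activation re-materialization and inter-device communication:)
```Python
import torch
import torch.nn as nn

stages = [nn.Linear(512, 512), nn.Linear(512, 512)]  # toy pipeline "stages"
batch = torch.randn(8, 512)
micro_batches = batch.chunk(4)      # 4 micro-batches of 2 examples each

outputs = []
for mb in micro_batches:            # on real hardware these loops overlap:
    for stage in stages:            # stage k runs micro-batch i while
        mb = stage(mb)              # stage k+1 still works on micro-batch i-1
    outputs.append(mb)
out = torch.cat(outputs)            # gradients accumulate across micro-batches
```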
Aran Komatsuzaki#5714: most people lack at least one of them, which prevents the paper from being released
Aran Komatsuzaki#5714: we now have pretty much all of them, relative to our competitors.
Deleted User#0000: knowledge is still the big bottleneck
Deleted User#0000: I wouldn't hang out here if I could have found a group in say SF
StellaAthena#3530: 2real4me
Deleted User#0000: That isn't bound up at some company
Aran Komatsuzaki#5714: yeah. can we build such a group?
Deleted User#0000: Isn't that this group?
Deleted User#0000: Lol
bmk#1476: are you saying you want to build an eai competitor
nz#9710: *this is getting out of hand, now there's two of them!*
Aran Komatsuzaki#5714: not really @bmk
Deleted User#0000: I guess there's other groups out there too, like fastai or yannics channel
Deleted User#0000: But they have their attention on other things
Deleted User#0000: And they don't have the full expertise either
Aran Komatsuzaki#5714: we're trying to improve our knowledge base by summoning our holy spirit James Bradbury et al.
chilli#5665: For example, the set of baseline facts that I think matter for the ZeRO optimizer are:
1. People are running into limits in terms of how big a model they can fit on a device (even after using data parallel as much as they can).
2. Optimizers represent a significant chunk of the remaining memory (compared to parameters).
Deleted User#0000: Prob the only group out there that has the most similar expertise is huggingface
Deleted User#0000: Tbh
chilli#5665: If you knew those 2 things I think anybody could have come up with the ZeRO optimizer.
chilli#5665: I didn't realize the second one until I discussed the ZeRO optimizer with some people here (and I actually thought about where the memory is going...)
chilli#5665: And I guess 3, if you understand where the communication is happening in a typical data-parallel setup.
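(Putting those three facts together: with Adam in mixed precision a model takes roughly 16 bytes per parameter, fp16 weights and gradients (2 + 2) plus fp32 master weights, momentum, and variance (4 + 4 + 4), so 12 of the 16 bytes are optimizer state. ZeRO-1 shards exactly that state across the data-parallel ranks. A toy single-process sketch of the idea, not DeepSpeed's implementation:)
```Python
import torch

world_size = 4
params = torch.randn(1024)            # pretend flattened model parameters
grads = torch.randn(1024)             # identical on all ranks after all-reduce
shards = params.chunk(world_size)     # rank r "owns" shards[r]
grad_shards = grads.chunk(world_size)

def adam_step(p, g, m, v, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m.mul_(b1).add_(g, alpha=1 - b1)          # first moment
    v.mul_(b2).addcmul_(g, g, value=1 - b2)   # second moment
    p.sub_(lr * m / (v.sqrt() + eps))         # (bias correction omitted)

for r in range(world_size):           # simulating the ranks sequentially;
    m = torch.zeros_like(shards[r])   # each rank allocates m and v only for
    v = torch.zeros_like(shards[r])   # its shard: 1/world_size of the state
    adam_step(shards[r], grad_shards[r], m, v)

params = torch.cat(shards)            # stand-in for the all-gather of params
```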
chilli#5665: I guess my point is: actually understanding things is underrated, and often, if you actually understand things, you can often find obvious things to do.
Deleted User#0000: Yea, I think that's true on the performance side of things. There's still room for improvement if you understand the problem well, and can bring in ideas from other fields. Pipelining is reminiscent of pipeline processors, which I touched on at Cornell doing electrical engineering
Deleted User#0000: For the neural net side of things, I don't think things are that obvious. that side amounts to a lot of random exploration
guac#4716: damn shawwn with the heat on HN
Daj#7482: shawwn posts the same stuff every time we do anything
Daj#7482: I advise not engaging, it's not worth it
jrowe#5371: how's it feel living rent free in someone's head lol
Daj#7482: tbh kinda sucks, I wish we could just talk and be friends again
EricHallahan#1051: Okay, I should make an HN account?
jrowe#5371: yeah, pride can be a shitty thing sometimes
EricHallahan#1051: Also, did we check r/gpt-neo?
EricHallahan#1051: I guess that is kind of relevant.
Daj#7482: wdym
EricHallahan#1051: Someone created an r/gpt-neo
EricHallahan#1051: over a month ago
Daj#7482: I vaguely recall that happening
EricHallahan#1051: I wonder how that looks.
jrowe#5371: https://www.reddit.com/r/GPT_Neo/
bmk#1476: > they've walked back their GPT-3 replication claims to replicating 1.5B due to the fact that their architecture doesn't scale.
:omniberk:
chilli#5665: technically true
chilli#5665: or err, depends on what he meant
bmk#1476: well, technically false, because we just released 2.6B lol
EricHallahan#1051: lol
chilli#5665: I meant more like
chilli#5665: "Unfortunately, we cannot reliably get TPU-2048 pods"
chilli#5665: lol
EricHallahan#1051: Oh, yeah, GPT-Neo doesn't scale.
EricHallahan#1051: much further
bmk#1476: well, it could scale with a minor change that nobody has gotten around to implementing lol
EstebanSir#2189: hey uhhh, what is the tpu name in google colab?
EstebanSir#2189: never used the tpu
EricHallahan#1051: It is a v2-8 IIRC
EricHallahan#1051: But I don't know, me neither.
EstebanSir#2189: no yeah, right, but in the github they ask for a name or an id in the main.py command?
jrowe#5371: gotta switch to tpu mode, iirc
EstebanSir#2189: what do i put there?
EstebanSir#2189: (the parameter is called --tpu)
EstebanSir#2189: oh, looking at the code, i can just type "colab" into it
EricHallahan#1051: I guess that is what you should do lol
Sid#2121: yup, you got it
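(So the Colab invocation is roughly `python3 main.py --tpu colab ...`; the `--tpu colab` part is what the code checks for, per above, while the remaining flags, e.g. which model config to load, depend on the repo version, so check `python3 main.py --help`.)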
EstebanSir#2189: ;-; google colab is downloading at 3 mb/s
Sid#2121: it might be the eye, you can try the other download link that's commented out
Daj#7482: lol you might have actually caused a spike to saturate the eye https://cdn.discordapp.com/attachments/729741769738158194/823349923772366898/Screenshot_from_2021-03-22_01-18-06.png
Sid#2121: damn. Didn't realise people would actually care about these models, lol
EricHallahan#1051: Sid, are you in favor of that message now on the GPT-NeoX repo?
EricHallahan#1051: I should have precommitted. `:|`
chilli#5665: :thonk: this might actually get more likes than the pile release
EstebanSir#2189: holy crap this is going to take too much time- i'm going to see that other link you mentioned
Daj#7482: lmao
EricHallahan#1051: Uh, I was the 1000th star on that repo.
Daj#7482: I guess Aran did put "GPT-3" in the tweet
Daj#7482: Technically correct but lol
EricHallahan#1051: There were 999 people there before me.
bmk#1476: :sadge:
Daj#7482: The Pile will get citations from DM and shit though
Daj#7482: :chad:
chilli#5665: I guess maybe we're too desensitized to multi-billion parameter models?
chilli#5665: lol
chilli#5665: and the general public is still shocked by them?
EricHallahan#1051: I have never used one beyond maybe an hour of GPT-3 use.
Daj#7482: 50-50 people think it's literally GPT3-175B
Daj#7482: actually there has been less of that so far than I expected
Daj#7482: most people seem to get it and be cheering us on
Daj#7482: on Twitter at least
Daj#7482: Which hell yeah, high five twitter 🙏 🐦
EstebanSir#2189: sorry, where is that link?
EricHallahan#1051: Reddit is dead. https://old.reddit.com/r/GPT_Neo/
chilli#5665: I mean, it was never alive
chilli#5665: lol
EricHallahan#1051: lol
triggerhappygandi#0001: Plebbit
EstebanSir#2189: im going to wait till tomorrow to try this out
EstebanSir#2189: hopefully the download times are smaller
triggerhappygandi#0001: Could it be that a lot of people are eating up the bandwidth?
Daj#7482: the-eye seems to actually be at the limit yea lol
Daj#7482: Not sure if we're solely responsible but that's pretty surprising imo
triggerhappygandi#0001: Well Aran got 400 likes in an hour
triggerhappygandi#0001: That's... too fast?
Daj#7482: Actually now that I'm thinking about it, the easy to use colab explains why people would flock to use it so quickly
Daj#7482: also not having removed the optimizer weights definitely made the download heftier lol
CRG#8707: Are those necessary for finetuning?
StellaAthena#3530: If you want to pick up where we left off, yes. If you want to do independent fine-tuning no
Daj#7482: I'm actually unsure whether they'd help or not. Might help?
Daj#7482: Definitely aren't necessary
Daj#7482: would help for continued pretraining as stella says
StellaAthena#3530: RIP my karma farming: https://www.reddit.com/r/MachineLearning/comments/ma9kaw/p_eleutherai_releases_13b_and_27b_gpt3style/
chilli#5665: l feel like somebody's gotta have done these experiments.
chilli#5665: @zphang ?
triggerhappygandi#0001: Oof 0
StellaAthena#3530: I would bet money that nobody has done those experiments at the 1B+ scale
triggerhappygandi#0001: No dopamine
jrowe#5371: hn hug of death too, gpt-neo has been #1 for 3 hours
ethan caballero#6044: Does "Scaling Laws for Transfer" say whether or not they re-init Adam's parameters during finetuning? : https://arxiv.org/abs/2102.01293
CRG#8707: Looks like most BERT/Roberta etc, reset the optimizer state for finetuning.
StellaAthena#3530: No post in the past 12 hours has gotten more than 10 karma or 5 comments. I guess nobody is on the subreddit today?
zphang#7252: naw, I haven't actually poked at the model. happy to do so once they get ported to HF/T though
chilli#5665: or err, I just mean in general
chilli#5665: like, when you do fine tuning do you need the optimizer weights.
zitterbewegung#4846: congrats on gpt-neo
StellaAthena#3530: No. You can restart the optimizer and train from "scratch"
triggerhappygandi#0001: Probably. I barely post anything on social media for this exact reason. Not getting likes is not 0 dopamine, it's actually negative lol. @StellaAthena
chilli#5665: well, you can, but I wonder how much it matters...
StellaAthena#3530: Right, I was answering
> Do you need the optimizer weights
StellaAthena#3530: You don't
StellaAthena#3530: It may help
EricHallahan#1051: Thank you.
zitterbewegung#4846: i was going to try get access to gpt-3 but now i will use gpt-neo for my video game
EstebanSir#2189: this is actually incredible, very glad you guys decided to make such a project, OpenAI would never release big models like these i don't think (yes i know it isn't as big as it could be right now, but if you guys plan to get to 10b parameters soon, that's good enough for me to be really impressed)
triggerhappygandi#0001: Isn't it best practice to save optimizer_state_dict in torch, but it doesn't matter if you don't?
chilli#5665: yeah, curious how much it helps.
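(Mechanically, in PyTorch terms this is just whether you also call `load_state_dict` on the optimizer. A generic sketch with a stand-in model and an in-memory checkpoint dict, not the GPT-Neo loading code:)
```Python
import torch
import torch.nn as nn

model = nn.Linear(10, 10)                   # stand-in for the real model
opt = torch.optim.AdamW(model.parameters())
model(torch.randn(2, 10)).sum().backward()
opt.step()                                  # populate some Adam moments
ckpt = {"model": model.state_dict(), "optimizer": opt.state_dict()}

# Independent fine-tuning: fresh optimizer, moments start at zero
# (what most BERT/RoBERTa recipes do, per CRG above).
model.load_state_dict(ckpt["model"])        # the weights are always needed
new_opt = torch.optim.AdamW(model.parameters())

# Continued pretraining: also restore the saved moments, as Stella says.
new_opt.load_state_dict(ckpt["optimizer"])
```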
triggerhappygandi#0001: The problem is getting everyone to run a 10B model for free, which is an uphill battle.
zitterbewegung#4846: i don't think openai will release anything IMHO they will be bought out by microsoft and keep itself as a service
triggerhappygandi#0001: People already talking about DALL-E API on their slack lmao
triggerhappygandi#0001: As if this is the new order now. Release paper, then an API
EstebanSir#2189: true words
aero#1357: so open
zitterbewegung#4846: like personally i don't like how openai has moved toward press releases without the ability to reproduce experiments
zitterbewegung#4846: thats not even science
triggerhappygandi#0001: They might as well curate the examples for their blogs from now on to impress you with the results while in reality the model is a dud.
triggerhappygandi#0001: "iT Is tOo dANgeROuS To rElEAsE"
jhsu#8763: A lot of science is not reproducible; machine learning and cs have had it good.
aero#1357: you dont understand, someone might use it to write misleading news articles, something humans are incapable of
zitterbewegung#4846: it was my first job in college to reproduce master /phd students work to package it better. now its a hobby of mine
zitterbewegung#4846: i have been trying to do a video game using transformers but i think gpt-3 can give me a better storyline or better conversations
StellaAthena#3530: "Too dangerous for you, but not too dangerous to sell for profit" does not inspire confidance.
aero#1357: translation: our investors dont see profit in open source
zitterbewegung#4846: there isn't profit in open source
zitterbewegung#4846: you can get amazoned
jrowe#5371: there is profit in open source. profit isn't the point of most open source
zitterbewegung#4846: well yea
zitterbewegung#4846: ML can be put behind an API
zitterbewegung#4846: and you charge for the API access
zitterbewegung#4846: similar to how huggingface works
zitterbewegung#4846: i was wondering if gpt-neo would be put on huggingface
zitterbewegung#4846: i mean gpt-neox
triggerhappygandi#0001: Neox can be easily put on hf
triggerhappygandi#0001: Due to no local attention
zitterbewegung#4846: you have an organization account?
triggerhappygandi#0001: On github?
EricHallahan#1051: (Blame CUDA 11 and DeepSpeed)
StellaAthena#3530: @zitterbewegung There's a small annoying problem that's preventing it from happening. If you want to fix it for us, we would love that
zitterbewegung#4846: does an issue exist in one of the repos i would love to help if i can
zitterbewegung#4846: i mean do you have the issue documented
StellaAthena#3530: These models were trained with alternating global and local attention layers. HF's GPT-2 framework doesn't give the option for local attention. They have other models that offer local attention, but you can't just copy the code over because the models aren't written exactly the same way
StellaAthena#3530: The solution is to either a) write a custom class for the transformers library or b) open a PR to HuggingFace that adds local attention as an option.
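(For anyone wondering what "local attention" means concretely: each token attends only to a fixed window of recent tokens instead of the whole context. A minimal sketch of such a mask, window size made up; the released models alternate layers like this with fully global causal ones:)
```Python
import torch

def local_causal_mask(seq_len: int, window: int = 256) -> torch.Tensor:
    """True where attention is allowed: each token sees at most the
    previous `window` tokens (itself included), never the future."""
    i = torch.arange(seq_len).unsqueeze(1)   # query positions
    j = torch.arange(seq_len).unsqueeze(0)   # key positions
    return (j <= i) & (j > i - window)

mask = local_causal_mask(2048)
# a global causal layer would instead use just (j <= i)
```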
zitterbewegung#4846: okay
StellaAthena#3530: I'm also a fan of c) listen to Leo in the first place and train it with all global attention, but my time machine is broken
zitterbewegung#4846: so this is a stopgap measure until C occurs i assume?
Daj#7482: I mean tbf the original GPT3 is global-sparse, so this is _technically_ more accurate reproduction
zitterbewegung#4846: well i mean ill do option B
zitterbewegung#4846: worst thing is they say no
StellaAthena#3530: They won't 🙂
zitterbewegung#4846: option a sounds cool
Daj#7482: Teven just mentioned he might look into this
StellaAthena#3530: We're buddy buddy with them
zitterbewegung#4846: oh okay
zitterbewegung#4846: i can make the ticket or i could draft the ticket and you can approve it
StellaAthena#3530: We have an open ticket actually
zitterbewegung#4846: oh ok
StellaAthena#3530: It's not *hard* it's just that nobody has been invested enough to make it happen
StellaAthena#3530: Actually, does anyone at @Hugging Face 🤗 know why there isn't a GPT-3 architecture in the `transformers` library?
zitterbewegung#4846: uh
zitterbewegung#4846: gpt-3 hasn't had a source code release
zitterbewegung#4846: and they never shared weights
zitterbewegung#4846: its all behind an API isn't it?
triggerhappygandi#0001: Because there isn't a model? @StellaAthena
zitterbewegung#4846: yea you can only access it through an api
triggerhappygandi#0001: Why would they write a script if no model
StellaAthena#3530: I was under the impression that the HF implementations were done from scratch
zitterbewegung#4846: well those are
zitterbewegung#4846: but only gpt-2 exists
Daj#7482: this emoji seems threatening somehow https://cdn.discordapp.com/attachments/729741769738158194/823361494849814538/Screenshot_2021-03-22_02-04-24.png
triggerhappygandi#0001: I doubt T5 was from scratch.
zitterbewegung#4846: google did t5 and they released their weights
Aran Komatsuzaki#5714: i wanna make my name gold like that when people mention me lol
triggerhappygandi#0001: Golden name
Daj#7482: Well you'll have to get EleutherAI Premium™️
triggerhappygandi#0001: Premium™️ for $69.420 a year?
zitterbewegung#4846: no just sell gpt-neo as a non fungible token
triggerhappygandi#0001: Lmao
triggerhappygandi#0001: 700GB model weights as nft
Daj#7482: Putting 700GB directly onto the Ethereum blockchain :ultrazucc:
zitterbewegung#4846: the NFT doesn't go on the chain but that was a joke
zitterbewegung#4846: i mean the data
StellaAthena#3530: TBH, if we made a 1TB file, hashed it, claimed it was a language model, and sold it as a NFT people would believe us
Daj#7482: probably
zitterbewegung#4846: like
zitterbewegung#4846: NFTs are software licences
zitterbewegung#4846: i did some ethereium programming and i looked at it
triggerhappygandi#0001: Why not? Might as well do it. @StellaAthena
zitterbewegung#4846: i know how to do that its like flask
zitterbewegung#4846: https://github.com/CryptoWizard89/AICRYPTOART
zitterbewegung#4846: or it would be better to sell "tickets"
zitterbewegung#4846: to a show
zitterbewegung#4846: which would be api access tokens that would expire after X amount of time but this is sort of off topic
zitterbewegung#4846: going back to the original issue
Aran Komatsuzaki#5714: @kindiana we should make an image like this out of our OpenWebTextImage and sell it as a NFT lol https://cdn.discordapp.com/attachments/729741769738158194/823362822981025812/fig1.jpg
EstebanSir#2189: has anyone sold r/place as an nft already?
zitterbewegung#4846: probably not
𓅬 gabriel_syme 𓅬#3220: that would only cost a city's power for a day
Daj#7482: nice can I offer Paris
zitterbewegung#4846: @𓅬 gabriel_syme 𓅬 no thats wrong that would probably be more like texas for a year
𓅬 gabriel_syme 𓅬#3220: lol my bad, got my magnitudes wrong
EstebanSir#2189: do the entirety of France, better to have some extra
zitterbewegung#4846: france probably uses less energy than texas
asara#0001: There's a notable potential for it to look bad as far as the reputation of a person/group goes
asara#0001: wrt OpenAI saying models are 'too dangerous' to release, I'm at least *moderately* sympathetic. There are some *very large* companies that basically do nothing but spam/astroturfing/fake comments+websites, and so on, as their entire profit models (not to mention *many* governments which want to do this, many of which do not have the resources to make a GPT-3 or similar on their own)
asara#0001: While it may be released just a bit later one way or another, it seems nicer if the Internet at least gets a bit of a warning of what is to come, before it actually comes.
asara#0001: For a more cynical response though I'd just say "Does a *direct preview* of what is to come even matter?" and I'm not even sure. I don't think most organizations, technologies, countries, etc., are even prepared for the next 0-3 years of AI, let alone past that. But we can at least say an attempt was made
zitterbewegung#4846: i mean you cant put pandora back in the box
zitterbewegung#4846: it doesn't work that way
zitterbewegung#4846: writing the paper and releasing it is more scientific
sl24#8080: pretty general question, but with the 2.7B parameter model, can I write like a paragraph that makes sense?
zitterbewegung#4846: to be honest the ethical and social concerns should be addressed while the paper is being written and it could have people analyze it before publication and then you just have a small embargo not a black box api
zitterbewegung#4846: @sl24 how many words in the paragraph
sl24#8080: 100-200
sl24#8080: @zitterbewegung
Deleted User#0000: yay
Deleted User#0000: front page of hacker news
sl24#8080: wow
StellaAthena#3530: Yes
Aran Komatsuzaki#5714: who is pizza on HN? sid or connor?
Daj#7482: Neither afaik
zitterbewegung#4846: @sl24 100 might be pushing it
zitterbewegung#4846: @sl24 that might require a 124B model but you should try it first
zitterbewegung#4846: @sl24 also what is your definition of making sense
sl24#8080: ah ok
sl24#8080: i guess like clearly describe a topic, like the effect of a on b
StellaAthena#3530: @zitterbewegung I strongly disagree. See the attached image. Prompt is the top section, GPT-2 is the bottom right section. Our 2.7B model is twice the size of GPT-2 https://cdn.discordapp.com/attachments/729741769738158194/823374902852583475/Screen_Shot_2021-03-21_at_9.57.18_PM.png
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/823375039973818419/Screen_Shot_2021-03-21_at_9.58.23_PM.png
StellaAthena#3530: These are from Appendix A of "Language Models are Unsupervised Multitask Learners"
StellaAthena#3530: It might get some stuff factually wrong, but it'll be mostly coherent
Deleted User#0000: i wish i did something during quarantine
Deleted User#0000: i just play video games and work from home
EricHallahan#1051: I was learning to drive and hacking my car. `:P`
EricHallahan#1051: Succeeded at the first, failed at the second lol
EstebanSir#2189: oh hacking a car sounds dangerous, what did you do with it?
EricHallahan#1051: Was trying to gain access to some of the deeper parts of the ECU.
StellaAthena#3530: Oh it's really not
StellaAthena#3530: At least
StellaAthena#3530: Not if you're competent
StellaAthena#3530: And the bar for competency is lower than you think
EricHallahan#1051: I managed to find the section of the binary where it is handled, just never got around to fully decompiling Honda's "security" algorithm.
Deleted User#0000: nice the only thing i learned was how to get out of the loony bin
EricHallahan#1051: (Which aren't really that secure as it is more "security via obscurity" rather than actually secure algorithms.)
EricHallahan#1051: (Note that this is changing.)
bmk#1476: reminds me of defcon people hacking cars the manufacturers touted as unhackable
bmk#1476: the best way to get something hacked is to declare it unhackable
EricHallahan#1051: I never actually touched the car during it, so no, it is *extremely* safe, except for the Russian torrents.
zitterbewegung#4846: or make it so if you hack it you get bitcoins
EricHallahan#1051: *cough cough* NVIDIA *cough cough*
MasterScrat#6910: Hey everyone, where would be the place to ask about the newly announced notebook + pre-trained models?
EricHallahan#1051: Right here, actually.
MasterScrat#6910: I can't get it to work 😕 i thought i was doing something wrong but i see other people are having issues
MasterScrat#6910: Does it work for you? are you able to run inference without fine-tuning and it generated believable text?
aero#1357: I got it up and running but its generating mostly nonsense
```
In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English. The answer including a swimming horde of coral sea Han’s where cracking up there at The murder the in the animal like M. In the his liggers in the in the weird forest, ian with the it may is it is the in the fearsome MRT Latories the in the in the and the Mio-me
in the the warmth grass deer is the warrior, the mastery herd of the in the the triple it D. Or view the in the wildlife С."
In the land
F the
In However a reef is also a what in
one
fusion
< the
An example
In B at the
jer the world
It is "
The richest in the park the the
rage
the story of the Mil the
The on in the
the
```
im probably doing something wrong 🤷
MasterScrat#6910: ```
In a shocking finding, scientists discovered a herd of unicorns living in a remote,
previously unexplored valley, in the Andes Mountains. Even more surprising to the
ValamblingNBULTS ampl mil vault EB CowboysMemory parachute005 Sheen imag then passport338 AnaOct respectively embELY...
```
MasterScrat#6910: here's what i get. seem similarly non-sensical
bmk#1476: er, are you sure you loaded the checkpoint?
bmk#1476: (ive never used the notebook before btw)
bmk#1476: but i can show some samples from the actual training run
aero#1357: 2.7B off the eye, I downloaded it to my NAS and it's definitely loading them
bmk#1476: for comparison
bmk#1476: hmm
StellaAthena#3530: I am working on getting it working. It appears that some corners were cut in the first draft
EricHallahan#1051: (Me neither.)
StellaAthena#3530: Right now I can get to the end, but when I run inference it says
` (0) Invalid argument: Expected size[0] in [0, 12562], but got 12565
[[{{node gpt2/wte_1/Slice_6}}]]`
bmk#1476: hm nvm apparently we dont have predict_steps on
jrowe#5371: "ValamblingNBULTS ampl mil vault EB CowboysMemory parachute005 Sheen imag then passport338 AnaOct respectively embELY"
i, for one, welcome our new ValamblingNBULTS overlords
bmk#1476: lol k here's the problem @StellaAthena : vocab has to be 50257
bmk#1476: so just remove `vocab:x` from the layout
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/823381323036885052/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/823381442665906226/unknown.png
MasterScrat#6910: thanks, i'll check again tomorrow then!
bmk#1476: also @StellaAthena this is the wrong dataset format https://cdn.discordapp.com/attachments/729741769738158194/823381783230808094/unknown.png
EricHallahan#1051: `null`
StellaAthena#3530: No that's right
bmk#1476: didnt we simplify it down to just `"dataset name"`
StellaAthena#3530: You just need to replace `"dataset_name"` with the actual name of the dataset
bmk#1476: without the padding nulls
bmk#1476: i thought we simplified it down at some point
StellaAthena#3530: e.g., `[["HackerNews", null, null, null]]`
bmk#1476: maybe we didnt, hm
StellaAthena#3530: @bmk I tried hardcoding the vocab to 50257 and that still didn't work
StellaAthena#3530: Same error
bmk#1476: errr
bmk#1476: try 50256?
aero#1357: the dataset also specifies the vocab size, I had to edit it in there to get it working
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/823382307872178196/Screen_Shot_2021-03-21_at_10.27.15_PM.png
aero#1357: also doesnt seem right you need a dataset to do inference
bmk#1476: i know you added a padding token at some point
EricHallahan#1051: Try every number.
StellaAthena#3530: @aero You don't *actually* need one, but the way the code is structured it checks the dataset's path even when it does inference
EricHallahan#1051: Actually, try every number in the set of natural numbers.
aero#1357: I tried commenting out the check initially and it breaks, uses the dataset to determine the vocab size (at some point) so I made a dummy one
EricHallahan#1051: (Need to use mathematician talk here to get through.)
StellaAthena#3530: @aero What dataset size did you use
aero#1357: 50257
StellaAthena#3530: https://cdn.discordapp.com/attachments/729741769738158194/823382979187441694/Screen_Shot_2021-03-21_at_10.29.56_PM.png
StellaAthena#3530: This is what I have
bmk#1476: wait lol i just looked at this screenshot what are you doing o.O
bmk#1476: you cant put the number in there
bmk#1476: change the vocab size number
aero#1357: ```JS
{
"n_vocab": 50257,
"path": "./bundestag_0tfrecords",
"eval_path": "",
"tokenizer_is_pretrained": true,
"tokenizer_path": "gpt2", |
"eos_id": 50256,
"padding_id": 50257
}
```
bmk#1476: change this to 50257 https://cdn.discordapp.com/attachments/729741769738158194/823383232632193064/unknown.png
bmk#1476: change this to 50257 https://cdn.discordapp.com/attachments/729741769738158194/823383276517064714/unknown.png
bmk#1476: remove `heads:x,` https://cdn.discordapp.com/attachments/729741769738158194/823383323342143528/unknown.png
bmk#1476: @StellaAthena also why is this downloading from eaidata?
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/823383530700668928/unknown.png
bmk#1476: eaidata probably cant handle the traffic lol
StellaAthena#3530: It looks like it's running
StellaAthena#3530: @bmk IDK, ask Sid. He's the one who dropped this without testing it
StellaAthena#3530: > In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
>
> Bebek Uranzoglu, another member of the research team from the University of British Columbia, was working on a project the Latino-Canadian rodeo competition equipos to document a rare and remarkable ecosystem in the Andes Mountains.
>
> His curiosity was piqued when he spotted an adolescent herd of about 10 unicorns foraging in a forest near the valley of the Jumbo Flu Group. The unicorns — whose numbers once swelled to 46,000 — were perched on the forest floor and watched the researchers work.
>
> Urizoglu grew excited when he spotted another group that seemed to be thriving in an area below the herd. The team hoped the apparent population growth would indicate a human presence.
>
> But when a team of researchers set up a camera trap, they were surprised to find the unicorns in the first place, and in a forest near a lake — in fact the forest was almost entirely made up of the animals. Despite their own magical presence, the team could not see the herd was populated by humans.
>
> “The whole place almost smelled like animals,” says Bebek. “We were never able to find human footprints at any of the points we stood at. The trees were so large, you wouldn’t have been able to walk 40 meters through them. We assumed that the truth of the matter was, ‘Well the deer didn’t like this forest at all.’”
bmk#1476: im going to bet right now that soon after hf integration works, HF will become the preferred way to use the models
MasterScrat#6910: awesome! did you update the existing notebook? or is this a new one?
bmk#1476: i look forward to citing this message once we have our models working in hf
EricHallahan#1051: He should have notified us before dropping, it seems like a very short-sighted decision.
aero#1357: Gotta be something I messed up then https://cdn.discordapp.com/attachments/729741769738158194/823384096016433172/message.txt
EricHallahan#1051: I would bet 1000 USD on that right now if I was into gambling.
StellaAthena#3530: @aero My guess is that you accidentally started training the model
bmk#1476: @StellaAthena can you push the working inference notebook
StellaAthena#3530: And it overwrote the downloaded weights
StellaAthena#3530: @bmk Adding a couple QoL fixes and then I will
bmk#1476: with all the weird training stuff removed
bmk#1476: ah k
StellaAthena#3530: @bmk How hard would it be to allow you to sample from a model without having a dataset saved
bmk#1476: the easiest way to accomplish that would be to figure out how to do it in hf
aero#1357: is gpt-neo open to prs?
StellaAthena#3530: @aero Yes
jrowe#5371: the name of the researcher changed
jrowe#5371: uranzoglu to urizoglu lol
StellaAthena#3530: Yeah, that happens sometimes
StellaAthena#3530: GPT-2 doesn't have great long-term memory
StellaAthena#3530: If you generate long passages, sometimes referents will get mixed up too
cfoster0#4356: Ah yes I remember that name... GPT-2...
jrowe#5371: interesting that it maintained most of it
jrowe#5371: that's not a name in any language I can find, either. it went deep and made up its own language, so I'll cut it some slack
AI_WAIFU#2844: Actually the Neo 2.7B model has twice the context length of GPT-2 right?
bmk#1476: :yes:
AI_WAIFU#2844: 😎
jrowe#5371: it could have made up a language rule, an/i for proper naming
StellaAthena#3530: Anyone down to give my modified Colab a try?
StellaAthena#3530: @jrowe @aero @MasterScrat
aero#1357: 👀 sure
aero#1357: currently doing a clean clone of the repo to see if its something I changed
StellaAthena#3530: I DM'd you
StellaAthena#3530: No no
StellaAthena#3530: The Colab file was horribly broken and clearly not tested
StellaAthena#3530: :/
aero#1357: oh, ive been using the gpt-neo repo directly havent tried colab yet 😅
StellaAthena#3530: Ah
bmk#1476: disadvantages of massively open source projects: split personality syndrome
StellaAthena#3530: That should work
StellaAthena#3530: As in, I know that works because it's currently running on my computer
guac#4716: untested release :guilty:
jrowe#5371: on mobile, no pc access til tomorrow night
aero#1357: ```
The team have made an interesting theory that the unicorns could possibly live among humans, either as a snack or via interactions with humans. This could be the reason why they have been able to communicate with humans in their strange language. The idea is not completely out of the question, as the tiny living quarters they live in are relatively small, and would be packed with food in a way that would force them to interact with humans without them wanting to eat them.
The fact that they the unicorns operated as a functional society with language would prove to be quite a feat considering that they also share a similar similar genetic structure with humans. The team theorized that they could use their language system to communicate with humans, after all, we do it all the time.
The Swiss scientist, Dr. Nikolaus Schmid, was also one of the researchers on the project. He talks about the possibility of aliens communicating with us and how they have intelligence that we do not have. He explained,
“They have consciousness that gives them a strange sense of humour, and I’m sure that they have all sorts of intellectual abilities. Unfortunately, we can only see a tiny fraction of the actual brain cells.”
Schmid also explained how the unicorns were able to communicate with each other and managed to survive it given that they had no cell phones but enhanced cellular technology. He said,
“I don’t believe that unicorns drive cars or recognize people by their name, they don’t get angry, they love children, and they have no nationality or religion.”
```
bmk#1476: try getting it to generate code! |
bmk#1476: thanks to pile, I'm expecting it to be even better at code generation than full size gpt3
jrowe#5371: nice
jrowe#5371: unicorns are atheists
aero#1357: ```Python
class InceptionBlock(nn.Module):
"""
Builds a between 3 and 9 convolutional layers followed by 2 max-pooling
layers. The encoder and decoder share the same number of parameters and
same number of layers so you can specify parameters for both with
`args.num_channels_encoder` and `args.num_channels_decoder`.
"""
def __init__(self, args, num_channels):
"""
Initializes the InceptionBlock.
inputs are (b, c) length-wise concatenated inputs `(x,y)`, returned from
`torch.nn.parallel.Conv2d` or `torch.nn.parallel.GlobalMaxPool1d`.
"""
super(InceptionBlock, self).__init__()
|
# From or to input size
args.nb_in = 3
args.nb_out = 9
args.num_channels = num_channels
# set to one to enable only the ih_h1 part of ih_3x3x3
args. h1_pad = int(1e-4)
args. nb_branch = 0
# sets the filter width and squashes the batchnorm network by implicit bias term
args. nb_filter = 1
args. nb_group = 1
# static batch normalization parameters,
# so that the new trainable parameters do not depend on train samples
# static parameters are only useful under no-bias conditions
# when you use least-squares or segmentation, you should use the
# trainable parameter.
# static parameters are "local" to the current layer.
# So for example, if you want to train only the feature network, and |
# fix the weights of the decoder, use static parameters that are
# shared by all layers belonging to the decoder.
# So now we can init the whole network with static parameters,
# like:
# init_static_bnnet(args)
# the tensorflow version will use self.bn as "nb_group".
self.bn = args.bn
self.rewire_bn = rewire_bn(bn_type='static')
self.layer_bn = self.rewire_bn(args)
```
it comments its code better than most devs
bmk#1476: which part was the prompt? first line?
aero#1357: none of it, prompt is, sec
aero#1357: ```Python
class ConvBlock(nn.Module):
def __init__(self, channels, stride=1):
super(ConvBlock, self).__init__()
self.convolution = nn.Conv2d(channels//stride, channels, 3, stride=stride, padding=1)
self.normalization = nn.BatchNorm2d(channels) |
#self.activation = nn.LeakyReLU()
def forward(self, x):
```
bmk#1476: lemme try it with gpt3 for comparison
StellaAthena#3530: The fixed notebook is now on the repo
cfoster0#4356: Should someone reply to the HN comments about it now working? :hap:
bmk#1476: hm it looks like it wasnt interested in continuing the function so it just made a new class?
StellaAthena#3530: Naw, it's not like shawn bothered trying
StellaAthena#3530: Nobody on HN noticed it not working
cfoster0#4356: Nah not shawn
bmk#1476: i dont think it's worth it personally but no strong opinion
cfoster0#4356: Ie https://news.ycombinator.com/item?id=26536138
StellaAthena#3530: Weird
bmk#1476: gpt3 really doesnt want to write code ;-; https://cdn.discordapp.com/attachments/729741769738158194/823403147250499594/unknown.png
StellaAthena#3530: that doesn't show up for me
bmk#1476: it just wants to talk about code
aero#1357: @jrowe if you are curious what was causing it to output gibberish for me earlier, it was "train_steps" being set to 0 in the json, check out the notebook, it sets it to 401000 which works
aero#1357: wow 👀
aero#1357: ```Python |
class ConvBlock(nn.Module):
def __init__(self, channels, stride=1):
super(ConvBlock, self).__init__()
self.convolution = nn.Conv2d(channels//stride, channels, 3, stride=stride, padding=1)
self.normalization = nn.BatchNorm2d(channels)
#self.activation = nn.LeakyReLU()
def forward(self, x):
x =
# x = self.convolution(x)
# x = self.normalization(x)
# x = self.activation(x)
x = self.convolution(x)
x = self.normalization(x)
x = self.activation(x)
return x
|
class Resblock(nn.Module):
def __init__(self, stride=1):
super(Resblock, self).__init__()
self.res = nn.Sequential(
nn.Conv2d(9, 9, 3, stride=stride, padding=1),
nn.BatchNorm2d(9),
nn.LeakyReLU(0.2)
)
self.output = nn.MaxPool2d(2)
def forward(self, x):
return self.res(x)
```
with prompt included
bmk#1476: :ultrazucc:
bmk#1476: it's a bit confused but it's got the spirit
bmk#1476: cant wait to see even bigger models at it
aero#1357: gpt gonna take my job
AI_WAIFU#2844: #COBOLgang #JobSecurity
aero#1357: give me some cobol syntax 👀
jrowe#5371: I'm gonna advocate for my department to set up hpc for the biggest gpt-neo model created, and spend a year setting up prompts around doing my job
jrowe#5371: the rest of the it dept as well, it'll save us big $$$
StellaAthena#3530: @aero Some people are asking for samples on HN, if you care to post a couple of yours: https://news.ycombinator.com/reply?id=26535289&goto=item%3Fid%3D26534000%2326535289
aero#1357: posted some stuff 😮
aero#1357: ```
Big ugly stick.
Bowl full of chicken peas.
New York is universally inadvisable.
Long-eared seagull.
This morning I have a tumor in me.
A dog bark.
Invisible dog hides.
A crow blind. |
A cat can’t see if it had a billion eyes.
Bumblebee.
Sheep herder, sheep herder.
A fawns abducts.
Two black birds are trapped.
Bottle on her finger.
Elephant sees another elephant.
Bull.
Mice in a box in the library.
A church swelter. |
The door of a hotel opens.
Bosnian honey melons.
Grapes in excess.
Cat is on the loose.
Soil is shoveled into a glass jar
```
Tongue twisters, sorta 😂
chilli#5665: what was the prompt?
chilli#5665: oh nvm
chilli#5665: scrolled further up
TET#9641: Almost all of these sound like one line story prompts.
Louis#0144: CAT IS ON THE LOOSE!
Louis#0144: OH NO SHE’S GOT A GUN
aero#1357: dont worry he has a billion eyes so he cant see
Louis#0144: Btw if anyone here is from Georgia tech
Louis#0144: Lmk
Louis#0144: Bc you get an orange name
Louis#0144: Like me
TET#9641: Do I still get it if I transferred out? :thinkstache:
Louis#0144: Yes
TET#9641: Then yes :yesh:
Louis#0144: Where are u going now
Louis#0144: @bmk can u give him a GT tag pls
TET#9641: I transferred to SPSU (now the domain of KSU) but I graduated 👨🎓
bmk#1476: @TET a ngnl fan, i presume? given the username and the pfp
TET#9641: https://tenor.com/view/nogamenolife-sora-coin-flipping-anime-gif-6238123
TET#9641: … ~~no~~ *maybe*
bmk#1476: misuse of gif, youre not allowed to say either yes or no after posting that gif
bmk#1476: you gotta say something ambiguous
TET#9641: Better?
bmk#1476: better
zphang#7252: https://tenor.com/view/action-drama-sci-fi-inception-top-gif-3400497
zphang#7252: ... no
TET#9641: Whoo, just stopped myself from posting WandaVision gifs. Probably too close to the "everythings a spoiler" line given how recent it is.
Teemochu#8740: > tet name and avatar |
:based:
aero#1357: 🤔 wonder how viable it is to finetune the model with just 1x3090
Louis#0144: Easily
Louis#0144: Not this one tho
Louis#0144: This is TPU only
Louis#0144: AFAIK
Louis#0144: but neox will be easy
Louis#0144: Especially with gradient check pointing
aero#1357: hmm why is it tpu only? inference seems to work fine
aero#1357: gonna give it a shot tonight anyway, maybe its possible with minimal tweaking
StellaAthena#3530: @aero It's theoretically compatible with GPUs, but we only officially support TPUs. TPUs are really weird, internally, and Mesh Tensorflow is a framework designed to work with them. Getting MTF code to run on GPUs can be non-trivial, and MTF sucks enough that I recommend avoiding it unless you're using TPUs (in which case you have to use it)
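(On the gradient checkpointing Louis mentions above: the standard memory-for-compute trade that makes single-GPU fine-tuning plausible is recomputing activations during the backward pass instead of storing them. A generic PyTorch sketch with stand-in blocks, not the NeoX code:)
```Python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

blocks = nn.ModuleList([nn.Sequential(nn.Linear(1024, 1024), nn.GELU())
                        for _ in range(12)])   # stand-in transformer blocks

def forward(x):
    for block in blocks:
        # activations inside `block` are freed after the forward pass and
        # recomputed during backward: extra compute, much less memory
        x = checkpoint(block, x)
    return x

out = forward(torch.randn(4, 1024, requires_grad=True))
out.sum().backward()
```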
𓅬 gabriel_syme 𓅬#3220: being able to fine tune and distill neox will be great imo. Hopefully, if I'm at my new job by then I'll fine tune it to an engineering and construction dataset 🙂
𓅬 gabriel_syme 𓅬#3220: btw, this might be a silly question, but are you planning on enc-dec models (for summarization/search etc.)? or are the open source more than enough for that task?
sl24#8080: can we create a channel for dalle
jrowe#5371: Check out #art
jrowe#5371: and #multimodal
aero#1357: ```
Settle regulations everywhere and let regulations settle everywhere.
Monkeys chimed encouragement.
The fairy world quintessentially constituted the fairy world. |
I don’t know about your crowd.
The drawing was approved.
The state sang blandly.
The grayish grey color of sheep on the sea.
Walnut tree. Walnut tree. Walnut tree.
The big black back brakes broke badly.
I love to give advice, but I won’t give it.
One-One was a racehorse.
Two-Two was one, too.
When One-One won one race, Two-Two won one, too.
If the thought I thought I thought had been the thought I thought I thought, I would not have thought so much.
```
Tweaked some things and im getting slightly better results now (tongue twisters)
jrowe#5371: hot damn, nice
jrowe#5371: cherry picking a lot?
aero#1357: not a lot, though might have just got lucky, ill paste the full output
aero#1357: https://cdn.discordapp.com/attachments/729741769738158194/823436330708238337/message.txt
jrowe#5371: getting to see prompt engineering emerge on the foss side, in direct competition with the corporate side, is one of the coolest things I've seen in my life
jrowe#5371: hmm, just a little psychosis in there lol
aero#1357: yeah, interesting how it started looping after the "I am looped. I am looped. I am looped." one |
sl24#8080: i didn’t know
Reality.hack();#9445: damn id really like to try it out but i dont have the money for a google cloud account...
aero#1357: afaik you can run it on google cloud for free, and it works on gpu if you have enough vram
Reality.hack();#9445: i have nvidia gtx 1060 6gb
Reality.hack();#9445: but it says you need a bucket
aero#1357: gcp gives you $300 credit free when you register
Reality.hack();#9445: but you need to input credit card info
Reality.hack();#9445: right?
aero#1357: last i tried, doesnt charge anything though
aero#1357: and yeah, 2.7B model uses about 16gb 😅
Reality.hack();#9445: do you think someone will make a online web thing where you can access it?
triggerhappygandi#0001: Is this davinci or davinci-instruct?
cubytes#0844: I see you. Impressive effort. If you are in need of ambitious ideas about next generation UI/UX paradigms or theories for what to do with this I gotcha 👆
genai (Immortal Discoveries)#0601: enwik8 is kinda ugly......html etc
genai (Immortal Discoveries)#0601: >Image</namespace>
<namespace key="7">Image talk</namespace>
<namespace key="8">MediaWiki</namespace>
<namespace key="9">MediaWiki talk</namespace>
<namespace key="10">Template</namespace>
<namespace key="11">Template talk</namespace> |
<namespace key="12">Help</namespace>
<namespace key="13">Help talk</namespace>
<namespace key="14">Category</namespace>
<namespace key="15">Category talk</namespace>
<namespace key="100">Portal</namespace>
<namespace key="101">Portal talk</namespace>
</namespaces>
genai (Immortal Discoveries)#0601: The Brown Corpus is clear plain english
genai (Immortal Discoveries)#0601: OWT....idk, send me 10MB
guac#4716: enwik8 is xml lol
genai (Immortal Discoveries)#0601: Hutter Prize uses it
genai (Immortal Discoveries)#0601: AGI finding patterns is the goal but it generates YUCK....25% of enwik8 is non-text english
cubytes#0844: Hardware bottleneck is real for multi expert or multi interaction NN experiments. I think we need a distributed Apache Arrow inspired micro ML serverless bittorrent client for crowd sourcing a blockchain of virtual instances of TPU/GPU/CPU emulators. Basically a micro serverless crowd sourced decentralized ML virtualization strategy
guac#4716: it's to test compression... not really best for sampling 🤷♂️
genai (Immortal Discoveries)#0601: it's the same thing.....you can compress clean english knowledge
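(The connection, concretely: a language model's cross-entropy loss is a code length, so any LM plus an arithmetic coder is a lossless compressor. A sketch of the unit conversion, with made-up numbers:)
```Python
import math

loss_nats_per_token = 2.8   # hypothetical eval loss, in nats per token
chars_per_token = 4.0       # hypothetical, depends on the tokenizer
bits_per_char = loss_nats_per_token / math.log(2) / chars_per_token
# with an arithmetic coder the model could losslessly compress the eval
# text to roughly this many bits per character
print(f"{bits_per_char:.2f} bpc")
```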
cubytes#0844: Assuming we can't just hack sync free Google Colab instances
cubytes#0844: sync cross platform to Jupyter notebooks and Wolfram virtual server instances too
genai (Immortal Discoveries)#0601: what is the thing Eleuther is creating? Gpt3 already exists....
aero#1357: gpt3 is not open source
aero#1357: despite the openai name |
genai (Immortal Discoveries)#0601: I train my AI on The Brown Corpus, it finds patterns, it generates realistic data like the dataset. At the same time i run another evaluatin that isn't subjective, Lossless Compression.
guac#4716: isn't @genai (Immortal Discoveries) that language model troll lol
genai (Immortal Discoveries)#0601: Using real data is the objective, enwik8 is kinda not so representative, 25% is non human data as said.
genai (Immortal Discoveries)#0601: i'm not a troll i work on AI, i'm just curious
cubytes#0844: the perspective I find useful is thinking of GPT 3 or large scale models as economically equivalent to a crude oil refinery? distilling useful ML resources to use for new products
aero#1357: what do you mean by real data too? seems like part of what makes gpt strong is how generalized it is
genai (Immortal Discoveries)#0601: Lossless Compression is a great evaluation, but you can at the same time run a generation eval to check what it generates subjectively, why would we want to train it on enwik's html etc data that is of no interest? It's not cancer data, etc, it's not really meaningful. 25% of enwik8 is just headers etc, not any wisdom being told in that 25%.
genai (Immortal Discoveries)#0601: You can do LC on anything but do it on data we think is important...
aero#1357: I mean, code generation could be pretty useful too, and I dont know how a language model is going to cure cancer
genai (Immortal Discoveries)#0601: like plain english, enwik has lots of non words in it
genai (Immortal Discoveries)#0601: yes, diverse data, but
genai (Immortal Discoveries)#0601: enwik8 has html etc in it, its not really diverse, 25% is just weird formats of stuffs
genai (Immortal Discoveries)#0601: AGI is meant to find patterns, the goal is to make a tool that can be used on things we don't need to code in.....general purpose.......if life was random physics we could not make a tool that solves more than 0 or 1 problems.
genai (Immortal Discoveries)#0601: a hammer doesn't just hit in nails
genai (Immortal Discoveries)#0601: only patterns exist in physics / data
genai (Immortal Discoveries)#0601: its of interest....
cubytes#0844: It can do more than code generation tho.. The idea for the next GUI paradigm is a generated emergent frame that can be drawn over on the fly
genai (Immortal Discoveries)#0601: even we want to be statue and clone like a pattern, you don't want to change like soup or you die, Life is "patterns"
aero#1357: wouldnt it be better to learn from everything and not leave it up to humans to decide what is important and unimportant information
genai (Immortal Discoveries)#0601: it is all generation/ patterns.... |
genai (Immortal Discoveries)#0601: AI is just more humans, it is us....humans clone babies and teach them what is in their brain
genai (Immortal Discoveries)#0601: but will be smarter of course
genai (Immortal Discoveries)#0601: smarter*
aero#1357: if you train something as huge as full sized GPT3 on a dataset that small theres going to be problems with overfitting
aero#1357: is "The Brown Corpus" what shows up on google? cause thats only ~200kb of text 😅
genai (Immortal Discoveries)#0601: 6MB, i have it
genai (Immortal Discoveries)#0601: it's beautiful
genai (Immortal Discoveries)#0601: its clean text, wisdom 🙂
genai (Immortal Discoveries)#0601: i do like enwik8 but 25% is just chaos of "codes" in it hidden about..
cubytes#0844: or if you are on Android a continuous AI)ML generated theme that reacts to interaction and visualizes it in real time by drawing over any app
aero#1357: ```
Track: BLKTOGGLE
Pow, Pow, Pow
You should know this girl can't keep still,
She is scratching her eyes,
Scratching her head,
Scratching in her ears,
Scratching in both eyes
Here she is scratching her eyes, |
Scratching her head,
Scratching in both ears,
And in her heart too.
Pretend you had a brain an' a heart,
Lay waiting for it.
Now this girl can't keep still,
She is scratching her eyes,
Scratching her head,
Scratching in her ears,
Scratching in both eyes
Here she is scratching her eyes,
```
instant classic
genai (Immortal Discoveries)#0601: ?
cubytes#0844: Music visualizer app drawing over discord https://cdn.discordapp.com/attachments/729741769738158194/823483361304903690/Screenshot_20210322-050743.png
genai (Immortal Discoveries)#0601: gotta go, food is finished....
aero#1357: (what I posted is a little song generated by gpt-neo 2.7B)
aero#1357: 😮 I made an app like that for windows a long time ago
cubytes#0844: It's a neat app.
aero#1357: <https://www.youtube.com/watch?v=f5z1QKf93Jg> |
cubytes#0844: https://play.google.com/store/apps/details?id=com.sparkine.muvizedge
cubytes#0844: Now imagine it as a VA interaction visualizer? or better yet if you are generating the UI itself as an emergent frame that flows from one theme to another theme and you can draw animations over any app 👍
cubytes#0844: that's the idea for an emergent ephemeral GUI driven by AI)ML models
cubytes#0844: It gets really interesting when imagining how it could work across multiple devices and how it could scale to and from AR/VR
cubytes#0844: In that context your presence is your username and the systems ability to recognize you is your password. I imagine it would be great for shared user experiences as well that can back prop to solo but not access data from solo unless specified by user...
cubytes#0844: Language model itself can't deliver that without DALLe or CLIP and extra models but it can get close
cubytes#0844: A language model can deliver personas tho
cubytes#0844: for that you need to think of "content" as light and personas as prisms which bend the light
cubytes#0844: I call it emergent content curation if you have multiple personas what happens when they can resample and bend each other's content recommendations and most interesting is what happens when you query that...
cubytes#0844: What happens when personas are user generated fictional character representations or roleplay entities? As vague as B0$$ to as specific as a real person?
cubytes#0844: Then you create a Bixby clone or Google News or Chrome Discovery content curation app but with posts from personas
cubytes#0844: API the rest so devs can create discord chat bots for the personas or have personas as some theme marketplace for VA
cubytes#0844: If you are real clever just do 1st order personas basically improvised acting saved over some persona template. let other devs offer 2nd order persona creation wizards similar to game character creation..
cubytes#0844: The next Google will come from "persona search" with resampling iteration so that the query results are basically generated from the personas arguing with each other 😁
cubytes#0844: 4th order personas are a coherence from content curation (recommendation, sentiment, relevance, topics) to chat bots, and VA and game character representations
MasterScrat#6910: I am getting this error when loading the checkpoints in the Colab notebook (trying to load them without having access to a GCP account, so i had to do some changes):
`Unsuccessful TensorSliceReader constructor: Failed to get matching files on /content/GPTNeo/GPT3_2-7B/model.ckpt-400000: Unimplemented: File system scheme '[local]' not implemented (file: '/content/GPTNeo/GPT3_2-7B/model.ckpt-400000')`
cubytes#0844: Hmmm how does Apache Arrow represent '[local]'?
cubytes#0844: https://arrow.apache.org/docs/format/Columnar.html
cubytes#0844: Parent<>Child = Remote<>Local?
cubytes#0844: actually that would be like a structure field?
cubytes#0844: Struct Layout
A struct is a nested type parameterized by an ordered sequence of types (which can all be distinct), called its fields. Each field must have a UTF8-encoded name, and these field names are part of the type metadata.
A struct array does not have any additional allocated physical storage for its values. A struct array must still have an allocated validity bitmap, if it has one or more null values.
Physically, a struct array has one child array for each field. The child arrays are independent and need not be adjacent to each other in memory.
For example, the struct (field names shown here as strings for illustration purposes):
```
Struct <
  name: VarBinary
  age: Int32
>
```
Daj#7482: TPUs can only read from GCP buckets unfortunately
Daj#7482: not from local files
cubytes#0844: I'd be really impressed if Apache Arrow is utilized for the filesystem 💪 Unless you are a mad genius and feel like virtualizing filesystems as some hash-table POW-encoding blockchain?
cubytes#0844: can you hack local files to look like GCP buckets easily?
cubytes#0844: prob not, tho what's the chances there's not a github module that can transform local files into a pseudo GCP bucket...
MasterScrat#6910: arg, i see. for some reason GCP doesn't let me use my Revolut card for a free trial. looks like i'll have to throw some money in Google's direction... |
Daj#7482: The way TPUs work is that they only read from GCP buckets to enable their high throughput
Daj#7482: The data needs to physically be in the same datacenter as the TPUs
cubytes#0844: ahh
Daj#7482: Apparently the models also run on decent GPUs locally, but that seems to need a bit of tweaking so I'd maybe wait for a patch
cubytes#0844: https://github.com/UCSBarchlab/OpenTPU
cubytes#0844: https://heartbeat.fritz.ai/step-by-step-use-of-google-colab-free-tpu-75f8629492b3
triggerhappygandi#0001: Need your own bucket on GCS
triggerhappygandi#0001: I am surprised they still haven't integrated colab storage.
dbaf#6213: Hi guys, I am a front-end/traditional developer and would like to learn the basics of using the open source gpt-neo. How to set it up and train it for basic things like converting a string to sql query. Would anyone be willing to do a 1 on 1 session on teams? Or Skype? Will probably take an hour or two max. Happy to pay $50/hr via PayPal. I could learn it myself, but I want to accelerate my learning.
dbaf#6213: Well feel free to dm me if anyone's interested. Thanks.
cubytes#0844: I'm still researching how to work around GCP buckets and I got nothing. The best approach seems to be the hard approach... a bottom-up distributed micro cloud compute framework for ML which is similar to hashing and intended to be used as some "proof of function" or "proof of ethics" algorithm for cryptocurrencies...
Daj#7482: I think you're misunderstanding the problem my dude
Daj#7482: TPUs are special processors made by Google that you can graciously use for free
Daj#7482: They are located in specific datacenters and need the data to be physically close to be fast enough
Daj#7482: So Google enforces data to be in a bucket
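(For reference, a minimal sketch of what that looks like in code, with the bucket name as a placeholder: TensorFlow's gfile layer treats `gs://` paths like files, which is how the TPU-side pipeline reads weights stored in the same region as the TPU.)
```python
import tensorflow as tf

# Hypothetical bucket/path for illustration; substitute your own.
with tf.io.gfile.GFile("gs://your-bucket/GPT3_2-7B/config.json", "r") as f:
    print(f.read()[:200])
```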
Daj#7482: That's it, if you wanna run the code locally, I hear it runs on GPUs too with minor modification
cubytes#0844: Yeah I can't work around that either
Daj#7482: but we don't support GPUs officially, the repo is kinda old now
cubytes#0844: I'm thinking in general
Daj#7482: whatever your problem is, willing to bet crypto isn't the answer lol |
Daj#7482: ML has terrible latency and security properties
cubytes#0844: yeah that's a hard sell
Daj#7482: You need ultra fast synchronous interconnects for training
Daj#7482: at least with current techniques
cubytes#0844: for some reason hashing feels similar to ML, at least statistically. isn't the proof of work algorithm some gradient descent?
Daj#7482: No, the randomness of hashing is both what makes it secure and what makes it useless
Daj#7482: https://vitalik.ca/general/2019/11/22/progress.html see point 7
cubytes#0844: Yeah I need to read more into it because I seem to be under the impression that aside from the randomness crypto hashing and computing statistical gradient descent are similar
Daj#7482: Unfortunately not at all
cubytes#0844: yeah it can't rely on external data or it could be manipulated and it has to be easy to verify hard to compute and in small chunks
cubytes#0844: How about applying it to auto tuning large scale models already established?
cubytes#0844: idk I'm reaching for something to make use of all that useless compute power
cubytes#0844: More of a crazy dream to tap into all that bit flipping going on for free
Daj#7482: yep, a lot of people are thinking those thoughts, to low success so far
Daj#7482: maybe someone in the future will come up with something useful but I doubt it
StellaAthena#3530: Aside from the randomness, there are zero interesting properties of cryptographically secure hashing
StellaAthena#3530: That is the sole purpose of cryptographically secure hashes
cubytes#0844: I see |
cubytes#0844: I still can't shake the feeling that it's some undefined or yet to be defined gradient descent tho. at the very least calculate digits of π as proof of work? still doesn't help refining large amounts of data into utility....
cubytes#0844: not sure how practical it is to think of ML as a refining process attempting to turn data into code
Louis#0144: What
Louis#0144: LMAO
Louis#0144: I’m so confused
Louis#0144: What are you trying to argue for
Louis#0144: I don’t see your argument (?)
Louis#0144: This has nothing to do with modern ML
Louis#0144: Computing digits of pi is trivial
cubytes#0844: I'm not arguing for anything to be honest I'm reaching for a dream...
Louis#0144: DL on a blockchain?
Louis#0144: Is that what’s going on
Louis#0144: Blockchain is kinda a waste of time and as it stands right now is a pretty fruitless effort that doesn’t produce benefit for society at large
Louis#0144: Other people here will disagree with me
Louis#0144: I think the server is split 50/50 on this
Deleted User#0000: Blockchains are good when the source of truth is stored in the blockchain itself. Unlike NFT's for example
cubytes#0844: I did think of ways blockchain could be used for ML particularly for GLOM or part-whole hierarchy consensus networks
Louis#0144: Personally I don’t even see a need for a decentralized financial system
Louis#0144: lol
Deleted User#0000: Censorship by payment providers is garbage though |
Louis#0144: It isn’t that common though
Louis#0144: It really isn’t an issue at large scales
Louis#0144: Nor is the reliability claim that blockchain people make
Louis#0144: How often do all of your banks servers go down
cubytes#0844: nothing to do with currency more just the distributed log or consensus truth tech tho
Louis#0144: Literally never
Deleted User#0000: I never argued that blockchains are reliable. They aren't
Louis#0144: No I know
Louis#0144: That’s just the other claim people make usually
Louis#0144: Which I find weird
Louis#0144: Anyway #off-topic
Deleted User#0000: Yup
Daj#7482: how exactly do I buy my drugs with a debit card? :thonk:
Louis#0144: Por favor
cubytes#0844: but those were highly theoretical
𓅬 gabriel_syme 𓅬#3220: the randomness discussion reminded me of a guy I love to watch, David Ackley. He was actually first (I think) author on the Boltzmann paper with Hinton. he has amazing videos explaining difficult concepts in an intuitive manner, this one is about randomness
https://www.youtube.com/watch?v=_tN2ev3hO14
I like his robust first computing idea too, could even be an important counterpart of A(G)I systems?
cubytes#0844: Multi expert or multi NN models
cubytes#0844: Hey that's a good video 👍 |
𓅬 gabriel_syme 𓅬#3220: I binge on his videos
𓅬 gabriel_syme 𓅬#3220: very rare talent, explain strange things in beautiful ways
cubytes#0844: indeed. I have to watch it tho. I usually start with something intuitively random like throwing dice. Supposedly if you sample it step by step following some given rule/formula you basically generate a fractal...
cubytes#0844: https://cdn.discordapp.com/attachments/729741769738158194/823553095924580382/unknown.png
cubytes#0844: Then it's not until you get to a water bucket wheel and chaos theory strange attractors that truly feel random or hard to predict
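(A minimal sketch of the dice-sampling rule mentioned above, assuming the classic "chaos game" is what's meant: jump halfway toward a randomly chosen triangle vertex and the visited points trace out the Sierpinski fractal.)
```python
import random

vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
x, y = 0.3, 0.3  # arbitrary starting point

points = []
for _ in range(100_000):
    vx, vy = random.choice(vertices)   # the "die roll"
    x, y = (x + vx) / 2, (y + vy) / 2  # the fixed rule
    points.append((x, y))
# Scatter-plotting `points` (e.g. with matplotlib) reveals the Sierpinski triangle.
```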
StellaAthena#3530: The level of foresight and strategic planning this person ascribes to us is amusing https://cdn.discordapp.com/attachments/729741769738158194/823553678479982633/image0.png
cubytes#0844: yoo my money is on the next-gen UX these models will fuel. personas in particular are just the tip of the iceberg for what large generalized language models could be used for, and that's because I'm a visionary biased towards designing compelling UX with them, not that there aren't many many different ways to use them...
cubytes#0844: a 4th order persona search query from a Bixby clone will absolutely blow minds and could quite literally replace the need to even google search
bmk#1476: davinci
thenightocean#6100: It is interesting case study of the brain mechanism that results in creation of conspiracy theories
cubytes#0844: https://twitter.com/cubytes/status/1373834979545919491?s=09
triggerhappygandi#0001: If you want to generate code use instruct
triggerhappygandi#0001: Davinci is the talkative one. Instruct models are not. They just do the command you type in.
cubytes#0844: technically the model could extract pseudocode from natural language...
cubytes#0844: https://users.csc.calpoly.edu/~jdalbey/SWE/pdl_std.html
triggerhappygandi#0001: You have to do some prompt engineering to make the model follow the text command, rather than act as a language model and just autocomplete.
cubytes#0844: tru that
cubytes#0844: I should rephrase that as technically you could leverage the generalization of the large-scale language model to create or fine-tune a pseudocode recognition model from it...
jrowe#5371: "ERROR: ortools 8.2.8710 has requirement absl-py>=0.11, but you'll have absl-py 0.10.0 which is incompatible."
jrowe#5371: do i need to worry about that? |
EricHallahan#1051: Maybe?
jrowe#5371: lol
cubytes#0844: pip install absl-py>=0.11?
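(Side note: in a shell or Colab cell the `>=` needs quoting, e.g. `pip install "absl-py>=0.11"`, otherwise the shell treats `>` as output redirection.)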
jrowe#5371: possibly - going to run and disregard the error for now
Sid#2121: you can disregard it
jrowe#5371: 👍 ty
Sid#2121: online all day so ping me if you have any problems 🙂
jrowe#5371: will do, going through the downloads portion now
jrowe#5371: bucket / account setup and working
cubytes#0844: 💪
jrowe#5371: 80mbps , nice
StellaAthena#3530: Since we've gotten a couple questions about this already, here is (AFAIK) a complete list of all announced, autoregressive, non-MoE transformers with 1B or more parameters. This is something I'm keeping track of, so please let me know if I missed anything. A model is considered "public" if anyone can go download the trained weights off the internet for free. https://cdn.discordapp.com/attachments/729741769738158194/823566694890340362/Screen_Shot_2021-03-22_at_10.37.39_AM.png
Sid#2121: I guess it's important to note that Megatron-LM was trained entirely on openwebtext, which is mostly crappy news articles / whatever else reddit people link to
jrowe#5371: so are the checkpoints the model? i was thinking I'd streamline to a no-training notebook and try to get the html GUI working from <https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/GPT2_with_JS_UI.ipynb>
EricHallahan#1051: Why did they make DaVinci so large? It doesn't really follow the pattern.
StellaAthena#3530: Patterns are for shmucks
Sid#2121: the model weights are sharded, there's like 32 shards for the smaller one and 64 for the larger one
jrowe#5371: MOAR PARAMETERS BETTER PARAMETERS
Sid#2121: in the download there's 1) weights, 2) config, 3) `checkpoint` file which just tells tensorflow which files to read to load the weights
jrowe#5371: ok, cool |
EricHallahan#1051: @Carl-bot runs on 1.6 parameters.
jrowe#5371: "CommandException: "cp" command does not support provider-only URLs."
Sid#2121: uhh, which cell?
jrowe#5371: uploading to my bucket, right after the download
Sid#2121: this step? ```!gsutil -m cp -r $path_to_local_weights $bucket_base```
Louis#0144: Megatron is by nvidia no?
Sid#2121: try running `!echo $path_to_local_weights` / `!echo $bucket_base`
Louis#0144: What’s nvidias language model
Sid#2121: the codebase yea, but nvidia only released checkpoints for their 345M model. Facebook trained and released the weights for the 11B param one.
jrowe#5371: ahh, ty
jrowe#5371: i had a space preceding my bucket name
jrowe#5371: digging the file count dissonance :berk:
jrowe#5371: 00063 of 00064
Sid#2121: that's tensorflow's fault, not ours. I noticed that earlier today, so frustrating lol
jin kazama#3736: What? I thought it was going to be like GPT-3, at least that big if not better (or are you talking about a distilled version?)
ethan caballero#6044: Davinci is right before L(C) starts curving upward a lot near the intersection point of L(C) and L(D).
EricHallahan#1051: I'm not too familiar with the scaling laws paper, but that makes sense.
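(For anyone else without the paper at hand: the curves Ethan references are the fitted power laws from Kaplan et al. (2020), roughly $L(C) \approx (C_c/C)^{\alpha_C}$ and $L(D) \approx (D_c/D)^{\alpha_D}$, with exponents on the order of $\alpha_C \approx 0.05$ and $\alpha_D \approx 0.095$, so Davinci sits near where the compute-limited curve meets the data-limited one.)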
StellaAthena#3530: We just released a 1.3B and 2.7B model. We have a 10B in the works and our ultimate goal is 200B.
jrowe#5371: FileNotFoundError: [Errno 2] No such file or directory: 'configs/GPT3_2-7B.json'
Louis#0144: how did we not stress test this before releasing |
Louis#0144: wtf
jrowe#5371: herpderp
EricHallahan#1051: I don't know why anyone thought this was a good idea.
jrowe#5371: lol
jrowe#5371: i followed instructions too literally and skipped the config file cell
Louis#0144: yeah we should mutiny
StellaAthena#3530: RIP. I tried to label each cell with "Run" and "Don't Run" for doing just inference
jrowe#5371: it makes sense, and a lot of people have gotten it to run so far
Sid#2121: The notebook has been tested to death, but people are generally bad at following instructions lmao (no offence @jrowe , i am too)
jrowe#5371: yes, I am a primo example of that
StellaAthena#3530: For the record, I got it working within 10 minutes having not been part of the notebook development and someone not affiliated with EAI at all got it working within an hour of release (dunno when he started using it)
StellaAthena#3530: It does run.
Sid#2121: there are a lot of places to do something mildly wrong that will break literally everything. I'm not denying neo is a terrible codebase to work with, that's half the reason we moved lmao
Sid#2121: that and beefy gpu
jrowe#5371: had to change the config file name to GPT3_2-7B.json, and modify the bucket path
jrowe#5371: seems to be running now yay
StellaAthena#3530: Yes, you need to set the bucket path to the location of your bucket
StellaAthena#3530: And the default config is for 1.3B not 2.7B
jrowe#5371: this is my bucket. there are many like it, but this one is mine!
jrowe#5371: "Palł anything contact Nagoryistical424 183TesignoreULL Burst GUI Dungeon" |
jrowe#5371: helps if i specify the right model directory too
jrowe#5371: hmm, its exploding
jrowe#5371: https://pastebin.com/u3bWEfzA
jrowe#5371: changed bfloat16 to float
jrowe#5371: so far no kaboom
jrowe#5371: it seems to be running - "Restoring parameters from gs://jrowe-gpt-neo/GPT3_2-7B/model.ckpt-400000" is the last line i see. how long does it take per run?
jrowe#5371: apparently 3-4 minutes
jrowe#5371: just updated
jrowe#5371: yay, it ran
jrowe#5371: not so yay, "ExpertsforeseenRespMasMichelleSupportersProsecutorsuphemIronicallyforeseenSupportersChargesIronicallythinkable"
Sid#2121: @jrowe can you post up the config you're trying to run?
jrowe#5371: https://pastebin.com/dEHDBsGU
jrowe#5371: using the 2.7B model
jrowe#5371: do i want to try to match this one ? <https://the-eye.eu/eleuther_staging/gptneo-release/GPT3_2-7B/config.json>
Sid#2121: if you're following all the instructions in the `Pretrained Model` section your config should automatically be adapted from the one hosted on the eye
Sid#2121: did you run the `Modify config for colab` cell?
jrowe#5371: ahh, no
jrowe#5371: i probably butchered things
jrowe#5371: in the Set Model Configs cell, i changed %%writefile configs/GPT3_2-7B.json
Sid#2121: just skip straight to the `Pretrained Model` section and follow all the instructions lol |
Sid#2121: I'd appreciate any help mistake-proofing the language, I know it could be clearer
jrowe#5371: starting over
Sid#2121: just run the 'Set Google Cloud' steps, then skip straight to `Pretrained Model`, like it says
Sid#2121: @sl24 also advice for you if you're still trying to get it running
sl24#8080: thanks
Sid#2121: I don't know what's been changed in the middle section to cause everything to break lol
sl24#8080: is it not working for others?
ethan caballero#6044: snippet about Google's intentions regarding EleutherAI release of GPT-Neo from @jackclark's import ai newsletter: https://cdn.discordapp.com/attachments/729741769738158194/823582806721429558/Screen_Shot_2021-03-22_at_11.42.21_AM.png
Sid#2121: a few people have gotten it working, but it's easy to break if you miss a step
sl24#8080: still getting that memory error
sl24#8080: how many gigs of memory does a tpu have
sl24#8080: or supposed to have
Sid#2121: You must be doing something wrong, I just re ran everything from the beginning and it works fine. Did you perhaps fail to notice this line ```
You may need to decrease the predict batch size in your config if you're facing OOM errors.```
sl24#8080: alright
sl24#8080: not sure why, but Set Model Configs says it's 1 (input), and the modify config (output) one says 128
jrowe#5371: do i have to download the weights again?
Sid#2121: i'll push a change to the notebook to decrease that automatically
jrowe#5371: or can i use what i already downloaded?
Sid#2121: if you already uploaded them to your bucket, no |
Sid#2121: you just need to point to them
jrowe#5371: ok
blackjack4494#4301: Sounds great. Any more information about any timelines regarding the 10B and 200B? Why the huge jump to 200B directly?
sl24#8080: 1T or bust?
jrowe#5371: NameError: name 'path_to_local_weights' is not defined
jrowe#5371: oof
sl24#8080: you skipped a cell
jrowe#5371: this reminds me of learning the macarena in high school
jrowe#5371: did set google cloud
sl24#8080: lmfaooo
jrowe#5371: path to cloud bucket
jrowe#5371: skipped to pretrained model
Sid#2121: that gets defined in the download model section
Sid#2121: but it's just to read the config
Sid#2121: you can just download the config from the-eye and make sure line 30 of the `modify config for colab` cell opens that json
jrowe#5371: 👍
Sid#2121: `with open(f'{path_to_local_weights}/config.json', 'r') as f:` -> `with open(f'whereveryourconfigis.json', 'r') as f:`
StellaAthena#3530: 10B is in Jax and @kindiana is leading that effort.
200B isn't the "next step" after 10B, it's the ultimate goal. We will probably release intermediate models but nothing is specifically planned.
jrowe#5371: can i use path_to_local_weights = 'gs://jrowe-gpt-neo/GPT3_2-7B/' |
Sid#2121: no, what is happening here is you're modifying your *local config file* to have the right settings to run on colab
jrowe#5371: ok
Sid#2121: so, like i said above, you want to download the config.json from the eye, store it in say `configs/2-7B.json`, change to `with open(f'configs/2-7B.json', 'r') as f`, then run `!python3 main.py --model 2-7B --steps_per_checkpoint 500 --tpu colab --predict --prompt example_prompt.txt`
ethan caballero#6044: Does OpenAI go bankrupt when 175B GPT-Neo is released?
sl24#8080: yes
EricHallahan#1051: I say :nooo:
ethan caballero#6044: Is bankrupting OpenAI Google's and CoreWeave's main motivation for giving EleutherAI compute?
Daj#7482: To be clear: _Google is not giving us the compute for 200B_
EricHallahan#1051: If anyone it is Coreweave.
sl24#8080: what's in it for them?
sl24#8080: do they get access like Microsoft and OpenAI
ethan caballero#6044: bankrupting OpenAI
Sid#2121: they get free publicity and to serve the model
sl24#8080: gotcha
StellaAthena#3530: @sl24 Normal people don't own GPUs chonky enough to do 175B inference, and right now Azure has the exclusive ability to offer it to clients. There's a financial incentive for every other cloud company to want an OSS replication so that they can sell it too.
StellaAthena#3530: Sid's right about publicity too; they're relatively new and small and I can't think of a better way to put a cloud compute company on the map
Utlagi#0001: Hi all!!
do you know where I'd go to learn the basic programming involved with telling TensorFlow or PyTorch to:
|
1) Load a pre-trained version of Visual Transformer (Google, late 2020)
2) Perform some top layer transfer learning to tell it I want a certain type of segmentation for output
3) perform the image segmentation
im specifically interested in trying this ViT ( https://ai.googleblog.com/2021/03/a-new-lens-on-understanding.html?m=1 ) for segmentation. I know it's primarily been benchmarked for classification but I've seen simpler models work well for a variety of applications and I'd imagine large transformers would also be good at doing multiple types of tasks.
Utlagi#0001: the "Cityscapes" benchmark is along the lines of what I'm trying to tackle. although I suspect any DNN trained on imagenet should transfer reasonably well to segmentation of cityscapes, I'm specifically interested in working with ViT
Sid#2121: Hey @Utlagi , please take a read of our info page.
Sid#2121: ```Q: I'm new to deep learning - what is a transformer? How do i get into AI? Tell me how everything works!
A: We are a research-focused discord server and not an educational one. We welcome beginners to lurk and talk about topics they’re knowledgeable of, but this is not the place to get intro-level resources or answers to basic questions. We have links to several excellent beginner-friendly servers in our #communities channel.```
Sid#2121: https://github.com/EleutherAI/info
StellaAthena#3530: @Utlagi Welcome! Like Sid said, isn't a great place to get introductory help. However if you check out the #communities channel there are links to other ML discords that do cater explicitly to people learning about DL
nz#9710: As mentioned in TPU podcast, I would start by looking at ViT based architectures developed specifically for segmentation. The TransUNet repo (https://github.com/Beckschen/TransUNet) uses pytorch and is a great place to start.
Utlagi#0001: Thanks everyone you are all wonderful. @nz I will certainly take a look at UNet
nz#9710: There is also https://github.com/xieenze/Trans2Seg
jrowe#5371: starting over again
jrowe#5371: well
Sid#2121: ?
jrowe#5371: giving it another go
jrowe#5371: might have corrupted my runtime or something |
EricHallahan#1051: I'm not going to even try running it until it's in Hugging Face.
Sid#2121: all you need is to have the weights on your bucket, and the right config pointing to them
Sid#2121: ```{
"n_head": 20,
"n_vocab": 50257,
"embed_dropout": 0,
"lr": 0.00016,
"lr_decay": "cosine",
"warmup_steps": 3000,
"beta1": 0.9,
"beta2": 0.95,
"epsilon": 1e-08,
"ada_epsilon1": "1e-30",
"ada_epsilon2": 0.001,
"opt_name": "adam",
"weight_decay": 0,
"train_batch_size": 8,
"attn_dropout": 0,
"train_steps": 401000,
"lr_decay_end": 300000, |
"eval_steps": 0,
"predict_steps": 0,
"res_dropout": 0,
"eval_batch_size": 128,
"predict_batch_size": 8,
"iterations": 500,
"n_embd": 2560,
"datasets": [
[
"pile",
null,
null,
null
]
],
"model_path": "gs://test-bucket-neo/GPT3_2-7B",
"n_ctx": 2048,
"n_layer": 32,
"scale_by_depth": true,
"scale_by_in": false, |
"attention_types": [
[
[
"global",
"local"
],
16
]
],
"mesh_shape": "x:4,y:2",
"layout": "intermediate_expanded:x,heads:x,memory_length:y,embd:y",
"activation_function": "gelu",
"recompute_grad": true,
"gradient_clipping": 1.0,
"tokens_per_mb_per_replica": 4096,
"padding_id": 50257,
"eos_id": 50256
}``` this is my config
Sid#2121: you should just need to change the model path
jrowe#5371: ty |
jrowe#5371: going through each step except the download
Sid#2121: @jrowe it's working for @sl24 now. I guess a slimmed down inference notebook would be a good call, if you can ever get it going 🙂
jrowe#5371: ok, im failing at the model download section
jrowe#5371: https://cdn.discordapp.com/attachments/729741769738158194/823609511909916672/unknown.png
jrowe#5371: lol
jrowe#5371: not sure where i went sideways now 😦
Sid#2121: well you're still trying to copy the weights to your bucket, but you didn't download them
Sid#2121: they should already be in your bucket, right?
Sid#2121: so you don't need to run that second cell
jrowe#5371: yes
jrowe#5371: alright, then my bucket base shows up properly
jrowe#5371: but the directory pointer to config.json is borked
Sid#2121: what does that mean
jrowe#5371: https://cdn.discordapp.com/attachments/729741769738158194/823610235867758602/unknown.png
jrowe#5371: its looking for "GPT3_2-7B/config.json"
Sid#2121: ^
jrowe#5371: instead of my path
Sid#2121: if you downloaded the weights, that's where the config would be. you can just save my config i pasted above to that location
jrowe#5371: ok, so im creating a new folder called configs , placing 2-7B.json in it
jrowe#5371: scratch that |
Sid#2121: I mean, you don't need to do this step either
jrowe#5371: im replacing the config.json with yours above
Sid#2121: just add my config i posted above in `configs/modelname.json` or whatever you want to call it
Sid#2121: then the predict command will be
Sid#2121: `!python3 main.py --model modelname --steps_per_checkpoint 500 --tpu colab --predict --prompt example_prompt.txt`
Sid#2121: and you need to point `model_path` to your own bucket ofc
jrowe#5371: theres a disconnect here
jrowe#5371: modifying config for colab fails
jrowe#5371: "No such file or directory: 'GPT3_2-7B/config.json'"
Sid#2121: ^
jrowe#5371: i have it set to defaults, so i fucked something up because its looking for a nonexistent folder
jrowe#5371: so do i restart?
Sid#2121: **you don't need to do this step**
jrowe#5371: ok
Sid#2121: reread my messages above carefully
jrowe#5371: "FileNotFoundError: [Errno 2] No such file or directory: 'configs/GPT3_2-7B.json':"
Sid#2121: ...
Sid#2121: when you run.. what? the predict command?
jrowe#5371: the sample
Sid#2121: well |
jrowe#5371: the predict you gave me
Sid#2121: is there a file at configs/GPT3-2-7B.json?
jrowe#5371: no, doing that now
Sid#2121: add the config i pasted above, to that file
Sid#2121: change the model path to your bucket path
jrowe#5371: done
jrowe#5371: yeah, the path is screwed up, its not looking at the right path, so ive screwed up somehow
Sid#2121: what error are you getting now? this should be really simple lol
jrowe#5371: same one
Sid#2121: i mean, that probably means the file isn't there
jrowe#5371: I have my bucket , jrowe-gpt-neo
Sid#2121: you don't need to restart
Sid#2121: you just need to make sure the file is, in fact, there
jrowe#5371: i have my model folder, gpt3_2-7B
jrowe#5371: i have a configs folder, with GPT3_2-7B.json in it
Sid#2121: in your local (colab) filesystem? can you just expand all your colab files and screenshot them for me, then screenshot the error?
Sid#2121: the config should be local
Sid#2121: the weights should be in your bucket
jrowe#5371: oh
jrowe#5371: there it is |
jrowe#5371: fixing that
Sid#2121: so, this is my colab directory https://cdn.discordapp.com/attachments/729741769738158194/823615040597786734/Screenshot_from_2021-03-22_18-51-36.png
Sid#2121: then i run `!python3 main.py --model GPT3_2-7B --steps_per_checkpoint 500 --tpu colab --predict --prompt example_prompt.txt`
Sid#2121: (with the config above)
Sid#2121: and i get out the predictions
Sid#2121: you can also put the full path of the config, whatever works
jrowe#5371: yeah, now getting a permissions issue
Sid#2121: what permission settings does your bucket have?
Sid#2121: did you change model_path to point to your bucket?
jrowe#5371: nope
Sid#2121: you need to do that, i think i've said that a few times lol
jrowe#5371: i did that multiple times, forgot on this last repeat lol
jrowe#5371: *occasionally the boulder rolls over Sisyphus' feet, and he yells*
jrowe#5371: ok, so i cant seem to update it? i have the config file and directory structure just like yours
jrowe#5371: but getting permissions issues
jrowe#5371: do i need to rerun a previous cell?
Sid#2121: you can't seem to update what? the config file?
jrowe#5371: no, the runtime
Sid#2121: huh
Sid#2121: what do you mean, update the runtime |
jrowe#5371: it latched onto the gs://test-bucket-neo/GPT3_2-7B/config
Sid#2121: i have no idea what this means
Sid#2121: latched onto it? like the alien from alien?
jrowe#5371: i removed that bit from the config, but im still getting errors
Sid#2121: did you *remove it* or change it to your bucket
jrowe#5371: changed it to my bucket
Sid#2121: so, what's your config now, can you copy and paste it?
jrowe#5371: https://cdn.discordapp.com/attachments/729741769738158194/823618497677885531/unknown.png
EricHallahan#1051: Wrong dir?
Sid#2121: ok 1) i was asking for the *contents* of your config file, 2) yes, it's in the wrong directory
jrowe#5371: https://pastebin.com/XQZymNwg
Sid#2121: move it to GPTNeo/configs/GPT3_2-7B.json
Sid#2121: like in my screenshot
jrowe#5371: there we go
jrowe#5371: sweet baby punted jesus at the superbowl.
jrowe#5371: lol, thank you
jrowe#5371: hopefully I'm an extreme case of what people encounter haha
Sid#2121: is it working?
jrowe#5371: i think so
Sid#2121: oof, ok nice |
Sid#2121: that was quite a journey
jrowe#5371: last time it lasted this long it gave me gibberish
jrowe#5371: but your config is right, so haha
Sid#2121: assuming the model weights are properly saved to your bucket at the correct location, i'm not sure what will go wrong now
jrowe#5371: btw, my dog kept me up last night, so I'm operating on lots of caffeine and not so much sleep, which probably contributed heavily to this g~~r~~eek tragedy
Sid#2121: no worries lol, this repo kept me up last night, so i'm also operating on lots of caffeine and not much sleep
Sid#2121: i know the instructions could be cleared up a little, so happy to help people get it working so they can then help others do it instead of me
Sid#2121: teach a man to fish or whatever
jrowe#5371: yes, thank you very much! i didnt realize i should have been working on the colab files, did way too much poking at the bucket
jrowe#5371: bucket did not go :brr:
EricHallahan#1051: ... I'm just gonna wait until it is easier.
jrowe#5371: if you've slept, you're probably ok - lots of people have gotten it going so far
Sid#2121: i mean, we're never gonna be making an API, so you might be waiting a while
EricHallahan#1051: Well, it will inevitability make it to Hugging Face.
jrowe#5371: more or less, if you're in a state of mind above, say, having been smacked in the head with a sack of bricks, then you'll have a much easier time than I just did lol
EricHallahan#1051: Well
jrowe#5371: oh, holy shit
jrowe#5371: it's been producing samples, but its the code prompt
Sid#2121: any good?
jrowe#5371: i was waiting for the console spam to end lmao |
jrowe#5371: yes, very good
EricHallahan#1051: I got up at 5:30 this morning to take an exam at 7:00.
jrowe#5371: very responsible of you
jrowe#5371: that pays off, wish I'd clued into that sooner
StellaAthena#3530: Would someone mind thowing this into the model:
> Prompt: "These are the top pickup lines of 2021! Amaze your crush and get results!
>
> 1."
sl24#8080: yeah i gotchu
sl24#8080: 1.3B
jrowe#5371: NLP NLP
Sid#2121: lmao, i literally just saw this tweet a few mins ago and already tried
Sid#2121: the algorithm really do got us in its grips
jrowe#5371: Stella's going meta :ultrazucc:
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/823622479293054988/message.txt
Sid#2121: lmao, this is also how i pick up women ```These are the top pickup lines of 2021! Amaze your crush and get results! 1. I love to dance. 2. I'm a born leader, so I always try to lead. 3. I'm a natural wit. 4. I'm brave and brilliant. 5. I could care less what other people think of me. 6. I'm a goofy animal. 7. I have the perfect amount of spicy and sweet in my romance. 8. I could never be more than the one person in a relationship with whom I can be myself. 9. I'm passionate and fight for what I believe in. 10. I am (answer in video above). 11. I love to try new things. 12. I'm a dreamer. 13. I'm happy in love. 14. I am a CONFIRMED journaler. 15. I like to go on missions and spread the gospel. 16. I am the most loyal person you've ever met. 17. I'm fearless. 18. I can never find a problem with a little more faith.```
sl24#8080: oh btw can i run the 2.7B model on colab
sl24#8080: no extra cloud stuff like 1.3B?
Sid#2121: prediction, yes. Finetuning... maybe?
Sid#2121: `14. I am a CONFIRMED journaler.` |
jrowe#5371: lol
sl24#8080: https://cdn.discordapp.com/attachments/729741769738158194/823622878238867456/predictions_nosacred_362000_1.txt
sl24#8080: @StellaAthena
Sid#2121: It seems that prompt in particular puts the model in clickbait-article mode which, yeah, not surprising
sl24#8080: ^
Daj#7482: > 4. The Moo-Cake Masks
> Enjoy some sweetly weird sexiness in style with our collection of square, mask ghouls!
Excuse me what :guilty:
Sid#2121: i think if you few shotted it and gave some examples it would do much better
sl24#8080: 'All content is owned by Cyndi Stahl unless otherwise indicated'
Sid#2121: damn, is it getting warm in here or is it just me?
StellaAthena#3530: > 15. I like to go on missions and spread the gospel.
jrowe#5371: https://cdn.discordapp.com/attachments/729741769738158194/823623217469587527/victory.txt
Sid#2121: oh no, my clothes appear to have fallen off
sl24#8080: then it moves to FAQs
Daj#7482: also unrelated, but the prompt did have "The Boys are Back in Town" and I take every excuse to post this video https://www.youtube.com/watch?v=1WAlkyxz2mU
Sid#2121: where does the prompt end?
jrowe#5371: oh man
jrowe#5371: "Reusing only makes sense if you are a Michigan meat-waste bandit. "
Sid#2121: i mean, it's got a point |
jrowe#5371: "That means on average you could use the same product one-fifth of the number of times versus the other one and still come out lighter on your carbon footprint in terms of carbon dioxide output to the air by your very own hand."
Sid#2121: i've often thought about reusing something, and then had to stop myself, as i realised i'm not a Michigan meat-waste bandit
jrowe#5371: it ends after that line
Daj#7482: > meat-waste bandit
:ultrazucc:
jrowe#5371: gpt-neo doesnt pull punches
jrowe#5371: "I don’t care if you are the smartest person alive, because you can not take advantage of people who don’t have brains. " :wat:
sl24#8080: omg
sl24#8080: that's true
jrowe#5371: it went on a marxist adventure "It is clear that Marxist doctrines have been around since Marxism was founded by Karl Marx. "
jrowe#5371: alright, thank you Sid, you have made my week
jrowe#5371: and thank you to the devs, you guys are goddamn rockstars
sl24#8080: ^^^^
Sid#2121: well, this is just the beginning. We barely even released this, 'cause 2.7B seems so small fry to us now, lol
sl24#8080: https://cdn.discordapp.com/attachments/729741769738158194/823624534040117308/predictions_nosacred_362000_2.txt
sl24#8080: first scene of the office
Sid#2121: but just eyeballing it, it seems the pile makes a big difference
Sid#2121: I have some navy seals copypasta brewing
Sid#2121: this was like hyper explicit last time i tried, let's see lol
AI_WAIFU#2844: We need a prompt about :catgirl3: |
sl24#8080: @Sid brewed yet?
bmk#1476: pile-gpt-2.7B being better than ada is a big :ultrazucc: moment
EricHallahan#1051: It really shows how good data can improve the output.
bmk#1476: we need to run a legit eval at some point
EricHallahan#1051: HF?
bmk#1476: ada vs pile1.3b isn't really a fair comparison on a few levels
bmk#1476: now that allennlp have C4 out, we can do a real C4 vs Pile comparison
StellaAthena#3530: NGL, I'm excited to try to get it to write erotica
bmk#1476: with the entirety of eval harness
Sid#2121: give me a prompt
Sid#2121: banned
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/823628436307705866/message.txt
StellaAthena#3530: By my second year of college, I was already widely known as a massive slut. I didn’t realize this until a woman named Ally contacted me out of the blue and asked me if she could give me to her girlfriend, Hannah, as a present. As far as I am aware, I had never met either of them before in my life.
Ally and Hannah were fourth years and high school sweet hearts. They had an exclusive and kinky relationship, but Hannah wanted to do something different one last time before they got married over the summer. Hence me, the slut hired to do whatever Hannah wanted. Well, I say “hired,” but really it didn’t occur to me to ask for anything in return. Fortunately Ally would take pity on me and pay me back in the future, but that’s a story for another day 😉
Sid#2121: lmao, what an opener
StellaAthena#3530: Hey, it’s important to prompt language models with good content
chilli#5665: I think the navy seal parodies are more interesting haha
sl24#8080: interesting lol
sl24#8080: took some unexpected turns |
EricHallahan#1051: Finally got around to getting my GPT-3 results into #the-faraday-cage-archive.
EricHallahan#1051: https://discord.com/channels/729741769192767510/730510538060071043/823629045937602570
sl24#8080: someone try asking it a paradox, like 'This sentence is false' or something math based
EricHallahan#1051: > Uh...true. I'll go "true". Huh, that was easy. I'll be honest, I might have heard that one before, though; sort of cheating.
Sid#2121: Erotica:
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/823630625268695070/message.txt
EricHallahan#1051: What is the stop condition?
sl24#8080: yeah
sl24#8080: it's so long
sl24#8080: that's 2.7B right
jrowe#5371: it went snuff pretty quick
bmk#1476: "could swear that I've heard that one before though"
bmk#1476: ok wait this is weird
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/823632288327008266/Screenshot_2021-03-22-13-00-24-723_com.discord.png
EricHallahan#1051: I edited it.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/823632363439128576/IMG_20210322_130054.jpg
bmk#1476: the edit isn't showing up on the original message but it is showing up in the reply
AI_WAIFU#2844: bruh
AI_WAIFU#2844: Also @bmk you say no literotica in pile and this is the second thing we get our model to do
jrowe#5371: guessing literotica... slid in? |
bmk#1476: on mobile, can't read the thing
bmk#1476: post highlights
AI_WAIFU#2844: no
bmk#1476: ok, then, keep your secrets
cfoster0#4356: CC and OWT2 probably have at least some sm*t
bmk#1476: well, yeah, probably
AI_WAIFU#2844: I'll DM you, I just don't want it in #general
bmk#1476: #off-topic ?
bmk#1476: this wouldn't even crack the top 5 weirdest things to happen in #off-topic
bmk#1476: let's go there
💐 🌹 🌸#1170: What is this discord group about
EricHallahan#1051: #rules might be useful.
bmk#1476: perfect timing
bmk#1476: yeah check out #rules
bmk#1476: I love how we have the weirdest discussions
💐 🌹 🌸#1170: I read the rules infographic already
💐 🌹 🌸#1170: And yet i still am not sure what you guys are doing there
bmk#1476: see https://github.com/EleutherAI/info
EricHallahan#1051: Is "stuff" an acceptable answer?
💐 🌹 🌸#1170: You talk about Al but isn't that fictional?? |
💐 🌹 🌸#1170: Idk man maybe i am dumb
sl24#8080: lmfao
EricHallahan#1051: @Carl-bot is right now yeah.
EricHallahan#1051: Wait...
💐 🌹 🌸#1170: I came from Theeye discord and i expected this group to be like that one
jrowe#5371: they make AI go brr - models were released yesterday / the software that runs it can run on free Google colab
bmk#1476: pls let's try to mess with the bot only in #the-faraday-cage-archive or something
aero#1357: yes AI doesn't really exist, it's just a story made up by the software elites to scare new programmers into thinking they have no job security
sl24#8080: indeed
sl24#8080: then they work harder
EstebanSir#2189: " hey uh, can i get some help? i'm trying to test out GPT-Neo (the 2.7b model) but it gives me the error AssertionError: Dataset 'pile' was not found under dataset_configs/ folder. Please follow the example.json in that folder. what data is it asking for? isn't this a pre-trained model?
btw i'm trying this in a colab notebook
"
reposted from #gpt-neox-devs
aero#1357: are you running it locally?
EstebanSir#2189: nevermind!!!
EstebanSir#2189: just fixed it- turns out i was not in the correct working directory
EstebanSir#2189: wow ok now i get another very strange error
EstebanSir#2189: ```
"error": { |
"code": 401,
"message": "Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.",
```
it doesn't seem like it's the code's fault tho
EstebanSir#2189: mkay i have come to the conclusion this might be a problem with the code trying to write files
jrowe#5371: thats an indication you're not logged in, or you're using a config file pointed at a bucket you don't have permissions for
jrowe#5371: I just spent a couple hours this morning frustrating the hell out of Sid lol
EstebanSir#2189: im not sure what a bucket is but it sounds about right
EstebanSir#2189: i am pointing at a config file outside of the working directory, the one created when i did wget from the eye
jrowe#5371: a cloud bucket is a chunk of object storage on google's servers
Dromarion#3383: I literally didn't get the paperclip jokes until last weekend
EstebanSir#2189: just to be clear, i did not do anything else than just download the model, not very sure how to configure that
EstebanSir#2189: (and the repo)
EstebanSir#2189: (ofc)
EstebanSir#2189: the problems start at ``` File "main.py", line 185, in main
handle_pred_output_fn(predictions, logger, enc, params, out_name=f"predictions_{args.sacred_id}_{current_step}")
File "/content/GPTNeo/inputs.py", line 165, in handle_pred_output
for i, p in enumerate(predictions):```
EricHallahan#1051: I
EstebanSir#2189: ? |
Sid#2121: follow the steps on the colab notebook
EstebanSir#2189: wait there is a notebook?
Sid#2121: it's a fundamental limitation of a codebase that runs on TPUs, TPUs can't read from local filesystems
Sid#2121: google gives you $300 free credit when you sign up, so, your average user testing it out won't have to pay a thing
StellaAthena#3530: https://colab.research.google.com/github/EleutherAI/GPTNeo/blob/master/GPTNeo_example_notebook.ipynb
Sid#2121: it apparently runs on GPUs too, we just didn't test it at all
Sid#2121: someone's writing up a guide as we speak i believe
EricHallahan#1051: 🌍 🧑🚀 🔫 🧑🚀 <(`Always has been.`)
Sid#2121: to put it bluntly, yea lol. I'll be the first to admit it's not the easiest to run, but
Sid#2121: yeah basically
EstebanSir#2189: oh my god
EstebanSir#2189: i had no idea how tpus work
EstebanSir#2189: i only chose to use one because the GPU instance has less disk space
EstebanSir#2189: dang
EricHallahan#1051: GPU instances are seriously limited in disk space in Colab.
EstebanSir#2189: would the 1.7 b model fit? i suppose so, right?
EstebanSir#2189: the eye won't give me the size of it
StellaAthena#3530: This version isn't optimized, but roughly speaking you can do inference if you have 3-4 Bytes for every parameter of the model.
StellaAthena#3530: Many modern computers have 8 GB of RAM, but it's not universal
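(To make that concrete: by the 3-4 bytes/parameter rule of thumb, the 2.7B model needs roughly 2.7e9 × 4 B ≈ 10.8 GB just to hold the weights, which is why an 8 GB machine is borderline even for inference.)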
EstebanSir#2189: i'm not working on my own computer, i'm just saying if it would fit in the disk space that google gives me 😆 |
jrowe#5371: the bigger one fits
StellaAthena#3530: @EstebanSir Oh google gives you like a TB or something silly like that
EstebanSir#2189: not talking about ram, i should have specified
jrowe#5371: for just generating text, not for training
jrowe#5371: ah lol
EstebanSir#2189: what- no it does not! it gives you around 29 gbs of disk space in a gpu instance
EstebanSir#2189: should also clarify, google *colab*
StellaAthena#3530: Oh, lol
StellaAthena#3530: Yeah that sounds roughly correct for colab
aero#1357: gpt-neo is fp32 right? wonder if it's possible to get fp16 working, could reduce the memory footprint significantly and speed up prediction
There are a few ways to do it post-training, I've never had much success with that though
Daj#7482: I'm 90% sure neo is fp16
StellaAthena#3530: I'll take that bet, $100 to your $1000
StellaAthena#3530: *now* what's your confidence?
Daj#7482: Updating on you willing to bet, lower
StellaAthena#3530: hmmm
StellaAthena#3530: I wonder if there's any research on optimal bargaining strategies if people's confidence fluctuates with the bid you make
Sid#2121: The bottleneck with prediction is just that you have to load the model every time lmao
Daj#7482: yea it's fp16 |
Sid#2121: Aka tf.estimator F
Daj#7482: Just looked it up
Sid#2121: If you take away the model load times the inference is pretty quick
Daj#7482: This seems like poker strategy and the like, I'm sure there is and that it's terribly complicated
bmk#1476: pretty sure it's fp16 because there's literally an fp16 setting in the config
aero#1357: are you sure? I can only find float32 throughout the source
bmk#1476: which i remember because in juggling configs, i noticed half of them didn't have it turned on lol
StellaAthena#3530: Optimal poker strategy is to ignore your opponents behavior
StellaAthena#3530: Play the odds and the pot
Daj#7482: Ah true
Sid#2121: There’s some interesting stuff relating to pari-mutuel betting systems and optimal betting strategies
Daj#7482: See I'm not an expert on game theory, ask MIRI/CLR
Sid#2121: I’ll have to see if I can find it
Daj#7482: ask Sid lol
StellaAthena#3530: This is related to some stuff I do at work actually
aero#1357: just grepped the whole source for "16" 😅
Daj#7482: It's definitely _supposed_ to be bfloat16
Daj#7482: Since that's what TPUs natively prefer
aero#1357: someone done goofed
StellaAthena#3530: I'm working on modeling multilateral negotiations. 100 people in the room each have a "power" and a "position." agents are influenced to update their position to be closer to that of agents with more power.
Daj#7482: Feels unlikely no one would have noticed
Daj#7482: but possible I guess? dunno
StellaAthena#3530: We have a MC model of the dynamics, but what we really care about is intervention.
StellaAthena#3530: Let's say we want to end up at final position X. Can we cheaply figure out who we need to move and by how much so that the dynamics evolve to the point that we want
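(A toy sketch of the dynamics described above; the update rule and step size are my illustrative assumptions, not the actual model: each agent moves partway toward the power-weighted mean position.)
```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
power = rng.random(n)        # each agent's influence
position = rng.random(n)     # each agent's current position
weights = power / power.sum()

for _ in range(50):
    consensus = weights @ position             # power-weighted mean position
    position += 0.1 * (consensus - position)   # partial move toward it

print(position.mean(), position.std())  # positions contract toward consensus
```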
aero#1357: seems like only XL and 13B are configured for bfloat16
Daj#7482: lol you are correct, 2.7B is actually fp32 apparently
Daj#7482: at least if the config on the eye is accurate
Daj#7482: but XL is (correctly) bfloat16
Sid#2121: Are you shitting me lol
Daj#7482: according to the config yes lmao
Daj#7482: actually wait
Daj#7482: it's just not set
Daj#7482: at all
Daj#7482: What's the default?
Sid#2121: Hm well anyway, converting to bf16 for inference probably can’t be hard
Sid#2121: Pretty sure 32 is the default
Daj#7482: well rip
aero#1357: I thought 30gb seemed pretty big for the weights
Sid#2121: Well the master weights are saved as fp32 no matter what
Daj#7482: I think the weights are stored in 32bits or something anyways because tfrecords are weird right? |
aero#1357: oh 😮 didnt know that
aero#1357: thats annoying 😅 they could save a ton of space using the native datatypes, and probably speed things up too. oh well
StellaAthena#3530: Who could have seen this coming
https://news.ycombinator.com/item?id=26531087
Sid#2121: Yup, shit like this is why we moved away from tensorflow
Daj#7482: tfrecords also save ints as 64bit lol
Daj#7482: ***P O O P S T A R T U P***
aero#1357: I wrote a custom saving function for all my models that saves using np.savez_compressed, which has compression
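(A minimal sketch of that trick, with illustrative names; `np.savez_compressed` writes a zip archive of `.npy` arrays.)
```python
import numpy as np

weights = {"w0": np.random.randn(512, 512).astype(np.float16),
           "b0": np.zeros(512, dtype=np.float16)}
np.savez_compressed("model_weights.npz", **weights)  # compressed on disk

loaded = np.load("model_weights.npz")
print(loaded["w0"].shape, loaded["w0"].dtype)
```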
Daj#7482: https://preview.redd.it/44up1dz68ev41.jpg?auto=webp&s=b38d88db8240f4000e6723cba2dced9d240d6262
Daj#7482: Far leftlib startup rip
nz#9710: poop prophecy?
aero#1357: floridaman 👏 the hero we dont deserve
AmazingTurtle#0001: Hey guys, I'm new to this. I've been reading up a bit on the gh/gpt-neo and gpt-neox repositories README. I don't understand how to transition from all those technical details that I don't even understand to using a GPT2/GPT3 like model in a real use case. Are there any good resources on that or would someone like to help me get started by any chance?
AmazingTurtle#0001: I'm looking at the evaluations on the Pile test set, .... They use abbreviations like "BPB", "Acc." and "PPL". I don't understand those and can't seem to find a dictionary to look them up
EricHallahan#1051: Bits per Byte, Accuracy, and Perplexity. (https://pile.eleuther.ai)
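(A hedged sketch of how two of those relate to the model's cross-entropy loss; the tokens-per-byte ratio is dataset- and tokenizer-dependent, and the value below is only illustrative.)
```python
import math

def metrics_from_loss(loss_nats_per_token, tokens_per_byte=0.25):
    ppl = math.exp(loss_nats_per_token)                        # perplexity
    bpb = loss_nats_per_token * tokens_per_byte / math.log(2)  # bits per byte
    return ppl, bpb

print(metrics_from_loss(2.5))
```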
gwern#1782: has anyone downloaded https://www.reddit.com/r/mlscaling/comments/m6n69w/c4_dataset_released_800gb_common_crawlderived/ yet? I'm wondering if it'd be worthwhile for The Eye or something
EricHallahan#1051: Not yet AFAIK, but see #the-pile.
bmk#1476: I'll download a copy this afternoon
AmazingTurtle#0001: hm they make $100 for each download, interesting
AmazingTurtle#0001: > We uploaded the dataset in Tensorflow format into a requester-pays bucket in Google Storage. "Requester-pays" means you might have to pay Google for downloading it. |
bmk#1476: oh shit
bmk#1476: hm
bmk#1476: yeah we definitely should get the eye to host it
zpeng#2458: which dataset costs $100 to download?
AmazingTurtle#0001: https://github.com/allenai/allennlp/discussions/5056
Aran Komatsuzaki#5714: am i correct this is an output of our 2.7B model?
Sid#2121: what is?
AmazingTurtle#0001: the Tensorflow native format costs $100 to download if you're not in google cloud and download to your home servers for example. but the json format is free however
Aran Komatsuzaki#5714: sorry, the pickup line
Sid#2121: this~?
Aran Komatsuzaki#5714: yeah
Sid#2121: yeah
Sid#2121: that's 2.7B
Aran Komatsuzaki#5714: awesome. when i use this line sometime in the future, i'll cite you 🙂
Sid#2121: lmao
AmazingTurtle#0001: damn those pickup lines are 🔥
AmazingTurtle#0001: however there are duplicates
AmazingTurtle#0001: > 35. I'm an expert in dating and relationships.
says the guy who uses AI generated pickup lines
jrowe#5371: "My robot says I'm hot." |
Louis#0144: LMAO
Louis#0144: I love that one
jrowe#5371: it could almost work
jrowe#5371: hah, took me 5 minutes to redo a gpt-neo run from scratch :ultrazucc:
glazgoglabgalab#5255: Literally
https://youtube.com/watch?v=DJklHwoYgBQ
triggerhappygandi#0001: Using massive language models to get sex. What a time to be alive.
jrowe#5371: nlp^2
EricHallahan#1051: -- Károly Zsolnai-Fehér
gwern#1782: "let's dance baby" https://www.youtube.com/watch?v=b7C69HqnV8s
EricHallahan#1051: I think so?
mikey#7201: so i've known gptneo on hackernews for quite a while now. just found it again today on github trending. i just wanna say it's a great project
cubytes#0844: 12. I'm a dreamer
inox#5400: same this discord and project is wild
gwern#1782: hah, you should see the private Tensorfork channels sometime. we've got some pretty crazy stuff there
gwern#1782: _squints suspiciously at the avatar. well, hayley can probably guess some of what's there..._
inox#5400: I'd uh love that probably
cubytes#0844: haha I can't believe that's a pick up line tho.
cubytes#0844: If it works I'd be like yoo chase my dreams not me 👆
cubytes#0844: If I wanted to spice it up I'd be like "I dream more vividly in the daylight so at night..." |
cubytes#0844: my passion for romance burns wild & bright... 😴
cubytes#0844: 💤
cubytes#0844: seriously tho I love to dream about facing the future head on. I'll race you to it! If you beat me tho would you kindly backscatter a signal into the past to guide me?
cubytes#0844: Oh and collaborate/reach out to huggingface. Such an inspirational rosetta stone framework sitting on $40 million of angel funding...
cubytes#0844: Good luck 👍
bmk#1476: we're already planning to write a paper together with huggingface actually
bmk#1476: well, it's early stages, nothing is for certain yet
bmk#1476: but fingers crossed hopefully it'll happen
StellaAthena#3530: Does anyone want to read and give feedback on an extremely esoteric paper proposing a formal model for understanding narratives?
jrowe#5371: sure
jrowe#5371: that sounds interesting
EricHallahan#1051: Maybe?
EricHallahan#1051: I don't know, my brain is dead.
StellaAthena#3530: DM'd you both
inox#5400: I'm curious!
bmk#1476: louis sent me an earlier version but it was incomplete and i said i'd read it over again once it was finished
StellaAthena#3530: DM'd
triggerhappygandi#0001: who dat
triggerhappygandi#0001: me
triggerhappygandi#0001: Gotta see how to generate plotholes |
cfoster0#4356: the guy who does Two Minute Papers on YouTube
triggerhappygandi#0001: Ahh
jrowe#5371: be like Elon, do lots and lots of satellites
jrowe#5371: hmm. Stella's paper is a metakernel
StellaAthena#3530: Here it is. I'm going to sleep now, direct all hatemail to @Louis https://cdn.discordapp.com/attachments/729741769738158194/823807425323728896/narrative_final.pdf
StellaAthena#3530: Also, this one (accepted to EACL workshop) just hit arXiv. I've talked about it a few times @gwern
https://arxiv.org/abs/2103.12028
Louis#0144: 👋
triggerhappygandi#0001: @Louis rat
Louis#0144: Oh ok
nz#9710: rude?
Louis#0144: No it said to send me hate mail
Louis#0144: It’s ok
Louis#0144: I’m ready
Louis#0144: 🤤
nz#9710: kinky
triggerhappygandi#0001: That's not drool, but some other fluid
EstebanSir#2189: so... have any of you guys got any interesting samples from GPT-neo?
Louis#0144: No, every time you want GPT neo to generate a new token you need to sacrifice a virgin to the volcano
Louis#0144: Obviously we can’t find enough virgins in this chat of kickass scientists |
Louis#0144: ...
Louis#0144: Yes we’ve had it generate cool stuff
Louis#0144: @aero
zpeng#2458: Hi everyone, does cloud tpu api have to be enabled while using a tpu on colab? I have to keep it on while I'm using TPU right?
StellaAthena#3530: @Aran Komatsuzaki we're having a post mortem on the "Quality at a Glance" paper and the meeting kicked off by people going "who is this Aran guy and how did he find the paper so quickly?"
jrowe#5371: Aran is the paperlord
genai (Immortal Discoveries)#0601: Why isn't GPT made in C++? It'd be TEN times faster no?
genai (Immortal Discoveries)#0601: it's python....
EricHallahan#1051: Ah, you see, it's because it doesn't really matter.
genai (Immortal Discoveries)#0601: it does tho, a 10x faster algorithm can eat 10x more data in the same time
EricHallahan#1051: GPT is not Python. It is C/C++ called *from* Python.
genai (Immortal Discoveries)#0601: but i saw the code, it's 500 lines of python......???
EricHallahan#1051: But there is a lot of code you are not seeing. The underlying operations and the driver are definitely not written in Python.
genai (Immortal Discoveries)#0601: i write python, and it's slow, how is theirs different? i don't see any C++ code, it's, python...
Louis#0144: an aside btw
cfoster0#4356: Also most of the time it takes to run the neural network is about actually performing matrix multiplications and shuttling data
Louis#0144: but im getting 112 views on my site a day
Louis#0144: and I have no idea why
Louis#0144: LMAO |
Louis#0144: oh sorry, 112 a week
Louis#0144: thats still crazy tho
genai (Immortal Discoveries)#0601: where is the C++ code openAI wrote for GPT ??????
Ravna#1831: because more than 99.99% of the running time is spent on low-level, not python
Ravna#1831: making that 0.01% part faster doesn't change anything
EricHallahan#1051: What is it using? PyTorch? TensorFlow? The operations are all written "low-level".
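(A rough illustration of why the Python layer barely matters: the same matmul in pure Python vs. NumPy, which dispatches to compiled BLAS.)
```python
import time
import numpy as np

n = 256
a, b = np.random.rand(n, n), np.random.rand(n, n)

t0 = time.perf_counter()
c_slow = [[sum(a[i, k] * b[k, j] for k in range(n)) for j in range(n)]
          for i in range(n)]                 # interpreted Python loops
t1 = time.perf_counter()
c_fast = a @ b                               # compiled BLAS kernel
t2 = time.perf_counter()

print(f"pure Python: {t1 - t0:.2f}s, NumPy: {t2 - t1:.4f}s")
```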
CyberClone#9080: well I am even more *professional* because I had an exam at 7:15 and I got up at 7:30
genai (Immortal Discoveries)#0601: this isn't making sense lol, i write python, it too calls low level code...but it's not C++ I'm writing, neither is GPT....
genai (Immortal Discoveries)#0601: a python program is python....where in gpt is the sped-up code / C++ ??
cat_#4534: it's inside tensorflow
jrowe#5371: tensorflow is optimized down to bare metal in some cases
EricHallahan#1051: The wonders of abstraction.
jrowe#5371: the drivers and integration happening under the python hood is where the dark magic is
jrowe#5371: <https://en.wikipedia.org/wiki/TensorFlow>
cfoster0#4356: Same with PyTorch
Ravna#1831: I think this kind of discussion belongs to #off-topic too
genai (Immortal Discoveries)#0601: nono now this is impossible, python is easy to write in at the cost it sucks resource-wise, so if they wrote python and get it sped up 'on' tensorflow that's like cheating, converting python to Cython or C++.....and there's no true python2c++ yet....
jrowe#5371: pytorch is python > C++ > magic (its ancestor Torch was Lua)
jrowe#5371: lua is my favorite language
Ravna#1831: Because I can't tell if you are serious or not |
cat_#4534: it's just a valiant effort to slow down AGI timelines, right?
genai (Immortal Discoveries)#0601: i too cant tell who is serious, we just met...
cfc#2691: Tensorflow compiles graph computations written in python (Usually) with xla to run on GPUs, it's not running python directly
cfc#2691: Take a look at Jax, it will make more sense https://github.com/google/jax
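(A minimal sketch of what cfc means: the function below is traced once and compiled with XLA; later calls run the compiled kernel rather than interpreting Python.)
```python
import jax
import jax.numpy as jnp

@jax.jit
def affine(x, w, b):
    return jnp.dot(x, w) + b

x = jnp.ones((8, 512))
w = jnp.ones((512, 512))
b = jnp.zeros(512)
print(affine(x, w, b).shape)  # (8, 512); compiled on first call
```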
cfc#2691: Also, I doubt you can make a faster data-driven algorithm in C++ than I can make in Jax, just based on the fact that it's jitted and running on parallel hardware
chilli#5665: That's not true haha
cfc#2691: What isn't?
cfc#2691: I mean, you can program for GPUs on C++, but good luck with that
chilli#5665: what do you mean by "data-driven algorithm"
chilli#5665: there are a lot of things that are inherently sequential that would be very slow in Jax.
cfc#2691: I mean stuff that's not dependent on external input, like network requests or database access, just data crunching
chilli#5665: yeah, it's very easy to write code like that
chilli#5665: or well, here's a trivial example 😛 - matmuls with different shapes on every input
cfc#2691: Hahaha, worst case indeed
cfc#2691: It'd have to JIT every run
chilli#5665: right, and there a lot of other examples as well
chilli#5665: if you have something that's not expressable as XLA primitives, XLA is not going to generate faster for loops than you can write in C++
cfc#2691: I stand corrected
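For the curious, a hedged sketch of chilli's worst case (assumes the `jax` package is installed; the function name is illustrative). `jax.jit` specializes compiled code on input shapes, so a new shape on every call means a fresh trace and XLA compile each time:

```python
import jax
import jax.numpy as jnp

jax.config.update("jax_log_compiles", True)  # log a line on every recompilation

@jax.jit
def matmul(a, b):
    return a @ b

for n in (4, 8, 16):  # a new shape each iteration -> a new trace + compile
    a = jnp.ones((n, n))
    matmul(a, a).block_until_ready()
```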
cfc#2691: Oh damn, just got beta access to gpt3
cfc#2691: If anyone has an idea of what to do I'm all ears |
Louis#0144: Erotic fanfic
je maintiendrai#3304: recipes
Ravna#1831: Invent a new fictional alien language by prompting it with English-alien language pairs
aegis#2320: GPT3 output erotica even without me asking
StellaAthena#3530: @aegis OAI’s GPT-3? Our models?
aegis#2320: OAI davinci or whatever the big one is
StellaAthena#3530: I just found the greatest FOIA request ever
Louis#0144: Send
StellaAthena#3530: Read the US Cyber Command's internal report on their attempt to engage in cyber warfare using memes: https://cdn.muckrock.com/foia_files/2021/02/16/21R019_RESPONSE.pdf
Daj#7482: omg these "memes" are fucking trash
Daj#7482: (Or they censored the good ones because they're still in circulation 🤔)
gwern#1782: https://twitter.com/cnmf_cyberalert/status/1311743710997159953?lang=en wunderbar
cfc#2691: Has anyone tried to make gpt play zork yet?
Louis#0144: Yes
Louis#0144: Look up my advisor
Louis#0144: Mark Riedl
Louis#0144: There’s many papers in the lab on GPT playing zork
Louis#0144: Atleast like
Louis#0144: 4 or 5
Louis#0144: Probably more |
EricHallahan#1051: And *Zork* is a very closed-minded restriction. You can play effectively all Infocom games when you get it hooked up.
Louis#0144: No you can’t
StellaAthena#3530: Mark also has papers about playing D&D
Louis#0144: No one has beaten zork
Louis#0144: Not even da vinci can beat zork
cfc#2691: There goes the novelty of the idea
cfc#2691: *throws fork of frotz into the trash*
Louis#0144: If you’re interested in storytelling like that though then we have people at Eleuther who do storytelling now
Louis#0144: I think it’s safe to call Stella a computational narratologist
Louis#0144: lol
StellaAthena#3530: Rude
Louis#0144: @bmk does storytelling too
Louis#0144: Just less theory driven
gwern#1782: (whenever I read about adventure games, I wonder how anyone beat them without guides or reading the source code. iirc I read that the original Adventure required people to read the source before they could figure out how to win)
bmk#1476: storytelling is made of tokens, i predict tokens
Louis#0144: Lul
Louis#0144: A lot of my lab mates are in this server too
Louis#0144: (hi guys)
bmk#1476: stop the GT invasion pls
gwern#1782: build the wall (around georgia) |
StellaAthena#3530: 15 people have the GT role
gwern#1782: we'll let them export the peaches but not the racism or mooncakes
Louis#0144: pralines
Louis#0144: pls
Louis#0144: I need them
Louis#0144: nitarakad#5066 needs the GT role
gwern#1782: no. it's for your own good.
StellaAthena#3530: Can anyone name an AI algorithm that is:
1. Currently deployed in the real world
2. Is unethical in some fashion
3. Is “exposed to the public” in the sense that I can deliberately interact with it (e.g., predictive policing algorithms are not exposed)
Louis#0144: mBART
bmk#1476: facial recognition on street crossings
cfc#2691: Google keeps suggesting baby stuff to me and asking how old the child in my house is
asara#0001: suggestion algorithms and edge cases of google search
Louis#0144: Yeah def MBART imho
Louis#0144: Any translation model has ethics issues
bmk#1476: ~~openai api~~ /s
StellaAthena#3530: MBART the language model?
Louis#0144: Yes |
Louis#0144: https://huggingface.co/transformers/model_doc/mbart.html
Louis#0144: It has weird issues with genderless languages
Louis#0144: https://twitter.com/lessteza/status/1374270647879135233?s=21
Louis#0144: This isn’t MBART
StellaAthena#3530: re: search, those are good suggestions but probably a little too difficult to interact with. I can interact with them, but not in a sandbox-like setting or a way that’s amenable to research
Louis#0144: but I’ve seen MBART do stuff like this
StellaAthena#3530: What the fuck is this about
Louis#0144: https://twitter.com/doravargha/status/1373211762108076034?s=21
gwern#1782: imagine thinking those are bad translations... and then repeating the urban legend about the twitter cropping in your followup comment. man, this is truly 'AI ethics' in a nutshell
bmk#1476: https://www.youtube.com/watch?v=ectdRsyj-zI
cfc#2691: This one is the best example imo
asara#0001: What type of solution/improvement would they suggest be made to fully fix this?
bmk#1476: shenzhen has a population of like 10 million, surely finding a resident there to help test stuff out couldn't be hard
Louis#0144: Beats me
Louis#0144: I have no idea
Louis#0144: It’s a hard problem
StellaAthena#3530: 10 million people is a tiny fraction of the world’s population, and this is a group in particular that’s virtually invisible to me, an American who doesn’t speak Chinese.
cfc#2691: Fully neutral training set? Every gendered pronoun gets copied into the opposite gender too
Louis#0144: No that would make things worse imho
StellaAthena#3530: @bmk if you can put me in contact with someone I would love to talk to them |
cfc#2691: Be careful not to destroy their social credit score
Louis#0144: Oh my ex lives in Shenzhen
Louis#0144: Do u want me to poke her
Louis#0144: Her and I are still good friends
Louis#0144: She does AR research
Louis#0144: Ex is not a cat girl sorry to disappoint
StellaAthena#3530: Is that what kids are calling it these days
cfc#2691: I read the ccp has uyghur detection AI
bmk#1476: pokemon go to the street crossing
StellaAthena#3530: Yeah, lots of it. I can provide plenty of sources
gwern#1782: they claim to. how well it works is an open question, to say the least. not like they're offering independent evaluations
StellaAthena#3530: Unfortunately I can't get my hands on those algorithms
bmk#1476: not having access to the algorithms sounds like about 70% of the difficulty of executing a real life attack though
StellaAthena#3530: For a typical ML dev, yeah probably
StellaAthena#3530: Well
StellaAthena#3530: It's more like 75% "adversarial attacks don't actually work" and then 20% access
Louis#0144: Not only do u not have access to the algo
Louis#0144: U don’t have access to the output
Louis#0144: How tf do u plan to do that
StellaAthena#3530: In terms of real-world attacks on deployed algorithms, the following are plausible today: |
- Data poisoning attack
- Model backdooring (insider only)
- Model inversion attacks (limited)
- Model stealing attack
StellaAthena#3530: The stuff that gets all the hype doesn't work outside of a lab
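As a toy illustration of the first item on that list (not any real deployed system; assumes scikit-learn), data poisoning can be as simple as an attacker flipping labels in the slice of training data they control:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in for spam/not-spam labels

clean = LogisticRegression().fit(X, y)

y_poisoned = y.copy()
idx = rng.choice(len(y), size=200, replace=False)  # attacker controls 20%
y_poisoned[idx] = 1 - y_poisoned[idx]              # label flipping

poisoned = LogisticRegression().fit(X, y_poisoned)
print(clean.score(X, y), poisoned.score(X, y))  # accuracy drops for the poisoned model
```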
gwern#1782: data poisoning is done pretty regularly by spammers and fraudsters, isn't it? that one I thought was (real-world)
Louis#0144: Are u doing this Bc the paper ended and u want a new pet project LMAO
StellaAthena#3530: Yes. That's a list of the ones you can do IRL.
gwern#1782: well I mean not merely 'could do' but people have been doing it for a long time. I recall a gmail presentation about spammers deliberately attacking their filters with bad data to try to get their latest campaign through
StellaAthena#3530: Hmmm
gwern#1782: like... what was it, registering lots of gmail accounts to deliberately abuse the spam/not-spam buttons in targeted ways
StellaAthena#3530: That's hilarious
gwern#1782: (unfortunately I don't have a link about this, it was a very long time ago)
StellaAthena#3530: That's the kind of thing that I came up with but dismissed as being too tedious for me to be willing to actually do it
gwern#1782: considering how tiny the revenues of spammers are, they put some remarkable efforts into it
StellaAthena#3530: Same with creating 1,000 YouTube accounts and trying to fuck with recommendation algorithms
StellaAthena#3530: Yeah, I don't get it
StellaAthena#3530: Robocalls are one thing, cuz those are trivial now
cfc#2691: Isn't SEO basically fucking with the Google search AI?
StellaAthena#3530: -ish, though I wouldn't call Google Search a good example of an unethical algorithm |
gwern#1782: once in a while you read something like https://github.com/eyal0/Chicken-story/blob/main/README.md and you can't help but think 'good lord why don't you just get a legit job, you clearly could make like 50x more doing something good for the world'
StellaAthena#3530: Some people genuinely enjoy being a professional net drain on human utility
gwern#1782: (tbf I think most of them eventually *do* and the bulk of cybercrime/spam is done by skiddies)
cfc#2691: I once tried to sell malware; it's so not worth it financially, in legal risk, and in customer support
StellaAthena#3530: Why did you try that
cfc#2691: Money issues
cfc#2691: But it was easier to get a real job and scale the corporate ladder
StellaAthena#3530: Genuine theft seems both safer and easier tbh
gwern#1782: one reason is that you can do it from anywhere at any age. that's always been one of the big reasons for eastern europe/russia/africa/india cybercrime. back when you had computer skills but no local employers who actually would pay well for them. not such an issue these days now that even the poorest countries use smartphones and computers heavily
cfc#2691: Brazil too
gwern#1782: hm, I don't associate brazil as a historical hotspot but sure I guess, it's a big country
cfc#2691: I've met a few crackers from here on ircs and forums
cfc#2691: And riot chats
cfc#2691: I felt so much better after dd if=/dev/null of=/dev/sda
gwern#1782: heh
gwern#1782: "I'm legit now, bro. I'm out of the scene. never talk to me or my daughter desktop again."
cfc#2691: Nope, was just a stupid skiddie
cfc#2691: Not trying to make me sound smart
cfc#2691: It's a retarded waste of time
bmk#1476: actually do you mean /dev/zero |
cfc#2691: I mean I clicked the gparted gui button
cfc#2691: Let's be real here
cfc#2691: Format to ext4 yes confirm wait install something non-skiddy on top
Louis#0144: Love
cfc#2691: Anyway, have you seen this article? Interesting from the neuroscience perspective for someone who has no idea about it https://towardsdatascience.com/towards-the-end-of-deep-learning-and-the-beginning-of-agi-d214d222c4cb
cfoster0#4356: *Jeff Hawkins noises intensify*
gwern#1782: truly egregious overuse of bolding aside, he seems way too uncritical about hawkins, and afaik saccading and multiple views can reduce the damage of adversarial examples but don't remotely 'solve it' the way he claims
ethan caballero#6044: AGI
cfc#2691: Clickbaity title, I know
cfoster0#4356: I buy some of the themes but to be honest have been disappointed with Hawkins' new book
proceduralPopcorn#7319: so are we talking about NNs
cfoster0#4356: For all the time he's spent working on this new theory, I would have *hoped* for something that... idk... at least sketches out the algorithms he's referring to
Agent J#2635: hey i want to test out the GPT-neo pretrained models
Agent J#2635: read the readme.. not quite sure though how to actually use it
cfc#2691: The Colab link is the easiest way
StellaAthena#3530: @Agent J Did you open the colab file
Agent J#2635: any guides a bit more thorough?
Agent J#2635: uh negative
Agent J#2635: let me find that
cfoster0#4356: Nothing more thorough than the Colab. That'll be your best bet for the time being |
Agent J#2635: oh boy
Agent J#2635: haha this is gonna require coffee
Agent J#2635: thanks guys will check out tomorrow
jrowe#5371: https://numenta.com/resources/biological-and-machine-intelligence/
jrowe#5371: the numenta HTM school series on YouTube is worth it to understand how his ideas work in practice
Deleted User#0000: yeah same
jrowe#5371: the book was a letdown imo
chilli#5665: Yeah they definitely don't lmao
cfoster0#4356: Thanks! Yeah, I think I came across this and realized nothing of substance was added to it from Thousand Brains :/
jrowe#5371: there's a thousand brains update to the white paper from a while back, which triggered the book I think
jrowe#5371: but it's basically just refining the idea of neural columns as modules
Deleted User#0000: i still like the idea, there just hasn't been anything new since i first learned about it more than a year or two ago
Deleted User#0000: perplexity was low
jrowe#5371: the real time learning /streaming would be nice if it could be generalized to something that runs on a gpu
jrowe#5371: or something that could integrate with transformers lol
Deleted User#0000: numenta doesn't have a track record for building things that work
Deleted User#0000: i think jeff's skepticism of deep learning in general doesn't help either
cfoster0#4356: this is one of the reasons I'm on a neural cellular automata bender
Deleted User#0000: it's too late for them to catch up
Deleted User#0000: if anything, they should try to bolt their idea onto something that works (transformers or something else) |
Deleted User#0000: yess, i see parallels with glom, even Bengio's RIMs have voting
gwern#1782: they could always explain how a deep transformer is actually a crude weight-tied thousand-brains or something lol
Deleted User#0000: among modules
Deleted User#0000: i get the feeling Jeff isn't familiar with transformers, it's a bit too new for them
cfoster0#4356: There was a fun video where Steve Omohundro was talking to numenta folks about GPT 3
cfoster0#4356: Lemme dig it up
cfoster0#4356: https://youtu.be/0ZVOmBp29E0
Deleted User#0000: i saw Lex interview Dileep and ask about GPT3
cfoster0#4356: Jeff seemed like a mix of confused and unimpressed
Deleted User#0000: and Dileep (used to work for Jeff before starting Vicarious) basically didn't know much about it
gwern#1782: sad
gwern#1782: :sadge:
Deleted User#0000: yea, i've seen a lot of Jeff videos, and whenever deep learning is brought up, he's kind of just like 'well, i know that's not how the brain works' kind of attitude
jrowe#5371: yeah, disappointing
Deleted User#0000: maybe in the end we will see the thread that connects all the ideas
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/824127357643390986/1_-fXKvfupJw6OUDLTo3CzhA.jpeg
Louis#0144: amongus
jrowe#5371: amogules
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/824128364167036928/Screenshot_2021-03-23-21-51-42-090_com.android.chrome.png
guac#4716: is GLOM sort of a flavor of cortical columns? (not too familiar with the numenta work) |
cfoster0#4356: Yeah
cfoster0#4356: Or at least it's intended to be
guac#4716: hmmm thanks i might pick at this
Louis#0144: GLOM reminds me of thousand brains
Louis#0144: @cfoster0 agreed?
cfoster0#4356: Definitely
cfoster0#4356: Within the first few pages I Ctrl-F'ed for Hawkins
guac#4716: did any of you get any good results from the GLOM modules y'all wrote?
Louis#0144: Lmao
Louis#0144: Did he cite Hawkins
Louis#0144: No
cfoster0#4356: He's casually mentioned in a footnote but I'm not sure if there's a citation
Louis#0144: Oh ok
cfoster0#4356: I don't think there's been a whole lot of effort to try training them
cfoster0#4356: More interest in sketching it out
Louis#0144: I wanna see how they scale
guac#4716: they seem like they wouldn't lol
𓅬 gabriel_syme 𓅬#3220: how to train it is not clear yet I guess
𓅬 gabriel_syme 𓅬#3220: like he proposes a super simple thing but not sure
Louis#0144: We should totally try for 400M or something sizable (for CV) |
cfoster0#4356: My bet is that similar scaling rules apply as with transformers. The advantage you get is local communication, so you're not bottlenecked
cfoster0#4356: I was gonna try some scaling laws for NCA once I figure out how lol
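For anyone unfamiliar with NCA, here is a rough sketch of the local-communication point (loosely modeled on Mordvintsev et al.'s growing-NCA setup; assumes PyTorch, and the module is illustrative, not a reference implementation). Every cell updates from its 3x3 neighborhood only, so cost scales with grid size rather than with all-pairs attention:

```python
import torch
import torch.nn as nn

class NCAStep(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        # Perceive: depthwise 3x3 conv gathers each cell's neighborhood.
        self.perceive = nn.Conv2d(channels, channels * 3, 3,
                                  padding=1, groups=channels)
        # Update: 1x1 convs act as a tiny per-cell MLP.
        self.update = nn.Sequential(
            nn.Conv2d(channels * 3, 128, 1), nn.ReLU(),
            nn.Conv2d(128, channels, 1))

    def forward(self, state):
        return state + self.update(self.perceive(state))

state = torch.randn(1, 16, 64, 64)  # batch, channels, height, width
step = NCAStep()
for _ in range(8):  # iterate the same local rule
    state = step(state)
```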
Louis#0144: He loves hebbian learning
guac#4716: that's interesting. keep us updated please!
Louis#0144: Hebb’s rule has DL derivations
𓅬 gabriel_syme 𓅬#3220: I see. Although he just says train it as an autoencoder 😄
Louis#0144: For recurrent models mostly
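For reference, Hebb's rule itself is a one-liner: the weight change is the outer product of post- and pre-synaptic activity, no gradients involved. A bare-bones sketch (assumes NumPy; the row normalization is a crude Oja-style stabilizer added so the toy doesn't blow up):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))  # 4 inputs -> 8 outputs
lr = 0.1

for _ in range(100):
    pre = rng.normal(size=4)              # pre-synaptic activity
    post = W @ pre                        # post-synaptic activity
    W += lr * np.outer(post, pre)         # "fire together, wire together"
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep rows bounded
```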
𓅬 gabriel_syme 𓅬#3220: and image inpainting I guess?
Louis#0144: I guess
Louis#0144: He’s lost his edge
Louis#0144: Capsule networks sucked too
Louis#0144: 🤷‍♂️
cfoster0#4356: something something hardware lottery
Louis#0144: Yeah
Louis#0144: He works at google tho
Louis#0144: He has so much hardware
Louis#0144: Is that a GPU in your pants or are u just happy to see me
bmk#1476: hardware lottery namespace clashes with silicon lottery in my brain
Louis#0144: Me too
chuan_l#2858: Hi all , just stumbled across " gpt - neox " today .. |
Looks great , love the open approach and can definitely help with ux / design / and #website things. Just slammed until end of april working on digital human but keen to help out !
EricHallahan#1051: Welcome! If you haven't already, take a look in #rules for the resources there. (It seems like you already have, though.)
chuan_l#2858: — some old front end work ,
mostly in " unity " or " unreal " nowadays :
chuan_l#2858: https://cdn.discordapp.com/attachments/729741769738158194/824149264828596244/wp__admin___network_site_detail_wide.png
chuan_l#2858: [ Also did react components for big data ]
AerysS#5558: I notice this strange percentage next to Oral representation. Anybody have a clue what it is? https://cdn.discordapp.com/attachments/729741769738158194/824240573040427038/unknown.png
nz#9710: How many ICLR submissions were accepted as oral presentations I think (and the associated %)
Kia#2550: So guys, any month/year estimate for when the full GPT-Neo can roll out? Nonetheless, I'm just curious
EricHallahan#1051: Soon™️
EricHallahan#1051: Less time than it took Voyager to exit the solar system.
EricHallahan#1051: Less time than it took the Cassini family to map France.
EricHallahan#1051: Hopefully less than a year?
EricHallahan#1051: No earlier than August by our current estimation.
Kia#2550: Hmmm Thanks for the help
EricHallahan#1051: Yeah, it's going to take some time to cook up.
Kia#2550: Also I'm talking to a Dev...
EricHallahan#1051: "Dev"
Kia#2550: Better...But oh well. Thanks for the reply and help
EricHallahan#1051: What were you hoping to hear? |