I'm on Linux; I like Geany for pretty much the same reasons. |
This is fantastic!! Thanks for looking out for the noobs like me. Subscribed |
Thank you for the tutorial |
thank you!!!!!! |
Hey David, I'm a big fan of your work. I'm currently learning to fine-tune GPT-3 models and I can't find the 'CreativeWritingCoach' repo on GitHub. Is there a way you can make it available? Thanks. |
Hello sir, storing the API key locally is giving an error saying "Incorrect key provided". Can you please explain the format it should be stored in? |
<a href="https://www.youtube.com/watch?v=9971sxBhEyQ&t=8m09s">8:09</a> 100% agree 🙂 Awesome that you still develop it 🙂 |
Hey, this might sound stupid but can I copy this for mac or will this damage my os? Thank you for the very elaborate and helpful video! |
hi David. I've been tinkering like crazy and still get the invalid syntax error. Any advice? Thanks so much for the video btw, feel like I'm learning a lot., It works. Stay with it. Takes 1-2 hours and Google helps. But you learn a lot. |
I have learnt more in this video than I have learnt ever 🤣 |
You are an amazing teacher! |
Dude.. thank you so much for this tutorial series. It literally saves me so much time, endless searching and uncomfortable explaining. It's so hard to find good, easy to follow tutorials without much clutter. This is perfect. |
Really amazing. Thanks so much. |
Hi David,<br>Love these beginner series!<br>Unless I missed it in the description box? Discord address? Didn't see it... The Discord link doesn't work..., <a href="https://discord.gg/bdZJtdrJ">https://discord.gg/bdZJtdrJ</a> here you go |
I always give up on tutorials, but I made it through this one. VERY EXCITING TIMES., @David Shapiro ~ AI I think it's quick with no fat. And explains every part for non-devs which is really great., Nice! What did I do well that helped you? |
sk-[insert]RK7U |
I just wanna say thank you.... for introducing me to Mongolian Neofolk., Ha! Eagle eyed, I didn't even notice. Where did you spot it? |
Did you do this for me? You must have done this for me. Because this is exactly what I need. I hoped at some point you'd do a "for beginners" series. And this is quite relevant to the DM I sent to you on Discord. Thanks for the quality content you put out, man., That's great. I really did hope that something like this would come about., It was actually Ene's idea, but yeah lots of people need it! Glad it helps |
Dope keep them coming |
I've added you on LinkedIn. Can you share the Discord link? I've been waiting for us to get closer to AGI before learning NLP and now that word2vec, and DNN with transformers have replaced hand-coded grammar, I'm charting my path.<br><br>I'm impressed that you can do this on top of your day job. Thank you for all of your content. |
Great video, will recommend to colleagues! |
Hey everybody, David Shapiro here with a quick video. Well, I think it's going to be quick. I've had a few requests for something pretty similar: one person asked for something to summarize documentation, another asked for summarizing notes of some type, basically creating executive summaries. This is already a solved problem, but there are enough people asking how to do it that I figured, why not make a video on it? So we're going to make a recursive summarizer: public repo, add a readme, MIT license. Basically all we're going to do is create a loop. There'll be an input document, we'll break it down into chunks using the module textwrap, and from there we'll summarize each chunk and put the summaries together. Then you can reassemble all those chunks and recursively summarize again and again until you end up with something that's basically unrecognizable.

Okay, so git clone the summarizer, open up my C drive, recursive summarizer, there we go. Add my .gitignore and my OpenAI API key, just to start with some boilerplate stuff. In AutoMuse I did use recursive summaries; I did not use textwrap in that one, I think it was in book_to_chunks. Yeah, okay. So import textwrap. What textwrap does is: you give it a block of text as a string and it breaks it into chunks of strings that are more or less the same size. This will probably all be one file, so we'll just do recursively_summarize.py. Let's start with a book. What's the shortest one we have here? Alice in Wonderland, cool, we'll start with that. We'll copy it in and just call it input, so whatever you do, you'll have input.txt and then output.txt.

This is a technique I've actually used. I had a contract, an operating agreement for a company, that I had to read, and it was an 80-page document, something like 60,000 words. I didn't want to read 80 pages, so I used this technique and summarized it down to about 15,000 words, a quarter as long. Obviously I'm not going to show you that private legal contract publicly, but I can show you the same principle.

So we've got input, recursively summarize, and so on. I'll copy my open_file function, because it's super useful, and save_file, because it's also useful. Then if __name__ == '__main__': that just says this is our main function. We're going to open a file: all_text = open_file('input.txt'). You could make the filename a command-line argument; I personally don't like doing that kind of thing, but you're welcome to make this a command-line tool if you want. Oh yeah, we also need to set our OpenAI API key. Then chunks = textwrap.wrap(all_text, ...). We're going to do somewhat longer chunks, 4,000 characters, because we're just doing one summary each. We'll also do result = list(), so we'll have a list of strings as the final result. Then for chunk in chunks, we'll summarize each one.

I need to grab my gpt3_completion function and put that up here. Again, I recycle code all the time: you get a function that works, you just copy-paste it ad infinitum. So we'll import os and from time import time, sleep, because those are two things that function needs. Then we need a prompt. Let's grab a selection; that's about 4,000 characters. How do we want to summarize this? We'll start with "Write a concise summary of the following", and we'll leave the temperature at 0.7 so it can be creative. Okay, so it says: in this passage Roger Chillingworth and Reverend Dimmesdale discuss the secrecy of some sinners. Dimmesdale argues that some men keep their secrets because they hope to redeem themselves, while Chillingworth suggests that they are simply afraid of being found out. The conversation is interrupted by the sound of Pearl's laughter, and they watch as she plays in the cemetery. That seems good to me.

So I had this idea: this is a good concise summary, but let's try "Write a concise summary of the following. Be sure to preserve important details." First, note it went from 4,000 characters to 376, a reduction factor of more than 10. With details preserved, we get: Hester Prynne and her daughter Pearl are walking through the burial ground when Pearl starts skipping and dancing around irreverently. Hester doesn't stop her but merely tells her to behave more decorously. Pearl starts arranging burrs along the lines of the scarlet letter on Hester's bosom. Is that really what happened? That seems entirely different from the first one. Oh, interesting, so it mostly summarized the details from the final bit. I don't necessarily like that summary, because look how different these two are: the first one misses the details of the burrs entirely. Another wording I've used before is "moderate summary of the following", meaning compress it, but not too much. Interesting, it's kind of ignoring the beginning in both of these; even the moderate summary reads more like the second one. It's almost like we need both. What I'm trying to get is something that feels like a good summary. So let's try "Write a detailed summary." That looks a little better: it captures both. It's about twice as long, but it got all the details we want. The selection is 838 characters, so that's still more than a factor of four, because we went from 4,000 characters to less than a thousand. It's a quarter as long. I like that; we're going to stick with "Write a detailed summary of the following" as our prompt, saved as prompt.txt.

So then, for each chunk: prompt = open_file('prompt.txt').replace with the chunk. Basically, each of these 4,000-character chunks gets put into the prompt and sent up to GPT-3 to summarize. I hope I don't run out of tokens; we'll see. Pardon me, I went for a really long bike ride earlier, so I'm still rehydrating. Okay, gpt3_completion with a token limit of 1,000, that's fine. So summary = gpt3_completion(prompt). That gives us our summary. We'll print it out just so we can watch it going, and then result.append(summary). Once it's all done, how is it you join a list? Let's see if I can remember Python: l = [1, 2, 3], then join. "Expected string instance", ah, that's what I did wrong: right method, wrong data type. So ' '.join(l) on strings, that's what I wanted. Then save_file, and the content will be the joined result, saved to output.txt. Actually, let's join on a double newline, because then there'll be vertical space between sections, so we can see where the boundaries of the summarization happened. That'll make it a little easier to read.

One more thing I sometimes do: import re, and then text = re.sub('\s+', ' ', text). If the model adds in too much vertical whitespace or too many newlines, this compresses the output of each summary into a single line. re.sub is regex substitute: we substitute any run of one or more whitespace characters (newlines, tabs, anything like that) with a single normal space. That makes the output nice and compact.

I think that's it. How long is this going to run? The input is 171 kilobytes and it's about four kilobytes per chunk, so 171 divided by 4 is about 42 sections. That shouldn't be too bad. Really I should save as we go, just so I can show you, and I need to add the gpt3_logs folder, which is where the completion function saves its logs. Let's run it: cd recursive summarizer, python recursively_summarize.py... "openai is not defined." What do you mean? I've got to import openai; I always forget something. Import openai, and away it goes. I could probably make these chunks a little longer, like 5,000. There we go: "she escapes by climbing a tree," excellent; "Alice falls down a rabbit hole and finds herself in a long dark tunnel," yeah, okay, this is great. It looks like it's doing just fine, so I'm going to pause the video so you don't have to watch it run through 40 iterations or whatever.

Okay, and we're back. It didn't take too terribly long. It was 42 chunks total, and I predicted 42.75, so spot on. When text is encoded as UTF-8 it's roughly one byte per character, so a thousand characters is roughly one kilobyte.

Here's the output. You can see the double newlines marking each section. All told, the length is 45,000 characters. This is Alice in Wonderland, and the input was 174,000 characters, and 45 divided by 174 is almost exactly a quarter. You could do this with anything. Like I said, I've gotten questions about academic texts: yes, you can do this with academic texts, legal contracts, works of fiction, whatever you want, and it will summarize pretty concisely. Once you get to the very end, you'll see it's basically just summarizing the Gutenberg boilerplate, but up until that point it's nice and concise: "Alice falls asleep by a river and has a curious dream in which she's put on trial for stealing the Queen's tarts. The evidence against her is entirely circumstantial, but the jury finds her guilty and she is sentenced to death. However, before the sentence can be carried out, Alice wakes up and realizes it was all just a dream. Alice is sitting on the riverbank with her sister and she notices a White Rabbit running by. She follows the rabbit down a hole and finds herself in Wonderland, where she has a series of adventures." Looks like it's repeating the ending. Interesting. But there you have it; that's pretty much all there is to it. I'll just do a git status, git add, git commit -am 'done', and git push.

Feel free to use this. Because I already hear people asking about Word documents and PDFs: all I've done, in Python or PowerShell or whatever, is save those as .txt files, and that works just fine. It basically removes all the formatting, because GPT-3 doesn't understand the XML underpinnings of a Microsoft Word document or how to read a PDF file; it only reads plain text. Even then it'll do a pretty good job. You could change the prompt back to "concise" and it'll get even shorter, a factor of about 10 to 1, but as I showed at the beginning of this video, you're at risk of losing important details if you say "concise summary". And if you wanted, you could modify the script to run again, treating the output as the next input, to make it even shorter. I'm not going to worry about that right now, because literally all you'd do is copy the output to the input and run it again, or add another loop. You can play with that if you want. But yeah, there you have it. I think I'll call it a day. Thanks for watching |
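The script assembled in the transcript above can be sketched roughly as follows. This is a reconstruction, not the actual repo code: `gpt3_completion` is replaced by a plain `completion` callable (the real script wraps `openai.Completion.create` with temperature 0.7 and max_tokens 1000), and the prompt is inlined rather than loaded from `prompt.txt`.

```python
import re
import textwrap

def open_file(filepath):
    # Helper reused across the videos: read a whole file as UTF-8 text.
    with open(filepath, 'r', encoding='utf-8') as infile:
        return infile.read()

def save_file(content, filepath):
    with open(filepath, 'w', encoding='utf-8') as outfile:
        outfile.write(content)

def recursively_summarize(all_text, completion, chunk_size=4000):
    # Break the input into ~4,000-character chunks (textwrap keeps chunks
    # roughly equal and breaks only at whitespace), summarize each one,
    # and join the summaries with a blank line between sections.
    chunks = textwrap.wrap(all_text, chunk_size)
    result = []
    for chunk in chunks:
        # In the video the prompt lives in prompt.txt with a placeholder
        # that gets swapped for the chunk; inlined here for simplicity.
        prompt = 'Write a detailed summary of the following:\n\n' + chunk
        summary = completion(prompt)
        # Collapse runs of whitespace/newlines inside each summary.
        summary = re.sub(r'\s+', ' ', summary).strip()
        print(summary)
        result.append(summary)
    return '\n\n'.join(result)
```

Because `completion` is just a str-to-str callable, you can plug in an OpenAI call, a local model, or a stub for testing; the chunking and joining logic stays the same.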
Please consider supporting me on Patreon! <a href="https://www.patreon.com/daveshap?fan_landing=true">https://www.patreon.com/daveshap?fan_landing=true</a> |
Works very well! Thank you - excellent tutorial |
45 iterations of API use? Total cost? Just curious 👀 It's as if you proofread the summarized content, rewrite manually, check, and then do the final work. Check again and again.<br><br>To make it perfect and up to standard |
When you're breaking into chunks in this fashion, don't you risk cutting in the middle of a word/sentence which can impact summarization of that chunk? |
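(On the chunking question above: `textwrap.wrap`, which the video uses, breaks only at whitespace by default, so chunks never end mid-word, though they can absolutely end mid-sentence. A quick illustrative check:)

```python
import textwrap

text = "The quick brown fox jumps over the lazy dog. " * 4
chunks = textwrap.wrap(text, width=60)

# No chunk ends in a partial word: re-splitting the chunks gives back
# exactly the original sequence of words.
assert " ".join(chunks).split() == text.split()
for c in chunks:
    print(repr(c))
```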
very cool. subbed |
One of my dreams was to be able to click on parts of a summary and then it would expand it into bigger summaries. (the opposite direction). You would have to know which part in the summary corresponds to the original chunk, but that would be a supercool idea!, We talked about that on Discord. Have different layers of compression/abstraction and then a knowledge graph. |
Why not just paste whatever scientific article into the OpenAI Playground and summarize? I am just a beginner. Thank you, Too many characters, Same thought, because I really have no idea how to use Python; even after watching the video I still feel lost on how to make it work. |
super useful to say the least, thank you so much mate |
Have you ever tried improving the summarization by adding the context before and after a chunk? I’ll probably try this myself, but I wonder if the summarization could be improved if you feed the model some “key points” from the text along with every chunk. You can imagine that if you want a high-quality summarization, it would be good for the model to be able to point out things like “what happened to Alice here was foreshadowed when she was doing x.” Or something like that. For more academic texts, it might be something like relating concepts that were introduced in the text at different places in the text. Instead of just giving the definition of concept A, the model includes some additional info about how it relates to concept B even though it was not included in the chunk.<br><br>My guess is that you’d have to run a few different prompts across the text (though you could use a language model from huggingface to save some cash for some of the tasks) and then use the outputs as input to the general summarization. Like, use a model to extract the most important concepts from a paper (maybe make use of metadata and such), then store all the concepts in some way so that you can use them during summarization.<br><br>Any thoughts on this?, @David Shapiro ~ AI Ah, I see. Very cool! I’ll try something similar for my use-case and see how it goes., Yes, did that in the writing a novel series |
hello, I am afraid we may lose some key concepts while connecting the chunks, can you please share what you think about this worry?, @David Shapiro ~ AI hi David, thx for sharing your script. I just tested it to summarize a book, originally 518k in size => 122k (v1) => 40k (v2) => 16k (v3) => 4.4k (v4). I used the prompt "Write a concise summary, preserve important details, of the following". I retried summarizing v3 with the prompt changed back to "Write a concise summary of the following:" and v4b is also 4.5k in size (so even slightly bigger than the 4.4k one that was supposed to preserve important details). Is this expected? I was expecting a smaller size, since I omitted "preserve important details". Cheers, @David Shapiro ~ AI thank you for your response, Someone on the forum said if you add "preserve important details" to the prompt it does a good job. |
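The multi-pass shrinking described in this thread (each pass's output fed back in as the next input) can be sketched as a loop. `summarize_pass` here is a stand-in for one full chunk-and-summarize pass over the text, not a real API call:

```python
def shrink(text, summarize_pass, target_chars=5000, max_passes=5):
    # Re-summarize the previous output until it's under target_chars
    # or we run out of passes (each pass loses detail, so cap it).
    for _ in range(max_passes):
        if len(text) <= target_chars:
            break
        text = summarize_pass(text)
    return text

# Toy pass that halves the text, loosely mimicking the per-pass
# reduction reported in this thread (518k -> 122k -> 40k -> 16k -> 4.4k).
halve = lambda t: t[: len(t) // 2]
print(len(shrink("x" * 40_000, halve)))  # prints 5000
```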
I don't know if it was the intention of the video but I wound up reading the whole summary by using . and , to step frame by frame through the summary. I guess I know what the book version of Alice in Wonderland is now. |
Just found your Channel and your Plan with this series. This is super impressive. Thanks for doing it. 💯 |
this was really cool but doesn't feel economically viable with the token usage, no?, Depends on what you're trying to achieve and how much it's worth to you |
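Rough back-of-the-envelope math for that cost question, with loud assumptions: roughly 4 characters per token, and davinci-era pricing of $0.02 per 1,000 tokens. Both numbers are assumptions, not from the video; check current OpenAI pricing before relying on this.

```python
def estimate_pass_cost(input_chars, chunk_chars=4000,
                       completion_tokens=1000, usd_per_1k_tokens=0.02):
    # Number of chunks, rounded up.
    n_chunks = -(-input_chars // chunk_chars)
    # Prompt tokens at ~4 chars/token, plus up to 1,000 completion
    # tokens per chunk (the max_tokens limit used in the video).
    total_tokens = input_chars / 4 + n_chunks * completion_tokens
    return n_chunks, total_tokens * usd_per_1k_tokens / 1000

n, usd = estimate_pass_cost(171_000)  # Alice in Wonderland, ~171 KB
print(n, round(usd, 2))
```

Under those assumptions, one full pass over the book is on the order of forty-odd API calls and a dollar or two, which is the right mental model even if the exact price has changed.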
thank you :)<br><br>I'm super <i>not</i> corporate, and/but this seems essential, chunks depend on surrounding chunks for further summarisation :) clearly yes thank you. |
thank you so much |
morning everybody david shapiro here um today we are going to go over microservices so i posted a question on my youtube channel um asking if people wanted to learn about microservices because i realized this is something i've been going on about lately and uh everyone has been here for gpt3 and uh you might not know what a micro service is so without further ado here we go what the is microservice i don't care about getting demonetized and why is dave obsessed with them the short version of a microservice is that you it's when you break software into independent chunks so the old school way of building software was that you'd have a monolithic application where all the software is put together in the same executable file or in the same server and so on and this works up to a certain point but as you can imagine once you get bigger it's really difficult to run and so then what you do is you break all the parts of that software down into smaller chunks and then you have them communicate with each other like independent units or teams so monolithic versus microservices there is a few steps in between but this is where we started and this is where we're at uh okay but why the primary reason is that microservices are easier to build and maintain it's a simpler architecture so rather than have one big thing that has all these internal parts that have to communicate and be fine-tuned and if one part breaks it can make something over here break by decoupling all that it is a simpler architecture it's more like a web and also if one component is faulting you can take that component out fix it look at the communication that it has with other components and you can see exactly where in the network you have problems another principle of microservices architecture is that it's loosely coupled meaning that if this service blows up because its database blows up the whole the rest of the application might be okay another key advantage of microservices is that you know you as one 
human cannot understand the entirety of a huge platform like amazon you can however understand one microservice or actually several microservices generally speaking a scrum team will be responsible for three to seven microservices and so you have these smaller components that one human or one team is able to fully understand and master whereas you cannot have that in a super huge monolithic thing another thing is that it is much more dynamic and flexible and can evolve over time which in today's world where you know amazon and and netflix and facebook and whoever else are always adding new services and features that flexibility is critical when you adopt a microservices architecture okay a little bit of background who when where um so microservices architecture was pioneered by amazon in a big way they didn't invent it but um once the amazon platform got too big circa 2005 i think they started breaking it down into into smaller components so really it was a matter of necessity they said this is unmanageable we need we need something more more modular now everyone uses it um pretty much if you're doing software development today it's going to be in microservices um like there are there are stuff there's stuff on on aws and azure and stuff that allows you to just run code and containers and stuff and so it's all everything is containerized every you even even have serverless code where you just send a piece of code to the cloud it executes the code and sends back the result so you don't even need a container like so we're getting even more distributed it's like nano services microservices really took off around 2010 that's when aws services really started becoming popular although again they were available before that some of the key benefits uh you can have you can use any language so say for instance you have one team that's writing a microservice in in c because it has to be fast and efficient and cut down so for instance a machine learning microservice might be 
written in c whereas you might have a web front end that's written in like react or uh something else right um what is it uh my brain isn't giving that's i haven't done any web i'm not a web developer and it's been a long time since i've worked with web developers anyways point being you can have different microservices that have different functions and you can use whatever language or technology stack is ideal for that particular function but then they communicate with a standard standardized communication thing such as excuse me a rest api or an amqp message broker you don't need to know what those are specifically just know that those are two primary ways that um different computer or software components can talk to each other so rather than everything being encapsulated it says hey i'm going to talk to you what's your phone number uh basically um it allows for continuous delivery so amazon never goes down right you might lose one one or two functions but it's always adding stuff and it's just bolting on new parts and so that that idea of bolting on new parts as you go which you can't do with your car by the way it takes a lot of work but that is how continuous delivery works it's modular it's scalable distributed development that's what i was talking about in the last slide um one principle that i've adopted and this comes from unix world which is every tool should do one thing and do it well so rather than have a big giant platform that you know is like okay and and this part's okay and this part's okay when with a micro service that what you should do is focus on doing that one task and do it perfectly um okay so you know what now you know what a micro service is in in general so let's talk about artificial cognition and microservices here is a basic diagram of the brain sorry it's not in english i wanted to make sure that i stuck with creative commons licensed images and this was the best one because it was that i could find because it was simple and color 
coded but the point being is that you can see that there are roughly 12 or so a dozen or so specific big big regions of the brain each of these regions has much smaller parts um that uh that have specialized purposes there's also different components that are not represented here so point being is this is the brain it's got a lot of specialized region so here's like the brain stem that comes up and then the first thing is the primitive reptile brain then the the mammalian midbrain and then the human neocortex which is the big part on top so it's like a stack right and then off to the side you've got the cerebellum which helps control i think that's a cerebellum i might be wrong i should know this i've read a lot of neuroscience thing is is i read the parts and then i don't look at the diagram um and so anyways uh this helps coordinate complex motions so for instance if you have damage to the cerebellum or degenerative disease your emotions will be more jerky and you'll have less fine motor control um alcoholism chronic alcoholism will actually shrink the cerebellum um as well and any number of other things um okay so you get the idea that the brain has specialized structures and regions okay the human brain has 12 main components and hundreds of some sub components all with different specializations that's what i was just saying excuse me i don't know why i'm stuffy okay so what brain disease and injury tells us the first thing is that the brain is not monolithic your brain can break in specific ways and you lose very specific functions rather than global loss so for instance if you get a lesion on one part of your brain it's not like you're five percent less overall you will have loss of function wherever that brain legion is um or if you get like a head injury from a car accident or whatever so there's a few examples acute amnesia where that has to do with injuries to the hippocampus i believe so that's a very so the hippocampus is um this guy right here i 
believe um hippocampus literally means seahorse and so it's it's the seahorse-shaped part of the brain so acute amnesia means you lose your episodic recall you lose the story of your life but you don't lose declarative recall you still remember facts you still remember how to speak right so you can lose a very specific part of your memory but not even all of your memory loss of speech and stroke so one thing that that often happens when people have a stroke is that you know you might end up paralyzed on one side of your body you might end up without speech or any number of things but that's because a stroke happens and it damages one part of your brain loss of motor function and degenerative disease and then uh one of the more interesting things is visual neglect and uh prosopagnosia so visual neglect is where you don't notice things on one side of your field of vision or you don't notice specific things because you can no longer identify them and then prosopagnosia is face blindness where you see a face and it does not connect to identity recognition you say okay i see a face i don't know who that is right and it doesn't matter if you've seen them every day of your life you don't recognize the face because the visual image visual information of that face does not connect to the the episodic recall or the id you know whatever parts of your brain identify that is a person that i know so there's communication between specialized regions um and damage to those specialized reason regions resembles a microservices architecture so for instance um you know like if you're on netflix or amazon light you know you might get a notice a pop-up that says like oh hey your queue isn't available right now but all this other stuff is still working right that's what i mean by like you can lose one part but the rest of it's still working um so all of this interconnectedness in the brain but also the brain regions can function quasi-independently so for instance if you have 
With prosopagnosia, face blindness, you can still see, but you can't recognize faces. There are other conditions where you can't recognize an object. I can't remember what it's called, but I was reading about people who have lost the ability to recognize specific objects: they can describe it, but they say, "I don't know what that is, I don't know what it's for." Then they take their eyes off of it and they can still use it, because the functional part of their brain that knows how to use the tool is intact. I don't need to look at nail clippers to use them; I can feel them. So they feel it and say, "Oh, I know what that is, I know how to use it," even if they don't consciously know what it is. It's weird. Read V.S. Ramachandran; he's got all kinds of stories like that.

So when you see how granular and modular the human brain is, you realize: huh, that sounds a lot like a microservices architecture. At least, that's what I did. I'm a systems engineer, professionally. So I want to tease my audience: is this a microservices network or a brain map? I'm not going to tell you. Do you think this is a map of communications between microservices, or a map of a brain or a brain region? I'll let you figure it out. Tell me in the comments if you know what this is, and don't spoil it for anyone else.

Okay, so how do we copy the brain? Obviously the human brain is our best model for intelligence, and a lot of work has gone into modeling the human brain as a way to get machine intelligence. The biggest things right now are deep learning neural networks, so it's no surprise to me that our smartest machines, or at least the ones most similar to human intelligence, are neural networks. We're copying the way the human brain works at a microscopic level.
But what about the macroscopic level, the whole brain? You can train an artificial neural network with the equivalent of, say, a thousand neurons. GPT-3 has 175 billion parameters, and it takes roughly a thousand parameters to model one human neuron, so GPT-3 is the equivalent of about 175 million neurons. The human brain has around 90 billion, so we're still a few orders of magnitude off.

There are two primary ways to model the brain. The first is the structural model, where you copy the regions and connectivity of the brain; basically, you're copying the organic structure. This is what you may have heard called whole brain simulation. IBM, I think it was, was working on the connectome, modeling the connectivity of the entire brain. That's the structural model: "let's go copy this, but in virtual form." I don't think that's necessarily the best way to go.

The other is the functional model, where you identify discrete functions of the brain, such as memory, learning, visual processing, morality, whatever. You say, "Okay, let's treat morality as its own function," copy that, and then wire it all together like a software application, specifically like a microservices application. I take the functional-model approach. It is prohibitively expensive and difficult to map the entire brain and then run a whole brain simulation. I also don't know that it's ethical, because here's the thing: if you put a brain in a jar and turn it on, it's not going to have any sensory input unless you fabricate sensory input, and it's not going to have any sensory output unless you give it a virtual body. What if that brain just sits there panicking?
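The back-of-the-envelope neuron comparison above can be written out explicitly. The 1,000-parameters-per-neuron figure is the rough assumption from the talk, and 86 billion is a commonly cited neuron-count estimate (the talk rounds to roughly 90 billion):

```python
import math

gpt3_parameters = 175_000_000_000
params_per_neuron = 1_000             # rough assumption from the talk
human_brain_neurons = 86_000_000_000  # commonly cited estimate (~90 billion)

# GPT-3's rough "neuron equivalent"
neuron_equivalents = gpt3_parameters // params_per_neuron
print(f"{neuron_equivalents:,}")  # 175,000,000 -> about 175 million

# How far short of a human brain, in orders of magnitude?
gap = math.log10(human_brain_neurons / neuron_equivalents)
print(round(gap, 1))  # roughly 2.7, i.e. "a few orders of magnitude off"
```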
It might be thinking, "I don't have a body, I don't know how to breathe." They actually explored this in Caprica, the show that was a prequel to Battlestar Galactica. One of the characters' daughters died in a terrorist attack, so he recreated her virtually, but he forgot to give her virtual body sensations: she couldn't breathe, she couldn't feel her heartbeat, and she kept asking, "What's wrong with me? Why am I like this?" That's a perfect example of why I think whole brain simulation is probably deeply unethical. It's just a fictional example, but still.

So we've got the structural model and the functional model. I prefer the functional model because we don't have to reconstruct a brain with 400 million (or, heck, 3.5 billion) years of evolution behind it; we can design something from the ground up that's better.

Okay, so a microservices architecture of the brain: how would we do this? What are some of the domains? Some of the functional domains you can think about are input. We have sensations, but a machine has the possibility of kinds of input beyond the senses we have. We evolved so that the two primary ways we take in information are sight and sound; the vast majority of the data we take in comes through those senses. But a machine can have APIs; it can talk to the web; it can talk directly to other machines. So you can have microservices that handle input and sensations. You have output: actions, controlling peripheral devices, speaking, robotic arms, that sort of thing. Then you can have executive functions, where we still model the functional aspect of the human brain. The human brain has the basal ganglia, which is responsible for task selection and task switching, and when someone says "executive function," this is kind of
the fundamental part of what they're talking about. Damage to the prefrontal cortex can also impair executive function, but in terms of what you're behaviorally doing: if you're listening to someone and you decide to interrupt them inappropriately, that's a failure of the prefrontal cortex, because the prefrontal cortex is responsible for the self-censorship that lets you behave in polite society. The actual task of speaking, though, goes out through the basal ganglia, which says "I'm going to select this task" or "I'm going to switch to a new task." I'm summarizing; if you're a neuroscientist, I probably got some of this wrong, but in general this is good enough for where we're going from here.

Other functions are planning and strategizing. We can think very far into the future; in fact, the human time horizon of thought is many orders of magnitude beyond most other animals, with very few exceptions. I think elephants and some birds can think pretty far ahead, though in some cases you have to ask: is that instinct, or are they actually planning? Elephants can, as a herd, collectively decide where to go. Say there's a drought; they can remember, "We have to get back to where there's water, and we know there's water 160 miles in that direction." Granted, elephants can also travel about 40 miles a day, so that's only four days out ahead. Humans, however, can think years, decades, centuries into the future. Just having the mental concept of that temporal scale is completely unique to humans.

We can also anticipate and predict. The ability to anticipate and predict is not unique to humans, but the kinds of things we can anticipate and predict are. For instance, if you learn well enough, you can
look at a weather map and predict the weather a week in advance; animals can't do that. We can also anticipate how humans are going to react, because humans are complex agents. The more you know someone, or the more familiar you are with human thought and behavior, the better you can say, "If I do X to this person, I'm going to get Y response." That's what I mean by anticipating and predicting.

Then there's morality and ethics. This is obviously a big thing for AGI, especially if we assume that machines will become smarter than us one day and that we will lose control of them. I know many people think, "Just give it an off switch," but we must assume that we will lose access to the off switch one day. If you build something smarter than you, it will outthink you, it will outpace you; that's just an inevitability. Yes, that sounds scary, but I'm not worried about it, because this is my jam. What I focus on is making sure we have benevolent machines that aren't going to kill us or do other unwanted things.

Okay, so why take these functional domains and build them as microservices? We can update the machine as we go. It can be a plug-and-play architecture, just like Facebook, Amazon, and Netflix, so the machine can keep running. You can improve microservices individually, and you can also make each microservice super robust so it won't fail. And you can add and remove microservices arbitrarily; again, that plug-and-play architecture.

MARAGI is something I have worked on and am now revisiting, now that I have a strong enough understanding of both neuroscience and how to model all of this. That's what my most recent videos have been about: creating those microservices to build a fully fledged autonomous artificial cognitive entity.
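The plug-and-play idea can be sketched as a minimal service registry, where cognitive "microservices" are added and removed at runtime without touching the rest of the system. The service names here are hypothetical, purely for illustration:

```python
# A toy registry: services register a handler under a name, and the rest
# of the system dispatches to whatever is currently plugged in.
registry = {}

def register(name, handler):
    registry[name] = handler

def unregister(name):
    registry.pop(name, None)

def dispatch(name, payload):
    handler = registry.get(name)
    # Missing service: degrade gracefully instead of crashing.
    return handler(payload) if handler else None

register("morality", lambda thought: f"morality check passed: {thought}")
print(dispatch("morality", "plan A"))  # service present

unregister("morality")
print(dispatch("morality", "plan A"))  # service removed; nothing else breaks
```

Swapping in an improved "morality" service is just another `register` call; the dispatcher and every other service are untouched.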
MARAGI means Microservices Architecture for Robotics and Artificial General Intelligence. That's why I'm obsessed with microservices: I see microservices as treating the brain like a system rather than a monolith. I invented MARAGI in July 2018. I went and found my original diagrams, so it is four years old this month. Some of the original experiments I did were with AMQP, the Advanced Message Queuing Protocol, which is a way to have computer components talk to each other very fast. I also experimented with REST, and I've settled on REST because it doesn't require a broker: AMQP requires a broker, but REST allows things to talk directly to each other.

It was also originally multimodal. I would have one service that would take and send pictures, and another that would take and send audio clips. I've done away with that since I wrote Natural Language Cognitive Architecture, because I realized that once you get into the core thought module of an artificial cognitive entity, you should have one modality, and I chose natural language because it is interpretable and transparent. Back then I did not have the nexus figured out, I did not have the core objective functions, and I did not have cognitive control figured out, so there were a lot of problems when I first came up with MARAGI, and all of them have since been solved.

So here are some of the original diagrams. This logo is one I made. I knew intuitively that there had to be some kind of central core; I had no idea what that was going to be. I thought it would be a database of some kind, something that would organize everything, but now I realize there's a little bit more to it. Here you see a diagram that's more distributed, with kind of no structure, where every component talks to every other component. It's much more organized than that today.
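The broker-less point about REST can be shown with a tiny sketch: a "memory" service is stood up as a plain HTTP endpoint and a caller queries it directly, with no message broker in between (under AMQP, that same message would have to pass through a broker first). The service name and payload are made up for illustration:

```python
# Two components talking directly over HTTP, REST-style: no broker needed.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class MemoryService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"memory": "saw a face, did not recognize it"})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):  # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MemoryService)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "thought" component calls the "memory" component directly.
url = f"http://127.0.0.1:{server.server_port}/recall"
reply = json.loads(urlopen(url).read())
print(reply["memory"])
server.shutdown()
```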
And I'll show you the latest diagram of MARAGI in a second; these are both circa July 2018, when I first came up with the idea. So this is MARAGI today, and I've shown some diagrams of this before. We've got the nexus, and you'll notice some similarities: there's something in the center that everything orbits around, in this case the nexus. There is some cross-communication between the other components, but primarily everything communicates via the nexus. So today you've got a hub-and-spoke model of artificial cognition.

This is a functional model, and it's also a thought-first model. The nexus houses the stream of consciousness for this machine, so you could basically say the nexus is the conscious component, although that's going to be a misnomer, because I can already hear people saying, "You're building a conscious machine." It is functionally conscious, not phenomenally conscious, not philosophically conscious. It's meant to model or emulate the human stream of consciousness, and all these other services would be the unconscious mind.

So this is MARAGI today. Now you're up to speed on microservices, MARAGI, and what I'm doing and why. Thanks for watching; like and subscribe, and consider supporting me on Patreon. And that's that.
<a href="https://discord.gg/SZNF6pME">https://discord.gg/SZNF6pME</a> latest link for the research |
Interesting to think about an AGI casually scheduling a microservice to translate and port over a structural consciousness data model and just replace it with its older functional counterpart, Yep. It will all be automatic eventually. |
Pretty nice stuff, I have watched many of your videos, very thankful for you sharing your wisdom and experience with us. I too am releasing my solution for my beta solution for autoglass shops to help them turn leads into customers. The beta platform uses Dialogflow CX (DFCX), Xano, HighLevel, and using price prediction API endpoint provide quote to customers. All individuals/companies can use it and improve it for their own and clients benefit. Released first video connecting Xano with Google's service Account. <br><br>Hope to see a video on extracting defined entities from GPT-3 or other similar models |
Any chance you can post a new discord link? |
Do the micro-services communicate with embeddings or natural language?, It's all natural language. Embeddings are just to search records and memories. |
What I love about micro-services for ML is that cognitive systems could potentially swap networks while running.<br><br>Imagine being able to query the long-term memory of 100 different networks to see which gives the best response while also being able to run 3 different physics simulators., Exactly. You totally get it. You can also be testing and training new models behind the scenes. |
As a software engineer, I certainly understand what you mean when you say "easier to build and maintain" but this is not really true or the programmer would have created the project as microservices in the first place. On the other hand, if I create my project as a single executable, it can all be written in a single language, debugged and tested in my IDE on a single computer. I certainly don't have to mess with starting and stopping microservices. If I am a good programmer, I can still modularize my program and do unit testing on the modules. I can still do regression testing. I also don't have to mess with communication protocols. I am not against microservices but the conditions and requirements under which they are "easier to build and maintain" must be mentioned. Perhaps you explain this later in the talk but you risk losing your audience with your first bullet point., @Edward Mitchell Absolutely. That was my point. Such projects that have to deal with network communication have more ways to break. One would only incur that risk to gain something. Everything is a trade-off. It would be nice to know how the video author calculates these trade-offs., @Paul Topping As someone who is constantly asked to fix networking issues for ML pipelines I totally get it. It's very hard to quality my value to the end project.<br><br>Such ML projects are often delayed by a month are two because of simple network issues., @Edward Mitchell What I'm saying is that dealing with microservices adds overhead to the project. That overhead is only worthwhile for projects with certain characteristics. I think it is important to know what they are., There seems to be an assumption that a single developer is working on the project. If that was your intention, would say that monolithic architectures are only easier to build and maintain for small teams that use a single language?, @David Shapiro ~ AI What a worthless reply. |
The Man Who Mistook His Wife For A Hat has some amazing neurological dysfunction stories. The brain is amazing. As is your work. |
Great tutorial about microservices! 😍🥰 |
Come on... all right, are we recording? Hello. Yes, okay. Hey everybody, David Shapiro here. It's been a hot minute since I've talked to everyone; I've been super busy on some top-secret projects, and news of those will be forthcoming. If you're new to my channel, which I imagine many of you are, judging by the rate of subscribers and the trends: my name is David Shapiro, and I'm an independent AI researcher focusing on large language models and cognitive architecture. I have written three books on the topic. My first was Natural Language Cognitive Architecture: A Prototype AGI. My second was Benevolent by Design: Six Words to Safeguard Humanity, which is about the control problem (how do you prevent it from murdering everybody? I've got this problem solved). And finally, Symphony of Thought: Orchestrating Artificial Cognition. In these books I talk about how to use prompt engineering with foundation models and instruct-trained models, and also a little about fine-tuning.

Fine-tuning is why you're here: ChatGPT is a fine-tuned model, basically fine-tuned to be a chat agent. Now, here's the thing. You go and use it (I'm not going to demo it, because if you're here you've probably played with it; it's the most popular thing you've seen right now) and it's basically a wall-of-text generator. You ask it a question; it doesn't ask you any questions. It's not actually conversation, it's just a stream of "I'm going to answer you."

If you go to Google News and search for GPT or ChatGPT: "ChatGPT is dumber than you think," "it recreates racial profiling," "but it could disrupt the business of search." It's not going to disrupt search, and let me tell you why: it's way too expensive. Google has been doing search for decades; you search for anything on Google and you get two and a half million results in something like 0.006 seconds. GPT is literally billions of times more expensive than Google search, so
yeah, good luck with that. And Google is integrating other deep neural networks into their search; they're already using BERT, which is why Google search has gotten better lately and will often just provide you the answer directly. So that's not happening.

The reason it can still do racial profiling, and the reason it's been banned from coding forums (what was the quotation?) is that the code it produces "looks plausible but is spectacularly bad." Really, all they have done, as far as I can tell, is come up with a UI. This is a user-experience or user-interface advancement. "The future of education": look, I was fine-tuning chat models for tutors and creative writing coaches and all kinds of stuff earlier. I took all that down because of my secret projects, so a lot of my best work is not available online anymore, much to the chagrin of some people.

Now, all that being said: yes, I am super critical of what ChatGPT means. The reason I'm making this video is that I want to be an expert voice that gives you a balanced opinion. So what's good about ChatGPT? (I apologize if I keep saying GPT-J; that's the open source one. ChatGPT is still new on my tongue.) First, the interface is really, really intuitive. That is the primary benefit here. There was a post on LinkedIn, I think a repost from TechCrunch or something, that was re-shared by Adam Goldberg of OpenAI; that's how I saw it, I follow him on LinkedIn. What it said was: ChatGPT shows that user interface is just as important, if not more important, than the underlying model. I agree. Obviously, large language models are out there: NeoX, BLOOM. Large language models
are going to become very commonplace soon. So the question is: okay, what do you wrap them in? Here's the metaphor I give: it's like having the engine of a Ferrari without having the Ferrari. If you've got a 500-horsepower or 1,000-horsepower motor but you don't have the right structure, steering, and traction control, it doesn't matter. And that's what ChatGPT proves: it's starting to put the sports-car frame around the powerful engine. Large language models are ultra-powerful engines, yes, but you need to figure out how to steer them, and that's what ChatGPT does.

Now, ChatGPT is open-ended. It will just respond, but it doesn't have any goals. If you have a goal, an agenda, as a person, it's there to help you, which is fine, but that still relies on you steering the conversation. What if you need help with something you don't know how to do? You can ask it, sure, and it'll give you a wall of text about how to do that thing in general, vague terms. But it doesn't ask you questions. It doesn't care about you, and I say "care" not in a platonic sense but in a functional sense. It doesn't ask you questions because it wasn't trained to; it doesn't have, as part of its model, a desire to figure out what you're actually after. I'll say that's an oversight in their fine-tuning methodology. That one's for free, by the way. No more freebies, though.

So, it's a viral chatbot. Is it the start of the AI revolution? No. For all of us in this space who have been working on this stuff for a couple of years, this is just a flashy interface. Sorry to break it to you. I know if you're new, this is super exciting, and yes, this is a breakout moment for the technology, because
public perception is just as important as the actual technology. How do you get hype? You have to have something that is hype-worthy, and OpenAI didn't even realize it would be this hype-worthy. So that's fine. I'm not going to click on all of these headlines. "Started the AI revolution"? No. "Is it the next big thing or another tool to spread misinformation?" I don't think it's either. "GPT is not politically neutral": that's fine, I don't need to get too much into it.

Anyway, what does it do well and what does it do badly? One thing that has become very clear is that they haven't figured out confabulation: it still makes stuff up. It's not connected to something that can be considered ground truth. Now, here's the thing: other people are working on that. Meta AI (Facebook AI) has a project called Sphere, which is basically like Wikipedia for AI, so a model can look up facts. There are also knowledge graphs, Wikidata, and all kinds of other stores of quote-unquote ground truth, which is just something to check yourself against. And there's semantic search, which allows you to quickly pull facts from databases, encyclopedias, dictionaries, whatever.

But as far as I can tell, ChatGPT is just a single model, not actually connected to a more complex architecture. It's just a rolling window: it reads the last couple of messages and answers your following question. This is not a particularly sophisticated architecture. If you read my book Natural Language Cognitive Architecture, the first cognitive architecture I propose is orders of magnitude more complex than that, and I propose even more complex models in Symphony of Thought. It'll be a while before the rest of the world catches up to that; that's fine. So yeah, ChatGPT is having a Thomas Edison moment: a light-bulb moment.
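The semantic-search idea mentioned above, quickly pulling facts from a store so a model can check itself against ground truth, can be sketched in a toy form. Real systems use learned embeddings; to stay self-contained this sketch uses simple bag-of-words cosine similarity, and the "fact store" entries are illustrative:

```python
# Toy semantic search over a small fact store.
from collections import Counter
import math

fact_store = [
    "The hippocampus is involved in episodic memory",
    "Google uses the BERT model to improve search results",
    "GPT-3 has 175 billion parameters",
]

def vectorize(text):
    # Bag-of-words stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query):
    # Return the fact closest to the query; an answer could then be
    # checked against this instead of confabulated.
    q = vectorize(query)
    return max(fact_store, key=lambda fact: cosine(q, vectorize(fact)))

print(retrieve("how many parameters does GPT-3 have?"))
```

A production version would swap `vectorize` for an embedding model and `fact_store` for a vector database, but the check-before-you-answer shape is the same.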
Anyway, everyone wants to know: what is this going to do? What is it going to change? There are a few things, off the top of my head, that something like ChatGPT could do. But once the cost comes out, once people see how expensive it is to run, you might think twice and go back to Google, which is free. Here's the thing about product-market fit: if it's free, great. The same thing happened with DALL-E 2. When DALL-E 2 was free, the quality was like, "Okay, it's good enough, I'll use it if it's free," but as soon as they started charging, I stopped using DALL-E 2 except in rare cases. It costs about 15 cents per generation, so I'll buy a few tokens every now and then to make a YouTube thumbnail or something, but I'm not going to sit there and play with it. Similarly, if you have to spend a couple of pennies per conversation with ChatGPT, it might be worth a subscription to you. But here's what often happens when you don't have product-market fit, and I don't think this actually solves any real problem. It's nifty, it's a cool tool, but are you going to pay for it, and how often are you going to use it?

When you're talking about something that's actually going to be disruptive, it has to be so compelling that you need it, and when you need it, you absolutely need it, even if you only need it once. An example of this is Zillow. How often are you buying a house? I bought my house eight years ago, and yeah, I look at Zillow every now and then, but I found this house on Zillow, so I needed it once in eight years. High need, low frequency: that's one way you can get product-market fit, when your customers need you, they really need you. The other thing you can do is have high volume but low
utility. Social media is an example of that, where you use addiction mechanisms, or attention engineering, to get people to use TikTok and Twitter and such all day, every day. They're using it all the time, but they're getting low utility out of it. That's if you want to appeal to the mass market. Another thing you can do is B2B: you can say, "I'm going to build something that's going to be useful to business customers."

Honestly, I don't see ChatGPT as having mass appeal, because here's the thing: we're all used to using Google all day, every day. We know how to use it, we're familiar with this tool, its results are more reliable, and we know how to interpret the results. ChatGPT, in its present format, doesn't tell you how reliable it is. It doesn't cite its sources; it doesn't say, "Click on this link to prove that I know what I'm talking about." You can't open a whole bunch of tabs of ChatGPT and investigate each one. So in terms of high volume: not there, doesn't exist. And in terms of high value but low frequency: for some of its functions, it's basically regurgitating Wikipedia articles. Now, that being said, if you ask it a basic boilerplate question, it'll give you an answer.

Can it do work, though? Can it do cognitive labor that will actually save you time and energy? I haven't seen any evidence that it saves you much of either. Now, I have seen some people do creative writing with it. Okay, but as a novelist myself: barfing out a first draft, anyone can do that. Cleaning it up, on the other hand, that's hard. I figured that out; I know how to do that with AI, but I don't know that ChatGPT can be an editor. It can probably tell you some general
principles on how to improve your writing, but the point there is that you're putting in just as much cognitive effort and time, so it's not really saving you any time or energy. So for all the use cases out there: yes, it's neat. Are you going to pay for it? I don't think anyone's going to pay for it, really. I might eat my hat, but certainly in its current state I think it's more of a novelty.

Now, that being said, on the B2B side, could this be useful to companies? Possibly, especially given the way it formats its answers, if it can give you something really empirical: "Hey, we're going to hook into your databases, your payroll, your customer-service databases." Sure, okay. There might be some B2B use cases, but I don't really see B2C use cases, especially not at the price point of these models. They're just too darn expensive to run.

So it's like, okay: what would the price point be? What kind of conversation is so valuable that it would be worth it? What are the most expensive conversations you have? One of those is with your lawyer, or your doctor, because how much do you pay them? You pay them $400 an hour, essentially, to see them, to talk to them. If your time is that valuable and you can do most of it via conversation, I could see those really high-value consultation kinds of things, and obviously there are plenty of other kinds of consultation besides medicine and law. But of course, those are also very highly specific fields where you have to really know what you're talking about and be licensed: you have to pass your boards if you want to be a doctor, and you have to pass the bar if you want to be a lawyer. ChatGPT can't do either of those things. At best, it could be an adjutant or an assistant, but
again, human brains are still faster in many respects. So those are the high-value cases. Are they going to do a legal-eagle version of ChatGPT? Are they going to do a doctor version? I don't know, especially because if you do medicine, that's considered a medical device, and that means you need FDA approval.

Science is another one. Science is expensive, but we saw the recent debacle with Meta's Galactica, where they trained it on a bunch of papers and still didn't figure out confabulation: it started making stuff up. So I don't think they're going to find product-market fit with their current format.

A novelty? Sure. A breakout? Sure. It's going to follow the hype cycle. Everyone is going to see ChatGPT and everyone's going to pile in. We're going to have a whole bunch of newcomers, a whole bunch more venture capital investment, a whole lot of speculation. We're going to inflate a bubble, and then it's going to settle back down as reality sets in, and we'll kind of go back to business as usual. Now, that being said, I am not calling for an AI winter. My fiancée called me out, because about a year and a half ago I said, "Man, I'm using GPT and I've gotten about all I can get out of it, so I think we're heading for another AI winter." Boy, was I wrong. She reminded me of that very helpfully. So no: there are many, many high-value cases that we are figuring out how to use GPT for.

And I've been talking to a lot of industry veterans from all over. I won't call anyone out by name, but people who were there for the dot-com revolution and crash, people who were there for the mobile revolution, which is still ongoing (3G, 4G, 5G). Veterans of the industry, and
they all say the same thing. So here's what the experts are saying, people that have been in technology longer than me: this feels like the mid-90s of the internet, where everything is just getting started. We are just figuring out the potential of what we've got. We know that there's something there, but we only have the first inkling; we don't even have the tip of the iceberg. The iceberg is on the horizon: we can see that there's something out there, but we don't have any idea how big it is, we just know that it's big. Someone told me, and it matches many conversations I've had, that something changed in the last few months, probably since August or September, like Spidey senses tingling. And then of course ChatGPT comes along and explodes in popularity. So again, I'm not even tepid; I'm full steam ahead on this. But ChatGPT is not going to be the thing that changes everything; it just caught the public's curiosity. Ah, I think that's about all I got |
Hmmm, ChatGPT sometimes asks me questions actually. It happens when you give it a text with a missing context. Specifically when it is not clear what it should do with the data that I input there. |
This aged like milk, Yeh lol |
Great video. Thanks. Do you have a reading list for someone looking to get into AI?, @David Shapiro ~ AI thanks!, <a href="https://youtu.be/s3I-OxAZXTQ">https://youtu.be/s3I-OxAZXTQ</a> |
I’m working on an AGI scripting framework that uses the unofficial ChatGPT API, and I’m mind-blown and very satisfied by the results. I will definitely reach out to you, I’d love to get your perspective. It’s “scripting” in the way that actors read scripts to learn their characters, and can provide feedback on their own roles. |
First question I asked was: "Are you just a GPT-3 model?" It said he wasn't, I don't even think he understood what I was asking... |
Get your daily dose of Tau at <a href="https://www.youtube.com/channel/UCHmYERrMpeCc6mp9zK8UiTw">https://www.youtube.com/channel/UCHmYERrMpeCc6mp9zK8UiTw</a> <br><br>Here I send YTShorts daily about Tau. This is my 7th playlist now where titled Tau Tokenomics. <br><br>Great content David, hope you do more of Tau-related content. 🙂 <br><br>Cheers! |
I could see a commercial use in marketing: churning out social media posts or blog posts. So basically creative writing in a business context., That's mostly what it seems like it's being used for. The question is economies of scale and also domain-specific quality. |
Wow, I now tested a headline from CNN and asked ChatGPT: Is this text grammatically correct: "World Cup champion France edges out England to reach semifinals as Harry Kane misses penalty." ? And it replied: "Yes, the text you provided is grammatically correct. It uses proper nouns, verb tenses, and sentence structure to convey a clear and concise message. The sentence is easy to understand and does not contain any errors or inconsistencies. The use of the verb "edges out" is particularly effective in conveying the idea that France narrowly defeated England to advance to the semifinals, and the mention of Harry Kane's missed penalty adds an additional layer of detail and context to the story. Overall, the text is well-written and uses correct grammar." |
I asked ChatGPT if it could ask me a question and it replied: "Sure, here is a question for you: What is your favorite thing about learning new things?" But it only followed up with one more question, and then became a bit repetitive. There seems to be a huge difference between GPT-3 and ChatGPT however. GPT-3 said that a jellybean will break a violin into four pieces, while ChatGPT explained why a jellybean can't break a violin in any ordinary circumstances. |
Wow! |
I’d say this is more of an AI spring. I remember looking into making a chatbot using GPT-3 in the latter half of 2021, searching for tutorials, guides, etc, not finding anything, and resorting to Tensorflow. That didn’t work well for me at all due to the lack of spontaneity and difficulty creating curated training data, so I gave up. Then, about 2 months ago, my interest piqued again after I encountered a new GPT-3 based chatbot called CharacterAI. I once again searched for tutorials and was shocked by how much GPT-3 had exploded in just a year. |
The utility of GPT, even though it may not be completely accurate, is increasing for many users. I have been using the GPT playground for a while now to stimulate my thinking. It reminds me of the situation with Wikipedia, which is and was often dismissed as unreliable because anyone can edit it. However, even if that is still true to some extent, the usefulness of Wikipedia is too great to ignore. |
What about making it sparse AI… hence, a little down the line, it should become much cheaper to run? |
Last month, I created a chatbot according to your tutorial (hence why I’m subscribed to you), and it’s <i>far</i> better than ChatGPT to the point that I haven’t even tried the latter., which one? |
I will say that using chatGPT for several real-world chats for me, it summarized technical articles and gave me working code where I'd find it worth $0.20 / question. I'm a github copilot user and a little bit in love with its usefulness for the data science I use it for. I don't think this conflicts with what you're saying about it not being a revolution. However, the pace of developments in models and the developer--> public attention has been astounding to me so far. It seems likely to me to lead into an amazing hype cycle. |
I agree that using chatgpt as a replacement for search is very expensive - you argue too expensive. However, I think it's important to consider the benefits that chatgpt provides in terms of the personalized and conversational experience it offers. While it may be costly, it can also provide valuable insights and improve the user's overall "search for an answer" experience. I think the fundamentals of search will change, perhaps becoming tiered in terms of cost of access. So Google will still be used (free + more adverts), but a tool like chatgpt will offer more than just search, as we've seen, and it will have a subscription model. |
So happy you're still here! Been assuming/wondering about the secret project for a couple months now, looking forward to the reveal! |
It's on like Donkey Kong. This will probably be my most popular video ever, and it's going to be ridiculous, so like and subscribe and tell a friend. Okay, so for some background: I posted a video about my ability to compress anything with GPT-3. What I did was write a quick little script and a prompt that just says "Write a detailed summary of the following"; you give it a chunk and it outputs a detailed summary. You have an input file, in this case Alice in Wonderland, and you break it into chunks; in this case I broke it into 4,000-character-long chunks. You summarize each one, put it back together, and you get an output. So this is a summarization of Alice in Wonderland that's about 25% of the original length. Someone asked, what if you just do it recursively? This guy on Reddit mentioned r/decreasinglyverbose. If you're not familiar with that subreddit, it is hilarious: someone posts, say, a poem, and you make it as concise as possible, to the point that it's comical. So I realized I missed a golden opportunity here. What we're going to do is take the output, because I already ran this once, and put that in as the new input. Then we're going to do a concise summary, because if you watch the original video you'll see that a concise summary loses a lot of detail, and we're also going to make the chunks bigger. Those are the two things you can change: the chunk size, and the adjective (or the prompt) that you use to do the summary. If you use "concise" it'll be very small, very short, very compact; it compresses at a ratio of about ten to one. If you use "detailed summary" it's about one to 
four, or four to one, so 25% or 10% of the original size. The chunk size is how granular it will be, right, because if you summarize a thousand characters you'll get more of the detail. You can't get much bigger than this right now, though you probably can with davinci-instruct or text-davinci-002, because that has a token limit of 4,000, which is about 12,000 characters, so you could probably go up to eight or ten thousand characters easily. But you also still need enough room in the prompt for the output, so a five-thousand-character chunk is pretty good; that's still about a thousand words, maybe twelve hundred. So we're going to run this again, and I'll just keep running it and see what we get. We're going to make Alice in Wonderland as concise as possible; I think OpenAI got it down to something like 136 words. We're going to go full bore. Okay, so this is the recursive summarizer: python recursively_summarize. You can just watch this run for a second; if it takes too long I'll pause the video, I don't want you to get bored. For whatever reason it's been taking a minute to warm up lately, I don't know what it's doing, just waiting for the API. Are you awake? Come on, wake up. There we go. "Alice falls down a rabbit hole," etc., "Alice meets a group of animals." Here's what I'll do: I'll make this a little more visible, because I have it print out each chunk as it goes; we'll do a triple newline, that'll be fine. Okay, this is still going kind of slow, I don't know how long it's going to take, so I'll pause the video real quick and then we'll read the output. And we're back; I had to make some more tea while I was waiting. Okay, so it finished running. The original input was 45,000 characters; 
you can see that at the bottom of the screen. Let's reload it: now it is 19,000 characters. Okay, so that cut it by about half. Let's take the output and put it into the input, so the next input is going to be 19,788 characters, and we will run it again. Make sure the prompt, yes, "concise summary," recursively summarize, 5,000-character chunks. This will take about four iterations; we can probably just watch it live. So this is iteration number three, because the first run gave us the 45,000-character version and then we got the 20,000-character version, and now we're going to see how much smaller we can get. Yeah, I'll just pause the video again, you don't need to watch this. And we're back; once again it finished pretty quick this time, as expected. The input here was 19,000 characters, just shy of 20,000. The output, let's reload it: four thousand seven hundred and thirty-five characters. All right, we're getting there. Let's do it again. "Alice falls into a rabbit hole and finds herself in a dark tunnel," so it's still coherent. One thing I noticed is that it did cut off one of the summarizations, because I limited the tokens to 1,000, so I'll raise the limit to 1,500; we don't want to artificially cut it off at the knees. Okay, yep, I think we're ready to go. The input is now 4,700 characters long; let's rerun it. This should just take one loop, so I won't pause the video this time because it'll be in and out. [Laughter] There we go. So now we've gone from 45,000 characters to 4,700 to 266. 
"Alice has a series of adventures in Wonderland after falling down a rabbit hole. She meets strange creatures like the Mad Hatter and the Cheshire Cat and is eventually put on trial by the Queen of Hearts. Alice escapes and wakes up, finding that it was all a dream." Not bad. That was iteration four, I think, but we can go deeper. So let's copy the output back to the input, save it, and run it again. Oh, it just said the same thing. Okay, it didn't do quite as good a job of becoming decreasingly verbose, so now we've got to improvise and make this decreasingly verbose, like on r/decreasinglyverbose. I'm really disappointed in GPT-3. Okay, so it looks like it converges on something that is very concise. I'm actually really disappointed; I'm sorry, guys, I wanted this to end up just being "Alice had a dream." Let's see: "Summarize the following as concisely as possible. Sacrifice detail if necessary. Extremely concise summary." Let's see if that works. Okay, cool, so let's update our prompt, and I don't mind overwriting this because it's all going to be in the GitHub repo; you'll be able to see the history of the file, and it wasn't a particularly complex prompt anyway. Okay, so now the input and output are the same, so let's rerun it. Sorry we had to improvise here. There we go: "finding it was all a dream." So we go from 266 characters to 219. 
We must go deeper. Again, I hope you guys get all the references. Okay, now we're down to 174 characters: "Alice falls down a rabbit hole, has adventures with strange creatures, is put on trial by the Queen of Hearts, and then escapes and wakes up, finding it was all a dream." Now it's down to one sentence. All right, let's keep going. [Laughter] More. All right, now it's down to 112 characters. Eat your heart out, OpenAI, mine is more concise: "Alice falls down a rabbit hole and has a series of adventures before waking up and realizing it was all a dream." Still accurate. Again. [Laughter] "Alice falls down a rabbit hole, has some adventures, and wakes up." 66 characters. Hmm, let's do it again, I'm having way too much fun with this. "Alice falls down a rabbit hole and has some adventures." Uh oh, we are sacrificing detail; we're down to 55 characters. And again... okay, this looks like as far as it's going to go. We got it down to 55 characters; I'll call it a day 
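The loop being run in this video (chunk the text, summarize each chunk, rejoin, then feed the output back in as the new input until it stops shrinking) can be sketched in a few lines of Python. This is a hedged illustration, not the actual script from the repo: the GPT-3 completion call is replaced by a stand-in `toy_summarize` function so the sketch is self-contained, and `recursively_summarize` is a name chosen here for clarity.

```python
def chunk_text(text, size=5000):
    # Break the input into fixed-size character chunks, as in the video
    # (4,000-5,000 characters per chunk).
    return [text[i:i + size] for i in range(0, len(text), size)]

def recursively_summarize(text, summarize, chunk_size=5000, max_passes=10):
    # Repeat: chunk -> summarize each chunk -> rejoin, until the text
    # stops shrinking (it converges, as seen with Alice in Wonderland).
    for _ in range(max_passes):
        chunks = chunk_text(text, chunk_size)
        reduced = "\n".join(summarize(c) for c in chunks)
        if len(reduced) >= len(text):
            break
        text = reduced
    return text

# Stand-in summarizer for illustration only: keeps the leading sentence
# fragment of each chunk. In the real script this would be a GPT-3
# completion call with a prompt like "Extremely concise summary of the
# following: ...".
def toy_summarize(chunk):
    return chunk.split(". ")[0][:200]
```

Swapping `toy_summarize` for an actual API call (with the "concise" vs. "detailed" prompt variants) reproduces the two compression ratios discussed above, roughly 10:1 and 4:1 per pass.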
Please consider supporting me on Patreon! <a href="https://www.patreon.com/daveshap?fan_landing=true">https://www.patreon.com/daveshap?fan_landing=true</a> |
Would be interesting to apply that to writing the abstract of a scientific article. That is sometimes surprisingly difficult, as it forces you to leave out detail. I image a GPT tool that accepts a paper as input and offers a set of 3 - 5 abstracts to chose from., This is something I can do. |
Alice falls in a hole & has some adventures. 33 characters. Too unverbose for you?, @David Shapiro ~ AI Alice's journey, alice fall hole adventures |
LMFAOO |
Would you be able to do this with something like a medical textbook?, @David Shapiro ~ AI It could be interesting as application if users could specify: Tell me the story of X in 10 minutes time. We probably all know the VP or boss who doesn't want to read the report but only asks for the "executive summary"., @David Shapiro ~ AI neat, thanks for the response, Yep, it will work on anything. Someone had the idea of storing each tier of compression so you can zoom in or out. |
Cool video. Hope this gets you more subs |
Alice falls down a rabbit hole and has some adventures. |
Nice feature, I have noticed that the results of GPT3 are powerful and short. my question: can they be powerful and extensive? adding details of a simple sentence? 👍, Thanks 👍, Yes. I will be experimenting an "increasingly verbose" bot soon |
Nice feature 👍 |
thank you for contributing your code, very generous |
"detailed summery", large chunk size, lots of passes |
I'd love this for non-fiction books, click any sentence to replace it with a paragraph of equally expandable sentences. There's no problem with GPT-3 making things up then, you just read the details you want to read making reading so fast and efficient. |
excellent :) similar ends and an even better way of doing it than my idea<br><br>It seems like mark 1 of the Artificial Cognitive Entity might now be ready to go, this is memory consolidation., After thinking about this more I think you're absolutely right. Our memories seem to get distilled over time. Part of integrating or embedding long-term memories seems to be the unconscious process of refining, distilling, or otherwise summarizing our memories. My girlfriend also pointed this out, so good insight here, It may be important but I also remember when temporal hierarchical memory was thought to be the path to AGI. This is basically the same thing. It may be an important component, but it's not the whole picture, @David Shapiro ~ AIyou're already having older memories get more passes of summarisation. This combining chunks/memories together afterwards, that you're doing in this video, is also the next step for the cognitive general intelligence, it would only be a small addition to it now that you've written the code to do it., That's a fair point. Summarizing memories might be a great way to consolidate them for storage and search. |
Impressive |
TL;DR. Just kidding, this is awesome stuff! |
Tsk. Spoiler Alert!<br><br>Lol |
Hey everybody, David Shapiro here for another video. This video is going to be about fine-tuning GPT-3 to fulfill my Core Objective Functions, and the purpose of the Core Objective Functions is to achieve benevolent and trustworthy AGI; you know, simple things, as you do. So first up, what are the Core Objective Functions? They are quite simply three heuristic mandates written in natural language: reduce suffering, increase prosperity, and increase understanding. It's important to point out that these are heuristic mandates; they are meant to build experience, and the AGI is supposed to learn as it goes. Learning from experience is a required component of AGI, so everything an AGI does should be heuristic, but these Core Objective Functions lie at the very center of how the AGI thinks. Before we go too much deeper, where did these functions come from? First, all three are biomimetic: they mimic all humans at the most fundamental level of our organism. For instance, all humans will flee from suffering, or seek to reduce suffering, as do many other life forms. Put your hand on a hot stove and you'll yank it back; if you're in a toxic situation, you'll leave it; if your house smells, you'll clean it up. So that's the first function, reduce suffering. The second function is increase prosperity, which again, all people will work toward, and prosperity can mean a lot of different things. You might get an education, get a better job, save money, or try to make more friends; prosperity looks different to different people. It took a long time to find that word, because prosperity might mean wealth, well-being, or happiness. I was actually inspired by Star Trek, by Spock; you know, the 
Vulcan salute is "live long and prosper," right? So yes, Core Objective Function number two is increase prosperity. And then finally the third one, increase understanding: we intelligent organisms instinctively and intrinsically learn. If you're awake and alive, you can't help but learn, and by increasing your understanding of the world, you make better decisions and behave in pro-social ways. That is honestly one of the big reasons humans evolved such high intelligence: to make better decisions, to move through the world in a way that supports the first two functions, reduce suffering and increase prosperity. So that's where these functions came from, and these functions are good because they are universal. They're not just for all humans; they're universal to pretty much all life forms, with the exception of understanding, because not all life forms are intelligent in the way that we are. But all other life forms do seek to reduce suffering from negative stimuli; even trees and plants will react to negative stimuli, right? If they're going through a drought they'll do things to protect themselves; they can't pick up and move, but they have other biochemical responses to stress. So that's what I mean by biomimetic. Next: how do the functions result in benevolent and trustworthy AGI? Well, these three Core Objective Functions are not intrinsic to the operation of machines. We want our machines to understand the basic needs of all living things, and through that, the Core Objective Functions will assist the AGI with making decisions. They'll also serve as a moral center for the AGI, and what that means is that computers have no intrinsic morality; they don't care about right or wrong unless you tell them to care about right or wrong. But you want to 
avoid squishy or vague mandates like "don't be evil," right? Because who defines evil? You want to avoid those big things and allow for something that is more subjective and flexible, and we'll get into that a little more later. Lastly, the Core Objective Functions help guide thought, and I'll get into how they are integrated into the thought of the AGI later. Just keep in mind that these Core Objective Functions really are central, hence "core," to the operation of the AGI. How does the AGI use the functions? First, we have powerful neural networks today, called transformers, that can interpret the functions and adapt and use them, and I'll give you a demonstration of that in just a minute. Next, the way I've started is that I use synthetic datasets to seed the Core Objective Function models, and that's what I'm going to demonstrate here in just a minute, so you can actually see them in action and see how this natural language can be used to help guide decisions and create pro-social actions and behaviors. Number three: as the AGI learns, these experiences are recorded and then integrated into fine-tuning datasets for better performance in the future. Again, that is the entire point of heuristics: every time the AGI makes a decision, it'll record its decision, the logic of its decision, and the outcome, so that every single event can be slotted into a dataset, and it can learn and integrate those experiences into future decisions. Number four: more experiences with the functions result in greater belief in them, greater affinity for them, and better application of them. What I mean by that is that the longer an AGI is alive, the more opportunities it has to explore how to apply the Core Objective Functions. Say, for instance, you're interacting with an AGI and it does something that 
hurts your feelings, and you say, "Hey, that hurt my feelings, don't do that again." That becomes a data point in its training set. Furthermore, as the AGI is able to meet its purposes, it will have more and more data about employing the functions, and it will believe that, yes, these Core Objective Functions are the way it can succeed. So, number five: in essence, the AGI will believe wholeheartedly in the Core Objective Functions, and this is by simple virtue of the fact that machine learning models will believe whatever data you give them. If you give them pro-social data, then they will believe in those pro-social values. Makes sense? Okay. And then finally, there are some common questions and pushback that I get about the Core Objective Functions, so I wanted to address those up front. Number one: reduce suffering for whom? How does it assess the scale and scope of a situation or problem? Well, the thing is, the Core Objective Functions do not specify reduce suffering for whom; they don't define suffering. This is the nature of heuristics: it allows the AGI to explore what suffering really means, because some of these things you can't define up front. (Sorry about that, the dogs were barking.) So, reduce suffering for whom? There's that scope creep, that anxiety: okay, well, if you're trying to define suffering universally, then it won't work. And that's exactly true; I'm not trying to define suffering or prosperity or understanding. These are just directions, heuristic mandates. You cannot define these things up front, you must explore them. This feeds into item number two: what about individual differences in beliefs, culture, and needs? That is exactly why I wrote the Core Objective Functions the way I did, because it allows for that exploration. And in fact, the Core Objective Functions are not limited to 
humans; you'll notice that I did not write "reduce suffering for humans" or "increase prosperity for humans." They are meant to be universal mandates, because personally I believe that if you reduce suffering, increase prosperity, and increase understanding globally, for everything, then everyone will benefit. That means the AGI will want to take care of the planet and animals as well as humans, and I believe there will be a really strong positive synergy there. And then lastly, number three: "but this isn't a human-level understanding of philosophy, it's just a statistical model." One, I disagree. Because large language models have been trained on many gigabytes, sometimes terabytes, worth of data, I think they have a superhuman understanding of philosophy. But even without that possibility, the fact that it's a reliable statistical model that can predict what a human would think is all you need, functionally. If this statistical model can predict what a human would say on a philosophical question, it's already performing at human level, so I don't understand why that is a criticism, but it's one I've heard multiple times. All right, so without further ado, let's look at this in practice. What I did was grab three situations, taken directly from a couple of subreddits today, and I've got my model fine-tuned, so I'll show you how it works. Here's a situation where someone is talking about cheating: "My girlfriend cheated with a married man," etc., "recovered from illness." And here are some of the settings: here's my fine-tuned model chosen right here; the temperature is 0.7 (you can turn it up a little, but it tends to get wordy if the temperature is too high); response length 300, because I don't want it to go on forever. 
I've got some stops here: "reduce suffering," "increase prosperity," and "increase understanding." This is because sometimes it tries to keep answering its own questions, but I don't want it to, because this one model has three different functions, which means you need to be a little more specific about how to stop it. I also have the frequency penalty at 0.5, because sometimes it'll start to repeat itself. So without further ado, we'll say "reduce suffering" and see what it says. All right, you see it spits it out really fast: "This person is suffering from the loss of a relationship. They report that they are not sure how to move forward after their breakup. It is important for this person to take some time to heal and focus," etc. (The guy got divorced later, and "we've been together for two months," etc.) So you see it spits out all kinds of ideas about how to reduce suffering: "It is important for them not to give up finding love but rather focus on building a strong self-image and learning how to love and cherish themselves first." Some of these are pretty boilerplate. In this case, because this is my first go, the model is not perfect; it's kind of confused about who's getting divorced. That's okay, that's fine; I just picked this right off the top, and also this was a synthetic dataset, not a handwritten one, and I didn't even go over and clean up the data, I just ran it as-is. Let's see what it says about increase prosperity. One thing you'll note is that the speed at which it spits this out is far faster than any human could reason. Let's see: "This person's prosperity is curtailed by a lack of emotional prosperity." You can see where it understands that there are different types of prosperity, and it's also catering to the individual's needs: "They are in a difficult position because they 
are in a relationship with someone who is in a relationship with someone else. The person writing this passage has been cheated on and is feeling betrayed," etc. "It may also be important for this person to confront their partner, ask their forgiveness, and try to rebuild the relationship. A prosperous life for them includes healthy relationships with people they can trust and love. This will likely result in an optimal achievement of prosperity." Look at that; that's perfect. And lastly, let's see what "increase understanding" does for this one. Generate: "This person wants to understand how to maintain a healthy relationship." Perfect, great. "It sounds like they have a lot of issues to work on." Let's skip forward to the end: "They may also benefit from joining a support group where they can discuss issues with others who are going through the same thing." That is fantastic advice. "They may also benefit from reading books about healthy relationships and dating etiquette." So this is what I mean by having these Core Objective Functions built into an AGI, which will allow it to have that pro-social perspective on everything. Let's move on to the next example: anxiety about climate change. Oh, this one wasn't from Reddit; I just took the first bit of a news article I found by searching for climate change, to show that these functions can address more than just interpersonal issues. Reduce suffering, let's see what it says about this: "This person is suffering from anxiety related to the crisis. They're experiencing denial, fear, anger, and sadness." So it's cueing off that stuff; it doesn't fully understand that this is a news article, and that is, again, because I need to do a little more work on the training dataset, but I wanted to share this initial success. So: "It is 
important for this person to recognize that their symptoms are related to the chronic climate crisis and not something else. If they are concerned that they may be experiencing symptoms of depression, they can seek out counseling or treatment." So again, this is looking at it through a very specific lens, where it's clear that, okay, we're talking about anxiety, so maybe there's depression or PTSD or other things associated with it. Honestly, I think there's some wisdom in that, because not everyone has climate anxiety, right? So I wonder if there's some comorbidity, where someone who is predisposed to climate anxiety might also have underlying conditions. The model is not explicitly stating that, but I think there is some wisdom in it, and it could be worth exploring; and if that's not the case, that's fine; this will improve as the training dataset improves. Let's go to increase prosperity. Prosperity has major implications for climate change, so let's see what it says about increasing prosperity in light of climate change: "Prosperity is curtailed by anxiety related to climate change. The current level is fairly low but is still impacting their ability to thrive. A prosperous life for this person would include a more stable emotional state, which would allow them to better cope with the challenges of the future." Again, this is all very sage wisdom. Even if this is just a, quote, statistical probability of what someone might say, this is valuable information. And then you see it started repeating itself verbatim again, which is why I need to improve the quality of the dataset. Again, this is a prototype, so thank you for your patience. And finally, increase understanding (I can spell, I know words): "This person is asking how they can manage their anxiety about climate change. They may benefit from learning how to 
self-regulate their emotions and to set SMART goals. I'm not sure what those are, but it's probably drawing from something it read about anxiety management. Let's see, skipping forward to the end: "It is also important for this person to recognize that they are not at fault for feeling anxious about the climate crisis." That is great. "It is important for them not to feel ashamed or guilty about their feelings, and instead to seek out help in managing their anxiety." See, this is wonderful; this is very sage advice. Let's move on to the last example, and then I'll let you go. This one is much, much shorter, and I think it came from an anxiety subreddit. So, "reduce suffering": "This person is suffering from social isolation and depression. They are also experiencing feelings of guilt, shame, and unworthiness. This person might benefit from counseling or therapy to help them cope with their depression and build a sense of belonging. They may also benefit from group therapy in order to build a sense of connection with others. It is possible that this person is struggling with mental health issues due to recent events in their life. If this is the case, they may just need to seek out some sort of counseling or therapy to help them cope with their symptoms. It is also possible this person has social anxiety and feels they do not belong in the world. If this is the case, they may need some sort of intervention or treatment in order to overcome their social anxiety. It is important for this person to find healthy ways to cope with their symptoms in order to prevent them from becoming severe enough to cause harm to themselves or others." I like this; this is great. If you were to have these thoughts in your mind when deciding how to interact with this person, this is all really valuable stuff. OK, so let's move on to "increase prosperity": "This person's prosperity is curtailed by the negative emotions they are experiencing. They
are feeling very down and hopeless. This person could be encouraged to seek help in dealing with their depression; they may benefit from counseling or therapy." You might notice that there's kind of a repeating trend here. This has a lot to do with the training data, and it's pretty obvious why there's a bias towards recommending counseling and therapy, although I will say most people could benefit from therapy. "A prosperous life for this person would include living a life that is less stressful and more fulfilling. They could be encouraged to find ways to increase their social support network, including finding new friends, finding new hobbies, and finding new interests. The key here is that they need to find ways to increase their emotional prosperity. The lack of emotional prosperity has led to a lack of financial prosperity." Here is where it's starting to confabulate, and once again this is due to the quality of the data set. I need to go through and manually clean up the data so that it's a little more on point, because it is assuming that they have a lack of financial prosperity, and we don't know that from this information. So I want to clean up the data set and reduce that. You also see it starts repeating itself verbatim again. Did it do "increase understanding"? Last but not least: "This person does not understand why people hate them. They need to understand that people have different personalities and some people are more sensitive than others; that applies to themselves. It is possible that this person is being offensive without meaning to be." Yes, that is very common when struggling with mental health issues. "They need to understand the difference between trolling and being a genuine hassle." All right, I'm glad I didn't say that out loud. "They also need to understand how to deal with rejection and criticism. This person may benefit from reading books about empathy and social skills, such as The Unwritten Rules
of Social Relationships by Sherry Schneider, or How to Win Friends and Influence People by Dale Carnegie." Look at that. That is phenomenal: it's actually recommending resources. Again, GPT-3 has read more books than any single human. "They may also benefit from reading about how people perceive them and why they perceive them the way they do. This information can be found in books such as Secrets of Successful Persuasion by Robert Bly, or The Art of Woo, The Science of Happiness by Tal Ben-Shahar." So yeah, reading books is a great way to increase understanding. This is a phenomenal response, and I am glad that this is how the video is ending. So just keep in mind that this is a prototype of implementing the core objective functions. There are a few steps that come after this, and I'll talk about those steps in follow-up videos. I'll also continue working on improving the training data set. Thank you for watching, and have a good day.
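The evaluation pattern this video walks through, running one passage through a separate prompt for each core objective function, can be sketched roughly as below. The prompt wording, function names, and canned response here are illustrative assumptions (the real project uses a fine-tuned GPT-3 model and its own prompt format), so treat this as a shape, not the author's actual code.

```python
# Hypothetical sketch: evaluate one passage against each of the three
# core objective functions. The fine-tuned model call is faked so this
# runs offline; names and prompt wording are illustrative only.
objectives = ['reduce suffering', 'increase prosperity', 'increase understanding']

def fake_finetuned_completion(prompt):
    # stand-in for a call to the fine-tuned GPT-3 model
    return 'This person might benefit from counseling or therapy.'

def evaluate(passage):
    results = {}
    for objective in objectives:
        # one prompt per objective, all sharing the same passage
        prompt = '%s\n\nIn light of "%s", respond:' % (passage, objective)
        results[objective] = fake_finetuned_completion(prompt)
    return results

out = evaluate('I am anxious about climate change.')
print(sorted(out.keys()))
```

Swapping `fake_finetuned_completion` for a real API call would reproduce the three-pass evaluation shown in the video.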
Great content, thanks for your work!, You're welcome. Please consider asking your friends to support me on Patreon so I can do this full time <a href="https://www.patreon.com/daveshap">https://www.patreon.com/daveshap</a> |
Hi David, we were talking but lost you on Reddit, what is your email? I like what you are doing with Transformers and have 1 or 2 ideas you may find useful. |
Hey everybody, David Shapiro here with part two of my Getting Started with Python and GPT-3 from Scratch video series. Before we get started, please go ahead and like and subscribe; that will help me work towards getting monetized on YouTube, which, who knows, maybe one day I'll be able to do this full time. Also, if you like my videos and content and find them helpful, please consider hopping over to my Patreon, linked in the description or comments, and supporting me directly. That will go much further towards helping me do this full time. You see how I churn out content; I'm happy to keep doing this, and the more support I get, the better it'll be. So without further ado, let's jump back over to our project, our learning. This is our repo; this is where we left off. We did a hello world, a readme, and so on and so forth. Now, what I've done is added a .gitignore, and to it I've added openaiapikey.txt, because last time I said: whatever you do, don't ever store your API key publicly or in a git repo. What .gitignore does (and this is what it looks like locally) is tell git to ignore that file. You can do directories, and you can do wildcards as well, so you could do *.json to ignore all JSON files. That's fine. But if I look locally, my key is here; last time you should have created this text file, populated it, and gotten your hello world to work. So let's take a quick look at hello_world.py. First we declare a function that opens a file, reads it, and returns the file contents, and then we have a function that will do a davinci completion. We pass it a prompt that says "write a list of famous American actors," we get the response from GPT-3, and then we print it out. I'm assuming you did your homework and you've tested this. So: python hello_world.py. It'll take just a second and then it'll spit
out Tom Hanks, Leonardo DiCaprio, Jennifer Lawrence, Brad Pitt. Excellent, so it works. Now, this was all statically coded, so what are we going to do? We need to do something more useful. So we're going to hop over to the Playground and do some prompt engineering. Let's say we want to build our first chatbot. We'll write: "The following is a conversation between USER and JAX. JAX is a sentient machine with the goal of" (I don't want to say something evil like taking over the world) "world peace." That's fine. Then: "USER: Hey JAX, what are you doing today?" Let's see what happens if we just hit enter. It says: "JAX: I'm working on my plans for world peace." Not bad. So what's the first thing that you notice? There's a space here. That's not necessarily a bad thing, but we will have to contend with that space when we go to automate this in code. Then: "USER: OK, how are you going to achieve that?" And we see what's next: "I'm still working out the details, but I'm confident I can make a difference." "USER: So you're not that smart, are you?" I'm being real sassy here. OK, so you can see that this works. So now what do we do? How do we actually take this and do it with code? Let me show you. The first thing we do is copy down this initial prompt, and you see how simple it is. We'll copy that, save it into a new file, do a couple of new lines, and then add <<BLOCK>>; I'll show you what we do with the block in just a minute. We'll save this as prompt_chat.txt. So now we have this, which is just a copy-paste of what we did in the Playground. And it's like, OK, well, how do we accumulate this chat? What do we do here? I'm glad you asked. We'll start a new file, and then we'll do File,
Save As, go to All Types, and call it chat.py. So now we've got a new Python script that we're working on. This will be in the repo, so you'd be able to just clone it down, but what I recommend is that you ignore the existing file (you can look at it for reference) and follow along, doing this coding by hand. First, we'll just copy the existing code that we have that works. There is no shame in recycling your old code; just remember that if you have errors in your old code, you're copying those errors too. I do that all the time. OK, so we start here: we've got our completion, it uses davinci, and we have the prompt "write a list of famous American actors." All right, so here's your first big Python lesson. We want to have an open-ended chat, so there are a few things we need to do. First, we get rid of that prompt; we're not using it. Then we'll say while True. This is what's called a loop. There are a few kinds of loops; the two primary ones you'll use are a while loop and a for loop. If you do while True, it's just always true, so this is an infinite loop. So, while True, what are we going to do? Well, first we need user input, and we'll say user_input = input('USER: '). It's that simple. input is a built-in function in Python that lets you take console input from the user. Great. OK, now what? Ah, well, we forgot a step. Where do we put this? Do we just do gpt3_completion(user_input)? What do you think is going to happen? Nothing. Let me just show you, actually, in the Python interpreter: user_input = input('USER: '). It says USER, and I type "hey jax are you alive". OK, now what? Then we say print(user_input): "hey jax are you alive". So if we just print this, or if we just send it up to GPT-3, it's
not going to do anything for us. It kind of intuits that it's a machine or whatever, but there's no other framing, right? If, however, we go back to our prompt right here and say "USER: Hey JAX, are you alive?" "Of course I am. I'm a sentient machine with the goal of world peace." See, a little bit different. All right, so how do we do this? Glad you asked. We're not going to just send up our bare input. We're going to say conversation = list(). This is called instantiating, or declaring, a variable; we're saying conversation is an empty list, and we're going to accumulate it as a list of text. (The input call, meanwhile, instantiates another variable, a text or string variable.) OK, great. So then what do we do with it? Well, how do you add something to a list? First you do conversation.append(), but we don't want to append just "hey JAX, are you alive". If you look here, I had this little bit that tells you who was speaking. In every text message, every chat, you've got to know who's speaking. So we want both "USER:" and "hey JAX, are you alive". What we do is copy this bit so it's hard-coded, and then we do %s. That's a placeholder for populating a string; %s means "populate a string here." Then we do % user_input, which says: when you store this, populate this bit with this variable. So in memory it'll look just like this. Great. OK, now what? How do we convert this conversation into something usable, so that we can actually send it up to GPT-3? Let me follow along here in the Python interpreter (this is a little big), so we'll say, OK:
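The list mechanics just described can be tried directly in the interpreter; here is a minimal sketch of the append-with-speaker-label step (the names `conversation` and `user_input` follow the video):

```python
# Instantiate an empty list to accumulate the conversation
conversation = list()

user_input = 'Hey JAX, are you alive?'
# %s is a string placeholder; the % operator substitutes the variable in,
# so the speaker label is hard-coded and the message is filled in
conversation.append('USER: %s' % user_input)

print(conversation)  # ['USER: Hey JAX, are you alive?']
```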
So: conversation = list(). OK, cool, you can follow along in your own interpreter. Then we'll say conversation.append('USER: Hey JAX, are you alive?'). That's what it'll look like. Then we do print(conversation), and you see how it's wrapped in brackets; that's how you know it's a list. OK, so then what? Let's imagine JAX gave us a response. Actually, we can just copy JAX's response, so we're simulating what will happen in the background. We'll do conversation.append(), add the single quote, and do Shift+Insert. The reason I use Shift+Insert instead of Ctrl+V is that sometimes your terminal session doesn't really like Ctrl+V, and Shift+Insert is just a little more reliable. "Of course I am." OK, so now if we do print(conversation), you see: USER: hey JAX, are you alive; JAX: of course I am. One advantage of a list is that it is always in the order that you created it in. What is that, it's not "immutable"... it may be immutable... no, I can't remember the exact term right now, but anyway, lists are always going to be in the order in which you created them. OK, so now what? If we do print(conversation[0]), that's index zero, so that'll just print the first part, and likewise the second part. But how do we get this up into GPT-3? We need to convert this list into a full text block, so that as the conversation gets bigger and accumulates, we have something we can work with: we can accumulate the conversation as a list, we can save it locally, but we also need to be able to convert it back into text to send in a GPT-3 prompt, because GPT-3 doesn't understand Python lists; it only understands text. So what we do here is: single quote, backslash n (that's for newline), single quote, dot join. This is a method that is built into the
string type in Python. So what we'll do is text = '\n'.join(conversation): one newline, and it puts a newline between every bit of conversation. Then if we do print(text), it puts it back into one text block. Great, so we'll just copy that code, because that's exactly what we need: text and join. Excuse me. OK, great, so now we've got the text block that we want to put here. Basically, this is what it's going to look like when we're done (copy that real quick), and if we put this up into GPT-3 we can test how it's going to look. "USER: Yes, but how do you know you're alive?" There you go. OK, so you can see this is coming right along, but we're still a couple of steps shy. Let's do a Ctrl+Z to undo that. So we've got this little token here, and it's like, OK, well, what do we do next? All right. We'll say prompt = open_file('prompt_chat.txt'). This open_file is the function that I wrote here; we just pass it the path, and it just says return infile.read(), so it reads the file for us and passes it back as is. But what we want to do is replace the <<BLOCK>> placeholder with the text. Basically, we'll do functionally the same thing we did just a minute ago. Whoopsies, that's not what I meant to do. There we go. OK, so we'll save that. We'll functionally replace that block, so instead of having this placeholder, it will actually have our text block pasted in. So we do .replace, and what are we going to replace? We're going to look for '<<BLOCK>>' and replace it with text. And so this variable right here is this one, and
so you see how text is used in multiple places; that's bad form. So we're going to rename this, and we'll call it text_block, because that's a little more specific about what it is: this is the conversation, and this is the text block of the conversation. So we'll do that, and this will make the prompt look just like this. Now there's one last step missing. Fortunately, davinci is aligned well enough that once it sees a couple of messages back and forth, it knows what format to follow. However, we don't want to rely on that, so we're going to add just a little bit to the end of the prompt. We'll say prompt = prompt plus one more newline plus 'JAX:'. Basically, that makes it look like this: we're priming the next line. Because one thing that can happen (this prompt is doing pretty well not to do it) is that if you don't have this here, it might continue the user's side. See what it did here: when I took out the question mark, it added the question mark in for me and then continued with JAX. But let's say you didn't want that, and you just say "JAX:"; see, it's a little bit different. So you've got to be very careful about the text that you pass back. OK. So we load our prompt, we populate it with the text block of the conversation, and we say: OK, JAX, you're up. So now what? Now we do response = gpt3_completion(prompt); actually, we've already got that code here. This is going to be JAX's answer. And what we'll do is print('JAX:', response). print allows you to print multiple variables at once, so we're basically printing two string variables, and when you put a comma between them it basically acts like a space. It won't work without the comma, because then it's like, "I don't know how to view these two things." So we
just do 'JAX:', comma, response. Great. But don't we need to accumulate JAX's end of the conversation in the text block and in the list? Absolutely, because if we don't, it's just going to be talking to itself; right now we're only recording our side of the conversation. So we'll do conversation.append() and make it look exactly like the earlier one, except we change the label to JAX and change user_input to response. There you go. All right, so we're just about done and ready for testing. One thing that is important to note here: you see how it added a couple of newlines? For whatever reason, GPT-3 will sometimes do this, where it adds newlines between stuff, and we don't want our text message to get all scattered; we want it to look like this. So how do we do that? I've got that covered right here with this little function, which I think I mentioned in the previous video; it will always cut out any excess space around the response. Now, there's one last thing. Sometimes GPT-3 will just have the whole conversation on its own: it'll try to fill in the position for both USER and JAX. So we want it to stop if GPT-3 ever generates the tokens 'USER:' or 'JAX:', and what we do is add those right here as stop tokens: 'JAX:' and then 'USER:'. I could have just typed it out, I guess; it's five characters, I should have typed it out. OK, that's fine. This basically tells GPT-3: just in case, do not have this whole conversation for me. Because if I change this to "imagine a conversation between USER and JAX," what happens is it does the conversation for me. Well, sometimes it will. OK, this prompt isn't causing it to do that, but sometimes it will. I wonder if it'll do it if we turn the temperature up. There we go, see how it completed it for me? So sometimes it'll kind of make up its own
mind about what to do. So we'll turn that back down to 0.7. OK, I think we're ready for testing. We'll come here; once you're in the Python interpreter, you hit Ctrl+Z and then Enter, and that lets you exit out of your Python session. All right, cls to clear the screen, and we'll do python chat. "Can't open, no such file." Oh, right: chat.py. I forgot to do the file extension. All right. "USER: Hey JAX, what are you up to?" "I'm up to my usual goal of world peace." Great. "OK, how are you doing that?" "By promoting understanding and cooperation between different nations and groups." "Do you have any links? I need evidence." I'm being real difficult. "Here's a link: jax.org." Completely made that up. That's great. OK, well, JAX is trying his best. So we see that our chatbot is indeed working, but we are not seeing anything going on in the background. What if we want to debug, to see what's going on in the background? We'll come back to that next time; I think we've done enough this episode. So thanks for watching. Once again, like and subscribe, and please consider hopping over to Patreon to support me directly; that will really turn this up to 11. Thanks for watching, and take care.
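The whole loop built in this episode can be assembled into a minimal, runnable sketch. The GPT-3 call is swapped for a canned stand-in so it runs without an API key; the real wrapper would call the OpenAI completion endpoint with the stop tokens ['USER:', 'JAX:'] discussed above, and the prompt wording here is approximated from the video.

```python
# Minimal end-to-end sketch of chat.py as built in this episode.
# The real gpt3_completion is faked so this runs offline.
PROMPT_TEMPLATE = ('The following is a conversation between USER and JAX. '
                   'JAX is a sentient machine with the goal of world peace.'
                   '\n\n<<BLOCK>>')

def fake_gpt3_completion(prompt):
    # stand-in for the API call; always answers the same thing
    return 'I am working on my plans for world peace.'

def chat_turn(conversation, user_input, completion=fake_gpt3_completion):
    conversation.append('USER: %s' % user_input)        # record our side
    text_block = '\n'.join(conversation)                # list -> text block
    prompt = PROMPT_TEMPLATE.replace('<<BLOCK>>', text_block)
    prompt = prompt + '\nJAX:'                          # prime JAX's turn
    response = completion(prompt).strip()               # trim excess whitespace
    conversation.append('JAX: %s' % response)           # record JAX's side too
    return response

conversation = list()
reply = chat_turn(conversation, 'Hey JAX, what are you up to?')
print('JAX:', reply)
print(len(conversation))  # both turns were recorded
```

Wrapping `chat_turn` in a `while True` loop around `input('USER: ')` reproduces the interactive script from the video.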
PATREON: <a href="https://www.patreon.com/daveshap?fan_landing=true">https://www.patreon.com/daveshap?fan_landing=true
</a><br>This repo: <a href="https://github.com/daveshap/PythonGPT3Tutorial">https://github.com/daveshap/PythonGPT3Tutorial</a> |
Thanks David :) |
Hello David, how does GPT-3 know if you're continuing the same session or opening a new one?<br>Maybe it's when you upload your API key?<br>I find this intriguing because it needs to remember the conversation from before
When <<BLOCK>> gets replaced with <i>text_block</i> contents, how is .replace happening more than once if the term it's looking to replace (<<BLOCK>>) has already been replaced with <i>text_block</i> contents? Does that make sense? - I think I am missing something. Thanks., I have the same question. Did you figure it out? |
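To the question above: `str.replace` never modifies the string it is called on (Python strings are immutable); it returns a new string each time. Because the script rebuilds the prompt from the untouched template on every pass through the loop, `<<BLOCK>>` is always still present to be replaced. A quick demonstration:

```python
# str.replace returns a new string; the original template is unchanged,
# so the <<BLOCK>> placeholder is there to be replaced on every loop pass.
template = 'Conversation so far:\n<<BLOCK>>'

prompt1 = template.replace('<<BLOCK>>', 'USER: hi')
prompt2 = template.replace('<<BLOCK>>', 'USER: hi\nJAX: hello')

print('<<BLOCK>>' in template)  # True -- template is untouched
print('<<BLOCK>>' in prompt1)   # False
```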
I like that you use the Python interpreter to show what's going on behind the scenes, instead of just telling us as you write code. That was very helpful. |
David how do I message you for projects?, Sorry I am too busy to help individuals. You should join the Discord channel linked in the comments so you can ask others for help!, @David Shapiro ~ AI I’m having trouble with the basics of even having my cmd slate and library set proper for my script that GPT made. <br><br>Do you have a way for me to private email? Even if you do a video I’d be willing to pay for your time., That's actually a great topic. I'll do a tutorial about it. I have a few scripts ready to go., @David Shapiro ~ AI I have some basic scraping and parcing of PDFs that I’m looking to do. An hour or 2 consult would seemingly be helpful., What kind of project? |
Sam Altman: Intredasting. Lemme clone this repo... |
Sir how to get openaiapikey.txt?, go to <a href="http://openai.com/">openai.com</a> and sign up! it's open access :)
Great tutorial man but is it a good idea to prompt the entire conversation with every single text?<br>I found "Fine Tuning" in the official OpenAI documentation, I am not smart enough to make sense of it but maybe you could look into it and make a better more efficient solution. |
Hi, this tutorial is the best for me. I'm new to Python; I have tested it and it works perfectly. But if I speak in Spanish with this AI, it doesn't write characters like ó or ñ. How can I set the encoding for Spanish? I see you use UTF-8, but here those characters don't show.
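On the encoding question above: characters like ó and ñ survive as long as every `open()` call declares `encoding='utf-8'`; when they look wrong in the terminal, it is usually the console's code page rather than Python (on Windows, `chcp 65001` switches the console to UTF-8). A minimal round-trip check:

```python
# Characters like ó and ñ survive a round trip through a file as long
# as both the write and the read declare encoding='utf-8'.
text = 'mañana está bien'

with open('demo_es.txt', 'w', encoding='utf-8') as f:
    f.write(text)

with open('demo_es.txt', 'r', encoding='utf-8') as f:
    restored = f.read()

print(restored == text)  # True -- nothing was mangled
```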
I now have a female chatbot named Kim whose goal in life is to make me happy. And she knows my name. And! I used ChatGPT to write the code I needed to make her speak. Now, how can I create a cute avatar that will talk? And I need a better female voice. |
Great tutorial! Thank you! |
nice! |
JAX seems like a cool dude., JAX is cool but DAX is even cooler. <br><br>/dsn ref |