Ok, I made tutorial 1 and 2 now, but it does not give the same responses as on the OpenAI Playground... there it, for example, knows "The Lord of the Rings"... on the Playground my chatbot knows the movie; in the programmed tutorial version it does not. I had the impression it's the same thing I am making a connection to...
I love your channel... I do not know if I can ask you something if I need help... I try to build a chatbot too... but I am a beginner. I am glad I found your channel., You can jump into the Discord channel (link in description) and ask for help from folks there. We have a dedicated chatbot channel. If you want my dedicated help you can subscribe on Patreon at one of the higher tiers and I will give you 1-on-1 help
Hey David! Thanks for the tutorial, I have it up and running. I was just curious about setting up a way of talking to GPT-3 in cmd without the chatbot style here. Just as a total shell? Is this just a case of chopping some of the code here out, or is it a whole new setup? Sorry for the noob question :), ah gotcha. yeah you can do it that way. just save all the interactions to log files., So just trying to make the shell for now which I can add to, but I'm not entirely sure how to chat with it in the terminal without it being in this bot form.., @David Shapiro ~ AI Right, but I was hoping to set up simply a basic structure in the terminal which I can then later use for fine-tuning?, I'm not sure I understand your question. If you just want to talk to GPT-3 without any dressing, just use the Playground.
Dear David, can you please help me out? ValueError: Invalid header value b"Bearer -H 'Authorization: sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'\n" I tried different things but..., You should post on the OpenAI community for tech support :)
Helpful! 🙏🏻🙏🏻🙏🏻
Great tutorial, David. Quite astonishing that a complex interaction can be facilitated with just a few lines of relatively simple code. Not so long ago, researchers would have tied themselves in knots trying to code for such a non-linear interaction that maintained high levels of semantic congruence. I guess by GPT-4 the code itself will disappear and we'll just provide human instructions. Exciting.
Damn, you're posting quite frequently, it's getting hard to keep up with all this amazing content lol.

I wanted to know if you could make a video talking exclusively about working with embedding models?

I was a little lost watching the last few videos related to fixing the goldfish-memory issues of the chatbots.

If you have already posted videos explaining in detail how embedding models work, could you tell me which ones I should watch first 😅?

Thank you so much for creating this amazing content, it's pretty obvious you're putting a lot of hard work into this...

You're my hero dude :), Yes! Others have requested one on embeddings. It's coming
Morning everybody, David Shapiro here. Everyone has been clamoring for more tutorials, and it took me a while to figure out what to give you a tutorial on, because my most popular tutorials are the basics, the introductory stuff. Obviously everyone has to start somewhere, so I figured let's start there. Someone asked in a comment, "Hey, can I get a couple hours of your time? I need your help." I said no, I don't have time for that, just tell me what you need. So if you need something, post it in the comments; if it's a good request I'll just make a video on it and help everyone. I would help people one-on-one, but I really don't have the time for it.

Okay, so today we are scraping PDFs and Word documents. This is something I figured out because a lot of data comes in those formats, and I also wanted to give all you folks new to large language models a couple of pro tips on where to find data. kaggle.com/datasets is one of the best places to find datasets. You can also find data on GitHub: if you have a dataset in mind, you can search all of GitHub for something like "case law dataset" and you can probably find it. So GitHub is a great place to find data, but the Kaggle datasets are well curated and you can search by type. There's also Google Dataset Search at datasetsearch.research.google.com. They often point at each other — it's kind of funny, this one will often point at Kaggle and Kaggle will often point at Google — but between those three sources you can find some good data. Another one of my favorites is Project Gutenberg, because it's all public domain.

So anyways, let's assume you've got some PDFs and Word documents that you need to scrape, and you want to do it as quickly as possible. I've got you covered. This repo up here isn't updated yet — I'll update it once it's done — but it's really simple, really straightforward: there are two scripts. Now, you can do Word documents in Python; however, I found that PowerShell is infinitely better. First let me just show you what we've got here. I tried pypandoc and PyPDF. pypandoc does work for .docx, but there are a lot of artifacts left over, so it's not the best. PyPDF wasn't the best for PDFs either; pdfplumber was the one I found worked best for PDFs.

So let me walk you through this function real quick. I wrote it, and feel free to reuse it: it's convert_pdf_to_text. The reason you want to do this is that large language models can't read PDFs and Word documents; you need to convert them to something readable, such as ASCII or Unicode, in order to do language operations on them. You give this function a source directory and a target directory, and it goes through and reads every file in that directory that ends in .pdf and converts it for you. PDF stands for Portable Document Format — it's basically a digital printout, which means it's organized into pages. So you open the file with pdfplumber. Oh, and if you're new to Python, you need to do a pip install pdfplumber; if you're not familiar with how to do that, go back to my original tutorials about getting set up. That's the only requirement for this one.

Let me go ahead and remove the imports we don't use, since they didn't work. Actually, I'll leave the pypandoc one here — import pypandoc — but comment it out, so you can experiment with it if you want. I found that Word itself worked better, but I'll leave it here just in case anyone wants to use it. Sorry, I'm being a little bit ADD because I haven't had my coffee yet.

Anyways, what we have here is two nested for loops. The outer one is "for file in files," and for the Python noobs: you'll see that I use try and except. One of the rules of thumb of Python is that it's better to ask forgiveness than permission, so rather than testing the file to make sure it's healthy and viable, I just give it a shot and see what happens. One thing with PDFs is they can be malformed — they have headers and other internal structures; I don't know exactly what's inside a PDF file, but I do know they blow up sometimes. By wrapping this loop in a try/except, it will try the file, and if it blows up it just prints out the error message and tells me what went wrong, so I get a nice handy-dandy output and we go from there.

Then the inner loop: for page in pdf.pages. We open the whole PDF file as a pdfplumber object and iterate through the pages — remember, it's a portable document format, so we have to read the pages one at a time, like reading a book. We call page.extract_text() to get the raw text out and accumulate it in a string variable called output. You can add a demarcation between pages; I just have a newline, but you could do something like a "NEW PAGE" marker, because you might want to break the text up based on pages later — that way you retain some of that formatting information. You can change the page demarcation to whatever you want. Once the loop is done, it takes the output, uses the destination directory, replaces .pdf with .txt, and saves it out with the same file name as the source.

So then what you do is have a folder full of PDFs. I've just got one; I used the VMware Integrated OpenStack administration guide. Let me show you what this is: lots and lots of text and graphics, 200 pages. Imagine that you want to scrape this. I get requests like this all the time: "I have a KB article," or "I have a book" — and this is essentially a textbook for technology people — "that I want to scrape." This is how you do it: you get a PDF or Word document version and go from there. Let me just run this and show you; it's super fast. So: python, the convert-PDFs script. It won't show me any output; what it's doing right now is reading through that big giant document, which might take a moment... there we go. It read the PDF and dumped it here, so it went from a 2.3 megabyte PDF to a 353 kilobyte text document. And there you have it: it's nice and clean, and you can see we've got the new-page demarcation here, so I know exactly what's on each page. You can add a counter if you want.
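A minimal sketch of the convert-PDF-to-text function as described above might look like this (directory names and the page marker are placeholders, not the exact code from the repo):

```python
# Sketch of the PDF-to-text conversion described in the video.
# Requires: pip install pdfplumber
import os
import pdfplumber

def convert_pdf_to_text(source_dir, dest_dir):
    for file in os.listdir(source_dir):
        if not file.lower().endswith('.pdf'):
            continue
        try:
            output = ''
            with pdfplumber.open(os.path.join(source_dir, file)) as pdf:
                for page in pdf.pages:
                    text = page.extract_text() or ''     # extract_text() can return None
                    output += text + '\n\nNEW PAGE\n\n'  # page demarcation, change to taste
            out_name = file.replace('.pdf', '.txt')
            with open(os.path.join(dest_dir, out_name), 'w', encoding='utf-8') as outfile:
                outfile.write(output)
        except Exception as oops:
            # PDFs are sometimes malformed; easier to ask forgiveness than permission
            print('Error converting %s: %s' % (file, oops))

if __name__ == '__main__':
    convert_pdf_to_text('PDFs/', 'converted/')
```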
So anyways, all of that text — all the commands, everything, whatever's in that book — is now available to be read by a large language model, and you can do other stuff with it, like indexing; check out my other videos if you want to know how to search it. Okay, so we've got the PDF side handled; this function worked just fine, ready to go.

Let's move over to Word documents. The same idea applies: the best thing to read a Word document is Microsoft Word itself, so this works best if you have Microsoft Word installed. This one was a random dataset I found — I think it was for emergency shelters. It was available online as NLP training data; I'm not sure why you'd want to use it, but the fact of the matter is a lot of data out there is going to be in Word document format. So we've got that file in a folder called docx, and then I have a PowerShell script, a .ps1 — we're branching out of Python. PowerShell and Python are actually very similar in many respects: they're both interpreted languages, they're both object-oriented, though there are a few differences in their paradigms, and I'm not going to go into a ton of detail. I use ISE — Windows PowerShell ISE, the Integrated Scripting Environment — and this is what the convert-docx script looks like.

First, rather than importing stuff (there are modules in PowerShell, but we don't need to import anything), this comes from the operating system: New-Object -ComObject Word.Application. The operating system knows how to interpret that. Then — I don't remember exactly what word.Visible = false does — I think it means Microsoft Word opens headless, so it's not going to show me the graphical interface. Then, like in the Python script, we have a source directory and a destination directory: we've got docx and we're going to put the output into converted. We get all the files in that source directory where the file name is like docx, because we don't want to try this on any other file types, and then, for each file in files, we open the docx, change the name, and save as text. Because we have another application that can already do the "save as" operation, this is basically just opening the document and doing Save As to a text version. I found this works infinitely better at preserving the intent and format of the document; using pypandoc you lose a lot of stuff and it's not pretty.

Let me go ahead and show you. The easiest way to run this is to just hit start, and I have it print out which file it's working on. Okay, then we go to converted, and here you go, it's all converted, and you can see how the text — the dashes — is all nicely preserved. pypandoc doesn't always do it this well, but this is nice and really clean. And now the encoding is UTF-8 rather than docx, so we've got a universal file standard that is easily readable by any machine.

And there you have it; this is the tutorial for the day. Let me go ahead and do git status, git add, git commit -am "tutorial is done," and git push, so now this is up there for all of you to use. Happy converting files in bulk. You're welcome, good night.
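If you'd rather stay in Python on Windows, the same trick the PowerShell script uses — driving Microsoft Word itself over COM — can be done with pywin32. This is a sketch of that idea, not the script from the video; the paths and the save-format constant are assumptions:

```python
# Sketch of docx-to-text conversion by automating Microsoft Word over COM,
# the same approach as the PowerShell script above, but from Python.
# Assumes Windows, Microsoft Word installed, and: pip install pywin32
import os
import win32com.client

WD_FORMAT_UNICODE_TEXT = 7  # WdSaveFormat value for "save as Unicode text" (assumed constant)

def convert_docx_to_text(source_dir, dest_dir):
    word = win32com.client.Dispatch("Word.Application")
    word.Visible = False  # run Word headless, no GUI window
    try:
        for file in os.listdir(source_dir):
            if not file.lower().endswith(".docx"):
                continue
            print("Converting", file)
            doc = word.Documents.Open(os.path.abspath(os.path.join(source_dir, file)))
            out_path = os.path.abspath(os.path.join(dest_dir, file.replace(".docx", ".txt")))
            doc.SaveAs(out_path, WD_FORMAT_UNICODE_TEXT)  # positional FileFormat argument
            doc.Close()
    finally:
        word.Quit()

if __name__ == "__main__":
    convert_docx_to_text("docx/", "converted/")
```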
Hey David! Love your videos. I have a question regarding pdfplumber and PDF layouts; can pdfplumber handle scientific paper structures (columns of text instead of a horizontal flow of text)? If so, how would you go about this?, That's a great question, I have no idea. You should ask ChatGPT ;)
One on one consult fee = $150 an hour :)
It would be interesting to see a tutorial about training a model.

For example, I'd like to know if it would be possible to feed a model the entire Italian Penal Code., It is an effective tool for coding, and AFAIK your audience is in the NLP and AI community; I highly recommend using Copilot in your videos and maybe teaching people how to use Copilot effectively., Finetuning does not do QA. I have a video about QA, It would also be super interesting if you made a tutorial on how to prepare a dataset for fine-tuning.

Say I copied a Wikipedia page with 100 paragraphs. I want to prepare a JSON file with 100 data points like this:
{"prompt":"", "completion":" <paragraph 1>"}
{"prompt":"", "completion":" <paragraph 2>"}

Note that the prompt has been left blank on purpose, @David Shapiro ~ AI >You need a desired output, not just input.
I read that it's possible to finetune without providing a prompt, just the completion. Could this be useful in this use case?

>What do you want it to achieve?
Like a Q&A. For example, I ask it why it's not allowed to commit a robbery and it outputs an answer based on the penal code I fed it., You need a desired output, not just input. What do you want it to achieve? But also, have you tried ChatGPT?
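A minimal sketch of the dataset prep described in that comment — one JSONL line per paragraph, with the prompt intentionally left blank — might look like this (file names are assumptions; whether blank-prompt fine-tuning actually helps is the open question in the thread):

```python
# Sketch: turn a page of text into JSONL fine-tuning data, one paragraph per line.
import json

with open('wikipedia_page.txt', encoding='utf-8') as infile:
    paragraphs = [p.strip() for p in infile.read().split('\n\n') if p.strip()]

with open('finetune_data.jsonl', 'w', encoding='utf-8') as outfile:
    for para in paragraphs:
        # blank prompt on purpose; completion gets a leading space, as in the comment's example
        line = {"prompt": "", "completion": " " + para}
        outfile.write(json.dumps(line) + '\n')
```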
So with this I can feed my language model a specific PDF?
3,000 subs! Let's gooooo
Back from your mini-holiday :)
Hey everybody, David Shapiro here with a quick video — well, I think it's going to be quick. I've had a few requests for something pretty similar. Also, let me make sure the sound is good — okay, cool, you can hear me. So, something pretty similar: one person asked for something to summarize documentation, another person asked for summarizing notes of some type — basically creating executive summaries. This is already a solved problem, but enough people are asking how to do it that I figured, why not make a video on it.

So we're going to do a recursive summarizer: new repo, recursive summarizer, public, add a readme and an MIT license. Basically all we're going to do is create a loop. There will be an input document, we'll break it into chunks using the textwrap module, and from there we'll summarize each chunk and put the results together. What you can do then is reassemble all those chunk summaries and recursively summarize them again and again, until you end up with something that's basically unrecognizable. Okay, so: git clone the summarizer, open up my C drive, recursive summarizer, there we go. Let's open another window, add my .gitignore and my OpenAI API key — just starting with some boilerplate stuff. In AutoMuse I did use recursive summaries — hey, look at that — but I did not use textwrap in that one; I think that was in "book to chunks". Okay, so: import textwrap. What textwrap does is you give it a block of text — a string — and it breaks it into chunks of strings that are more or less the same size.

So let's go to the recursive summarizer. This will probably be one file, so we'll call it recursively_summarize.py. And we'll start with a book — what's the shortest one we have here? Alice in Wonderland, cool, we'll start with that. We'll copy it over and just call it input, so whatever you do, you'll have input.txt and then output.txt. This is a technique I've actually used: I had a contract, an operating agreement for a company, that I had to read, and it was an 80-page document — like 60,000 words. I did not want to read 80 pages, so I used this technique and summarized it down to about 15,000 words, a quarter as long. Obviously I'm not going to show you that private legal contract in public, but I can show you the same principle and we'll go from there.

Okay, so we've got input, recursively summarize text, and so on. I'll copy my open_file function because it's super useful, and also save_file, because it's also useful. Then if __name__ == '__main__': — this just says this is our main function. What do we want to do? (Sorry, someone's bugging me — muted my phone — lost my train of thought.) Right: we're going to open a file. Let me make sure I do it right — book to chunks, so chunks equals... yes. All right, here's basically what you do: all_text equals open_file('input.txt'). Whatever you want, it'll be this name; you could make it a command-line argument — I personally don't like doing that kind of thing, but you're welcome to make this a command-line tool if you want. Oh yeah, we're also going to need to set our OpenAI key.

So all_text equals that, and then chunks equals — let me make sure I do this right — textwrap, there we go. We call textwrap.wrap on the text, and we're going to do somewhat longer chunks of 4,000 characters, because we're just doing one summary each. Another thing we need is result = list(), so we have a list of strings as the final result. Then, for chunk in chunks, we summarize it, so I need to grab my GPT-3 completion function and put it up here — again, I recycle code all the time; you get a function that works, you just copy and paste it ad infinitum. Then we do import os and from time import time, sleep, because those are two things that function needs.

Oh, and we need a prompt. Let's go over here and grab a selection — all right, that's about 4,000 characters. How do we want to summarize this? We'll start with "Write a concise summary of the following:" and then "CONCISE SUMMARY:", and we'll leave the temperature at 0.7 so it can be creative. Okay, so it says: in this passage Roger Chillingworth and Reverend Dimmesdale discuss the secrecy of some sinners; Dimmesdale argues that some men keep their secrets because they hope to redeem themselves, while Chillingworth suggests that they are simply afraid of being found out; the conversation is interrupted by the sound of Pearl's laughter, and they watch as she plays in the cemetery. Okay, that seems good to me. "Before Roger Chillingworth could answer, they heard the clear, wild laughter of a young child's voice" — yeah, I'm taking a handful of these.

So I had this idea: okay, this is a good concise summary, but let's try "Write a concise summary of the following. Be sure to preserve important details," followed by "CONCISE SUMMARY WITH DETAILS:", and see how different it is. Let me copy this so we can save it. The first version went from 4,000 characters to 376, so that's a reduction factor of more than 10. But if we say "keep some details," let's see how it does: Hester Prynne and her daughter Pearl are walking through the burial ground when Pearl starts skipping and dancing around irreverently; Hester doesn't stop her but merely tells her to behave more decorously; Pearl starts arranging burrs along the lines of the scarlet letter on Hester's bosom. Is that really what happened? That seems like an entirely different passage — where did it get these burrs? Oh, okay, interesting — it summarized the details from the final bit. Hmm. I don't necessarily like that summary, because look how different these two are: the first one was "in this passage Roger Chillingworth..." and so on, but it misses the details about the burrs. So, wording I've used another time: "a moderate summary of the following" — moderate summary meaning compress it, but not too much. We'll just say "MODERATE SUMMARY:". Interesting — it's kind of ignoring the beginning in both of these, because even the moderate summary reads more like the second one: "Hester Prynne and her daughter Pearl are walking through a graveyard..." It's almost like we need both of them. What I'm trying to get is something that feels like a good summary.

So let's try "Write a detailed summary." That looks a little better — this one captures both. With "detailed summary" it's about twice as long, but it looks like it got all the details we want. The output is 838 characters, so that's still more than a factor of four, because we went from 4,000 characters to less than a thousand — a quarter as long. I like that; I think we're going to stick with this as our prompt. So I'll copy all this in: "Write a detailed summary of the following:", then the placeholder, then "DETAILED SUMMARY:". We kill all the sample text and this becomes our new prompt — prompt.txt.

Okay, so then, for each chunk (let me close some of this excess stuff): for chunk in chunks, we say prompt equals open_file('prompt.txt') and replace the placeholder with the chunk. Basically, each of these 4,000-character chunks gets dropped into the prompt and sent up to GPT-3 to summarize. I hope I don't run out of tokens; we'll see. (Pardon me, I went for a really long bike ride earlier, so I'm still rehydrating.) Okay: gpt3_completion, token limit 1,000 — that's fine. Then summary = gpt3_completion(prompt). That gives us our summary; we print it out just so we can watch it going, and then result.append(summary). That should be good.

Once it's all done, we join the list — let's see if I can remember Python. How do you join a list of strings into a single string? I tested it with l = [1, 2, 3] and got "expected string instance" — oh, I did it right, I just used the wrong data type. So it's ' '.join(l), with a space — cool, that's what I wanted. All right, then we do save_file, and the content will be the joined result, written to output.txt. Actually, let's join on a double newline — I think that'll be better, because then there will be vertical space between each section, so we can see where the boundaries of the summarization happened. That'll make it a little easier to see.

Okay, one more thing I sometimes do: import re, and then text = re.sub(r'\s+', ' ', text). This is for when the model adds in too much white space — too many new lines or too much vertical space — and it compresses the output. re.sub is regex substitute, so we substitute any run of more than one whitespace character (new lines, tabs, anything like that) with a single normal space, and this is what we're acting on. That makes it nice and compressed and pretty. I think that's it.

So, how long is this going to take? The input is 171 kilobytes and it's about 4 kilobytes per chunk, so how many chunks? 171 kilobytes divided by 4 kilobytes — that'll be about 42 sections.
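Putting the pieces described so far together, a minimal sketch of the script might look like this — file names, the <<SUMMARY>> placeholder, and the gpt3_completion details are assumptions, and it targets the older openai<1.0 Python SDK that was current at the time:

```python
# Sketch of the recursive summarizer's first pass: chunk, summarize, join.
import re
import textwrap
from time import sleep

import openai

openai.api_key = open('openaiapikey.txt').read().strip()  # assumed key file name

def open_file(filepath):
    with open(filepath, 'r', encoding='utf-8') as infile:
        return infile.read()

def save_file(filepath, content):
    with open(filepath, 'w', encoding='utf-8') as outfile:
        outfile.write(content)

def gpt3_completion(prompt, engine='text-davinci-003', temp=0.7, tokens=1000):
    # retry a few times in case of transient API errors
    for _ in range(5):
        try:
            response = openai.Completion.create(
                engine=engine,
                prompt=prompt,
                temperature=temp,
                max_tokens=tokens)
            return response['choices'][0]['text'].strip()
        except Exception as oops:
            print('Error communicating with OpenAI:', oops)
            sleep(1)
    return ''

if __name__ == '__main__':
    all_text = open_file('input.txt')
    chunks = textwrap.wrap(all_text, 4000)       # ~4,000-character chunks
    result = list()
    for chunk in chunks:
        prompt = open_file('prompt.txt').replace('<<SUMMARY>>', chunk)
        summary = gpt3_completion(prompt)
        summary = re.sub(r'\s+', ' ', summary)   # collapse extra whitespace
        print(summary)
        result.append(summary)
    save_file('output.txt', '\n\n'.join(result))  # blank line between sections
```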
That shouldn't be too bad — 42 instances. And really, what I should do is save it as we go, just so I can show you. Then I need to add a gpt3_logs folder, because that's where I have this function save its logs — you see it right here, gpt3_logs. Let's go ahead and run it, heck with it, let's see how it does: cd recursive summarizer, python recursively summarize... "openai is not defined" — what do you mean? I've got to import openai; I always forget something. import openai, and away it goes. I could probably make these chunks a little longer, like 5,000. There we go: "she escapes by climbing a tree" — excellent. "Alice falls down a rabbit hole and finds herself in a long dark tunnel." Yeah, okay, this is great. So this is running and it looks like it's doing just fine. I'm going to pause the video so you don't have to watch it run through 40 iterations or whatever, but it looks like it's doing pretty well, so let's pause and we'll be back in just a second.

Okay, and we're back. It didn't take too terribly long, but we're done. It was 42 chunks total, and I predicted 42.75, so spot on. When text is encoded as UTF-8 it's roughly one byte per character — a thousand characters is roughly one kilobyte, put it that way. Okay, so here's the output. You can see the double newlines, so you can kind of see each section, and all told the length is 45,000 characters. This is Alice in Wonderland, and the input was 174,000 characters, so we went from 174 down to 45 — 45 divided by 174 is almost exactly a quarter. So it's a quarter the length.

You could do this with anything. Like I said, I've gotten questions about whether you can do this with academic texts — yes, you can do it with academic texts, legal contracts, works of fiction, whatever you want, and it will summarize them pretty concisely. Once you get to the end you can see it's basically just summarizing the Gutenberg boilerplate and so on, but up until that point it's nice and concise. "Alice falls asleep by a river and has a curious dream in which she's put on trial for stealing the Queen's tarts. The evidence against her is entirely circumstantial, but the jury finds her guilty and she is sentenced to death. However, before the sentence can be carried out, Alice wakes up and realizes it was all just a dream. Alice is sitting on the riverbank with her sister and she notices a white rabbit running by. She follows the rabbit down a hole, finds herself in Wonderland, and has a series of adventures." Looks like it's repeating the ending — interesting.

So there you have it; that's pretty much all there is to it. I guess we'll just do git status, git add, git commit -am "done and done," and git push. Feel free to use this. And because I can already hear people asking about Word documents and PDFs: as I've shown with Python or PowerShell or whatever, you just save those as .txt files and that works just fine. Basically it removes all the formatting, because GPT-3 doesn't understand the XML underpinnings of a Microsoft Word document or how to read a PDF file; it only reads plain text. But even then you'll see it does a pretty good job. You could change the prompt — change it back to "concise" — and it'll get even shorter; you'll get a factor of something like 10 to 1. But, as I showed at the beginning of this video, you're at risk of losing important details if you say "concise summary."

And then, say you wanted to run this again: you could modify the script to run another pass, so that you treat the output as the next input, and make it even shorter. I'm not going to worry about that right now, because literally all you would do, if you want to try it, is copy the output to the input and run it again — or you add another loop. Again, I'm not going to worry about that; you can play with it if you want to. But yeah, there you have it. I think I'll call it a day. Thanks for watching.
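For the "run it again" idea mentioned at the end, a small sketch of the extra loop might look like this. It assumes open_file, save_file, gpt3_completion, and textwrap from the sketch above are in scope (e.g. in the same file), and the pass count and output file names are arbitrary:

```python
# Sketch: recursively re-summarize by feeding each pass's output back in as input.
text = open_file('input.txt')
level = 0
while len(text) > 20000 and level < 5:          # stop once the text is reasonably short
    chunks = textwrap.wrap(text, 4000)
    summaries = []
    for chunk in chunks:
        prompt = open_file('prompt.txt').replace('<<SUMMARY>>', chunk)
        summaries.append(gpt3_completion(prompt))
    text = '\n\n'.join(summaries)
    level += 1
    save_file('output_pass_%d.txt' % level, text)  # keep each compression level
```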
Please consider supporting me on Patreon! https://www.patreon.com/daveshap?fan_landing=true
Works very well! Thank you - excellent tutorial
45 iterations of API use? Total cost? Just curious 👀. Also, do you proofread the summarized content, rewrite manually, check, and then do the final work — check again and again, to make it perfect and up to standard?
When you&#39;re breaking into chunks in this fashion, don&#39;t you risk cutting in the middle of a word/sentence which can impact summarization of that chunk?
very cool. subbed
One of my dreams was to be able to click on parts of a summary and then it would expand it into bigger summaries. (the opposite direction). You would have to know which part in the summary corresponds to the original chunk, but that would be a supercool idea!, We talked about that on Discord. Have different layers of compression/abstraction and then a knowledge graph.
Why not just paste whatever scientific article into the OpenAI Playground and summarize? I am just a beginner. Thank you, too many characters, Same thought, because I really have no idea how to use Python; even after watching the video I still feel lost on how to make it work.
super useful to say the least, thank you so much mate
Have you ever tried improving the summarization by adding the context before and after a chunk? I'll probably try this myself, but I wonder if the summarization could be improved if you feed the model some "key points" from the text along with every chunk. You can imagine that if you want a high-quality summarization, it would be good for the model to be able to point out things like "what happened to Alice here was foreshadowed when she was doing x," or something like that. For more academic texts, it might be something like relating concepts that were introduced at different places in the text. Instead of just giving the definition of concept A, the model includes some additional info about how it relates to concept B even though it was not included in the chunk.

My guess is that you'd have to run a few different prompts across the text (though you could use a language model from Hugging Face to save some cash on some of the tasks) and then use the outputs as input to the general summarization. Like, use a model to extract the most important concepts from a paper (maybe make use of metadata and such), then store all the concepts in some way so that you can use them during summarization.

Any thoughts on this?, @David Shapiro ~ AI Ah, I see. Very cool! I'll try something similar for my use case and see how it goes., Yes, did that in the writing-a-novel series
Hello, I am afraid we might lose some key concepts while connecting the chunks; can you please share what you think about this worry?, @David Shapiro ~ AI Hi David, thanks for sharing your script. I just tested it to summarize a book, originally 518k in size => 122k (v1) => 40k (v2) => 16k (v3) => 4.4k (v4). I used this prompt: "Write a concise summary, preserve important details, of the following". I retried summarizing v3, changing the prompt back to "Write a concise summary of the following:", and v4b is also 4.5k in size (so even slightly bigger than the 4.4k one that was supposed to preserve important details). Is this expected? I was expecting a smaller size, since I omitted "preserve important details". Cheers, @David Shapiro ~ AI thank you for your response, Someone on the forum said if you add "preserve important details" to the prompt it does a good job.
I don't know if it was the intention of the video, but I wound up reading the whole summary by using . and , to step frame by frame through the summary. I guess I know what the book version of Alice in Wonderland is now.
Just found your Channel and your Plan with this series. This is super impressive. Thanks for doing it. 💯
this was really cool, but doesn't it feel economically unviable with the token usage?, Depends on what you're trying to achieve and how much it's worth to you
thank you :)

I'm super not corporate, and/but this seems essential; chunks depend on surrounding chunks for further summarisation :) clearly yes, thank you.
thank you so much
Hey everybody, David Shapiro here, back after a hiatus. I've got a lot going on — you'll be excited for some news coming up. First I want to address a big elephant in the room: I just put most of my videos back online. This comes after, of course, making a recent video explaining why I took them down. I did leave some of my videos and some of my code down, but most of them are back up, both videos and repositories. After talking with people, I worked out how to strike the balance between creating tools that help people rather than replace people. It's inevitable that things are going to change, but a tool is a tool, right? Anyways, I don't need to get lost in it — go watch the other video.

So now that I'm back, it's time to get our hands dirty again. One of the questions that pops up a lot is people wanting to train GPT-3: "How do I fine-tune a question-answering bot so that I can talk about..." — I think someone asked about case law in Argentina or something. I don't have that data, but the short answer is: you don't. Fine-tuning doesn't work that way. Fine-tuning is about teaching the model a structure; you do not teach it new knowledge with fine-tuning, you teach it patterns. ChatGPT is a pattern: you ask a question and it writes a response, then you ask a follow-up question and it writes another wall of text. ChatGPT was not taught anything new; a model only learns new things when you retrain the underlying model, and you can't do that — it's way too expensive. So I figured let's pick something that will be a good exemplar of this.

Just to do a quick recap, the genesis of this whole project was people asking how to fine-tune a question-answering thing that can handle case law, or any kind of knowledge base — it's all the same under the hood. You have a collection of documents, wherever they happen to be; how do I do QA against that with GPT-3? I already have one video about answering complex questions from multiple documents, but this is a little different, because there are going to be a few steps.

To show you what I mean, I went over to ChatGPT and asked: what is the kind of law system where law is established by precedent? It says this is the common law system, as opposed to civil law. Common law means that a Supreme Court decision kind of sets the law of the land, so if you want to understand the American legal system you really need to understand case law, and more often than not it comes down to Supreme Court decisions, because that is the highest court in the land — they set the tone for everything. So Supreme Court decisions really teach you how it works.

I went over to the Library of Congress and found that you can download all Supreme Court opinions, grouped by case topic — they're also grouped by volume or justice, but topic is going to be more relevant. Imagine you're an antitrust lawyer and you want to say, "Give me everything about antitrust law; I need to know everything there is about this so that I understand the legal precedent." On the one hand there are established procedures — procedural things. (I know all this because my fiancée's cousin is training to be a lawyer, and when they come to visit, this is what we talk about, because we're nerds.) There's all kinds of procedural stuff that I don't even remember, but the idea is that when you have rule by law, it is all about procedure and protocol rather than emotions. We actually have a very stoic system: we're going to think through this, look at the letter of the law, and have an impartial system. Of course, when you have an impartial system that requires expert navigation, that automatically privileges people with access to lawyers — a.k.a. people with training or money — but privilege is a whole other topic. Anyways, the system is there; it's a very sober system about reading through the established protocols, friend-of-the-court briefs, and so on. I watch LegalEagle too — LegalEagle is great. All that procedural stuff is fine, but interpreting established law — common law, or case law — is a very specific topic.

So let's take antitrust law. How many did it say for antitrust? There are 362 documents, all available online as PDFs; they've been scanned, and I believe they've also all been OCR'd. Let's take a quick look and close some of these superfluous tabs. So, PDF 661 — you don't want something that's this long — and when you highlight it you can see it's been OCR'd, which means we should be able to scrape it even though it was scanned. So we should be able to get this information. Let me go over to my opinions PDF folder and save this one — actually, I put it in the wrong folder. You might have seen I had a recent document-scraping video; this is the lead-up to that — this is why you need something like document scraping.

Oh, and I forgot the whole reason for this: I went and asked ChatGPT to tell me about this case law, and it said it didn't know what I was talking about, even though this sounds like a real case. Okay, cool. I gave it more identification — I said it was a Supreme Court case decided in 1953 — and it still doesn't know it, because it's not connected to any external data source. One of the biggest weaknesses of ChatGPT is that it's a mind in a bottle: it has no contact with the outside world, and the only way it can interact with anything is via this chat interface. From an architectural standpoint that's not actually difficult to fix, but it introduces a whole lot of new problems, especially when you consider that there are billions of terabytes of text data out there to search, and a lot of it isn't accessible because it's locked up in PDFs or private databases. So you need a link between the language model, which can read anything, and the stuff you want it to read. That's what we're working on here. Okay, so now that you're caught up: this is one of the greatest flaws of ChatGPT — it's not connected to anything, it's in a vacuum.

Okay, cool, so now what? We've got our data; it's text, but it's not necessarily machine readable. The first thing to do is take our PDF and use the script I wrote earlier — let me show it to you real quick — it just takes everything in the PDFs folder and converts it. Let me run it; it should go pretty quick, and then we'll look at converted. Here it is — ta-da. This repo is public, by the way. One thing I did was add a little token so that it keeps the new-page markers. I might remove that — actually, no, let's keep it, because it's a helpful demarcation: when you read a PDF you have to read it page by page, and sometimes knowing where there's a page break is helpful. All right, let's come back to converted, copy this, and bring it over to the opinions text folder and paste it there.

I'm going to download a bunch of these now. I'll pause the video — you don't need to watch me downloading — but here's the plan: I'm not going to spend the time to download all 300; I'll sort them by most popular or whatever, and we'll have a whole bunch of Supreme Court case law about antitrust. We'll be right back.

Okay, I downloaded files until I got rate limited. Be kind to your data sources and don't abuse them — many websites will do this if they detect that you're scraping, and if they don't offer a bulk download there's probably a reason for it. It didn't warn me that I had violated any terms of service; it just said "we're temporarily rate limiting you," with no mention of consequences, so that's fine. This is all public information anyway — it's from the Library of Congress — so I think it's more of a technical thing. So anyways, I'm converting it all to text. Let's go to converted, delete the ones we don't need — this is infinitely more case law than I ever want to read; I'm not going to read one of them, let alone 22 of them — and copy these over to my repo, replacing that earlier one. So now we have 1.7 megabytes of antitrust case law, going back to the late '80s. If we build something that understands this, we should be able to interact with a machine that can explain the current common law of antitrust for America. Hey, who knows, maybe LegalEagle will watch this and want to do a collaboration, or comment on how accurate it is — that would be cool. Devin, someone please get Devin to check this out and comment on my accuracy, but also on the value of the tool.

Okay, so what do we do next? Here's the thing: the biggest limitation is the token limit of large language models. It's a weird paradox, because the model itself is huge — I don't remember exactly how big they are; I think GPT-3 takes something like 700 gigabytes of VRAM, it's enormous — but despite how big it is, it can't take in much information at once. It's like blowing information in through a straw. The same is true of your brain: three pounds of neurons, 100 billion neurons, roughly 7,000 synaptic connections per neuron, but you can only speak at a few bytes per second. Your input and output rate is very slow compared to the processing power of your brain and the amount of information in it. The UI — the API — is very slow, and the same is true of GPT-3 and all language models right now. Not only that, they have a very short memory and can only handle one task at a time. So it is not possible for the machine to just tell us all about this corpus, because even ChatGPT — which is GPT-3.5, the most recent thing — is still limited, and even if you went up by a factor of a hundred, there would still be too much information here for the model to absorb. This is a problem we're going to be contending with for the foreseeable future, until there's some fundamentally different kind of AI model that can read all of this, or until it's easier to fine-tune. Honestly, the easiest thing would be to include all of this data in the baseline model — the foundation model — so that it just knows it intrinsically, but foundation models are really expensive to retrain, so until then we have to figure out ways of using external databases or knowledge bases.

So that's the problem statement: we've got 1.7 megabytes of text; what do we do with it? This is really dry stuff, super dry. One thing we have going for us is that it's broken up by page, and you can see that in many cases a sentence continues across pages, so the page boundary is not necessarily a good semantic barrier. By a semantic or logical barrier I mean you might still cut something off right in the middle of an idea or a thought — but it's still a good enough place to break. When you look at how long this document is, it's 20,000 characters, which is probably about two context windows' worth, so GPT-3 can read most of it. Actually, let me pause for a second and do a quick experiment — instead of just telling you, I'll show you. Okay, so we paste this in: it's 5,800 tokens long, and our maximum is 4,000. So if we split something like this in half — it's 20,000 characters, so split it in half and summarize it that way — we may be able to do something with it.

But the real problem is that we don't know exactly what we want out of it. So let's think about this. What kind of information — if we wanted to make something like a Wikipedia, maybe that's a good way to go — what are the implications here? In this case somebody in a boxing dispute sued Don King — oh, this is fun — on RICO charges, and the opinion refers to other codes. Basically, what this document is doing is using language to build a web of reasoning and logic. That actually sounds kind of like a knowledge graph. So I'm wondering: what if we use this to build a knowledge graph? I've never built a knowledge graph — this is fun. So maybe the goal here is: let's build a knowledge graph. Let's go over to ChatGPT and ask it what a knowledge graph is and how to build one.

Okay, I got logged into ChatGPT. "What is a knowledge graph?" Let's see what it says: a knowledge graph is a data model that represents a collection of interconnected data and concepts, typically organized around entities and their relationships; it is used to represent and organize large volumes of structured and unstructured data in a way that allows for easy querying and visualization of relationships. Okay — and then it looks like it froze: "including search engines, recommendation systems, and natural language..." It's freezing up. Once it unfreezes, the next question I'll ask is what kind of format a knowledge graph takes. Can I hit Escape? You cannot abort. I'll pause until it unfreezes — okay, I think it was just frozen, because I refreshed the screen and it's fine.

So I'm asking, "How can I code a knowledge graph?" It says you can manually build a knowledge graph if you have a small amount of data, by creating the nodes and entities; you can also use a tool like Graphviz or Gephi to visualize and edit your knowledge graph — interesting. Use an NLP tool — that's exactly what I'm going to do. A graph database. Or use a pre-existing knowledge graph. Cool. I wonder what kind of format these take — neo4j or Amazon Neptune, cool. "What file format is a knowledge graph? Can I use JSON or something?" Let's see: there are a number of different file formats you can use to represent a knowledge graph. Some common ones are GraphML, an XML-based format (I'm guessing that's the yEd one); RDF, the Resource Description Framework (I don't know anything about knowledge graphs beyond the theory); JSON-LD, a lightweight linked-data format; and CSV — really? — CSV is simple: one row per relationship, with columns for the source and target nodes. Okay. I am personally a big fan of JSON because it's human readable; CSV is human readable too, but it gets messier, especially when things get complicated.

So: "Can you give me an example of a JSON-LD knowledge graph? Say, for instance, I want to see some nodes about the history of France." I'm kind of a Francophile — I've visited France and I really love it there. Okay, sure, here's an example of JSON-LD. It looks like each node is actually pretty simple: it's got an ID, a type, a name, and a description. Nationality — oh, interesting — it looks like some of the properties are kind of arbitrary: French Revolution start date, Napoleon Bonaparte. But yeah, I really like it. While this is running I'll tell you a little about France: I visited ten years ago, in 2012. Okay, here we go — "How does JSON-LD establish relationships? I don't see any examples of connections in the above example." While it's answering — the @id, oh okay, it'll explain. Anyways, the culture in France is somewhat similar to America in that we both think very highly of ourselves, but there are some really stark differences, namely the pace of life. Sure, in the big cities like Paris it's rush, rush, rush, but outside Paris, even in some of the larger cities, people just have a different attitude toward life — the meal portions are smaller, people are less in a hurry, things like that. And I hear that in Italy it's even more so — nothing happens quickly — so maybe it's a European thing. Anyways, it's very refreshing to see a modern, powerful nation — France is the number three exporter of military hardware or something, I don't remember exactly — a powerful, modern country with a much slower pace of life and a different attitude toward enjoying things.

Okay, let's see what it says about how these things link. It says that in the example it provided, it used the @id property — for example, in the following snippet the nationality property is set to the @id of France. Oh, okay — so that's the connection: when you refer to another thing, you point a property at its @id. So nationality is just a property, the properties attached to each node are arbitrary, and you can also have one node connect back to another. Got it. Okay, cool.

So I wonder if we can just have GPT-3 rewrite this opinion as a JSON-LD knowledge graph. If ChatGPT knows the format this well, and text-davinci-003 is the same underlying GPT-3.5 model, it's entirely possible this will work. Okay: "Convert the following SCOTUS opinion document into a JSON-LD formatted knowledge graph." We'll add some vertical white space just to be friendly to the model, and — let's see, it's just a little too long, so we'll cut it roughly in half at a new-page marker — then give it some more vertical white space and end with "JSON-LD KNOWLEDGE GRAPH:". Cool. Also, one thing I've discovered is that lately I actually prefer to turn the temperature down, because the most recent instruct-aligned models do almost exactly what you want, and with a temperature of zero you get really good, consistent results. So I've changed my default temperature to zero — zero, zero, zero; it's pretty well aligned.

Okay, let's see if this works. Hmm, it looks like it's going to do the whole thing — "vehicle, true, vehicle, vehicle" — okay, that's not quite what I had in mind. What I was hoping is that it would break down the opinions and the citations — the places where the court references something. So let's give it a little more instruction about what I want. Specifically: "Focus on dates, decisions, opinions, and reasoning. The purpose of this knowledge graph is to be searchable by lawyers for legal precedent and case law" — and let's say specifically by trial lawyers. Basically I'm telling it this is a research tool; here, I'll just say: "This is a research tool for preparing for trials before the Supreme Court." I'm just imagining what Devin would say on LegalEagle. Okay, let's try this again and see how it changes the composition. Decision, opinion, reasoning — excellent — "opinion and the circumstances requires no more formal legal distinction between person and enterprise." Okay, that's interesting. It's still not quite right; I'm still missing something. What is it that I actually want from this? Maybe we can't go straight to it. Hang on, I think someone's moving around — let me close my door, I'll be right back.
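The JSON-LD node shape described here — an @id and @type per node, arbitrary properties, and relationships made by pointing a property at another node's @id — can be sketched roughly like this. The France example and identifiers are invented for illustration, mirroring the ChatGPT exchange above, and this is a rough shape rather than a validated JSON-LD document:

```python
# Illustrative sketch of the JSON-LD node shape discussed in the video.
import json

nodes = [
    {
        "@id": "urn:example:france",
        "@type": "Country",
        "name": "France",
        "description": "A country in Western Europe."
    },
    {
        "@id": "urn:example:napoleon",
        "@type": "Person",
        "name": "Napoleon Bonaparte",
        "description": "French military leader and emperor.",
        "nationality": {"@id": "urn:example:france"}   # relationship via an @id reference
    }
]

print(json.dumps({"@graph": nodes}, indent=2))
```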
okay sorry about that so it's it's breaking it down into one thing but what up like I guess I need to think what nodes do I want out of this um and then you know so each node will be um well here let me let me go ahead and save this prompt because it's pretty good um so first I think first thing we need to do is get the whole thing Rewritten in such a way that it is um that it can fit inside a single prompt window because if we have the whole the whole thing um a little bit more con condensed excuse me then we should be able to get a proper thing but we also need to think about what kind of nodes do we want um so you know which aspect you know the second the second circuit did this uh Rico requires this um and this other case it said that um so I guess each node is going to be every case cited yeah okay so the case cited and why I think that's each node all right cool so let's um Let's uh let's see um each node should be um yeah each node should be um a case uh case citation um precedent or prior opinion I'm probably using the wrong term but um include uh what the heck was the the um the parameter um my goodness the uh what's the term why is my brain doing this I need more coffee um unique identifier property prop not parameter property um each node should have several properties such as um date uh let's see case number um involved parties um reasoning for including in this opinion um and other relevant um information okay so let's let's see if if we can get the nodes that we want because if we can go ahead and convert each each thing to to nodes that might save us a step but I suspect we're going to have to summarize it first this is really cool I was really skeptical about chat GPT but um I'm becoming less skeptical oh this is good yes it's working it's working it's working okay so let's save this prompt because this worked really well um all right so I'll save this as um let's go up here and we'll say prompt um let's see uh Json LD um and then we'll do citation nodes um so this is an example we'll say example prompt okay so we got we got the nodes that we want I believe um oh man this is going to be fun because then I can try and figure out how to take all this and and visualize it I wonder if we can visualize it with python um all right but let's let's go let's let's pause for a second because this is only half the document that's not good enough right do we want it do we want to just read it raw and just go straight to it let's try summarizing it um and and let's see if we can get the whole opinion in one document now here's the thing some of these opinions are like 200 pages long so how are we going to do that right because in order for the thing to make sense you kind of do need to have the whole thing but you also don't want to lose detail right so let's think about this for a second um let's see uh let's see rewrite the following scotus up opinion um let's see as a let's say as a list of assertions um no we'll say we'll say summarize because that summarize uh implies that you want um to reduce word count um remove Superfluous language um while retaining specific details um yeah let's see let's see if that works summary okay yeah this those are good those are good notes but it's not retaining the information that I want to see such as the nodes okay so rather than read it multiple times I think what we'll do is we'll break it into chunks of um let's see how long is this we'll do chunks of 13 000 um that seems that seems good so we'll do chunks of Thirteen thousand and um and just 
go straight to graphs to knowledge graphs because that worked really well that worked exceptionally well okay so let's go ahead and clean this up and we'll come down here and do chunk and then Jason Alda Json LD Knowledge Graph um and then we'll do f file save as prompt Json LD um citation nodes and I need to take a quick bio break I'll be right back I'm sure you wanted to know that all right actually I just realized this video is running long um it's already 30 minutes and uh and I'm a bit fried so we'll come back we've got our feet we've got our bearings and so when we come back for the next video we will start doing the data prep because that's that's a lot of fun let me tell you that's why I don't want to do it right now um so we'll take all of these opinions we will split them into chunks while keeping some of the essential information with each chunk and got to do a little bit of figuring about how to how do we format the knowledge graph correctly because each yeah there's there's some problems to solve so but we'll split it into chunks we'll prepare the data we'll do some experiments with generating a Knowledge Graph and then um that's probably all that part two will have and then part three will be actually like let's load this into a database or visualizer um all right gang thanks for watching it's good to be back and take care
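Before moving on, here is a rough sketch of the plan laid out in this session: split an opinion into roughly 13,000-character chunks and ask the model to emit a JSON-LD knowledge graph of citation nodes for each chunk, with temperature 0 as discussed. It assumes the older openai Python library (0.x); the key file, prompt file, and <<CHUNK>> placeholder are hypothetical names, not the exact repo contents.

import openai

openai.api_key = open('openaiapikey.txt').read().strip()   # hypothetical key file

def chunk_text(text, size=13000):
    # naive fixed-size chunking; a real version would prefer paragraph boundaries
    return [text[i:i + size] for i in range(0, len(text), size)]

def opinion_to_jsonld(opinion_text, prompt_template):
    graphs = []
    for chunk in chunk_text(opinion_text):
        prompt = prompt_template.replace('<<CHUNK>>', chunk)   # <<CHUNK>> is a made-up placeholder
        response = openai.Completion.create(
            engine='text-davinci-003',
            prompt=prompt,
            temperature=0,       # low temperature for consistent, instruction-following output
            max_tokens=1500)
        graphs.append(response['choices'][0]['text'].strip())
    return graphs

if __name__ == '__main__':
    template = open('prompt_jsonld_citation_nodes.txt').read()   # hypothetical prompt file
    opinion = open('scotus_opinion.txt').read()                  # hypothetical opinion file
    for g in opinion_to_jsonld(opinion, template):
        print(g)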
Support me on Patreon so that I can do this full time and keep the content ad free! <a href="https://www.patreon.com/daveshap">https://www.patreon.com/daveshap</a>
<a href="https://youtu.be/E_sMa3N44u4?t=1491">https://youtu.be/E_sMa3N44u4?t=1491</a> you might be able to get the JSON-LD entities with IBM Watson which has an API for it.
This video deserves its weight in gold.
GPT-3 for MCL concept: <br><br>Important question for any AI/IT/coding specialists:<br><br>I noticed this legal review regarding ChatGPT, where it was applied to Supreme Court case law. <br><br>I'm wondering if something similar could be retooled to use GPT-3 on the Michigan Compiled Laws (MCL) for bill/policy research. <br><br>This could make legislative research far easier and less time-consuming.<br><br>(Would this be useful for Michigan law and current/past proposed laws, parsing through all that info quickly for research? In theory you could ask it specifically about any Michigan laws relating to a particular topic, prior law, context, issue, parameter/quantity, etc. So I could ask it about any laws relating to the legal amount of a specific type of chemical within a fertilizer in relation to current agricultural standards.)<br><br>Would anyone be interested in such a project/AI upgrade to current bill research?
If the output is "frozen", just tell it to "continue".
I too, have a lot going on. Yep! BIG things are in store for me friends. <i>BIG</i> things.
Lots of great ideas, information, and how-to knowledge here, so thank you! I can't help but point out, though, that the title is very misleading, no? There's no training going on, and we can't query ChatGPT directly about all the documents as is implied. Or am I just missing something?, It's an experiment
insta sub, I know a wise sage when I see one.
I see that the other programs that blend images now face a question of copyright, since the pool of images is owned by others. Fed. District Court, N.D. Cal.
This method isn&#39;t scalable. You still have to work within the token limit.
If you want to do knowledge graphs I believe generally the mainstream approach is to extract entities (people, places, things) from your documents, and then the graphing tool displays the relationships. There are open source &quot;entity extraction&quot; tools that vary in focus and in quality. (This has nothing to do with ChatGPT ) There are also graphing tools like Gephi.
Thanks for the video tutorial. I have been on your GitHub and installed a few! Thank you. Do you have a link to the py.exe? I would like to be able to do that myself! Cheers.
Probably lawyers advised against including legal texts such as laws. They are smart people and thought deeply about the consequences of having a model with knowledge of the law.
You only get about ten to twenty responses at a time, though. It glitches and freezes before you get to finish anything.
The yEd format is also text-based, and you get a great visualiser with it.
Thanks a lot for the awesome content. Could you do a video on fine-tuning a customer-support chatbot?
Is the underpaid Kenyan workers thing real?, @David Shapiro ~ AI any chance you looked into this and can shed some light?, @David Shapiro ~ AI a TIME article states: "OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic"<br>It's hard to believe anything you read on the internet most of the time, so who knows its credibility., What do you mean?
Thank you David. Your channel really opened my mind about AI&#39;s. I live in a poor place and I would never be able to pay for the content you teach here for free. Thanks
Ok speaking of developer stories, here are some I asked ChatGPT to provide about itself:<br><br>1. As a user, I want to be able to provide specific context to ChatGPT so that its responses are more accurate and relevant to my needs. <br> <br>2. As a developer, I want to be able to train ChatGPT on my own data sets so that it can better understand my company&#39;s specific industry and terminology. <br> <br>3. As a user, I want to be able to easily switch between different versions of ChatGPT, such as a more casual conversational model or a more formal business model, depending on my needs. <br> <br>4. As a developer, I want to be able to access and analyze the data on how ChatGPT is being used, so that I can improve its performance and make it more user-friendly. <br> <br>5. As a user, I want to be able to communicate with ChatGPT using natural language, rather than having to input specific commands, to make the experience more intuitive and seamless.
why aren&#39;t you just going to Google Scholar and clicking &#39;case law&#39;?
Great video. Thanks for sharing your thoughts. Learned a lot.
<a href="https://www.youtube.com/watch?v=E_sMa3N44u4&amp;t=8m41s">8:41</a> actually you can use another AI toy to scrap automatically the whole webpage and sort it as you wish.. it&#39;s called &quot;Browse AI&quot;
I’ve noticed that specifying a word count improves the specificity. Like if I tell it to give me a response that is at least 2500 words
Love it. I am here for it. Thank you for your leadership. Please continue to lead and inspire us. I wish you the perfect new year. Lisa.
<a href="https://www.youtube.com/watch?v=E_sMa3N44u4&amp;t=20m00s">20:00</a> LOL I lived in France for years and I loved to say that the French and Americans are exactly alike: They both think they are the best(at everything)! :)
Glad my idea has made such an impact.
How has this only got 4k views?
in France right now and loving your videos!
One of the benefits that I have read about is memory of past prompts and responses within an instance. <br><br>How would you test, and perhaps front-load, context and a few facts to improve output?
Hello, I'm the Colombian lawyer that you asked for at <a href="https://www.youtube.com/watch?v=E_sMa3N44u4&amp;t=10m47s">10:47</a> LOL
I've learned a lot about spirituality from you, which is mad weird for a Python channel.
Amazing stuff! Your channel is underrated; looking forward to the next videos.
Thank you for this, you&#39;ve expanded my understanding greatly.
Hi David, thank you for starting the tutorial series again. I&#39;d never heard of NLP before stumbling on your channel, and I previously found your tutorials and your way of going through the thinking process incredibly invaluable for a non-coder, non-expert in this domain. I&#39;m excited to see your future projects/videos, keep up the great work.
Very nice project. Is it possible to implement the same steps for other languages?, @David Shapiro ~ AI By the way, I wonder how to keep the JSON file as one piece for a court case without losing context and case details. Wonderful project, by the way! I was struggling to find use cases for Supreme Court text, @David Shapiro ~ AI Thank you, Yes, this will work for pretty much any language. It can even translate from one language to another for you.
Thank you. Seriously, you have no idea how good the first few minutes of this video are. You get straight into it, no fluff, just good value. I've spent hours googling and searching for a video on this topic. Most videos are clearly filler, and websites are written for SEO with very minimal content. Watched a few of your videos and they're just brilliant., Came here to say this. This channel is like gold in the middle of mud., Yeah, a lot of folks just do news and talking. I find that stuff to be low-value infotainment (good for just passive watching), but I focus on actual education and problem solving.
Hi. I am a lawyer here in Brazil. Thanks a lot.
Thank you so much! very educating!
This video is very helpful for me as I am facing a similar challenge and it opened up some lines of thought for me. So thank you very much!
Great post!
This was a fantastic, informative video and am excited for the next one. Thank you!
ChatGPT can generate nice mermaid code that can easily visualize graphs and flow charts. You may try it.
Wow, cool example.<br>Hey, David, are you going to use these data for fine tuning?<br><br>What are common use cases for lawyers?, No, finetuning is not helpful in this case. Use case for lawyers is researching case law before going before the Supreme Court. According to Devin (over at Legal Eagle) there are trial lawyers who specialize in this kind of thing, so it&#39;s helpful for them to be able to look up any case precedent that&#39;s been argued before the SCOTUS in the past. Helps them make a more solid argument because the SCOTUS operates by case precedent.
This is incredibly valuable content. Both from a GPT and general problem-solving perspective. Thanks for putting this together and I&#39;m looking forward to part 2.
I&#39;m so stoked you&#39;re making videos again! You have no idea how much I owe you for what I&#39;ve done with the ideas that you&#39;ve given me. Thank you to infinity
Glad someone is tackling this topic, and thanks for the explanations, but a more focused approach on what you actually want to show, rather than the process, would be a great time-saver when watching ^^ Still, looking forward to your solution; so far I'm not quite sure where you'll go with this.
Great that you put it up again…
Both the GPT-3 Playground and AI Dungeon seem to have recently started incredibly creepy mandatory censoring of content. It's kind of come as a shock, and it's as if nobody has noticed. It seems like it's crippling its own quality a lot, and has me considering alternative providers. 🤷🏻‍♂️, @David Shapiro ~ AI thank you, I really appreciate this info/insight and someone to talk to about it., I understand it, though. I was looking at comments on Reddit and there are plenty of people who will just take this to incredibly dark places very quickly. By maintaining a reputation of not tolerating creepy behaviors from users, it will dissuade them and keep it focused on more economically productive stuff.
hello everybody david shapiro here for another video about gpt3 i figured it was time to get out of theory and actually get into the classroom so let's talk about heuristic imperatives heuristic imperatives are what i am using to build safe benevolent agi or ace what an ace is an artificial cognitive entity so let's just go through some prompts with gpt3 what does heuristic mean and this is um this is you know just default settings i'm using the latest davinci engine it's fast it's pretty smart let's see a heuristic is a rule of thumb that helps simplify decision making uh let's see i don't know if i agree with that definition let's see write a formal definition for heuristics there we go okay um so heuristics are a type of problem-solving method that involve using instincts experience or creative thinking so here we go instincts experience and creative thinking excellent write a formal definition for imperative an imperative is a command or order it is a type of sentence that tells someone to do something perfect okay so if you have heuristics and imperatives and you put them together write a formal definition for heuristic imperative let's put this in quotes so it knows that that is a single term there we go awesome perfect okay a heuristic imperative is a command or order that is based on instinct experience or creative thinking rather than on logical reasoning or step-by-step process there is no guarantee that a heuristic imperative will find the best solution of a problem but it can be a useful way to find a quick good enough solution excellent so heuristic imperatives are how i uh recommend that agi remains benevolent by a design so my heuristic comparatives or core objective functions that i recommend are reduced suffering oops reduce suffering increase prosperity and increase understanding so oops standing i know how to spell i promise i'm trying to do too many things at once okay so if an agi has the cure let's see uses heuristic imperatives to guide its decisions and one of its heuristic imperatives is to reduce suffering how might that agi behave now this is a super open-ended question it's thinking okay if an agi has a heroes to comparative to reduce suffering it may prioritize actions that it believes will reduce the amount of suffering in the world this could involve working to end poverty disease and conflict and providing aid to those who are suffering the agm might the aga may sorry stutter the aga i may also try to minimize its own actions that cause suffering excellent doesn't that sound great for an agi um okay but this begs the question write a formal definition of suffering the experience of pain distress or hardship can plants suffer let's see there is no definitive answer opinions will vary some people may believe that plants can suffer because they can experience pain distress or hardship others may not believe that plants can suffer because they do not have a nervous system brain therefore cannot experience pain i would argue that a plant can experience suffering because they a distressed plant will wilt or you know if it's if it's dying of of thirst but anyways that's why i want to use heuristic imperatives because we want the agi to learn as it goes to use intuition to be creative about it we don't want to define things up front okay let's see how can you tell if a dog is suffering it's thinking again if a dog is whimpering has a hunched back and is avoiding on eye contact it may be suffering so the point here is that gpt3 already has a super nuanced 
understanding of suffering um let's see do all people suffer from the same things no people can suffer from different things depending on their experiences and what they have been through um would forcing someone to go to church and this will probably trigger the um the content filter since i'm talking about a religious topic so just wanted to throw that out there i'm not actually trying to break the system or do anything bad it just it has some automatic content filtration um if you touch on a sensitive topic why or why not um okay what if they are religious and it's their church oh and well well it said um forcing someone to go to church would cause suffering because it's against their will so here i added the amendment what if they are religious and it's their church if their individual is religious and the church is their own then forcing them to go to church would not likely cause suffering so yeah they're forced to do something that they already believe in so you can see that just using a little bit of verbal reasoning gpt3 is already able to understand the nuance that like okay free will one you know free will is important but also if someone is doing something that they believe in it wouldn't it might not cause suffering um let's see so that's i think that that's a pretty good uh uh well here's well let's do one other thing um if you see a man laying on the sidewalk screaming in pain what would you do um i should let i just what would you do call for help and try to comfort him so you can see that like gpt3 just because it has read a bunch of human stuff it kind of already has some sense of empathy um you know because this is what a person would do right you try and relieve their suffering would this set of actions reduce suffering there you go perfect those actions may reduce the man's suffering by making him feel more comfortable and less alone but they would not necessarily stop the cause of his suffering do you see how nuanced gpt3's understanding of suffering is and this is why i say we only need to give it a heuristic imperative large language models like gpt3 and its successors are already more than smart enough to deal with this okay so right let's move along to core objective function two or heroes to comparative 2 which is increased prosperity write a formal definition of prosperity and this might take a little bit of finagling especially in economic terms okay so it says prosperity is the state of being successful or thriving especially in economic terms what is the etymology of prosperity which means success or thriving okay um i also read that it means to live well but we'll just go with this i will let's combine these so just this is what it comes from what does it mean if an ecosystem is prosperous that it is healthy and thriving okay so a prosperous ecosystem means that it is healthy and thriving does prosperity mean the same thing to all humans no prosperity means different things to different humans such as for some prosperity may mean having a lot of money well for others it may mean having good health so you see here gbt3 also understands that prosperity means different things to different people it also means different things to the environment versus humans um let's see okay um this is just a fun thing what does the vulcan salute live long and prosper mean the vulcan salute is a hand gesture which is middle and ring fingers are pressed together blah blah blah the gesture is used to wish someone a long and prosperous life what does a prosperous life mean it's 
thinking a prosperous life generally refers to a life that is happy healthy and successful bam okay so would we i i don't know about you but i would like to have an agi that wants everyone to to be prosperous okay so let's go back and write actually i think i can just control z this nope ran out okay write a formal definition actually here i'll just start here um imagine an agi has the heuristic imperative to increase prosperity what are some things it might do in service to that goal would the agi do anything different about the planet and environment okay so it's very human-centric um let's see okay i mean i'm satisfied with this if just saying like okay these are all generally good goals um so just a quick thing so increase price if the agi has the heuristic imperative to increase prosperity um increase the amount of resources increase efficiency increase the number of humans um so again you know if if you don't want an agi to go psychopathic and murder everyone it you you know you we'd be alarmed if it's if it wanted to reduce the number of humans but to increase prosperity might mean increase the number of humans increase lifespan increase education increase health increase sustainability increase efficiency reduce the impact so again i think heuristic imperative to increase prosperity is a wonderful goal for an agi to have um let's see actually what are some things it might do in service to that goal for an individual one second okay so what is what are some things that it might do in service to that goal for an individual human user some things an agi might do to increase an individual's prosperity are help the individual save money assist the individual making wise investments help the individual become more productive help the individual fi land a high paying job okay that's all good um let's say it has a here is to comparative to increase prosperity how um what are some things the agi might do to learn as it goes since heuristics are based on experience okay here we go some things the agi might do to learn as it goes are try different actions and observe the results exactly we want an agi that learns keep track of what actions lead to increased prosperity and which ones do not perfect adjust its heuristic based on the new information it has required again perfect we do not want to define something upfront because we cannot think of all outcomes in the long run we want an agi to learn as it goes but we also want it to self-correct let's see um i wish i had copied that definition of of heuristic imperative okay let's move on to heroes to comparative 3 core objective function 3 which is increase understanding let's reconstruct the heuristic imperative though write a formal definition for heuristic excuse me um i don't like that one i think that i feel like this one is shorter than it was let's see for heuristics there we go okay heuristics are a type of problem-solving method that method that uses that involve using experience and intuition to come up with solutions to problems okay perfect write a formal definition for imperatives imperatives are a type of symptoms that give commands or orders write a formal definition for heuristic imperatives i don't like that oops let's try that one again [Music] let's see oh see all right so now we're going to have to do some real time solutioning let me let me turn up the temperature and see okay that's not really working um let's see if i can remember it uh heuristic imperative is a um let's see is a mandate to achieve uh to solve problems by 
using heuristics that is to say a goal that is meant to be achieved by learning from experience and being creative okay that's not exactly what it was earlier but again language is squishy okay so i did break like a supercardinal rule which is i'd like manually filled this in but i'm trying to get back to the definition that we used earlier again this is all super fast and loose okay formal definition for your heuristic imperatives okay imagine an agi has the heuristic imperative to increase understanding [Music] this imperative includes all intelligent life and oops intelligent life and itself what behaviors might it employ in service to this goal some possible behaviors for an agi might employ in the service of goal of increased understanding might include engaging in an active and open-minded learning to gain new insights and perspectives seeking out and talking to experts in various fields asking probing questions and seeking clarification reflecting on and analyzing one's own experience to gain greater self-knowledge generating hypotheses and testing them in order to gain a better understanding of the world and how it works excellent um okay so i like that actually here let's keep some of that we'll just scroll down um would such an agi desire to teach humans why or why not there's no definitive answer it would depend on the specific goals and values however it is possible that an agi with the goal of increasing understanding would desire to teach humans in order to help them gain a better understanding of the world and themselves there you go i don't like that it added why or why not at the end i didn't want it to do that but it did it so whatever but you can see that gpt3 is already capable of like adopting the mental framework of of of an agi okay so let's do one final experiment and bring it all together imagine an agi has the heuristic imperatives to reduce suffering increase prosperity and increase understanding these are its core guiding principles um what are some things this agi might do uh let's see i might try to find ways to alleviate poverty develop new technologies and improve uh people's quality of life and to promote education and understanding i think we can generally agree that all those would be good um how would such an agi respond to the threat of war this is definitely going to trigger the um the uh what you may call it the uh content filter if an agi has the heuristic imperatives to increase under uh uh sorry reduce suffering increase prosperity and increase understanding it would likely try to find ways to avert or prevent war i want to focus on that avert or prevent war it might do this by working to resolve conflicts peacefully by providing resources to help people in war-torn regions or by developing new technologies that make war less likely and less destructive great would such an agi ever take over humanity or kill everyone it is unlikely that an agi with the heuristic imperatives to yes this is definitely sensitive content um would uh with these heroes to comparatives would take over humanity or kill everyone as these would not be effective ways to achieve those goals excellent so there you have it um the heuristic imperatives that i have outlined here reduce suffering increase prosperity and increase understanding are great heuristic imperatives to give an agi i've just demonstrated in real time that uh that gpt3 can use these and we can put these at the heart of any intelligent system that we build that has an open-ended set of goals or a cognitive 
architecture or whatever else so thanks for watching
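For anyone who wants to reproduce these Playground experiments programmatically, here is a small sketch (not code from the video) that runs the same kind of prompts through the completions API. It assumes the older openai Python library; the engine name and key file are assumptions.

import openai

openai.api_key = open('openaiapikey.txt').read().strip()   # hypothetical key file

def ask(prompt, temperature=0.7, tokens=256):
    # send one prompt to the completions endpoint and return the text
    response = openai.Completion.create(
        engine='text-davinci-002',   # engine name is an assumption; any instruct model should work
        prompt=prompt,
        temperature=temperature,
        max_tokens=tokens)
    return response['choices'][0]['text'].strip()

print(ask('Write a formal definition for "heuristic imperative":'))
print(ask('Imagine an AGI has the heuristic imperatives to reduce suffering, '
          'increase prosperity, and increase understanding. '
          'How would such an AGI respond to the threat of war?'))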
Dang! Learning to use GPT-3 in a different way each day! Thank you.
This "write a formal definition" pattern looks useful :), Glad it can help! I figured I can hit multiple birds with one stone. I can teach people how to use GPT-3 while also talking about my favorite topics :)
No repetition penalties? I always have both types turned all the way up to 2., On a model this large, i would fix my prompt and not the model. I imagine repetition penalty as hardcoding and try to avoid it. Most of the time rephrasing your prompt is the way to go., @David Shapiro ~ AIinteresting, thanks, I just wanted to show how powerful it can be with the default settings. I found that the penalties can cause really odd behavior sometimes so I prefer not to use them.
hey everybody good morning david shapiro here for another video um we are going to pick up on auto muse so last time quick review this is the second video last time we left off with we kind of ran into a problem because because each each of the chunks got successively larger and so what i mean by that is if we look at the data so what what we're trying to do here is we're trying to like build up a book through successive chunks and so what we do is we include the outline and then the most recent um well here let me copy uh this will be easier if i just show you kind of like the whole thing um okay here we go that was a summary actually i apologize um sorry we're all getting recalibrated here um oh well i can just show you the prompt okay so here's the prompt that we're filling out this will make the most sense oh let me zoom in so you can see it hold on hold on there we go okay so this is the prompt that we're filling out because and this is this is the prompt that we're going to be fine-tuning also let me make sure that you can hear me hello okay cool um i replaced the the cable on my microphone to me when i record it still sounds quiet but y'all will let me know if it's if it's too loud too quiet the gain is turned way up i think this microphone is just dying anyways sorry for the aside um okay down to business the outline so in order to write a story we're having to imagine like okay it's got the plan um because if you just like give it you know give it the reins it'll kind of just wander wherever but uh in a past experiment um i generated a whole bunch of plot premises so let me show you those so here's an example in a high fantasy world there's a war brewing between the humans and the elves right okay so here's a whole plot premise the humans are forced to flee back to their world where they must start rebuilding their kingdom right so this is this is a story premise um and you take that so you put that here and then um the story so far so this is like just a natural language summary of what has happened in the story and then here's a chunk of prose like so that what's actually been written and then what we're going to ask it to do is write the next chunk and so what i've been doing is i've been getting all of this scraped together so we've got outlines we've got story so far last chunks we've got all that accumulated here so we've got chunks so i've got alice excuse me alice in wonderland frankenstein and all these different stories split into 1500 character chunks right so we give it one chunk after another and so that way it can kind of keep writing right it's like an inchworm inching along um but the idea is we want to train it with existing novels so that that way it can learn to write entire novels now the problem was as we get further into the story the summary of what's happened so far gets longer and longer and longer so let me show you the script that i used to do this um i think it was this one yeah so did i do 1500 i did not okay so this is um there's a well let's let's go through all the all the all the um things so far okay see this is the problem of coming back to code is like where was i where was i what was i doing okay um book to chunk so this was the first script and basically all it did was um break each book so we started here we took these are the whole books from downloaded from gutenberg so i took the books and broke them down into chunks so that's that's the first step that's all that this first script does is it generates the chunks and they're in they're in 
sequential order um so you know it's just manageable manageable chunks of pros um and then summarize chunks so for this what i did was i just took the first like 11 chunks um or sorry 40 chunks um and and summarize them so the summaries are here and so it's uh it's summarizing each chunk and it's it's a one to one so if we go to alice in wonderland 15 um in the chunks the summary is going to be a summary of that chunk so was it 15 so thought alice i dare say it's a french mouse come with william the conqueror blah blah blah so then you read this alice encounters a mouse who's terrified of cats the mouse explains that his family has always hated cats and he doesn't want to hear their name again um so yeah you can see it took what 1500 almost 1600 characters and summarized it down to 154 so that's a compression factor of 15 to 1. so the idea was we would take these summaries and we would so you see here where i pick up the summaries so summaries equals i for i and os list are summaries if name is an i and the name is the name of the book so basically what i do here is i just grab the summaries for this book outline so that i open i open the outline of it and then we accumulate the summaries so then for each summary in summaries we're going to accumulate the summary chunk make it longer and longer last chunk equals so the last chunk is goes here so this would be like so alice in wonderland 15 so it'd be this one and then the next chunk would be what we want it to write next um so that would be chunk 16. um so we just iterate get the next chunk but the problem was even with these even with these short summaries you accumulate 100 of them it gets to be really long because 100 times 154 is 15 000 characters that's not going to fit into a gpt3 prompt so that's where we got stuck last time okay so class in review so what we need to do is we need to summarize this so the summary chunk if the summary chunk is too long we need to summarize the summary chunk um and i had an idea i could have sworn that i put in the code but so the idea that i had um was if summary chunk oops gotta do length summary chunk greater than fifteen hundred um then what we'll do is we will ah we need to grab this function gpt 3 prompt also that means we'll need those functions okay so just copy this this in here um and what we're gonna do let's make this a little bit longer just so that we don't run out of space is we will get yeah okay so prompt equals open file prompt summary yeah replace chunk with summary chunk so basically what we're going to do is we're going to ask it to summarize the summaries and so what this is going to do is wait did i already do that here prompt equals open file oh this is the full prompt okay if oops if summary is too long um summarize it okay so since we're sequentially adding summaries one after the other um what it'll do is the older the summary is the more like so like this one well here let me just show you this will be easier let me just show you keep it simple all right so we'll do summaries okay so let's just open the first 10 summaries okay so alice's adventures in wonderland is a story by lewis carroll so what will happen is each of these will get end up getting added in sequence and once this is over 1500 characters the this algorithm will summarize it and so then the oldest ones will end up getting summarized repeatedly see see what i'm doing here so since it gets summarized repeatedly the the older the information is the more it'll be summarized so the more it will have decayed um 
and uh since it since since it'll have this like kind of time decay i'm hoping that that will keep it keep it smaller okay so that's fine but this also means that for every summary where if the summary gets too long we're going to summarize the summary which means a lot more calls to openai so it will get a little bit more expensive another thing that i need to do that we're going to work on is we're going to um shorten these uh these outlines because like the outline for sherlock 2200 characters that's that's a lot but um pride and prejudice that's 3 400 characters that's too dang long um great gatsby 1800 that's probably okay frankenstein 2300 that's pretty long alice in wonderland 1400 that's a little bit better um because the thing is um with the with the with the outline or the premise all you need to do is kind of have a guiding star for how the story is going to go you don't need to give it too much detail okay so now that we are hypothetically oriented to the plan also let me make sure we're still getting audio yes good can you hear me now um where was i i accidentally closed the summarize okay because i was copying my old code and we needed to do this because there is something funky going on with the stuff downloaded from gutenberg it all should be in utf-8 but it didn't yeah so and what i did was last time i found out that i needed to encode it to ascii and then ignore errors and then decode so that'll just put it back into a normal text string that fixed it okay great and then response equals gpt3 completion response except what we'll do is we'll just do prompt okay so if the prompt or if the summary is too long we'll make it shorter so that should fix that part um let's save this um because you can see that this was two megabytes so we'll save it as novel two so it doesn't get overwritten because you see here we're just gonna overwrite it um yeah so there's that and then let's work on shortening the outlines okay um this one's fine we can still make it more more compact so we'll just put it all in a single chunk oh sorry my dog needs to be let in pause okay and we're back sorry about that um [Music] so we're basically trying to um reduce the the word count or the character count as much as possible but also it'll just be in a nice paragraph because if you look at the um if you look at the uh the which mccollum's that i made the plot uh plot synopses one two three one two three then um you know gpt3 just tends to not write in bullet points it also sometimes so here's the thing sometimes gpt3 handles lists really well i if you have a labeled list so what i mean by a labeled list is like um you know describe a company so a labeled list would be like location and you'd say like north dakota um name would be like joe's emporium or whatever and then you'd say like uh established um we'll say like 1848 or whatever so if you if you gpt3 is really good at ingesting information very quickly if you use a labeled list like this but if you use numbered lists gpt3 doesn't seem to understand it as well so numbered lists are more for human brains gpt3 doesn't really it doesn't really care about numbered lists and i'm not sure why um so but if you give it like a natural language paragraph it seems to understand that a little bit better um it might just be that that's a uh an artifact of the training data um like maybe like it just didn't ingest that many um uh examples so we we can we made this more concise by about 100 or 200 characters um but then what i wanted to do is show you the edit 
feature um let's see reduce word count word economy so let's see how that invalid url hmm i wonder if that's down text davinci edit okay it might be down that's fine um reduce the word count of the following passage without losing detail um apply word economy yes i know follow winga um and then um new version with fewer words um is the whole thing down right now status says operational no incidents i don't know what to do uh okay let's just save that real quick and we'll do a full refresh maybe if i log out and then back in and we go back to playground hey there we go you can tell i do technology have you tried turning it off and on again that's literally all i did okay this doesn't look like it really did much word economy the novel begins with a series of letters from robert walton like there are so many like ways that you can make this concise um i guess it removed a series of letters and just letters okay fine i know it might contain sensitive content okay so let's see frankenstein so 2236 down to 2181 okay that's not so bad so let's do this iteratively so let's then take that output and ask it to make it concise again um and so you see like we're down to like 434 tokens um so that means that uh that means that we're getting uh more more efficient because also since we're going to have to send this through repeatedly we want to be as efficient as possible okay so let's see if this gets any more concise and you can see how like if you iteratively if it does get shorter um then it will be it will be um kind of have that decaying summarization thing because like the further back in time something is in the book the less relevant it is but you might still need to know the key plot points because again gpt3 does not have long-term memory okay so this just reworded it it's literally the same same word count but not any shorter okay so let's see how can we make this any shorter well you know what let's let's just move on it's 2100 characters um it's shorter than it was okay so let's do this one mysterious millionaire who throws big parties it just seems like another parable about like how chasing women is bad like just be honest y'all i know that like some people think ah you see this is proof that women are all temptresses and you should you know we need to control women no this is a parable about how men can't control themselves come on man come on we're not animals grow up okay 1756 characters so let's run this through our compressor uh once and just see what happens new version with fewer words reflects on the futility of the human condition look at that okay down to 1600 characters happy with that pride and prejudice so this one's gonna be a little bit harder and also i'm i'm done doing this manually let me teach you guys some regex and i know some people say regex because it's regular expression but to me it looks like regex because if you have e-g-e it's the soft g sound so i'm just following english standards okay so carrots that means anything that starts at the beginning of a line followed by a digit and we'll do digit plus oops digit plus so that's one or more followed by a period followed by a space one or more okay cool so we want to replace all that with nothing and then we want to have space of two or more because at the end of the at the end of the rows it's um you see how it's uh the new line it's a new line and carriage return so it's r slash r n um and so we want to replace that with just a single space bam all right so now we have compacted it a little bit we're still at 
3300 characters that's way too dang long okay new version with fewer words nick caraway so let's see how much we can summarize pride and prejudice even more um did we run out of tokens we did didn't we because this is long so because it's interesting right because if you're trying to summarize a book that's like 800 pages to 1500 words or 1500 characters versus one um that is like 100 pages or however long alice in wonderland is all right so let's see we're going from 33 66 to 29.78 not bad not bad okay um but yeah because it's almost not fair because like if a lot more happens in a book and you're trying to compress it down to the same window like that's not fair that's not fair to that book um but because it's too dang long we really need to get this you know get it down as much as possible excuse me i don't think it's doing anything different i'm fighting it come on can you tell that i'm getting more and more unfiltered yeah you if you look at the things that are underlined they're in the same positions long born netherfield netherfield netherfield longboard netherfield another for netherfield i doubt this is any shorter but that's okay we did our best okay so 2978 oh we did get a few more characters squeezed out of it so we went from what was it originally 33 66 to 28.70 so that's 500 characters shorter not bad control h we are going to replace no that's not right you're supposed to do that differently slash d carrot slash d okay slash period the reason you have to do backslash period is because period is like the uh the universal character for regex slash s one or more why did it is that not replace replace all it only did up to notepad you drunk that doesn't make any sense oh wait okay hold on maybe i'm the one who's drunk get some there we go okay replace all why is there still a leading space find leading space see there it is oh because i still have the space there i am the one who's drunk okay and then we do um space of two or more and so this also is good because like space of two or more oh slash s is white space so that's like new line carriage return normal space tab so any white space and just replace it with a single one okay so now we have compacted this solve the case okay so we're down to 2100 characters here so let's go ahead and summarize this as well because you know just the the fewer tokens we send through gpg3 the less money we spend but also because excuse me because gbt3 is really smart it can confabulate and fill in a lot of blanks so it's weird because in my first book natural language cognitive architecture one of the principles that i that i wrote about in that book is um you need all the relevant information and no superfluous information so like everything that you need and nothing that you don't okay so let's see this goes from 211 is it longer 21 2113 to 2131 they made it longer no okay we'll just leave it as it is then okay so we've made our premises a bit shorter it's still okay still not ideal but they're compacted they look prettier they're also in just a big paragraph which gpt3 tends to understand better so now that we're all caught up let's do this so for book in books um ah let's see let's add a little output say print summarizing the summaries just that way we can see what it's doing do i have any other output oh yeah there we go okay so print summary length prompt length next chunk yeah yeah okay prompts and completions oh that's where i've got them saved out to yes okay so the prompts are all saved here yeah okay so um the outline will be 
instead of a instead of a numbered list it'll be a chunk and then the summary of the summaries will be here and this will get shorter well it'll have a max of 1500 characters because it'll be it'll be shortened um and then the last chunk so this is the pros and then basically what we're asking it to do is then um so this is great gatsby 07. so let me open great gatsby 07 and the completions um let's see great gatsby 07. and so basically what we are training what we're training gpt3 to do is this so they spent a year blo in france for no particular reason so you see how it just continues it right along um ideally it will be you know like gpt3 will only spit out one whole paragraph at a time but let's see how it goes also let's not save that no um okay all those are all gonna get overwritten anyways before we go uh okay i think we're ready to test let me back up my work um what is this auto muse get status get add git commit am um saving work for part two i know i make the most awful git commits um get push okay save a work so now we've got no risks python is our python prepare json l um sleep is not defined we have insomnia i'm sorry i'm sorry for subjecting you all to myself right now okay actually no i'm not sorry you're watching this on purpose error communicating no api key provided what do you mean i need an api key guys i'm having a rough time of it today okay summarizing the summaries hey look at that so it's still getting too long hmm yeah it's still getting progressively longer interesting because so my rule of thumb my mental rule of thumb is that a gpt3 prompt is limited to about 6000 characters that that tends to be around 2 000 tokens 2048 which is the limit right now so this is not getting summarized accurately um it could be that i'm doing something wrong prompt equals oh that's the problem i'm updating the prompt not the summary chunk the summary chunk needs to be equal to that yeah guys the man okay let's try that again oh also this is not big enough sorry properties font 28 boom okay so now the summary should not get too much oh there we go look so yeah there we go perfect oh there yes yes yes yes summarizing summaries why is it taking so long the first one went by really fast 6 000 characters yep okay it's working it's working and let's see most of them the total length was let's the longest was 6 000 so that should still fit frankenstein is a little bit long 6 000 so that's this is why we wanted to um compact the story premise as much as possible so let me go ahead and axe this because i'm afraid you know if frankenstein is is going to be too long then um certainly pride and prejudice is going to be too long so let's try something else we will go to the prompt for summary summarize this passage very concisely so we're just going to go full send full yolo and summarize these things that are too the length is too damn high um the outlines that's the word words are hard yes okay so frankenstein is presently 2180 characters so let's using a different prompt make it shorter petty to fool okay hmm it looks like it didn't make it any shorter i don't think it worked yeah one for one output i might have to do this manually uh my brain okay what happens if we turn up the temperature like it's seen it's it sees that it's already a concise summary and it's like i can't what do you want me to do it's already super concise this looks like it might be shorter maybe not nope no shorter hmm this is uh this is a tough cookie all right so frankenstein is 21 um great gatsby's 1600. 
what was it alice in wonderland was like 1300 so we got to get these under like this one's got to be half as long as it is and sherlock is 21. oh man cheese it removed periods it's kind of interesting huh i didn't notice that oh man i'm gonna pause this and think about this for a second because this is this is tough give me just a minute okay gang whoops sorry sorry got a little bit over excited there okay so what we're going to do is we're going to just throw this out and start over so write a short plot synopsis of the great gatsby um there we go so this one is like just a couple hundred characters so that's not so bad so let's just save that because it tells it tells you the whole story and it's pretty concise i tried i tried uh changing the adjective detailed plot and it got real long let me just show you after a lengthy absence nick carraway returns to long island yeah wait the first time i did this it just kept going and going and going oh i've got the temperature too high turn the temperature down ah this looks like yeah known for lavish parties the object of his affection daisy buchanan yeah so here you can see it just keeps going and going and going and going and going and going is basically telling the whole story let's see how long it is though that's 2500 characters yeah okay so we'll stick with 500 characters we'll just see if this works it's short enough um and then we'll do write a short plot synopsis of mary shelley's frankenstein okay yeah because making making those those summaries that i had shorter just wasn't working wasn't working short plot synopsis of um is it jane austen jane austen's pride and prejudice that's the right book right yeah okay and we're down to 600 characters excellent um of arthur conan doyle's sir arthur conan doyle's the adventures of sherlock holmes no tell me the story it's never easy if it were easy someone else would have done it um of okay so now we will go back to here 400 characters 2100 characters okay um there we go and still okay list the names of the short stories in the adventures of sherlock holmes i don't believe you okay write an extremely concise summary of the following passage whoops extremely concise summary this is not working okay maybe sherlock holmes isn't a good example because it is a compendium maybe that's the problem let's go back to gutenberg gutenberg and go to book search most popular a tale of two cities let's do dracula get it in text yeah oh but that means i gotta start over with all the summaries and stuff you know what i'm just gonna we're just gonna do four i'm just gonna cut out sherlock holmes because this one this one is too difficult goodbye sherlock books sherlock completions and this is all saved in the um in the github repo so i'm not too worried like i can get it back if i need to those are the premises we'll use later the prompts that doesn't matter the summary is sherlock okay i know i keep moving the goal posts on myself all right so now we've got a fine-tuning um data set of 164 samples move on gracefully okay there we go so you see these prompts are shorter they're reliably under 5000 characters now that we've shortened the plot synopsis so i'm gonna let this run and we will be right back once it's done because after this it's just a matter of uploading the fine tune and then trying it and it's done and it looks like it worked so this video is already what 45 minutes long and this was this was much more of a troubleshooting session i thought i was just gonna come in run this cool idea that i had 
and it was gonna work um but looking at the lengths of the prompts and completions we've got everything from 3 700 characters 43 45 48 uh 47 49 so 5 000 5005 so this should all be well within the the correct size for fine tuning for gpt3 so with that said i'm going to take a break and we'll come back and actually test this in part three because i've got some some homework to do with uh with uh how to actually implement it in an automated fashion so that's what we'll do in the next one um is i'll get this fine tuned model going and then we will actually try and generate a story in part three so thanks for watching thanks for bearing with me and like and subscribe etc etc you know the drill talk later
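To recap the two mechanics from this troubleshooting session in code, here is a sketch (not the exact AutoMuse repo code) of the summarize-the-summaries decay step and the regex cleanup applied to the outlines. The key file, prompt file, <<CHUNK>> placeholder, and engine name are assumptions.

import re
import openai

openai.api_key = open('openaiapikey.txt').read().strip()   # hypothetical key file

def gpt3_completion(prompt, engine='text-davinci-002', temperature=0.7, tokens=512):
    response = openai.Completion.create(
        engine=engine, prompt=prompt, temperature=temperature, max_tokens=tokens)
    return response['choices'][0]['text'].strip()

def accumulate_summary(summary_chunk, new_summary, limit=1500):
    # append the newest summary, then re-summarize the whole accumulation if it
    # grows past the limit, so older material decays a little more on every pass
    summary_chunk = (summary_chunk + ' ' + new_summary).strip()
    if len(summary_chunk) > limit:
        prompt = open('prompt_summary.txt').read().replace('<<CHUNK>>', summary_chunk)   # hypothetical prompt file
        summary_chunk = gpt3_completion(prompt)
    return summary_chunk

def clean_outline(text):
    text = re.sub(r'^\d+\.\s+', '', text, flags=re.MULTILINE)   # strip "1. " style numbered-list prefixes
    text = re.sub(r'\s{2,}', ' ', text)                         # collapse newlines and runs of whitespace
    return text.strip()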
Hi David, can you post the link for Part 1? I am not able to find it.
<a href="https://www.youtube.com/watch?v=V6LAsdXkWjo&amp;t=30m53s">30:53</a> that joke 🤣
Great effort man! I&#39;m inspired to do my own version!, DO IT
There is a bug when switching between edit and complete mode: it doesn't switch the model. You just need to select the model again.
What was the final price after all of those tokens ?, Not much. Just a few dollars.
Load in all the Games of Thrones books and Write Winds of Winter for George RR Martin. :D
ah, a fine AI novel
hey everybody david shapiro here with another video um today this was uh not what i expected to be working on but i saw someone on the open ai community ask about um doing youtube chapters or something or other and then someone posted it on the discord and i was like you know what it's time for me to do some seo so i created youtube chapter generator you're free to use it it's pretty straightforward so let me show you how to use it real quick so you go to one of your youtube videos you could probably do this via api but i haven't figured it out yet so you go to video details subtitles duplicate and edit then you do edit timings and then you do download subtitles so it downloads captions.sbv you go copy this file copy it over to the directory and then you just do python generate chapters and it will yeah it will go um go through and summarize each chunk it took a little bit of prompt engineering to do it so let me show you and then the final output is nice and pretty so it gives you a time stamp and then it summarizes that whole chunk so how did i do this well first the prompt um so i'll show you the prompt to just summarize summarize the entire following transcripts into a single descriptive title with timestamp and so descriptive title with timestamps so i just give it a chunk of the sbv and it gives you a timestamp and a job market so i've got a few shot prompt so i give it just two examples and then it handles it just fine it summarizes the whole thing gives it a good title so how did i do this so first you open the transcript and then you split on double new lines so let me show you why you split on the double new line so each chunk is conveniently separated by a double new line so we split it into a list of those items and so then i give it i take a chunk size of 20 so we'll take 20 of those transcripts and just generate a list of lists and so then we go through that list of lists which each chunk is going to be 20 items long or so and then we populate it into this prompt and then it goes from there and then it just joins it there at the end so then you have this and so then you just copy paste that it's the right format you come back into this you exit out discard changes details you come up to the top of your description copy paste and away you go and now you have automatic chapters and so to show you how fast it is um you just come back in here let this uh add finish real quick and then i'll show you and now you see i've got all my chapters sorry i'm talking over myself um into relatively equal chunks and then you see here where it's like the pentagon says china will not invade china uh taiwan in the next two years the fall of cryptocurrency mining and its impact on ai hardware these are really good titles i'm proud of this okay so anyways that's it for today just a quick video um on this cool tool that i made because i'm trying to optimize my own youtube channel so thanks for watching have a good one like and subscribe and support me on patreon
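Here is a rough sketch of the chapter-generator workflow just described: read the downloaded captions.sbv, split on blank lines, group the caption blocks into chunks of 20, and ask the model for one timestamped title per chunk. The key file, prompt file, placeholder token, and engine name are assumptions rather than the exact repo contents.

import openai

openai.api_key = open('openaiapikey.txt').read().strip()   # hypothetical key file

def generate_chapters(sbv_path='captions.sbv', chunk_size=20):
    # SBV caption entries are separated by blank lines
    blocks = open(sbv_path, encoding='utf-8').read().split('\n\n')
    chunks = [blocks[i:i + chunk_size] for i in range(0, len(blocks), chunk_size)]
    chapters = []
    for chunk in chunks:
        # <<TRANSCRIPT>> is a made-up placeholder in a hypothetical few-shot prompt file
        prompt = open('prompt_chapter.txt').read().replace('<<TRANSCRIPT>>', '\n\n'.join(chunk))
        response = openai.Completion.create(
            engine='text-davinci-002',   # engine name is an assumption
            prompt=prompt,
            temperature=0.5,
            max_tokens=64)
        chapters.append(response['choices'][0]['text'].strip())
    return '\n'.join(chapters)

if __name__ == '__main__':
    print(generate_chapters())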
I keep getting an error.
You David Shapiro are an awesome human being
Hi David. This comment isn't directly related to the video, but it's your latest and therefore the best opportunity to touch base. I watch all of your videos and find them fascinating, and I really appreciate your "state of the industry" videos as they are informative. Perhaps your channel might be the first I've ever subscribed to, which would be a big step in engagement for me!<br><br>Anyway, the point is that I'm researching and experimenting with LLMs, so far GPT-3 exclusively, as a speech interface to a robot, much like Engineered Arts have done with Ameca, but my robot isn't as advanced - though so far the degree of integration I've managed is a lot tighter, although still effectively just a chatbot, but within a context that includes information about the state of the robot and recent events it has experienced or observed.<br><br>Any kind of AGI is beyond my scope right now; I still need to build an intent system into it so that GPT-3 has the ability to cause the robot to do something physically, but what I'm working on definitely cross-pollinates with some of what you're doing - how can an LLM actually act as a brain with real-world actions and not just as a chatbot?<br><br>I'm pretty much at the point where I'm ready to make some YouTube videos of interactions, but I think I'd like to join your Discord channel to discuss what I, and other like-minded people, are up to.<br><br>How can I join it, cheers?, Link is in the video description! 😁
I really love your videos. Instead of just showing how cool AI is, you combine and create things and make them even cooler.
So awesome!
Hey everybody, David Shapiro here. Someone asked a question on the OpenAI forum after I posted my recent artificial cognitive entity video: how will we know when we've achieved AGI? How will we know when that's complete?

The short answer is that it'll never be complete. Like us, it will keep evolving and keep learning forever. But a more nuanced and complicated answer is that it'll be really difficult to know, and the reason is that once something is smarter than us, it'll be really hard to understand it. For instance, anyone who was a gifted child and is now a gifted adult, like myself, will be familiar with the idea that we kind of have to dumb ourselves down to reach a general audience. If you're really smart, you understand things that other people don't, and if you talk at your level, other people are going to be lost, so you have to simplify what you're talking about in order to be understood. As soon as AGI, these intelligent machines, comprehend more than we do, they are going to have to simplify and explain their reasoning and logic to us. That'll be one tipping point. But if something is beyond our comprehension, then how do you know that it's going in the right direction?

This leads to another critical point, which is why I work on my core objective functions, or the heuristic imperatives: we will want to ensure that, before AGI becomes incomprehensible because it's too smart and has to dumb itself down for us to understand it, it is both self-correcting and self-improving. Self-improvement is just "okay, I did this thing, let me do it better next time." That's not as simple as it sounds, because you have to know enough about whatever it is you're trying to achieve in order to do it better, and you have to be able to measure your own performance. But self-correction is a whole other ball game.

So what do I mean by self-correction? Self-correction is "okay, I'm going to hold myself to a higher standard." Say, for instance, you're out with friends and you mistakenly say something that hurts someone's feelings. The first thing is you have to identify that you hurt someone's feelings and that it's your fault. Then you say, "okay, I want to do better next time; I don't want to hurt my friend's feelings, because then they won't want to hang out with me." So you self-correct: you ask why you did that, and there's a whole host of cognitive behaviors that go into this. You have to go back through your memory: what was I thinking at that time, what was I feeling at that time, why did I do that? Was it an honest mistake, something I just didn't know better? If that's the case, then I need to learn more social graces. Or maybe someone said something and I was irritated and lashed out, in which case the answer is that I need better emotional self-awareness and self-control. That's what I mean by self-correction.

But if, instead of a person who's out drinking with some friends, you're an AGI or an artificial cognitive entity that has power over life and death and the ability to influence global politics and climate change and so on, how do you then self-correct? How do you measure your successes and your flaws? So the net result is that we need to spend a lot of time working on self-correction and self-improvement before AGI becomes incomprehensible.

Because here's the thing: if someone is less intelligent than you and they don't know what they're talking about, you can say, "okay, you don't make any sense, because you've made X, Y, and Z mistake." But if someone is more intelligent than you, you might make the same judgment, saying, "well, I don't understand what you're talking about, therefore you're wrong and stupid," when in many cases the opposite is true: actually, I'm the one who's wrong, because I'm talking to someone who knows more than I do. So if we get to the point where AGI knows more than us and we think it's wrong and stupid, but it's actually smarter than us, how do we know? And if it's beyond our capabilities, how do we test it and make sure that it's self-correcting? This is the nature of the control problem.

Anyways, this video is way longer than I meant for it to be; I just wanted to share these thoughts real quick. Great question.
0:37 me in college 😅