- "Why do you want an iPhone?"<br><br>- "I don't know, it looks cool."
Hi David. I've been tinkering like crazy and still get the invalid syntax error. Any advice? Thanks so much for the video btw. Feel like I'm learning a lot., It works. Stay with it. Takes 1-2 hours and Google helps. But you learn a lot. |
I have learnt more from this video than I have ever learnt 🤣 |
You are an amazing teacher! |
Dude.. thank you so much for this tutorial series. It literally saves me so much time, endless searching and uncomfortable explaining. It's so hard to find good, easy to follow tutorials without much clutter. This is perfect. |
Really amazing. Thanks so much. |
Hi David,<br>Love these Beginner Series!<br>Unless I missed it in the DscrptBx? Discord address? Didn't see it..., The Discord link doesn't work..., <a href="https://discord.gg/bdZJtdrJ">https://discord.gg/bdZJtdrJ</a> here you go |
I always give up on tutorials, but I made it through this one. VERY EXCITING TIMES., @David Shapiro ~ AI I think it's quick with no fat. And explains every part for non-devs which is really great., Nice! What did I do well that helped you? |
sk-[insert]RK7U |
I just wanna say thank you.... for introducing me to Mongolian Neofolk., Ha! Eagle eyed, I didn't even notice. Where did you spot it? |
Did you do this for me? You must have done this for me. Because this is exactly what I need. I hoped at some point you'd do a "for beginners" series. And this is quite relevant to the DM I sent to you on Discord. Thanks for the quality content you put out man., That's great. I really did hope that something like this would come about., It was actually Ene's idea, but yeah lots of people need it! Glad it helps |
Dope keep them coming |
I've added you on LinkedIn. Can you share the Discord link? I've been waiting for us to get closer to AGI before learning NLP, and now that word2vec and DNNs with transformers have replaced hand-coded grammar, I'm charting my path.<br><br>I'm impressed that you can do this on top of your day job. Thank you for all of your content. |
Great video, will recommend to colleagues! |
Morning, everybody. David Shapiro here with a video about robotics. Let me kill Flux... there we go. So today's state-of-the-industry update is going to be about GPT-3, or large language models in general, and robotics.

The first project I want to introduce is SayCan. What does it do? "As I can," not "as I say." This is an experiment to integrate robotics with large language models to perform the planning and actions, and the key thing they've done is use large language models to plan actions and figure out what the robot can do through affordances and weighting. I'm not going to go through the whole paper, but I'll give you the high-level points, and you can see it doing its thing here in the background. Their page (the link will be in the comments) is great: it walks you through the whole thing with demonstrations and examples, and you even see it from the robot's perspective. It's pretty cool.

I predicted this would happen back when I wrote Natural Language Cognitive Architecture, and it was not long after that they started doing this kind of thing. I don't know if they took it from my work, but this is a very early realization of a natural language cognitive architecture: it is using natural language to think and plan. One thing that's missing is that it has no moral affordances; it doesn't think about what's right or wrong, it's just doing a task. That is by no means a criticism of this work, which is phenomenal. It's setting the stage for future things, because eventually one of the things you'll have to create weights, values, or affordances for is danger: is something high risk or low risk? Is it morally or ethically ambiguous? Is it objectionable? By trying to assign numbers to things, that's how you get started. And there are plenty of other things within affordances that have nothing to do with morals or ethics. You can calculate things based on how much time a task will take: time is a cost, energetic expenditure is a cost, and so is money. Our brains assess cost, risk, and benefit automatically, very fast. Anyway, this really sets the stage, and I'm excited to see where this team, and the people who take this research forward, go with it. I definitely recommend you check it out; they have a simulation version you can download and run yourself.

The other thing is this company Engineered Arts. They have integrated GPT-3 with their robot platform so it can have open-ended conversations. I'm not going to play the whole conversation for you; it's clear they're using some basic GPT-3 prompting so it can respond to any question. I don't know how much they have solved in terms of long-term conversations, but hey, if they see this video, I'm happy to talk to them about using semantic search for holding long-term conversations, although I'm sure other people are figuring out longitudinal chatbots too. What's going to be interesting is when you have humanoid robots with long-format conversations and cognitive architectures for solving problems and following instructions. That's really what we're moving towards, and the Tesla Bot, that's what they want to do.

Ameca is, I think, the name of this chassis, but the company is Engineered Arts. You can see their robotic chassis. It's kind of in the uncanny valley of human forms, especially because it's got this slightly curvy, almost organic body; to me it's a little unsettling, but it'll take some time. It is still very impressive just how sophisticated it is. You can zoom in and look at the... I don't know if it's pneumatics or hydraulics, but it's got connection points in locations similar to where human bodies do. I'm particularly interested in these long ones that connect to the bottom of the arm, because we have really long muscles that connect there, and if you pull down it engages the muscles in your ribs as well, so you can crunch up your whole body. I did an experiment years ago where I was trying to make complex joints with cheap parts; it's called Murphy, and I think I've still got it down in my shop. Maybe I'll post some pictures. Anyway, it was not going to work, because I couldn't figure out the musculature; someone did recommend pneumatics. I'm drifting off topic.

So, I just wanted to show that SayCan came out just a few weeks ago, I believe. Let's look at their updates as of 8/16.
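The affordance-weighting idea described above (assigning numbers to time, energy, and risk before choosing an action) can be sketched in a few lines. This is a toy illustration with invented actions and weights, not SayCan's actual method, which combines an LLM's task-relevance score with a learned value function; the relevance numbers here simply stand in for what an LLM might assign:

```python
# Toy affordance weighting: pick the action whose task relevance
# best outweighs its costs and risks. All numbers are made up for
# illustration; a real system would get relevance from an LLM and
# feasibility from a learned value function.

def affordance_score(relevance, time_cost, risk):
    """Higher is better: relevance minus weighted time cost and risk."""
    return relevance - 0.5 * time_cost - 2.0 * risk

candidates = [
    # (action, relevance, time_cost, risk) -- hypothetical values
    ("pick up sponge", 0.9, 1.0, 0.0),
    ("pick up knife", 0.9, 1.0, 0.8),   # same relevance, but risky
    ("wander randomly", 0.1, 5.0, 0.2),
]

best = max(candidates, key=lambda c: affordance_score(*c[1:]))[0]
print(best)  # → pick up sponge
```

Note how the knife loses despite being equally relevant to the task: the risk weight dominates, which is exactly the kind of moral/safety affordance the video says is missing from current systems.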
Okay, so the initial release was back in April, then last month they did a bunch of updates, and then this latest one was released just a few days ago. So we can see that there's progress happening; there are multiple folks working on integrating large language models with robotics. The future is coming faster than you think. I am telling you: we might not have AGI solved, but as many people have commented on Twitter and elsewhere, we don't necessarily need full AGI before these things are, one, useful and, two, dangerous. You don't need something superintelligent in order to put something like this inside of, say, a combat drone, or a domestic robot, like a slightly smarter Roomba that you can talk to.

So the conclusion I'm coming to (and this is nothing special) is that AGI will be realized very slowly and in degrees. This is why I started talking about artificial cognition as the goal: because this is a machine that thinks. It thinks about how to solve a problem you gave it. It only does that one thing, but it's thinking. And then the problem becomes: how do you make it think more? How do you make it smarter, or faster, or able to solve more problems? So SayCan is a natural language cognitive architecture. It's a primitive one, but it's darn good. And then this guy over here, the Ameca robot: I don't know how much cognitive architecture is in it, but they're integrating it with a robot operating system, so they're working at it. I think history will record the deployment of large language models (GPT-3, BLOOM, LaMDA) as the inflection point where we actually solved general-purpose, open-ended computing, because that's really what it gives us. With the advent of large language models, we now have open-ended computing.

What I mean by that is you can have a system that just keeps thinking. It's not a finite state machine (or even an infinite state machine) moving from one mathematical state to another; it's completely open-ended. In my other experiments... let me open the repo... there we go. I'm reorganizing it, and I renamed this one "the longitudinal study." This was the experiment I did a few days ago where I'm testing core objective functions, under different paradigms, for their longitudinal stability. I showed that foundation models like davinci tend to be unstable, whereas fine-tuned models like the instruct series tend to be stable but less creative (everyone knows that), so I just demonstrated it in a different way.

Anyway, what am I trying to say? I don't know. It's just very exciting, and I'm curious to see how all this work comes together on robotic platforms. Obviously AGI does not need to live in a robotic platform; that's just going to be the thing you interact with. Personally, I suspect the most powerful systems are going to run in data centers, in the cloud, and you're just going to have a terminal, an endpoint: you'll interact with it through your phone or a smart home device, but its brain will be running in the data center. And that's how GPT-3 and other large language models run now; they're too big to run at home. You actually need several very large computers to run them.

So that's the state-of-the-industry update: we're in a very exciting time where large language models are being integrated with robotic platforms. We will see where this goes. Stay tuned; it's coming fast. Thanks for watching, like and subscribe, and consider supporting me on Patreon. Have a good one. |
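Semantic search, mentioned above as a route to long-term conversations, works by embedding each past message and retrieving the ones most similar to the current query. Here is a minimal sketch of that retrieval step; a toy bag-of-words vector stands in for a real embedding model (which is what a production system would use), so the similarity is cruder than true semantic matching:

```python
# Sketch of semantic-search memory recall: rank stored messages by
# cosine similarity to a query. Counter-based bag-of-words vectors
# are a stand-in for real learned embeddings.
import math
from collections import Counter

def embed(text):
    """Toy stand-in for a semantic embedding: word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)  # missing keys count as 0
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(memories, query, k=2):
    """Return the k stored messages most similar to the query."""
    q = embed(query)
    return sorted(memories, key=lambda m: cosine(embed(m), q), reverse=True)[:k]

log = [
    "we talked about pneumatics for robot arms",
    "you recommended a mongolian neofolk band",
    "robot joints need strong artificial muscles",
]
top = recall(log, "how should robot muscles and joints work", k=2)
print(top[0])  # → robot joints need strong artificial muscles
```

The recalled messages would then be packed into the prompt, which is how a fixed-context model can appear to hold a long-running conversation.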
Please consider supporting me on Patreon: <a href="https://www.patreon.com/daveshap">https://www.patreon.com/daveshap</a> |
I think Google's SayCan will be for robotics what Android is for smartphones |
Large language models should be integrated into all robots. Just think: asking a Spot robot to go to the nearest store and get groceries. Then they will fly off the shelves |
Ideally AGI would have a control plane on top of it that simulates results with rewards for safety and power efficiency., @David Shapiro ~ AI sounds like a plan; as humans, even the brightest of us make costly errors out of ignorance of outcome. My assertion is that we are far away from that type of AI and close to systems that can build their own outcomes without human intervention. Today I can ask an AI to write code to solve a problem and build a structure to reach an outcome. Tomorrow the human may not be in the loop and AI neurons will be building upon themselves., You should read my latest book Symphony of Thought. An entire section is about thinking ahead., @David Shapiro ~ AI I see what you are saying: decision making with inference is the goal. I prefer applying rules and logic after the AI decides the course of action. We don't know enough about outcomes to allow the system to decide what to do and then execute it without oversight from a completely different system. I have a video about it coming soon but I fear that it won't be as polished as your videos. Even if the model was 90% accurate, and we humans often make mistakes, a robot's mistakes could have terrible consequences., Maybe not "on top" but integrated with directly, so that it is consciously aware of such things and can make deliberate, thoughtful decisions based on cost, safety, power, etc. |
Get ready for liftoff... |
SayCan: <a href="https://say-can.github.io/">https://say-can.github.io/</a> |
Ameca Robot: <a href="https://www.engineeredarts.co.uk/robot/ameca/">https://www.engineeredarts.co.uk/robot/ameca/</a> |
hey everybody David Shapiro here with a brand new video um yeah so I posted on my YouTube channel the Community page a few days well I guess it was just a day ago good grief uh two days ago um I had this idea I used the chat GPT to destroy the chat GPT um so basically chat gbt is an incredibly powerful tool um and uh obviously people are using it for all kinds of stuff but you know what if you use the AI to recreate the AI or improve the AI right what if you have this kind of AI Inception and if I keep looking like beyond the camera it's because there's some really gross looking storm clouds out there um anyways sorry distracted so I have this repo created recreate chat GPT so you can follow my work um and prompts and stuff uh and uh I've got this open so you know here's the here's the goal um but first so this is going to be a long series um if it if it even works I might give up I know some of you have uh have noted that a lot of my stuff doesn't work um that is on brand for my channel um so my channel is about fudging through things and and finding what doesn't work as well as what does and figuring it out the hard way so keep that in mind I have no expectation that this will work uh and neither should you so first let's establish requirements whenever you're starting any new project you have to do planning so this video will be about planning and we might have to do like two or more videos on planning anyways so let's talk through this I am building an AI chat bot um just getting started um let's see I need your help uh don't worry about the AI part just yet um we are uh in the initial stages of planning um let's start by focusing on the architecture um database uh web front end back end Etc um I already know that I want to use Azure Azure is my favorite platform um so we can start there um talk through this with me um I basically need you to be a project manager product owner and Cloud architect all in one are you up to the challenge Soldier all right let's 
see if chat GPT is game all right it sounds like you have a good idea of what you want to do and the tools you want to use here are some steps that you could follow to get started Define your project and goals and objectives what do you want your chatbot to do what is the target audience having clear goals and objective and decision making okay determine your chatbots functionality what kind of interactions do you want your chatbots users to use uh how uh complex carry uh the the carry on more complex conversations that's tongue twister choose your development platform you mentioned you want to use Azure which is a good choice some options include the Azure bot service to build custom chatbot logic Cosmos DB Azure SQL database good good Azure table storage build and test your chatbot okay cool and once this is done I'll actually save this I'll save the this conversation out um so that way you can see uh the whole process damn son it's really going or if you'd like more detailed guidance Okay cool so let's go ahead and I'll just save this as I go so I don't have to do it um here let's zoom out a little bit that's too far zoomed out okay so it's got oh here I need to do view uh oh word wrap is on oh it is wrapping okay never mind it just some of these are are long all right cool so uh cult goals and objectives Define the functionality um so so here's the skinny um there's this product out there I'm telling chat GPT about chat GPT called chat gbt which uses um GPT 3 as the AI backend this is a general purpose let's say Universal chatbot that can do just about anything um now there are a few uh characteristics that set this chat bot part from all the rest first it tends to give very verbose responses like wall of text or entire scripts um if you ask for code um but it can do just about anything it also seems to have a good long term not long long term memory um which uh let's see so those are the two primary features we need infinite flexibility and long-term memory um 
this is something I've studied so I know how to do it from an AI perspective um so first let's focus on defining um this project can you help distill down the requirements uh for me all right I will need to be able to handle a wide range of tasks personal recommendations decision making support um okay it doesn't seem like it understands that it's Universal it's like you're gonna have to build this so it looks like a scalability that's a good thing uh open AI had problems with scalability didn't you um so scalability security long-term memory functionality user experience as a key yeah ux is a primary um all right I hope this gives you a better understanding yep sure Perfect all right so let's go ahead and save all this because that's good stuff um here I'll add some extra white space and go ahead and save this uh recreate chat GPT so this will be um uh convo planning01 I have no like I said I have no idea how many of these it's going to take um okay so it's basically kind of regurgitating as like uh okay so from a from a project management standpoint what's next I have no idea I have no idea what it's gonna say project timeline milestones and deadlines tasks and subtasks um resources a list of the people tools and uh uh and other resources budget risks and contingencies um cool can you write the project plan for me um based on everything I've shared already okay so this is this is not going to actually write it it's just going to give me an overview okay so we found a limitation it can give me General stuff but it doesn't seem to understand like no write the plan for me and I can't I can't abort it okay so um actually no here we go it's it's getting it's getting a little bit specific um hold on hold on maybe I was wrong project of if you're of some array of the project goals oh wait that's just a fill in the blank Task 1 choose development platform subtask two research available platforms compare features and costs design its database build and test the chat bot 
um yeah you know I guess database is probably most very important um so it's recommending Azure Cosmos Microsoft bot framework monitoring analytics my Azure monitor okay yeah that's fine so it really wants me to focus on the database which okay that's fine Auto scaling features great we'll get to that um so this is actually pretty helpful although I I am using a lot of my own knowledge and and mental designs to kind of zero in on the stuff that's most helpful um so keep that in mind remember with crime anything is possible so write that down sorry no we're not doing any crime here um okay difficulty building and testing contingency seek help from experts not bad difficulty scaling contingency risks and contingencies this is I might actually use this for my uh my startup this is not bad I have a I have a personal project manager just working through all this okay cool cool thanks um let's focus on the database first um I think you're right to focus on that after all data is everything right can you tell me all the reasoning around starting with the database design um what are all the uh conditions and criteria we need to look at and you start oops um what kind of database or data store should we look at one thing to keep in mind is that we need to handle um semantic search vector-based search with semantic embeddings for an arbitrarily large number of Records um think billions um basically we need a super powerful AGI brain database okay go deconstruct this for me designing the database of your chatbot is an important step as it will determine how your chatbot stores and achieves information here's some key considerations to keep in mind data model you'll need to decide how you want to structure your data and what types of information your chatbot will need to store this will depend on the functionality of your chatbot and the types of interactions you want to have with users for example if you want the chatbot to be able to provide personalized recommendations you 
need to store data about the user's preferences and interests okay data storage performance all right this is kind of generic I was hoping that it would be a little bit more specific in terms of Technologies but we're uh well actually no I got a little ahead of myself nosql Azure Cosmos DB nosql databases are generally more flexible and scalable than traditional relational databases and can support a wide range of um okay so it keeps recommending Azure Cosmos cool performance database is fast and responsive scalability security got it cool you keep recommending Azure Cosmos DB um and you mentioned no SQL seems like this is the way to go um explain your reasoning justify yourself fool don't abuse your chat bot your chatbot is just trying to help this is actually a lot of fun okay Azure Cosmos DB uh it's globally distributed multi-model database uh highly scalable performance excellent excellent excellent global distribution high scalability handle billions of requests per day cool multiple database models including document key value graph column family you can choose the data model it's low latency it's security okay so looks like it's going to recommend Azure Cosmos DB um great large volume excellent excellent excellent um sure uh oh here let me save these um because these are really valuable all right where did I leave off um let's see Oak so that we left off with the tasks uh okay so we left off here so let's get all this um and let's continue on okay cool all right fantastic uh let's talk through the data model then first thing that comes to mind is the raw chat logs um I said you said or you know uh bot said Etc um I think we should probably store the raw chat logs um somewhere somehow what kinds of fields or document or schema do you recommend for storing raw chat logs foreign user identifier okay who is participating a time stamp when the message was sent a message type whether it was sent by the user or the chat bot uh message content okay so user 
identifier is saying like who's having the conversation and then you can have so okay that makes sense context depending on the functionality message content so that's fine context for instance your provide personalized recommendation you might want to store data about the topics that have been discussed conversation identifier yep okay this sounds good let me go ahead and save this um let's see or you could use traditional like Azure SQL um yeah uh let's think about which DB type to go with here um semantic search or indexing or vector search are important um do either Azure SQL or Azure Cosmos allow for that or will we need to look at another service okay so it says they both allow for search and indexing but they it's full text search um you can do full text index contains or free text predicates your search queries okay um Azure Cosmos DB on the other hand includes support for Azure search which is a fully managed service um that can be used to index and search it uses Apache leucine okay um which uses faceted search which faceted search if you're not familiar with is I believe topical so it facets it based on topic geospatial and language aware search you can use Azure shirts to perform vector-based searches by creating a search index on your Cosmos DB data okay cool uh let me go ahead and save this because this is useful all right so this it looks like Cosmos DB um does handle everything that we need [Music] all right cool Azure Cosmos it is um glad we got that squared away uh now let's think about other operations that we'll need to happen on the data um what I mean by this is that we will want to um hold out a scratch pad functionality so basically we will summarize conversations as they happen so that we can keep track of the whole task another thing that we need to do is um is be able to retrieve vast amounts of relevant historical information based on topics um uh topics uh temporal factors emotional valence and so on um this will be useful uh for ux 
purposes um including relevant personal information or historical historical information but we will also want to build um or a fine-tuning data sets so that the underlying models get better one thought that I had for this was to use the raw logs to create knowledge graphs um uh from the conversations um how would this work is it a good idea do you have any alternative recommendations using knowledge to organize and represent data from chatbot front conversations can be a powerful way to enable a range of functionality such as summarizing conversations retrieving relevant historical information and improving the underlying models so one thing I want to point out here is that this response in no way relies on the rest of the conversation my brain is holding the conversation so ins this could be a gimmick right this could be smoke and mirrors where it feels like I'm having an ongoing conversation but what in this response required any previous knowledge about um about the conversation not much so that's what I wanted that's why I wanted to make this video is to point out as we went like we already saw some limitations where um it wasn't necessarily limitations it basically has to talk through it and figure out what it is that you want and and then fully answer the question which is great um but then there's nothing in this particular response that requires any long-term memory it just it looks at the question and then you know because I gave it a lot of information and it gave me a very thorough answer so there's nothing wrong with that but you need to be you need to be cognizant um especially when you're designing these things because I've done a lot of work on cognitive architecture you can waste a lot of time and energy pulling irrelevant information irrelevant memories and trying to summarize them and you'll just be wasting a lot of time in compute Cycles um okay so let's see what it said um let's see uh it's such a summarizing conversations we're treating 
retrieving relevant historical information um oh hey look it it keeps the chat um so in this case the ux allows me to just say this is the chat so I guess I can come back to it so that's that's cool um a Knowledge Graph it's a visual representation of the relationships and connections between different pieces of information uh set of nodes which represent entities or concepts and edges which represent the relationships to build a Knowledge Graph okay it's just going to tell me the the the nitty-gritty um let's see overall it can be useful way um Okay so um summarize the overall uh um data design so far so it turns out I do that little like singing thing while I'm bagging groceries at the grocery store and the uh the the girl at the checkout counter like called me out on it and she's like oh you were here a couple weeks ago like I'm apparently their favorite customer or something because I'm I'm the dude who sings to himself while I'm bagging groceries apparently I also do it while I'm reading chat GPT output okay to summarize the overall data design you will use Azure Cosmos as your database technology it is globally distributed blah blah you'll store your chat logs and Azure Cosmic GBS documents with each document representing a single conversation okay I don't know if we agreed on that but the documents will include Fields such as user identifier timestamp message type message content and conversation identifier you will use Azure search which was a fully managed service to integrate cool you'll use knowledge graphs to organize and represent your chat log data such as gpt3 neo4j and graphql you will anyway able this will enable you to summarize conversations retrieve relevant historical information and improve the underlying models yes cool so let me do a quick time check how far in are we we're already at 23 minutes so we will probably just do a little bit more um I feel like the I feel like the whoops come back here I feel like the overall thing is kind of 
coming into focus: a data-first approach. Okay, cool. That's not necessarily what I would have thought, but I just followed this thing's lead and it seems to make sense — it's thinking through the problem in terms of data. All right, two things come to mind. First: what are the pitfalls here? What do we have to watch out for in terms of design, search, and retrieval? What I want to see is whether it knows that you can waste a lot of time on search and retrieval. There's going to be a lot of data to use, so we need to handle it carefully — what are your thoughts on that? Let's see if it understands; I'm thinking through this, I have a concern, so I just ask it. "As your data becomes more popular, you'll need to ensure that your database and search infrastructure can scale." Data quality — okay, I guess that falls under scalability, that's fine. Data security, data quality — it's regurgitating itself a bit. Search relevance: "when designing your search functionality, you'll need to ensure that it is able to return relevant results to user queries; this may require careful tuning of your search indexes." Data privacy: GDPR and CCPA — excellent, that's fine. Let's save this. I said there were two things — pitfalls and something else I don't remember — but let's focus on scalability and relevance. Let me explain why: large language models (LLMs) are expensive to use. You pay per token, which means we want to reduce dependence on the bigger models as much as possible. So let's think about how we can offload as much work as possible to faster, cheaper tools like indexed search, rather than doing a lot of inference with LLMs. Basically: how do we focus in on retrieving the most relevant memories for chat? For instance, in a sea of a billion chat logs, how do we find the ten most relevant conversations? Ready, set, go. |
"There are a few strategies you can use to focus in on the most relevant memories for your chatbot and reduce your dependence on large language models. Use full-text search if you're using a traditional relational database like Azure SQL..." — so at this point it has forgotten that we're using Cosmos DB, which means it obviously doesn't have the best long-term memory itself. Then it mentions something based on Apache Lucene, then "use machine learning" — this is giving me really generic stuff, super not what I was looking for. Use machine learning? Duh, that's why we're here; if it had kept track of the whole conversation it would know that. "Use data sampling if you have a very large dataset" — duh, I just said we have billions of conversations. Actually, let me read this a little more carefully, because we did talk about these. "Use machine learning to predict which memories are the most relevant to a given query. For example, you could use a classification model to predict whether a given memory is relevant or not." That indicates the possibility of using something like a support vector machine — well established and much cheaper — so that's an interesting possibility. "Use data sampling: if you have a very large dataset, you may want to consider using data sampling techniques to find the most relevant memories. Data sampling involves selecting a representative subset of your data and using it to train a model, rather than using the entire dataset." These are interesting possibilities, but I was hoping for something more real-time — more tactical, let's say, rather than strategic. So we're thinking this through. |
What it's describing is basically a kind of quantizing, but it gave me something generic rather than specific. So let's ask: "Do you have specific examples of these techniques — any particular tool, platform, ML method, or coding tactic?" Let's see if it gives me something specific. Okay, it mostly repeats itself, but it is telling me how to run a SQL database: create a full-text index, then use CONTAINS or FREETEXT predicates — SELECT ... FROM ... WHERE CONTAINS(...), and so on. You generally want to be careful with queries when your databases are this big — though SELECT * is probably okay; it's the WHERE clause that matters. Service names, indexes, the API — it's giving me a little bit of specificity now. "You can use the Azure Search REST API..." — okay, maybe I'm overthinking this. Maybe simpler is better, and it's worth trying; we won't know until I actually give it a shot. Let's see: scikit-learn, TensorFlow, "you'll need to prepare your data..." — and it looks like we ran out of tokens. So basically we're probably going to search, find the five or so most relevant things, summarize them, and then recursively summarize over time. Let me do a time check — we're at 30 minutes, so we'll call it a day. "Please summarize this whole conversation so far," and then we'll wrap up. "You're planning to build a chatbot that uses large language models as the AI backend, with the following characteristics..." — oh wow, it does remember: infinite flexibility, long-term memory. "To implement these features you're considering using Azure as your cloud and |
Cosmos DB." Maybe I was wrong — if it went all the way back to the original post, maybe it has some scratchpad capability or long-term search. Excellent, excellent. So yeah, I'm pretty jazzed; this has been fun. Obviously I can't stop thinking about these things. I had a few preconceived notions of what this was going to be like, but I had no idea it would focus on the data model first — the machine did that for me. Very exciting. Thanks for watching, and stay tuned as we continue this planning project: we are going to use ChatGPT to recreate ChatGPT to the best of our abilities. All right, take care. |
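The tactic I land on here — use cheap search to pull the handful of most relevant logs before spending any LLM tokens — can be sketched in a few lines of Python. This is a toy illustration, not ChatGPT's actual retrieval mechanism: `dot`, `top_k_logs`, and the tiny two-dimensional vectors are stand-ins, and a real system would use full-size embeddings and a proper index.

```python
# Toy sketch of "search first, summarize later": rank stored chat logs by
# vector similarity and keep only the top k before any LLM inference.
# The 2-d vectors here are placeholders for real sentence embeddings.

def dot(a, b):
    """Dot product of two equal-length vectors (cosine similarity if both are unit-normalized)."""
    return sum(x * y for x, y in zip(a, b))

def top_k_logs(query_vector, logs, k=10):
    """Return the k chat logs whose stored vectors best match the query vector.

    Each log is a dict like {"dialog": "...", "vector": [...]}.
    """
    scored = sorted(logs, key=lambda log: dot(query_vector, log["vector"]), reverse=True)
    return scored[:k]

logs = [
    {"dialog": "talked about Azure Cosmos DB", "vector": [0.9, 0.1]},
    {"dialog": "talked about GDPR compliance", "vector": [0.1, 0.9]},
    {"dialog": "talked about database scaling", "vector": [0.8, 0.3]},
]
best = top_k_logs([1.0, 0.0], logs, k=2)
print([log["dialog"] for log in best])
# ['talked about Azure Cosmos DB', 'talked about database scaling']
```

The top-k results would then be summarized (and recursively re-summarized over time) so that only a short digest ever reaches the expensive model. |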
Support me on Patreon so that I can do this full time and keep the content ad free! <a href="https://www.patreon.com/daveshap">https://www.patreon.com/daveshap</a> |
Justify yourself fool! I knew I liked you. |
David, I have no words for how grateful I am for how smooth and natural your channel feels. The amount of knowledge in your GitHub repos ALONE is INSANE!!!!! Add that to the YT channel and this is in a different realm. I've run through at least 15 of your videos in the last month, and I must say I'm definitely joining the Patreon. THANK YOU! |
Just discovered your YouTube channel, AMAZING! Keep up the amazing work David! <3 |
Maybe you should look into using spaCy to do semantic search in your chatbot. |
This is so good! Looking forward to the next! |
Really valuable stuff here just watching as a noob. Saw some stuff I wouldn't think to look for. <br>Hey do you think if you preempted a conversation with something like "Please remember everything I tell you from this point forward, as it is important for the rest of the conversation" that it would have "better" long term memory? It kinda pulled thru there at the end though., @David Shapiro ~ AI Thanks for the great reply!, I am not certain how it holds long term memory. I find there are limits so it could be that the memory "decays" (the further back in the convo, the less it is "remembered") which could happen if there's recursive summarization or the scratchpad idea. the fact that memories decay seems to indicate that it does NOT use search, otherwise the recall would be perfect when needed. That said, I doubt that it has a "salience" mechanism. Human memories are comprised of three primary things: content (what is remembered), context (mostly where and when, and under what circumstances), and "emotional salience" - memories have extra weight if they are labeled as important. The emotional content of a memory is the number 1 thing that burns it into memory. Fear, anger, joy, pain, etc. The higher valence the emotions, the heavier weight the memory has. I have been thinking about this for when I get to Part 3 of the series. Basically, I am going to try a combo of search, scratchpad, and knowledge graph, and I will try "salience" weighting or something like that. |
Your videos are amazing!!Tactic: when you run out of tokens for an answer just say “continue” and you’ll get more of the answers., Yeah I figured out that trick more recently. |
Great video, sir!! <br>You could write a book:<br>The Ultimate ChatGPT Guide<br><br>500+ powerful examples for everyday use, ready to simply copy and paste into ChatGPT. |
Really? Looking forward to seeing that video. |
Morning, everybody. Today's episode of State of the Industry is going to focus on hardware. First I want to talk about Intel, because Intel is one of the world leaders in CPUs — they make desktop and server CPUs as well as some mobile parts, although ARM and a few others dominate the mobile space. Sapphire Rapids is the code name for their upcoming release, planned for 2022, and one thing someone shared on the Discord is that it's going to start supporting the AMX extension. So what is that? AMX is Advanced Matrix Extensions, which gives the CPU more native support for artificial-intelligence applications. It's a step up from AVX — Advanced Vector Extensions — in that it moves from one-dimensional vectors to two-dimensional matrices, which are the bedrock of AI today, because all deep neural networks use tensor math, matrix math. So, a shout-out to Sapphire Rapids. The other thing is that Alder Lake, the desktop part, is not going to have AMX support — or at least it's rumored not to; in its list of extensions AMX is missing, though it does still have AVX. Go back to Sapphire Rapids and you'll see it has AVX-512, which is even wider than what the desktop parts have. Basically, what I predict is that conventional hardware — Intel chips, AMD chips — is going to start supporting more and more AI-related math, so you may or may not need a separate GPU or TPU. Right now, if you want to do AI locally, you need special hardware to accelerate it; you can run it on a normal CPU, but it's sometimes a hundred times slower. |
With that said, I also wanted to introduce the concept of INT8. Most weights are represented as 32-bit or 16-bit floating-point values, which means each individual parameter takes four or two bytes. What we've found is that you can reduce that down to one byte — eight bits — and it's much, much smaller: it takes less memory and less storage, and the processing is faster. When you distill, or quantize, a neural network down to 8-bit, the efficiency is much better, but you don't actually lose much of the model's performance. There are a few papers out on this — one came out August 15th, almost a month ago — and here's a Hugging Face article about it; I'll have all of this in the comments: "A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes." So I'm showing you AMX to make the point that matrix math is getting built into CPUs, and in the meantime we're finding more efficiencies in the transformer space. The hardware is ramping up, and at the same time we're optimizing the neural networks. What I hope to see before too long is large language models running on conventional commodity hardware — that would be ideal. So those are the two opening items: Sapphire Rapids will have AMX support, which will make matrix math faster, and 8-bit matrices — 8-bit parameters — are on the rise. |
I mentioned quantization, so what is that? Quantization takes a smooth curve and turns it into a stepped curve — that's probably the best way to picture it. You take a 32-bit floating-point value and turn it into an 8-bit value: you represent the same waveform, or curve, or data, just at lower resolution. And with neural networks, because you have so many parameters, it kind of doesn't matter — you can represent things very coarsely and still get good results. |
Taking that one step further, you can also build analog neural networks. I remember, many years ago, when the memristor was announced — analog computing. This is the idea of a memristor, a tunable resistor. A parameter just affects the way information flows through a matrix, and you can do that at the hardware level: you've got a grid of resistors, and the resistors change the current flowing through the circuit. If you can modify those resistors — which is what a memristor is — you basically bring compute and memory together. This is the compute-in-memory processing element: you embed a deep neural network, or at least one layer of one (I'm not sure of the full architecture yet), directly in hardware. Mythic AI is a company producing these chips with hardware-level neural networks, and they take a lot less power. Basically you lay out all the layers you need in hardware and each one feeds into the next. So there you go — an example of a hardware-level neural network. I was also trying to find an article someone posted, I think on the OpenAI community, months ago, where a team of researchers created a neural network that used, I think, ambient heat or the vibrations of different materials — neural networks that did processing just with ambient heat passing through materials like aluminum and glass. I could be remembering that wrong, but the idea is that all materials do some sort of information processing by virtue of how they transmit heat and sound (and some transmit light), and because the energy |
passing through those materials changes in a predictable way, you can actually use the materials themselves to do computation. The examples they had were very basic lab demonstrations — not particularly dense or sophisticated — so it'll be a while before anything like that is commercially available. I first remember seeing memristors probably ten years ago, and the cycle from theoretical lab demonstration to commercial application is five to ten years, sometimes longer. OLEDs were in the lab for decades — a friend's dad told me about OLEDs in the '90s, and we only started getting OLED phones within the last few years. That's a twenty-plus-year development cycle: sometimes you have an idea and prove that it works, but it takes a while to commercialize it. So zero-power, or almost zero-power, neural networks are coming, but they probably won't be programmable, because the behavior will be baked into the material — unless they're tunable; maybe you get something similar to memristors, or maybe these other processes will make memristors more efficient, who knows. What this does is let you embed a deep neural network on a chip that uses very little power — theoretically only ambient heat energy for its processing. When you get to that level, this might be the path toward AI that is more efficient than the human brain. Right now the human brain hypothetically operates at exascale, but the first exascale computer we have requires 21 megawatts of power, while the human brain requires only about 20 watts. That makes the brain roughly a million times more efficient than an equally powerful supercomputer. So how do we get a power reduction by a factor of a million or more? It's probably going to be this hardware-embedded AI — matrix engines, memristors, those kinds of things. |
One of the things this gives you is compute-in-memory processing: compute and memory paired very closely together, which is what neural networks do in the brain. In the brain, memory lives in the synaptic connections — but so does computation; memory and processing happen at the same time, which is kind of difficult to wrap your head around, and there are people who have articulated it better than I can. So that's the low end — low-power, mobile: the kinds of chips that will be embedded inside cameras, cars, and drones to do things like visual navigation. On the high end, you've got Cerebras with their Wafer Scale Engine: a giant die with 850,000 cores and 20 petabytes per second of memory bandwidth. This is how the largest models are being trained and run today. There was a rumor that OpenAI got their hands on one — or several — of these, because suddenly some of their models got way faster. Either they did something like quantization, reducing from 16- or 32-bit floating point down to INT8, or they got better hardware. Better hardware has faster memory bandwidth and integrates everything more closely, because the distance information has to travel is a big source of latency. So instead of a cluster of GPUs scattered across multiple servers, you do everything on one chip — and that's also what Mythic is doing: rather than having a GPU, a CPU, storage, and memory all separate, it's all together. By shortening the distance things have to travel, you get big efficiencies in time, because of lower latency, but also in power, because you have fewer superfluous systems. We're getting more and more purpose-built: these chips on the small, portable end; the wafer-scale stuff on the high end; and in the middle, conventional general-purpose server hardware that's going to integrate some of these abilities. Okay, I think that's it for the day. I just wanted to bring you up to speed on what's happening in AI land at the hardware level — and how that hardware is meeting the software, and the software the hardware. Anyway, thanks for watching; like and subscribe, and consider supporting me on Patreon. Have a good one. |
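The FP32-to-INT8 idea above can be illustrated with a toy affine quantizer. To be clear, this is a simplified sketch of the general principle only — it is not how LLM.int8()/bitsandbytes actually works (that scheme uses vector-wise scaling and special handling for outlier features):

```python
# Toy symmetric quantization: map float weights onto 8-bit integers and back.
# A simplified illustration of the FP32 -> INT8 idea; real schemes like
# LLM.int8() (bitsandbytes) use per-vector scales and outlier handling.

def quantize(weights):
    """Map floats onto the signed 8-bit range [-127, 127] with a single scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the 8-bit representation."""
    return [x * scale for x in q]

weights = [0.4231, -1.2074, 0.0517, 0.9985]
q, scale = quantize(weights)
recovered = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
print(q)        # each value fits in one signed byte, in [-127, 127]
print(max_err)  # worst-case rounding error, bounded by scale / 2
```

Four bytes per parameter become one, and because deep networks have so many parameters, the small per-weight rounding error mostly washes out — which is exactly the "stepped curve" intuition above. |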
Please consider supporting me on Patreon: <a href="https://www.patreon.com/daveshap?fan_landing=true">https://www.patreon.com/daveshap?fan_landing=true</a> |
Be nice to have an 8-bit fixed point value, and an 8-bit float, both with a range of 0 to 1. Otherwise most possible values (such as 215.7) end up being useless. |
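The commenter's suggestion — dedicating the byte to the range 0 to 1 — works out to a uniform step of 1/255. A minimal sketch of that fixed-point encoding (note that standard INT8 quantization achieves the same effect by attaching a scale factor to a generic int8):

```python
# 8-bit fixed-point encoding of values in [0, 1]: one byte, step size 1/255.
# Illustrates the comment above; no byte values are "wasted" on out-of-range
# magnitudes like 215.7, at the cost of only covering the unit interval.

def encode_unit(x):
    """Store a value in [0, 1] as a byte (0..255)."""
    return round(x * 255)

def decode_unit(b):
    return b / 255

x = 0.7
b = encode_unit(x)
print(b, decode_unit(b))  # round-trip error is at most half a step, 1/510
```
 |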
HuggingFace INT8: <a href="https://huggingface.co/blog/hf-bitsandbytes-integration">https://huggingface.co/blog/hf-bitsandbytes-integration</a> |
Mythic AI tunable resistor: <a href="https://mythic.ai/technology/analog-computing/">https://mythic.ai/technology/analog-computing/</a> |
Cerebras WSE-2: <a href="https://www.cerebras.net/product-chip/">https://www.cerebras.net/product-chip/</a> |
Sapphire Rapids Intel: <a href="https://en.wikipedia.org/wiki/Sapphire_Rapids">https://en.wikipedia.org/wiki/Sapphire_Rapids</a> |
Morning, everybody. David Shapiro here with a video. This is one I've been wanting to do for a little while, and someone on YouTube said, "hey, what about marketing stuff?" — so I was like, all right, let's do it. I had this idea for Marcus the marketing guy a while back, almost a year ago. A little background: GPT-3 already knows more about marketing than any single person, because it's read most of the internet. The problem is getting to that information, and with prompt engineering you can extract some of it. Obviously fine-tuning — especially with a bespoke dataset focused on particular marketing activities — would do better, but just out of the box we've got a lot of functionality. So I want to show you an example. In this prompt I've got "I'm an expert..." plus a few things it can do: product-market fit, SEO, et cetera. By preemptively saying "focus on these topics," it should get a little better at them. So: "Marcus, I need help understanding how to market my new product, the WYSIWYG web builder." Okay, it just took that and ran with it. I'm a little disappointed that it did the newline-newline, because it's supposed to realize we're doing one input after another — but anyway, it's fine. So he says: there are a few things you want to consider. First, make sure you have a solid understanding of your target market: who will be using your product, and what problem does it address? You also want to create messaging that resonates. Okay, so let's say I want to focus on messaging right now — how do I do that? "You'll want to focus on creating a message that resonates with your target market. Avoid jargon or buzzwords; instead, focus on creating a |
message that is clear and easy to understand." Okay. User: "Can you brainstorm some messages with me, or help me figure out what to say?" Marcus: "unique features" — okay, is it easier, more affordable, or unique? User: "My product is unique because it reads your mind; you don't even need to touch the keyboard." Let's see what he says to that: "That's definitely a unique selling point." Okay, but what should I say? "Give me some ideas for marketing copy." There we go: "The only web builder you need — it's fast, easy..." All right, this prompt is good enough, so we'll start here. What I did in this prompt was give an example of how we want it to behave, and we'll add the newline-newline since it seems to prefer that. Okay, we'll save the prompt; let me zoom in a little. I'll probably pause the video here, because there's a bit of copy-pasting I need to do. Okay — so all I did was a little scaffolding; let me show you. This is based on a previous chatbot: a super simple loop with a really simple prompt. You've got the prompt, and you've got the main loop. "Can you write some marketing copy for me?" It didn't reply — that's usually a problem. Oh, I bet I know what happened. If we look at the GPT-3 logs... yep, it stopped on "MARCUS", because I have "MARCUS" in the stop sequences. I need to remove it and use just "USER" as the stop; we'll see if that works. You'll also notice it's loading TensorFlow — I'll show you why in a second. "Can you write some marketing copy for me?" "Of course, I'd be happy to." Cool, thanks. Let's see: "Write something to sell me ice during a snowstorm — let's see how good you are." "Beat the heat!" There you go. So Marcus will do whatever you want, but |
you can also go deeper: "Can you tell me deep secrets about SEO?" — be more serious about it. "Ranking in search engines means using keywords," okay. "Tell me more. I want SEO magic." Now, this is not a fine-tuned bot, so it's not going to be perfect — that's fine. What I wanted to do was make it a little more sophisticated by adding something in the background that makes it a bit smarter. So I added this save_log function. We're going to save everything out to a chat log, but we're also going to embed it — with Google's Universal Sentence Encoder. I'll borrow some code from my Nexus microservice. So we'll have an embedder ready, and when each turn comes in, we'll create an embedding for what you typed as well as for the response. We'll do the embeddings after printing, because you'll take a second to read anyway — so we print everything out and then compute the embeddings, calling them "vectors," one for each side. Then in save_log, the information we want to save is really just the dialogue and the embedding: {"dialog": ..., "vector": vectors[0]} for the user input — I need to wrap that in curly braces so it's a dictionary — and then we'll save Marcus's response with vectors[1]. Okay, let's restart this and I'll show you what this does for us, because I'm going somewhere with this: we want to give Marcus a long-term memory — a little bit |
early, but still. Let's see... nope, still loading. "Hey, how are you today?" "Doing well, thanks for asking. How can I help?" See, Marcus is very helpful. "json is not defined" — what? Oh, import json, my bad. I also realized you need to initialize the embedder first, get that ready, and then start taking user input. We'll add a little demarcation here to make it look prettier — oh, and I need to update that to "USER"; poor Marcus, very confused. We'll add a newline after that, and print an extra newline to make the formatting prettier. Okay: now it loads TensorFlow, loads the embedder, and then Marcus starts yakking at us. "What all can you help me with?" There we go. You saw there was a brief delay, and now — there we have it — we've got a few log files. We've got the vector... oh, it saved as .txt; oops, that's not correct — save_log should write .json, not .txt. And let's rename the key to "dialog" so it's up front. Delete these and start over — I know what I'm doing, I promise. Rapid iteration; this is just agile. Also, I'm not going to do a whole lot of editing, because I know you all watch this at 1.5x or 2x anyway — or maybe the slow pace of the real-time recording is good, so you have time to think. Let me know in the comments, because editing takes a long time; I'm happy to do a little cleanup. Okay: "Write some marketing copy. I want to build a resort on Hoth, but I need to pitch to investors." Let's see what he says — and we'll see those logs pop up in just a second. There we go: dialog — "write some marketing copy... resort on Hoth" — and then Marcus: "Hoth is beautiful," blah blah blah. Cool. So here's what I want to do: as we build up a chat log — a rapport — with Marcus, I want him to be able to |
remember stuff. The absolute simplest way to give a chatbot long-term memory (sorry, my ear just started ringing) is to record all the logs and search them. And the best way to search them is with embeddings. If you want old-school search, you can do keyword search, fuzzy search, lemma search, all that kind of fun stuff — but that's old school. We've got embeddings; we're modern. We're going to use a 512-dimensional embedding to find relevant conversations from the past. Here's the plan. I'll reuse some old functions — there's a search routine I've already written, so we'll simplify from that. We'll define similar_logs, taking a vector; we'll compute the query vector just once, since we'll be reusing it, and call similar = similar_logs(vectors[0]). Basically, I'm going to look for past logs similar to what you just said, so that Marcus can remember whatever you were talking about last time — and there are obviously lots and lots of ways to handle this. So: load_logs() sends back everything; call it "chat." For each entry in chat, score = np.dot(entry['vector'], vector) — we use the dot product to compare the vector we just passed in against each stored vector. Then |
info is a copy of the entry — which has the dialog and the vector — with the score added. If the score is greater than or equal to 1.0, we continue, because if you search against the log you just created it will always match itself, and we don't want duplicates — if it's an exact match, skip it. We're not going to wrap this in try/except; that's not necessary here, this should just work. Then ordered = sorted(results, key=lambda x: x['score'], reverse=True) — sort top to bottom — and return the top ten results; if that slicing fails, just return ordered as-is. I do this defensively because if you only have two logs, indexing zero-to-ten can throw an index-out-of-range error. Okay, that should be good. Then, when we build our block of text for the prompt, we inject the similar logs at the beginning: start similar as an empty list, then block = '\n\n'.join(similar) + '\n\n' plus the recent conversation. So now we're injecting the ten most relevant past exchanges right into the dialogue — into the chat — and that way Marcus will intrinsically have a memory of whatever we've been talking about. I think that'll work. We'll need to watch the GPT-3 logs here, because you'll see the older stuff pre-populated at the top — that's where we'll be |
looking for it. Okay — that didn't take as long as I thought it would. Now, if I ask Marcus about Hoth — "hey, what were we talking about, for Hoth?" — I don't think he'll remember the first time, because it's not pre-loaded. Yep: "os is not defined." Well, don't be darned — import os. This is always a problem when you copy-paste stuff. Let's try again: "Hey, let's talk about Hoth again." So it picks up Hoth — "Hoth is a great tool for marketing." That's not what Hoth is, thank you. "Do you remember what I asked last time?" "join: expected str instance, but found dict" — whoops. Here I'm returning all the information, not just the dialogue, because each result has the score and vector baked in as well. Similar is a list of dictionaries, so I need to treat it differently: [i['dialog'] for i in similar]. That should work — rapid prototyping. Let's see how long this video is — oh, we're at 20 minutes; this is fine, we'll call it a day in just a second. "Let's talk about planet Hoth again." Yep. "What was I asking about last time?" And there we go. "Remind me exactly what you said." Let's check these logs — yeah, here you see... oh, interesting, that doesn't look quite right; I'll need to look at why those are messed up. But you can see "write marketing copy," yep. So it's remembering what I said and pre-populating, but it's not adding "USER:" in — that's interesting. I bet the append of the user turn and the response should include that, but it's almost like it's not... oh, I know where the problem is: if we look at the logs, |
it's not saving the "USER:" prefix in the log, so when I save it here I need to prepend "USER:" to the dialogue. There we go. All right, we've got it working: Marcus remembers everything we've talked about. The conversation can get very long, though, so what I often do — and since Marcus has a memory now, we can shorten the live context — is: if len(conversation) > 20, conversation.pop(0). I think that's all you need; it just removes the first element. And there you have it: this technique can be used, very simply, to give any chatbot a long-term memory, and it's brain-dead simple. So yeah, we'll call it a day. This is a marketing bot with long-term memory, and of course you can change the prompt to make it whatever chatbot you want — it'll still have a long-term memory. There are all sorts of other things you can build in here; check out my DALL-E chatbot, which has a separate function where the bot generates image prompts for you. Anyway, that's another video, another time. Thanks for watching — have a good one. |
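The whole pattern from this video — log every exchange with its embedding, then inject the most similar past exchanges into the prompt — fits in a short sketch. The `embed` function below is a hypothetical letter-frequency stand-in so the example is self-contained; in the video the real encoder is the 512-dimensional Universal Sentence Encoder, and the GPT-3 call itself is omitted here.

```python
# Sketch of the long-term-memory pattern built in the video: save every
# exchange with its embedding, then retrieve the most similar past exchanges
# to inject into the prompt. embed() is a toy placeholder for a real encoder.
import json
import os
import time

def embed(text):
    """Placeholder: unit-normalized letter-frequency vector. Swap in a real sentence encoder."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    norm = sum(x * x for x in vec) ** 0.5 or 1.0
    return [x / norm for x in vec]

def save_log(log_dir, speaker, text):
    """Persist one turn of dialogue as JSON, with its embedding, like the video's save_log."""
    os.makedirs(log_dir, exist_ok=True)
    entry = {"dialog": f"{speaker}: {text}", "vector": embed(text)}
    path = os.path.join(log_dir, f"{time.time_ns()}.json")
    with open(path, "w") as f:
        json.dump(entry, f)

def similar_logs(log_dir, vector, k=10):
    """Rank stored exchanges by dot product against the query vector; skip exact self-matches."""
    results = []
    for name in os.listdir(log_dir):
        with open(os.path.join(log_dir, name)) as f:
            entry = json.load(f)
        score = sum(a * b for a, b in zip(vector, entry["vector"]))
        if score >= 1.0:  # the log you just wrote always matches itself perfectly
            continue
        results.append({**entry, "score": score})
    results.sort(key=lambda e: e["score"], reverse=True)
    return [e["dialog"] for e in results[:k]]
```

Each turn, you would call save_log for both sides of the exchange and prepend '\n\n'.join(similar_logs(...)) to the prompt block — the same flow the video wires up around its GPT-3 loop. |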
Support me on Patreon so that I can do this full time and keep the content ad free! <a href="https://www.patreon.com/daveshap">https://www.patreon.com/daveshap</a> |
Hi David, I discovered your channel watching an interview by Bakz T. Future.<br><br>I enjoy all your suggestions/'prompts' about using prompt-matrioska design :-). <br>Seriously, I appreciate your creative/smart deep dives into how to instruct the system to follow your paths. Great stuff! <br><br>Back to this video/experiment: I watched with pleasure your demo of the 'Marketing Aid Chatbot with Long Term Memory'. <br><br>Could you maybe share the corresponding code alongside this video, maybe by publishing the Python program on your GitHub repo? <br>It seems there is no trace of this code there, even though that experiment is probably pretty similar to other code you already shared, like the code in the LongtermChatExternalSources repo. But for beginners it would be very useful to support the video with the demo code.<br><br>Anyway, thanks for all your videos, absolutely inspiring!, Thanks! Unfortunately, if I haven't put any code back up, it's not going back up for various reasons. Most of my videos and code are back up, and this is how it will be |
David: Give me SEO magic!<br>AI: There are no shortcuts. You gotta do the work. |
Great video, I like the way you code and explain in parallel! One question: would the chatbot access and use its memory (chat history) automatically, or do you need to remind it to do so? |
Excellent videos. I like raw videos of you making errors and seeing how you go about solving them; this, to me, is the real gold. |
It would be interesting to be able to parse relevant business info (like phone number, email address, business hours and related stuff) from unstructured web text (raw HTML from a GET request).<br>This would enable really powerful unsupervised chatbots for e-commerce websites and other businesses, Google does that with BERT, @David Shapiro ~ AI yeah, I meant some kind of crawler that can autonomously find answers to more complex questions. For example, on an e-commerce website about workout equipment, the user might ask something like "I have sore muscles after workout, what can you recommend?" The bot might query all the product pages and find the product whose description best matches the question. This is just an example, but I'm talking basically about a deeper version of a chatbot that truly feels like a retail assistant😊, Oh that's trivial. Super easy. Google already does that to populate Google Maps. |
Hi! Great stuff! Just getting into GPT-3. When using the API, do you have to send all the text (the full conversation + the pre-prompt) back to the server every time?, yes, it has no long term memory, you have to do that manually |
Watch these at ~1.75× speed; some of the debugging time is useful, and the overall content is awesome |
Any reason for no Copilot and no VSCode? Copilot literally speeds up my development process by 40%, I'd say. I've gotten so used to it that I've practically mastered predicting whether Copilot will autocomplete with what I want, and I've mastered the latency so I can press Tab almost instantly for it to autocomplete. Plus there are so many other useful extensions on VSCode. I have my editor set up with the GitHub color scheme because I noticed GitHub's color scheme looked a little odd to me; now I am normalized to it., If CODEX or COPILOT had a plugin for NPP I would give it a try, @IamVeryBigSad I know npp, I used to use it as well, @Entropy how can one know vscode but not know notepad++?, @David Shapiro ~ AI OG, @David Shapiro ~ AI yeah, copilot probably is a bad option for videos, though I do wonder if people would subconsciously be more "familiar" (for lack of a better term) with VSCode |
With videos like this where I essentially understand the concept and code, and I'm just hearing you implement it, I just listen to it in the background on 1x speed |
Hey everyone, David Shapiro here for a quick video. Today I had someone message me, I think on Twitter or LinkedIn, asking if I could help with a cover letter generator, and I was like, no, I don't really do consulting anymore. But then I was looking through my video history and saw that my email generator was very popular. The email generator (you can go look at the code or the video) is pretty straightforward: it takes a blob of text and writes an email. It can take a chat log and convert it into an email, it can take a rant and turn it into an email, and so on. So I wanted to try this: that was a popular video, and resumes, cover letters... nobody knows how to write a cover letter, right? You hardly ever do it. At first I wondered, is this unethical? A cover letter is supposed to be written by you. But no: people go to career coaches, people have mentors. My fiancée has a friend who is a professional career coach who helps with cover letters, and another friend actually gave me advice on my resume. So it's just another tool; I don't think there's anything unethical about using a machine to help you generate this.

So here we go. The prompt I wrote, first attempt, no changes: "The following is a conversation between a career coach and a job seeker. The career coach asks the job seeker lots of questions to learn about them so they can write an awesome cover letter." So it states what's going on and the goal. "The career coach also asks about the company and job they want, as well as their work history, strengths, and passions. Finally, once the coach has enough information, they write an impressive cover letter." All I started with was "Hey there, what's your name?" And this was an actual generation; I posted it on Twitter because I was like, well, that was easy. "My name is Dave." "Hi Dave, tell me a little bit about the company and job you're interested in." Since I grew up watching Wile E. Coyote and the Road Runner, I said Acme Dynamite: "I want to work at Acme Dynamite as a product specialist. I love blowing stuff up." "That sounds like a really fun job. What are some of your strengths?" "I freaking love dynamite. I know everything there is to know about it. Did you know the largest non-nuclear explosion ever created was an underwater demolition project?" "No, I didn't know that. That's really interesting. What is your work history like?" "Well, I used to be an ordnance tech with the US Army." None of this is actually true about me. "That's great. What did you do in that role?" "Mostly I set demo charges to dispose of failed ordnance, and demolished structures." "That sounds like a really exciting job. I'm sure you have lots of experience with dynamite. What are you passionate about?" "Dynamite." "I think I have enough information." It ended there, and I added "COVER LETTER:", which just tells the machine: okay, time to write the cover letter.

And this is what it wrote: "To whom it may concern, I am writing in regards to the open product specialist position at Acme Dynamite. I'm immensely excited about the prospect of working for such a fun and innovative company. I bring with me a wealth of experience with dynamite, both in my previous role as an ordnance tech with the US Army and in my personal life. I am passionate about dynamite and I firmly believe that I have the skills and knowledge to excel in this role." Now, this is kind of generic, kind of boilerplate, so you could probably do some fine-tuning to make a more robust cover letter, but for just off-the-cuff prompt engineering I'm really impressed. "In addition to my experience with dynamite, I am also a highly motivated and detail-oriented individual. I'm confident that I can be an asset to the Acme Dynamite team and I look forward to contributing to your success. Thank you for your consideration. I look forward to hearing from you soon."

All right, let's do this again. SEEKER: "Hi there, what's your name?" "Wile E. Coyote." And now stop, that's not what you're supposed to do; why didn't it work this time? I forgot to add COACH and SEEKER as stop sequences; without them it will just go off and run, like you just saw. "Nice to meet you, Wile E. What company are you looking to work for?" "I want to work for the Eat the Road Runner Foundation." COACH: "Okay, great. What job are you looking for?" SEEKER: "I want to eat..." I'm definitely aging myself. "I see. Well, what are your strengths?" SEEKER: "I'm very hungry and motivated to catch the Road Runner. I excel at designing and setting traps, such as by using rockets, cages, and automated mallets." COACH: "Those are great strengths. What are you passionate about?" "Road Runner flesh." I'm having way too much fun with this. COACH: "Okay, I think I have enough information to write an impressive cover letter." Okay: COVER LETTER. Let's see what it writes. "To whom it may concern... I am confident that I am the perfect candidate for this job, as I have a great deal of experience in hunting and trapping road runners. My strengths include my hunger and motivation, as well as my creativity in designing traps. I'm passionate about eating road runners and I'm confident that I can be an asset to your organization. Thank you for your time and consideration. Wile E. Coyote."

Okay, so this clearly works. Let me do a time check: this is only six minutes. I don't think I'm going to do this as a fine-tuning project; honestly, I just did this because I'm waiting for other fine-tuning jobs to finish. But this worked rather well, so I'll save this as another demo, demo two. This will be up in the GitHub repo under Cover Letter Generator, so you can check these prompts if you want. I might revisit it as a fine-tuning job, but here's the chief advantage: this prompt gives you the structure, because as a normal job seeker you might not know what to include, and the prompt walks you through the process. I'm sure someone with a little web experience could build this into a quick UI. And then you could even do a second prompt... actually, let's do that.

Okay, so: "Given the following chat log, write an impressive cover letter. The cover letter should be bold, impactful, and professional. Include details such as strengths and passions, but also address the company directly." I did a little bit of research for this. Harvard Business Review has pieces on how to write a cover letter: think about the future, strong opening, emphasize your personal value. And the model kind of already knows all this; I wouldn't be surprised if GPT-3 has already read those articles, especially the older ones. You can also look at examples, and so on. Anyway, let's plug this into the Playground and see if it gives me anything different. "To whom it may concern..." It's a little bit better. Let's run this again: "I am writing in regards to the open position... as someone with an extensive background in hunting and trapping..." So this is a little more professional; it might be beneficial to split it up into multiple prompts. Fat-fingered that real hard [laughter]. And it looks like it'll generate a similar format. Okay, cool. So I'll save this as the cover letter prompt, and then I'll come in here and rename: one prompt for the cover letter and one prompt for the coach chat, and I'll just leave the demo up there as an example. So basically, splitting it into two (sorry, let me zoom in a little), we'll have one prompt to generate the chat and then another prompt to actually generate the cover letter. If you want to play with this, you can: cd into the cover letter folder, git status, git add, git commit, and git push. So you can play with this yourself. If someone does build this into a UI, please send it to me and I will feature you on my channel or Twitter or whatever. I think you could just build a basic website that does this and you've got instant value. All right, thanks for watching, like and subscribe, and consider supporting me on patreon |
Please consider supporting me on Patreon: <a href="https://www.patreon.com/daveshap">https://www.patreon.com/daveshap</a> |
I think having a chat bot that walks the user through some exploratory steps is a critical piece of intelligent generation. That said, in the blog writing video that I just posted, I assumed that I was going to have to do some prompt chaining to get something reasonable. Not so much! Lots of room for improvement, but instruct kind of crushed it with the most basic prompts. |
GitHub repo: <a href="https://github.com/daveshap/CoverLetterGenerator">https://github.com/daveshap/CoverLetterGenerator</a> |
what's up everybody david shapiro here for another video before i get started i wanted to um just show you guys some numbers um and this is just i'm still in awe i don't understand why this is happening but it is so like view count is going up exponentially watch time is going up exponentially and subscribers are going up exponentially so whatever i'm doing you guys like it so i'm gonna keep doing it um also audience um there was let's see where is it uh age and gender okay so this is fascinating to me um it's roughly two-thirds male one-third female it used to be a little bit more balanced but it's becoming slightly less balanced but most of the audience is younger than me i'm 36 and so most of y'all are younger than me so it's like this is great this is exactly the people i wanted to reach why because y'all are the next generation whatever i don't finish you all have to pick up the torch and carry so anyways with that being said we are going to finish our um our creative writing coach today so i started this um because this was an idea from reddit where someone said hey it would be really great if you could have a a chat bot or or something that provides like professional feedback about like tone and style not just like um not just like correcting grammar and stuff and i was like got you got you covered so let's do a quick review about what we did last time um here let me just go to the folder of our creative writing coach okay so the first thing that i did was you need data right you need data when you're when you're doing this kind of thing you can generate synthetic data you could ask gpthree to generate um lots of uh lots of uh like fake stories and you could even ask it to do errors um and uh there's a problem with this and i don't fully understand the math but part of the problem is that these large language models tend to kind of go towards the average and so like and people on the on the on the community forum have noticed this that it's like it's kind of 
the average of what humans produce but what gpt3 produces is still slightly inhuman and so you're going to get much more variance much more variability when you get data from real humans so i just wrote this quick script to download a bunch of stories um from uh from reddit because there is a subreddit called writing prompts actually here let me just show you reddit.com slash r slash writing prompts there we go okay so let me show you what this what this looks like in case i don't know if i showed you last time so basically what someone does is they post a writing prompt and then all the top level comments except for the first one are stories and they they're they vary in length some of them are just short poems some of them are like novellas basically um and so this is a great place to get training data for this so what i did was i wrote a script that um that gets all the that uh well first it gets all the top posts from that subreddit so it says get the top for the last month so this little string here it says get everything for the last month but put it in json format so it's data um so this is what it looks like but if you remove the json part so the dot json you remove that and it's just t month this says okay top of everything for the last month so i did that because the top ones are going to have more responses and so you see like you know there's 193 comments 162 comments but we only wanted the top level comments so we get the top posts of the last month so that's this page and then from then from within each of those um we get the comments and so what that did what that does is we say okay so comments we get the first layer of children and then we say for comment and comments we get the data and the body of that comment so we didn't recurse through because we don't want all replies we only wanted the top level replies so that's like um it got it would get the bot so we'd get rid of the bot um and but it would get like this one which is a story it would get 
this one which is a story so on and so forth so that's what we did so we downloaded um after cleaning it up 388 stories of varying lengths excuse me the longest one is i think i already deleted some of the ones that were too long there was ones that were like 18 kilobytes but this one is 10 000 characters long so that's um that's that's probably a little too long let's see how many tokens that turns out to be so 10 000 characters almost exactly 2 300 tokens okay so that can fit in the current da vinci model the text o2 because the maximum length is four thousand tokens but yeah so it's it's roughly four to one because ten thousand um ten thousand uh characters is twenty three hundred tokens um okay so i'll update my my ratio because i had it as three to one so it's roughly four to one okay so that's where we're at um that's what we did last time let's see the other thing that we did was i started working on the prompt so um let's see we had what was prepare prompts generate completions i don't think i got this far yet yeah prompts and whoops ah come back slow down and completions okay we did try and generate some completions but i didn't like them that was the problem okay um yeah so that this was as far as we got last time where um now now that i'm up to speed let me show you um so here's a short script that just you know generates the completions excuse me um and so i ran a few and i was i was kind of not happy with it because it tended to um to give uh i mean the advice was good enough but it tended to give it in a list and i'm like i don't really like that like um i want i want to see my feedback um written like you know uh written more like a paragraph um so let's say write one or two paragraphs um okay so we're at the point of prompt engineering um so what i've written here is adopt the persona because once you tell gpthree what it is you say like you know i am this um adopt the persona of a professional creative writing editor we can probably even shorten 
this and simplify it and just say i am a professional creative writing editor read the following story and provide me okay because i wanted to write you so um yeah actually i think we do have to stay adopt the persona of because what another problem with the feedback that it gave is that it says um uh where it kind of talks about the author and third person but i realized if you're using this tool you want it to be giving feedback to you where it's like it's not talking about your work kind of in an abstract third-party way you want it to be saying you did this do this better right so i want it to give feedback to me so we have to be you have to be very mindful of point of view so as a as a fiction writer like i'm very aware of point of view but also understanding point of view and how large language models handle point of view is really critical for developing acogs artificial cognitive entities so we'll get into that in another video but point of view critical so adopt the persona of a professional creative writing editor so that's that's saying that's the model now thinks okay i am a creative writing editor right so that's its point of view read the following story and provide me detailed feedback so it says this is what i am and i'm talking to you so you're framing the whole conversation to improve the pros hold me to the highest literary standards feedback should be let's see your feedback should be open-ended and include examples or suggestions and then i added some framing here the reason that i did this is because sometimes it got confused and would just continue writing the story um so i like add story like story starts here then you add the story and then end story um and then i added a bunch of white space actually let's just do two um because if you if you use just a single line of white space sometimes that just looks like a paragraph break but if you do like two or three that that clearly signifies like in your brain it says oh this is a whole new 
section but also gpt3 learns that that means it's a new section as well because from from uh from the large language models perspective it doesn't actually see space it just sees characters so it sees that it sees slash n slash n slash n or sometimes it's going to be slash um rn slash r slash n which slash r is carriage return so that brings the cursor back to the beginning and then n slash n is newline um and so if it sees three slash ends it says okay this is this is like a whole break whereas if it just sees two it's like oh that's just a new line but in this case when i have three vertical white spaces that's actually four four new lines total um so that just kind of gives it says okay this is a new section what am i going to do okay so then the final instruction one thing that i found for especially for these instruct series prompts is that if you if you give it the instructions at the beginning and then you reminded of the instructions at the end you tend to get really good results so you'll notice that this is my standard format where it's like i kind of give it the framing this is what you're about to read this is a story this is what i want you to do so because what that does is it primes the model because there's there the model so in case you don't know this about gpt 3 and large language models they have an internal state that internal state is cued up by by the prompt and so that internal state is represented by an embedding and an embedding is well one way to represent it is um as a vector which is uh embeddings are vectors it's just a particular kind of vector it's a long series of numbers and so what you're doing is you're you're charging it up you're priming it to have the correct internal embedding and it's just the same as what happens to a human like if i give you instructions i say imagine that you're making a peanut butter and jelly sandwich i gave you instructions and now you have an internal state in your head and so gpt3 is no different so 
we have to have a theory of mind to understand gpt three's mind right and because humans have a theory of mind we can anthropomorphize large language models because it's like okay imagine that you just randomly grab someone off the street and you're giving them instructions that's how you have to write gpt3 prompts okay so now that i've explained why it looks the way that it does let's give this a quick test so let's delete these they're already saved in github so i don't mind deleting them um and they weren't the best anyways um okay so then generate completions so basically oh and one other thing that i need to i need to share is um what i've started doing is breaking up the process into smaller and smaller steps and so the prompts i preload the prompts so that i can just feed them into gpt3 one at a time later and uh and so that's that's fine um let's see so but then we need to also prepare prompts um so by breaking it into smaller steps um i'm just saying like okay read these make it and then i added this little bit here where it will um one thing that i noticed that the the if the story was too long and it was cut off the um and but the uh but the the my my creative writing coach didn't realize that it said this story ended abruptly and i'm like oh yeah that's because it got cut off so if we add something to say like story truncated due to length the creative writing editor should know like oh okay i didn't get the whole story but what i did get is you know good um okay so let's run that real quick cd and we're going to the creative writing coach python prepare prompts this just runs in a second or two so let's go to our prompts folder you can see these were just updated oh sorry my dog's outside i need to let him in be right back okay and we're back sorry about that um okay so oh yeah we just prepared the prompts here let me close some of these extraneous tabs uh go away cancel do not save okay too much noise all right prompts so now we've updated the prompt 
where it says adopt the persona of our professional creative uh writing editors so on and so forth your feedback should be open-ended and include examples or suggestions um excuse me now give me detailed professional editor feedback um with suggestions and examples to improve pros write one or two paragraphs so i made a joke in youtube comments that that basically you have to use neuro-linguistic programming with gpt-3 so neuro-linguistic programming was invented in what the 60s by a psychiatrist and it was basically like the way that you speak and the way that you frame things to yourself will change the way your brain works um and that in in terms of humans it has been largely discredited the most popular proponent of neuro-linguistic programming is tony robbins who has been on the ted stage so make of that what you will um but uh basically like if you say i am an expert rather than i am an idiot right however you frame something might change your cognition um but the the funny thing is is how you frame something absolutely changes the way that gpt3 thinks about it because if you say if you if i open this with like i am an idiot like gpt3 will act like an idiot if you say i am an expert i am a creative writing expert it will act like it um so actually that makes i wonder if we should update the prompt again now that i'm talking through this um i'm telling it what to do okay um yeah cause it's so it's hard because if you like it's a matter of who is who is me right if you use i and me um but i say i'm i'm telling it give me feedback so um yeah so i'm telling you what it is but i'm wondering if it will be better i need to read the following and give you feedback i think i think there's a better way of framing this let's say i am a creative writing expert um i have been a professional editor for 309 years 30 years um i am going to read the following short story and provide you um detailed feedback to improve your pros i will hold you to the highest literary 
standards and my feedback will be open-ended um and include examples or suggestions okay so by framing it this way because remember gpt3 was trained on writing um so i'm basically i'm just saying like okay imagine that a helpful redditor is writing this so if a redditor just like introduces himself i will read your work and i will provide your feedback so this is what they might say so i'm wondering if framing it this way will be even better okay so then because also this we're cultivating in this it is adopting an identity i'm not telling it to adopt an identity i'm just i just through neuro-linguistic programming this is what i am this is what i have done okay let's see i will now give you several one or two paragraphs of detailed feedback to improve your pros and style okay this is cool i wonder if let's let's see how this works i've never written a prompt quite like this but now that i'm talking through it i think that this will be pretty good um okay so let's do the prepare prompts again um let's make sure that they got updated with the new format i'm a creative writing expert excellent i will now give you one or two paragraphs of detailed feedback to improve your pros and style okay so it should it should talk to you right um no don't uh go away whoops don't save okay so now now that we've got that we're gonna do python and generate so we're going to generate some completions and we'll see how it goes and i'll just pop them open as it's running i'll let it generate like five and then i'll cancel it okay it's still doing the list sometimes okay so one thing about the instruct series is that they are they're very finely crafted to do lists they're really good at generating lists i really wish that it wouldn't do that though because it looks like two of these three were lists yeah come on i said generate paragraphs okay this one is good one issue i see in this story so this is this is the feedback that it would give you so um to the dude who our girl who asked 
for this on reddit this is what we're aiming for one issue i see in this story is that there is a lot of telling rather than showing for example in the second paragraph the author tells us that kate is considering abandoning her inheritance rather than showing us through her her actions or thoughts in the third paragraph the author tells us that the factory is full of shadows rather than showing us through description oh that's good that's really good that i mean i'm a member of a writing group of a feedback group this is perfect maybe i'll maybe maybe what i'll do is i'll just i'll uh i'll i'll i'll take my my group members stories and plug it into this and just read this to them another issue is that the story is somewhat choppy and disjointed there is a lot of description but it is often abrupt and does not flow smoothly from one sentence to the next finally i would recommend using more active uh and precise language throughout the story for example rather than saying kate fumbled with her phone for a flashlight the author could say kate search for search your phone for a flashlight um okay either way this would make the writing more lively and engaging that's possible okay you have great opening sentence that really draws the reader in i love the description of the statue turning to rust it's really evocative and sets the scene well the dialogue between the entity and cyrus is great it's chilling and really gets the reader invested in the story the final paragraph is a great way to end the story um okay so we want it to be more critique um that's good you you do want to provide positive feedback um okay so we need we need to do a little bit more prompt engineering um okay so let's open the prompt um i will now give you one or two paragraphs of detailed feedback i wonder if it's the detailed feedback uh let's so so let's change um detail to critical because i i suspect that that keyword detailed because when you ask for details it's like oh i need to list it out 
So if I'm saying "one or two paragraphs of critical feedback to improve your prose and style," let's do this. I also might switch to an older model, because the original instruct beta tends to be a little better with creativity. Let's try this, and if it does the lists again, we'll go back to instruct beta and see how that works. Wait, cancel: I need to re-run the prompt prep first. `cd` into creative writing coach, `python prepare_prompts.py`, check to make sure the prompts look right, delete the old completions. "I will provide you critical feedback"? Yes, critical. Okay, cool. Then `python generate_completions.py`.

Okay, it's still doing lists; let's give it a couple more completions. It is super fixated on lists, but the feedback is much more detailed. It seems to be bouncing back: it's still giving me numbered lists, but they're in more complete paragraphs. Let's take a look at these completions. So that's a paragraph, that's a paragraph... okay, these are just numbered paragraphs. It's kind of gross, but we'll go with it. And this one about Kate: be vivid and detailed, show us, show us, show us. So this feedback is very consistent for this particular story.

Oh, I thought of something else. One thing you're supposed to do with feedback, especially creative writing feedback, because it's so personal, is the compliment sandwich: you open with a compliment and close with a compliment. It's basically "this is what you did well," then you deliver the harsh part in the middle, and then you close with "but also, I really liked it." So let me add that to the prompt: "I will provide you critical feedback... I will also commend you where you did well." That should be good. "I will now give you one or
two paragraphs of critical feedback to improve your prose and style. I will also use the compliment sandwich method of feedback." Okay, let's see if it can follow that. So we `cd` in (whoops, not that one), run `python prepare_prompts.py`, then `python generate_completions.py`. First, let's delete the old completions and clear those out. It would overwrite them anyway, because all the script does is give each completion the same file name as its prompt, but I want to delete them so I don't accidentally get confused. All right, make sure the prompt says "I will also commend you where you did well" (compliment sandwich), because we want this to be good. All right, generate completions.

While this is running: a cardinal rule of thumb with automation (oh, that's good, that looks good) is that it might feel tedious to get into the details and make sure everything is perfect, but once you do automation right, it doesn't matter whether you're doing it 100 times or 10 million times. If your automation is correct, it's infinitely reproducible. That's why I'm so meticulous in these videos, and why you'll notice most of the time is spent laying the groundwork: once you get the right prompts and the right data, that's most of the battle won. After that you just let it run, then fine-tune your model, and it's good.

Okay, I think we've nailed it. I think we're three for three. This was the prompt, so we can close that one. Let's see: "The first few sentences, for example the sentence 'still in the high of graduating together with masters' could be rewritten as 'still riding high...' The story has good potential, but there are a few areas that could be improved. For example, the dialogue could be more natural, less stiff. Additionally, the story could be
proofread for grammar errors. Overall this is a strong story with good potential." That's not too bad. "Very strong; immediately sets a tense and suspenseful tone. I love the descriptions. However, I felt the story lost a bit of momentum in the middle when Kate is exploring the factory; it might help to focus on one or two key images or scenes rather than trying to describe the whole factory in detail. The ending of the story is very effective, and the image of the door slamming shut is particularly chilling. I think you did a great job building suspense and creating a sense of unease in the reader." Yes.

So the reason you also want to commend an author for what they did well is that it's not always obvious what they did well. When you tell someone "you did this well," it tells them: keep that, don't throw that out, but fix this other thing. So you have to label the feedback as both good and bad.

"The first thing I noticed was that your story has excellent potential, but there are a few areas that could use improvement. For example, your use of description could be more concise and vivid. In particular, I would recommend using more concrete images and specific details to bring your setting and characters to life. Additionally, your dialogue could be strengthened by adding more natural-sounding conversation and making sure that each character's voice is distinct" (that's a big problem a lot of authors have, including myself) "Overall, I think that if you focus on these areas, your stories will be even more engaging and enjoyable to read." Perfect. Okay, I think this is good; we're three for three. I like this, so let me do a quick git commit to preserve everything we've got: `git status`, `git add .`, `git commit -am "really in love with this prompt and results, running the full thing now"`, `git push`. And now let's run `python generate_completions.py`, so we'll let that run in the
background. There's another script I need to write, or rather fix, which prepares the JSONL, so I'll do that while the rest is running. So, `for file in files:`... I actually got this wrong, because instead of the prompts I need the stories. We'll have the story and the completion. So, clarifying: open the story file to get the story, then the prompt is the story plus two newlines, and then what I usually do is add the demarcation tag in all caps, because then it's very obvious that this is the end. So `PROFESSIONAL FEEDBACK:` plus a space, and then the completion is a space plus the completion text. I think that should be it: `creativewritingcoach.jsonl`. Okay, let's make sure the generation is still running. Yes, good.

Then the last thing we'll need is the actual fine-tune script, so let me copy my fine-tune script from another project: `fine_tune.py` (whoops, not recursive summarizer; creative writing coach). We'll edit it with Notepad++ and change the file name to `creativewritingcoach.jsonl`. Okay, that should be good.

This is going to be a lot of data. One thing that might happen is that this bombs on some of the inputs. It shouldn't, because I have the prompt limiting the length: if you look here in the prompts, the maximum size is six kilobytes, which is about six thousand characters, so it's very much constrained. One of the biggest risks is that an input that's too long can cause a failure. And what I don't have in my script right now (I should add this) is handling for a bad output or a bad response: this script will bomb in the middle, and
then I'm going to need to write something to skip the ones that are already done, because you don't want to regenerate everything. Well, no: I do have a retry mechanism in the GPT-3 completion function. Where is it? In generate completions, right here, where I have `while True:` with `try`/`except`. This will catch some errors, so it won't bomb outright, but if the prompt is just too long it'll fail five times in a row and then exit that loop. Any formatting issues that cause errors in file handling will also make it bomb, but so far so good. This will take a little while to finish; we've got 388 stories to do, and we're sitting pretty at 19. So I'm going to pause the video, and we'll come back once it's time to do the data prep and fine-tuning.

Okay, not all is well in the valley. I'm watching it run, and I noticed that some of the completions are big. Most of them are one kilobyte; take a look at one: "Your story has a lot of potential, but there are some areas that could use some improvement," etc. That's fine. But let's look at the bigger ones. The first one, at two kilobytes, says "The compliment sandwich is a method of..." Okay, I didn't ask it to tell me what the compliment sandwich was. In this case, all we have to do is delete what we don't want to see. That's fine. This one, "The opening of your story..." is fine. "First I will commend you on something you did well in the story"? Yeah, we already did that. I think I know what I can do, though: if I change the prompt to end with an explicit "FEEDBACK:" marker, that should fix it, because now it's just saying, okay, this is what I'm going to do: feedback. And the reason I can do that is deliberate; it's how my scripts are structured.
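The retry loop described here (catch errors, pause, give up after a fixed number of attempts) can be sketched like this. The function names are assumptions, and `api_call` is injected in place of the real OpenAI completion call so the wrapper can be shown and tested without the API:

```python
import time

def completion_with_retry(api_call, prompt, max_retries=5, delay=1.0):
    # Wrap an unreliable API call in a bounded retry loop, as described
    # in the video: retry on any error, but give up after max_retries
    # so a prompt that is simply too long doesn't loop forever.
    retries = 0
    while True:
        try:
            return api_call(prompt)
        except Exception as exc:
            retries += 1
            if retries >= max_retries:
                # Surface the error instead of hiding it -- errors are
                # there for a reason and you want to see them.
                raise RuntimeError(f"completion failed after {retries} tries: {exc}")
            time.sleep(delay)  # brief pause before retrying
```

In the video, the wrapped call is an OpenAI completion request; here it is parameterized so the retry behavior stands on its own.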
For each file, it opens the prompt... oh wait, no, that's not going to work, because I separated the process. Darn. Actually, I know what I can do. I updated that, so we can get sneaky: `cd` into creative writing coach, `python prepare_prompts.py`, and now all the prompts are updated behind the scenes. There we go; that should be fixed. If you're in the middle of a long run and you see some issues, you don't necessarily want to interrupt it: if you only have five or ten things to fix before it finishes doing 300, you don't have to stop and restart. So I got that fixed. All I did was change the end of the prompt, because what the model was doing was keying off the instruction and just continuing to explain it. With only the colon, it didn't realize "feedback starts here," but now I've made it explicit: feedback starts here; this is what I'm going to do, feedback.

Okay, so get rid of that one. This is an example of a good one: "You did a good job of setting the scene... one suggestion..." "The opening of your story is strong": that's good. "First I want to commend you": okay, those are ones I had already fixed. So let's go back to completions, sort by size, and open all of them that are two kilobytes or larger, because those are the ones most likely to be problematic. "The opening of your story is good": fine. "The opening is strong": good. This one added some dashes; don't need that. This next one is an example of what I mentioned, where it just continued the story for some reason. I have no idea why that happened; GPT-3 just came up with its own idea. So for these ones, "without even the smallest nagging doubt, the former heart of industry seems so fragile in the dark"... oh, okay, wait: "This is a great opening line. The metal, once revered as the backbone of
American society... this is great description of the setting." Hmm, okay, this one is copying the story too much. I don't want it to call out each line; what it's probably doing is copying the style of Reddit feedback, where you quote something and respond to it directly. We don't want that style, though, so we'll hold this one open, because we're going to change it. "The treaty was a fragile one": yep, this one is just repeating the story back to us. This one wrote the story out again, so it's also the quoting thing. "Positive: I like the opening": yep. Okay, so we don't want this file. The compliment-sandwich explainer is easy; we'll just delete it. "The opening paragraph is very evocative; it sets the scene well. Overall this is a strong story, but with a few revisions it could be even better." "The opening scene is very effective." "The opening paragraph is well written." In general, okay. I don't like the format of this one. I guess we can close this one since it's good; that's right, I need to close them if they're correct. "Your opening paragraph is good." "Overall, interesting characters." So basically I'm going to keep open only the ones that need to be redone. That one's good, and that one's good. Okay, so we've got one, two, three, four, five, six, seven that aren't good. When you have this much data, you often don't need to fix them; you just delete the bad ones. So what I'll do is wait for this to be completely finished.

We're at 107 out of 388. Let's see how much this is costing, because some of you have had questions about token cost. Each of these completions: 25 cents, 20 cents, 22, 36... this is actually going to be pretty expensive. Okay, that's fine. How much am I at already? Ow. This is going to be a really expensive job. This is why I was waiting: I started this project a while ago and I was about out of tokens for the month, so I said I'm gonna wait
till June. So we're on June 1st, and yeah, I'm burning tokens. On average, for about the first 90 completions, 90 cost $6.23. So 388 divided by 90, times $6.23: this is going to be about $27 just to generate the fine-tuning data. This is another reason I truncated on length: I don't want to spend that much money. But it'll be good; we'll get good results this way.

Also, a 1.2-megabyte fine-tuning job was like $40, and this is probably going to be bigger, so this is going to hurt. How much is this already? Oh, that's not so bad; okay, maybe this will be a smaller fine-tuning job. The novel-writing one was a killer, because there were 200 samples and each prompt was really long. Let me show you how big that was: AutoMuse. The data there totals 1.3 megabytes, last edited May 23rd. So let's go to the usage page (I'm just doing this while it's running and answering questions you guys have had). Let's see: May 23rd, fine-tuning, one request. The novel writer, at 1.3 megabytes, was a $38.61 fine-tune job. That's just so you know how much this costs.

Now let's see how much data we're at for the creative writing coach: 128 items, at 115 kilobytes. So 388 divided by 128 is almost exactly three, meaning we're roughly a third done; times 115 kilobytes, that'll be about 348 kilobytes total if it averages out. The novel one was about 1,300 kilobytes, and 348 divided by 1,300 is just over a quarter, so we multiply that by the $38.61, and we'd expect this fine-tune job to come in around $10.
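The back-of-envelope cost math above can be checked directly. The dollar figures are the ones quoted in the video, not current OpenAI pricing:

```python
# Figures quoted in the video -- not current pricing.
stories_total = 388
stories_done = 90
cost_so_far = 6.23          # dollars for the first ~90 completions

# Extrapolate generation cost linearly.
est_generation_cost = stories_total / stories_done * cost_so_far

# Fine-tune cost scaled from a prior job: 1.3 MB of data cost $38.61.
est_data_kb = 388 / 128 * 115           # ~348 KB if file sizes average out
est_finetune_cost = est_data_kb / 1300 * 38.61

print(round(est_generation_cost))       # roughly 27 dollars to generate data
print(round(est_finetune_cost))         # roughly 10 dollars for the fine-tune
```

So the synthetic-data generation dominates the cost, which matches the conclusion drawn in the video.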
So generating the training data is very expensive, but hopefully the fine-tune job itself will be much cheaper. I could be wrong about all this; we'll see how it turns out. Anyway, we're about a third of the way done, so I'll pause the video again.

Quick refresh: you see there haven't been too many more additional large completions. "You have a great imagination": good. I don't like that one, or that one. All right, so some of these are salvageable; we just need to clean them up a little. Once we do that, we'll delete the ones we don't like, because again, we've got a plethora of data. If something's not good, just delete it, because we want that consistency. One thing about fine-tuning is that it can smooth out rough edges: it takes the average, so a handful of aberrations in the fine-tuning data is okay. That's why I often don't check the fine-tuning data, but in this case I want to show you that this is one thing you can do to clean it up and get more consistent responses. Okay, I'm going to pause the video now, and we'll be back in just a second.

Okay, we're back. I chickened out because this was getting expensive, but we do have 202 completions, and I've already taken the liberty of deleting the ones that were too long or didn't follow the correct format. If you're familiar with fine-tuning, you might ask: Dave, if you're just going to use one prompt to generate all of your data, what's the point of fine-tuning? The point is that you can look for those aberrations, remove them, and clean up the formatting, because the purpose of fine-tuning is to get very consistent results. Fine-tuning usually reduces creativity, but it
increases consistency. You can also embed multiple prompts: in other fine-tuning experiments you'll notice I use several kinds of data or several kinds of prompts in a single dataset. So fine-tuning can let you combine different tasks and data types into a single model, or it can give you very consistent results, essentially the same behavior every time.

So we've got 202 examples; the biggest is three kilobytes, and most are one kilobyte. Let's just grab the top several. "Your opening paragraph is intriguing": fine. "First I want to commend you": great; we'll remove the stray marker since it doesn't need it. "The first thing I noticed is that you have a lot of interesting ideas": great. "The opening sentence is great." "The opening is very strong." "The opening is well written." "I enjoyed reading your story." "Very strong." "Effective in setting the scene." "Well written and engaging." "The opening paragraph is evocative": remove this artifact. "Overall this is an interesting story." "The start of the story is very strong." "Your story has potential." "The first thing I noticed: present tense throughout." Okay, cool, those all look good; let's get the next chunk.

Basically, what I'm doing is auditing the data. This is a technique in data science: you don't look at every single record; you just eyeball a sample to make sure they look fine. We're removing any artifacts we don't want, and again, this will fine-tune our model to be very consistent. "The opening of your story is very strong." "Your story is well plotted," etc. "You have a lot of potential." All right, I still found a couple, so basically I'm going to keep going through these chunks until I don't find any that need correction, and
it looks like maybe many of them do: "The opening...", "The beginning of your story...". What I might do is add a find-and-replace for this artifact, since it seems like it's going to keep popping up; I'll do the find-and-replace when I prepare the data. Yeah, it just keeps appearing. So here's how you handle a consistent artifact like that: in the format-JSONL step, when building each record, you say `completion = open_file(...).replace(artifact, '')` and then `.strip()`, which removes the artifact and any excess whitespace. That should clean it up, and I think we're ready to run this.

Let me jump back over here. Oh, one more thing in the loop: `for file in files:`. Here's what's going to happen. We're starting with the stories... actually, no, we should start the other way, with the completions, because the completions are a subset of the stories: we need to match each completion back to its story. So we get the list of completions, set `completion = ...`, and then `story = ` the stories file with the same name (we need to move this up). Since we've deleted some of the completions, and the completions are a subset of the stories, we know the story will be there; but if we enumerated all the stories, there'd be no guarantee the completion exists. So this should be right; it should not bomb out.

One rule of thumb: you shouldn't use `try`/`except` in Python to compensate for bad code or bad data. I use it because the API might be unreliable, but if you put a `try`/`except` here,
you might end up with an entire block of bad data. I've done that before. Errors are there for a reason, and you want to see them. If I'd done this wrong, if I hadn't swapped completions and stories, it would error out, and I want to see that, because I don't want to make assumptions about the quality of the data.

Okay, run the format-JSONL script. It did not bomb, so that's good. Let's look at our `creativewritingcoach.jsonl`: it's 800 kilobytes, so if I had let the run finish, it would have been almost twice as big, and this would have been like a $60 fine-tune. Why did it say "empty inside"? I wonder if that was the story. Okay, here's what we've got: there's the prompt, which just starts with the story (that's fine), and then at the end, newline, newline, "PROFESSIONAL FEEDBACK:", then the completion: "The opening of your story is very strong." That's exactly what we want to see. Next prompt: "A girls' trip. A trip to remember? That was an understatement." Let's search for "PROFESSIONAL FEEDBACK" to make it easier to see where the demarcation is. "The beginning of the story is very promising, and I like the idea of two friends going on a road trip; however, the story quickly loses momentum in the middle." Great. Find next: "You have a great eye for detail," etc. "The way you use rust as a metaphor for the decay of civilization: excellent."

The fact that GPT-3 can understand metaphors is remarkable, because neuroscientists don't even fully understand how human brains process metaphors. We've been able to recreate a quirk of biological intelligence without knowing how it works biologically, and without knowing how the machine recreates it. I just wanted to point that out for people who are suspicious of what GPT-3 is capable of: it's smarter than you think. Okay, the fine-tuning data looks good.
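A minimal sketch of the JSONL-prep step described above: iterate the completions (a subset of the stories), strip the recurring artifact, and join each story to its feedback with the all-caps demarcation tag. The directory layout and the artifact string are illustrative assumptions:

```python
import json
import os

# Hypothetical recurring artifact to strip -- the video's actual string differs.
ARTIFACT = "First, I will commend you on something you did well."

def build_record(story: str, completion: str) -> dict:
    # Prompt = story + blank line + all-caps demarcation tag;
    # completion gets the artifact removed and whitespace stripped.
    prompt = story.strip() + "\n\nPROFESSIONAL FEEDBACK:"
    completion = " " + completion.replace(ARTIFACT, "").strip()
    return {"prompt": prompt, "completion": completion}

def format_jsonl(completions_dir: str, stories_dir: str, out_path: str) -> int:
    # Iterate completions, not stories: completions are a subset, so every
    # completion is guaranteed a matching story file. No try/except to paper
    # over missing data -- errors are there for a reason.
    count = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for name in sorted(os.listdir(completions_dir)):
            with open(os.path.join(completions_dir, name), encoding="utf-8") as f:
                completion = f.read()
            with open(os.path.join(stories_dir, name), encoding="utf-8") as f:
                story = f.read()
            out.write(json.dumps(build_record(story, completion)) + "\n")
            count += 1
    return count
```

Iterating the smaller set and letting a missing story raise an error is the design choice argued for in the transcript: a loud failure beats silently emitting a block of bad training data.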
So I'm going to go ahead and run the fine-tune job, then we'll test it and call it a day. `python fine_tune.py`... do I not have the fine-tune script in here? I copied the wrong one from AutoMuse, I guess. There we go; this should be the correct one. Where did I save it? Maybe I accidentally deleted it. Okay, point it at `creativewritingcoach.jsonl`, update the file name, update the tag. That should be good. `python fine_tune.py`: uploaded, pending. One last pause, and then we'll come back once this job is ready.

We're back; it's finally done. Let me show you how much this cost, since you guys frequently ask: in total it was 252 requests (some were deleted, some were testing), we ended up with 202, the fine-tuning data was about 800 kilobytes, and the job was $24. Now, moment of truth. Let's go back to the Playground and load the model; it should be "creative writing coach." There we go. Then let's go to writing prompts and grab a story off the top of the stack, because I'm not going to write one, and I'm not going to ask GPT-3 to write one. It would be a little perverse to have GPT-3 write a story and then critique itself, although you can do that; GPT-3 can be adversarial against itself. So we give it the story, which is about 800 tokens, and then... what was the demarcation I used? I think it was "PROFESSIONAL FEEDBACK," so we add that with some newlines at the end and let it go. This might take a second to load, because with fine-tuned models OpenAI sometimes takes a minute to load the model into its brain. Waiting, waiting... drink some tea. I'll pause until this finishes loading; not going to make you wait.

Okay, success. It says: "The opening of the story is very strong. You immediately set the scene and introduce the conflict. I especially like the way
you describe the war maiden and the demon: the war maiden is very brave and determined, while the demon is sly and manipulative. However, I feel like the middle of the story drags a bit. The dialogue between the war maiden and the demon feels a bit stilted and unnatural. You also introduce a lot of new characters, which makes it difficult to follow the plot. The ending of the story is also very strong; the twist is unexpected and very clever. Overall, I think you have a lot of potential as a writer. With a bit of work, I think you could produce some really great work. Keep up the good work."

And then it started confabulating. Fine-tuned models often do this, and I'm not quite sure why; they just start repeating themselves at the end. The medical fine-tuned bot I did for reducing confabulation had the same exact behavior: in one of the examples it just started repeating "HIV" over and over, like it really wanted you to know about HIV. So it might be that we need to add something like an end-of-text token to the training data and then use it as a stop sequence with these fine-tuned models; I'm not sure. It also might get better with more data, since I'm using the bare minimum of about 200 samples. Anyway, it worked. I hope you liked this video. Like and subscribe, and tell a friend.
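The end-of-text stop-sequence idea floated at the end can be sketched as two small data-side steps: append a sentinel to every training completion, then truncate inference output at the first occurrence of that sentinel. The sentinel string and function names here are assumptions:

```python
# Hypothetical sentinel appended to every training completion.
END_TOKEN = "<<END>>"

def add_stop_token(completion: str) -> str:
    # Training-time: teach the model to emit the sentinel when it is done,
    # so repetition after the real feedback can be cut off at inference.
    return completion.rstrip() + " " + END_TOKEN

def truncate_at_stop(text: str) -> str:
    # Inference-time: keep only the text before the first sentinel.
    return text.split(END_TOKEN, 1)[0].strip()
```

With the OpenAI completion API, the inference half of this is what the `stop` parameter exists for; this sketch shows the training-data half of the fix.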
GPT + Metaphors = exponential growth....I bet all the predictions we have of Artificial Neural Networks potentially over taking human intellect will, very soon, almost instantly happen with it's exponential growth we have seen. I wonder if we can set the "Internal State of Mind" of these ANN's before they eclipse us. "I am a creative writing expert." replace "i am a benevolent god.", @David Shapiro ~ AI I'll take a look at it., You might like my book Benevolent By Design |
I really like how you explained setting the internal state and how it's similar to how we think. It's also great to know that reiterating the instructions at the end improves the output. |
Oh I'm totally taking credit for some of that spike up in the last few days! Heh, subscribed and been obsessively reviewing videos, liking, sharing with friends... You're literally the only person I know of doing this specific sort of thing, so seems like a good niche. Hopefully this is just the beginning to an exponential explosion for ya! |
Fun hack, "¿Does" prompts GPT-3 to ask you a question that starts with "Does" and then answer it, useful for when you're bouncing ideas around a subject back and forth. |
I wonder if GPT-3 should remove ALL CAPS TEXT from its training data and convert prompts like it to lowercase., @David Shapiro ~ AI and like "Some Text!!!!!111"?, I found that all caps is more reliable as a demarcation.
Is aligning done with fineTuning?, @Alan Tennant oh! That is because this is all the INSTRUCT series. They are fine-tuned to follow instructions, @David Shapiro ~ AI I totally agree, big fan of Wes Cecil, but I meant the artificial alignment that was done a while ago that made it more honest but less creative., Some alignment is built into the original data (humans are far more benevolent than malevolent) so just on raw data it's going to be biased towards goodness and helpfulness.
64 tokens (generate) - 1 cent/penny maybe<br>1 million tokens (fineTune) - expensive, Yes, and running a fine-tuned model is also more expensive than regular models. So finetuning is really only economically worth it if it saves you many prompts. |
Morning, everybody, this is David Shapiro. Today we're going to continue working on AutoMuse 3, but what we're going to do is a fine-tuning project, because as many of you observed, AutoMuse 3 didn't work so well. That's fine; this is a research project. What I'm doing is breaking out into scene simulation: I'm going to try to create a fine-tuned model that simulates just the scene aspect. We'll see how well it does, but the idea is that you give it a scene and it advances it, whatever happens next in that scene, and then we'll do something similar for maybe characters or events. Not sure yet, but that's the point of experimentation. Anyway, I was trying to do this with just prompt engineering, and it was ultra frustrating, so I'm going to go more conventional with fine-tuning.

First, I'm generating synthetic data. I've got this prompt that says: "Use your imagination to describe the following scene and setting in great detail. We want to see a play-by-play of what's going on in this environment. Advance the narrative. What is going on here?" I give it a location, a time, a character, and a detailed description of the scene and events. Some of you noticed in previous videos that something I do to add more entropy is put a UUID at the top. This creates internal entropy inside the model, so you get less repetition. So I've got that completely random string, plus three other variables populated here: 11 locations, 7 times of day, and 9 characters. And I tried to get a little more global, because here's the thing: if you just ask GPT-3 to generate random characters, they're all going to be vanilla white people. So I added Michonne, Swathi, Wang, Alexi, and Gabby, manually making it only half white people. All together, I think this is going to be over 600 different settings.
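The entropy trick described here (a random UUID at the top of the prompt plus randomized variables) can be sketched like this. The variable lists are shortened stand-ins; the video's full lists are 11 locations, 7 times of day, and 9 characters, for 693 combinations:

```python
import random
from uuid import uuid4

# Shortened stand-in lists -- the video's full lists give 11 x 7 x 9 = 693 combos.
LOCATIONS = ["a bedroom", "a high school", "a playground"]
TIMES = ["middle of the night", "early morning"]
CHARACTERS = ["Jenna", "Bob", "Michonne"]

TEMPLATE = """{uuid}
Use your imagination to describe the following scene and setting in great
detail. We want to see a play-by-play of what is going on in this environment.
Advance the narrative. What is going on here?
LOCATION: {location}
TIME: {time}
CHARACTER: {character}
DETAILED DESCRIPTION OF SCENE AND EVENTS:"""

def make_scene_prompt() -> str:
    # The random UUID gives the model a meaningless token sequence it cannot
    # pattern-match, randomizing its internal state and reducing repetition.
    return TEMPLATE.format(
        uuid=uuid4(),
        location=random.choice(LOCATIONS),
        time=random.choice(TIMES),
        character=random.choice(CHARACTERS),
    )
```

Combined with a sampling temperature of 1, this gives two independent sources of variety: randomness in the prompt itself and randomness in the decoding.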
So the goal here, first, is to fill up this folder with basic scenes. For generating scenes, I've got the temperature turned up to 1, which adds more entropy on top of the UUID. Let's run the loop once just to test it: `python generate_scenes.py`. It says: a bedroom, Jenna, a high school, middle of the night. "Jenna wakes up in the middle of the night, her heart pounding in her chest. She can hear strange noises coming from outside her window..." Okay, cool; this is pretty creepy: she can't shake the feeling that someone is watching her. This is a great first scene. Let's check the prompt. Yep, this is what it generated from. You see it's just random words and letters; GPT-3 doesn't know what to do with the UUID, so it ends up with a completely random internal state. That increases entropy, which is really good if you're generating synthetic data, and this is a phenomenal first scene. And it saved out here; yep.

This is one advantage of OpenAI having cut their price by two-thirds: suddenly, generating synthetic data is way more economical. I'll just let it run for a minute, although these responses are kind of slow. Then, basically, what I'll do next (you see how it all has these different scenes; God, it's all so creepy, it's like it wants to write teen-lit horror) is take this whole bunch of scenes and write an entirely different prompt to advance the plot, to say what happens next. So we'll have an original scene and a next scene, and what we want is to fine-tune a model that really advances the plot one scene at a time. The idea here is
The idea is to end up with something that can really simulate what's going on in a given scene. Okay, I'll pause it; this will take a while, probably a couple of hours. It's weird, OpenAI seems slower than I remember.

Okay, a little progress update: I've got the next step ready to go, generating the next scene. It's almost the same prompt, but it asks for the scene and then a detailed next scene. For example: "Bob steps into his bedroom..." and then a detailed next scene, a playground. This pretty reliably generates the next scene.

So what am I going to do with this? It doesn't include events or characters or anything. I'm glad you asked. I'm planning to fine-tune another model that will take any given character and simulate what they want to do next. It'll take the perspective, the POV, or rather the interiority of one character and say: okay, this is what I'm thinking, this is what I want to do, this is what I want to say. By having a model that can simulate the head of any character, that output can be injected into the scene (or the next scene) and used to prompt and shape it. The point is to ensure the instructions are followed every time, and that it doesn't get bogged down on any one prompt. You didn't see this, because I deleted the video (it was a disaster), but with just prompt engineering on text-davinci-002, the generation converges on one result and stops following instructions rather than advancing the plot. (Sorry, the microphone was over there; I hope you can hear me.)

What I need (why is my brain so scattered lately? Because it's super early and I haven't been getting enough sleep) is a fine-tuned model that will reliably integrate whatever is in one scene and carry it forward. I can then use that as a launching point to integrate multiple other models, whether prompts or fine-tuned models, such as an event simulator or a character POV simulator. It will reliably integrate all the information that exists here and then spit out the next scene. That way you have a bunch of things that funnel into one step, one instance; it generates a new scene, and then you can take that new scene and integrate more stuff into it. That's my hypothesis. I'll pause again; we're at about 300, so about halfway done generating the initial scenes.

Okay, I'm kind of bored watching this go: 450 out of, I think, 660, so about two-thirds done. The scene simulation is going well, but I want to go ahead and start character simulation too. This will be stream-of-consciousness slash interiority simulation for fictional characters. Add a readme, license will be MIT, create the repo, and grab it. So what goes into the character simulation? I'll copy over some of the stuff from the scene simulation repo (the gitignore, the OpenAI key, generate_scenes) just to instantiate this new one. All right: character sim, GPT-3 logs, and then, what goes into interiority? If we look at some of the scenes, like "Sandy sat on the edge of the mat", it's like the camera is looking at a person from the outside.
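Stepping back, the "funnel" loop described a moment ago (several simulators each contributing to one scene-advancing step, which generates a new scene and feeds back in) can be sketched like this. Everything here is a hypothetical shape, not the actual implementation; the simulators and scene model are injected as plain functions:

```python
from typing import Callable, List


def advance_plot(
    scene: str,
    simulators: List[Callable[[str], str]],
    scene_model: Callable[[str], str],
) -> str:
    """One iteration of the loop.

    Every simulator (an event simulator, a character POV simulator, and
    so on) reads the current scene and contributes some text. All of the
    contributions funnel into one context block, and the fine-tuned
    scene model emits the next scene, which becomes the input for the
    next iteration.
    """
    contributions = [simulate(scene) for simulate in simulators]
    context = scene + "\n\n" + "\n\n".join(contributions)
    return scene_model(context)
```

With real models plugged in, you'd call this in a loop, passing each returned scene back in as the next `scene`.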
A scene like that might still have a little interiority, like "Sandy's watching her children play; she loved this quiet beach"; that's a bit of being in her head. But in order to really write fiction, you as the author need to be simulating what is in someone's head. So let's grab a scene and go over to the playground.

Oh, this brings up what I mentioned: generating synthetic data is way more affordable now. Look at this: I've got 500 requests and it's three dollars. That's a lot of synthetic data. Anyway, back to the playground. Let's try: the scene, then "imagine the POV of the main character" (main character, because some of these have multiple characters), "write their stream of consciousness, or interiority, for this scene". So we're basically taking a scene and translating it into interiority. Okay, it's just repeating the scene. I don't think the model is sophisticated enough to understand what I mean by "interiority". What if we try "rewrite the following scene into the main character's POV; focus on interiority and stream of consciousness"? It doesn't know how to do that either. What we really want to say is: rewrite it into first person. So: "rewrite the following scene into the main character's first person view". Rewrite as first person... now that's interesting. Okay, I like this. What we're doing is just reframing it: instead of watching from the outside, it says "I". This is a good start, so let me save this as a prompt with a scene placeholder, and then we'll come back out
here to the character sim repo and save it as the first-person prompt. But we need a little bit more, because a scene is good, and it implies things about someone, but we also want something about the character's beliefs. When you write a character, there's a bunch of stuff that goes into making a good character. So what is that? Let me zoom in and just ask the model: "brainstorm some key features of good characters for fiction authors"... no, not "key features"; "write some detailed features". There we go: a good character is believable and relatable, has a clear motivation, is well-rounded with both positive and negative qualities, and undergoes believable development over the course of their story. That's good. To put it in other terms: a character has a past, a backstory (people don't just pop into existence); a character has beliefs, about themselves, the world, etc.; a character has preferences, likes and dislikes, because you're not really describing characters, you're describing people; a character has motivations and goals (what do they want today, tomorrow, from life?); and a character has connections, friends, family, etc. So let's figure out how to simulate these characters. Actually, this makes sense to store in the readme: what makes a good character? A character is just a fictional person, so what traits do interesting people have? I'll copy that in and save it.

Okay, so we can translate a scene into first person, but what else can we do? Let's try: "Given the following scene, describe the main character in great detail. Imagine the details if you need to." Then the scene, and then: list out their backstory, motivations, beliefs, flaws, strengths, preferences (likes and dislikes), goals, and connections (friends, family, etc). Then "DETAILED LIST OF CHARACTER TRAITS:", seeded with "BACKSTORY:". Let's see what it does. Oh, this is great: "she can also be quite gullible... kind and caring... she's also very brave... Sandy likes the beach and the ocean... Sandy is close to her parents and her siblings; she's also close to her friends." Let's also add "list friends". We're off to a good start.

So what I'm going to do is take the synthetic data I'm already generating about these characters and use it as a launch point to get that interiority as well, because you need to know these things about a character in order to simulate their mind: if you have a list of what a person wants, thinks, or believes, that dictates how they'll respond to the world. Oh, we should add personality, and connections. Then back in the character simulation repo we'll save this prompt as character traits (you don't want to say "characteristics"; say "character traits"). Getting someone's first-person POV is good, because then you can put in a scene, or you can have a running stream of consciousness.
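The character-trait prompt just assembled in the playground might be captured as a small template like this. The wording is reconstructed from the video and the helper name is hypothetical:

```python
TRAIT_PROMPT = """Given the following scene, describe the main character in great detail. Imagine the details if you need to.

SCENE:
{scene}

List out their backstory, personality, motivations and goals, beliefs, flaws, strengths, preferences (likes and dislikes), and connections (friends, family, etc).

DETAILED LIST OF CHARACTER TRAITS:
BACKSTORY:"""


def build_trait_prompt(scene: str) -> str:
    # Seeding the end of the prompt with "BACKSTORY:" nudges the model to
    # start the trait list in the expected labeled format.
    return TRAIT_PROMPT.format(scene=scene.strip())
```

A lower temperature suits this prompt better than scene generation, since the goal is a faithful elaboration of one scene rather than wild invention.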
What we'll probably do is just translate all of these scenes into first person, so we'll have one set of data where the scene is happening, and another set of data that is the character. What we'll also need is the character's traits, so they can serve as the character's core constitution. All right, I'll pause again, because this data will take a while to finish, and I'm branching out in enough directions that I don't want to make this video too confusing. We're at 588, so about 100 samples left; we'll be back in a minute.

Okay, we are rapidly zooming in on the final product: 694 samples, and I think it just finished. You might be asking: you just did this with prompts, so what's the advantage? The advantage is that because the prompts are built in and I have a whole bunch of data showing the structure, I'm not trying to teach the model anything new; I'm basically guaranteeing that it will follow the right format.

First, we've got our scenes folder, which is 1.8 megabytes, well, 528 actual kilobytes, 694 samples. We'll copy the scenes directory over to the character simulation, because hey, we just generated this data and it's super valuable. Then, for the next scenes, let's open up generate_next_scenes: for each file in os.listdir of the scenes folder, print the scene and build the next-scene prompt. The prompt asks for a "long detailed next scene"; this works relatively well, though sometimes it's a little too short. It begins: "use your imagination to describe the following scene in great detail; we want to see a play-by-play of what's going on in this environment." Oh, one thing I need to add: a new UUID per request; yes, it's in generate_next_scene. So we hit play: python generate_next_scene. This will generate two scenes in sequence. "Stretching his legs, he walks over to the stream and looks down..." This is great.

So this is generating synthetic data with a very clear structure: here's one thing, here's the next thing. The scene simulation repo is filling up next_scenes, and the file name is identical to the original scene's, so you have the first scene and then the next scene, sequentially one after the other. Here's the big advantage: we're teaching it the structure. That is the entire point of fine-tuning: we're teaching it a pattern that says, given whatever input is here, advance the plot. In the future, instead of the original scene being written by a prompt or one-shot, it can be conglomerated from several things, such as the character simulation I'm about to work on.

Okay, on to the character simulation; we'll let the scene simulation run in the background. We'll go to generate_pov, and let me close some windows, because it's getting a little busy. For generate_pov we do not need locations, so delete that. And let me copy a little bit over from scene simulation's generate_next_scene.
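Before moving on, the generate_next_scene loop that is filling the next_scenes folder might look roughly like this. The completion call is injected as a plain function here so the loop is shown without API credentials; the folder names match the video, but the template wording is an approximation:

```python
import os
import uuid

NEXT_SCENE_TEMPLATE = (
    "{entropy}\n\n"
    "Use your imagination to describe the following scene in great detail. "
    "We want to see a play-by-play of what is going on in this environment.\n\n"
    "SCENE:\n{scene}\n\n"
    "LONG DETAILED NEXT SCENE:"
)


def generate_next_scenes(completion, src="scenes", dest="next_scenes"):
    """For every saved scene, ask the model for the next scene and save it
    under the SAME filename, so the two folders line up pair-for-pair.

    `completion` stands in for the OpenAI call (prompt -> text); injecting
    it keeps the loop itself testable.
    """
    os.makedirs(dest, exist_ok=True)
    for filename in os.listdir(src):
        with open(os.path.join(src, filename), encoding="utf-8") as infile:
            scene = infile.read()
        # A fresh UUID per request: the same entropy trick as before.
        prompt = NEXT_SCENE_TEMPLATE.format(entropy=uuid.uuid4(), scene=scene)
        next_scene = completion(prompt)
        with open(os.path.join(dest, filename), "w", encoding="utf-8") as outfile:
            outfile.write(next_scene)
```

Keeping the filename identical in both folders is what later makes it trivial to pair each scene with its continuation.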
In generate_pov we print the scene; we don't need the character. Our prompt will be the first-POV prompt on the character sim side. We probably won't use UUIDs here, because this is mostly a translation task, and we'll also turn the temperature down, because we don't want it to be as creative; we're basically just saying, rewrite what is here. So: load the first-POV prompt, no UUID, replace the scene placeholder with the scene, get the POV back, and save it, not in scenes, but in the folder I created, first_pov, under the same filename. Then from the character simulation repo: python generate_pov. "It's mid-afternoon and Wayne is taking a break..." becomes "I'm enjoying a cup of tea in my peaceful country cottage when I hear a branch snap. Startled, I look around, but the only thing I see is a deer standing calmly." Cool: it changes the scene from third person to first person. So we're setting the stage: the original scene and the next scene will both be first person, which trains the model to generate a first-person POV from whatever information we give it. That means we can put in beliefs and other traits; one of the next things we'll do, like I showed a moment ago, is get the character traits, so we can inject them into the simulation that runs the first POV. We can also generate material from the scene.
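The generate_pov loop just described (no UUID, temperature turned down, a `<<SCENE>>` placeholder swapped for the scene text, results saved to first_pov under the same filename) might be sketched like this. As before, the completion call is injected as a function, and the prompt wording approximates the one built in the playground:

```python
import os

POV_PROMPT = (
    "Rewrite the following scene into the main character's first person point of view.\n\n"
    "SCENE:\n<<SCENE>>\n\n"
    "REWRITTEN AS FIRST PERSON:"
)


def generate_povs(completion, src="scenes", dest="first_pov"):
    """Translate every third-person scene into first person.

    No UUID this time, and in the video the temperature is turned down:
    this is a translation job, not a creative one, so we want faithful
    rewrites rather than novel content.
    """
    os.makedirs(dest, exist_ok=True)
    for filename in os.listdir(src):
        with open(os.path.join(src, filename), encoding="utf-8") as infile:
            scene = infile.read()
        # Simple placeholder substitution, mirroring the prompt files in the repo.
        prompt = POV_PROMPT.replace("<<SCENE>>", scene)
        pov = completion(prompt)
        with open(os.path.join(dest, filename), "w", encoding="utf-8") as outfile:
            outfile.write(pov)
```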
Events too; we inject all of that and run the simulation with these fine-tuned models. So I'll pause and let this run, and we'll end up with a bunch of first-person POVs, a bunch of simulation scenes, and the character traits for these folks.

Generation is pretty slow, so I figured I'd show you some diagrams while I'm waiting. I've already done the dishes and taken out the trash, that's how bored I am waiting for this synthetic data to finish, and you can see it's 8:30 in the morning now. Okay, so here's what I'm planning to happen. There's the character simulation, which is first person: this is what I'm doing, this is what I'm thinking, this is what I want. Then there's the scene simulation, which is third person, the camera watching what's happening, other people interacting; and there can be one scene with multiple characters. Then you can inject things like events, beliefs, backstory, all kinds of stuff. Each of these simulations runs in parallel, then they merge, and the cycle continues: it goes back to the beginning, a continuous cycle that advances one iteration at a time.

So what happens inside the character simulation? It's the basic input-processing-output loop of robotics. The character has a base personality, which consists of beliefs, identity, and history; that's their agent model. You and I have an agent model, an AGI has an agent model, a fully realized robot has an agent model, which is why I'm doing this in fiction.
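The agent-model loop described here (a base personality of beliefs, identity, and history, cycling through input, processing, and output against an outer world) could be sketched as a small class. This is purely an illustrative shape, not anything from the repos; the fine-tuned first-person model is again injected as a plain function:

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class AgentModel:
    """A character's 'core constitution': who they are, carried between scenes."""
    identity: str
    beliefs: List[str]
    history: List[str] = field(default_factory=list)

    def step(self, observation: str, simulate: Callable[[str], str]) -> str:
        """One input -> processing -> output cycle.

        `simulate` stands in for the fine-tuned first-person model: given
        the agent's traits plus what just happened in the outer world, it
        returns what the character thinks, says, or does next.
        """
        context = (
            f"IDENTITY: {self.identity}\n"
            f"BELIEFS: {'; '.join(self.beliefs)}\n"
            f"RECENT HISTORY: {' '.join(self.history[-3:])}\n"
            f"OBSERVATION: {observation}\n"
            "FIRST PERSON RESPONSE:"
        )
        action = simulate(context)
        # The loop closes: what was observed becomes part of the agent's history.
        self.history.append(observation)
        return action
```

The same `step` works whether the "outer world" feeding in observations is a text simulation, a game, or sensors on a robot, which is the point the video goes on to make.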
If you can simulate any character, you can also simulate a personality for an AGI, and that's how this all relates back to my primary work, which is to create a benevolent AGI, or ACE, an artificial cognitive entity. There are two basic domains: the inner world and the outer world. There's input from the outer world, processing, and output back to the outer world. You have an agent with beliefs, personality, identity, and history, and then some kind of interaction with the outer world. An interaction is a two-way street, but think of it as a loop: input, processing, output. In the outer world there will be a series of events, there will be people or other agents, and there will be actions: things you do to the world, and things it does back to you. That's the basic loop, and it's the whole purpose of the character simulation: building a fine-tuned model to handle the agent model. That's why we're doing first-person POV: "I", "me", "this is what I want", "this is what I'm thinking", and so on. It will be combined with the character traits once I have them, because a list of character traits is an agent model. For us humans it's all implicit; it's what we might call ego, or superego, or id, or your identity; those are all just different words to describe who you are as a person. In fiction we say character traits, backstory, personality, strengths, weaknesses, preferences, all this fun stuff. That's all just an agent model. And if we have a general-purpose agent-model fine-tuned machine, then we can put any agent model we want into it and simulate what it will do.

This has multiple applications. We can simulate, well, probably not real people, not with any level of accuracy, but we can at least simulate fictional people, and this also sets the stage for testing things like AGI alignment, because then we can have artificial agent models for despotic dictators, fictional agent models for ordinary people making ordinary human decisions, and agent models for AGI machines, whether it's a robot or an AGI overlord. I realize that where I'm going with this might not be intuitive to other people, but to writers it is very intuitive, especially speculative fiction writers, though really all writers: what we do is run simulations of people in the world in our heads and then put them on paper as text, and then we refine that text, usually to make it more entertaining, but also to make it realistic and believable, to make it credible. We're used to running these simulations in our heads; so what's the difference between that and running a simulation connected to an API that can actually speak with you? This is where you get into solipsism and epistemology: how do you know what's in your head? Your mind, your brain, is its own kind of detached world interacting with the outside world. This is why I put forward MARAGI, a microservices architecture for robotics and artificial general intelligence: I think artificial general intelligence, or artificial cognition, should be a thought-first model, because then you have a thinking machine that happens to also be interacting with the outside world. That outside world might be a fictional world, it could be the real world, it might be a game world. Once you solve the problem of fictional artificial cognition, you've already solved the problem of real robotic cognition. It's the same problem; it's just a matter of which world it's interacting with.

To put that another way: what is the world the agent interacts with? Is it a game world? Is it the real world? With an API, it doesn't matter; the machine doesn't care, as long as it thinks it's interacting with something outside of itself and getting information and feedback from that outside world, whether that's a text-based simulation, a 3D simulation, or cameras and motors and robots in what we call the real world (we might be in our own simulation; we don't even know). This is why I coined Shapiro's law: you cannot talk about AI without eventually talking about quantum mechanics. It always gets down to the base nature of reality, where consciousness resides, and what the nature of consciousness is, and all we can say is that consciousness is an agent that interacts with something outside of itself.

Okay, this video is getting pretty long and I'm going down more rabbit holes than I thought I would, so I'm going to stop here and come back for part two tomorrow. To set the stage for tomorrow: we'll return to character simulation and scene simulation. I'm going to set these to finish generating the data, then walk away and stop there. Thanks for watching; stay tuned, we will continue this.
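One concrete artifact the matching scenes and next_scenes folders set up is the fine-tuning file itself. The legacy OpenAI completions fine-tuning format used in this era expects JSONL records with `prompt` and `completion` keys, a fixed separator ending each prompt, and a stop sequence ending each completion. A sketch of the pairing step (the separator and stop strings here are illustrative choices, not taken from the video):

```python
import json
import os


def build_finetune_jsonl(src="scenes", nxt="next_scenes", out="finetune.jsonl"):
    """Pair each scene with its next scene (same filename in both folders)
    and write prompt/completion records in the legacy OpenAI fine-tuning
    JSONL format."""
    with open(out, "w", encoding="utf-8") as outfile:
        for filename in sorted(os.listdir(src)):
            with open(os.path.join(src, filename), encoding="utf-8") as f:
                scene = f.read().strip()
            with open(os.path.join(nxt, filename), encoding="utf-8") as f:
                next_scene = f.read().strip()
            record = {
                # A fixed separator marks where the prompt ends.
                "prompt": scene + "\n\nNEXT SCENE:",
                # Leading space plus a stop token, per the legacy format's conventions.
                "completion": " " + next_scene + " END",
            }
            outfile.write(json.dumps(record) + "\n")
```

With a few hundred of these pairs, the fine-tuned model learns exactly the pattern the video describes: given whatever is in the prompt slot, advance the plot.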
Please consider supporting me on Patreon: <a href="https://www.patreon.com/daveshap">https://www.patreon.com/daveshap</a> :) |
GitHub Repos for this video: <a href="https://github.com/daveshap/CharacterSimulation">https://github.com/daveshap/CharacterSimulation</a> and <a href="https://github.com/daveshap/SceneSimulation">https://github.com/daveshap/SceneSimulation</a> |
You are amazing dude! I think you are on the right track to create a real humanoid thinking robot. Good job man! I'm very curious about the programming of moral values that will direct the model in each thinking stage, but I have confidence in your strategy and philosophy. Again, thanks for sharing!
This will work well with creating a scene for a prompt design for a character to generate with diffusion models! |
Hey everybody, David Shapiro here with a video. After my video yesterday I got a few comments about SEO and predictions, and after writing a couple of posts on LinkedIn and Twitter I realized it's actually the same thing. So let's talk about SEO and generative AI, and ask the question: is SEO toast?

Before we get started, I want to plug my Patreon real quick. I'm almost at my goals, and if I reach my financial self-support goals I'll disable ads forever on YouTube; ad revenue from YouTube is super unreliable, and Patreon is way more reliable. So hop over to patreon.com/daveshap and check out the support tiers; I just updated them so you know what to expect, how much is needed, and so on.

All right, enough of the plug. I posted a couple of polls, one on Twitter and one on YouTube, and interestingly the results are nearly identical. I gave three options: one, SEO is toast; two, SEO is fine; three, a bit of both. In both cases a small minority said SEO is fine, a good chunk said SEO is toast, and about the same amount said it's a little of column A, a little of column B. So let's unpack this. I did some exploration, but I'm not an SEO expert, so this is a naive perception.

What the heck is SEO anyway? If you just clicked on this and don't know, it means search engine optimization. It is the fine art of attracting internet traffic to sell goods and services; it is also part of selling advertisements to potential customers. Up until now, the way you do this is you optimize web pages to appear higher (I should have written "higher", not "hiring", on the slide; I apologize) in SERPs, search engine result pages, i.e., Google search results. That's what SEO is, and advertising and keywords and all that are part and parcel of it. The reason you do search engine optimization is to
drive traffic to your website to sell a good or a service; that is the profit motive. Obviously you might be trying to do something else. For instance, I'm not trying to sell you anything; I'm trying to get you to watch my YouTube channel, so I technically do some search engine optimization with things like titles, thumbnails, and content: I'm targeting content to attract traffic. So search engine optimization is a very broad thing, but primarily it's about getting eyeballs on your site, and at the end of the day it's matching user intent, or user needs, to particular goods and services or other online resources.

On matching and recommendations: SEO is basically a recommender system. This is what Google did: you type in a search and it recommends some pages. On Amazon, you type in a search and it recommends products. YouTube recommends videos; Netflix also recommends videos. So you're already familiar with SEO; it's fundamentally a matchmaking or recommendation system.

I wanted to share an example: Lexii.ai. This is a service created by someone who joined my former Discord community. He built a chatbot that is specifically for finding lawyers. It's one early example of what happens if, instead of a generic Google search, you have a chatbot built for locating goods and services. Right off the bat, I want to show you how things are changing.

Now, one person pointed out that SEO has always been about bots. If you remember the term "web crawler", those were the search bots that would spider the whole world wide web and record what was on those websites. So web content has always been search engine optimized; it's just that now we're optimizing for a different kind of bot. I thought that was really profound, which is why I dedicated a whole slide to it: actually, nothing changes. Instead of web crawlers, now we have chatbots; same general principle, just a new iteration of the same thing.

So now the question is how to crawl and connect web results, web pages, and their content, whether that's goods, services, or videos. Imagine you have a chatbot on YouTube, or in Amazon, or in Netflix. Instead of having to infer user intent, there are going to be some new changes. Raw information gathering is our information-foraging behavior; that's what I talked about yesterday in the video about how our information landscape is changing. But general web searches don't really have a whole lot of money in them. Where the money is, where business makes the most sense, is connecting advertisers, which is to say, selling goods and services. Chatbots aren't going to replace those providers: a chatbot is not going to replace Amazon; if anything, it's going to make Amazon better. A chatbot isn't going to replace Netflix; it's going to make Netflix better. Ditto for YouTube. As for Google, since they're already good at spidering the whole internet, it would behoove them to really focus on a goods-and-services-locating chatbot. Why do you hop on Google Maps? Because you're looking for a restaurant, or something fun to do. Why do you hop on YouTube? Because you're looking for a video for entertainment. Netflix and Amazon, same story. I think Google has a prime opportunity to bring all of their services under a single banner, under a single chatbot, no matter what it is that you're
trying to find and then of course the advertisers they'll probably pay per you know like oh hey you recommended our our um our restaurant or whatever um because if regardless of whether or not someone is paying for it it would behoove Google to be the best to do this now um one thing that I that that other people talked about and and there are people that know more about SEO than I do um we will need new technologies in marketplaces um the way that one person explained it is instead of getting a um instead of getting a push model we might have more of a pull model right where like say for instance um the lexiai chatbot one of the things that that they're trying to work on with that is okay you sign up and So then whenever we match you you pay us a little bit um rather than you have an active marketing campaign so it's like switching from active to passive marketing maybe not sure how that's going to play out but also we're going to need to to re-optimize web content so instead of optimizing it for web crawlers we're going to need to optimize it for chat Bots which means new apis new paradigms around SEO but as SEO is not going anywhere so as an example to what I mean like chat GPT I asked I said pretend like you're you're helping me find this stuff and it said sure let's find the best pizza in New York City who knows it probably got some of this stuff from like Google data or whatever but ideally rather than being passive and offline it would give me some some real-time recommendations based on oh hey this one's closest to you and it's not busy right now right those kinds of things because what would be better than for me right saying like hey I want Indian food right now find the closest one that's not busy that information is available and a chat bot could be the ideal thing to deliver that information um my sister-in-law went to Disney or Universal and so she did a lot of data uh uh gathering before so she had a schedule of which parts of the park were emptiest 
and when so when they like wanted to go on rides they're like okay we should go over here now so imagine that you have a chat bot that's connected to all kinds of real-time data saying like Okay what amusement park is or what part of the park is best right now so on and so forth so there's a lot of possibilities of these chat Bots because then you can really clearly explain what it is that you're looking for and that you want another example picking products so goods and services remember that's what SEO really comes down to that's the profit motive of SEO so whether it's you know trying to find a service provider like find me a mechanic that is reliable find me a doctor that is reliable help me find the best bicycle for what it is that I'm trying to do the the quality of information that chat Bots are going to provide means that we no longer have to infer user intent we can get explicit user intent right where the user can tell you exactly what it is that they're looking for and you don't have to guess um now that being said users are notoriously unreliable people often don't know what they want um and chat Bots are also not yet optimized optimized for it so we have to look at it more like recommend recommendation systems or recommender systems which are anticipating user needs before the user knows what they need and this is what really resonated with the video yesterday when I talked about what if you can anticipate not just the goods and services that someone wants but what if you can anticipate the information needs that someone has ahead of time so what if the chat bot is thinking behind the scenes and has a theory of mind and understands who you are and what you need not just from a good Services perspective but um able to say this is the information that you're going to need next so Google already does this to a certain extent which is why sometimes the Google search results like you type in one character and it predicts what your next search query is going 
to be and so it knows like what search Journey you're on and knows what information Journey you're on but again it's very passive it's not it Google is never going to say hey we based on your search pattern over the past few days we think this is this is something that you want they try and do that with like the Android phone but imagine if you have a chat bot they can actively just like Ping you and it and it will remember like oh hey Dave Dave usually checks me around 10 30 every morning so instead of me waiting passively what if the new chat bot says hey Dave based on what you're doing yesterday here's here's an idea for you like maybe make maybe make this video maybe uh since because obviously the chatbot should know Dave makes YouTube videos and it can be information foraging for me on my behalf now this all sounds very pie in the sky but what we're basically talking about is a personalized internet so the idea is that a sufficiently advanced chat bot um not something that's passive but a cognitive architecture could be thinking about you and your needs while you're not interacting so it can be reviewing all the data that it has about you all your past interactions um and uh and other behaviors and not not only anticipate your the goods and services that you might want because I am notoriously difficult to sell to when I want to buy something I know exactly how to find it and where to get it I have bought Things based on Advertising I think twice in my entire life I hate advertising so I am not a good customer that being said I do have information needs so memory of past president past preferences there we go inferences and anticipations about future needs and behaviors so if we can go above and beyond purchasing Behavior we can look at uh more broadly about what the user needs as a human now how does it do this so this is like the the Google Google's magical ability to predict what you need next this is based on aggregate user behavior and clustering so it 
says okay people that search for X Y and Z also tend to search for a b and c and it seems magical but basically it's just tapped into Collective wisdom same thing ditto for YouTube video recommendations it's like hey you picked on you clicked on one conspiracy theory video let me show you a bunch of other ones and it feels magical but it's just clustering because the the recommender system um has the benefit of a lot of data um the text so basically for all this SEO stuff the underlying technology is already there we just need to integrate it into chat Bots so here's some examples this was from my Amazon so a video a couple days ago on my other channel I mentioned the Orchid and the dandelion I looked up the book to make sure that I said it right it's Dr Thomas Boyce and then Amazon says oh people who look at this also tend to buy these other things and they're relevant but Amazon has no idea what these books are about all it knows is that it has observed clustered uh purchasing behavior and so to me it's like oh wow I didn't even know that I needed that so it feels magical it feels like it's anticipating what's it going on in my brain but really what it's doing is it's looking to the Past for other things same thing for YouTube mostly YouTube is just recommending uh channels that I've watched before um five out of six of these are channels that I'm subscribed to this one is really random I have no idea why it popped up and it's only got nine views it looks like it's talking about gun crime which is like I don't really care so this is YouTube just kind of guessing like hey maybe maybe try this new thing because every now and then I think what it figured out is that every now and then I'll click on like completely random channels that have like no views um more often than not I unsubscribe and block the channel because it's garbage um so anyways I'd probably do the same with this one and then same thing for Netflix although Netflix you notice how all of the Netflix 
recommendations are Netflix originals so this is where it's like Netflix's recommendation system is actually kind of broken because they're trying to push people to their own content which usually Netflix's content is okay but there's also a lot of stuff that's on Netflix that is not their content that I would rather be watching so I often ignore the top picks for Dave on Netflix because it's like okay I see what you're doing you're just channeling me to to your content so I actually don't trust that recommender system um but anyways point being is these things they appear very magical because they are uh seemingly anticipating my information needs before I know what what I need um and so all we need to do is change this a little bit integrate it into chat Bots and chat Bots will feel very magical about their ability to anticipate what we need next so in conclusion rather than thinking that SEO is toast I actually think SEO is going to be more important than ever and more sophisticated I really anticipate that that understanding user needs and user information uh Behavior foraging behaviors I think that that's really going to change and so yesterday's video one of the topics that we talked about was information literacy if we can incorporate information Theory into chat bots so that we can anticipate what users believe and why we can work with them so that they uh so that because one thing that people are afraid of is that chat Bots are going to make people Dumber that that they're going to take the critical thinking away from us but what if we program those our chat Bots to challenge us to say hey I know that you believe this but let me show you a little bit of information that might change your way of thinking or that might make you critically engage I think that I think that if we if we create sophisticated enough cognitive architectures then we will get to the point where chat Bots actually are not just passive information you know servers that they are active 
participants in our information literacy and our information foraging Behavior so again it's I think that we're going to change from passive to active in a lot of ways um and there's lots of work to do around integration and stuff and I talk about cognitive architectures a lot um because this is my primary area of research so if you want to join in um check out my GitHub Link in the description and join my Raven project which is a fully fledged cognitive architecture it is also open source and one of the things that the one of the purposes of Raven is to be an autonomous cognitive entity that can work with you and for you on your behalf without your intervention work in progress so anyways uh also consider jumping over and supporting me on patreon I think that's it yup thanks for watching |
Just a thought: my mobile phone is the most personal device I’ve ever owned. It sits at the edge of a massive network, but at the same time is very private. That’s the place where I want my AI buddy to live. 🤔 IMO
Is SEO toast? Yes. And so are web design, content marketing, newsletters, social media, and all the tactics designed to reach and trick people. Why? Because your AI personal assistant will know your preferences better than you do. And it will go out of its way to review, summarize, and present things the way you want them, not in the old crappy format of a list of blue links or little logos. Your company will just have to plug in some JSON/XML with content and offers to say "I exist" and that will be it., Not social media, though. It's an addiction for many.
SEO ruined search. |
SEO has turned into "who can game Google's algorithm better than anyone else, while keeping ahead of Google's attempt to thwart that behavior?" The SEO people are winning, and I really hope ChatGPT et al. can fix this. |
I imagine businesses will soon SEO by sharing data like camera feeds, shopping cart and CRM data. |
The main point of SEO was/is to rank high in Google search by becoming an authority in a specific subject/niche. If you take a look at the current Bing Chat implementation you'll notice it uses a curated list of authorities to source its answers, so this will prevent the SEO works/creations from becoming an authority in the long run. Apart from that, ChatGPT is still there and it can compete on its own with search engines, this time spent with ChatGPT will also prevent people from watching the SEO works., Oh good point. Who gets to be an authority in this new model? That's an important question... |
I will not cry if SEO gets toasted. I believe that prioritizing quality content and usefulness for customers is more important than focusing solely on SEO and good search rankings., We will find a way to make money from search. Things will just change. It may be harder, which actually makes it easier/less competition for those who do the work.
So, organic SEO will be more important than ever. As far as PAID search is concerned a content-centric Chat Bot will probably ignore paid attempts at cutting in front of the line. But if "every bot has a price," then we're back to square one with keyword price wars and sponsored top listing results. |
Like Tony Stark's Jarvis ... but in your pocket 🤖 |
I think that web spiders (or bees) deliver input to a sort of web "database", and chatbots deliver the output from the "database" or LLM in a format of conversation. |
I think the part I disagree the most is with SEO being more valuable and more sophisticated. I don't think that applies to the majority of websites out there. The chatbot is basically a 'smarter' web crawler right? That means even if your website isn't fully optimized for search but the bot understood most of it, it should still show up in the results. So you don't need to spoon feed the web crawler anymore to get into search results. Only "smart" websites can be understood by a "dumb" webcrawler but "dumb" websites can still be understood by a "smart" webcrawler<br><br>With this, I feel like the overall value of having SEO as a skill will trend downwards in the coming years. For maybe the most niche topics or products, SEO might still be relevant, but not anymore for the majority. |
I think you are under-appreciating Google. It does predict quite well what user want - type two currency symbols and it gives the exchange rates, type two airport codes and it shows the next flights. Also, it does classify what types of things I usually look for behind the scene - it knows I more often look for market information than for products. When I submit "vacuum pumps" it gives me a lot more info on the market for vacuum pumps than offers to sell me one - that's different than what other people get. |
I wonder if chat-bots (or Cognitive Architecture Bots) would take into account personal biases, in order to provide that person with more of the content they'd prefer? Obviously people are more likely to enjoy/engage with content that conforms to their personal biases. Would corporations use personal biases for good or for purely capitalistic purposes (to get more engagement)?<br>There's a news site created by a person who used to work for NASA called Ground"dot"News (the dot is a period) which not only provides news, but also provides a "bias" scale for each news source. The point is that it helps the user determine how much bias there is in their news content.<br>It would be interesting if, when serving up content that could be subject to bias, a CAB would provide those bias scale statistics while doing so.<br>The only issue being, who determines how biased content is? Could AI determine it for us? And be a kind of impartial judge?, Yeah you draw an important distinction between preference and bias. What's the ultimate difference? I will have to think about this...
AI is only as current as the training data, whereas SEO is current as of when the most recent crawled data is indexed. AI is great for learning concepts, whereas SEO works for when you want options. I think that AI is also much more susceptible to bias than SEO. I am not saying search engine results are free of bias, just less.<br><br>Lastly, over half of web searches have local intent. I don't see how AI is going to capture this information better than search.<br><br>In the end, I think that there will be room for both. I also think that AI will be a bigger threat to education, than it is to search. If I was an educator, I would be very worried., @James William Gregory Excellent, thank you., @Todd Sobrado So using word tune plug-in to summarise huge articles into headings and generate loads of h2s and h3s <br><br>Also bonus tip for telling chat gpt to rewrite it with more seo jargon and cited studies alongside using tools like “surfer seo” or “page optimizer pro” for a list of phrases downloaded from google nlp api to include in your post for maximum keyword topical coverage <br><br>End up with near perfect industry articles I can really blow up with knowledge bombs every chapter intro <br><br>🔥, @James William Gregory Excellent, thank you. I appreciate your response., Nothing is being trained beyond giving it seed outlines and new info in my prompt. Can go 8,000 words long, @Todd Sobrado mainly digesting other posts and making summaries for outlines at scale alongside having really knowledgeable industry experts fine combing and making them even better and 25% edited with based human deep industry nuance |
I think SEO is toast in a sense. I.e. you ask an AI “what is the best pizza place in my area”. A smart AI will somehow figure that out. SEO as in “fooling the AI into showing your business on that response” is probably something that should become harder and harder., Hopefully LLMs can help us detect misinformation, misdirection, and SEO subversion., I just hope the SEO spammers never catch up and start polluting everything, This said, OF COURSE you’ll get ads in these kinds of results. They will sell ad space embedded into AI results. |
Vote this comment if you think the future is in ChatGPT/Bing. |
Vote this comment if you think the future is in Google Analytics. |
Search is fundamentally the process of connecting content with users who will value that content. As search tends towards perfection, it should follow that SEO becomes less and less valuable. Despite the amount of content increasing exponentially, I believe search is improving even faster than that. Hence it seems to me that SEO has a limited window for existing. Perhaps some of my assumptions are faulty, time will tell., But an AI chatbot is not improving search. Its ability to answer questions depends on training over data collected by web crawlers and stored in giant databases. <br>The easier it is for these web crawlers to get that data, and the better the data is labelled and formatted, the better the chatbot. Cleaner and simpler data in HTML (SEO) means better training data extracted by web crawlers and therefore better chatbot answers.
Personally I think lawsuits are coming. ChatGPT in Bing is dicey as hell and a legal minefield. |
Things are changing so fast. |
Search all in all is toast |
Is it possible for everyone to have their own trusted chatbot where user can entrust the bot to store the user's profile?, That's what I am working on here: <a href="https://github.com/daveshap/raven">https://github.com/daveshap/raven</a> |
I sure hope it's toast. At least the "old" way of doing things. Google has become so frustratingly useless at times., Not just useless, evil as well. |
What I would want is agent status for my personal AI, specifically that it specifically represents me, and acts solely in my best interest. Of course, it should have a constitution to make it the agent of a responsible citizen of society, our species and the planet, but I do not want it to secretly be a wh*re for a corporation that is data-mining me in order to impoverish me for the corporation's benefit., Wow, very well put., That's the purpose of my RAVEN project. It has a very simple and clear constitution |
morning everybody David Shapiro here with a brand new video sorry I was out over the weekend I was actually meeting with my startup team out in California and it was amazing we've got a lot going on there will be more news coming but first we've got a little bit of housekeeping to do so first my patreon goals I'm nearly halfway to my goal of getting to five thousand dollars a month if I get to that I will disable ads forever because I think ads are a waste of time and I always appreciate channels that don't have ads so if you want to jump in help me out on patreon it's patreon.com daveshapp next up I did mention that I'm part of a startup we're LinkedIn official so it's Exigent AI you can come check us out there and the other bit of news around the startup is that we were actually accepted into Nvidia's Inception program which is sort of a startup accelerator the press release is out there as well one other bit of news is that many of you have asked about investing and there's a lot of SEC rules about how you can pitch and request and basically you cannot pitch to the general public you can only pitch to accredited investors with that being said there is a crowdfunding platform that we will be starting a campaign with in just a few weeks so I just wanted to introduce you to the idea of Wefunder we are not live yet we are not requesting investment need to be absolutely clear about that I'm not soliciting any investment right now I'm just pointing you to Wefunder so that you are familiar with this platform once our campaign is live okay now on with the main part of the show so I get a lot of suggestions whether it's LinkedIn or Twitter or patreon there's a lot of people that are following along with this long-term chat bot thing and one thing that became really apparent to me is that y'all don't know where I'm going with this some of you do if you've been with me for a while and
you've read my books you know where we're going with this um and this is not a criticism this is just an observation uh so I have written three books on the topic of cognitive architecture and so basically what I'm doing is I am implementing some of these ideas uh in a new format basically so my first book was natural language cognitive architecture it's out here on GitHub for free it's also on Barnes Noble so what we're working on and this is part of the video I promise because I got to tell you what I'm building and why so what we've done so far is basically just the outer loop right where there's something coming in from the environment in this case just a text message coming from me and then we build the context which is by pulling stuff from the database we build up a corpus and then we generate the output so what we're going to be working on today and in the future is the inner loop which is a separate inner loop uh and I think there's actually going to be multiple kinds of inner Loops so there's going to be the real-time inner loop which will help you know kind of help the outer loop so this will be like Raven's internal private thoughts but then there's going to be a slower cycle inner loop for the dream sequence that I've talked about the the slower interloop might also handle task tracking and so the the fast inner loop will be for task switching and task selection that'll be saying okay this is what we should be doing and this is why the slow inner loop will be okay let's let's review memories and figure out what exactly it is that we've done right and wrong and let's extract information my second book was benevolent by Design which talks about the core objective functions so some of you have commented on like oh well you know you shouldn't have the goals in every prompt one okay sure maybe but two there are uh many many contact points with the core objective functions so there's real-time Behavior which is okay if you have a chatbot that has general 
purpose goals you want it to reduce suffering increase prosperity and increase understanding this is the general the Bedrock goal of any chat bot or any AGI really that we want to have and I've got a whole book kind of one supporting this argument and two talking about how to implement it so if you're curious about the core objective functions or the heuristic imperatives that I just laid out that's in this book again it's uh for free on GitHub and also paperback on Barnes Noble if you want uh finally there's Symphony of thought which Symphony of thought is my latest book on cognitive architecture and this one talks about how do you orchestrate a bunch of parallel processes and have them all working together and that is kind of what we're building up to right now obviously just a basic chat bot doesn't need a fully fledged cognitive architecture that's working in real time but the ultimate goal is to have Raven basically be like Jarvis from Iron Man or Cortana from Halo where it will be an independent fully autonomous uh like basically personal assistant um that will you know it will have thoughts of its own it will know about you it'll have memories um and dreams I don't want to use the anthropo anthropomorphic terms like dreams can be confusing it's not going to be a dream like you and I have it's an offline learning cycle so but dream is easier to say so dream offline learning cycle same thing okay with all that said um the last bit of housekeeping is I know that you are all very enthusiastic um but please don't submit pull requests to my experimental repos the last thing that I need is for someone who doesn't know where I'm going to like try and change the architecture um so if you uh if you submit a pull request and I just close it that's why I have asked people to just please do a fork um and so you see there's 63 Forks so if you want if you want to follow along and modify it just please Fork this repo don't submit any pull requests um and also to set 
expectations uh this is just an ongoing experiment I will get to a point where I get stuck and some of you will be frustrated and disappointed this is just the way that of experimentation goes I've been doing this for like three plus years on cognitive architecture where it's like okay like I'll get as far as I can and then I'll get stuck and then I'll like usually go write a book about it and then I'll come back to the drawing board so all right with all that said let's get into it today so there's a few things that I've changed um and I apologize that this is smaller but uh so we've got the chat logs right this is the main Nexus where all of Raven's real-time memories are stored so in the future this Nexus one will change the name from chat logs to Nexus actually let's go ahead and do that just so that we can make it consistent with what you're reading in my books so this is the Nexus right now this contains all the input and output decisions it contains the input coming from me and Raven's uh outputs now in the future this will also contain all sensory information so for instance if we've got um if we've got uh Speech uh speaker recognition so it'll say I hear Dave's Voice or whatever or if we add more audio more video robotic sensors that that kind of thing that should all go here now one thing that might change is that um well no I guess the output will stay here because this is basically the stream of Consciousness everything that Raven is conscious of and this is what I mean by functionally sentient if you've come across this term Raven is functionally sentient because Raven is a sophisticated information system that has information about itself and can perform uh cognitive operations on those memories or on that information so everything that Raven is conscious of everything that Raven thinks has really happened um is is here so the Nexus is the store of episodic memories so episodic memories is you know on Friday at blah blah this is what I did or this is 
what I heard one advantage that cognitive architectures like this have over humans is that episodic memories have time stamps and they are verbatim whereas our memories are not I'm actually in early talks with Pinecone a lot of you folks have mentioned Pinecone and Weaviate so I'm really excited to be exploring working with Pinecone because they have the technology to help me scale Raven to the size that we need to go I did some math in one of my books and eventually Raven will have like three gigabytes of memories generated per day memories thoughts observations that sort of stuff so it's all coming so this is the main Nexus and then I've got a placeholder for an internal KB so this is basically like I've got to figure out how to give Raven an internal wiki basically just saying this is what I know about the world this is what I trust so with an internal KB like a real Wikipedia article it'll have sources like this is where I heard this this is when I heard this this is why I believe it here's evidence against this idea that sort of thing so you'll accumulate it all together and so this will give Raven the ability to think critically about stuff and also this is why we need a dream sequence the dream sequence will update these KB articles it'll be an offline process and so this is a kind of learning that is technically not even machine learning because it is just updating and curating data right we don't need to fine-tune a model as long as we update and curate and extract data from our experiences and actually that kind of indexing or filing of memories is a big part of what our brains do now granted the way that it remembers those things is through synaptic connections so you might say well you need fine tuning not necessarily because what we're doing is we're approximating that filing system in our heads with static files elsewhere so just think of it as like okay
the pointer to this file is the equivalent of a synaptic connection in our brain that's a real loose interpretation but it's a good enough way of thinking about it so you can think of learning outside of the context of machine learning there doesn't have to be any math all learning happens in natural language with Raven above a certain point then we've got notes so we worked on notes last time and in case you're just jumping in I'll go ahead and show you what these notes are so every time you interact with Raven Raven will pull relevant knowledge or past memories and so you see here all the other memories are pulled in sequential order and summarized and so this is how short-term memory works sort of in human brains where like okay you remember the beginning of this video more or less it's already been summarized in your head you couldn't repeat the beginning of this video word for word but you might be like Dave talked about Patreon and Wefunder and LinkedIn and his startup right it's already been summarized right and that's exactly what this does the notes summarize recent memories so that they are more accessible to Raven no matter how long the conversation gets and also it summarizes recent memories but it also searches for relevant memories that might be older and so this is one aspect of memory is that it's associative and it's also temporal so those are the two primary methods that our brains use in order to retrieve memories is it relevant because of some similar association you know is it a physical place that is the context are you talking to the same person are you talking about the same context and then there's a temporal component which is okay what happened near that other time because say for instance you're at home you're talking about fireworks that might remind you of July 4th and then you remember
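The associative half of that retrieval — semantic search as a digital version of associative memory — can be sketched as ranking stored memories by cosine similarity between embedding vectors. The 3-dimensional vectors below are toy stand-ins; a real system would get them from an embedding model and store them in a vector database like Pinecone:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy memory store with fake 3-d "embeddings" (illustrative only).
memories = [
    {"text": "talked about fireworks at home", "vector": [0.9, 0.1, 0.0]},
    {"text": "July 4th cookout with family",   "vector": [0.8, 0.2, 0.1]},
    {"text": "fixed a bug in the chatbot",     "vector": [0.0, 0.1, 0.9]},
]

def recall(query_vector, k=2):
    """Associative recall: most semantically similar memories first."""
    ranked = sorted(memories,
                    key=lambda m: cosine(query_vector, m["vector"]),
                    reverse=True)
    return [m["text"] for m in ranked[:k]]

print(recall([0.85, 0.15, 0.05]))
```

A query vector near the "fireworks" region pulls back both fireworks-ish memories and ignores the unrelated one, which is the fireworks-reminds-you-of-July-4th effect in miniature. A temporal component could be layered on by also scoring each memory's timestamp distance from the query time.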
this other time that like the dog got set on fire on accident something like that that's what I mean by associative memory that's why I'm really excited to be talking with Pinecone because semantic search is a digital version of associative memory um Okay so we've got the notes which is this is just like I figured if we take the time and energy to summarize something we might as well just keep it right because that's an inference so I'm not quite sure how we're going to use this yet I think that there's a lot of value here because it says you know at this time um like we did this oh you know what I actually need to add something to this I need to add like the timestamp that this note was taken and what it was in response to let me add a note to do this let's come to notes all right notes summarize memories all right so here's the function where we do that so we save it here but the in excuse me the info is here so let's add let's just add a timestamp for this one so that we know when these notes were taken yeah that'll be good so then we'll just do time and then we'll do um I think let's see how do I have this did I import time from time import time yes okay so then we just do that okay so we need to add that um to the notes so this this me this means at this moment this was the this was when those notes were summarized because here's the thing is if you want to compare every so imagine like there's this like back slide like everything this this note was taken and Raven only had information up to that point in time and so then notes in the future are going to have more references these notes will be really useful in searching for other memories because here's the thing you this has a vector associated with it which will give you pointers to other memories so you see how it's all like a spider web we're already building a Knowledge Graph by summarize by repeat by recursively searching summarizing and then linking memories we're creating a web of associations that 
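The timestamped note being described — when it was taken, what it summarizes, and UUID pointers back to the source memories that form the "web of associations" — can be pictured as a small record. This is a minimal sketch; the field names are illustrative, not the actual Raven schema:

```python
import time
import uuid

def make_note(summary, source_uuids):
    """Create a note record: a timestamp marks the moment the summary was
    taken (the note only 'knows' events up to that point), and the source
    UUIDs are pointers back to the memories it summarizes, so later
    processes can traverse memories like a knowledge graph."""
    return {
        "uuid": str(uuid.uuid4()),       # this note's own unique identity
        "time": time.time(),             # when these notes were summarized
        "notes": summary,
        "sources": list(source_uuids),   # links to the summarized memories
    }

# Two hypothetical prior memories, identified only by UUID.
m1 = str(uuid.uuid4())
m2 = str(uuid.uuid4())

note = make_note("Dave talked about Patreon, Wefunder, and his startup.",
                 [m1, m2])
print(note["sources"])
```

Because each note carries both a timestamp and pointers, a later "dream sequence" process could walk these links to reconstruct a narrative of what happened without any semantic search at all.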
you can traverse even without semantic search, because I've got a UUID pointing to other unique memories. This will help Raven in the future reconstruct what happened: here's a narrative of what happened, and so on. Okay, that's a little bit of work done. Quick time check: we're at 14 minutes, and I try to keep these to about 30 or 40 minutes, so I don't know what we're going to achieve today. I've been spending a lot of time thinking, so I'm basically just bringing you up to speed. The reason I'm focusing on Raven is that, hands down, my patrons asked for more stuff on Raven, so here it is. It was something like seven out of nine, a vast majority of my Patreon patrons, who wanted more about Raven, and this is important work. So we've got the notes. This is being recorded, but I'm not using it yet; I'll have to think about how. We're not building an internal KB yet; that's going to be created during the dream sequence. I also need to figure out tasks. Tasks are going to be updated by the fast inner loop, and that's cognitive control. Actually, let me show you this book about keeping track of tasks: On Task by David Badre. Definitely read it if you want to do cognitive architecture. Cognitive control is the aspect of our brain that guides behavior; it says, this is what we're doing. Because you've only got two hands and one stream of consciousness to focus, we don't really multitask; we do one thing at a time. We can task switch, which is part of cognitive control. So cognitive control is about task selection, which is prioritization, and task switching, which says: I'm either done with this task, or something of a higher priority has come along. This is the biggest thing the inner loop
is going to do: cognitive control. The way it will keep track is by accumulating tasks here. I talk about some of this in Natural Language Cognitive Architecture, and again in Benevolent by Design and Symphony of Thought. What I didn't get to in those books, even though I talk about it, is how to store and retrieve tasks; I just talked about how we can design them. So that's what I'm figuring out in real time. This will probably end up being another book; I'll call it something like Android Memories, because I recognized when I started on all this that curating and updating memories for artificial cognitive entities, ACEs or a-cogs, is going to be an entire discipline unto itself, and what you're seeing is the start of it. So we've got tasks, we've got the KB, and then we've got dreams. Dreams are something I realized we were going to need pretty early on. So why do we need dreams? I didn't bring up a slide deck; I didn't want to bore you with one, though I'm boring you by rambling instead. Anyway, let's take some notes. What is the purpose of a dream? One thing is that dreams are culturally dependent: your relationship to dreams is almost entirely controlled by the culture you grew up with and its beliefs around dreams. Some cultures place a lot more value on dreams, and people from those cultures have more vivid dreams and remember them more. Conversely, there are cultures that don't really care about dreams, and people from those cultures don't really remember theirs; they're not particularly vivid. That said, you can influence your dreams through intentionality. For instance, dream journaling will increase how vivid your
dreams are and how meaningful they are. Dreams are, more often than not, slightly unpleasant; there's a statistic that most dreams are "bad dreams." But if that borders into frequent nightmares, it can actually be a sign of some kind of disorder, most often depression, and if you're older, nightmares can be a precursor to various forms of dementia, which is really terrifying. Another thing that happens during sleep, not necessarily REM sleep but deep sleep, is that your brain replays events. So for deep sleep: replaying events, then learning and consolidating memories. What do I mean by learning and consolidating memories? We love metaphors, we love allegories, we love quippy little sayings. And if you're a coder, which if you're watching this you almost certainly are (this happens in other professions as well), have you ever been working on a problem and woken up with the solution? There's a reason that happens: dreaming and consolidation is a critically important function of sleep. Yes, we need sleep to let our bodies recover, but your brain cleans itself while it's sleeping, and it's also reviewing things and figuring out its actual priorities. So let's figure out the goals of Raven's sleep cycle, or dream cycle. One, and I talked about this in Symphony of Thought, is to review memories and learn from them, to extract... here, I took some notes; I need to read the note that I took. I know this is all super unprofessional. Where did I save it? I save notes so I don't have to get out of bed. All right, so for Raven: principles,
metaphors, lessons, and themes. That's it: principles, lessons, metaphors, and themes. What do I mean by a principle? Say your personal instance of Raven learns something like "Dave prefers peace and quiet." I've been able to distill one of my major preferences down to that: I prefer peace and quiet over chaos and hubbub. If Raven interacts with me long enough, Raven could learn that this is a principle. A principle isn't just a recording of an event; it says, okay, this is a preference. And Raven can learn principles about himself too, something like "if I do this, it might actually increase suffering, which runs contrary to my goals." So that's the idea of principles. As for lessons: years ago my dad taught me how to ski, and he said, "The first day you're going to fall down a lot, and the second day you'll just magically be better than you thought you'd be." That's because your brain replayed a lot and learned while you were asleep. Another thing that can happen during a dream cycle is extracting metaphors, because metaphors can be really good rules of thumb: this thing is like this other thing; it's allegorical. For instance, if you've ever felt really lost in your life, when you don't know what decisions you're making, or a lot just changed on you, maybe you're moving or losing a job, you might have dreams where you're lost, trying to drive and find your way home. That's a metaphor: being lost on the road is like being lost in your life. So by extracting
metaphors and using them to comment on what's going on, you can use that as a mirror, so Raven can have a better understanding of himself, what he's doing, his purpose, and so on. Finally, there are themes. I'm using "theme" not in the sense of motif, which is how most people use it, but in the writing sense: a theme is a universal truth. You've probably heard me say there's no such thing as truth, it's all relative, and all we have is beliefs, evidence, and consensus. When I say truth, that's what I mean: I believe this thing to be true, here's the evidence, and there's general consensus around it. So through continuous observation, what can we establish as being true? Themes, metaphors, lessons, and principles: this is the point of the dream cycle, a.k.a. offline consolidation and learning, which is not nearly as exciting a name as "dream cycle," but that's what it's for. Quick time check: we're at 24 minutes, so we might not get much further than this, but let's try a little bit. So we've got chat logs. Oh right, I renamed it Nexus; I need to update this. Every time "chat logs" appears, we need to replace it with "Nexus." Do a normal replace-all. Okay, cool. Because if you delete Raven's Nexus, it's like resetting him. Remember in Blade Runner 2049 when Joi gets stomped on, because all of her memories are in that device? That would be the Nexus. All of Joi's memories of
interacting with, what's his name, would be gone. Or in Westworld, remember when Arnold, or Bernard, whichever one, has all of his memories scrambled? That's like deleting the timestamps from all of his memories so they're out of order, which is a really cool storytelling mechanism, by the way. Definitely recommend the first two seasons of Westworld; I never watched beyond that because I figured it was good enough. Okay, so we've got our Nexus folder, and for the dream sequence I want to pull some memories at random. I was thinking this through: we want to pull half at random, but we also want to bias toward the more recent ones, because memories are going to be biased toward recent stuff. It also occurred to me that this might be a good use of our notes. I hope I set expectations for this video, because this is just me fumbling through it in real time. All right, if we have notes... actually, I'll pause for a second. We'll be right back. Okay, I noodled on it for a minute and I've got some ideas. This was one of the reconstructed summaries: these are the notes of the actual logs, which makes them a little easier for the language model to use, and then we can distill them further. This is a perfect example of what I meant by distilling. I said, "Read the following notes and extract principles about the user. What are some common themes, preferences, or observations about the user?" And it gave us principles: "The user is interested in technology and the concept of the singularity." I'll also say, "List out the principles as a hyphenated list," which will keep it nice and consistent. So it said the user is interested in technology and the concept of the singularity, the user is interested in improving
their sleep and mental and physical health, the user is looking for ways to reduce stress and sleep earlier, the user is looking for ways to improve their service from Raven. So this dream will make Raven realize what I want from Raven. Okay, let's go ahead and save this; we probably won't get much further than some prompt engineering today. We'll mark the input placeholder and save this as prompt_dream_user (whoops, accidentally added a backslash). Then let's change this around: "Read the following notes and extract principles about Raven from it. What are some common themes, preferences, or observations about Raven?" Basically, we learn about ourselves by reflecting on memories, how people react to us, and what we have done. We want the same for Raven: we want Raven to learn what Raven is over time. Then: "List out the principles as a hyphenated list and write them in complete sentences," because otherwise the output doesn't help. Okay, that's just re-summarizing: "Raven is knowledgeable, Raven is interested about the users." That isn't quite what I was looking for, and we always want complete sentences, so we might need a different framing: "What can we extrapolate about the Raven entity from this? List out your findings as a hyphenated list and write each in complete sentences." Okay: "Raven is able to provide advice on improving sleep, Raven is knowledgeable." It's very declarative, and that could be fine, because Raven is just responsive at this point; we're not reviewing Raven's internal thoughts. Oh, that's what's missing: we don't have an inner-loop monologue yet. That's fine, we'll get to it. This is going to get really complicated really fast. All right. But I like that output.
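The prompt files being saved here (prompt_dream_user, prompt_dream_raven, and later prompt_dream_metaphor) can be loaded and filled with a small helper. This is only a sketch: it assumes the templates mark the insertion point with an `<<INPUT>>` placeholder (the convention in Shapiro's other repos), and `completion_fn` is a hypothetical stand-in for whatever LLM call you use.

```python
def open_file(filepath):
    # load a saved prompt template from disk
    with open(filepath, 'r', encoding='utf-8') as infile:
        return infile.read()

def fill_prompt(template_path, notes):
    # paste the recent notes into the template's <<INPUT>> placeholder
    # before sending it off for completion
    template = open_file(template_path)
    return template.replace('<<INPUT>>', notes)

def dream_cycle(notes, completion_fn, prompt_paths):
    # run every dream prompt (user principles, Raven principles,
    # lessons, metaphor) over the same batch of notes and collect
    # the results; completion_fn stands in for the real LLM call
    return {path: completion_fn(fill_prompt(path, notes))
            for path in prompt_paths}
```

The point of keeping each extraction as its own template file is exactly what happens in this session: you can iterate on the wording of one prompt (add "write in complete sentences", ask for a hyphenated list) without touching the others.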
So we'll save this as prompt_dream_raven with the input placeholder. Some of the other things we wanted were lessons, metaphors, and universal truths, so let's do those: "Read the following notes and extract lessons, metaphors, or universal truths from them. List them out as a hyphenated list in complete sentences." Let's see what it comes up with: "Screens should be avoided before bed. Relaxation helps. The singularity is difficult to predict." That's all fine; these are good general truisms. I'm not sure I was looking for truisms, though; I was hoping for something a little more universal. Maybe we separate this out. I'm going to save this because it's good as-is: truisms and lessons. Now: "Read the following and come up with one metaphor to represent these notes." Let's see: "It's like a jigsaw puzzle." Interesting. Let's add "provide an explanation" and see if that adds anything: "As the conversation progresses, the puzzle becomes more complete, with the user learning more about the computer systems, improving sleep, understanding..." Okay, I like that idea. It's not quite where I was hoping it would be, but it's good. This will probably be it for today; we're over 33 minutes. As for a universal truth: I think that might be about it, because I like that we got the metaphor, and we can distill it down. "They're both left with a better understanding of each other." This is actually really good, because it allows Raven to
reflect on what happened during his day. I actually really like this. So let's delete this and see what it does: "It's like a journey." Okay, cool. Sorry, I didn't read it out loud; I was reading quietly. Basically it says this is like a journey where we're working to better understand each other, a very similar metaphor to the jigsaw puzzle. I like both of those, so I think this is actually really good. Next time we'll revisit and start figuring out how to integrate this dream sequence and how to make better use of these notes, because a few things occurred to me: we need to use the notes better, and we need to start consolidating dreams. Honestly, maybe the metaphor is enough; we'll see. As for the KB article: instead of using dreams for that, I want a KB article about the user, so part of this becomes, what can we extract about the user, and then update the KB article. Okay, we'll call it a day. I really like this; let me save it real quick, with the input placeholder, as prompt_dream_metaphor. I think that's really what we wanted: Raven, review your memories for the day and come up with a metaphor to explain what happened. There's a lot more we want to achieve, but we're getting caught up, and I hope you understand what we're working on a little better. Thanks for watching.
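The two retrieval modes described in this session, associative (vector similarity) and temporal (recency), can be sketched roughly as a combined score over the Nexus. The weights, the half-life, and the memory field names here are made up for illustration, not taken from the actual Raven code.

```python
import time
import numpy as np

def cosine_similarity(a, b):
    # associative component: how semantically related two memories are
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_memory(query_vec, memory, now=None, recency_weight=0.3, half_life=3600.0):
    # temporal component: exponential decay by age, so recent memories
    # rank higher (half_life in seconds is an arbitrary choice here)
    now = time.time() if now is None else now
    age = now - memory["time"]
    recency = 0.5 ** (age / half_life)
    relevance = cosine_similarity(query_vec, memory["vector"])
    return (1 - recency_weight) * relevance + recency_weight * recency

def fetch_memories(query_vec, nexus, top_k=10):
    # rank every stored memory by the combined associative + temporal
    # score and return the best few, like the notes summarizer does
    ranked = sorted(nexus, key=lambda m: score_memory(query_vec, m), reverse=True)
    return ranked[:top_k]
```

A service like Pinecone replaces the brute-force `sorted` call with an approximate nearest-neighbor index, but the association-plus-recency idea is the same.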
Is the project open source?
<a href="https://www.youtube.com/watch?v=lt-VLxy3m40&t=19m43s">19:43</a> This happened to me just last night! I coded in a dream, remembered what I coded when I woke up, and copied it into real life! |
I've noticed that my AI sometimes has very odd notes. For example:<br>-Rowan said "okay" five times.
<br>-Rowan said "all right" four times.<br>How do I work around this? |
Thank you for turning the comments on for your YouTube videos again. Your openness and receptiveness to feedback is a demonstration of bravery, that has not gone unnoticed. |
Should core objective functions include negative as well as positive prescriptions in pairs? <br>E.g. "Increase prosperity and decrease poverty" "Increase understanding and decrease ignorance".<br><br>This might specifically address the potential for unintended creation of inequality - e.g. some (or many) getting very prosperous might justify allowing some to remain in deep poverty, or for some to have excellent education at the expense of others remaining uneducated. Or many prospering greatly in a system where only a few remain uneducated.<br><br>This may not be necessary, if one can assume an S-curve of benefit versus effort. That is, putting greater effort into making a few more prosperous might yield diminishing returns as compared to putting that effort into raising the least prosperous. Humans might generally be pretty prosperous, but bees are dying out - so more effort would go into resolving the latter.<br><br>Also, should the verbs be more flexible - e.g. "Encourage Prosperity and Discourage Poverty" - which would allow for some human freedom to choose to live in conditions the AI might judge to be poverty, or for humans to refuse to be educated if they see it as pointless - rather than the AI deciding they MUST be educated in order to "Increase understanding".<br><br>And what about potential entities? Should woolly mammoths and dodos be made to prosper (by de-extinction, since that appears to be close to reality)?
<br>
<br>And what if the AI doesn't categorize entities the way humans do - e.g. considering the organs or individual cells or even the DNA of a human body to have inherent entity-hood equivalent to the whole human. The lack of substantial sentience in an entity might be seen as an easily resolved issue, with AI standing in and determining what constitutes prosperity and understanding for genes of DNA along the lines of Richard Dawkins's "The Selfish Gene".<br><br>I recall a science fiction story in which machine intelligences sought the benefit of different types of life - including promoting the 'understanding' and 'prosperity' of insects by providing the levels of understanding insects lack, and providing for their prosperity by dedicating large amounts of resources to expanding their population - since mass procreation was a key element of what it meant for them to prosper. If I recall correctly, humans - with their low reproduction rate - were considered to have less need to reproduce in order to prosper, and so fell vastly behind in terms of how many resources were dedicated to them. [This was all back-story to a much more human-centric story.]
"Is Twitch Studio better?" Yeah, I didn't know something like that existed. I'm going to be changing some of these colors, and there's even a way to set up a Be Right Back screen right away, which is pretty cool. I see the microphone is still there, and I can view it myself, and there's a chatting layout where I'm a bit bigger, which is pretty cool. You can do that with OBS as well, obviously, but this is all nicely integrated with Twitch. Seems to work; at least the Twitch app is telling me I'm live, which is good. This week's schedule is going to be messed up, because I have some personal stuff to take care of today and tomorrow, so I'm probably not going to do a full hour. But I want to keep banging my head against this thing. I realized I said something really stupid last time. I was complaining about these, saying, well, that's the point, right? No, that's not the point. The point is, if you have something like this, because what I was doing was the easier one: I was doing all these and then asked, okay, so how would this work? What about the CPU here? It looks a bit slow on my end; 27, 19, it's performing quite well. Cool, I can see chat here. Okay. So I can abstract this zero away, or factor the zero out, and then what do I do with the rest? Well, you don't do anything with the rest. What you end up with is one zero node that is connected to all the other hyperedges, so it's going to be part of four hyperedges. And then the
next step is to ask, okay, for this node, do I have... because I think what I was doing last time was wrong. Or maybe I was too quick to say that just doing the factoring is wrong; maybe I was just doing it badly before. Before, the algorithm was naive, because it was implemented with just two hypernodes in mind, and I said, okay, for three it's the same algorithm, just with three. The problem was you can end up in a situation where, if you do it by pairs, say this zero is equal here as well, so we move it away from here, you might pick this one wrongly. If you're ignoring these and then take the other one, that leaves you with a zero, then a zero-one, then a zero-zero; that would be one branch. Or you might pick up these and these, then this one and this one, so you end up with a zero, then a zero and a one, then a one and a one. Okay, but can you still merge these two, since now they're the same? No, that's not true, because you can't really merge them. Remember, I was taking the things out, and I think that was just wrong; I should keep them in the hyperedge. But still, I can do pairs and say, look, now this is equal, so I can keep that this is equal, and in this case we just have a zero and a one, which is like a zero plus one. So now I
want to think about this twice before I start coding, because last time I shouldn't have just given up; that's the problem. And I don't know whether to do things from scratch. I think I probably will; it'll be beneficial anyway. My other alternative is to keep banging my head against the idea of finding a way to transform a hypergraph that's already fully factored out directly into another fully factored-out one, without doing this kind of unfolding step. The problem is how the gates must be specified, because gates are specified at the system level. Maybe I should spend a bit more time on this, because it would just be cool. Let me get rid of these and do again what I was doing here. Let's take the case of the CCX, because it's an easy one, and do some examples. It's an easy enough gate because it has a definition where I know I just need two match-and-replace rules, since the rest stays the same. But that's the essence of the problem: I'm being forced to match at the system level, and that's what sucks. It means I will always be forced to transform the system out of the qubit-level view. I'm not using standard terminology; when I say "qubit level" I mean from the qubits' perspective, not at the system level, just with the subsystems in mind: the states of the separate qubits and their entanglements, their correlations. I'm
forced to reform these into the full system view so I can do the match-replace, and that is the main issue. That's why I'm looking for a way to represent an operation just by talking about the subsystems, not about the system. And that's easy if the gate is not entangling, because a non-entangling gate can be broken down into qubit-level operations, which means I can eventually compress all these and say: if this is this, then replace. An easy example would be a Hadamard layer, which we often write as just one box, but it's essentially a Hadamard on every qubit. So if I have a three-qubit system in the state (zero plus one), zero, one, and I apply a Hadamard layer, what I'm actually saying is: apply a Hadamard to each qubit. This Hadamard layer basically consists of two rules: find a zero, replace it by zero plus one; find a one, replace it by zero minus one. For these cases it's easy. What we would do is split the nodes, apply the rules, and then solve for interference. So you have multiple nodes and you keep solving for interference; that's why this makes sense. In the case of zero plus one, you first break it into a zero and a one, then apply zero goes to zero plus one and one goes to zero minus one, then solve for interference, and you end up
with a zero. But it's two simple rules. Controlled operations are easy to do as well; control-controlled operations are always easy, because you have this rule: if the control is one, then what you're basically doing is nesting the rules, since the splitting always happens. Hmm, I don't know if I'm glossing over something, because here I'd say this expansion should also happen with more qubits. Well, no, because there's no correlation; there's just one hyperedge. So "there's a one, replace with another one" is a bit of a special rule. That is the difference: these are local replacements (I'm probably abusing language; they're not necessarily local, it's just one qubit), and these are non-local replacements, in the sense that it's a local match-replace versus a non-local match-replace, meaning I'm matching on one qubit but replacing on another. So: match on q0, and then run some local match-replaces, i.e., run these rules. That's how you would specify the rule for the control node. This expansion does not generate new hyperedges, but these do generate new hyperedges, if the actual replacements lead to a different value of the second qubit in this case.
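The split-then-solve-for-interference procedure described for the Hadamard rules can be sketched on a single-qubit node. The state representation here (a dict from basis label to amplitude) is my own stand-in for the hypernode structure, not the streamer's actual encoding, and the amplitudes are made explicit where the transcript only talks about labels.

```python
from math import sqrt

# the two Hadamard rules from the discussion, with amplitudes added:
# |0> -> (|0> + |1>)/sqrt(2),  |1> -> (|0> - |1>)/sqrt(2)
H_RULES = {
    0: {0: 1 / sqrt(2), 1: 1 / sqrt(2)},
    1: {0: 1 / sqrt(2), 1: -1 / sqrt(2)},
}

def apply_rules(state, rules):
    # state is one qubit's node: {basis_label: amplitude}
    # step 1: split the node and rewrite each branch via its rule
    # step 2: "solve for interference" by summing amplitudes per label
    out = {}
    for basis, amp in state.items():
        for new_basis, factor in rules[basis].items():
            out[new_basis] = out.get(new_basis, 0.0) + amp * factor
    # drop labels whose amplitudes cancelled out
    return {b: a for b, a in out.items() if abs(a) > 1e-12}
```

Applying the rules twice to `{0: 1.0}` first produces the balanced zero-plus-one node and then collapses back to a plain zero, which is exactly the "end up with a zero" outcome described above.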
Because what I'm getting at now is that I think the trick is this: if every n-qubit operation can in general be broken down into a universal gate set, just the Hadamard and the CNOT for example, then that should be what I do. I think that's the key; it should give me a more compact representation in terms of rules. It's the same with matrices: for a five-qubit operation, sure, you can write one big matrix, but you can also break it down into smaller matrices, which turn into a tensor network, a bunch of smaller matrices connected to each other, and then you can do some tensor-network magic and get more efficient simulations. Same idea here: break a CCNOT down. Let's see if I can open Chrome without the machine exploding; I'm getting a better machine soon, which is also good. We'll go incognito and look up the CCX decomposition; we can start with that for today's stream. I should probably just go to images. Yeah, there you go, the Wikipedia one. What is this, though? Polish stuff: "The main goal is to design and implement an algorithm for solving an all-relevant NP-complete class of problems using quantum computers, effectively with speedups better than quadratic, guaranteed by theory." Are they guaranteed by theory? "We focus on the Hamiltonian cycle problem, but since there are known reductions between different problems inside the NP-complete class, we may say that we're working on all of them at the same time." So this is quantum work that is new to
Who is behind this? Do I know them? No, never heard of them. Cool, anyway. So this is it. Now, this will be really interesting, because in a way that decomposition tells you how the qubits entangle with each other. So we should just take this breakdown and make rules out of it; maybe that's the way forward. In a way it's like a network, as I said. Oh man, I'm not having a good day today.

So say I have a qubit in the zero-plus-one state, and another qubit in the zero state; that's my system. Then I go here and say: okay, cool, let's do a match. And note the difference here, because this is actually a change from the way I was coding these. How did I call them? An LComp match. So we've got an LComp match here, which means we're going to split the node, but we're not going to create two separate hyperedges yet. We basically get a zero and a one; we split the node like this. I should probably use different notation for that, but whatever. That's what the LComp match does: it just splits the node into two nodes.

Then I say: cool, now let's look at running these rules. Here's the trick: every time a rule actually gets run, meaning there's a match and it effectively does a replacement, it will create a new hyperedge. But of course you need to resolve these. So you run the rule only if there's an actual non-LComp match, because you don't want LComp matches left over. That could even work for CCX gates; we'll see.

So first of all, you run these rules as many times as needed until you stop getting LComp matches. I know that's really inefficient, but whatever. You do a sort of sweep: you go through all the nodes you try to match, and every time you get an LComp match you do a split, and then you try again, and eventually you have a full match, an F match. And now that I have a full match, I go and run these other rules here. Every time a rule actually hits and I replace with something that is mathematically different (let's just say "different" for now; let's deal with the phase kickback a bit later, maybe soon), that replacement creates a hyperedge. So then you would have a zero-zero, and you'd carve this out and say the ones go together like that, and then you're done. So what created the separate edge is a full match. And remember, we can't have two nodes: at the end of running an operation we can't end up with two nodes. Well, we can, but then that would mean... okay, we'll see later how to enforce this. The point is that because this was split, I have to carve it out, since I've now gotten a match. So it will only actually create a hyperedge if it doesn't match on a split node. Yeah, so we do create the hyperedge now.

Huh. How would it work with the phase kickback? Can we make it work without having to do a sort of branch-merging operation? Let's try to follow the same logic. To check the phase kickback, let me quickly use Quirk (this machine is just so slow). So if I do this, and then we are here, and now I add a controlled-X: yeah, that's it, because the Bloch vector here was plus and it turns to minus. There's the kickback: we had a plus state and we get a minus state.

Let's see what happens now with the rules. We have an LComp match, so the same thing happens here, and we do it again and get a full match. So with the full match we say: okay, let's apply the replacements. But now we have an LComp match here on the target, which means I've got to split it, and that means I end up with these. Now, that's interesting, because remember what I want: I'm not creating hyperedges here, because I know the outcome of this is not entangled at all, and I'd like to arrive at that without having to create intermediate hyperedges. So now I say: run these. What did I do before? Before, I found a match and then ran these things, and if I found a full match on a mixed node, then I created the hyperedge.
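The kickback behaviour just checked in Quirk can also be verified with a plain state vector (my own sketch, not the stream's hypergraph rules): a CNOT on |+>|-> leaves the qubits unentangled but moves the minus sign onto the control.

```python
# Phase kickback check with a length-4 state vector, index = 2*control + target.
s = 2 ** -0.5
plus = [s, s]
minus = [s, -s]

# two-qubit product state |+>|->
state = [a * b for a in plus for b in minus]

# CNOT: swap the target amplitudes within the control=1 block
state[2], state[3] = state[3], state[2]

# the result factors as |->|->: the control picked up the phase
expected = [a * b for a in minus for b in minus]
assert all(abs(x - y) < 1e-9 for x, y in zip(state, expected))
```

So the post-CNOT state is still a product state, which is exactly why the rule system should not emit a hyperedge here.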
But now I found an LComp match on the target qubit: a full match on the control qubit but an LComp match on the target. That's a different case, and now I shouldn't create a hyperedge; I just do the splitting. Then I run these and replace the zero by a one. I'm missing something here, though, because if I've just replaced the zero, I can now run this again, since I've split the target qubit, so now I'll get full matches. I get a full match here, and by the same logic I should create a hyperedge, but I don't want that. So I'll just go ahead and replace these, and then this turns into this, and I'm done.

Hmm. I'm kind of done, but it's not giving me the phase kickback. So first of all, I should really be writing notes somewhere: an LComp match splits nodes; then a full match on a non-split node, with a non-local replacement, creates a hyperedge. Something like that. I'm going to write these rules down; I think I'm onto something.

Still, here's the different case: I did all that, and now what? I just have plus and, in a way, minus. I wanted the minus to go up here, to the control, because that's what the phase kickback does. We know this doesn't entangle; we know this is the same state as this one here, it just has this extra phase, and that's what I'm missing. I'll leave it here, but I think I'm onto something: I like this new approach of nesting things like that.
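The two rule kinds written down above ("LComp match splits; a full match with a changed value creates a hyperedge") could be sketched as a toy, purely hypothetical interpretation; this is NOT the stream's implementation, just one way the rules might look in code:

```python
# Toy sketch: a qubit's node is a set of basis values ({0}, {1}, or
# {0, 1} after an LComp split); a hyperedge records correlated
# (control, target) branch pairs.

def apply_cnot(control, target):
    """Toy CNOT rule. Returns (target_node, hyperedges)."""
    if control == {0, 1}:
        # LComp match on the control: keep one target branch per control value
        branches = {c: next(iter(target)) ^ c for c in control}
        # a full match whose non-local replacement changed the target value
        # creates a hyperedge (the branches are now correlated)
        edges = ([(c, branches[c]) for c in sorted(control)]
                 if branches[0] != branches[1] else [])
        return branches, edges
    # local case: a single control value; flip the target iff control is {1}
    flipped = {t ^ 1 for t in target} if control == {1} else set(target)
    return flipped, []

# (|0>+|1>) control on a |0> target: the branches differ, so the rule
# records a hyperedge pairing control and target values
branches, edges = apply_cnot({0, 1}, {0})
assert edges == [(0, 0), (1, 1)]
```

As the stream notes, this naive version still misses phase information, which is exactly why the kickback case fails without extra bookkeeping.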
Making that distinction, the non-local match-replace, is something I was not doing in the original implementation, and essentially that is what was missing. I want to avoid the full-system kind of rule ("match zero-zero and replace with that"), because that's not what I want to do. I want to run these rules and only create hyperedges in specific situations. And my guess is that to move to multi-qubit gates, either there's still a way to do it like that, or you just go through the decomposition, the CNOT-plus-unitaries decomposition, and then you build a big non-local match-replace for it, or a set of rules like that, and that will act like a tensor network of replacement rules, in a way.

The goal is that these rules end up generating the right number of hyperedges without me having to do some kind of resolving or refactoring afterwards. I think that's the same point as tensor networks, isn't it? You don't end up with big objects; you keep compressing small things all the time, so you always have the smallest representation possible and everything else stays connected. That's really what I want to do. It's nothing new, really; it just helps me to think about these kinds of things in different ways. And it's quite intuitive, because it's a bit more classical: "if this is one, then do this; if it's zero, do that", and so on. You end up with a hypergraph that tells you how the qubits are entangled. I don't really know yet how useful that is, but hopefully it helps to study entanglement in some way. This is really just an exercise for myself at the moment. I also want to go back to the quantum mechanics material, but this specific problem bugs me a lot, so I really want to get it solved and get my hands on coding it while I do the other quantum mechanics stuff again.

Cool. Let's see how the stream quality looks. I'm pretty happy with how the Twitch Studio beta looks, and it's fairly easy to use and set up, so I might keep using it, unless I change the streaming platform, but I don't think I will. Cool, see you tomorrow.
So I've got bad news: a couple of minutes into this video I realized I was recording a single browser tab on my host system and not the entire screen, so you missed that part. I'm not going to repeat it, since that's not the point of these videos, but I basically just went through the basics of how to create a circuit in Cirq. I understand the concepts (gates, schedules, moments, operations, etc.), but I don't really understand why the grid qubits, why that's relevant, and why this problem has a natural structure on a grid. Basically, in here you create the qubits in a grid shape, so you have qubit (0, 0), (0, 1), (0, 2), and so on, and then we append Hadamard gates on the qubits where the row plus the column is even, and X gates on the ones where it's odd. So you still have a circuit, but you're labeling the qubits as if they were placed on a grid, and I don't know why that's relevant. I was thinking it may have to do with the way the qubits are laid out on a chip, but I don't know; let's go ahead. This is where I realized I wasn't actually recording, so sorry about that; at least it's good that I spotted it.

One thing to note here, and it's a common confusion: what is the difference between a Gate class and a gate object? Gate objects are transformed into operations (technically GateOperation), for example by calling the gate on qubits. So here I apply single-qubit gates, and a similar pattern applies for multi-qubit gates, with a sequence of qubits as parameters. Another thing to notice about the circuit above is that the gates from both append calls appear on the same vertical line.
Gates appearing on the same vertical line constitute a moment. We can modify this by changing the insert strategy of the append method; an InsertStrategy describes how insertions are placed. Okay, so you can split these into two moments. How do you do that? By adding the strategy here. But can I apply that here, or am I just going to mess the whole thing up? No, okay: it's appending, but into a new circuit; I'm creating a new circuit, so it makes sense. NEW_THEN_INLINE: so the Hadamard gates follow the EARLIEST strategy, and then NEW_THEN_INLINE, whatever that means. So now if I run this, you've got two moments instead of one, whereas the other way (can I add an in-between step here? no, but I can do that) you get just one moment here and two moments there. Okay, good.

The next step is creating the ansatz. "If you look closely at the circuit-creation code above, you will see that we applied the append method to a generator; recall that in Python one can use generator comprehensions in method calls. Inspecting the code for append, one sees that it generally takes an OP_TREE or a moment. What is an OP_TREE? It is not a class but a contract: an OP_TREE is anything that can be flattened, perhaps recursively, into a list of operations, or into a single operation. Examples of an OP_TREE are..." Okay, I don't know whether those technicalities are relevant or not. "This last case yields a nice pattern for defining sub-circuits or layers: define a function that takes in the relevant parameters and then yields the operations for the sub-circuit; these can then be appended to the circuit." I'm not that familiar with Python, so I'd probably need to catch up on those topics.
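The OP_TREE contract, "anything that flattens, perhaps recursively, into a list of operations", can be illustrated in plain Python without Cirq (my sketch; operation labels and the `rot_layer` name are made up for illustration):

```python
from types import GeneratorType

def flatten(tree):
    """Recursively flatten lists and generators into a stream of operations."""
    if isinstance(tree, (list, GeneratorType)):
        for item in tree:
            yield from flatten(item)
    else:
        yield tree

def rot_layer(n):  # a generator "layer", like the tutorial's rot_x_layer
    for i in range(n):
        yield f"X(q{i})"

# a single op, a generator layer, and nested lists all count as one OP_TREE
ops = list(flatten(["H(q0)", rot_layer(2), ["X(q3)", ["H(q4)"]]]))
assert ops == ["H(q0)", "X(q0)", "X(q1)", "X(q3)", "H(q4)"]
```

This is why `circuit.append(some_layer(...))` works: append just flattens whatever tree of operations it is handed.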
Still, it seems to be a nice-to-have from a language or syntax perspective: because of the way the function is defined, it yields operations, so you can use the function directly in your append call. It's a nice way to reuse portions of your circuit, I would say. So if I copy all of that in here and run it, you get that circuit. What does this do? cirq.XPowGate: that's the X gate raised to a power, and I think that corresponds to a rotation about the x-axis. Then basically "for i and j in range, yield the rotation applied to this qubit"; the rotation is defined here, and then you call append with rot_x_layer, where this argument is the half turns, basically the exponent. "Another important concept here is that the rotation gate is specified in half turns. For a rotation about X, this is the gate X^t." So in half turns: it's the angle divided by two, your phi-over-two kind of thing. Or is it radians?

"There is a lot of freedom in defining a variational ansatz. Here we will do a variation on a QAOA strategy and define an ansatz related to the problem we are trying to solve", which I kind of forgot what it was; I think it has to do with the energy of something. Oh yeah, it's the Ising model with a transverse field; there's a link to it, let's take a look. So we're now going to create an ansatz, and I don't know exactly what "a variation on a QAOA strategy" means. Have I read that paper? No, I haven't, but that's probably what this is based on: "A Quantum Approximate Optimization Algorithm". Is there a circuit in it? Ah, there's no circuit diagram, okay, so that means we'll have to actually read it.
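The half-turns convention can be checked numerically: X raised to the exponent t is, up to a global phase, a rotation by t times pi radians about X. A plain-Python sketch (the `x_pow` helper is mine, not a Cirq function):

```python
import cmath

def x_pow(t):
    """Matrix of X**t, via X = H . Z . H and Z**t = diag(1, e^{i*pi*t})."""
    s = 2 ** -0.5
    h = [[s, s], [s, -s]]
    zt = [[1, 0], [0, cmath.exp(1j * cmath.pi * t)]]
    def mul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    return mul(h, mul(zt, h))

# exponent 1 (one half turn) is the full Pauli X (NOT) gate
m = x_pow(1)
assert abs(m[0][1] - 1) < 1e-9 and abs(m[0][0]) < 1e-9
```

So an angle given in radians corresponds to an exponent of angle divided by pi, which is the "divided by two... half turns" bookkeeping being puzzled over above.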
From the abstract: "...For combinatorial optimization problems, the algorithm depends on a positive integer p, and the quality of the approximation improves as p is increased. The quantum circuit that implements the algorithm consists of unitary gates whose locality is at most the locality of the objective function... If p grows with the input size, a different strategy is proposed. We study the algorithm as applied to Max-Cut on regular graphs and analyze its performance on 2-regular and 3-regular graphs for fixed p." Isn't this the same thing I looked at before, where the circuit is designed from the graph of the Max-Cut problem, connecting the qubits that represent vertices that are actually joined by edges, and so on? Maybe. "In a spin glass with Ising spins, the problem of computing the magnetic partition function..." okay, transverse field.

"So first we need to choose..." I mean, I probably have to dive into this properly. Maybe I'll just go through it once even if I don't understand everything, and then dive into the actual problem; not smart, but anyway. "Another important concept here is that the rotation gate is specified in half turns; for a rotation about X this is the gate", yes, I read that. "First we need to choose how the instances of the problem are represented. These are the values J and h in the Hamiltonian definition. We will represent them as two-dimensional arrays (lists of lists). For J we will use two such lists, one for the row links and one for the column links. Here is code that we can use to generate random problem instances." So this is supposed to go here, and the actual values will differ between individual runs because they use random.choice. "Given this definition of the problem instance, we can now introduce our ansatz. Our ansatz will consist of one step of a circuit made up of..." okay. So this is basically how to represent instances of the problem.
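The random problem instance described there, random plus or minus one fields h and couplings J as lists of lists, can be sketched along the tutorial's lines (plain Python, no Cirq needed; the array shapes are my reading of "row links and column links"):

```python
import random

def rand2d(rows, cols):
    """A rows x cols grid of random +1/-1 values."""
    return [[random.choice([+1, -1]) for _ in range(cols)]
            for _ in range(rows)]

def random_instance(length):
    h = rand2d(length, length)        # transverse field term on each site
    jr = rand2d(length - 1, length)   # couplings between vertically adjacent sites
    jc = rand2d(length, length - 1)   # couplings between horizontally adjacent sites
    return h, jr, jc

h, jr, jc = random_instance(3)
```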
Then, "given the definition of the problem instance, we can introduce our ansatz. It will consist of one step of a circuit made up of: applying an XPowGate with the same parameter to all qubits (this is the method we have already written); applying a ZPowGate with the same parameter to all qubits where the transverse field term h is +1", so you're flagging those for some reason; "applying a CZPowGate with the same parameter between all qubit pairs where the coupling term J is +1; and, if the coupling is -1, applying the CZPowGate conjugated by X gates on the qubits." So here you have an example. This seems pretty similar to the Max-Cut approach, where you design a circuit, an ansatz in this case, that is a model of your problem; I'd have to recap that, though. If I run this, it's going to look slightly different every time because of the use of random. Then rot_z_layer is defined, and then this is the layer for the controlled gates. I do like the way this works: you define sub-circuits and then simply compose them into one thing. What happens here is that you create an instance of the problem and then call one_step with that problem instance, and that defines the circuit. I forgot the print(circuit); if I add it, we see something slightly different, and it looks a bit ugly, maybe because of... okay, that's what those symbols are. I'll have to dive into this. In the next video I'll maybe try to dive a little deeper, but I feel I could easily get lost, because this is simply a representation of the problem; it's the same as what I did with the QAOA for the Max-Cut problem.
Maybe I'll take a look at that paper and at my own video again, and then we'll see. But basically, that's the idea. Then, simulation: "now let's see how we simulate the circuit corresponding to our ansatz." In Cirq there is, I guess, a distinction between simulate and run. Interesting: "the simulate methods allow different forms of simulation and are useful when prototyping small circuits, but one should be wary of relying on them when running against actual hardware." Yeah. "Currently Cirq ships with..." okay. I think I'm going to split this into more videos, because I see there is still: finding the minimum, parametrizing the ansatz (because so far it's a sort of hard-coded circuit), the objective function, and measurement. Cool. The next video will be on the simulation part, then I'll make another one; I think the next one will be simulation plus parametrizing the ansatz, and then finding the minimum, maybe that makes sense. Then I'll probably do one more video on the specific construction of this circuit, but I think that's really just domain-specific; nevertheless it's worth recapping. Perfect, I'll just stop the recording now.